Elizabeth Schmitt Freire

Elizabeth Schmitt Freire is a Brazilian psychologist with an M.A. in Clinical Psychology. She is currently completing a Ph.D. in Developmental Psychology at Universidade Federal do Rio Grande do Sul (Porto Alegre, Brazil) and developing part of her doctoral studies at the University of Strathclyde in Glasgow, UK. Elizabeth is a person-centered therapist and the Coordinator of the Person-Centered training program of Delphos Institute, in Brazil. She is the author, with Newton Tambara, of a book in Portuguese on the theory and practice of client-centered therapy, and has published articles and chapters in English.

This paper discusses the epistemological underpinnings of the use of the randomized controlled clinical trial (RCT) in psychotherapy research. It is argued that underlying the therapy-choice dispute overtly targeted by the RCT is an epistemologically controversial (and covert) theory-choice dispute. It is found that the RCT is not a theory-neutral evaluative methodology. It is rather a research methodology shaped by assumptions that originate in behaviorist theories of therapy. Since there is no neutral language or basic vocabulary shared by the competing theories which would enable the comparison of their observation reports, and since behaviorist and non-behaviorist therapies are grounded in incommensurable theories, it is argued that an RCT cannot be used to compare them. Therefore, RCT cannot be held up as the definitive method for investigating psychotherapy. The current perspective of sociology of science is provided in order to understand the social factors embedded in the use of the RCT to generate lists of “empirically supported treatments”.

Keywords: empirically supported treatments, psychotherapy research, psychotherapy efficacy.

The development of guidelines for the provision of “empirically supported” psychological services is a recent international trend. It is part of a movement towards “evidence-based” practice, which involves certification of specific psychotherapies by governmental and professional bodies and by managed care insurers (Rowland & Goss, 2000). The most prominent efforts of this “empirically supported treatments” (EST) movement have been those developed by Division 12 of the American Psychological Association (the Division of Clinical Psychology), which appointed a ‘Task Force on Promotion and Dissemination of Psychological Procedures’ to establish a set of criteria for levels of empirical support and to identify specific effective treatments for specific problems. This Task Force (now called the ‘Standing Committee on Science and Practice’) generated and disseminated lists of treatments that met the criteria for different levels of empirical support (Chambless, 1996; Chambless et al., 1996; Chambless & Hollon, 1998).

The criteria used by most of the EST lists and guidelines rank the randomized controlled clinical trial (RCT) at the top of the hierarchy of methods for evaluating psychotherapy (Nathan, 1996). The RCT is touted by EST advocates as the definitive method for investigating psychotherapy (Chambless & Hollon, 1998; Chambless & Ollendick, 2001). The control procedures of randomized clinical trials were derived from experimental science by clinical researchers with the objective of separating the “effects” of the therapy per se from changes that may result from other factors, such as patient expectancy of change, the passage of time, and therapist attention. RCT researchers claim that these “extraneous” factors must be controlled in order to have confidence that the intervention was responsible for any observed change (Kendall, Holmbeck & Verduin, 2004).

The EST movement has stirred up considerable controversy, and many arguments against it have been published in the last decade (e.g., Bohart, 2002; Bohart, O’Hara & Leitner, 1998; Bozarth, 2002; Fonagy & Target, 1996; Henry, 1998; Wampold, 1997). The dissemination of lists of “empirically supported treatments” has drawn criticism from both practitioners and psychotherapy researchers. Lambert, Bergin, and Garfield (2004), for instance, assert that these lists are static and seem to offer only a false guarantee of effectiveness: “although many practitioners and the public may be comforted by the notion that they are offering or receiving an empirically supported psychotherapy, the fact is that the success of treatment appears to be largely dependent on the client and the therapist, not on the use of ‘proven’ empirically based treatments” (p. 9).

In the Special Section of Psychotherapy Research about the EST controversy, Elliott (1998) summarized the numerous arguments against the EST project. Some of the key criticisms of the EST project outlined by Elliott (1998) are:

• EST criteria are too restrictive because they exclude results derived from other methodological approaches to research (e.g. open clinical trials, posttreatment client reports, process-outcome studies, and pre-DSM-III research).
• Lack of clinical utility or external validity (i.e., effectiveness) of most of the treatments cited in the EST lists.
• EST lists show a pattern of systematic discrimination and bias against certain treatments, such as psychodynamic and experiential treatments, “which don’t share cognitive-behavioral assumptions and traditions regarding nature of treatment, model of human nature, manualizability of treatments, philosophy of science, and research design” (p. 119).
• By setting the randomized clinical trial as the gold standard design, EST criteria restrict the range of research approaches which can be used to study treatment effects, thereby discouraging and stifling other kinds of therapy research.
• Randomized clinical trials have serious methodological flaws, e.g. selective attrition, limited range of outcome measures, absence of long-term follow-up data, allegiance effects, and reduced generalizability to actual conditions of practice.
• EST makes invalid assumptions about the specificity of diagnosis, since it ignores the comorbidity of problems/disorders.

In August 2000, in the face of these criticisms, the American Psychological Association adopted the Criteria for Evaluating Treatment Guidelines (APA, 2002), a significant revision of the earlier Template for Developing Guidelines (APA, 1995). In this document, there is an acknowledgment of the value of alternative methodologies and a deemphasis on RCTs as the “gold standard” in psychotherapy outcome research. More recently, in August 2005, the APA Council of Representatives approved a new Policy Statement on Evidence-Based Practice in Psychology (APA, 2005), which further recognizes the contribution of multiple research designs to evidence-based practice. Best research evidence is considered to draw from a variety of research designs and methodologies.

Notwithstanding these significant revisions of the APA’s policy, RCTs and their logical equivalents are still considered to be the “standard for drawing causal inferences about the effects of interventions” (APA, 2005, p. 8) and “the more stringent way to evaluate treatment efficacy” (APA, 2002, p. 1054). Also, in spite of all the controversy, the current “Committee on Science and Practice” of the American Psychological Association continues to pursue the goal of developing a single list of empirically supported treatments (Weisz, Hawley, Pilkonis, Woody & Follette, 2000).
It is not the purpose of this article to dwell on the whole body of criticism of the EST movement, but rather to contribute to this debate with some remarks about the epistemology of the RCT from the perspective of the post-positivist philosophy of science.

RCT and the theory-choice dispute

The use of RCT in psychotherapy research is analogous to its use in medicine to test the efficacy of new drugs (Bohart, 2002; Persons & Silberschatz, 1998; Stiles & Shapiro, 1989). Since the “RCT is the ‘gold standard’ in medicine, it is a priori also assumed to be the gold standard in psychotherapy research” (Bohart, 2002, p. 262). Therefore, as in drug research, the RCT is regarded as the “method of choice” which enables the practitioner to choose which therapy to use in each situation.
There is, however, a fundamental difference between drug research and psychotherapy research. Unlike competing drugs, competing therapies are embedded in competing theories of personality and therapeutic change. Therefore, underlying the therapy-choice dispute overtly targeted by the RCT is an epistemologically controversial (and covert) theory-choice dispute.

• The empiricist myth of objectivity

The use of RCT in psychotherapy research is grounded in the received view conception of objectivity that takes for granted the neutrality of pure sensation-reports and assumes that there is a separate and neutral “observation-language” against which the theoretical statements of science are tested. Thus, the epistemological assumption of RCT is that psychotherapies can be compared by recourse to a basic, common vocabulary (e.g., “disorder” and “technique”) which is attached to nature in ways that are unproblematic and independent of theory.

Nevertheless, post-positivist philosophy of science argues that this assumption is an “empiricist myth” for there is an intimate and inevitable entanglement of scientific observation with scientific theory (e.g. Feyerabend, 1970; Kuhn, 1970; Popper, 1980). It is now well-established in the philosophy of science that there are no pure “facts” but only facts as couched in one conceptual system or another. There are no pure observations but, rather, observations couched in a theory-laden vocabulary. Feyerabend (1981) argues that the observation language is part of the theoretical language rather than something self-contained and independent.

The observation statements in science are heavily theory-laden, especially if a great deal of theory is needed to set up the experiment in order to collect the data. Experimental work, even from its inception, is dominated by theory: thus, it is never neutral (Popper, 1980). Research projects are guided by antecedent assumptions about the structure of the phenomena which shape the eventual empirical findings. Kuhn (1970) demonstrated that when the scientist is engaged with a normal science research problem, he or she must premise current theory as “the rules of his game.” Lakatos also declares: “the choice of a theory is equally the choice of a research programme” (Lakatos, 1970, p. 262).

• Behaviorist theories of therapy as the “rules of the game” of RCT

Accordingly, from the post-positivist view of science, the RCT is not a neutral experimental method. It can be argued, rather, that the RCT is a research methodology shaped by assumptions that originate in behaviorist theories of therapy. These assumptions include the following:

I. A psychological problem is a disorder analogous to a medical problem.

An RCT must be conducted on a sample identified with some specificity. Therefore, RCT includes only patients with psychological problems that can be operationalized as specific “symptom-focused” disorders (Chambless & Ollendick, 2001). Therapeutic efficacy in the RCT is thus defined in a manner analogous to drug efficacy in the medical model; that is, therapy is said to be efficacious if it leads to the “cure” of a disease or a “remedy” for a particular targeted problem or disorder. This “specificity” assumption is also the premise of behaviorist theories of therapy since they view psychotherapy through a medical model analogy wherein the “removal of symptoms” is the central goal of treatment, and “the more specifically tailored the treatment is to the disorder, the more likely it will be to be effective” (Bohart, 2002, pp. 260-261).

This assumption of specificity and the concept of “symptom-focused” disorders, however, are not shared by non-behaviorist theories of therapy. Humanistic therapies, for instance, deal with psychological dysfunction in the broad context of clients’ engagements in life and ways of being in the world. They adopt a holistic attitude, wherein the therapist works with the whole person of the client to help him or her remove obstacles to living a better life (Bohart, 2002). Therefore, in humanistic therapies, there is no such thing as a standardized problem. The focus is on the uniqueness of the individual, and problems are embedded in the whole person-situation context.

II. Technology is the primary cause of change in psychotherapy.

The RCT purports to establish linear, efficient causal relationships between treatment application, i.e., the independent variable, and its (dependent variable) “effects” (Chambless & Hollon, 1998). Its basic assumption is that treatment efficacy “must be demonstrated in controlled research in which it is reasonable to conclude that benefits observed are due to the effects of the treatment and not to chance or confounding factors” (Chambless & Ollendick, 2001, p. 7). According to this perspective, “uncontrolled studies of therapeutic results that fail to pinpoint the effects that can be accurately attributed to therapy provide, at most, speculative knowledge” (Kendall, Holmbeck & Verduin, 2004, p. 17).
Bohart (2002) depicts that perspective as a technological and mechanistic view of therapy. Stiles, Honos-Webb and Surko (1998) also call it “the ballistic view” because it assumes that once a treatment is applied to a disorder, everything then follows in a mechanistic chain-reaction way. In this technological view of therapy, even the therapeutic relationship and the therapist’s and client’s characteristics are viewed as mechanistic and linear variables that either “mediate” or “moderate” the association between dependent and independent variables (Kendall, Holmbeck & Verduin, 2004). Grounded in this technological view of therapy, the RCT essentially aims to answer the question: “what therapist behaviors are effective with which types of clients in producing which kinds of patient change?” (Kendall, Holmbeck & Verduin, 2004, p. 16).
The logic of this technological view of therapy fits perfectly with the behaviorist therapies since they are conceived of as the application of specific procedures to alleviate specific disorders. But this ballistic or technological view of therapy does not fit with non-behaviorist therapies which rely on a flexible, creative, and dialogical relationship between therapist and client, and which refuse to provide specific treatment packages for specific disorders. For the non-behaviorist approaches, the therapeutic relationship is an interpersonal/dialogical process and a complex holistic phenomenon which cannot be dismantled into component, linear causal parts. In this model of therapy, there is no operationalizable independent variable in therapy research, and interventions do not map in a one-to-one fashion into client effects. For these approaches, a therapy is said to be efficacious if it provides a certain kind of opportunity or experience that clients seek “to make a variety of personal changes, none of which are connected in a linear, mechanistic way with what the therapists do” (Bohart, 2002, pp. 259-260).

• The incommensurability of competing theories
Underneath the explicit comparison of competing therapies, the RCT also performs an implicit comparison of competing theories of therapy. To be valid, such a comparison would require a neutral observational language into which the empirical consequences of both theories could be translated without loss or change. As we have seen, however, such a neutral language does not exist. Since the meanings of observation-statements are determined by theory, terms in very different theories cannot share the same meaning. Therefore, in the translation of one theory into another, words change their meanings or conditions of applicability. Kuhn (1970) argues that “though most of the same signs are used, (…) the ways in which they attach to nature are different. These theories are incommensurable” (p. 267).
This is the case with behaviorist and non-behaviorist theories of psychological change. Terms like “therapy”, “therapeutic relationship” and “psychological problem” have very different meanings and conditions of applicability. Accordingly, these are incommensurable theories. Consequently, since behaviorist and non-behaviorist therapies are grounded in incommensurable theories, the RCT cannot be used to compare them.
Since there is no neutral language or basic vocabulary shared by the competing theories which would enable the comparison of their observation reports, Kuhn (1970) concludes that the terms “truth” and “proof” cannot be applied to inter-theoretical contexts. In a debate over choice of theory, neither party has access to an argument which resembles a proof in logic or formal mathematics. The disagreement among scientists is rather about the meaning or applicability of a stipulated rule. Consequently, Kuhn concludes that “to rely on testing as the mark of a science is to miss what scientists mostly do and, with it, the most characteristic feature of their enterprise” (Kuhn, 1970, p. 10).
Therefore, the claim that the RCT would be the definitive “proof” to evaluate the efficacy of psychotherapies is epistemologically unsound. RCT must be treated as only one among many methods for investigating psychotherapy, rather than being held up as the definitive method (Task Force for the Development of Practice Recommendations for the Provision of Humanistic Psychosocial Services, 2005).

• Alternatives to RCT in psychotherapy outcome research

Some of the RCT’s advocates claim that the only legitimate form of clinical psychology is one that adheres to the “highest standards” of “legitimate science.” They claim that “anything less is pseudoscience” (Lohr, Fowler & Lilienfeld, 2002; McFall, 1996). However, this effort to demarcate genuine science from pseudoscience has been seriously undermined ever since logical positivism came under challenge from naturalist and historicist approaches in the philosophy of science. Current developments in both philosophy of science and in history and social studies of science following the collapse of the logical empiricist consensus contend that there is no unified scientific method that provides a “cookbook” on the basis of which to conduct scientific research.
According to Feyerabend (1970), logical empiricist methodologies inhibit scientific progress by enforcing restrictive conditions on new theories. He argues that the only general methodology which would not inhibit the progress of science would contain just one rule, the useless suggestion: “anything goes.” Science, Feyerabend insists, is a collage, not a system or a unified project. It includes plenty of components derived from distinctly “non-scientific” disciplines. Science is a collection of theories, practices, research traditions and world-views whose range of application is not well-determined, and whose merits vary to a great extent: “Science is not one thing, it is many” (in Preston, 2005, Feyerabend in the Nineties, para. 4).
Therefore, a methodological pluralism and/or methodological synthesis would make research more useful to providers and clients than the creation of empirically supported “treatment-for-disorder” lists. Psychotherapy outcome research should be pluralistic, embracing qualitative methodologies such as empirical phenomenology (e.g., Giorgi, 1970; Wertz, 1982), narratology (e.g., Bruner, 1986; Polkinghorne, 1988), and grounded theory (Glaser & Strauss, 1967; Rennie, Phillips & Quartaro, 1988) which provide ways of addressing the meaning and value of psychotherapy in the lives of individual human beings that are impossible to achieve within the mode of natural science (Rennie, 1995).
An alternative and pluralistic perspective of psychotherapy research was developed by the Task Force for the Development of Practice Recommendations for the Provision of Humanistic Psychosocial Services (2005) of the APA Division 32 (Humanistic Psychology), which published a document entitled “Recommended Principles and Practices for the Provision of Humanistic Psychosocial Services: Alternative to Mandated Practice and Treatment Guidelines”. This document presents an alternative perspective on psychotherapy process and outcome research from within the humanistic-experiential tradition. According to this perspective, psychotherapy involves human issues like meanings, values, freedom and indeterminacy that call for multiple research methods, some of which are unique to human science. These research methods should be able:
• to approach the person not just as a diagnostic category but as a whole;
• to consider therapy as an open dialogical process that is unpredictable and unmanipulable;
• to capture the nonquantifiable and the meaningful;
• to consider the participating individual as an agent and interpreter of the therapeutic situation;
• to focus descriptively and interpretively on individual persons in depth.
(Task Force for the Development of Practice Recommendations for the Provision of Humanistic Psychosocial Services, 2005, pp. 16-17)

Alternative rigorous and systematic methodologies of psychotherapy outcome research have recently been developed by exponents of humanistic psychology, such as “Hermeneutic Single-Case Efficacy Design” (Elliott, 2001) and “Multiple-Case Depth Research” (Schneider, 2001). These approaches address real-life phenomena with all their intrinsic complexity and richness, thus requiring the researcher to deal with complexities, contradictions and ambiguities which are invisible in positivistic designs such as the RCT.

• Social dimensions of scientific practice

The impact of the RCT on the field of psychotherapy practice is enormous. Managed care and insurance companies are using the outcomes of RCTs as ammunition in their efforts to control costs by restricting the practice of psychological health care (Seligman & Levant, 1998). In addition, adherence to treatments which are validated through an RCT is likely to become a major criterion in accreditation decisions and approval of continuing education sponsors. Therefore, since behaviorist theories are the “rules of the game” of RCT research, behavioral and cognitive-behavioral therapies are being “officially” recognized as effective, reimbursed by insurance and actively promoted in training programs, whereas psychotherapeutic approaches that are incommensurable with them are being disenfranchised (Bozarth, 2002; Bohart, O’Hara & Leitner, 1998).

In regard to the humanistic-experiential therapies, for instance, there is a substantial body of research data supporting their effectiveness. Even studies using RCT designs have provided data sufficient to support their effectiveness (Elliott, 2002; Elliott, Greenberg & Lietaer, 2004). Nonetheless, the EST lists generated on the criteria of RCT “enshrine preconceptions about the supposed ineffectiveness of experiential [experiential-humanistic] therapies as both scientific fact and healthcare policy” (Elliott et al., 2004, p. 495).

Current sociology of science provides a useful understanding of these facts. According to this perspective, scientific judgment is determined by social factors such as professional interests and political ideologies. The social organization of the scientific community has a bearing on the knowledge produced by that community. The individuals participating in the production of scientific knowledge are historically, geographically, and socially situated, and their observations and reasoning reflect their situations. Feyerabend (1970) goes even further, suggesting that rhetoric, propaganda, personal whims and social factors have a decisive role in the development of scientific knowledge.

In conclusion, it is necessary to unveil the rhetorical propaganda and social factors underpinning the “myth” that the RCT is the definitive method for evaluating psychotherapy efficacy in order to bring to an end the unwarranted disenfranchisement of non-behaviorist therapies.

American Psychological Association (1995). Template for developing guidelines: Interventions for mental disorders and psychosocial aspects of physical disorders. Washington, DC: Author.

American Psychological Association (2002). Criteria for evaluating treatment guidelines. American Psychologist, 57(12), 1052-1059.

American Psychological Association (2005). Policy statement on evidence-based practice in psychology. Retrieved September 16, 2005, from http://www.apa.org/practice/ebpstatement.pdf

Bohart, A. (2002). A passionate critique of empirically supported treatments and the provision of an alternative paradigm. In J. Watson, R. Goldman & M. Warner (Eds.). Client-centered and experiential psychotherapy in the 21st century: Advances in theory, research and practice (pp. 258-277). Ross-on-Wye: PCCS Books.

Bohart, A., O’Hara, M. & Leitner, L. (1998). Empirically violated treatments: Disenfranchisement of humanistic and other psychotherapies. Psychotherapy Research, 8, 141-157.

Bozarth, J. (2002). Empirically Supported Treatment: Epitome of the “Specificity Myth.” In J. Watson, R. Goldman & M. Warner (Eds.). Client-centered and experiential psychotherapy in the 21st century: Advances in theory, research and practice (pp. 182-203). Ross-on-Wye: PCCS Books.

Bruner, J. (1986). Actual minds, possible worlds. Cambridge: Cambridge University Press.

Chambless, D. L. (1996). In defense of dissemination of empirically supported psychological interventions. Clinical Psychology: Science and Practice, 67, 491-501.

Chambless, D. L., Sanderson, W. C., Shoham, V., Johnson, S. B., Pope, K. S., Crits-Christoph, P., Baker, M., Johnson, B., Woody, S. R., Sue, S., Beutler, L. E., Williams, D. A., & McCurry, S. (1996). An update on empirically validated therapies. Clinical Psychology, 49(2), 5-14.

Chambless, D. L. & Hollon, S. D. (1998). Defining empirically supported therapies. Journal of Consulting and Clinical Psychology, 66, 7-18.

Chambless, D. L. & Ollendick, T. H. (2001). Empirically supported psychological interventions: Controversies and evidence. Annual Review of Psychology, 52, 685-716.

Elliott, R. (1998). Editor’s introduction: A guide to the empirically-supported treatments controversy. Psychotherapy Research, 8(2), 115-125.

Elliott, R. (2001). Hermeneutic single-case efficacy design: An overview. In K. J. Schneider, J. F. T. Bugental & J. F. Pierson (Eds.), The handbook of humanistic psychology (pp. 315-324). Thousand Oaks: Sage.

Elliott, R. (2002). Research on the effectiveness of humanistic therapies: A meta-analysis. In D. Cain & J. Seeman (Eds.), Humanistic psychotherapies: Handbook of research and practice (pp. 57-81). Washington, DC: APA.

Elliott, R., Greenberg, L. S. & Lietaer, G. (2004). Research on experiential psychotherapies. In M. J. Lambert (Ed.), Handbook of psychotherapy and behavior change (pp. 493-539). New York: Wiley.

Feyerabend, P. (1970). Against method: Outline of an anarchistic theory of knowledge. In M. Radner & S. Winokur (Eds.), Theories & methods of physics and psychology: Minnesota studies in the philosophy of science (pp. 17-130). Minneapolis: University of Minnesota Press.

Feyerabend, P. (1981). Realism, rationalism, and scientific method: Philosophical papers, vol. 1. Cambridge: Cambridge University Press.

Fonagy, P. & Target, M. (1996). Should we allow psychotherapy research to determine clinical practice? Clinical Psychology: Science & Practice, 3, 245-250.

Giorgi, A. (1970). Psychology as a human science: A phenomenologically based approach. New York: Harper & Row.

Glaser, B. & Strauss, A. (1967). The discovery of grounded theory. Strategies for qualitative research. Chicago: Aldine.

Henry, W.P. (1998). Science, politics, and the politics of science: The use and misuse of empirically validated treatment research. Psychotherapy Research, 8, 126-140.

Kendall, P. C., Holmbeck, G. & Verduin, T. (2004). Methodology, design, and evaluation in psychotherapy research. In M. J. Lambert (Ed.), Handbook of psychotherapy and behavior change (pp. 16-43). New York: Wiley.

Kuhn, T. (1970). Reflections on my critics. In I. Lakatos & A. Musgrave (Eds.), Criticism and the growth of knowledge, (pp. 231-278). London: Cambridge University Press.

Lakatos, I. (1970). Falsification and the methodology of scientific research programmes. In I. Lakatos & A. Musgrave (Eds.), Criticism and the growth of knowledge (pp. 91-196). London: Cambridge University Press.

Lambert, M. J., Bergin, A. E., & Garfield, S. L. (2004). Introduction and historical overview. In M. J. Lambert (Ed.), Handbook of psychotherapy and behavior change (pp. 3-15). New York: Wiley.

Lohr, J. M., Fowler, K. A., & Lilienfeld, S. O. (2002). The dissemination and promotion of pseudoscience in clinical psychology: The challenge to legitimate clinical science. The Clinical Psychologist, 55, 4-10.

McFall, R. M. (1996). Manifesto for a science of clinical psychology. The Clinical Psychologist, 44, 75-88.

Nathan, P.E. (1996). Validated forms of psychotherapy may lead to better-validated psychotherapy. Clinical Psychology: Science & Practice, 3, 251-255.

Persons, J. B. & Silberschatz, G. (1998). Are results of randomized controlled trials useful to psychotherapists? Journal of Consulting and Clinical Psychology, 66, 126-135.

Polkinghorne, D. (1988). Narrative knowing and the human sciences. Albany: State University of New York Press.

Popper, K. (1980) The logic of scientific discovery. London: Hutchinson. (Original work published 1959).

Preston, J. (2005). Paul Feyerabend. In E. N. Zalta (Ed.) The Stanford Encyclopedia of Philosophy. Retrieved May 20, 2005, from http://plato.stanford.edu/archives/spr2005/entries/feyerabend/

Rennie, D. (1995). On the rhetorics of social science: Let’s not conflate natural science and human science. The Humanistic Psychologist, 23, 321-332.

Rennie, D. L., Phillips, J. F. & Quartaro, G. K. (1988). Grounded theory: A promising approach to conceptualization in psychology. Canadian Psychology, 29, 139-150.

Rowland, N., & Goss, S. (Eds.).(2000). Evidence-based counseling and psychological therapies: Research and applications. Philadelphia: Routledge.

Schneider, K. J. (2001). Multiple-case depth research: Bringing experience-near closer. In K. J. Schneider, J. F. T. Bugental & J. F. Pierson (Eds.), The handbook of humanistic psychology (pp. 305-314). Thousand Oaks: Sage.

Seligman, M. P., & Levant, R. (1998). Managed care policies rely on inadequate science. Professional Psychology: Research and Practice, 29, 211-212.

Stiles, W. B. & Shapiro, D. A. (1989). Abuse of the drug metaphor in psychotherapy process-outcome research. Clinical Psychology Review, 9, 521-544.

Stiles, W. B. , Honos-Webb, L. & Surko, M. (1998). Responsiveness in psychotherapy. Clinical Psychology: Science and Practice, 5, 439-458.

Task Force for the Development of Practice Recommendations for the Provision of Humanistic Psychosocial Services (2005). Recommended principles and practices for the provision of humanistic psychosocial services: Alternative to mandated practice and treatment guidelines. Retrieved May 20, 2005, from http://www.apa.org/divisions/div32/pdfs/taskfrev.pdf

Wampold, B.E. (1997). Methodological problems in identifying efficacious psychotherapies. Psychotherapy Research, 7, 21-43.

Weisz, J. R., Hawley, K. M., Pilkonis, P. A., Woody, S. R., & Follette, W. C. (2000). Stressing the (other) three Rs in the search for empirically supported treatments: Review procedures, research quality, relevance to practice and the public interest. Clinical Psychology: Science and Practice, 7(3), 243-258.

Wertz, F. J. (1982). The findings and value of a descriptive approach to everyday perceptual process. Journal of Phenomenological Psychology, 13, 169-195.

Published in Journal of Humanistic Psychology, Jul 2006; 46: 323-335.