EST, MCE, MCC: The Abbreviating of Psychology
Linda J. Young, Ph.D.
Warning: You will not receive continuing education credits as a reward for reading this opinion piece. Nor are you likely to be able to demonstrate that reading it has increased your competency as a clinician. Finally, I offer no empirical evidence that reading this will reduce the incidence of any DSM-IV symptomatology you or anyone else might diagnose you as having. And folks, that’s the good news. Because for now, the constricting mandates of “appropriate” treatment, education, and competency still leave open the possibility of that which lies beyond the pale and that which challenges and interrogates the status quo. And indeed, it is the status quo that worries me. As I see it, the abbreviations in the title of this opinion piece are working to define and delimit the status quo in such a way that options and freedoms currently enjoyed and taken for granted are being significantly curtailed. Some of these forces are subtle, others less so. I fear that once in place within our political and professional landscape, they will have a half-life that far exceeds what can be easily envisioned now, and that our professional lives as clinicians will be foreshortened and abbreviated in all kinds of ways.
For those of you who are unaware of what these abbreviations mean or what the big deal is, I offer a cursory sketch of a piece of history that may be relevant. EST (empirically supported treatment) refers to those few therapies that use tightly controlled research methods to claim efficacy in treating specific disorders diagnosed as Axis I disorders of the DSM-IV. The idea of creating a list of empirically supported treatments initially grew out of concerns that practice guidelines were giving priority to psychopharmacology over psychotherapy in the absence of sound evidence (Barlow, 1996). In 1995, the EST movement, which involved a mandate to train professionals exclusively in the use of these empirically validated therapies, was bolstered by the publication of what was to be the first of several task force reports by the American Psychological Association (Task Force on Psychological Interventions Guidelines, 1995). This is important because, since then, whether called empirically validated therapies (EVT), empirically supported therapies (EST), or evidence-based practice (EBP), these terms generally refer to whatever therapeutic treatments a particular research methodology has deemed empirically sound and valid—hence, justifiable forms of practice. By implication, those therapies that do not make the list are viewed as not being empirically supportable, and alternative approaches to research are treated as nonexistent or irrelevant. As pointed out by Edwards, Dattilio, and Bromley (2004), terms such as evidence-based practice and empirically supported treatment often have been appropriated by a rhetoric that promotes simplistic and misleading assumptions, as this rhetoric has been heavily influenced by particular ideological perspectives and market interests. There has been pressure for modes of psychological treatment to be evaluated in the same way pharmacological treatments are evaluated—namely, by means of double-blind, randomized controlled trials. In these trials (RCTs), one or more treatments are pitted against one another and against a placebo. Furthermore, that which is deemed “treatment” is necessarily operationalized by means of a treatment manual specifying what needs to be done in each session and just how many sessions there should be (Edwards et al., 2004).
It is important to keep in mind that, no matter how vociferous the claims of scientific neutrality and rigor (challengeable in themselves, as I will describe shortly), these studies are being developed and contextualized by entrepreneurial forces that interpret and make use of the “results” in the service of increasing profits. For instance, health insurance companies themselves create tremendous market pressure for ESTs to be both short and standardized, as it is financially in their best interest to determine the minimum amount of treatment that can be deemed scientifically “effective.” Additionally, it is too often wrongly assumed that unless a treatment has been tested in a randomized controlled trial, it has no empirical basis and therefore lacks validity as a treatment.
I will not spend a great deal of time enumerating the many significant ways in which much scientific research on treatment efficacy is flawed, even when evaluated by its own scientific assumptions and standards. I refer the reader to an excellent article by Westen, Novotny, and Thompson-Brenner (2004), “The Empirical Status of Empirically Supported Psychotherapies: Assumptions, Findings, and Reporting in Controlled Clinical Trials.” One of the most important ideas in their article, for me, is the clear illustration of how, in most scientific research, the goal is to minimize variability on all fronts. Variability, that is, individual differences across subjects, across treatment protocols, and across treating clinicians, is anathema to the randomized controlled trial. Variability due to individual differences muddies and “noises up” the results, making it impossible to extract a pure and standardized treatment protocol capable of being performed uniformly by any clinician trained to apply it.
What this has meant is that, very frequently, only individuals meeting the narrowest criteria are considered eligible to be subjects—subjects who in many ways are not at all representative of ‘real’ people with the kinds of complex and inter-related difficulties that most clinicians see in their offices. Additionally, the creativity and artistry involved in communication between two uniquely thinking and feeling beings (therapist and client) are treated as bothersome noise that only confounds the goals of the research.
And let us be clear. Very frequently, the “goal” of such psychological research is to be able to certify certain treatment approaches as credible and valid, using the evidence accrued in scientific study. To implement the research protocol cleanly, and ultimately to deliver the “evidence-based treatment” in a manner that emanates from and supports the research upon which it is founded, very specific treatment protocols and manuals are being developed.
Much of this has been felt to be of little consequence for those clinicians who view their work as more humanistic and/or psychodynamic in orientation, and consequently more open-ended and tailored to the individual client. For many of these clinicians the emphasis is on process as much as, or more than, on outcome, and the goals of psychotherapy are determined in ongoing interaction with the client, ultimately to be decided by the client. These goals very frequently involve strivings not amenable to quantitative assessment, such as the expansion of psychic freedom and the elucidation of meanings. Additionally, for so many of these clinicians therapy involves a dialogue—an interrogation of meaning and an unfolding opportunity for exploration—wherever that may lead, and for whatever amount of time the client wishes to spend doing this.
Therefore, for these clinicians, the very notion of developing a treatment manual for the work is utterly at odds with the fabric of what they do. The very definition of “evidence,” which may very well be grounded in the patient’s associations across time, is not in any way synonymous with the criteria for “evidence” found in most scientific study. In fact, the very notion of a treatment protocol that can be developed with pre-determined goals and interventions to be uniformly applied to individuals may seem so preposterous as to be downright laughable. Sadly, it’s time to stop laughing or the last laugh may be on us. A full decade ago, when the Division of Clinical Psychology of the APA created an EST task force, what was envisioned included:
(1) Establishment of criteria for designating specific effective treatments for specific problems.
(2) Generation of two lists of ESTs for specific problems: “well established” and “probably efficacious.”
(3) Dissemination of lists of ESTs to Ph.D. programs and predoctoral internships in clinical psychology.
(4) Incorporation of ESTs into guidelines for accrediting doctoral training programs and internships in clinical and counseling psychology (Strupp, 2001).
Since then, there has been a groundswell of professional interest in the systematic incorporation of “evidence” into educational training and clinical programs. Just a few examples include: the American Psychiatric Association’s 2003 efforts to develop EBP guidelines; state health agencies’ establishment of infrastructures to support EBP initiatives in behavioral health, including the development of centers for EBP involving networks of consumers, practitioners, and local and state authorities; the U.S. Surgeon General (1999) calling for measures to ensure delivery of “state of the art treatments” for mental illnesses; and the Annapolis Coalition for Behavioral Health Workforce Education and the American College of Mental Health Administration declaring that behavioral health researchers have identified “evidence-based” approaches but that these principles and practices go largely ignored when it comes to training the workforce (McCabe, 2004).
As noted in the APA 2005 Presidential Task Force on Evidence-Based Practice—Draft Policy Statement:
EBP has become a central issue in the broader health care delivery system, both nationally and internationally. Numerous initiatives at the state level have sought to encourage and/or require the use of “evidence-based” mental health treatments within state Medicaid programs. Currently a major joint initiative of the NIMH and the DHHS Substance Abuse and Mental Health Services Administration is focusing on promoting the implementation of evidence-based mental health treatment practices into state mental health systems. (p. 3)
The trickle-down effects of this are already being felt in doctoral and master’s level training programs, insurance and managed care companies, outpatient clinics and individual consulting rooms.
It should be noted that these trends have not gone without vociferous opposition, and that several authors, describing the divergent worldviews held between and among various researchers and clinicians, have identified what has been described as a “culture war.” Additionally, a significant movement to elaborate upon and diversify definitions of evidence-based practice is currently underway. These efforts attempt to challenge traditional notions of research, expose unsubstantiated assumptions upon which much of this research is based, and offer new and innovative ways of defining, investigating, and conceptualizing psychological work. Systematic attention to process as opposed to outcome, studying “theories of change” which can be integrated into “empirically informed therapies” (Westen et al., 2004), and championing the role of case-based research with sample sizes of one (Edwards, Dattilio, & Bromley, 2004) are some prominent examples.
These attempts are laudable. They may even buy us some time. But to my mind, they do not really get to the heart of the matter. The following is one example of a proposed alternative to the randomized controlled trial form of research. In their article, “Developing Evidence-Based Practice: The Role of Case-Based Research,” Edwards et al. state:
Like all researchers, those using case-based methodologies must build in strategies for safeguarding accuracy, checking replicability, and ensuring the validity of arguments. In psychotherapy research, this goal can be achieved by making sure that all sessions are tape-recorded, using independent judges to check that the reduction of the raw data into case narratives is not biased. (Barker, 1994, cited by Edwards et al., 2004, p. 8, italics mine)
It is not surprising, really, to find terms such as “accuracy,” “replicability,” and “not biased” even when describing single case studies, as these concepts are essential to most definitions of science. And, as pointed out in the APA position paper on EBP, “good practice and science call for the timely testing of psychological practices in a way that adequately operationalizes them using rigorous scientific methodology” (p. 6). But what about those ways of practicing that do not necessarily rest on scientific principles and assumptions as traditionally defined? For instance, what about a therapy that does not claim to know a priori what the goals and objectives to be attained are? Or a therapy that does not rest on objectivist, positivistic notions that there can be “objective,” “independent” measures of an individual’s condition, taken without the “bias” of individual perception? What about a therapy that never claims to be able to predict the future, including the results of treatment, and that is predicated instead on the idea that what occurs over time between two unique individuals can never be “replicated” in another situation? What if the scientific goal of closer and closer approximations to a “truth out there” has nothing to do with a highly subjective and idiosyncratic exploration and interrogation of the inner truth of subjective meanings?
In addition to its scientific assumptions, so much of psychological research rests on assumptions derived from a medical model. In this research, a person’s psychic world is viewed through a lens of health and sickness. The DSM-IV categorizes dilemmas of everyday life as mental illnesses and disorders, and psychological conundrums are framed as “symptoms.” Treatment is understood as the application of a procedure, like the administration of a medical procedure or the prescribing of medicine, and its goal is to alleviate or eradicate “symptoms.” It is nearly impossible to find research on psychological therapy that does not use the DSM-IV as the basis upon which treatment outcomes and processes are measured.
The very language of empirically supported treatment and evidence-based practice, even including studies that extend beyond randomized controlled trials, speaks the assumptions of science and medicine. And in an effort not to be excluded from the club, even clinicians who might not agree philosophically with these underlying assumptions are scrambling to certify what they do in the language of the times.
This language of the times includes, literally, the two other abbreviations in the title of this piece: MCE (Mandatory Continuing Education) and MCC (Mandatory Continuing Competency). As most of you are probably aware, there is a pressing movement in states around the country to impose mandatory continuing education requirements as well as ongoing competency requirements as prerequisites for the renewal of professional licenses. Despite active efforts on the part of the Michigan Psychological Association (MPA), MCE has not become a requirement for psychologists. Currently in its place, however, is a plan for various professions, including psychology, to require professionals to demonstrate ongoing professional development and competency through the obtaining of competency credits. Despite the fact that no compelling information has been presented either to substantiate the need for this program or to demonstrate its effectiveness in improving care or in rooting out malpractice, the Michigan Department of Community Health, as a result of an agreement forged by the MPA’s lobbyist with the Governor’s office, has agreed to place psychology in a pilot project involving the mandatory demonstration of ongoing competency (Ad Hoc Committee on MCE, 2005). I will not go into the many arguments both for and against this initiative. But I will say that my understanding of the reasons put forward for this initiative is that they center on the notion that other states are doing this and therefore Michigan should too, lest it be seen as inferior and behind the times in its safeguards to protect the public. It doesn’t seem to matter much that all kinds of training and licensing criteria are already in place before someone is eligible to obtain a license, or that proscriptive and remedial procedures for dealing with psychologists against whom complaints are made seem to be sufficiently in place. Nor does it seem to matter that the vast majority of professionals already engage in all manner of activity that enhances their learning without having to be forced to do so by the state.
At a recent monthly educational presentation of MSPP, Melanie Brim, B.S., M.H.A., Director, Bureau of Health Professions, Michigan Department of Community Health, presented information on the pilot program and responded to questions and comments from the audience. Several individuals raised concerns. One such concern involved the idea that however mandated hours of competency credits were achieved (e.g., through attendance at conferences, supervision, study groups, professional reading, or coursework), this could in no way really assure the public that a particular psychologist was competent, and that therefore to claim this is to sell the public a false bill of goods. Another concern, very much concurred with by Ms. Brim, was that attendance at a conference in no way guaranteed that an individual would necessarily learn anything, or even stay awake, for that matter. There seemed to be a shared sense of weary resignation over the fact that you can bring a psychologist to water but you cannot make him/her drink.
Frankly, I am concerned that this is what concerns people. Personally, it doesn’t much matter to me if the person sitting next to me at a conference gets his credit while snoozing through the presentation rather than drinking it in. What alarms me is the water being served, and the fact that this aliment will both derive from and subsequently contribute to the growing body of “evidence” used as nutriment for proliferating standards of care and guidelines defining appropriate “state of the art” practice.
Although it is claimed, at present, that there will be no stipulations regarding the specific “content” of that which will pass muster as “competency credit,” it is already written into the Public Health Code that psychologists must receive some training each year on “pain management.” Conversely, it is stipulated that there can be no more than a certain number of courses on the topic of practice management. How one feels about these stipulations in their specificity is not really the point. The fact that stipulations with regard to content already exist, is. A precedent has been set. Purportedly, the psychology licensing board will be drafting guidelines enumerating the stipulations, in conjunction with the Department of Community Health. However, as stated in the MSPP meeting with Ms. Brim, these stipulations are likely to be guided by standards of care and practice that exist in the profession. And now we circle back to that first abbreviation—EST. Consider this as a hypothetical example: If it is determined that only certain kinds of treatment approaches (e.g., manualized cognitive-behavioral treatment) are valid for working with a particular “disorder” (e.g., an anxiety disorder), I wonder how long it will be before a case conference exploring a psychoanalytic approach to working with an individual with anxiety is denied the privilege of receiving competency credits. And that isn’t even the biggest problem, for whether or not a case conference receives competency credits will not necessarily deter those who feel it is a valuable expenditure of their time. What does make me shake in my boots is the scenario that goes something like this: I or a colleague of mine works psychoanalytically with an individual complaining of anxiety. The individual decides to sue the therapist. The court, presented with guidelines developed by the profession on evidence-based treatment, determines that, in using a psychodynamic rather than a cognitive-behavioral approach, the clinician has engaged in malpractice. Furthermore, specific conferences and study groups on anxiety that have received competency credits, and those that have been denied them, are cited as further supporting “evidence” in this determination.
So, for me the problem with MCE and MCC lies not in their “silliness,” “wastefulness,” or “empty assurances to the public,” or even in the impossibility of enforcing them in any meaningful way (such as forcing attendees to stay awake). That line of argument faults these programs for their inadequacies, that is, for what they are not. My worry has more to do with what they are. And my concern is that what they are, are additional nails in the coffin that psychology seems to be building for itself in its efforts to legitimize itself as a medical, scientific health profession.
While I may be diagnosed by some as being well on the way to developing a phobia of abbreviations, my plea is to not be treated with “exposure therapy.” We as a profession already are being inundated with and exposed to an ever-growing array of regulations and standards that will only abbreviate our freedoms further. And however irrational I may be deemed, I choose to hold onto the belief that the work I am passionately committed to doing involves intimate exploration of personal meanings. It is new and different with every individual with whom I work. It can never be made abstract, for its subtleties exist in the precise articulations of a single meaning-making individual. It can never be manualized, for there is no procedure to follow or regimen to apply. It can never be generalized, for its truth grows out of the specific context that is created between a specific individual and me, at a particular moment in time. It can be deemed competent or incompetent, but only by the two individuals who have agreed to engage in the work and by whose definitional meanings and standards the work can be evaluated. And while it may make for messy research, I choose to rejoice in the variability that makes each of us as individuals undeniably and non-replicably unique. For me, the “noise” of the experimental research setting is not something to be gotten rid of. And as irrational as it may seem, given the socio-political matrix in which we live, I believe it is worth trying to maintain ourselves, our work, and the individuals who consult with us as “independent variables.” Scientific research teaches us that variables that can be controlled for lose their significance in the determination of results. Maybe we can use that scientific tenet as we fight to preserve the significance of our own independence as well as our independent variability, before the results abbreviate the dimensions of our professional lives.
For information about the Academy and/or to discuss this opinion piece further, Dr. Young can be contacted at (734)665-9692 or at linadjoy@provide.net
References
Ad Hoc Committee on MCE (2005). Letter to Michigan Psychologists, February 21, 2005.
Barlow, D. H. (1996). The effectiveness of psychotherapy: Science and policy. Clinical Psychology: Science and Practice, 1, 109-122.
Edwards, Dattilio, & Bromley (2004). Developing Evidence-Based Practice: The Role of Case-Based Research. Professional Psychology: Research and Practice, 35(6), 589-597.
McCabe (2004). Crossing the Quality Chasm in Behavioral Health Care: The Role of Evidence-Based Practice. Professional Psychology: Research and Practice, 35(6), 571-579.
Messer (2004). Evidence-Based Practice: Beyond Empirically Supported Treatments. Professional Psychology: Research and Practice, 35(6), 580-588.
Strupp (2001). Implications of the Empirically Supported Treatment Movement for Psychoanalysis. New York: Analytic Press.
Wampold & Bhati (2004). Attending to the Omissions: A Historical Examination of Evidence-Based Practice Movements. Professional Psychology: Research and Practice, 35(6), 563-570.
Westen, Novotny, & Thompson-Brenner (2004). The Empirical Status of Empirically Supported Psychotherapies: Assumptions, Findings, and Reporting in Controlled Clinical Trials. Psychological Bulletin, 130(4), 631-663.
American Psychological Association, 2005 Presidential Task Force on Evidence-Based Practice. Draft Policy Statement on Evidence-Based Practice in Psychology.
Dr. Young received her undergraduate degree from Brown University and her Ph.D. in clinical psychology from the University of Michigan. She currently has a psychoanalytic private practice in Farmington Hills, Ann Arbor, and Northville, Michigan. Until its closure in October of 1997, she worked for ten years as a staff psychologist at the Detroit Psychiatric Institute, where she taught, supervised, and served as senior psychologist on the Adult Inpatient Service. Dr. Young is a consultant at the Veterans Administration Hospital in Detroit. She is the current President, Past Vice President, and a founding member of the Academy for the Study of the Psychoanalytic Arts, and a Past Vice President of the MSPP. She can be contacted by e-mail at linadjoy@provide.net or by telephone at (248) 348-1100.