Lonnie Rowell knows all about the benefits of evidence-based counseling practice, a subject that has consumed much of his life for the past 10 years. Not everyone, however, is quite so enthusiastic.
“I was told by a counselor educator yesterday that she didn’t want anybody to look too closely at what she does,” says Rowell, an associate professor and codirector of the counseling program at the University of San Diego and a member of the American Counseling Association. “‘We know what we’re doing is effective, and we don’t want administrators looking at our results,’ she told me. It’s an extreme example, but it is an example of counselors being pushed out of their comfort zones.”
ACA member Patricia Kyle recognizes the merits of evidence-based practice, in which therapy is tested, scrutinized and then applied in clinical settings. However, she questions local agencies that rely almost exclusively on certain evidence-based modalities, primarily because the federal and state agencies paying their bills accept those modalities and require the use of evidence-based practice.
“I had a student in my family counseling class who chose a theoretical model and made choices based on that model. The student did an excellent job,” says Kyle, an assistant professor of psychology at Southern Oregon University. “She went out, interviewed for a job and was told, ‘We can’t hire you because this isn’t an evidence-based practice.’”
The model chosen by the student?
“I’m not sure,” Kyle says. “But it wasn’t cognitive behavioral. That much I know.”
Trying to answer an enduring question
Evidence-based practice is not a new concept; the Federal Action Agenda provided the impetus for it a decade ago. But the movement has picked up steam in recent years as counselors and other mental health workers seek to improve their efficacy, insurance companies look for quantifiable results and the people who control governmental funding attempt to determine where monies can best be apportioned.
Then there is that age-old question still looming in the public’s mind: Does therapy really work? Evidence-based practice, with its testing of the efficacy of counseling modalities, is a response to that question. Testing whether a particular practice meets client needs may also help to change the public’s perception of psychotherapy as a “soft” science.
“There is no perfect test out there,” says ACA member Paul West, an assistant professor of counselor education at Alvernia University. “But I can do my best with what we have. That is a whole lot better than doing no outcomes research at all.”
John Murphy, a professor of psychology and counseling at the University of Central Arkansas, agrees. “Sometimes, I’ll hear people say, ‘I just know intuitively how my client is doing.’ That scares me,” says Murphy, a member of ACA. “Without denying the part that art and experience play in this profession, I think it would be arrogant for someone to rely solely on their judgment — and not some sort of testing — to determine whether counseling is effective.”
What should counselors do in the age of evidence-based practice? That has been the subject of much talk and a number of presentations at ACA conferences in recent years.
Kyle, along with Lani Fujitsubo and Paul Murray, both professors of psychology at Southern Oregon, presented on “Counselors Dealing With the Impact of Evidence-Based Practice” at the 2009 ACA Conference in Charlotte, N.C. She cited the American Psychological Association’s definition of evidence-based practice in her presentation: “Practice is evidence-based which utilizes scientific research findings and/or methods of assessing therapy process and outcome in some way to inform clinical practice.” Kyle also noted some of the attributes of evidence-based practice, including transparency, standardization, research, replication and attaining meaningful outcomes.
However, Kyle still isn’t a true believer in evidence-based practice, at least not in the way it’s being utilized today. “We all support the underlying concept that counselors should be accountable to their clients and use strategies that have an evidence base,” she says, “but we (her copresenters and other critics) have concerns about how evidence-based practice is being implemented on the federal and state levels.”
Specifically, Kyle is concerned that agencies receiving federal and state funding will end up relying solely on approved programs, which, she points out, haven’t necessarily cornered the market on effectiveness. In many cases, she says, other interventions (“non-approved programs”) simply haven’t been reviewed according to the standards required by the Substance Abuse and Mental Health Services Administration (SAMHSA) at the federal level or other agencies at the state level.
According to Kyle, humanistic, gestalt, existential, Jungian and Adlerian approaches largely have not been reviewed. At this writing, SAMHSA’s National Registry of Evidence-based Programs and Practices (nrepp.samhsa.gov) includes 137 interventions. Plug the word “gestalt” into SAMHSA’s search engine, and nothing comes up. Insert “existential” and, again, nothing appears. Insert “cognitive behavioral,” however, and 19 interventions appear.
“Existentialists don’t even like the label ‘existentialists,’ much less going out and finding data,” Kyle says.
Compiling that data, doing research and submitting the findings to federal and state agencies is exactly what Kyle advocates, though. “The list of approved practices is narrow,” she says. “But it doesn’t have to be so narrow.”
Kyle mentions other problems she sees as endemic to the tests that lead to the approval of certain practices, including a lack of research on the impact of the therapeutic alliance and what she views as science squeezing the art out of therapy.
In spite of those reservations, she comes down on the side of evidence-based practice overall. “Any ethical counselor wants to make sure they are using strategies that have positive outcomes,” Kyle says.
Quality control
The way West looks at it, all counseling is evidence based. What’s in question is the quality of the evidence.
“You have your client satisfaction surveys,” West says, “but we don’t know if they are just evidence of the strength of the client-counselor relationship. A client may say you’re the best thing since soft butter, but it might not be evidence of the effectiveness of counseling.”
OK, so why not ask the therapist if counseling has been effective?
“It’s a question of bias,” says West, who presented on “The Role of Evidence-Based Therapy Programs in the Determination of Treatment Effectiveness” at the 2009 ACA Conference. “Say I’ve developed this treatment plan, and the client has completed this and this and this. Who’s to say the treatment plan helped bring about change? And if it doesn’t work, do I look at myself, or do I say the client screwed up?”
Then why not go by a book that outlines the best practice for particular situations?
“Yes, there is evidence that certain treatment approaches work for certain problems,” West says. “An example of that is in substance abuse. Everybody does a type of 12-step program that is supposed to be effective. But we know that only 16 to 17 percent respond to substance abuse treatment. Is that really best practice?”
For West, empirical evidence with pre- and post-tests that are tied to clinical results should be the foundation of counseling. “If I’m doing therapy correctly, the test I use should have good reliability and validity and should include a broad range of issues. If I let this guide my assessment, I should be able to tell whether there is change at the finish.”
West concedes, however, that the use of testing and evidence-based practice has encountered resistance. “And not just in counseling, but with human services across the board,” he says. “First of all, there is no mandate for it. One, the insurance companies don’t all require people to prove that their efforts work. Even in the ACA Code of Ethics, there is nothing that says you have to do outcomes research.”
Additional reasons for resistance are that many counselors simply do not like research or are not offered adequate training in school, West says. “If you only have one research course in school, is that really enough?” he asks.
Fear of failure also comes into play, West says. Some counselors are worried that outcomes research will show that what they’ve been doing all along isn’t working. For these counselors, engaging in evidence-based practice may be akin to “slitting (their own) throat and possibly losing their job,” he says.
How should counselors measure success then? According to West, it isn’t about measuring the outcome of Program A versus Program B. Likewise, it isn’t about theoretical models. “Because we really haven’t found that one theory is better than another,” he says.
It is, however, about the quality of the research, West says. “Does Program A do valid research and follow guidelines based on the population it treats? The idea of a client getting well has to be defined. It might not be about making a behavioral change, say, for someone suffering from chronic mental illness. It might just be about whether the client is taking his medication.”
Action research
West suggests that mental health agencies could and should make better use of colleges and universities to formulate evidence-based practices.
Rowell, who presented on “Action Research in Counseling: Closing the Gap Between Research and Practice” at the 2008 ACA Conference in Honolulu, agrees. He is a proponent of “collaborative action research” and sends his students out into real-world clinical settings to work hand-in-hand with practitioners.
Why action research? In part because of public skepticism of the effectiveness of counseling, Rowell says, and in part because of the lack of adequate graduate school training programs. “And within the field itself, practitioners often don’t use research correctly,” Rowell says. “It’s a well-established fact within school counseling, but it also happens within clinical mental health settings. (Mental health professionals) read journals, but they don’t read them carefully.”
Rowell’s students help school counselors formulate questions, do the research, test whether interventions are working and, if not, determine what may work instead. Rowell believes action research also could help underfunded, overburdened community mental health settings.
The goal is twofold, he says: to help students learn the value of research and to help practitioners come up with successful, empirically tested counseling methods. “Psychology, as a profession, needs to hold itself accountable,” Rowell says.
Client feedback
For Murphy, all the evidence needed in formulating evidence-based practice is sitting in the chair across from the counselor during therapy sessions. “Who better to ask about the effectiveness of treatment than the client himself? The consumer should be the primary measure, if not the sole measure,” contends Murphy, who presented on “Client Feedback Tools: A Fast Track to Better Outcomes in Counseling” at the 2009 ACA Conference.
Murphy advocates starting off each session with a short question-and-answer period. During this time, clients relate how they are doing individually, interpersonally, socially and overall and then tell the counselor what they want to work on. This introduction, he says, should last about five minutes.
The counselor concludes each session by asking the client to fill out a form explaining how the session went. This takes about one minute, he says. “I tell them to be honest. If I’m not connecting with them, I sincerely want to know about it,” Murphy says.
Murphy points to studies showing that the client’s perception of the client-therapist relationship is the most reliable predictor of therapy’s success. So Murphy wants to know about those perceptions early and often. “It allows you to create a conversation around areas the client is concerned with,” he says. “It’s an ongoing thing. If things are going well, great. If things aren’t going well, you talk about what you can do differently.”
Aside from the efficacy of therapy, Murphy sees another reason for having the client take the lead. “If you’re blaming the client when things aren’t going well, it’s not only counterproductive,” he says, “it’s downright bad manners.”
ACA member Chris Morkides is a psychotherapist in private practice in Swarthmore, Pa. Contact him at cmorkides@aol.com.
Letters to the editor: ct@counseling.org.