
Promotion of the evidence-based practice concept is widespread across the mental and behavioral health professions. Intrinsic motives include placing the well-being of our clients/patients/students at the forefront, desiring to discover and use the best practices available, and wanting to be respected as highly proficient professionals. Extrinsic motives include being eligible for insurance reimbursements, avoiding ethical and legal challenges, and saving one’s job from funding cuts or other negative employment decisions. Unfortunately, too few counselors either conduct research or read research findings. Although they may value research intellectually, many lack confidence in their ability to use research findings.

The responsibility for engaging in evidence-based practice falls primarily on counseling practitioners. Evidence-based practice requires application of practices for which the evidence was the product of rigorous scientific empirical studies — that is, outcome research. Outcome research is the domain and responsibility of trained researchers, who are usually employed in university settings. Therefore, counselor educators are included among those responsible for producing the evidence.

The corresponding responsibility for counselors is to be willing and able to locate and use evidence-based interventions. Consequently, the two concurrent challenges are 1) having counselor educators (outcome researchers) produce sufficient volumes of evidence and 2) training counselor practitioners to find, interpret and use the evidence. Ironically, the circumstances create a codependency.

Counselors are dependent on counselor educators to conduct the research and teach them how to find and use the evidence with confidence. Counselor educators are dependent on counselors to respond to training efforts enthusiastically, search for the evidence constantly, use the evidence appropriately and help the counselor educators to conduct the needed outcome studies. These challenges limit the range of interventions available to counseling practitioners.

Accountability and action research

Counseling practitioners who evaluate their local interventions can use the findings to improve their practices and to be accountable to their stakeholders. This accountability process involves action research as opposed to outcome research.

Action research focuses on generating local rather than generalized knowledge (as is the case with outcome research). There are numerous approaches to and definitions of action research, but the common theme seems to be that it is not outcome research. That is, the demand for attention to rigorous research design controls and inferential data analyses is often not a requirement in action research. Action research seems to cover all data collection activities that lead to findings that are useful for evaluating local programs. Goals for action research include acquiring useful local knowledge for program improvement, involving local stakeholders in the process, being open to the viability of a variety of data sources and anticipating that constructive actions and decisions will follow the data.

Although action research typically is less rigorous and sophisticated than outcome research, the brunt of the responsibility for conducting the accountability process is also on counseling practitioners. Historically, counselors have been perceived as resistant to evaluation and accountability and needing to be coaxed or assisted in the process by counselor educators and local supervisors. This resistance was usually attributed to a number of supposed impediments, including a perceived lack of the requisite sophistication, insufficient time to do it, uncertainty about the value of the kinds of data being collected, the perceived cost of the process, uncertainty (and possibly fear) about how stakeholders would use the findings and a dislike of being evaluated.

Counselor educators share the accountability challenges with counseling practitioners. Counselor educators can and should address all the causes of resistance when training entry-level counselors. The evaluation/accountability competencies covered in the standards of the Council for Accreditation of Counseling and Related Educational Programs fall within the action research domain. Therefore, the ability to conduct action research to achieve accountability goals is not beyond the sophistication of entry-level counselors. Teaching the necessary skills and influencing appropriate attitudes about action research and accountability are important responsibilities held by counselor educators.

Linking evidence-based practice with accountability and action research

Evidence-based practice and accountability appear to depend on different research paradigms and focus on different viewpoints. While evidence-based practice is a product of outcome research findings, accountability activities employ action research methods. Evidence-based practice is analogous to aptitude testing, with a focus on how previously collected data can be applied to future performances. Accountability, on the other hand, is akin to achievement testing: The focus is on past performance, seeking evidence of how well interventions have worked. Therefore, evidence-based practice and accountability appear to be two different concepts, each of which is very important for the counseling profession but apparently difficult for counseling practitioners to do well.

My thesis is that the two concepts can be combined in a manner that might make it easier for counseling practitioners to be accountable and engage in evidence-based practice. The keystone of this idea is to view evidence-based practice more broadly than is currently the case.

As mentioned earlier, evidence-based practice is the product of rigorous, sophisticated outcome research studies. My view is that evidence-based practice can also be the product of local action research studies that are a part of the counseling practitioner’s evaluation/accountability function. If counseling practitioners are able to collect volumes of evidence that their local interventions work, then those data could also qualify as evidence to support their local evidence-based practice.

The typical format for outcome research is to conduct tightly controlled studies with random sampling from targeted populations, control groups and inferential statistical analyses. Often, these studies are not replicated, and the findings are generalized to a population similar to the one used in the sample. These samples and populations may or may not be similar to those in many local settings.

On the other hand, although local action research may be less rigorous, the samples and populations are indeed of interest to local counseling practitioners. And if the interventions are repeated and evaluated many times, evidence accumulates. Therefore, the action research paradigm provides volumes of relevant local data, as opposed to the findings of a single rigorous outcome study that may not always have applicable samples and populations.

To be clear, I do not intend to replace or diminish the value of outcome research. It is important to understand, however, that rigorous outcome research has its limitations as a source of evidence-based practice in the counseling profession. My goal is to add local action research data to the evidence-based practice information that counseling practitioners are already able to locate in outcome research publications.

Two examples

Two of the most common interventions are individual counseling and psychoeducational group interventions. Below, I briefly describe how the action research framework can support evidence-based practice for each.

Individual counseling: My recommendation for individual counseling is to apply the AB single-subject design to counseling interventions and to encourage and assist clients in engaging in self-monitoring between counseling sessions. Counseling goals would determine what behaviors are to be changed (for example, reducing the number of negative thoughts per day) and what attitudes are to be influenced (for example, rating one’s affect about one’s job on a scale of 1 to 10). The clients would record the self-monitoring data. The data could be presented graphically, starting with a baseline and then continuing throughout the counseling process. The axes of the graph would be the number of data-gathering points (horizontal axis) and points on the behavior or attitude scale (vertical axis). The evidence would be visible in the graphic representations of the self-monitoring process.

This process is clearly within the sophistication domain of entry-level counselors and could be used in a number of individual counseling interventions. The data could be accumulated over time to provide both accountability data and evidence of the effectiveness of one’s practice.
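The AB comparison described above can be sketched in a few lines of code. This is a minimal illustration with hypothetical self-monitoring data (daily counts of negative thoughts, invented for the example); in practice, the counselor would chart the client's actual recordings.

```python
# Hypothetical AB single-subject data: a client's self-monitored
# daily counts of negative thoughts.
# Phase A = baseline (before counseling begins).
# Phase B = intervention (during the counseling process).
baseline = [9, 8, 10, 9, 8]
intervention = [7, 6, 6, 5, 4, 4, 3]

def phase_mean(scores):
    """Average level of the monitored behavior within one phase."""
    return sum(scores) / len(scores)

a_mean = phase_mean(baseline)
b_mean = phase_mean(intervention)

# On the graph described in the text, the horizontal axis would be the
# data-gathering points (days 1..12 here) and the vertical axis the
# behavior count. A visible drop from the A-phase level to the B-phase
# level is the evidence of change.
print(f"Baseline mean: {a_mean:.1f}, intervention mean: {b_mean:.1f}")
```

The same phase means could be drawn as reference lines on the client's graph, making the change visible at a glance without any inferential statistics.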

Psychoeducational group interventions: I would recommend a pre-experimental pretest/posttest design for psychoeducational group interventions. Control groups are unnecessary in action research because multiple groups are presented with the same proactively planned interventions. Similar to the instruments that schoolteachers develop to test their students, practitioners can design instruments to assess knowledge and attitudes. Simulations can be established to assess acquisition of targeted behaviors. Pretest data can be collected before the intervention begins, and posttest data can be collected at the end of the intervention program.

Correlated t tests can be used to compare the pretest and posttest scores to determine if desired changes occurred. Although the correlated t tests require application of statistical knowledge, that knowledge is within the competency range of entry-level counselors.
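The correlated (paired) t test mentioned above works on each participant's difference score (posttest minus pretest). A minimal sketch, using hypothetical group scores invented for the example:

```python
import math
from statistics import mean, stdev

# Hypothetical pretest/posttest knowledge scores for one
# psychoeducational group (same eight participants, same instrument).
pretest = [12, 15, 11, 14, 13, 10, 16, 12]
posttest = [16, 18, 15, 17, 15, 14, 19, 15]

# The correlated t test operates on per-person difference scores.
diffs = [post - pre for pre, post in zip(pretest, posttest)]
n = len(diffs)

# t = mean difference / standard error of the differences,
# with n - 1 degrees of freedom.
t_stat = mean(diffs) / (stdev(diffs) / math.sqrt(n))

print(f"Mean gain: {mean(diffs):.2f}, t({n - 1}) = {t_stat:.2f}")
```

The resulting t value would then be compared against a critical value (or converted to a p value) at the chosen significance level; statistical packages and spreadsheets perform the same computation directly.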

Answering the challenge

In his 2009 critique of the state of published research in counseling journals and of the attitudes of counselor educators toward research, David Kaplan, chief professional officer of the American Counseling Association and a past president of ACA, called on the profession to be primarily engaged in evaluating the effects of our counseling interventions. I do not know if he was promoting outcome or action research, but both paradigms are applicable to answering his call.

Publication in the counseling journals requires outcome research studies conducted primarily by counselor educators. To meet Kaplan’s challenge, however, a larger volume of counseling practice evaluations likely need to be addressed via the action research paradigm — and done so by counseling practitioners.

It therefore behooves our profession to inform practitioners that action research is a road both to accountability and to evidence-based practice, and to encourage them to travel that road. It may currently be the road less traveled, but it does not have to remain so.

 

****

Stanley B. Baker is a professor of counselor education in the Department of Curriculum, Instruction and Counselor Education at North Carolina State University. Contact him at sbaker@ncsu.edu.

Letters to the editor: ct@counseling.org.
