By Professor Henry Jackson FAPS, University of Melbourne

The development of evidence-based practice is one typically associated with medicine. For example, Sackett and colleagues (1996) describe evidence-based medicine (EBM) as: "the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients" (p.71). But as regards psychology, the first step in this direction was the publication of a list of empirically validated psychological treatments by the Task Force on Promotion and Dissemination of Psychological Procedures, Division of Clinical Psychology (Division 12), American Psychological Association (APA, 1995). The Division 12 approach was adopted by the College of Clinical Psychologists of the Australian Psychological Society (APS), with University-based postgraduate clinical psychology training courses being expected to teach their students to be aware of, and develop competence in delivering, these empirically validated treatments.

The Division 12 Task Force (APA, 1995) provided an explicit description of their criteria for judging the efficacy of a particular intervention. Table 1 sets out the priority given to the methodologies employed in research studies and reports. The Table ranks methodologies according to the degree to which they control for competing influences (and are therefore stronger as regards internal validity), thereby supposedly increasing the confidence that we can place in the obtained results. Randomised controlled trials (RCTs) are accorded top spot. Moreover, the Guidelines discriminate within RCTs, giving the greatest weighting to RCTs which compare the index therapy with an alternative therapy (or therapies) or a placebo, rather than merely comparing the index therapy with a waiting list control group.

In their original paper, the Division 12 Task Force (APA, 1995) concluded on the basis of the criteria set out in Table 1 that examples of well-established treatments included: CBT for depression, phobic disorder, generalised anxiety disorder, chronic pain; exposure treatments for various phobic conditions; exposure and response prevention for obsessive compulsive disorder; interpersonal therapy for bulimia; parent training programs for children; and family education programs for schizophrenia, amongst others. They also listed a number of treatments that were considered to be probably efficacious according to the criteria set out in Table 1. These were: applied relaxation for panic disorder; brief psychodynamic therapies; behaviour modification for sex offenders; dialectical behaviour therapy for borderline personality disorder; emotionally-focussed couples therapy; habit reversal and control techniques; and Lewinsohn's psychoeducational treatment for depression.

Subsequent developments included the publication of two key books in the area, namely: A Guide to Treatments that Work (Nathan & Gorman, 1998, 2002), and What Works for Whom? (Roth & Fonagy, 1996, 2004). These excellent books will be known to a number of readers of this publication. The evidence base has grown over the last decade and both books have updated the evidence base for specific disorders.

Allied to these beginnings is the emergence of Clinical Practice Guidelines (Parry, Cape, & Pilling, 2003), and even guidelines about how to develop guidelines! (National Health and Medical Research Council, 1999). At this juncture, it is important to distinguish between Treatment Guidelines, which are typically focussed on the efficacy of specific treatments for specific disorders, e.g., panic disorder or depression, and Clinical Practice Guidelines, which are broader and involve other aspects of clinical activities including case formulation, decision-making and assessment.

Such Clinical Practice Guidelines typically involve a group of experts who are asked to identify all the relevant published evidence, to weigh the evidence, and to take into account expert consensus opinion. The approach emphasises transparency and fairness in weighing and combining evidence in arriving at Recommendations. The American Psychological Association has not published Guidelines as such, but the American Psychiatric Association has published a number of such Guidelines. One example is their Practice Guideline for Eating Disorders (APA, 1993). This follows a specific structured approach, containing a Statement of Intent, a Reference Coding system (which assigns grades to particular studies with A = "Randomized controlled clinical trial, or crossover design with randomly assigned treatment sequence" down to J = "Other, e.g., published instrument, published abstract, published letter", p. 210), and the Literature Review Process. Then follow sections on Disease Definition, Epidemiology, and Natural History; Treatment Principles and Alternatives; Recommendations; Areas for Future Research; Eating Disorders Guideline Reviewers and Consultants; Organizations Submitting Comments; and References.

Criticisms of the evidence-based approach and developments since 1995

It is clear that in both the construction of the APA (1995) list of empirically-validated treatments and the APA (Psychiatry) Practice Guidelines (1993), the highest weighting is given to randomised trials. Now the general objection raised by a number of commentators is that whilst such trials are strong as regards internal validity, i.e., ensuring that the effects produced were due to the tested treatment and not to other competing variables, they are weaker as regards external validity. This is because the clients may be atypical and highly selected, the therapists are trained in a specialised approach, manuals are used, and the settings are universities or specialist teaching hospitals.

Seligman (1995) beautifully summarised the major elements of RCTs, which are the sine qua non of mainstream research practice as regards testing the efficacy of therapeutic interventions (see Table 2). I have taken some liberty in further parsing them. Seligman argues that RCTs inform us as to what treatment is likely to work for a given disorder under highly regulated (ideal) conditions, but this is not necessarily the same as might occur in clinical practice where less than optimal conditions might prevail. Maintaining that: "The efficacy study is the wrong method for empirically validating psychotherapy as it is actually done because it omits far too many crucial elements of what is done in the field", Seligman (1995, p. 966, his italics) concluded that the effectiveness study is of greater importance as it reflects more accurately the way in which psychotherapy is practised in the field.

Various other authors argued that the focus on efficacy needed to be supplemented by studies of effectiveness, and in fact in 2002 a report published in the American Psychologist (APA, 2002) stipulated that the evidence base needs to be evaluated in terms of two dimensions, i.e., efficacy, and effectiveness or clinical utility. As the APA (2005) later described it, efficacy "lays out criteria for the evaluation of the strength of evidence pertaining to establishing causal relationships between interventions and disorders after treatment" (p.2). Clinical utility "includes a consideration of available research evidence and clinical consensus regarding the generalisation, feasibility (including patient acceptability) and cost and benefits of interventions" (p.2).

The most recent step was the publication in 2005 of a report by the APA Presidential Task Force on Evidence-based Practice (APA, 2005). This Task Force was set up to try to resolve issues which surrounded evidence-based practice in psychology (EBPP), which they defined as: "... the integration of the best available research with clinical expertise in the context of patient characteristics, culture, and preferences" (p.5). The three major issues addressed by the Task Force were: (1) to consider how a broad range of research and evidence can be interpreted in a consideration of evidence in the practice of psychology; (2) to examine the role of clinical expertise in treatment decision-making; and (3) to consider specific treatment factors and the applicability of treatments to patients varying in co-morbidity, personality, race, ethnicity, culture, religion and age, as well as the acceptability of treatment to patients.

As regards the first issue, the Task Force (APA, 2005) endorsed multiple types of research evidence including meta-analyses, RCTs, process outcome studies, effectiveness studies, single case experimental designs, clinical observation, systematic case studies and qualitative research. Similarly to the APA Report (2002), the Task Force acknowledged the distinction between efficacy and clinical utility. Secondly, and most importantly, the Task Force emphasised clinical expertise as being: "essential for identifying and integrating the best research evidence with clinical data (e.g., information about the patient obtained over the course of treatment) in the context of the patient's characteristics and preferences to deliver services that have the highest probability of achieving the goals of therapy." (pp. 9-10). The Task Force spelt out eight components of clinical expertise. Among these components were: assessment, diagnostic judgement, systematic case formulation, and treatment planning; interpersonal expertise; clinical decision-making, treatment implementation, and monitoring of patient progress; evaluation and use of research evidence; and understanding the influence of individual, cultural and contextual differences on treatment.

Thirdly, the Task Force emphasised the importance of taking into account patient characteristics when selecting and planning treatment with a given patient. This acknowledges that the characteristics of this patient may differ from those of participants in research trials in terms of variables including gender, age, developmental phase, religion, and race. In short, the clinician needs to decide whether the patient before him or her will benefit from a specific therapy, allowing for the fact that the supporting research may have been conducted with patients of a different age or ethnicity.

Objections to EBPP: An anecdotal account

On a personal note, I have heard many objections to the listing of empirically-validated treatments and the development of clinical practice guidelines. Let me try to outline some of these objections:

(1) the first I have labelled the individualist anarchic approach: "I don't want anyone telling me what to do. I am registered to practise as a psychologist and that is what I will do"; (2) the second objection is an ideological one: "The EBPP approach is biased towards certain kinds of therapy, e.g., CBT, and not others. I practise from a psychoanalytic perspective and the changes effected in my approach cannot be measured"; and (3) a third objection frequently espoused is that most of the therapeutic outcome is delivered by common factors.

I repudiate the first two objections. The first is, I believe, simply irresponsible. I also believe the second objection to be unsustainable; ultimately, all therapeutic approaches need to be subjected to the EBPP approach. Indeed this has occurred outside of CBT with pharmacotherapy, and to some extent with interpersonal therapy and brief psychodynamic therapies (Roth & Fonagy, 2004). One needs to demonstrate measurable changes in domains such as symptomatology, adjustment and well-being, subjective satisfaction and the like with all therapeutic approaches, although the focus for change may reside in different domains. Finally, no one would disagree that common factors, e.g., the therapeutic relationship, therapist warmth, and so forth, are highly important in therapy (Roth & Fonagy, 1996). Nevertheless, a very major reason why we have RCTs (which also permit the estimation of effect sizes between conditions within such trials) is to rule out common factors as the sole explanation for treatment effects.
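For readers wanting a point of reference, the between-conditions effect size mentioned above is most commonly Cohen's d; this is a standard index offered here for illustration rather than one taken from the sources cited in this article:

d = (M1 - M2) / SDpooled

where M1 and M2 are the post-treatment means of the two conditions and SDpooled is their pooled standard deviation. By Cohen's widely used conventions, d = 0.2 is a small effect, 0.5 a medium effect, and 0.8 a large effect, the last indicating that the condition means differ by eight-tenths of a standard deviation.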

What are my personal views?

I was uncomfortable with the initial emphasis on evidence obtained from RCTs (APA, 1995) and I agreed with critics of this approach who saw the data provided by such trials as constituting only part of the story (e.g., Seligman, 1995). I am much more comfortable with the directions taken by the American Psychological Association in its most recent publications (APA, 2002, 2005). I believe that there should be many more trials of effectiveness to supplement trials of efficacy. Pragmatic RCTs need to be conducted in non-specialist settings with everyday clinical patients and jobbing clinicians practising without manuals. There is also a need to look at natural trials as per Seligman's (1995) Consumer Reports study - these are a closer approximation to therapy as practised in real life, where people select a given therapist or therapy. Patient characteristics and clinical expertise deserve greater emphasis, and economic analyses are required.

It is clear that for some disorders, e.g., anxiety disorders, the extant research evidence is much more compelling than for others, e.g., personality disorders (PDs). In the former case, the clinician might give more weighting to treatment guidelines, whereas in the near absence of research evidence for non-borderline PDs, clinical judgement and expertise would carry much greater weight in treatment planning. Whatever the case, these various forms of evidence need to be integrated, and I believe we must pursue this goal to the best of our ability, however difficult.

The reasons for this are numerous but I will outline two. Obviously, a very key driver behind the development of both treatment guidelines and practice guidelines is accountability. In 1996, Barlow sounded a warning about the demands emanating from governmental circles and third party payers for mental health professionals to be accountable for their therapeutic practices. So it is in the interests of the various professions, irrespective of whether they enjoy governmental or private health insurance rebates, to demonstrate that the work they undertake is of benefit to clients with particular problems. My belief is that if members of various associations, e.g., the APS, cannot agree as to what constitutes acceptable standards of practice and cannot communicate this to governments and third party payers, then we will lose credibility as a profession and reduce our chances of obtaining remuneration from those bodies for our services.

Most importantly, we need to provide our clients with the best quality care. This is as it should be, and psychologists of all persuasions would agree with this statement. I just happen to believe EBPP is the best way to ensure that we provide such care. Postgraduate clinical psychologists in particular have been trained in the scientist-practitioner model. It behoves us not only to critically evaluate the research evidence base and determine its applicability to practice, but also to contribute to that evidence base from a clinical perspective. We need to be continually updating our skills and acknowledging that Clinical Guidelines need to be periodically reviewed - nothing stays the same! The point is that we need to deliver the best treatments we can today, in the full awareness that what is gold standard evidence-based practice today may well go the way of insulin coma therapy in psychiatry. This is the way of science.

Conclusion

We need to provide the best kind of therapy to the public. We need to ensure that it is the best kind of therapy for that person, given the state of knowledge at that time. This means that practitioners need to keep up to date, be informed of new understandings and new therapeutic advances, and receive ongoing training through continuing education programs. We need to do this not only because we will be judged by governments, consumers and third party insurers, but because it is ethically and morally right.

References

American Psychiatric Association. (1993). Practice guideline for eating disorders. American Journal of Psychiatry, 150, 207-228.

American Psychological Association, Task Force on Promotion and Dissemination of Psychological Procedures, Division of Clinical Psychology. (1995). Training in and dissemination of empirically-validated psychological treatments: Report and recommendations. The Clinical Psychologist, 48, 3-23.

American Psychological Association. (2002). Criteria for evaluating treatment guidelines. American Psychologist, 57, 1052-1059.

American Psychological Association. (2005, July 1). Report of the 2005 Presidential Task Force on Evidence-Based Practice. Washington, DC: American Psychological Association.

Barlow, D.H. (1996). Health care policy, psychotherapy research, and the future of psychotherapy. American Psychologist, 51, 1050-1058.

Nathan, P. E., & Gorman, J. M. (1998). A guide to treatments that work. London: Oxford University Press.

Nathan, P. E., & Gorman, J. M. (2002). A guide to treatments that work (2nd ed.). London: Oxford University Press.

National Health and Medical Research Council (1999). A guide to the development, implementation and evaluation of clinical practice guidelines. Canberra: Commonwealth of Australia.

Parry, G., Cape, J., & Pilling, S. (2003). Clinical practice guidelines in clinical psychology and psychotherapy. Clinical Psychology and Psychotherapy, 10, 337-351.

Roth, A., & Fonagy, P. (1996). What works for whom? A critical review of psychotherapy research. New York: Guilford Press.

Roth, A., & Fonagy, P. (2004). What works for whom? A critical review of psychotherapy research (2nd ed.). New York: Guilford Press.

Sackett, D.L., Rosenberg, W.M.C., Gray, J.A.M., Haynes, R.B., & Richardson, W.S. (1996). Evidence-based medicine: What it is and what it isn't. British Medical Journal, 312, 71-72.

Seligman, M.E.P. (1995). The effectiveness of psychotherapy: The Consumer Reports Study. American Psychologist, 50, 965-974.

Table 1: American Psychological Association Criteria for Empirically Validated Treatments

Well-Established Treatments

I. At least two good between-group experiments demonstrating efficacy in one or more of the following ways:
   A. Superior to pill or psychological placebo or to another treatment.
   B. Equivalent to an already established treatment in experiments with adequate statistical power (about 30 per group; cf. Kazdin & Bass, 1989).

OR

II. A large series of single case design experiments (n > 9) demonstrating efficacy. These experiments must have:
   A. Used good experimental designs, and
   B. Compared the intervention to another treatment as in I.A.

Further criteria for both I and II:

III. Experiments must be conducted with treatment manuals.
IV. Characteristics of the client samples must be clearly specified.
V. Effects must have been demonstrated by at least two different investigators or investigatory teams.

Probably Efficacious Treatments

I. Two experiments showing the treatment is more effective than a waiting-list control group.

OR

II. One or more experiments meeting the Well-Established Treatment Criteria I, III, and IV, but not V.

OR

III. A small series of single case design experiments (n > 3) otherwise meeting Well-Established Treatment Criteria II, III, and IV.

Table 2: Seligman's (1995) Description of the Ideal Efficacy Study

1. Random assignment of patients to treatment and control conditions.
2. Control conditions may use placebos, attempt to control for attention, etc.
3. All patients receive a fixed number of sessions.
4. Treatments are manualised and adherence (or fidelity) to treatment is measured via videotapes of sessions.
5. Target outcomes are defined clearly.
6. Assessors are blind.
7. Single diagnosed disorders are overwhelmingly the focus. Co-morbid conditions are excluded.
8. Patients are followed up for a predetermined time following the end of treatment.