By Dr Denise Charman MAPS, Victoria University, and Professor Michael Barkham, University of Leeds, UK

Much debate has ensued about which treatments qualify as evidence-based. Professional organisations all over the world have endorsed or recommended particular psychological treatments for specific psychological disorders, for example, the Australian Psychological Society (2003), the American Psychological Association (Chambless & Ollendick, 2001; Nathan & Gorman, 1998) and the United Kingdom (Treatment Choice, 1998; Roth & Fonagy, 1996, 2005). The consistent message has been that best practice is determined by research evidence derived from comparisons of contrasting treatments.

In generating an evidence base, the treatments themselves have been operationalised in treatment manuals. Research therapists have been trained to adhere to these manuals, and their treatment skills and treatment delivery have been assessed. The manuals were developed as an integral part of the research process. As such, they have partially filled the need to articulate practice and practice decision-making. However, the manuals are not widely distributed and their contents have not been evaluated.

Yet research findings based on the application of treatment manuals have led to the endorsement of psychological treatments by brand name (for example, CBT or IPT). Endorsing brand-named treatments assumes that they are practised in a manner consistent with the research treatment manuals. In effect, the endorsement of a brand name treatment is a shortcut to, and a means of, defining de facto clinical practice guidelines.

In fact, we know relatively little about how a brand name treatment relates to research treatment manuals, and we know even less about how research treatment manuals relate to real world clinical practice. These issues have become highly salient because of a developing evidence base derived from naturalistic practice settings using large samples (e.g., Barkham et al., in press). This latter paradigm has been termed practice-based evidence (see Barkham & Mellor-Clark, 2000) and contrasts with traditional evidence-based practice. Whereas the latter is premised on a 'top-down' model, a hallmark of the practice-based evidence paradigm is that it starts with practitioners and builds 'upwards', utilising the national or widespread adoption of common outcome measures, as exemplified by the CORE-OM in the UK (see Barkham et al., 2001, 2005) and the OQ-45 in the US (see Lambert et al., 1996).

Set against this background, the current article provides a perspective on how research findings can inform practice but not define what practice should be. Further, it outlines recent developments in policy that work towards a greater synergy between research and practice, enabling research and practitioner communities to build a more robust knowledge base for the psychological therapies and psychology practice.

Treatments and their evidence

Research-to-practice endeavours have reached an interesting stage. It has been established that psychological treatments (when operationalised in manuals and delivered by trained therapists) are effective, with certain forms of treatment being more effective than others for specified disorders (Wampold, 2001; Roth & Fonagy, 1996; Nathan & Gorman, 1998). We know that manualised treatments can reduce therapist variability and help ensure treatment is delivered with integrity (Luborsky & Barber, 1993). We can better appreciate the significance of ensuring effective relationships with clients (Norcross, 2002). In short, our knowledge about what works, and with whom, has been advanced.

But these research-to-practice endeavours have also generated considerable controversy, focussing primarily on two areas. First, the actual amount of research evidence, particularly that derived from RCTs, is considerably greater for certain psychological treatments than for others (see Roth & Fonagy, 1996). Second, irrespective of any such imbalance, there are differing views as to the interpretation of the research (e.g., the effectiveness of specific treatments versus the effectiveness of the common factors; Asay & Lambert, 1999; Norcross, 2002).

For practitioners, the controversies have centred on the requirement to adopt certain specified treatments, often CBT or IPT because these are the most researched, when such treatments have been researched on selected samples rather than real world clients (Seligman, 1998). That is, a treatment's efficacy may have been established under optimal conditions and when set against another selected treatment; it does not follow that the same treatment is similarly effective when transported and delivered within routine practice settings. Practitioners are also concerned that they might be pressured to deliver treatments to which they have no allegiance and which clients may not accept.

Levels of evidence under revision

Up until now, evidence has been classified into discrete levels according to quality, with higher quality evidence defined as systematic reviews and RCTs. Furthermore, systematic reviews and RCTs are to be undertaken and reported according to quality guidelines. The main reasons for issuing such method guidelines have been to ensure quality, reduce bias in reviews, and enable comparisons across RCTs. For example, systematic reviews for the Cochrane Library need to follow a tightly prescribed method, which often results in a significant reduction in the number of studies meeting the criteria for inclusion in a review.

The Consolidated Standards of Reporting Trials (CONSORT) consists of guidelines to improve the quality of the reporting of RCTs (Bennett, 2005). Even so, some researchers have complained that the RCT methodology itself needs reviewing to accommodate, for example, patient preferences. Practitioners and researchers alike consistently complain that the external validity of findings derived from RCTs is low.

One response to such disquiet has been an attempt to revise the hierarchy of evidence, and such a revision has been undertaken by the peak research body in Australia (NHMRC, 2005). In a new pilot framework of evidence, each research study is to be evaluated on the strength of the evidence, the size of the effect, and its relevance to the Australian population and to the patient. An entire body of evidence made up of multiple studies is to be evaluated using criteria based on volume of evidence, consistency of findings, clinical impact, generalisability and applicability to the Australian health care context.

Thus, the revised hierarchy of evidence has less emphasis on any single study and any single methodology. The evaluation criteria have also broadened to consider the suitability of research findings for application in local contexts with patients from routine practice settings.

Evidence and the social gradient

Defining and differentiating the patients who populate routine practice settings, as opposed to those entering RCTs, thus becomes crucial. The Australian National Survey of Mental Health and Well-Being confirms that people with less income, less education and lower levels of employment have poorer health. That is, much of the variation in health is accounted for by the social gradient. The survey also found that a sizeable proportion of people with a mental health disorder have at least one chronic physical illness, and a smaller but "more frequent than expected" proportion has additional mental health diagnoses (Andrews, Slade & Issakidis, 2002). Practitioners quickly become aware of the extent of co-morbidities, combined with significant psychosocial issues, in their clients' presentations.

Thus far, few research findings have been interpreted in terms of their relevance to patients of lower socio-economic status or socio-economic position (a term used by NHMRC, 2002). This, too, can be frustrating to practitioners, especially those who work in public mental health settings, where the demographics of service users consistently demonstrate that patients have multiple needs and present with severe and complex psychosocial and health conditions.

A little-known initiative to address the distortions that can occur when applying evidence from selected samples to practice in the real world was undertaken by the NHMRC (2002). The outcome of this initiative was published in a paper titled Using socio-economic evidence in clinical practice guidelines. The paper argues that since health is related to the social gradient, an evidence base should be interpreted from a socio-economic position.

Since much of the research on evidence-based psychological treatments is based on selected samples, interpreting the evidence from a socio-economic position will require a commitment from those who have the responsibility to nominate endorsed treatments. The NHMRC (2002) paper provides examples of how this can be done.

Clinical practice guidelines and their validation

Clinical practice guidelines, as introduced above, are formal advisory statements developed generally via consensus or expert opinion (National Guidelines Clearinghouse, 2005). A series of toolkits have been compiled to help guideline developers (for example, Shekelle, Woolf, Eccles & Grimshaw, 1999; NHMRC, 2005). The WHO (2003) has also advocated for clinical practice and public health guidelines, and for these guidelines to adhere to underlying principles: a population perspective, scientific integrity, sensitivity to local contexts, and transparency.

Guidelines must use all the available evidence, whether derived from RCT-designed studies or not. Otherwise, the guidelines would be full of statements such as "no RCT evidence could be identified to answer this clinical question", which are not very helpful for guiding practice (Tooher, personal communication). Frequently, guideline recommendations are forced to rely on consensus or opinion where there is no available research evidence (even lower-level evidence) for important clinical questions.

Rather than having their effectiveness determined by research, clinical practice guidelines are 'validated'. A range of methods is available to establish validity, including 'test driving' the recommendations in an actual clinical setting; comparing recommendations with those issued by different groups; evaluation by external reviewers who do not belong to the organisation that developed the guideline; evaluation by internal reviewers who do belong to that organisation; and peer review.

Treatment, practice and training

Evidence-based treatments delivered according to clinical practice guidelines have been promulgated as part of the professional identity of psychologists. As such, a key aspect of espousing the role of a scientist-professional psychologist is being able to read, interpret and generate research findings related to professional practice. In clinical psychology, thus far, most research has focussed on comparing psychological treatments developed within specific theoretical schools (e.g., cognitive and behaviour theory, psychodynamic theory, humanistic psychology).

This approach to evidence-based treatments has led to graduate-practitioners being allied to these theoretical schools and identifying themselves with particular brand names (e.g., CBT or psychodynamic). As a result, students can graduate with strong views about what constitutes evidence-based practice, and what does not, depending upon their exposure to a range of treatments (or not) and their professional socialisation within their training programs.

However, this model has some downsides. For example, it has tended to restrict the kinds of research evidence and the brands of therapy which populate current research activity, because these have largely been determined by what funding agencies will support. Although newer, more service-oriented funding agencies have been established (e.g., the UK's NHS Service Development & Organisation funding stream), these are still highly competitive and out of the reach of the vast majority of routine practitioners who aspire to build and contribute to a developing evidence base.

Thus, the profession is in a dilemma. In terms of training, it is imperative that students gain expertise in at least one psychological treatment. On the other hand, an early allegiance to a particular treatment may be limiting for students when it comes to real world practice. Furthermore, tensions become manifest when psychology practice is funded according to the use of specific treatments (e.g., CBT or IPT) which may not necessarily be best practice with a particular patient and whose efficacy with a particular patient group is unknown.

Evidence-based treatments and practice-based evidence

As yet, we know very little about how treatment endorsement affects psychology practice in terms of client acceptance of treatment and client attrition. We do know that while some labelled treatments are funded (because they are endorsed), the number of sessions funded is often fewer than the number of sessions specified in the treatment protocols. Examples include five sessions funded by the Victims Support Agency and six sessions funded by some Drug Rehabilitation Services.

Moreover, practice consists of more than just treatments. Psychologists working in routine clinical settings are involved in a range of practices, including interventions of many kinds, diagnosis, intake and screening, and prediction. Evidence for these psychological practices has been limited.

A recent policy document released by the American Psychological Association (2005) recognises that research findings alone are not sufficient for practising from an evidence base. The policy now frames evidence-based practice as incorporating evidence about treatment alongside expert opinion and an appreciation of patient characteristics. These three components, evidence for treatment, expert opinion and patient characteristics, are essential to writing clinical practice guidelines and thereby enhancing the delivery of evidence-based treatments.

In Australia, the pilot NHMRC (2005) framework acknowledges the diversity of practice and offers an alternative to identifying treatments: submitting clinical practice guidelines to a quality assurance process. But such an alternative still treats researchers and research settings as the primary source of evidence, which is then used as a driver to shape delivery in routine practice.

There is a strong argument for a complementary approach to building an evidence base that is located in routine practice. With the availability of widespread 'copyleft' (i.e., free or public domain) outcome measures, increasingly IT-literate services, and a common agenda of establishing the effectiveness of the full range of complex interventions and clinical populations that comprise routine practice, there is the opportunity to build an evidence base rooted in practice. Such an evidence base could complement the Cochrane database and, together with it, yield a more robust knowledge base for the psychological therapies.

Conclusions

Within the paradigm of evidence-based practice, a key step has been identifying evidence-based treatments based on the outcomes from RCTs and systematic reviews. While this is an important step towards articulating evidence-based practice, it is only one step. Recent developments in the evidence-based practice movement have identified many levels of, and methods for, grading evidence. The movement also values expert opinion and acknowledges the need to adjust practice according to the needs and preferences of the client and their socio-economic position. However, a complementary paradigm termed practice-based evidence provides a means for practitioners to own and generate an evidence base rooted in routine practice. We argue that both these paradigms are needed because the aim for practitioners and researchers alike, in the end, is best practice.

References

American Psychological Association (2005). Policy statement on evidence-based practice in psychology. 2005 Presidential Task Force on Evidence-Based Practice.

Andrews, G., Slade, T., & Issakidis, C. (2002). Deconstructing current comorbidity: Data from the Australian National Survey of Mental Health and Well-Being. British Journal of Psychiatry, 181(4), 306-314.

Asay, T., & Lambert, M. (1999). The empirical case for the common factors. In B. L. Duncan, M. A. Hubble & S. D. Miller (Eds.), The heart & soul of change: What works in therapy (pp. 23-55). Washington, DC: American Psychological Association.

Australian Psychological Society (2003). Endorsed psychological treatments in mental health. www.psychsociety.com.au/members/evidence/default.asq (downloaded March 1, 2004).

Barkham, M., Gilbert, N., Connell, J., Marshall, C., & Twigg, E. (2005). Suitability and utility of the CORE-OM and CORE-A for assessing severity of presenting problems in psychological therapy services based in primary and secondary care settings. British Journal of Psychiatry, 186, 239-246.

Barkham, M., Margison, F., Leach, C., Lucock, M., Mellor-Clark, J., Evans, C., Benson, L., Connell, J., Audin, K., & McGrath, G. (2001). Service profiling and outcomes benchmarking using the CORE-OM: Towards practice-based evidence in the psychological therapies. Journal of Consulting and Clinical Psychology, 69, 184-196.

Barkham, M., & Mellor-Clark, J. (2000). Rigour and relevance: Practice-based evidence in the psychological therapies. In N. Rowland & S. Goss (Eds.), Evidence-based counselling and psychological therapies: Research and applications (pp. 127-144). London: Routledge.

Barkham, M., Connell, J., Stiles, W. B., Miles, J. N. V., Margison, F., Evans, C., & Mellor-Clark, J. (in press). Dose-effect relations and responsive regulation of treatment duration: The good enough level. Journal of Consulting and Clinical Psychology.

Bennett, J. A. (2005). The Consolidated Standards of Reporting Trials (CONSORT): Guidelines for reporting randomised trials. Nursing Research, 54(2), 128-132.

Chambless, D. L., & Ollendick, T. H. (2001). Empirically supported psychological interventions: Controversies and evidence. Annual Review of Psychology, 52, 685-716.

Lambert, M. J., Burlingame, G. M., Umphress, V., Hansen, N. B., Yancher, S. C., Vermeersch, D., & Clouse, G. C. (1996). The reliability and validity of a new psychotherapy outcome questionnaire. Clinical Psychology and Psychotherapy, 3, 249-258.

Luborsky, L., & Barber, J. P. (1993). Benefits of adherence to psychotherapy manuals, and where to get them. In N. E. Miller, L. Luborsky, J. P. Barber & J. P. Docherty (Eds.), Psychodynamic treatment research: A handbook for clinical practice. New York, NY: Basic Books.

Nathan, P. E., & Gorman, J. M. (Eds.). (1998). A guide to treatments that work. New York: Oxford University Press.

National Health and Medical Research Council [NHMRC] (2002). Using socio-economic evidence in clinical practice guidelines. Canberra: Australian Government.

National Health and Medical Research Council [NHMRC] (2005). NHMRC standards and procedures for externally developed guidelines. Canberra: Australian Government.

Norcross, J. C. (2002). Psychotherapy relationships that work: Therapist contributions and responsiveness to patients. New York: Oxford University Press.

Roth, A., & Fonagy, P. (1996). What works for whom? A critical review of psychotherapy research. New York: Guilford Press.

Roth, A., & Fonagy, P. (2005). What works for whom? A critical review of psychotherapy research (2nd ed.). New York: Guilford Press.

Seligman, M. E. P. (1998). Afterword: A plea. In P. E. Nathan & J. M. Gorman (Eds.), A guide to treatments that work. New York: Oxford University Press.

Shekelle, P. G., Woolf, S. H., Eccles, M., & Grimshaw, J. (1999). Clinical guidelines: Developing guidelines. British Medical Journal, 318, 593-596.

Treatment choice in psychological therapies and counselling: Evidence based clinical practice guidelines. (1998). U.K. Department of Health.

Wampold, B. E. (2001). The great psychotherapy debate: Models, methods, and findings. Mahwah, NJ: Lawrence Erlbaum Associates.

World Health Organisation (2003). Guidelines for WHO guidelines. Global Program on Evidence in Health Policy. Geneva: WHO.