Evidence-based practice (EBP) is an approach to health care delivery in which professionals use the best evidence possible, i.e. the most appropriate information available, to make clinical decisions for individual patients. EBP promotes the collection, interpretation, and integration of valid, important, and applicable patient-reported, clinician-observed, and research-derived evidence. The best available evidence, moderated by patient circumstances and preferences, is applied to improve the quality of clinical judgments and to facilitate cost-effective care.
Evidence-based practice in psychology requires practitioners to follow psychological approaches and techniques that are based on the best available research evidence (Sackett, Straus, Richardson, Rosenberg, & Haynes, 2000). Evidence suggests that some therapy approaches work better than others. Criteria for empirically supported therapies have been defined by Chambless and Hollon (1998). Accordingly, a therapy is considered efficacious and specific if there is evidence from at least two settings that it is superior to a pill or psychological placebo or to another bona fide treatment. If there is evidence from two or more settings that the therapy is superior to no treatment, it is considered efficacious. If there is support from one or more studies in just a single setting, the therapy is considered possibly efficacious, pending replication. By these guidelines, cognitive behavior therapy (CBT) stands out as having the most empirical support for a wide range of symptoms in adults, adolescents, and children. Unfortunately, the term "evidence-based practice" is not always used in such a rigorous fashion, and many psychologists claim to follow "evidence-based approaches" even when the methods they use do not meet established criteria for efficacy (Berke, Rozell, Hogan, Norcross, & Karpiak, 2011). In reality, not all mental health practitioners receive training in evidence-based approaches, and members of the public are often unaware that evidence-based practices exist. Consequently, patients do not always receive the most effective, safe, and cost-effective treatments available. To improve dissemination of evidence-based practices, the Association for Behavioral and Cognitive Therapies (ABCT) and the Society of Clinical Child and Adolescent Psychology (SCCAP, Division 53 of the American Psychological Association) maintain updated information on their websites about evidence-based practices in psychology for practitioners and the general public.
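As a rough illustration, the Chambless and Hollon criteria just described amount to a simple decision rule. The sketch below is an illustrative assumption of how that rule could be expressed, not an official implementation; the function name and its inputs are invented for this example.

```python
def classify_efficacy(num_settings: int, superior_to: str) -> str:
    """Classify a therapy by the efficacy criteria summarized above.

    num_settings: number of independent settings (research teams) whose
                  studies support the therapy.
    superior_to:  what the therapy outperformed -- "placebo_or_treatment"
                  (a pill or psychological placebo, or another bona fide
                  treatment) or "no_treatment".
    """
    # Superior to placebo or a bona fide treatment in >= 2 settings:
    if num_settings >= 2 and superior_to == "placebo_or_treatment":
        return "efficacious and specific"
    # Superior only to no treatment, but in >= 2 settings:
    if num_settings >= 2 and superior_to == "no_treatment":
        return "efficacious"
    # Support from one or more studies in a single setting:
    if num_settings == 1:
        return "possibly efficacious, pending replication"
    return "not empirically supported"
```

The ordering of the checks mirrors the text: the strongest designation requires both replication across settings and superiority to an active comparison, while a single-setting finding only earns the provisional label.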
The approach involves complex and conscientious decision-making based not only on the available evidence but also on patient characteristics, situations, and preferences. It recognizes that care is individualized and ever changing, and that it involves uncertainties and probabilities. Ultimately, EBP is a formalization of the care process that the best clinicians have practiced for generations, from the country doctor to the modern practitioner who has known his or her patients over decades.
Evidence-based practice (EBP) develops individualized guidelines of best practices to inform the improvement of whatever professional task is at hand. Evidence-based practice is a philosophical approach that is in opposition to rules of thumb, folklore, and tradition. Examples of a reliance on "the way it was always done" can be found in almost every profession, even when those practices are contradicted by new and better information.
Evidence-based design and development decisions are made after reviewing information from repeated rigorous data gathering instead of relying on rules, single observations, or custom. Evidence-based medicine and evidence-based nursing practice are the two largest fields employing this approach. In psychiatry and community mental health, evidence-based practice guides have been created by such organizations as the Substance Abuse and Mental Health Services Administration and the Robert Wood Johnson Foundation, in conjunction with the National Alliance on Mental Illness.
This model of care has been studied for 30 years in universities and is gradually making its way into the public sector. It effectively moves away from the old "medical model" ("You have a disease; take this pill.") to an "evidence-presented model" that uses the patient as the starting point in diagnosis. EBPs are being employed in the fields of health care, juvenile justice, mental health, and social services, among others.
Key elements in using the best evidence to guide the practice of any professional include the development of questions using research-based evidence, the level and types of evidence to be used, and the assessment of effectiveness after completing the task or effort. One obvious problem with EBP in any field is the use of poor quality, contradictory, or incomplete evidence. Evidence-based practice continues to be a developing body of work for professions as diverse as education, psychology, economics, social work and architecture.
According to Norcross et al. (2006), "the burgeoning evidence based practice movement in mental health attempts to identify, implement, and disseminate treatments that have been proven demonstrably effective according to the empirical evidence". However, Norcross et al. (2006) also suggest that it may be equally useful to identify what does not work: discredited psychological treatments and tests. To that end, they have conducted survey research on discredited psychological treatments. Examples of discredited psychotherapies include the use of pyramid structures, orgone therapy, crystal healing, past-lives therapy, chiropractic manipulation, neurolinguistic programming, and Erhard Seminars Training.
Levels of evidence and evaluation of research
Because conclusions about research results are made in a probabilistic manner, it is impossible to work with two simple categories of outcome research reports. Research evidence does not fall simply into "evidence-based" and "non-evidence-based" classes, but can be anywhere on a continuum from one to the other, depending on factors such as the way the study was designed and carried out. The existence of this continuum makes it necessary to think in terms of "levels of evidence", or categories of stronger or weaker evidence that a treatment is effective. To classify a research report as strong or weak evidence for a treatment, it is necessary to evaluate the quality of the research as well as the reported outcome.
Evaluation of research quality can be a difficult task requiring meticulous reading of research reports and background information. It may not be appropriate simply to accept the conclusion reported by the researchers; for example, in one investigation of outcome studies, 70% were found to have stated conclusions unjustified by their research design (Rubin & Parrish, 2007).
Although early consideration of EBP issues by psychologists provided a stringent but simple definition of EBP, requiring two independent randomized controlled trials supporting the effectiveness of a treatment, it became clear that additional factors needed to be considered. These included both the need for lower but still useful levels of evidence, and the need to require even the "gold standard" randomized trials to meet further criteria.
A number of protocols for the evaluation of research reports have been suggested and will be summarized here. Some of these divide research evidence dichotomously into EBP and non-EBP categories, while others employ multiple levels of evidence. As the reader will see, although the criteria used by the various protocols overlap to some extent, they do not do so completely.
The Kaufman Best Practices Project approach did not use an EBP category per se, but instead provided a protocol for selecting the most acceptable treatment from a group of interventions intended to treat the same problems. To be designated as "best practice", a treatment would need to have a sound theoretical base, general acceptance in clinical practice, and considerable anecdotal or clinical literature. This protocol also requires absence of evidence of harm, at least one randomized controlled study, descriptive publications, a reasonable amount of necessary training, and the possibility of being used in common settings.
A protocol suggested by Saunders et al. assigns research reports to six categories, on the basis of research design, theoretical background, evidence of possible harm, and general acceptance. To be classified under this protocol, there must be descriptive publications, including a manual or similar description of the intervention. This protocol does not consider the nature of any comparison group, the effect of confounding variables, the nature of the statistical analysis, or a number of other criteria. Interventions are assessed as belonging to Category 1, well-supported, efficacious treatments, if there are two or more randomized controlled outcome studies comparing the target treatment to an appropriate alternative treatment and showing a significant advantage to the target treatment. Interventions are assigned to Category 2, supported and probably efficacious treatments, based on positive outcomes of nonrandomized designs with some form of control, which may involve a non-treatment group. Category 3, supported and acceptable treatments, includes interventions supported by one controlled or uncontrolled study, by a series of single-subject studies, or by work with a population different from the one of interest. Category 4, promising and acceptable treatments, includes interventions that have no support except general acceptance and a clinical anecdotal literature; however, any evidence of possible harm excludes treatments from this category. Category 5, innovative and novel treatments, includes interventions that are not thought to be harmful but are not widely used or discussed in the literature. Category 6, concerning treatments, is the classification for treatments that have the possibility of doing harm, as well as having unknown or inappropriate theoretical foundations. Under this protocol, for example, dyadic developmental psychotherapy was found to be an evidence-based treatment approach.
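The six Saunders et al. categories described above can be read as an ordered decision procedure, with evidence of harm trumping everything else. The sketch below is a minimal illustration under that reading; the parameter names are assumptions introduced for this example and do not come from the protocol itself.

```python
def saunders_category(rct_count: int,
                      has_controlled_design: bool,
                      has_any_study: bool,
                      generally_accepted: bool,
                      evidence_of_harm: bool) -> tuple:
    """Return (category number, label) per the Saunders et al. scheme
    as summarized above. Checks run from most to least demanding,
    except that possible harm overrides all other evidence.
    """
    if evidence_of_harm:
        return (6, "concerning treatment")
    # Two or more randomized controlled studies vs. an alternative:
    if rct_count >= 2:
        return (1, "well-supported, efficacious treatment")
    # Positive outcomes from nonrandomized designs with some control:
    if has_controlled_design:
        return (2, "supported and probably efficacious treatment")
    # One study, single-subject series, or a different population:
    if has_any_study:
        return (3, "supported and acceptable treatment")
    # Only general acceptance and clinical anecdotal literature:
    if generally_accepted:
        return (4, "promising and acceptable treatment")
    # Not harmful, but not widely used or discussed:
    return (5, "innovative and novel treatment")
```

Note how the ordering encodes the protocol's priorities: a treatment with strong trial evidence but any indication of harm still lands in Category 6, which a flat scoring scheme would not capture.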
A protocol for evaluation of research quality was suggested by a report from the Centre for Reviews and Dissemination, prepared by Khan et al. and intended as a general method for assessing both medical and psychosocial interventions. While strongly encouraging the use of randomized designs, this protocol noted that such designs were useful only if they met demanding criteria, such as true randomization and concealment of the assigned treatment group from the client and from others, including the individuals assessing the outcome. The Khan et al. protocol emphasized the need to make comparisons on the basis of "intention to treat" in order to avoid problems related to greater attrition in one group. The Khan et al. protocol also presented demanding criteria for nonrandomized studies, including matching of groups on potential confounding variables, adequate descriptions of groups and treatments at every stage, and concealment of treatment choice from persons assessing the outcomes. This protocol did not provide a classification of levels of evidence, but included or excluded treatments from classification as evidence-based depending on whether the research met the stated standards.
An assessment protocol has been developed by the U.S. National Registry of Evidence-Based Practices and Programs (NREPP) . Evaluation under this protocol occurs only if an intervention has already had one or more positive outcomes, with a probability of less than .05, reported, if these have been published in a peer-reviewed journal or an evaluation report, and if documentation such as training materials has been made available. The NREPP evaluation, which assigns quality ratings from 1 to 4 to certain criteria, examines reliability and validity of outcome measures used in the research, evidence for intervention fidelity (predictable use of the treatment in the same way every time), levels of missing data and attrition, potential confounding variables, and the appropriateness of statistical handling, including sample size.
Protocols for evaluation of research quality are still in development. So far, the available protocols pay relatively little attention to whether outcome research is relevant to efficacy (the outcome of a treatment performed under ideal conditions) or to effectiveness (the outcome of the treatment performed under ordinary, expectable conditions).
See also
- Adverse drug reaction
- Adverse effect (medicine)
- Clinical decision support system (CDSS)
- Consensus (medical)
- Critical appraisal
- Dynamic treatment regimes
- Evidence-based design
- Evidence-based education
- Evidence-based management
- Evidence-based medicine
- Evidence-based medical ethics
- Evidence-based nursing
- Evidence based policy
- Evidence based practice
- Guideline (medical)
- Hospital accreditation
- Medical algorithm
- Policy-based evidence making
- Practice-based evidence
- Quality control
- Source criticism
- Systematic review
References
- Rubin, A., & Parrish, D. (2007). Problematic phrases in the conclusions of published outcome studies. Research on Social Work Practice, 17 (3), 334-347.
- Chambless, D. L., & Hollon, S. D. (1998). Defining empirically supported therapies. Journal of Consulting and Clinical Psychology, 66(1), 7-18.
- Kaufman Best Practices Project. (2004). Kaufman Best Practices Project Final Report: Closing the Quality Chasm in Child Abuse Treatment; Identifying and Disseminating Best Practices. Retrieved July 20, 2007, from http://academicdepartments.musc.edu/ncvc/resources_prof/reports_prof.thm.
- Saunders, B., Berliner, L., & Hanson, R. (2004). Child physical and sexual abuse: Guidelines for treatments. Retrieved September 15, 2006, from http://www.musc.edu/cvc.guidel.htm
- Khan, K. S., et al. (2001). CRD Report 4. Stage II. Conducting the review. Phase 5. Study quality assessment. York, UK: Centre for Reviews and Dissemination, University of York. Retrieved July 20, 2007, from http://www.york.ac.uk/inst/crd/pdf/crd_4ph5.pdf
- National Registry of Evidence-Based Practices and Programs (2007). NREPP Review Criteria. Retrieved March 10, 2008 from http://www.nrepp.samsha.gov/review-criteria.htm
- Norcross, J. C., Garofalo, A., & Koocher, G. P. (2006). Discredited psychological treatments and tests: A Delphi poll. Professional Psychology: Research and Practice, 37(5), 515-522.
External links
- Evidence Based Practice Definitions
- The Joanna Briggs Institute - International Collaborative on Evidence-based Practice in Nursing
- Evidence - communicates leading research in order to promote international cooperation and evidence-based treatments
- Evidence-based practice clinic initiatives
- Evidence Based Practice studies at the University of Massachusetts
- Indiana Center for Evidence Based Nursing Practice: A JBI Collaborating Center
- Center for the Advancement of Evidence-Based Practice (CAEP) at Arizona State University College of Nursing and Healthcare Innovation
- The National Nursing Practice Network