Practice-Based Psychotherapy Research
To Improve The Wellbeing Of Our Community

PPRNet Blog: December 2013

Giorgio A. Tasca

At the PPRNet conference in November 2012, over 100 psychotherapy clinicians, researchers, and educators were very keen to receive ongoing information about psychotherapy research that is practice-oriented and presented in an easily readable format. And so the PPRNet Blog was born.

About once a month I will review and summarize two or three published psychotherapy research articles. As part of the summary, I will highlight the practice implications of the research.

Because of copyright issues, we cannot post the full text of the articles, but we will provide a link to the abstract on the publisher's web site. I will also post the author's email address. Most authors are very happy to share their work. So if you want a copy of the article send the author an email with a request for a pdf or reprint.

At the bottom of each review you can post a comment, and comment on your colleagues' comments. I will update these as frequently as possible.

If you have ideas for an article to review or a topic you would like to see covered, please send me an email at

Giorgio A. Tasca

How Much Do Psychotherapists Differ in Their Outcomes and Why Does this Matter?

Handbook of Psychotherapy and Behavior Change

Starting in March 2013 I will review one chapter a month from the Handbook of Psychotherapy and Behavior Change in addition to reviewing psychotherapy research articles. Book chapters have more restrictive copyright rules than journal articles, so I will not provide author email addresses for these chapters. If you are interested, the Handbook table of contents and sections of the book can be read on Google Books:

How Much Do Psychotherapists Differ in Their Outcomes and Why Does this Matter?

Baldwin, S.A., & Imel, Z.E. (2013). Therapist effects. In M.J. Lambert (Ed.), Bergin and Garfield's Handbook of Psychotherapy and Behavior Change (6th ed., pp. 258-297). New York: Wiley.

Does it matter that some therapists are more effective than others? Can less effective therapists be trained to improve their outcomes and relationship quality with patients? These are important questions not only for our patients' well-being but also for the long-term survival of psychotherapy as a health enterprise. If we do not measure outcomes and help therapists who are less effective, stakeholders (i.e., clients, families, agencies, insurance companies) may stop paying for the services.

In the September 2013 blog I discussed a large study showing that a few therapists were reliably harmful and some therapists were reliably helpful to their patients. That study also reported that most therapists were effective in 5 of 12 problem domains for which their patients sought help. What these findings and the Handbook chapter by Baldwin and Imel (2013) show is that there are significant between-therapist effects (i.e., therapists differed from each other on patient outcomes) and within-therapist effects (i.e., a therapist's outcomes within his or her own caseload differed based on the patients' problems).

Baldwin and Imel (2013) reported on their meta-analysis, in which between-therapist differences accounted for 5% of the outcome variance. That seems small, but it is not. One study, for example, estimated that for every 100 patients treated, the worst therapist would have 6 more patients who deteriorated than the best therapist. I would prefer my loved ones to be seen by the best therapist, even if the difference between best and worst is only 5%. Nevertheless, 95% of the variance in outcomes occurs within therapists' caseloads. That is, the patient, other contextual variables, and the therapist-patient relationship are by far the biggest contributors to outcome.
As Baldwin and Imel point out, not only are some therapists more effective for some patients and not others, but also some therapists are better at developing a therapeutic relationship with some patients than with others. Baldwin and Imel reported that, on average, 9% of the variance in the quality of the therapeutic alliance is associated with the therapist – that's a clinically meaningful effect.
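For readers who like to see the arithmetic, the variance-partitioning logic above can be sketched numerically. The sketch below uses the chapter's 5% figure, but the standardized outcome scale and the best-versus-worst (2 standard deviations apart) comparison are my illustrative assumptions, not the authors' analysis:

```python
# Illustrative sketch: how a 5% between-therapist share of outcome
# variance (an intraclass correlation, or ICC) translates into an
# expected outcome gap between the best and worst therapists.

def icc(between_var, within_var):
    """Proportion of total outcome variance attributable to therapists."""
    return between_var / (between_var + within_var)

# Suppose outcomes are on a standardized scale (total variance = 1.0)
# and therapists account for 5% of that variance, as in the meta-analysis.
between_var = 0.05
within_var = 0.95
assert abs(icc(between_var, within_var) - 0.05) < 1e-9

# Standard deviation of the therapist effects themselves:
therapist_sd = between_var ** 0.5  # about 0.22 on the standardized scale

# Therapists near the top (+2 SD) versus near the bottom (-2 SD) of the
# distribution differ by about 0.9 standardized units in mean outcome:
# a clinically visible gap despite the seemingly small 5% figure.
gap = 4 * therapist_sd
print(round(therapist_sd, 2), round(gap, 2))  # prints 0.22 0.89
```

The same arithmetic applies to the 9% alliance figure: a small-looking variance share still implies meaningful differences between therapists at the extremes.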

Practice Implications

As Baldwin and Imel (2013) state, ignoring therapist accountability is detrimental to patients and to the mental health field in general. If stakeholders do not see evidence of positive outcomes, then they will withdraw funding, and patients will have even less access to services. Therapists differ in their outcomes, and outcomes also differ within each therapist's caseload. If a primary goal is to improve therapist performance and patient outcomes, then therapists need to measure outcomes and therapeutic relationship quality. This knowledge can help therapists seek continuing education and training to improve outcomes and therapeutic alliances with those patients for whom they are less effective. This may require continuous outcome monitoring and real-time feedback to therapists regarding their patients' outcomes (see my September 2013 blog on identifying clients who might deteriorate).

Send Us Your Comments

Are The Parts as Good as The Whole?

Bell, E. C., Marcus, D. K., & Goodlad, J. K. (2013). Are the parts as good as the whole? A meta-analysis of component treatment studies. Journal of Consulting and Clinical Psychology, 81, 722-736.

Component studies (i.e., dismantling treatments or adding to existing treatments) may provide a method for identifying whether specific active ingredients in psychotherapy contribute to client outcomes. In a dismantling design, at least one element of the treatment is removed and the full treatment is compared to this dismantled version. In additive designs, an additional component is added to an existing treatment to examine whether the addition improves client outcomes. If the dismantled or added component is an active ingredient, then the condition with fewer components should yield less improvement. Among other things, results from dismantling or additive design studies can help clinicians decide which components of treatments to add or remove with clients who are not responding.

For example, Jacobson and colleagues (1996) conducted a dismantling study of cognitive-behavioral therapy (CBT) for depression. They compared: (1) the full package of CBT, (2) behavioral activation (BA) plus CBT modification of automatic thoughts, and (3) BA alone. The study failed to find differences among the three treatment conditions. These findings were interpreted to indicate that BA was as effective as CBT, and there followed an increased interest in behavioral treatments for depression. However, relying on a single study to influence practice is risky because single studies are often statistically underpowered, and their results are not as reliable as the collective body of research.

One way to evaluate the collective research is by meta-analysis, which allows one to estimate an overall effect size in the available literature (see my November 2013 blog on why clinicians should rely on meta-analyses). In their meta-analysis, Bell and colleagues (2013) collected 66 component studies from 1980 to 2010. For the dismantling studies, there were no significant differences between the full treatments and the dismantled treatments.
For the additive studies, the treatment with the added component yielded a small but significant effect at treatment completion and at follow-up. These effects were found only for the specific problems targeted by the treatment; effects were smaller and non-significant for other outcomes such as quality of life.
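For readers curious about the mechanics, the core of a meta-analysis like Bell and colleagues' is inverse-variance pooling of effect sizes. The sketch below is a minimal fixed-effect version; the three effect sizes and variances are hypothetical, not data from the actual studies:

```python
# Minimal fixed-effect meta-analysis: each study's effect size is
# weighted by the inverse of its sampling variance, so larger, more
# precise studies count for more in the pooled estimate.

def pool_fixed_effect(effects, variances):
    """Inverse-variance weighted mean effect size and its variance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    pooled_var = 1.0 / sum(weights)
    return pooled, pooled_var

# Three hypothetical additive-design studies: d is the effect of the
# added component on the targeted symptom, var its sampling variance.
effects = [0.30, 0.10, 0.20]
variances = [0.04, 0.02, 0.08]

d_pooled, var_pooled = pool_fixed_effect(effects, variances)
se = var_pooled ** 0.5
ci = (d_pooled - 1.96 * se, d_pooled + 1.96 * se)
print(round(d_pooled, 3), tuple(round(x, 3) for x in ci))
```

In this toy example the pooled effect is small (about 0.17) and its 95% confidence interval crosses zero, which mirrors the pattern the review describes: small component effects that reach significance only when many studies are combined and only on targeted outcomes.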

Practice Implications

Psychotherapists are sometimes faced with a decision about whether to supplement current treatments with an added component, or whether to remove a component that may not be helping. Adding components to existing treatments leads to modestly improved outcomes, at least with regard to targeted symptoms. Removing components appears not to have an impact on outcomes. The findings of Bell and colleagues' (2013) meta-analysis suggest that specific components or active ingredients of current treatments have a significant but small effect on outcomes. Some writers, such as Wampold, have argued that the small effects of specific components highlight the greater importance of common factors in psychotherapy (i.e., therapeutic alliance, client expectations, therapist empathy, etc.). This may be especially the case when it comes to improving a patient's quality of life.

Click here for study abstract:
Author email:

Send Us Your Comments

Cognitive-Behavioral Therapy and Psychodynamic Therapy are Equally Effective for Severely Depressed Patients

Driessen, E., Van, H.L., Don, F.J., Peen, J., Kool, S., ... Dekker, J.J. (2013). The efficacy of cognitive-behavioral therapy and psychodynamic therapy in the outpatient treatment of major depression: A randomized clinical trial. American Journal of Psychiatry, 170, 1041-1050.

Psychotherapy is one of the most widely used treatments for major depression. Unfortunately, there is no commercial entity like the pharmaceutical industry to support research and development in psychotherapy. As a result, researchers have limited ability to conduct larger-scale studies of comparative treatment effectiveness, of which there are only a handful. Although psychodynamic therapy (PDT) has been used to treat depressed patients for decades, randomized controlled trials of its efficacy are relatively infrequent.

A related problem with psychotherapy outcome research is that sample sizes tend to be too small to actually test whether two treatments are equivalent in what is called an “equivalency trial”. Without large samples, all one can conclude is that two treatments are “not significantly different”. (A statistical note: an equivalency trial is planned from the outset to have a large enough sample to test the hypothesis that, with 95% certainty, the effect of one treatment falls within a narrow, predetermined margin of the effect of the other treatment.)

The study by Driessen and colleagues was conducted at several sites in Amsterdam, in which 341 patients seeking outpatient psychotherapy for depression in psychiatric clinics were randomized to PDT or cognitive-behavioral therapy (CBT). This is the largest trial of PDT to date. Participants received 16 weeks of therapy and were then followed for 1 year. About 40% of patients started with severe depression. Treatment was provided by 93 experienced and well-trained therapists, each delivering one of the two treatments. The main outcome was remission from depression, defined as achieving a low score on a validated observer rating scale. Post-treatment remission rates were 21% for CBT and 24% for PDT, indicating that the treatments were equivalent.
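The equivalency logic described in the statistical note can be sketched as follows: the treatments are declared equivalent if the 95% confidence interval for the difference in remission rates falls entirely inside a pre-set margin. The sketch uses the study's reported remission rates, but the per-arm sample size, the normal approximation, and the 15-percentage-point margin are my illustrative assumptions, not the trial's actual analysis plan:

```python
import math

def equivalent(p1, n1, p2, n2, margin):
    """True if the 95% CI for p1 - p2 lies entirely within (-margin, +margin)."""
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    lo, hi = diff - 1.96 * se, diff + 1.96 * se
    return -margin < lo and hi < margin

# Remission rates from the study (21% CBT, 24% PDT), with a hypothetical
# 170 patients per arm and a hypothetical 15-point equivalence margin.
print(equivalent(0.21, 170, 0.24, 170, margin=0.15))  # prints True
```

Note the role of the margin: with the same data but a stricter 5-point margin, the same function returns False, which is why equivalency trials must fix the margin (and a sample size large enough to support it) before the data are collected.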

Practice Implications

Cognitive-behavioral therapy (CBT) and short-term PDT produced similar outcomes for patients with a major depressive episode, but remission rates at the end of treatment were low for both treatments. The low remission rates were likely due to the greater severity of depression in these patients compared to those seen in primary care settings. The results highlight that even the best available psychological (and pharmacological) treatments yield modest outcomes for more severely depressed patients. Nevertheless, this rare equivalency trial found that CBT and PDT were equivalent in terms of outcomes for these patients.

A copy of the article is available.

Author email:

Send Us Your Comments