Practice-Based Psychotherapy Research
To Improve The Wellbeing Of Our Community
PPRNet Blog: March 2016
At the PPRNet conference in November 2012, over 100 psychotherapy clinicians, researchers, and educators expressed a keen interest in receiving ongoing information about psychotherapy research that is practice-oriented and presented in an easily readable format. And so the PPRNet Blog was born.
About once a month I will review and summarize two or three published psychotherapy research articles. As part of the summary, I will highlight the practice implications of the research.
Because of copyright issues, we cannot post the full text of the articles, but we will provide a link to the abstract on the publisher's web site. I will also post the author's email address. Most authors are very happy to share their work. So if you want a copy of the article send the author an email with a request for a pdf or reprint.
At the bottom of each review you can post a comment, and comment on your colleagues' comments. I will update these as frequently as possible.
If you have ideas for an article to review or a topic you would like to see covered, please send me an email at firstname.lastname@example.org.
Giorgio A. Tasca
Goldberg, S.B., Rousmaniere, T., Miller, S.D., Whipple, J., Nielsen, S.L., Hoyt, W.T., & Wampold, B.E. (2016). Do psychotherapists improve with time and experience? A longitudinal analysis of outcomes in a clinical setting. Journal of Counseling Psychology, 63, 1-11.
Do therapists get better at providing psychotherapy as they gain more experience? This is a long-standing question in psychotherapy, and most studies comparing therapists of different experience levels have not provided encouraging findings. This large longitudinal study in a practice setting by Goldberg and colleagues is unique because it followed therapists over a number of years of their careers. That is, the authors did not focus on outcome differences between therapists with different levels of experience, but rather examined whether an individual therapist improves over time as he or she accrues experience. Data were collected on 170 therapists and 6,591 patients over 18 years in a large practice in the U.S. Patients were distressed adults who attended an average of 8 sessions (range = 3 to 153) across 13 weeks. Over the 18 years of the study, therapists on average saw 39 patients, saw their first study patient about 5 years after finishing graduate school, and had been working at the practice for about 5 years. On average, patients got better: their psychological symptoms declined significantly over the course of treatment (i.e., 50% reliably improved), and these rates of improvement are similar to benchmarks set in clinical trials. Contrary to expectations, therapists tended to have slightly poorer patient outcomes as they gained experience. This result remained significant even when patient baseline severity, therapist caseload size, and other factors were controlled. However, more experienced therapists tended to have fewer early unplanned terminations (< 2 sessions) than less experienced therapists.
This is the first large longitudinal study that followed therapists over several years of their career. Therapists became less effective over time, although the magnitude of the deterioration was very small. At the very least, one can say that patients did not achieve better outcomes as their therapists became more experienced. The authors note that the results of this study are in contrast to a large therapist survey in which most practitioners reported that their skills improved with passing time, and in contrast to another study in which therapists tended to over-estimate their effectiveness and under-recognize failing cases. Ways for therapists to improve their skills and patient outcomes might include: engaging in regular progress monitoring, targeted learning of fundamental therapeutic skills, training with standardized patients, and setting aside time for reflection and clinical consultation.
View a copy of the Do Psychotherapists Improve with Experience? abstract.
Author email: email@example.com
Miller, D.J., Spengler, E.S., & Spengler, P.M. (2015). A meta-analysis of confidence and judgement accuracy in clinical decision making. Journal of Counseling Psychology, 62, 553-567.
People can make errors in judgment based on biased decision-making rules, and clinicians may be prone to such errors. In work that earned Kahneman the Nobel Prize, Kahneman and Tversky outlined a number of heuristics (i.e., mental shortcuts) that lead to cognitive biases, which in turn affect the accuracy of decisions. For example, when making a differential diagnosis clinicians may: rely too heavily on one piece of information simply because it is the most available (e.g., “I vividly remember a patient with conversion disorder who had the same history”); ignore that a particular event (e.g., conversion disorder) is very rare; or seek confirming rather than disconfirming evidence (e.g., attending only to the symptoms consistent with a PTSD diagnosis). Complicating these biases is the tendency for clinicians to be over-confident. For example, in one study the average psychotherapist rated their performance as better than 80% of their peers, and no therapist rated themselves below the 50th percentile among peers. In their meta-analysis, Miller and colleagues reviewed 36 studies of the relationship between clinician confidence ratings and decision accuracy among 1,485 clinicians. The authors were particularly interested in the overconfidence bias, which occurs when individuals report higher confidence in their judgments than their actual accuracy warrants. For example, studies have assessed the impact of clinician confidence on clinical accuracy in: detecting random responding on a psychological test, diagnosing a medically verified brain disorder from neuropsychological test data, predicting future violence and recidivism in offenders, and judging patient progress in psychotherapy. Most studies find that clinicians are quite confident in their judgments. But is this confidence warranted? Miller and colleagues’ meta-analysis found a significant but small (r = .15) association between confidence and accuracy.
This suggests that clinician confidence is only slightly indicative of decision-making accuracy. The effect was somewhat larger for more experienced clinicians (r = .25), indicating that greater experience and training produced somewhat more consistency between a clinician’s confidence and their clinical accuracy. Further, higher confidence was associated with poorer accuracy when clinicians had to make repeated decisions without feedback, when feedback was not written, and when the event being judged was rare.
Clinicians, like everyone else, are prone to errors when they look only at confirming evidence, when they rely solely on their own memory rather than objective data, and when they are over-confident. Accuracy can be increased when clinicians use decision-making aids that provide quality corrective feedback. Such aids might include: objective standardized test data, repeated measurements with feedback to assess patient progress in psychotherapy, and actively looking for disconfirming evidence before making a clinical judgment. As the authors conclude, confidence is not a good substitute for accuracy.
View a copy of the Does Clinician Confidence Lead to Accurate Clinical Judgement? abstract.
Author email: firstname.lastname@example.org
Owen, J., Drinane, J. M., Idigo, K. C., & Valentine, J. C. (2015). Psychotherapist effects in meta-analyses: How accurate are treatment effects? Psychotherapy, 52(3), 321-328.
One of the ongoing debates in the psychotherapy research literature has to do with the relative efficacy of psychotherapies. Is psychotherapy brand A (CBT, for example) more effective than psychotherapy brand B (psychodynamic therapy, for example)? The most common way to test this question is with randomized controlled trials (RCTs), in which clients are randomly assigned to a treatment condition (brand A or B). This study design controls for systematic bias in the results that may be caused by differences between clients. But what about therapists? We know, for example, that therapist effects (i.e., differences between therapists) account for approximately 5% to 10% of client outcomes. Therapist effects are often larger than the effect of the empirically supported treatment being offered. Yet it is almost unheard of for therapists to be randomized to treatments, so therapist effects are not controlled in most psychotherapy trials. As a result, the effects of differences between therapists get statistically rolled into the treatment effects. As Owen and colleagues point out, the impact of not controlling for therapist effects is that some differences between treatments in an RCT will appear statistically significant when in fact they are not. One can control for the effect of therapist differences, thus providing a more accurate estimate of treatment effects, but this is rarely done in published RCTs. So, when these RCTs are summarized in a meta-analysis, the meta-analysis results are also affected by ignoring therapist effects. In their study, Owen and colleagues did something very clever. They took data from 17 recent meta-analyses of RCTs that found differences between two interventions. These included meta-analyses of studies comparing: CBT vs. alternative treatments, bona fide vs. non-bona fide treatments, culturally adapted treatments vs. those that were not adapted, and so on.
There are many other meta-analyses that show no differences between treatments, but the authors focused specifically on the 17 that did show differences. Owen and colleagues statistically estimated what would happen to the original findings of significant differences between treatments if therapist effects on patient outcomes were controlled. They modeled three sizes of therapist effects, accounting for 5% (small), 10% (medium), or 20% (large) of patient outcomes. Even small therapist effects (5%) reduced the proportion of significant differences between treatments from 100% to 80%. When therapist effects were estimated to be medium (10%, which is the best estimate based on the research), the proportion of significant differences dropped to 65%. For large therapist effects (20%), only 35% of the treatment differences remained significant.
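For readers who like to see the statistical mechanism for themselves, here is a small simulation sketch (my own illustration, not the authors' analysis, and the function name and parameter values are hypothetical). It shows why ignoring therapist clustering matters: when patients are nested within therapists and a trial is analyzed at the patient level, treatment comparisons with no true difference come out "significant" far more often than the nominal 5%.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def naive_false_positive_rate(icc, n_therapists=10, n_patients=20, n_sims=1000):
    """Simulate trials with NO true treatment difference. Patients are
    nested within therapists; `icc` is the proportion of outcome variance
    due to therapist differences. Returns how often a patient-level
    t-test (which ignores therapist clustering) reports p < .05."""
    hits = 0
    for _ in range(n_sims):
        arms = []
        for _arm in (0, 1):  # separate therapists deliver each treatment
            ther = rng.normal(0.0, np.sqrt(icc), n_therapists)
            # each therapist's patients share that therapist's effect
            y = ther[:, None] + rng.normal(0.0, np.sqrt(1.0 - icc),
                                           (n_therapists, n_patients))
            arms.append(y.ravel())
        _, p = stats.ttest_ind(arms[0], arms[1])
        hits += p < 0.05
    return hits / n_sims

# With no therapist effect the test behaves as advertised (~5% false
# positives); with a modest therapist effect the rate is badly inflated.
print(naive_false_positive_rate(icc=0.0))
print(naive_false_positive_rate(icc=0.10))
```

The inflation grows with both the size of the therapist effect and the number of patients per therapist, which is consistent with Owen and colleagues' finding that even a 5% therapist effect erased a fifth of the "significant" treatment differences.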
I have argued previously that the psychotherapist matters. Putting more time and effort into developing good reflective practice based on quality information, and into therapist skills like empathy, progress monitoring, and identifying and repairing alliance ruptures, will result in better patient outcomes. As Owen and colleagues note, when reading an RCT that claims to find significant differences between psychotherapies, ask yourself whether the authors took into account the effects of differences between therapists.
View a copy of the Psychotherapists Matter When Evaluating Treatment Outcomes abstract.
Author email: email@example.com