The replication of research results is a cornerstone of science. If research findings can't be reproduced by other researchers, what confidence can we have in "scientific" findings? A large-scale replication study reported in the September issue of the eminent journal Science has caused quite a stir. This study, a massive collaboration involving over 100 researchers and research institutions, was able to replicate only 39 of 100 recent studies from the most prominent journals in cognitive and social psychology. The project's methodology was transparent and rigorously followed high-quality research standards. The scientific community is confident in, but not happy with, the results of this study, which has implications for the entire scientific enterprise.
It is tempting to dismiss the relevance of these findings for the broader scientific community by critiquing psychology's claim to being or doing science. However, one authority on medical science, Dr. John Ioannidis of Stanford University, believes that if comparable replications were conducted in biomedical research, the results would be no better than the replication rate found in the collaboration project. There is some evidence to support Dr. Ioannidis' view: researchers at the biotechnology firm Amgen were able to reproduce the results of only 11% of 53 landmark studies in oncology and hematology.
It seems that the time has come in psychology, if not in science generally, to take the matter of replication seriously. Alongside the publication of the replication collaboration project in Science, the American Psychological Association's flagship journal, American Psychologist, published a comprehensive review of the scant literature on replication in psychology. That review brought together the field's thinking on ways to encourage higher-quality replication studies, as well as ways to promote the publication of replication studies.
It is noteworthy that replication is not a problem across all subfields of psychology. There is a very large body of replicated research in clinical psychology: over the past 50 years, thousands of studies have examined the effectiveness of psychotherapy and the various factors that affect its success. Over the past two decades, researchers have pulled this vast literature on psychotherapy outcomes together with a procedure called meta-analysis. This procedure allows researchers to combine the results of similar studies with a reliable statistical method that provides a measure of the reproducibility of research findings; a sketch of the core calculation appears below.
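For readers curious about the mechanics, here is a minimal sketch of the simplest (fixed-effect, inverse-variance) version of meta-analytic pooling. The effect sizes and standard errors are hypothetical, invented purely for illustration:

```python
import math

# (effect size d, standard error) from several hypothetical studies
studies = [(0.50, 0.15), (0.35, 0.20), (0.60, 0.10), (0.45, 0.25)]

# Weight each study by the inverse of its variance, so more precise
# studies count for more in the pooled estimate.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled effect size d = {pooled:.2f} (SE = {pooled_se:.2f})")
```

Real meta-analyses add refinements (random-effects models, tests for heterogeneity and publication bias), but this inverse-variance weighting is the arithmetic at their core.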
Some consistent findings have emerged from this literature. First, psychotherapy, regardless of psychotherapeutic orientation, has been found to have a robust, positive effect on outcomes across a broad array of problem areas. The "average" psychotherapy client is better off than two-thirds of those who go untreated for similar problems, a figure as good as or better than the effectiveness of psychiatric medications for similar disorders. Where there are differences between theoretical approaches to therapy for particular problems, they tend to be quite modest. For instance, cognitive behavior therapy (CBT) has a slight edge over humanistic psychotherapy in the treatment of anxiety, presumably because CBT's more directive approach is one that anxious clients find supportive. In contrast, humanistic approaches have a therapeutic edge over CBT for relational problems because of the central importance of relationality in humanistic psychotherapy.
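As an aside for the quantitatively inclined: under the common assumption of roughly normal outcome distributions, the two-thirds figure above can be translated into a standardized mean difference (Cohen's d) of about 0.43, since

$$\Phi(d) = \tfrac{2}{3} \;\Longrightarrow\; d = \Phi^{-1}(0.667) \approx 0.43,$$

where $\Phi$ is the standard normal cumulative distribution function. This is a back-of-the-envelope reading of the percentile claim, not a figure reported in the studies themselves.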
Comprehensive, high-quality reviews of research findings give us reason for both skepticism and faith in science. That tension seems to be exactly what the scientific endeavor is all about.
Kevin Keenan, PhD, MSP Faculty
Dr. Keenan is coeditor, with David Cain and Shawn Rubin, of the book Humanistic Psychotherapies: Handbook of Research and Practice, to be published in December 2015 by the American Psychological Association.