- Peter Doshi, associate editor, The BMJ
One group’s efforts to monitor misreporting of outcomes has irritated several medical journals, which argue that the differences discovered are not clinically important. So how seriously should we be taking it? Peter Doshi reports
Glimpses inside SmithKline Beecham’s secret clinical trials programme for the antidepressant paroxetine began in the early 2000s. Amid a growing storm over the safety of selective serotonin reuptake inhibitors for children, a leaked memo revealed by the BBC’s Panorama programme1 depicted a company trying to manage the unfavourable results of two important trials. “It would be commercially unacceptable to include a statement that efficacy had not been demonstrated, as this would undermine the profile of paroxetine,” the memo read. A reviewer for the US Food and Drug Administration considered both trials “failed.”2
But then there was the public face of the data. One of the two trials, Study 329, was published in the peer reviewed literature.3 The manufacturer told its sales representatives that the “landmark study … demonstrates REMARKABLE efficacy and safety.”4
Study 329 has become a classic example of what is known as outcome reporting bias, in which trial authors selectively present trial results, leading, almost inevitably, to a rosier picture than would have emerged had the trial been reported according to the original protocol. The data showed no difference between paroxetine and placebo for all eight of the originally specified outcomes of interest, and an increase in harms.5 6 7 Yet the 2001 trial publication3 reported on four outcomes not specified in the protocol (all of which had statistically significant differences) and concluded that paroxetine was “generally well tolerated and effective.” Put simply, the goalposts set when the trial commenced had moved by the time the trial was reported.
Some may have suspected that Study 329 was an anomaly. …