Documenting the Methods History
One of the problems we face in clinical research is the false-positive result that emerges from exploratory analyses involving a multitude of comparisons. With increasing computational power and the availability of existing data, we have the potential to examine data until we find intriguing results. Such work can inform future studies, support the development of hypotheses, or refine prior estimates. However, the potential weaknesses of the data may not be readily apparent to the reader unless the genesis of the results is properly disclosed. Was the research question fixed at the outset, or did it emerge from nonprotocolized analyses? Were the investigators surprised by the results, or did the findings confirm expectations they had from the outset?
Most articles do not detail the history of the methods. Studies that appear to have been developed in a forward progression from idea to analytic plan to results may actually have been constructed backward, from results derived from undirected analyses to analytic plan to idea. Without information about the history of a study and prior beliefs about the analyses that could be useful to frame the findings, readers may wonder whether the findings resulted from combing through many analyses to find the most impressive. If we could improve our taxonomy and refine our communication about how a study was developed, we could help readers determine whether a study should influence research and practice.
Clinical trials have long required a written protocol before patient enrollment can commence, a requirement that reflects awareness of the dangers posed by multiple comparisons: false-positive findings and the whittling away of statistical power. Journals require that trials share their approach through registration at http://www.clinicaltrials.gov and, in many cases, submit their protocols. The registration of observational studies remains an option rather than a requirement despite literature that has emphasized the need for more rigorous and transparent reporting.1
It is important to note that there are many facets of an observational study that are worth reporting in detail, as expressed in the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement.2 Missing from STROBE checklists, however, is information about the history of the methods, including whether the study protocol was set before the analyses.
Perhaps it is time to create a different standard. A new approach would involve considerable challenges. What would we require from our authors? How would we accommodate the inevitable modifications that occur even in studies that progress forward? Would we need to verify the claims of the authors? How would we standardize terminology to ensure that readers have a common understanding of the descriptions?
In pursuing this course, it would be important to emphasize that exploratory, post hoc studies that emerge from data mining have their place in medicine. This work should not be excluded from the literature, as it can serve to identify promising areas for further study.
Although a prespecified research protocol does not guarantee the validity of the subsequent findings, it can provide a different level of evidence. Further validation may be required for results obtained from both types of studies, but the backward approach would more often produce evidence that is not strong enough to influence care and would more appropriately be considered hypothesis generating and thought provoking. However, data alone cannot tell us the probability that a hypothesis is true; rather, data can tell us how strongly the evidence favors one hypothesis over another.
We hope that Circulation: Cardiovascular Quality and Outcomes can play a role in creating a better taxonomy for observational studies. We will solicit input from our readers and authors as we seek ways to best communicate the process by which studies are pursued. Should we develop requirements for disclosure about how a study is developed? Would this improve the ability of readers to interpret our studies? What information would be most useful? Would quantitative measures of evidence, such as Bayes Factors, provide more sensible and honest assessments?
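To illustrate the kind of quantitative measure raised in the question above, a Bayes Factor expresses how much more probable the observed data are under one hypothesis than under another (a minimal sketch; the notation here is generic and not drawn from any specific proposal):

```latex
\mathrm{BF}_{10} \;=\; \frac{P(\mathrm{data} \mid H_{1})}{P(\mathrm{data} \mid H_{0})}
```

A Bayes Factor of 10, for example, indicates that the data are 10 times more probable under the alternative hypothesis than under the null; unlike a P value, it measures relative support for competing hypotheses rather than the improbability of the data under one of them.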
The journal wishes to contribute to a new standard for the publication of studies with existing data. This standard will serve to increase the transparency of the research process and improve the interpretability of the results. Challenges will be inevitable, particularly regarding studies in which some components were prespecified and others evolved in response to the analysis. Nevertheless, we need to improve our communication of how studies are conducted and incorporate that information into our interpretation of their meaning. We do not wish to preclude important exploratory work, but rather to improve our understanding of the strength of the evidence and where it fits into our medical literature.
Sources of Funding
Dr Krumholz is supported by grant
Disclosures
Dr Krumholz is the recipient of a research grant from Medtronic, Inc. through Yale University and is chair of a cardiac scientific advisory board for UnitedHealth.
References
- 1. Rubin DB. The design versus the analysis of observational studies for causal effects: parallels with the design of randomized trials. Stat Med. 2007;26:20–36.
- 2. University of Bern. STROBE Statement: Strengthening the reporting of observational studies in epidemiology; 2009. http://www.strobe-statement.org/index.php?id=available-checklists.


