Editor's note: This article was originally published by RealClearScience on December 29th, 2018.
The journal Science, published by the American Association for the Advancement of Science, recently published a series of articles on “metaresearch.” That term refers to research on the mechanisms of scientific research. Metaresearch is especially important because it highlights some of the mechanisms of the irreproducibility crisis, which has led a wide variety of disciplines to produce shoddy research. The series in Science is valuable but limited. Science, unfortunately, pulled its punches. The journal failed to examine how government contributes to the problem of scientific irreproducibility. Moreover, the series skirts how Science itself has contributed to the problem of irreproducible research. Science’s readership doesn’t get a full sense of the depth of the crisis, or of the range of possible solutions.
Jennifer Couzin-Frankel began the series with a study of “journalology”—the examination of different scientific journals’ procedures, particularly with an eye to whether they exhibit publication bias, a preference for publishing positive results (exciting, newsworthy) over negative results (drab, dull). A growing body of research has revealed that scientists generally don’t submit negative results, leading to significant publication bias. Even when journals officially require researchers to pre-register their research—which should make negative results appear automatically—they don’t always enforce their rules. And the flood of new “predatory” journals more interested in profit than in peer review or preregistration has undone much of the progress that has been made. Journalological researchers have had some effect on how modern science works—but they still struggle to get funded, whether by government agencies or private foundations. The field has yet to establish itself fully.
Jop de Vrieze continued with a study of meta-analyses, which are attempts to systematically collate and analyze the data and results from a whole range of comparable primary studies. There has been an eleven-fold increase in meta-analyses since 2000. They’re cheap to do—a third are produced in China, where researchers are keen to conduct low-cost, high-prestige research. But they’re not a neutral tool: meta-analyses of drug effectiveness vary widely in their results, depending on whether they’re conducted by industry-funded researchers or by independent (but possibly ideological and activist) scientists. And the question of how meta-analyses should correct for publication bias raises highly complex statistical issues, which different researchers resolve in divergent ways. Meta-analysis requires its own standardized, preregistered research protocols.
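The basic machinery of pooling results across studies can be sketched in a few lines: combine each primary study’s effect size, weighting it by the inverse of its variance so that more precise studies count more. This is a minimal fixed-effect sketch for illustration only (the study numbers are invented); real meta-analyses typically use random-effects models and must also grapple with the publication-bias corrections discussed above.

```python
import math

def fixed_effect_pool(effects, std_errors):
    """Inverse-variance fixed-effect pooling of study effect sizes."""
    # Weight each study by the inverse of its variance (1 / SE^2),
    # so precise studies contribute more to the pooled estimate.
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    # Standard error of the pooled estimate.
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Three hypothetical studies: effect sizes and their standard errors.
pooled, se = fixed_effect_pool([0.2, 0.5, 0.3], [0.1, 0.2, 0.15])
```

Note that the pooled estimate sits closest to the most precise study’s effect (0.2, with the smallest standard error), which is exactly the property inverse-variance weighting is designed to produce.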
Erik Stokstad profiled a group of Dutch scientists, led by Jelte Wicherts and Michèle Nuijten, who specialize in using statistical software to detect statistical errors in published research. The results are disturbing: “When [the program] statcheck scanned 30,717 [psychology] papers published between 1985 and 2013, it found a ‘gross inconsistency’ in one out of eight.” Many psychologists railed against ‘methodological terrorism,’ but some psychology journals are also beginning to run statcheck on submitted papers. Others are pushing toward an “open data” assumption, which will allow easier checking for errors in data and techniques. Psychologists are becoming more open to using better statistics and data in their research—but they recognize that doing so may still hurt their chances for jobs and promotion.
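The underlying idea of this kind of checking can be illustrated with a simple sketch: recompute the p-value implied by a reported test statistic and flag a “gross inconsistency” when the recomputed value flips the significance verdict. The code below is a simplified illustration, not statcheck’s actual implementation (statcheck is an R package that also parses t, F, and χ² statistics); here we assume a two-tailed z-test, using only the standard library.

```python
import math

def two_tailed_p(z):
    """Two-tailed p-value for a standard-normal test statistic."""
    return math.erfc(abs(z) / math.sqrt(2))

def check(reported_z, reported_p, alpha=0.05, tol=0.0005):
    """Recompute p from the test statistic; flag mismatches.

    An inconsistency is a recomputed p that differs from the reported
    one beyond rounding tolerance; it is "gross" when the difference
    also changes the significance decision at the given alpha.
    """
    recomputed = two_tailed_p(reported_z)
    inconsistent = abs(recomputed - reported_p) > tol
    gross = inconsistent and ((recomputed < alpha) != (reported_p < alpha))
    return recomputed, inconsistent, gross

# z = 1.96 with reported p = .05 is internally consistent;
# z = 1.50 with reported p = .04 implies p ≈ .13, a gross inconsistency.
check(1.96, 0.05)
check(1.50, 0.04)
```

The second call is the case the researchers worry about: the reported p-value claims significance while the test statistic does not support it.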
Kai Kupferschmidt wrote about pre-registering research protocols—publishing in advance what you plan to study, what techniques you’ll use, and how much data you’ll collect. Preregistration helps avoid both publication bias and “HARKing,” Hypothesizing After the Results are Known. Preregistration is already standard for clinical trials; now psychologists have begun to adopt it as well. But there’s still reluctance: some scientists want to build their research “brand,” journals aren’t eager to publish negative results, and preregistration imposes another administrative burden on scientific research. Still, the number of preregistrations on the Open Science Framework (OSF) is doubling every year.
These articles aren’t bad—but they don’t give you a full sense of the scope of the irreproducibility crisis, or of the possible solutions. Couzin-Frankel’s article on journalology doesn’t mention that Science itself has contributed to publication bias, especially in its reporting on climate change. The articles generally focus on clinical trials and psychology, without acknowledging that the irreproducibility crisis affects a far wider range of fields—or that these reform methods should be applied to those other disciplines. Nor do they mention the role of groupthink—or the growing number of studies attempting to account for its effects.
Above all, these articles don’t properly account for the role of government. It isn’t just journals that promote publication bias, but also government granting agencies. The U.S. federal government is the largest single supporter of scientific research in the world; its bias toward funding positive results affects the entire world of scientific research. Nor do these articles consider whether governments should change their rules for judging the quality of the scientific research that informs regulation. These articles paint a picture of vast amounts of research gone wrong and of reform efforts that are only in their infancy. Governments should take account of what has gone wrong in modern science, and tighten their standards for what science they use to shape policy. Science may right itself eventually—but governments need to act now. They should reform their regulatory policy so that they don’t risk acting on the basis of junk science.
Science doesn’t seem eager to pursue that corollary. But the articles it publishes point in that direction.