Quasi-cumulative science - where we think we are building on the foundations of prior work, only to find that the earlier work is not solid - is damaging to scientific progress. There is no single cause; rather, an unfortunate combination of factors is at work. Five of these, together with possible solutions, are discussed:
- Publication bias. One solution is Registered Reports, a publication format that decouples the decision to publish from knowledge of the research results.
- Citation bias. This creates the impression of widespread agreement on a topic, because literature that does not support the prevailing view is forgotten. Systematic reviews are one method that attempts to redress the imbalance in citations, but they are not infallible. The problem is unlikely to change unless we raise awareness of the serious consequences of citation bias and train researchers to seek out and evaluate evidence contrary to their position.
- P-hacking, the selective reporting of only positive findings from within a study, often after many different analysis options have been explored. I discuss one form of p-hacking that might be termed 'moving the goalposts', where studies that appear to replicate an initial finding in fact fail to do so, and instead present results on a related question. Registered Reports also offer a solution to p-hacking.
- Low statistical power, often due to small sample sizes. Training scientists to explore simulated data is one way to counteract underpowered studies; a minimal simulation sketch follows this list.
- Obsession of funders and journals with novelty. This 'top down' influence produces a distorted incentive structure for scientists, who avoid the slow, careful science that builds up knowledge on a topic, and instead feel they must overhype results and jump from one hot topic to another. The solution is for funders and institutions to change how they evaluate researchers, rewarding those who adopt open, reproducible methods. The Hong Kong Principles for assessing researchers are introduced as one such approach.
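As an illustration of the kind of simulated-data exercise mentioned under low statistical power, the sketch below repeatedly generates fake two-group data with a known effect and counts how often a standard t-test detects it. The effect size (d = 0.3), the alpha level, the sample sizes, and the use of Python with NumPy/SciPy are illustrative assumptions on my part, not details from the text.

```python
# Minimal power-by-simulation sketch: with a small true effect, small samples
# detect it only rarely, which is exactly what "underpowered" means.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def simulated_power(n_per_group, effect_size=0.3, alpha=0.05, n_sims=10_000):
    """Proportion of simulated experiments in which a two-sample t-test is significant."""
    hits = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, 1.0, n_per_group)
        treatment = rng.normal(effect_size, 1.0, n_per_group)  # true effect, d = 0.3
        _, p = stats.ttest_ind(treatment, control)
        hits += p < alpha
    return hits / n_sims

for n in (20, 50, 100, 200):
    print(f"n = {n:3d} per group: estimated power ~ {simulated_power(n):.2f}")
```

Running an exercise like this makes the cost of small samples concrete: with 20 participants per group and a modest effect, most simulated studies come back non-significant even though the effect is real.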