IN 1962 Jacob Cohen, a psychologist at New York University, reported an alarming finding. He had analysed 70 articles published in the Journal of Abnormal and Social Psychology and calculated their statistical “power” (a mathematical estimate of the probability that an experiment would detect a real effect). He reckoned that most of the studies he looked at would have detected the effects their authors were looking for only about 20% of the time. Yet nearly all of them reported significant results. Scientists, Cohen surmised, were not reporting their unsuccessful research. No surprise there, perhaps. But his finding also suggested that some of the papers were reporting false positives: in other words, noise that looked like data. He urged researchers to boost the power of their studies by increasing the number of subjects in their experiments.
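For readers curious about the arithmetic behind Cohen's complaint, the sketch below (in Python, using illustrative numbers rather than Cohen's own data) approximates the power of a simple two-group comparison and shows how low power, combined with selective reporting, lets a sizeable share of "significant" findings be false positives. The effect size, sample size and the assumed share of true hypotheses are all assumptions chosen for illustration.

```python
from math import sqrt
from statistics import NormalDist

def power_two_sample(effect_size, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample z-test.

    effect_size is Cohen's d (difference in group means divided by the
    pooled standard deviation); n_per_group is the number of subjects
    in each arm of the study.
    """
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)          # critical value, e.g. 1.96 at 5%
    noncentrality = effect_size * sqrt(n_per_group / 2)
    # Probability of landing beyond the critical value in either tail
    return z.cdf(noncentrality - z_crit) + z.cdf(-noncentrality - z_crit)

# Illustrative (assumed) numbers: a smallish true effect (d = 0.3)
# studied with 25 subjects per group yields power of roughly 20%.
power = power_two_sample(effect_size=0.3, n_per_group=25)
print(f"Power: {power:.0%}")

# If, say, half of the hypotheses researchers test are actually true,
# the share of significant results that are mere noise follows directly.
alpha, prior_true = 0.05, 0.5
true_positives = prior_true * power
false_positives = (1 - prior_true) * alpha
print(f"Share of significant results that are false positives: "
      f"{false_positives / (true_positives + false_positives):.0%}")
```

Doubling or quadrupling n_per_group in the sketch raises the computed power sharply, which is the remedy Cohen urged: bigger samples make real effects far more likely to show up, and shrink the proportion of published "discoveries" that are noise.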
Wind the clock forward half a century and little has changed. In a new paper, this time published in Royal Society Open Science, two researchers, Paul Smaldino of the University of California, Merced, and Richard McElreath at the Max Planck Institute for Evolutionary Anthropology, in Leipzig, show that published studies in psychology, neuroscience and medicine are little more powerful than in Cohen’s day.