The Replication Paradox

Guest blog for The Replication Network by Michèle Nuijten

Lately, there has been a lot of attention to the excess of false positive and exaggerated findings in the published scientific literature. Many different fields report an impossibly high rate of statistically significant findings, and studies of meta-analyses in various fields have shown overwhelming evidence of overestimated effect sizes.

The suggested solution for this excess of false positive findings and exaggerated effect size estimates in the literature is replication. The idea is that if we just keep replicating published studies, the truth will eventually come to light.

This intuition also showed up in a small survey I conducted among psychology students, social scientists, and quantitative psychologists. I offered them different hypothetical combinations of large and small published studies that were identical except for their sample size – they could be considered replications of each other. I asked them how they would evaluate this information if their goal was to obtain the most accurate estimate of a certain effect. In almost all of the situations I offered, the answer was nearly unanimous: combine the information from both studies.

This makes a lot of sense: the more information the better, right? Unfortunately this is not necessarily the case.

The problem is that the respondents forgot to take into account the influence of publication bias: statistically significant results have a higher probability of being published than non-significant results. And only publishing significant effects leads to overestimated effect sizes in the literature.

But wasn’t this exactly the reason to take replication studies into account? To solve this problem and obtain more accurate effect sizes?

Unfortunately, there is evidence from multi-study papers and meta-analyses that replication studies suffer from the same publication bias as original studies (see below for references). This means that both types of studies in the literature contain overestimated effect sizes.

The implication of this is that combining the results of an original study with those of a replication study could actually worsen the effect size estimate. This works as follows.

Bias in published effect size estimates depends on two factors: publication bias and power (the probability that you will reject the null hypothesis, given that it is false). Studies with low power (usually due to a small sample size) contain a lot of noise, and their effect size estimates will be all over the place, ranging from severe underestimations to severe overestimations.

This in itself is not necessarily a problem: if you took the average of all these estimates (e.g., in a meta-analysis), you would end up with an accurate estimate of the effect. However, if because of publication bias only the significant studies are published, only the severe overestimations of the effect will end up in the literature. An average effect size calculated from these published estimates will therefore be an overestimation.

Studies with high power do not have this problem. Their effect size estimates are much more precise: they will be centered more closely on the true effect size. Even when there is publication bias, and only the significant (maybe slightly overestimated) effects are published, the distortion would not be as large as with underpowered, noisier studies.
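To make this concrete, here is a minimal simulation sketch in Python (not taken from the paper; the true effect of d = 0.3, the sample sizes, and the number of runs are all illustrative assumptions). It simulates many two-group studies, "publishes" only those with a significant positive result, and compares the average published effect size for a low-powered and a high-powered design.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
TRUE_D = 0.3        # hypothetical true standardized effect (illustrative)
N_SIMS = 10_000     # simulated studies per condition (illustrative)

def mean_published_effect(n_per_group):
    """Simulate two-group studies and average only the 'published' ones,
    i.e. those with a significant positive result (publication bias)."""
    published = []
    for _ in range(N_SIMS):
        control = rng.normal(0.0, 1.0, n_per_group)
        treatment = rng.normal(TRUE_D, 1.0, n_per_group)
        t, p = stats.ttest_ind(treatment, control)
        if t > 0 and p < 0.05:
            # with unit SDs, the mean difference roughly estimates d
            published.append(treatment.mean() - control.mean())
    return np.mean(published), len(published) / N_SIMS

for n in (20, 250):  # low-powered vs. high-powered sample size per group
    d_pub, share = mean_published_effect(n)
    print(f"n = {n:>3} per group: mean published d = {d_pub:.2f} "
          f"(true d = {TRUE_D}), {share:.0%} of studies 'published'")
```

With these illustrative settings, the low-powered studies that survive the significance filter overestimate the effect roughly twofold, while the high-powered studies land close to the assumed true value.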

Now consider again a replication scenario such as the one mentioned above. In the literature you come across a large original study and a smaller replication study. Assuming that both studies are affected by publication bias, the original study will probably report a somewhat overestimated effect size. However, since the replication study is smaller and has lower power, it will report an effect size that is even more overestimated. Combining the information of these two studies then basically comes down to adding bias to the effect size estimate of the original study. In this scenario you would obtain a more accurate estimate of the effect by evaluating only the original study and ignoring the replication study.

In short: even though a replication increases the precision of the effect size estimate (a smaller confidence interval around it), it adds bias when its sample size is smaller than that of the original study – but only if there is publication bias and the power is not high enough.
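The same kind of sketch (again with made-up numbers, and a simple fixed-effect, inverse-variance combination standing in for "combining the information of both studies") illustrates the paradox: when both the large original and the smaller replication have been selected for significance, the pooled estimate ends up further from the assumed true effect than the original study alone.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
TRUE_D = 0.3                 # hypothetical true effect (illustrative)
N_ORIG, N_REP = 150, 30      # per-group sizes: large original, smaller replication
N_SIMS = 5_000

def published_study(n_per_group):
    """Keep drawing studies until one is significant and positive,
    mimicking a literature that only contains 'successful' results."""
    while True:
        control = rng.normal(0.0, 1.0, n_per_group)
        treatment = rng.normal(TRUE_D, 1.0, n_per_group)
        t, p = stats.ttest_ind(treatment, control)
        if t > 0 and p < 0.05:
            d_hat = treatment.mean() - control.mean()
            var = 2.0 / n_per_group          # approximate sampling variance of d
            return d_hat, var

original_only, combined = [], []
for _ in range(N_SIMS):
    d1, v1 = published_study(N_ORIG)         # large original study
    d2, v2 = published_study(N_REP)          # smaller replication
    w1, w2 = 1.0 / v1, 1.0 / v2              # fixed-effect (inverse-variance) weights
    original_only.append(d1)
    combined.append((w1 * d1 + w2 * d2) / (w1 + w2))

print(f"true d:                          {TRUE_D}")
print(f"original study alone:            {np.mean(original_only):.2f}")
print(f"original + smaller replication:  {np.mean(combined):.2f}")
```

Under these assumptions the smaller replication carries the larger bias, so giving it any weight pulls the pooled estimate upward – precisely the trade-off described above.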

There are two main solutions to the problem of overestimated effect sizes.

The first solution would be to eliminate publication bias; if there is no selective publishing of significant effects, the whole “replication paradox” would disappear. One way to eliminate publication bias is to preregister your research plan and hypotheses before collecting the data. Some journals will even review this preregistration, and can give you an “in principle acceptance” – completely independent of the results. In this case, studies with significant and non-significant findings have an equal probability of being published, and published effect sizes will not be systematically overestimated.  Another way is for journals to commit to publishing replication results independent of whether the results are significant.  Indeed, this is the stated replication policy of some journals already.

The second solution is to only evaluate (and perform) studies with high power. If a study has high power, the effect size will be estimated more precisely and will be less affected by publication bias. Roughly speaking: if you discard all studies with low power, your effect size estimate will be more accurate.

A good example of an initiative that implements both solutions is the recently published Reproducibility Project, in which 100 psychological effects were replicated in preregistered, high-powered studies. Initiatives such as this one eliminate systematic bias in the literature and advance the scientific system immensely.

However, until preregistered, highly powered replications are the new standard, researchers who want to play it safe should change their intuition from “the more information, the higher the accuracy” to “the more power, the higher the accuracy.”

This blog is based on the paper “The replication paradox: Combining studies can decrease accuracy of effect size estimates” by Nuijten, van Assen, Veldkamp, and Wicherts (2015), Review of General Psychology, 19(2), 172-182.

Literature on How Replications Suffer From Publication Bias

  • Francis, G. (2012). Publication bias and the failure of replication in experimental psychology. Psychonomic Bulletin & Review, 19(6), 975-991.
  • Ferguson, C. J., & Brannick, M. T. (2012). Publication bias in psychological science: Prevalence, methods for identifying and controlling, and implications for the use of meta-analyses. Psychological Methods, 17, 120-128.

Data sharing not only helps facilitate the process of psychology research, it is also a reflection of rigour

Originally Published on LSE Impact Blog

Guest blog for LSE Impact Blog by Jelte Wicherts

Data sharing in scientific psychology has not been particularly successful and it is high time we change that situation. Before I explain how we hope to get rid of the secrecy surrounding research data in my field of psychology, let me explain how I got here.

 

Ten years ago, I was working on a PhD thesis for which I wanted to submit old and new IQ data from different cohorts to novel psychometric techniques. These techniques would enable us to better understand the remarkable gain in average IQ that has been documented in most western countries over the course of the 20th century. These new analyses had the potential to shed light on why it is that more recent cohorts of test-takers (say, folks born between 1975 and 1985) scored so much higher on IQ tests than older cohorts (say, baby boomers). In search of useful data from the millions of yearly IQ test administrations, I started emailing psychologists in academia and the test-publishing world. Although my colleagues acknowledged that indeed there must be a lot of data around, most of their data were not in any useful format or could no longer be found.

Raven Matrix – IQ Test Image credit: Life of Riley [CC-BY-SA-3.0]

After a persistent search I ended up getting five useful data sets: they had been lying in a nearly destroyed file cabinet at some library in Belgium, were saved on old floppy disks, were reported as data tables in published articles, or sat in a data repository (because the data collection had been financed by the Dutch Ministry of Education under the assumption that these data would perhaps be valuable for future use). Our analyses of the available data showed that the gain in average IQ was in part an artefact of testing. So a handful of psychologists back in the 1960s kept their data, which decades later helped show that their rebellious generation was not simply less intelligent than Generation X (born 1960-1980) or Generation Y (born 1980-2000). The moral of the story is that often we do not know about all potential uses of the data that we as researchers collect. Keeping the data and sharing them can be scientifically valuable.

 

Psychologists used to be quite bad at storing and sharing their research data. In 2005, we contacted 141 corresponding authors of papers that had been published in top-ranked psychology journals. In our study, we found that 73% of the corresponding authors of papers published 18 months earlier were unable or unwilling to share data upon request. They did so despite the fact that they had signed a form stipulating that they would share data for verification purposes. In a follow-up study, we found that researchers who failed to share data upon request reported more statistical errors and less convincing results than researchers who did share data. In other words, sharing data is a reflection of rigour. We in psychology have learned a hard lesson when it comes to researchers being secretive about their data. Secrecy enables all sorts of problems, including biases in the reporting of results, honest errors, and even fraud.

So it is high time that we as psychologists become more open with our research data. For this reason, an international group of researchers from different subfields in psychology and I have established an open access journal, published by Ubiquity Press, that rewards the sharing of psychological research data. The journal is called Journal of Open Psychology Data and in it we publish so-called data papers. Data papers are relatively short, peer-reviewed papers that describe an interesting and potentially useful data set that has been shared with the scientific community in an established data repository.

We aim to publish three types of data papers. First, a data paper in the Journal of Open Psychology Data may describe the data from research that has been published in traditional journals. For instance, our first data paper reports raw data from a study of cohort differences in personality factors over the period 1982-2007, which was previously published in the Journal of Personality and Social Psychology. Second, we seek data papers from unpublished work that may be of interest for future work because they can be submitted to alternative analyses or can be enriched later. Third, we publish papers that report data from replications of earlier findings in the psychological literature. Such replication efforts are often hard to publish in traditional journals, but we consider them to be important for progress. So the Journal of Open Psychology Data helps psychologists to find interesting data sets that can be used for educational purposes (learning statistical analyses), data sets that can be included in meta-analyses, or data sets that can be submitted to secondary analyses. More information can be found in the editorial I wrote for the first issue.

In order to remain open access, the Journal of Open Psychology Data charges authors a publication fee. But our article processing charge is currently only 25 pounds or 30 euros.  So if you are a psychologist and have data lying around that will probably vanish as soon as your new computer arrives, don’t hesitate. Put your data in a safe place in a data repository, download the paper template, describe how the data were collected (and/or where they were previously reported), explain why they are interesting, and submit your data paper to the Journal of Open Psychology Data. We will quickly review your data paper, determine whether the data are interesting and useful, and check the documentation and accessibility of the data. If all is well, you can add a data paper to your resume and let the scientific community know that you have shared your interesting data. Who knows how your data may be used in the future.

This post is part of a wider collection on Open Access Perspectives in the Humanities and Social Sciences (#HSSOA) and is cross-posted at SAGE Connection. We will be featuring new posts from the collection each day leading up to the Open Access Futures in the Humanities and Social Sciences conference on the 24th October, with a full electronic version to be made openly available then.