Blog by Michèle Nuijten for Tilburg University on the occasion of World Digital Preservation Day.
We have all seen headlines about scientific findings that sounded too good to be true. Think about the headline “a glass of red wine is the equivalent of an hour in the gym”. A headline like this may make you skeptical right away, and rightly so. In this particular case, it turned out that several journalists got carried away, and the researchers never made such claims.
However, sometimes the exaggeration of an effect already takes place in the scientific article itself. Indeed, increasing evidence shows that many published results might be overestimated, or even false.
This excess of overestimated results is probably caused by a complex interaction of different factors, but there are several clear leads on what the most important problems might be.
The first problem is publication bias: studies that “find something” are more likely to be published than studies that don’t find anything. You can imagine that if we only present the success stories, the overall picture becomes distorted and overly optimistic.
This publication bias may lead to the second problem: flexible data analysis. Scientists may start behaving strategically to increase their chances of publishing their findings: “if I leave out this participant, or if I try a different analysis, maybe my data will show me the result I was looking for.” This can even happen completely unconsciously: in hindsight, all of these decisions may seem perfectly justified.
The third problem that can distort scientific results is statistical errors. Unfortunately, statistical errors in publications appear to be widespread (see, e.g., the prevalence of errors in psychology).
The fact that we make mistakes and have human biases doesn’t make us bad scientists. However, it does mean that we have to come up with ways to avoid or detect these mistakes, and that we need to protect ourselves from our own biases.
I believe that the best way of doing that is through open science.
One of the most straightforward examples of open science is sharing data. If raw data are available, you can see exactly what the conclusions in an article are based on. This way, any errors or questionable analytical choices can be corrected or discussed. Maybe the data can even be used to answer new research questions.
Sharing data may seem as simple as posting it on your own personal website, but this has proven to be rather unstable: URLs die, people move institutions, or they may leave academia altogether. A much better way to share data is via certified data repositories. That way, your data are safely stored for the long run.
Open data is only one example of open science. Another option is to publicly preregister your research plans before you actually start doing the research. You can also make your materials and analysis code open, publish open access, or write public peer reviews.
Of course, it is not always possible to make everything open in every research project. Practical issues such as privacy can restrict how open you can be. However, you might be surprised by how many other things you can make open, even if you can’t share your data.
I would like to encourage you to think about ways to make your own research more open. Maybe you can preregister your plans, maybe you can publish open access, maybe you can share your data. No matter how small the change is, opening things up will make our science better, one step at a time.
This blog has been posted on the website of Tilburg University: https://www.tilburguniversity.edu/current/news/blog-michele-nuijten-open-science/