Data sharing not only facilitates the process of psychology research, it is also a reflection of rigour

Originally Published on LSE Impact Blog

Guest blog for LSE Impact Blog by Jelte Wicherts

Data sharing in scientific psychology has not been particularly successful, and it is high time we changed that. Before I describe how we hope to get rid of the secrecy surrounding research data in my field of psychology, let me explain how I got here.

Ten years ago, I was working on a PhD thesis for which I wanted to submit old and new IQ data from different cohorts to novel psychometric techniques. These techniques would enable us to better understand the remarkable gain in average IQ that has been documented in most western countries over the course of the 20th century. These new analyses had the potential to shed light on why more recent cohorts of test-takers (say, folks born between 1975 and 1985) scored so much higher on IQ tests than older cohorts (say, baby boomers). In search of useful data from the millions of yearly IQ test administrations, I started emailing psychologists in academia and the test-publishing world. Although my colleagues acknowledged that there must indeed be a lot of data around, most of their data were not in any useful format or could no longer be found.

Raven Matrix – IQ Test Image credit: Life of Riley [CC-BY-SA-3.0]

After a persistent search I ended up with five useful data sets that had been lying in a nearly destroyed file cabinet at a library in Belgium, had been saved on old floppy disks, had been reported as data tables in published articles, or were held in a data repository (because the data collection had been financed by the Dutch Ministry of Education under the assumption that the data might be valuable for future use). Our analyses of the available data showed that the gain in average IQ was in part an artefact of testing. So a handful of psychologists back in the 1960s kept their data, which decades later helped show that their rebellious generation was not simply less intelligent than generations X (born 1960-1980) or Y (born 1980-2000). The moral of the story is that we often do not know all the potential uses of the data that we as researchers collect. Keeping the data and sharing them can be scientifically valuable.

Psychologists used to be quite bad at storing and sharing their research data. In 2005, we contacted 141 corresponding authors of papers that had been published in top-ranked psychology journals. In our study, we found that 73% of corresponding authors of papers published 18 months earlier were unable or unwilling to share data upon request. They did so despite having signed a form stipulating that they would share their data for verification purposes. In a follow-up study, we found that researchers who failed to share data upon request reported more statistical errors and less convincing results than researchers who did share data. In other words, sharing data is a reflection of rigour. We in psychology have learned a hard lesson when it comes to researchers being secretive about their data. Secrecy opens the door to all sorts of problems, including biases in the reporting of results, honest errors, and even fraud.

So it is high time that we as psychologists become more open with our research data. For this reason, an international group of researchers from different subfields in psychology and I have established an open access journal, published by Ubiquity Press, that rewards the sharing of psychological research data. The journal is called Journal of Open Psychology Data and in it we publish so-called data papers. Data papers are relatively short, peer-reviewed papers that describe an interesting and potentially useful data set that has been shared with the scientific community in an established data repository.

We aim to publish three types of data papers. First, a data paper in the Journal of Open Psychology Data may describe the data from research that has been published in traditional journals. For instance, our first data paper reports raw data from a study of cohort differences in personality factors over the period 1982-2007, which was previously published in the Journal of Personality and Social Psychology. Second, we seek data papers from unpublished work that may be of interest for future research because the data can be submitted to alternative analyses or enriched later. Third, we publish papers that report data from replications of earlier findings in the psychological literature. Such replication efforts are often hard to publish in traditional journals, but we consider them important for progress. So the Journal of Open Psychology Data helps psychologists find interesting data sets that can be used for educational purposes (learning statistical analyses), included in meta-analyses, or submitted to secondary analyses. More information can be found in the editorial I wrote for the first issue.

To remain open access, the Journal of Open Psychology Data charges authors a publication fee. But our article processing charge is currently only 25 pounds or 30 euros. So if you are a psychologist and have data lying around that will probably vanish as soon as your new computer arrives, don't hesitate. Put your data in a safe place in a data repository, download the paper template, describe how the data were collected (and/or where they were previously reported), explain why they are interesting, and submit your data paper to the Journal of Open Psychology Data. We will quickly review your data paper, determine whether the data are interesting and useful, and check the documentation and accessibility of the data. If all is well, you can add a data paper to your resume and let the scientific community know that you have shared your interesting data. Who knows how your data may be used in the future.

This post is part of a wider collection on Open Access Perspectives in the Humanities and Social Sciences (#HSSOA) and is cross-posted at SAGE Connection. We will be featuring new posts from the collection each day leading up to the Open Access Futures in the Humanities and Social Sciences conference on the 24th October, with a full electronic version to be made openly available then.