What about meta-arts?

This blog post was written by Ben Kretzler. Ben is a PhD student in our meta-research group and started his PhD in September 2024. During his PhD, he will be working on Jelte’s Vici project: Examining the Variation in Causal Effects in Psychology.


According to Wikipedia, we use the term "metascience" for the application of scientific methodology to study science itself. But there's perhaps another reason to talk about meta-science: within the arts and sciences, it seems that primarily the latter have a substantial number of researchers dedicated to scrutinizing research practices and assessing the confidence we can have in our knowledge. 

To explain this, we could put forward several reasons: First, theories from the sciences often yield statements that are easier to falsify than those we can derive from theories from the arts.¹ Overconfidence in, or flaws of, theories from the sciences might therefore be easier to detect than those of theories from the arts. Second, and relatedly, meta-research movements in fields like medicine and psychology often arise as reactions to "crises of confidence": when results don't replicate or scientific misconduct is uncovered (Nelson et al., 2018; Rennie & Flanagin, 2018). Since evaluating how well the arts fulfill their function can be more challenging, such confidence crises may simply occur less often, perhaps reducing the pressure for self-evaluation.²

Still, even if these reasons help explain why meta-research in the arts has not reached the same intensity as its counterparts in the sciences, they do not justify the near absence of such meta-research in the present. In this post, we will argue that meta-research in the arts is not only possible but necessary, exemplified by cases from quantitative history and cultural studies.¹


Quick Detour: What Is the Current State of Meta-Arts? 

As pointed out above, the meta-researcher-to-researcher ratio in the arts seems to be far below that in psychology or medicine. Consequently, evidence regarding publication bias, selective reporting, or analysis heterogeneity is sparse. Still, there are some individual projects that (directly or indirectly) addressed the replicability and robustness of research in the arts: 

  • The X-Phi Replicability Project, which tested the reproducibility of experimental philosophy (Cova et al., 2018) by conducting high-powered replications of two samples of popular and randomly drawn studies. It yielded a replication rate of 78.4% for original studies presenting significant results (as a comparison: the replication rate for psychological research from 2008 seems to be around 37%; Open Science Collaboration, 2015). 

  • A part of the June 2024 issue of Zygon was devoted to a direct and a conceptual replication of John Hedley Brooke's account of whether religion helped or hindered the rise of modern science, as explored in his book Science and Religion. While the replicators noted a few minor inconsistencies in how Brooke presented the theses of some other researchers, and interpreted some original and newly added source material differently than Brooke did, they acknowledged that his work was of high quality and did not challenge his general account. Thus, although this historical work and its underlying sources bore some reliability issues and researcher degrees of freedom, these did not necessarily undermine the production of a robust and credible account of the relationship between religion and early science. 

  • Finally, a project assessing the robustness reproducibility of publications in the American Economic Review (Campbell et al., 2024) also reanalyzed some cliometric papers (e.g., Angelucci et al., 2022; Ashraf & Galor, 2013; Berger et al., 2013). At the very least, these papers were not excluded from the general observation that the analyses conducted by the original authors tended to yield larger effect sizes and were more often significant than those conducted by the replication teams. 

The latter notion is reinforced by several research controversies over the past two decades, where commentaries analyzing the same research question in different ways contradicted the original findings (e.g., Albouy, 2012, cf. Acemoglu et al., 2012; Guinnane & Hoffman, 2022, cf. Voigtländer & Voth, 2022). Thus, there seems to be some analysis heterogeneity in some individual cases. 

What should we conclude from this short overview? On the one hand, it demonstrates that different research designs and analyses can induce interpretation-changing differences in results and that some publication bias and selective reporting are going on in quantitative historical or cultural research. On the other hand, these observations do little more than refute universal claims that such problems do not exist in the arts and, due to their anecdotal character, do not allow for any statements about the extent of such heterogeneity or bias. 


Researcher Degrees of Freedom in Cliometrics and Cultural Research 

Adding to our (weak) conclusion that researcher degrees of freedom can also affect topics associated with the arts, we will introduce two degrees of freedom specific to cliometric and cross-cultural research (and not included in enumerations of researcher degrees of freedom in other disciplines, such as psychology; cf. Wicherts et al., 2016): the selection of (growth) control variables and a reference year. 

(Growth) Control Variables 

Apparently, cross-cultural researchers like growth and GDP regressions (e.g., Acemoglu et al., 2005; Berggren et al., 2011; Gorodnichenko & Roland, 2017). However, they can hardly ever assume that the relationship between their predictor of interest and growth or GDP is unaffected by confounders, so a set of control variables has to be determined. Defining such a set is not easy (for instance because many controls, such as education and income, are highly correlated with one another), and the resulting choices differ widely: some papers control for geographical and religious factors (e.g., Gorodnichenko & Roland, 2017), others exclude these factors and instead focus on economic variables such as inflation rates, openness to foreign trade, or government expenditures (e.g., Berggren et al., 2011), and still others add historical variables such as the year of independence or war history (Acemoglu et al., 2005). 

Thus, researchers can choose from a wide range of reasonable combinations of control variables. Does this affect the outcomes? To test this, we ran multiple analyses of the relationship between general government debt and growth rates across a sample of countries worldwide.³ Working with a set of nine widely used control variables,⁴ we ran one analysis with all controls included and nine additional analyses in each of which one control was removed. The distribution of the p-values is displayed below. 

First, the black bar shows the p-value for the analysis using the complete set of control variables. Here, the relationship between debt and growth rates was insignificant (p = .458). Yet, when one of the nine control variables is removed, the results can change drastically, as demonstrated by the grey bars: two analyses (one without life expectancy and the other without inflation rates) found that higher debt levels were highly significantly associated with lower growth, with p-values of .003 and < .001, respectively. A third analysis (this time without investment levels) also detected a significant negative relationship (p = .023).⁵ All remaining analyses were, however, not even close to significant. 
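A leave-one-out check like this takes only a few lines of code. Below is a minimal sketch in Python with statsmodels; the data frame `df`, the column names, and the control set are illustrative placeholders rather than the data behind the figure above.

```python
# Sketch of the leave-one-out control-variable check described above.
# `df` is assumed to be a pandas DataFrame with a growth-rate outcome, a debt
# predictor, and nine controls; all names here are illustrative, not the original data.
import statsmodels.formula.api as smf

controls = ["gdp_per_capita", "pop_growth", "investment_share", "gov_share",
            "trade_openness", "education", "inflation", "life_expectancy",
            "labor_force_growth"]

def debt_p_value(df, dropped=None):
    """Regress growth on debt plus all controls except `dropped`; return the p-value for debt."""
    used = [c for c in controls if c != dropped]
    fit = smf.ols("growth ~ debt + " + " + ".join(used), data=df).fit()
    return fit.pvalues["debt"]

# One specification with the full control set, plus nine leave-one-out variants.
p_values = {"full model": debt_p_value(df)}
p_values.update({f"without {c}": debt_p_value(df, dropped=c) for c in controls})
for spec, p in p_values.items():
    print(f"{spec}: p = {p:.3f}")
```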

Why do p-values change when we include or exclude different control variables? Generally, there are two main reasons for this: 

  • Control variables might reduce noise in the outcome variable: By including control variables, we might explain some of the variation in the outcome (here: growth rates). This reduces the "noise" in its values, so it is easier to detect the effect of the predictor of interest (here: debt). 

  • They might, however, also account for relationships between variables: Control variables may be related to both the predictor of interest and the outcome. By including these controls, we isolate the unique contribution of debt to growth rates. Without them, we might mistakenly attribute some of the control variable's effect to debt. 

The second case is particularly interesting because it changes how we (should) interpret the regression results. For example, if we do not control for inflation rates, the observed relationship between debt and growth might not be due to debt itself reducing growth. Instead, it could reflect the fact that higher debt levels are often associated with high inflation, which in turn hampers growth. In this case, failing to control for inflation could lead us to a misleading conclusion about the causal relationship between debt and growth. However, not many papers reporting growth regressions seem to discuss how their composition of control variables affects the outcomes; instead, it appears more common to choose a particular set based on previous research (e.g., a popular paper by Barro, 1991) that might be more or less appropriate for different regressions. 
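To make the second mechanism concrete, here is a toy simulation (with made-up numbers, not the blog's data) in which inflation drives both debt and lower growth; omitting the inflation control then makes debt look significantly harmful even though its true effect is zero.

```python
# Toy simulation of the confounding case: inflation raises debt and lowers growth,
# so leaving inflation out of the regression makes debt appear to hurt growth.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
inflation = rng.normal(size=n)
debt = 0.7 * inflation + rng.normal(size=n)             # high inflation goes with high debt
growth = -0.5 * inflation + rng.normal(size=n)          # growth depends on inflation, not on debt

# Without the control: debt picks up inflation's effect and looks significant.
without = sm.OLS(growth, sm.add_constant(debt)).fit()
# With the control: debt's coefficient shrinks toward its true value of zero.
with_ctrl = sm.OLS(growth, sm.add_constant(np.column_stack([debt, inflation]))).fit()

print("p-value for debt, inflation omitted: ", round(float(without.pvalues[1]), 4))
print("p-value for debt, inflation included:", round(float(with_ctrl.pvalues[1]), 4))
```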

This quick example demonstrates that the set of control variables heavily influences whether a predictor for economic growth will be significant or not.⁶ It also shows that, given the lack of consensus about which variables to control for, researchers have a fair chance of generating positive results by playing with the controls. 

Year 

Another standard research design in cliometrics or cultural research is the cross-section, where we score countries on a predictor and then examine whether this predictor is significantly related to an outcome: Does an individualistic (vs. collectivistic) culture relate to higher productivity (Gorodnichenko & Roland, 2017)? Is a country's genetic diversity related to its GDP per capita (Ashraf & Galor, 2013)? For such comparisons, we must select a reference year: does an individualistic culture relate to higher productivity in 2000, 2010, or 2020? 

To demonstrate that the year matters, we set up a quick example analysis: Is indulgence vs. restraint (i.e., the degree to which relatively free gratification of basic human needs is restricted by, for instance, social norms; Hofstede, 1980) associated with GDP per capita?⁷ The graphic below shows the p-values for the years between 2005 and 2022: 

The analyses reveal a consistent positive relationship between indulgence and GDP per capita across all years. However, this relationship is significant only between 2005 and 2012 (and marginally significant until 2015) and becomes insignificant in later years. This shift could reflect short-term developments: for instance, some highly restrained countries, like China and Pakistan, experienced relatively high economic growth during the study period, while the economies of more indulgent countries, such as Argentina and Brazil, struggled. Alternatively, the fluctuations might indicate that the relationship between indulgence/restraint and economic performance has weakened over time. 

In either case, relying on data from a single year seems problematic for this kind of analysis. A snapshot from one year could be heavily influenced by events specific to that period that determine the answer we receive to our broader research question. It would be more meaningful to examine how this relationship evolves over time. By considering variations across multiple years, researchers can not only reduce the risk of false positives (or negatives) but also uncover long-term trends that might inform theory development (see, e.g., Maseland, 2021). Such an approach could help identify persistent patterns or shifts in the relationship, providing valuable insights into the dynamics between cultural traits and economic performance. 
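For completeness, here is what the year-by-year check sketched above could look like in code. Again, the data frame `panel`, the column names, and the control set are illustrative assumptions rather than the data behind the figure.

```python
# Minimal sketch of the year-by-year cross-section check described above.
# `panel` is assumed to be a pandas DataFrame with one row per country-year;
# the column names and the control set are illustrative, not the original data.
import pandas as pd
import statsmodels.formula.api as smf

controls = ["latitude", "landlocked", "share_protestant", "share_catholic", "share_muslim"]

def p_value_for_year(panel, year):
    """Cross-sectional regression of log GDP per capita on indulgence for one reference year."""
    cross_section = panel[panel["year"] == year]
    formula = "log_gdp_pc ~ indulgence + " + " + ".join(controls)
    fit = smf.ols(formula, data=cross_section).fit()
    return fit.pvalues["indulgence"]

# Re-run the same specification for every candidate reference year.
p_by_year = pd.Series({year: p_value_for_year(panel, year) for year in range(2005, 2023)})
print(p_by_year.round(3))
```

Plotting or tabulating `p_by_year` then makes the dependence of the conclusion on the chosen reference year immediately visible.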


Conclusion 

This blog post aimed to establish two fundamental notions: First, quantitative analysis in the arts (e.g., history, cultural research) also involves researcher degrees of freedom, which can lead to meaningful variations in results. Second, these degrees of freedom can be strategically utilized to generate significant findings. 

Together, these two notions could lead to an inflated number of false-positive results. Indeed, the limited evidence we have so far suggests the existence of at least some publication bias and/or selective reporting in the quantitative humanities. Finally, while research in the humanities may not share the same topics or degrees of freedom as fields like psychology or medicine, the approaches that meta-researchers have developed in recent years (e.g., multiverse analyses, p-curves) could provide a good starting point for addressing publication bias and selective reporting in the arts as well. 


References 

Acemoglu, D., Johnson, S., & Robinson, J. (2005). The Rise of Europe: Atlantic Trade, Institutional Change, and Economic Growth. American Economic Review, 95(3), 546-579. https://doi.org/10.1257/0002828054201305  

Acemoglu, D., Johnson, S., & Robinson, J. A. (2012). The Colonial Origins of Comparative Development: An Empirical Investigation: Reply. American Economic Review, 102(6), 3077-3110. https://doi.org/10.1257/aer.102.6.3077  

Albouy, D. Y. (2012). The Colonial Origins of Comparative Development: An Empirical Investigation: Comment. American Economic Review, 102(6), 3059-3076. https://doi.org/10.1257/aer.102.6.3059  

Angelucci, C., Meraglia, S., & Voigtländer, N. (2022). How Merchant Towns Shaped Parliaments: From the Norman Conquest of England to the Great Reform Act. American Economic Review, 112(10), 3441-3487. https://doi.org/10.1257/aer.20200885  

Ashraf, Q., & Galor, O. (2013). The “Out of Africa” Hypothesis, Human Genetic Diversity, and Comparative Economic Development. American Economic Review, 103(1), 1-46. https://doi.org/10.1257/aer.103.1.1  

Astington, J. W. (1999). The language of intention: Three ways of doing it. In P. D. Zelazo, J. W. Astington, & D. R. Olson (Eds.), Developing theories of intention. Erlbaum.  

Bargh, J. A., Chen, M., & Burrows, L. (1996). Automaticity of social behavior: Direct effects of trait construct and stereotype activation on action. Journal of Personality and Social Psychology, 71(2), 230-244. https://doi.org/10.1037/0022-3514.71.2.230  

Barro, R. J. (1991). Economic growth in a cross section of countries. The Quarterly Journal of Economics, 106(2), 407. https://doi.org/10.2307/2937943 

Berger, D., Easterly, W., Nunn, N., & Satyanath, S. (2013). Commercial Imperialism? Political Influence and Trade During the Cold War. American Economic Review, 103(2), 863-896. https://doi.org/10.1257/aer.103.2.863  

Berggren, N., Bergh, A., & Bjørnskov, C. (2011). The growth effects of institutional instability. Journal of Institutional Economics, 8(2), 187-224. https://doi.org/10.1017/s1744137411000488

Bratman, M. E. (1987). Intention, plans, and practical reason. MIT Press.  

Campbell, D., Brodeur, A., Dreber, A., Johannesson, M., Kopecky, J., Lusher, L., & Tsoy, N. (2024). The Robustness Reproducibility of the American Economic Review (I4R Discussion Paper No. 124). https://www.econstor.eu/bitstream/10419/295222/1/I4R-DP124.pdf

Cova, F., Strickland, B., Abatista, A., Allard, A., Andow, J., Attie, M., Beebe, J., Berniūnas, R., Boudesseul, J., Colombo, M., Cushman, F., Diaz, R., N’Djaye Nikolai van Dongen, N., Dranseika, V., Earp, B. D., Torres, A. G., Hannikainen, I., Hernández-Conde, J. V., Hu, W.,…Zhou, X. (2018). Estimating the Reproducibility of Experimental Philosophy. Review of Philosophy and Psychology, 12(1), 9-44. https://doi.org/10.1007/s13164-018-0400-9  

De Rijcke, S., & Penders, B. (2018). Resist calls for replicability in the humanities. Nature, 560(7716), 29. https://doi.org/10.1038/d41586-018-05845-z 

Gorodnichenko, Y., & Roland, G. (2017). Culture, Institutions, and the Wealth of Nations. The Review of Economics and Statistics, 99(3), 402-416. https://doi.org/10.1162/REST_a_00599  

Guinnane, T. W., & Hoffman, P. (2022). Medieval Anti-Semitism, Weimar Social Capital, and the Rise of the Nazi Party: A Reconsideration. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4286968  

Hofstede, G. (1980). Culture's Consequences: International Differences in Work-Related Values. Sage Publications.  

Knobe, J. (2003). Intentional action in folk psychology: An experimental investigation. Philosophical Psychology, 16(2), 309-324. https://doi.org/10.1080/09515080307771  

Latour, B. (1991). We have never been modern. Harvard University Press.  

Maseland, R. (2021). Contingent determinants. Journal of Development Economics, 151. https://doi.org/10.1016/j.jdeveco.2021.102654  

Nelson, L. D., Simmons, J., & Simonsohn, U. (2018). Psychology's Renaissance. Annual Review of Psychology, 69, 511-534. https://doi.org/10.1146/annurev-psych-122216-011836

Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716. https://doi.org/10.1126/science.aac4716

Peels, R., Van Den Brink, G., Van Eyghen, H., & Pear, R. (2024). Introduction: Replicating John Hedley Brooke’s work on the history of science and religion. Zygon, 59(2). https://doi.org/10.16995/zygon.11255 

Rennie, D., & Flanagin, A. (2018). Three Decades of Peer Review Congresses. JAMA, 319(4), 350-353. https://doi.org/10.1001/jama.2017.20606  

Voigtländer, N., & Voth, H.-J. (2022). Response to Guinnane and Hoffman: Medieval Anti-Semitism, Weimar Social Capital, and the Rise of the Nazi Party: A Reconsideration. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4316007  

Wicherts, J. M., Veldkamp, C. L., Augusteijn, H. E., Bakker, M., van Aert, R. C., & van Assen, M. A. (2016). Degrees of Freedom in Planning, Running, Analyzing, and Reporting Psychological Studies: A Checklist to Avoid p-Hacking. Front Psychol, 7, 1832. https://doi.org/10.3389/fpsyg.2016.01832  


Footnotes

1. For example, the theory of unconscious priming was originally corroborated by verifying the hypothesis "People walk slower when they are shown words they associate with the elderly" (Bargh et al., 1996). Compared to that, it is very hard to establish (inter-subjective) falsification of, say, the central hypothesis of Bruno Latour's We Have Never Been Modern (Latour, 1991): "What we call the modern world is based on an ill-defined separation between nature and society." 

2. Also, an interesting account by de Rijcke and Penders (2018) suggests that the arts are more about the search for meaning than about chasing truth and perform "evaluation and assessment according to different quality criteria — namely, those that are based on cultural relationships and not statistical realities." In this case, the problem of overconfidence in or flaws of the theoretical state of the art(s) is irrelevant, and any efforts to detect such issues are redundant. Still, as Peels et al. (2024) note, the arts are not entirely off the hook when it comes to truth-seeking, as they also include research questions such as whether European colonies that were poorer at the end of the Middle Ages developed better than richer colonies because they were not subject to extractive institutions (Acemoglu et al., 2002). Therefore, this blog post at least concerns questions of this type, without explicitly including or excluding any other research question from the arts. 

3. Growth rates were calculated using the data from the Maddison Project Database 2023 (Bolt & van Zanden, 2023). Data about general government debt came from the Global Debt Database of the International Monetary Fund (Mbaye et al., 2018). The sources of the control variables were the Penn World Tables 10.01 (Feenstra et al., 2015), the World Development Indicators (World Bank, 2024), ILOSTAT (International Labour Organization, 2024), and Barro and Lee (2013). We used data from 2005 to 2019. 

4. The control variables were GDP per capita (for convergence), population growth, investment levels relative to GDP, government share relative to GDP, the sum of imports and exports relative to GDP, education level, inflation level, life expectancy, and labor force growth. 

5. The coefficients indicate that a 25% increase in general government debt (similar to the increase in the United States during the first year of the COVID-19 pandemic) decreases yearly growth rates by 0.3% to 0.4%. 

6. Interestingly, the effect size estimates are rather close to one another, ranging from 0.0% to 0.4% for all (significant and insignificant) analyses. The underlying multiverse variability is 0.014 for Cohen’s f².  

7. GDP data came from the Maddison Project Database 2023 (Bolt & van Zanden, 2023), and the indulgence vs. restraint scores came from the Hofstede (1980) data. We performed a linear regression for each year, controlling for a standard set of geographical and religious variables already used by previous studies on the relationship between culture and economic performance (e.g., Gorodnichenko & Roland, 2017). 

Improving the Quality and Specificity of Preregistration


Marjan Bakker, writing for the Center for Open Science:

Preregistration, which is specifying a research plan in advance of the study, is seen as one of the most important ways to improve the credibility of research findings. With preregistration, a clear distinction between planned and unplanned analyses can be made, thereby eliminating the possibility of making data-contingent decisions (Nosek, Ebersole, DeHaven, & Mellor, 2018). Over the last years, preregistration is gaining more and more popularity. For example, the number of preregistrations at OSF has approximately doubled yearly from 38 in 2012 to 36,675 by the end of 2019 (http://osf.io/registries).

Read more…

Why I Think Open Peer Review Benefits PhD Students

This blog post was part of an initiative by Nature Human Behavior called 'Publish or Perish' where early career researchers give their views on the pressure to publish in academia. The original blog post can be found here.

Doing scientific research is my dream job. Unfortunately, it’s not at all certain that I can keep doing research after getting my PhD degree. Research jobs are scarce and every year the academic job market is flooded with freshly minted PhDs. In practice, this means that only the most prolific PhD students will land a job. In other words, you either ‘publish or perish’. In this blog post I will argue that the culture of ‘publish or perish’, although not a problem in theory, is a problem in practice because of the unfairness of the peer review system. In my view, opening up this system would make it fairer for all researchers, but especially for PhD students.


Based on discussions with colleagues as well as my own experiences I’ve become aware that the peer review system can be random and biased. This intuition is supported by scientific studies of peer review that find that the interrater reliability of reviewers is low, which means that an editor’s (often arbitrary) choice of reviewers plays a big part in whether your manuscript will be accepted (Bornmann, Mutz, & Daniel, 2010; Cicchetti, 1991; Cole, Cole, & Simon, 1981; Jackson, Srinivasan, Rea, Fletcher, & Kravitz, 2011). In addition, studies have found that reviewers are more likely to value manuscripts including positive results (Mahoney, 1977; Emerson et al., 2010) and results consistent with their theoretical viewpoints (Mahoney, 1977). These structural biases as well as the random element make the peer review system unfair as it is unable to consistently distinguish good quality research from bad quality research. This is especially concerning for PhD students who only have a few years to accrue publications to get funding for an academic job. One unfair negative review could nip their career in the bud.

In my view, the solution to the unfairness of the peer review system is straightforward: Switch from a closed peer review system to an open peer review system. Here, I define open peer review as a peer review system in which authors and reviewers are aware of each other’s identity, and review reports are published alongside the relevant article. Ross-Hellauer (2017) found that these two aspects together account for more than 95% of the mentions of ‘open peer review’ in the recent literature. Note that open peer review may also refer to a situation where the wider community can comment on a manuscript, but I do not use that definition here. Below, I list the potential benefits and downsides of switching to an open peer review system.

Potential benefits of open peer review for PhD students

1) In an open peer review system reviewers’ names are linked to their public reviews, which increases accountability

This accountability may cause reviewers to be more conscientious and thorough when reviewing a manuscript. Indeed, a transparent peer review process has been linked to higher-quality reviews in several studies (Kowalczuk et al., 2015; Mehmani, 2016; Walsh, Rooney, Appleby, & Wilkinson, 2000; Wicherts, 2016), although a sequence of studies by Van Rooyen and colleagues (Van Rooyen, Delamothe, & Evans, 2010; Van Rooyen, Godlee, Evans, Smith, & Black, 1999) failed to find any difference in quality between open and closed reviews. Higher-quality peer reviews are especially important for PhD students because they are at a stage where feedback on their work is crucial for their development. Moreover, high-quality reviews are fairer for PhD students, as such reviews can distinguish more accurately between good and bad research (and thus good and bad PhD students).

2) If the identities of reviewers are made public, PhD students can get credit for the reviews they conduct

McDowell, Knutsen, Graham, Oelker, & Lijek (2019) found that many PhD students do not find their names on peer review reports submitted to journal editorial staff even though they had co-written the report with a more senior researcher. In such instances of “ghostwriting” the PhD student usually does most of the work while the senior researcher is the only one who profits by gaining appreciation from the editor. An open review system would provide public credit to reviewing PhD students (for example by making reviews citable; Hendricks & Lin, 2017) but would also provide less tangible rewards, like senior researchers acknowledging their skills as high-quality scientists (see Tweet 1 below).

Tweet 1

3) The fact that reviews are made open may also create a motivation for reviewers to be more friendly and constructive in their reviews.

Of course, this would greatly benefit PhD students because given their status they are likely influenced most severely by scathing or harsh reviews. Indeed, some research shows that reviews are potentially more courteous and constructive when they are open (Bravo, Grimaldo, López-Iñesta, Mehmani, & Squazzoni, 2019; Walsh, Rooney, Appleby, & Wilkinson, 2000).

4) Open peer review may lessen the risk of PhD students publishing in predatory journals

In a situation with open peer review, journals with no or substandard peer review will be identified quickly and will become known as low-quality journals. Predatory journals can no longer hide behind the closed peer review system and will eventually disappear. This makes life easier for PhD students, as it is often difficult to navigate the publishing landscape if you are inexperienced with it.

5) Open peer review can help to prevent a practice called citation manipulation (Baas & Fennell, 2019), whereby a reviewer suggests large numbers of citations of their own work to be added to a submitted manuscript

These are often unwarranted citations, but researchers (especially PhD students) are often coerced into adding them because they desperately want to publish their paper. Of course, only researchers who have a reasonable number of citable papers under their belt can engage in citation manipulation, making it harder for PhD students to compete on the academic job market. Indeed, a prominent case of citation manipulation spurred a group of early career researchers to write an open letter to voice their concern. Open peer review would clearly help here, as reviewers thinking of engaging in this unethical practice would think twice if their name and review were public.

6) Open peer review provides PhD students with insight in the mechanics of science

For example, it allows PhD students to see how other papers have developed over time or to see that landmark papers have been rejected multiple times before being published. Such insights into the peer review process are very valuable for PhD students as they can get more comfortable with the peer review system and can see that rejections are the norm rather than the exception.

7) Open peer review (or streamlined review, see Collabra, 2019) could save PhD students (and other researchers) time

Once a manuscript is rejected it is usually sent out to another journal to undergo a new round of review. The arguments used by the first and second sets of reviewers are likely similar, because the first set of reviews was done behind closed doors and authors often change little between submissions. It is estimated that 15 million hours are spent every year restating arguments while reviewing rejected papers (The AJE Team, 2019). In open peer review, researchers can build on previous reviews and see the development of the paper, which can free up many hours for valuable research. Of course, not all of the wasted review time is accounted for by PhD students, but because they likely take longer than the average of 8.5 hours per review (Ware, 2008), an open peer review system would be especially time-saving for them.

Potential downsides of open peer review for PhD students

1) The main argument put forward against open peer review is that PhD students who write negative reviews may frustrate other researchers, who could then retaliate. For example, vindictive researchers could provide negative reviews of the PhD student’s future work or could speak badly about them to their colleagues during a conference or in personal e-mails. This is plausible, but it is unclear whether a blind review system would prevent such practices, as anonymity is by no means guaranteed. Many authors at least think they are able to correctly identify their reviewers (see Tweets 2 and 3), and a review found that masking reviewers’ identities was only successful about half of the time (Snodgrass, 2006). In any case, open peer review at least makes situations of power abuse easier to identify. 

Tweet 2

Tweet 3

2) Whether PhD students will be retaliated against or not, a fear of retaliation does exist in the academic community (see Tweets 4 and 5). This fear could cause PhD students to shy away from criticizing senior researchers in reviews, or could even cause them to reject review requests for work authored by senior researchers. The first scenario would cause suboptimal work by senior researchers to be published more often, reinforcing the academic status quo and decreasing the quality of the scientific literature. The second scenario would prevent PhD students from gaining valuable review experience and would slow down the scientific process. The second scenario seems unlikely, though, in light of findings by Bravo et al. (2019) and Ross-Hellauer, Deppe, & Schmidt (2017) that more junior scholars are more willing to engage in open peer review than more senior scholars.

Tweet 4

Tweet 5

3) Power dynamics can also play a problematic role when the reviewer is a senior researcher and the manuscript’s author is a PhD student. When the manuscript involves findings that run counter to the senior researcher’s self-interest, they may decide to write a condemning review to discourage the PhD student from pursuing the work further (see Tweet 6). However, this can also happen in a system of closed peer review. At least in open peer review, unfairly harsh and power-abusive reviews can be identified and followed up on. Although there is currently no system for reprimanding power abuse in peer reviews, Bastian (2018) argues that there are ways to do this effectively. For example, we could explicitly label power abuse in peer review as professional misconduct or even harassment in the relevant codes of conduct. 

Tweet 6

4) In my view, the most problematic downside of open peer review (as I have defined it) is that all kinds of biases could creep into the peer review system. For example, it could be the case that papers from PhD students are rejected more often because PhD students do not have enough prestige or because PhD students more often come up with ideas that challenge the status quo in the literature. And indeed, studies have shown that open peer review may be associated with disproportionate rejections of researchers with low prestige, like PhD students (Seeber & Bacchelli, 2017; Tomkins, Zhang, & Heavlin, 2017). These findings are worrying and should be taken seriously. Importantly, open peer review should not be a goal in itself but should only be implemented when the benefits outweigh the costs. In this case, the benefits of unmasking the identities of authors (e.g., less hassle with masking your manuscripts) are marginal while the potential costs (discrimination against low prestige researchers) are likely high. An open peer review system where the identities of authors are masked therefore seems like the best solution.

Conclusion

My hope is that I won’t be the one to perish, but the simple fact is that there’s not enough funding available to accommodate every PhD student aspiring to a job in academia. That does not need to be a problem, as a little academic competition is fine. After all, it only seems fair that the best of the best are tasked with expanding our scientific knowledge. However, the best of the best are only selected as long as the peer review system is fair. Currently, that does not seem to be the case. 

In this blog post I have therefore argued for an open peer review system. Implementing this system across the board could increase the quality and tone of peer reviews, could provide PhD students with credit for their reviews, could root out predatory journals, could prevent citation manipulation, could provide PhD students with insight into the mechanics of science, and could lessen the peer review burden for PhD students. Even though the arguments against open peer review should be taken seriously (for example by masking the identities of authors) I am convinced open peer review will create a fairer system. And, as you can see below, the European Journal of Neuroscience, one of the journals that already practices open peer review, wholeheartedly agrees.

Excerpt from the summary report of the European Journal of Neuroscience about their new open peer review system. Retrieved from https://www.wiley.com/network/researchers/being-a-peer-reviewer/transparent-review-at-the-european-journal-of-neuroscience-experiences-one-year-on

References

  • Baas, J., & Fennell, C. (2019, May). When peer reviewers go rogue-Estimated prevalence of citation manipulation by reviewers based on the citation patterns of 69,000 reviewers. SSRN Working Paper. Retrieved from https://ssrn.com/abstract=3339568.

  • Bastian, H. (2018). Signing critical peer reviews & the fear of retaliation: What should we do? https://blogs.plos.org/absolutely-maybe/2018/03/22/signing-critical-peer-reviews-the-fear-of-retaliation-what-should-we-do.

  • Bornmann, L., Mutz, R., & Daniel, H. D. (2010). A reliability-generalization study of journal peer reviews: A multilevel meta-analysis of inter-rater reliability and its determinants. PloS ONE, 5(12), e14331.

  • Bravo, G., Grimaldo, F., López-Iñesta, E., Mehmani, B., & Squazzoni, F. (2019). The effect of publishing peer review reports on referee behavior in five scholarly journals. Nature Communications, 10(1), 322.

  • Cicchetti, D. V. (1991). The reliability of peer review for manuscript and grant submissions: A cross-disciplinary investigation. Behavioral and Brain Sciences, 14(1), 119-135.

  • Cole, S., Cole, J. R., & Simon, G. A. (1981). Chance and consensus in peer review. Science, 214(4523), 881-886.

  • Collabra (2019). Editorial Policies. Retrieved from https://www.collabra.org/about/editorialpolicies/#streamlined-review.

  • Emerson, G. B., Warme, W. J., Wolf, F. M., Heckman, J. D., Brand, R. A., & Leopold, S. S. (2010). Testing for the presence of positive-outcome bias in peer review: a randomized controlled trial. Archives of Internal Medicine, 170(21), 1934-1939.

  • Hendricks, G., & Lin, J. (2017). Making peer reviews citable, discoverable, and creditable. Retrieved from https://www.crossref.org/blog/making-peer-reviews-citable-discoverable-and-creditable.

  • Jackson, J. L., Srinivasan, M., Rea, J., Fletcher, K. E., & Kravitz, R. L. (2011). The validity of peer review in a general medicine journal. PLoS ONE, 6(7), e22475.

  • Kowalczuk, M. K., Dudbridge, F., Nanda, S., Harriman, S. L., Patel, J., & Moylan, E. C. (2015). Retrospective analysis of the quality of reports by author-suggested and non-author-suggested reviewers in journals operating on open or single-blind peer review models. BMJ Open, 5(9), e008707.

  • Mahoney, M. J. (1977). Publication prejudices: An experimental study of confirmatory bias in the peer review system. Cognitive Therapy and Research, 1(2), 161-175.

  • McDowell, G. S., Knutsen, J., Graham, J., Oelker, S. K., & Lijek, R. S. (2019). Co-reviewing and ghostwriting by early career researchers in the peer review of manuscripts. BioRxiv, 617373.

  • Mehmani, B. (2016). Is open peer review the way forward? Retrieved from https://www.elsevier.com/reviewers-update/story/innovation-in-publishing/is-open-peer-review-the-way-forward.

  • Ross-Hellauer, T. (2017). What is open peer review? A systematic review. F1000Research, 6. 10.12688/f1000research.11369.2

  • Ross-Hellauer, T., Deppe, A., & Schmidt, B. (2017). Survey on open peer review: Attitudes and experience amongst editors, authors and reviewers. PLoS ONE, 12(12), e0189311.

  • Seeber, M., & Bacchelli, A. (2017). Does single blind peer review hinder newcomers? Scientometrics, 113(1), 567-585.

  • Snodgrass, R. (2006). Single-versus double-blind reviewing: an analysis of the literature. ACM Sigmod Record, 35(3), 8-21.

  • The AJE Team (2019). Peer Review: How We Found 15 Million Hours of Lost Time. Retrieved from https://www.aje.com/arc/peer-review-process-15-million-hours-lost-time.

  • Tomkins, A., Zhang, M., & Heavlin, W. D. (2017). Reviewer bias in single-versus double-blind peer review. Proceedings of the National Academy of Sciences, 114(48), 12708-12713.

  • Van Rooyen, S., Delamothe, T., & Evans, S. J. (2010). Effect on peer review of telling reviewers that their signed reviews might be posted on the web: Randomised controlled trial. BMJ, 341, c5729.

  • Van Rooyen, S., Godlee, F., Evans, S., Smith, R., & Black, N. (1999). Effect of blinding and unmasking on the quality of peer review. Journal of General Internal Medicine, 14(10), 622-624.

  • Walsh, E., Rooney, M., Appleby, L., & Wilkinson, G. (2000). Open peer review: A randomised controlled trial. The British Journal of Psychiatry, 176(1), 47-51.

  • Ware, M. (2008). Peer review in scholarly journals: Perspective of the scholarly community–Results from an international study. Information Services & Use, 28(2), 109-112.

  • Wicherts, J. M. (2016). Peer review quality and transparency of the peer-review process in open access and subscription journals. PLoS ONE, 11(1), e0147913.

A Recap of the Tilburg Meta-Research Day

On Friday November 22, 2019, the Meta-Research Center at Tilburg University organized the Tilburg Meta-Research Day. Around 90 interested researchers attended this day that involved three plenary lectures, by John Ioannidis (who received an honorary doctorate from Tilburg University a day earlier), Ana Marušić, and Sarah de Rijcke, and seven parallel sessions on meta-research. 

Below you can find the links to the video footage of the three plenary sessions as well as summaries of all seven parallel sessions. The full program of the Tilburg Meta-Research Day can be found here. If you have any questions or comments, please contact us at metaresearch@uvt.nl.

Next up at Tilburg: The 1st European Conference on Meta-Research (July 2021).

 

Recordings of plenary talks:

Plenary talk by Sarah de Rijcke: Research on Research Evaluation: State-of-the-art and practical insights

Plenary talk by Ana Marušić: Reviewing Reviews: Research on the Review Process at Journals and Funding Agencies

Plenary talk by John Ioannidis: Meta-research in different scientific fields: What lessons can we learn from each other?

 

Parallel sessions (see below for summaries):

  • How can meta-research improve research evaluation? (Session leaders: Sarah de Rijcke & Rinze Benedictus)

  • How can we ensure the future of meta-research? (Session leader: Olmo van den Akker)

  • How can meta-research improve statistical practices? (Session leader: Judith ter Schure)

  • How can meta-research improve the Psychological Science Accelerator (PSA) and how can the PSA improve meta-research? (Session leaders: Peder Isager & Marcel van Assen)

  • How can meta-research improve peer review? (Session leader: Ana Marušić)

  • How can meta-research improve our understanding of the effects of incentives on the efficiency and reliability of science? (Session leaders: Sophia Crüwell, Leonid Tiokhin, & Maia Salholz-Hillel)

  • Many Paths: A new way to communicate, discuss, and conduct (meta-)research (Session leaders: Hans van Dijk & Esther Maassen)

How can meta-research improve research evaluation?

Session leaders: Sarah de Rijcke & Rinze Benedictus

The evaluation of research and researchers is currently based on biased metrics like the H-index and the journal impact factor. Several new initiatives have been launched in favor of indicators that correspond better to actual research quality. One of these initiatives is “Redefine excellence” from the University Medical Center (UMC) Utrecht. In this session, Rinze Benedictus briefly outlined the innovations that are implemented at the UMC Utrecht, after which Sarah de Rijcke led a discussion on how we can properly evaluate whether these innovations are effective.

The session stimulated a productive discussion about differences and similarities between the sociology of science and meta-research. Both fields could be termed ‘research on research’, but they appear to be rather distinct, using very different languages and concepts, and maybe even springing from different concerns. However, the feeling in the session was that a lot could be gained by more interaction between the fields.

Promising ways to build bridges seem to be:

  • Shared conferences to share concepts, language and maybe even research questions. A thematic approach (as opposed to method-based) to research questions could also facilitate interaction.

  • Identification of stakeholders: why are we doing research? For whom?

  • Shared teaching, e.g. through setting up a joint workshop by CWTS and Tilburg University/Department of Methodology


How can we ensure the future of meta-research?

Session leader: Olmo van den Akker

In this session, we set out to identify how we can ensure that the field of meta-research will remain vital in the upcoming years. Although the original focus of the session was to identify grant opportunities for meta-research projects, the discussion quickly developed into identifying journals that are open to submissions of meta-research studies. We aimed to draft a list of such journals, which can be found here. The list is far from exhaustive so please add journals if you can. The list mainly pertains to journals and journal collections specifically catered to meta-research, but there are of course also general journals that welcome meta-research submissions. In that sense, we are lucky as meta-researchers that our studies are often suitable for a wide variety of different journals.

That being said, one sentiment that arose in our discussion is that we are still missing a broad journal purely for meta-research papers. Such a journal would increase the visibility of our field, but there’s also the danger that researchers in substantive fields would engage less with meta-research studies published in such a journal (as opposed to journals in their substantive field). However, we concluded that this might not be so problematic, given that the majority of researchers use Google Scholar or other databases to look for papers and are less and less committed to reading papers from only a few favorite journals. Below you can find a list of things that we thought would be valuable to consider when launching a specific meta-research journal.

  • The journal should be broad and welcome submissions from all areas of meta-research (and even meta-meta-research), as long as the work critically studies the process and outcomes of science.

  • It would be good to have the journal link meta-research to the philosophy of science and science and technology studies (STS) as it appears that these related fields currently do not work together as much as they could.

  • It would be great if this journal would incorporate the latest meta-research on the effectiveness of journal policies into its own policies.

  • The journal could even be a trial ground for journal innovations. For example, the journal could try out whether a designated statistical reviewer for each submission would work (as is customary in medicine) or try out technological innovations facilitating SMART preregistration or multiverse analyses.

  • Initiating a Meta-Research Society with a dedicated conference could help fund the journal through society fees and conference fees.

  • The journal would do well to implement the CRediT authorship guidelines.

  • Preregistration, open data, open code, and open materials should be required, unless authors can convince the editorial team that it is not necessary in their case.

  • The editorial board should be paid, because a committed editorial board is crucial for the longevity and credibility of the journal. Preferably also reviewers would be paid, but this would require substantially more funding.

In the summer of 2021, Tilburg University will organize another Meta-Research Conference; this one will probably span two days and will focus more on the dissemination of meta-research studies. This conference could be a great place to launch a meta-research society and an accompanying meta-research journal.

How can meta-research improve statistics? 

Session leader: Judith ter Schure

How can meta-research improve statistics? The conclusion we reached is that it varies a lot per field whether scientists, when designing experiments, actually feel that they are contributing to an accumulating series of studies. In some fields there is awareness that the results of an experiment will someday end up in a meta-analysis with existing experiments, while in others scientists aim to design experiments that are as 'refreshingly new' as possible. Picture a table in which studies that could be meta-analyzed together share a column: the latter approach amounts to scientists mainly aiming to start new columns. This pre-experimental perspective might differ from the meta-analysis perspective, in which a systematic search and inclusion criteria might still force those experiments together in one column, even though they were not intended that way. This practice might erode trust in meta-analyses that try to synthesize effects from experiments that are too different.

The participants were very hesitant about enforcing rules (e.g., by funders or universities) on scientists' priority setting, such as whether a field needs more columns of 'refreshingly new' experiments, or needs replications of existing studies (extra rows) so that a field can settle on a specific topic in one column with a meta-analysis.

In terms of statistical consequences, sequential processes might still be at play if scientists designing experiments know about the results of other experiments that might end up in the same meta-analysis. Full exchangeability in meta-analysis means that no-one would have decided differently on the feasibility or design of an experiment had the results of others been different. If that assumption cannot be met, we should consider studies as part of series in our statistical meta-analysis, even without forcing this approach in the design phase.


 Meta-research and the Psychological Science Accelerator

Session leaders: Marcel van Assen & Peder Isager

The Psychological Science Accelerator (PSA) is a standing network of more than 500 laboratories that collect large-scale, non-WEIRD data for psychology studies (see https://psysciacc.org and https://osf.io/93qpg). The PSA is currently running six many-lab projects, and a number of proposed future projects are currently under review. Importantly, the PSA has established a meta-research working group that is currently examining both how the PSA can best interface with the meta-research community, and how meta-research can help bolster the quality of research projects conducted at the PSA (see https://docs.google.com/document/d/1D-NmvFE4qaC-dXAWQn16SBLsY9AABCrm8jDDy3-cD8w/edit?usp=sharing)

The session began with an overview of PSA’s organization, presented by Peder, and a discussion of the importance of many-lab studies, presented by Marcel. The slides for these presentations can be found at https://osf.io/wnyga. Afterwards, the majority of the session was devoted to discussing seven predetermined topics related to how the meta-research field and the PSA may learn from each other. Participants could either independently provide their suggestions on the seven topics in a google doc (https://bit.ly/2KIUHTW) or on paper. After about half an hour independently working on the topics, we discussed the participants’ suggestions in the remainder of the session.

The following conclusions can be drawn from our discussion:

  1. There are multiple ways in which the PSA could contribute to meta-research (e.g. by providing access to lab data and project-level data for conducted studies, and by allowing researchers to vary properties of research designs - like the measurement tools - to study effect size heterogeneity, and advance theory by examining boundary conditions). 

  2. There are multiple issues within the meta-research field that seem relevant to the PSA. Issues related to theory, measurement, and sample size determination were emphasized in particular. 

  3. Meta-researchers seem interested in contributing to the PSA research endeavor, but emphasize a lack of both general information about the PSA organization and specific information about what contributions could/would entail (e.g. what volunteer efforts one could contribute to and what studies would be relevant for the “piggy-back” submission policy). 

In summary, there seems to be much enthusiasm for the PSA within the meta-research community, and there are many overlapping interests between the PSA and the meta-research community. The points raised in this session will be communicated to the PSA network of researchers, with the hope that it will help facilitate more communication between the two research communities in the future. 

Other resources

PSA Data & methods committee bylaws: https://osf.io/p65qe/ 

Proposing a theory committee at the PSA (blog post): https://pedermisager.netlify.com/post/psa-theory-committee/

How can meta-research improve peer review?

Session leader: Ana Marušić

The session started with a discussion about research approaches to different types of peer review: single blind, double blind, consultative, results-free, open, and post-publication peer review. In post-publication peer review, the system pioneered by F1000Research, peer review is completely open to study, as all steps in the peer review process and editorial decision making are transparent and available in the public domain. This is not possible for other types of peer review, which remain elusive to researchers. Even in journals that publish the prepublication history of an article (like BioMed Central journals in biomedicine), information on the review process is available only for published articles, not for those that were rejected (which represent the majority of articles submitted to a journal). This is a serious hindrance to meta-research on journal peer review. 

The participants discussed the possibilities of gaining access to complete peer review data, as well as the recent activities of the COST Action PEERE (New Frontiers in Peer Review). PEERE brought together researchers and publishers to establish a database on peer review in journals from different disciplines in order to study all aspects of peer review.

The participants in the session also discussed differences in peer review across disciplines, as well as the need for qualitative studies on peer review. This methodological approach would be particularly important for understanding the preferences and habits of peer reviewers. Recent findings, both from surveys and from analyses of peer review in journals, show that researchers prefer double blind peer review when they are invited to review for a journal. A qualitative approach would be useful to understand this phenomenon and to build hypotheses for testing in a quantitative methodological approach.


How can meta-research improve research incentives?

Session leaders: Sophia Crüwell, Leo Tiokhin, & Maia Salholz-Hillel

Everyone’s talking about “the incentives,” but what does that mean? How can we move beyond our intuitions and towards a deeper understanding of how incentives affect the efficiency and reliability of science? The aim of this session was to explore the role of incentives in science, with the goal of facilitating a broader discussion of what important questions remain unanswered. 

We would like to invite both session participants and the wider community to contribute to the following library of resources on (meta)research relevant to incentives in science: https://www.zotero.org/groups/2421057/incentives_in_academic_science.

Some conclusions from our discussion:

  • We need to split incentives, stakeholders, behaviors, and outcomes.

    • Should we be focusing on predictors of career success rather than on incentives? However, career success is the outcome, which incentivizes the behaviors (e.g. publications).

  • We need to understand the parameters within which each incentive operates, i.e., a cost-benefit assessment towards outcomes. We could create a mapping or taxonomy to move the conversation forward. We could do this through an iterative, cross-stakeholder process that would then allow us to decide on next steps.

    • Rational choice theory

    • Delphi method: a cyclical process for circulating solutions between stakeholders

  • We should consider both intrinsic and extrinsic incentives.

    • Intrinsic incentives include what a person values, such as a desire to help patients, discover something about the world, etc. Extrinsic incentives include tenure and other career payoffs, prestige, etc. The external may crowd out internal incentives. 

    • Is it possible to separate them? For example, proximate/ultimate from biology. However, intrinsic vs. extrinsic may be a false dichotomy. Extrinsic incentives shape intrinsic ones. 

    • From a Mertonian sociology of science perspective, the drive to make a discovery is as strong as the drive to refute a discovery. But this doesn’t seem to be the case. So, what are researchers trying to optimize?

  • Why do incentives exist? They are used as a proxy to measure who is a good scientist, e.g., measured by papers, publications, and citations.

    • Why do people leave science?

  • Possible definitions of incentives

    • An ontology/framework of types of incentives & what questions you should ask about them; is it a positive or negative incentive?

    • Approach & avoidance approach 

    • Incentive can also be the purpose

    • Lots of theories of behavior change already exist; do we need to reinvent the wheel? 

    • Should we be talking about specific incentives?

    • Do incentivized behaviors have to be intentional?

    • Knowledge deficiency approach

Many Paths: A new way to conduct, discuss, and communicate (meta-)research

Session leaders: Hans van Dijk & Esther Maassen (in collaboration with Liberate Science)

Slides: https://github.com/emaassen/talks/blob/master/191122-mrd-many-paths.pdf

In Many Paths, we invite researchers from multiple disciplines to participate in a collaborative project to answer one research question, and we allow an emergent process to occur in the theory, data, results, and conclusion steps thereafter. Given that results are often path dependent, and *many paths* can be taken in a research process, we aim to examine which paths a research project initiates, prunes, and merges. The Many Paths model offers insight into how researchers from different disciplines approach and study the same question. We conduct and communicate the Many Paths research process in steps ("as-you-go"), instead of after the research is completed ("after-the-fact"). During our session, we also discussed the relationship of Many Paths to previous "Many" projects (i.e., the Reproducibility Project: Psychology, Many Labs, and Many Analysts).

The goal of the session was to introduce the Many Paths model and to gather feedback and suggestions on the project. Reactions to the proposed model and the new way of communicating were generally positive. Many Paths appears to provide the opportunity to gather a large amount of data from various disciplines in a transparent manner. It also allows for diversity and inclusivity. It would be interesting to find out whether and how researchers decide to collaborate across disciplines. However, they might be hesitant to do so because of the notable difference between what they are used to now (i.e., competition) and what they could do (i.e., collaboration). Whereas some people claimed a project such as Many Paths would provide clear answers to the proposed research question, others expressed concerns about the possibility of excessive fragmentation or disintegration of paths, and about the difficulty of combining information from various conclusions and paths. Another possible issue that was mentioned relates to quality assurance for the research output of Many Paths; a threshold should be in place to ensure contributions adhere to a certain quality. It should also be clear how the code of conduct would be enforced.

Meta-research at the Psychological Science Accelerator

On Friday, November 22, 2019, the Meta-Research Center at Tilburg University (https://metaresearch.nl/) organized its meta-research day. Around 90 researchers attended the meta-research day, which involved three plenary lectures, by John Ioannidis (who had received an honorary doctorate from Tilburg University a day earlier), Ana Marušić, and Sarah de Rijcke, and seven parallel sessions on meta-research. One of these sessions was titled How can meta-research improve the Psychological Science Accelerator (PSA) and how can the PSA improve meta-research?, and was led by Peder Isager and Marcel van Assen. Nineteen participants attended this session.

Meta-Research Center at ICPS Paris

From March 7 to 9, 2019, the International Convention of Psychological Science (ICPS) of the Association for Psychological Science (APS) was held in Paris, France. The Meta-Research Group Tilburg (co-)organized three sessions at the ICPS. Here is a short overview of the three sessions and their presentations, including links to the presentations.

Preregistration: Common Issues and Best Practices (Chair: Marjan Bakker)

Preregistration has been lauded as one of the key solutions to the many issues in the field of psychology (Nosek, Ebersole, DeHaven, & Mellor, 2018). For example, researchers have argued that preregistration tackles the problems of publication bias, reporting bias, and the opportunistic use of researcher degrees of freedom in data analysis (also called questionable research practices or p-hacking). However, skeptics have put forward a broad list of issues with preregistration. For example, they have argued that preregistration stifles researchers’ creativity, is not effective in the case of secondary data or qualitative data, and is only intended for confirmatory research. In this symposium we aimed to touch upon some of these issues.

Andrea Stoevenbelt, in her talk “Challenges to Preregistering a Direct Replication - Experiences from Conducting an RRR on Stereotype Threat”, described the challenges surrounding the preregistration of direct replication studies, drawing on her experience of conducting a Registered Replication Report of the seminal study by Johns, Schmader, and Martens (2005) on stereotype threat.

Olmo van den Akker, in his talk “The Do’s and Don'ts of Preregistering Secondary Data Analyses”, presented a tutorial for a template that can be used to preregister secondary data analyses. Preregistering a secondary data analysis is different from preregistering a primary data analysis mainly because researchers already have some knowledge about the data (through their own work using the data or through reading other people's work using the data). Olmo's take-home message from this talk was: "Specify your prior knowledge of the data set from your own previous use of the data and from other researchers' previous use of the data, preferably for each author separately."

In all, this symposium touched upon many of the issues that have been raised about preregistration and hopefully encouraged researchers from a wide range of fields to give preregistration a try.

Issues with Meta-Analysis: Bias, Heterogeneity, Reproducibility (Chair: Jelte Wicherts)

The popularity of meta-analysis has increased over the last decades, as reflected by the rapid rise in the relative number of published meta-analyses. One question for meta-research is what we learn from all these meta-analyses: about a certain research topic, about systematic biases, about meta-analytic outcomes, or about the quality of coding. All talks in this symposium addressed these meta-questions about meta-analysis.

Jelte Wicherts, in his talk “Effect Sizes, Power, and Biases in Intelligence Research: A Meta-Meta-Analysis”, presented the results of a meta-meta-analysis estimating the average effect size, median power, and evidence of bias (publication bias, decline effect, early-extremes effect, citation bias) in the field of intelligence research.

Anton Olsson Collentine presented on “Limited evidence for widespread heterogeneity in psychology”. He examined the heterogeneity in all meta-analyses of Many Labs studies and registered multi-lab replication studies, both of which are presumably unaffected by publication and other biases. This research is important because many researchers stress the potential effect of moderators when trying to explain the failure of replication studies.

Esther Maassen, in her talk “Reproducibility of Psychological Meta-analyses”, systematically assessed the prevalence of reporting errors and inaccuracy of computations within meta-analyses. She documented whether coding errors affected meta-analytic effect sizes and heterogeneity estimates, as well as how issues related to heterogeneity, outlying primary studies, and signs of publication bias were dealt with.

Meta-analysis: Informative Tools (Chair: Marcel van Assen)

Meta-analysis is a statistical technique that combines effect sizes from independent primary studies on the same topic, and it is now seen as the “gold standard” for synthesizing and summarizing the results of multiple primary studies. The main research objectives of a meta-analysis are (i) estimating the average effect, (ii) assessing the heterogeneity of the true effect size, and, if the true effect size differs across studies, (iii) incorporating moderator variables in the meta-analysis to explain this heterogeneity. Many different tools, visual (e.g., the funnel plot) or purely statistical (e.g., techniques to estimate heterogeneity or adjust for publication bias), have been developed to reach these objectives.
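To make these three objectives concrete, here is a minimal sketch in R using the metafor package; the small data frame and the moderator ("year") are invented purely for illustration and are not taken from any of the talks below.

```r
# Minimal sketch of the three meta-analytic objectives, using the metafor package.
# The data frame and its values (yi = observed effect sizes, vi = sampling
# variances, year = a hypothetical moderator) are made up for illustration.
library(metafor)

dat <- data.frame(
  yi   = c(0.30, 0.12, 0.45, 0.05, 0.25),  # observed effect sizes (e.g., Hedges' g)
  vi   = c(0.02, 0.03, 0.05, 0.01, 0.04),  # sampling variances
  year = c(1995, 2001, 2005, 2010, 2015)   # hypothetical moderator
)

# (i) Estimate the average effect with a random-effects model
res <- rma(yi, vi, data = dat)
summary(res)    # average effect size and its confidence interval

# (ii) Assess heterogeneity of the true effect size
res$tau2        # estimated between-study variance
res$I2          # percentage of total variability due to heterogeneity

# (iii) Explain heterogeneity with a moderator (mixed-effects meta-regression)
res_mod <- rma(yi, vi, mods = ~ year, data = dat)
summary(res_mod)
```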

In this symposium, four speakers explained visual and statistical tools that help researchers make sense of the information in a meta-analysis, and provided recommendations for applying these tools in practice. The focus was more on application than on the statistical background of the tools. Xinru Li from Leiden University explained how classification and regression trees (CART) can be used to explain heterogeneity in effect size in a meta-analysis. Current meta-analysis methodology lacks appropriate methods to identify interactions between multiple moderators when no a priori hypotheses have been specified. The proposed meta-CART approach has the advantage that it can deal with many moderators and can identify interaction effects between them.

Hilde Augusteijn, in her talk “Posterior Probabilities in Meta-Analysis: An Intuitive Approach of Dealing with Publication Bias”, introduced a new meta-analytical method that makes use of both Bayesian and frequentist statistics. This method evaluates the probability of the true effect size being zero, small, medium or large, and the probability of true heterogeneity being zero, small, medium or large, while correcting for publication bias. The approach, which intuitively provides an evaluation of uncertainty in the estimates of effect size and heterogeneity, is illustrated with real-life examples.

Robbie van Aert, in his talk “P-uniform*: A new meta-analytic method to correct for publication bias”, presented a new method to correct for publication bias in a meta-analysis. In contrast to the vast majority of existing methods to correct for publication bias, the proposed p-uniform* method can also be applied if the true effect size in a meta-analysis is heterogeneous. Moreover, the method enables meta-analysts to estimate and test for the presence of heterogeneity while taking publication bias into account. An easy-to-use web application for applying p-uniform* was presented, and recommendations for assessing the impact of publication bias were given.

Marcel van Assen, in his talk “The Meta-plot: A Descriptive Tool for Meta-analysis”, explained and illustrated the meta-plot using real-life meta-analyses. The meta-plot improves on the funnel plot and shows in one figure the overall effect size and its confidence interval, the quality of the primary studies with respect to their power to detect small, medium, or large effects, and evidence of publication bias.

Presentation on Teaching Open Science: Turning Students into Skeptics, not Cynics (Presenter: Michèle Nuijten)

Michèle Nuijten, in her presentation “Teaching Open Science: Turning Students into Skeptics, not Cynics”, focused on strategies to teach undergraduates about replicability and open science. Psychology’s “replication crisis” has led to many methodological changes, including preregistration, larger samples, and increased transparency. Nuijten argued that psychology students should learn these open science practices from the start. They should adopt a skeptical attitude – but not a cynical one. 

Michèle Nuijten was also discussant at two sessions:

  • “What can you do with nothing? Informative null results in hard-to-reach populations” (discussant). In hard-to-reach populations, it is especially difficult and time-consuming to collect data, resulting in smaller sample sizes and inconclusive results. Therefore, it is particularly important to understand what null results can mean. In this symposium, we discussed results from our own experimental data and how meta-analyses and Bayes factors can increase informativeness. 

  • “Improving the transparency of your research one step at a time” (chair & discussant). Many solutions have been proposed to increase the quality and replicability of psychological science. All these options can be a bit overwhelming, so in this symposium, we focused on some easy-to-implement, pragmatic strategies and tools, including preprints, Bayesian statistics, and multi-lab collaboration.

Plan S: Are the Concerns Warranted?

Blog by Olmo van den Akker. A Dutch version has been published by ScienceGuide.

Plan S is the ambitious plan of eleven national funding agencies together with the European Commission (cOAlition S) to make all research funded by these organisations publicly accessible from 2020 onward. Since its announcement on September 4th, 2018, the plan’s contents and consequences have been widely debated. When the guidelines for the implementation of the plan were presented at the end of November, some aspects were clarified, but it also became apparent that a lot of details are still unclear. Here, I will give my thoughts on four main themes surrounding Plan S: early career researchers, researchers with less financial backing, scholarly societies, and academic freedom.

The consequences of Plan S for early career researchers

Because of the low job security in the early stages of an academic career, it is possible that early career researchers will be negatively affected by Plan S. Plan S currently involves 14 national funding agencies (including India, which announced its participation on January 12th) and draws support from big private funds like the Wellcome Trust and the Bill & Melinda Gates Foundation. Combined, these funders represent no more than 15% of the available research money in the world.

This relatively small market share could hurt young researchers dependent on Plan S funders, as they will not be allowed to publish in some prestigious but closed access journals. When researchers funded by other agencies can put such publications on their CVs, they have an unfair advantage on the academic labour market. Only when Plan S or similar initiatives cover a critical mass of the world’s research output will the playing field be levelled.

A crucial assumption underlying this reasoning is the continuation of the prestige model of scientific journals. However, Plan S specifically expresses the ambition to change the way researchers are evaluated. Instead of looking at the number of publications in prestigious journals, researchers should be evaluated on the quality of their work. This point has been emphasized in the San Francisco Declaration on Research Assessment (DORA).

DORA has been signed by more than 1,000 research organizations and more than 13,500 individuals worldwide, indicating that the scientific community wants to get rid of classical quality indicators like the impact factor and the h-index in favour of a new system of research assessment. One way to evaluate researchers is to look at the extent to which their work is open and reproducible. Plan S strongly supports open science and could therefore even be beneficial to early career researchers. However, cOAlition S should play a proactive role in this culture change: the fact that so many people signed DORA does not mean that they will act on its principles.

The consequences of Plan S for researchers with less financial backing 

It is expected that Plan S will cause many journals that currently have a closed subscription model to transition to an author-pays model, in which the author pays so-called article processing charges (APCs) to get their work published open access. Many researchers have raised concerns that Plan S would allow publishers to increase their profits by raising their APCs. Because researchers are forced to publish open access, they are also forced to pay these higher APCs. For researchers with less financial backing (for example, from smaller institutions or developing countries), the increased APCs may be unaffordable, which would crowd them out of science. However, there are several counterpoints to this scenario.

First, Plan S involves the condition that journals make their APCs reasonable and transparent. If this condition is met, journal APCs are expected to go down. This is illustrated by the fact that many open access journals have no or very low APCs. It was also underscored by a white paper of the Max Planck Society showing that an open access system with APCs comes with significantly lower costs than the current system. To attain this scenario, it is important that cOAlition S monitors whether journal APCs are indeed reasonable and transparent. Commercial publishers have a lot of market power and will undoubtedly try to artificially increase their APCs. cOAlition S has already announced that it will develop a database, like the Directory of Open Access Journals, in which researchers can find journals that comply with the demands set out in Plan S. Hopefully, the necessity for journals to be included in that database will make sure that they set affordable APCs.

Second, representatives of cOAlition S have already clarified that they will set up a fund that can help researchers pay due APCs. This fund will be available for funded researchers as well as non-funded researchers who cannot reasonably be expected to pay APCs. How this APC fund will be financed is as yet unclear, but it is clear that individual researchers do not need to cover the costs of open access themselves.

The consequences of Plan S for scholarly societies

Like regular journals, journals from scholarly societies will have to move from a subscription model to an author-pays model. Representatives of scholarly societies fear that this will be the end of them. Societies would face high investments to make the open access transition. For example, to be Plan S compliant, journals need to make their articles fully machine-readable by transforming them into the JATS XML format. In addition, they need to create an Application Programming Interface (API). Developing a digital infrastructure like this is costly and can be problematic given that societies will lose their subscription fees from January 1st, 2020.

Therefore, it is essential that cOAlition S plays a proactive role and tries to facilitate the open access transition for society journals on a case-by-case basis. A starting point for cOAlition S could be the results of a study by the Wellcome Trust that will investigate how scholarly societies can transition to a Plan S compliant model as efficiently as possible. One possibility is that cOAlition S (partly) subsidizes the transition costs of journals and guides them in developing the required digital infrastructure.

The consequences of Plan S for academic freedom 

One common concern about Plan S is that it restricts the freedom of researchers to determine what and how they do research, and how they disseminate their research results. This academic freedom is guaranteed by governments and academic institutions with the aim of insulating researchers from censorship and other negative consequences of their work. In this way, researchers can focus on their research without having to worry about outside influence. When Plan S is implemented, researchers can no longer publish in paywalled journals. This would hamper researchers’ freedom to disseminate their research in the way they see fit.


However, one can doubt the extent to which researchers currently have the freedom to choose where and how to publish their work, as their hands are generally tied by demands from scientific journals. They must abide by strict word limits and specific layout standards, and usually have to hand over their copyright to the commercial publisher. Moreover, to move up in academia, they are almost forced to publish in prestigious journals. Therefore, appealing to academic freedom to criticize Plan S is unconvincing, especially given that Plan S does not place any restrictions on the content of research or on the methods researchers employ.

A more ideological point against the academic freedom argument is that academic freedom is part of an unofficial reciprocal arrangement between researchers and society. Researchers receive funding and freedom from society, but in return they should incorporate the interests of society into their decision-making. Publishing in a prestigious but closed journal does not fit with this reciprocal arrangement. Currently, many researchers have access to closed journals because university libraries pay a subscription fee to the publishers of those journals. However, not all researchers can take advantage of these subscriptions because their organisation cannot afford them or because the negotiations about subscription fees were unsuccessful.

Because of this limited access to research results, scientific progress slows down. This is problematic in itself, but it can have major consequences for research on climate change or contagious diseases. In addition, the subscription fees demanded by publishers are disproportionately high. In 2018, The Netherlands paid more than 12 million euros to one of the main scientific publishers, Elsevier. A big chunk of that money ended up as profit for Elsevier and was not reinvested into science. Obviously, this practice does not fit with the reciprocal arrangement between researchers and society either.

Conclusion

After its call for feedback, cOAlition S was flooded by a wave of comments and ideas about Plan S, of which the main ones are outlined above. Alternative plans were even proposed, with names like Plan U and Plan T, which were often even more radical than Plan S. Although such initiatives are very valuable to the scientific community, it is hard to create a new infrastructure for scholarly communication without a large budget and without the support of a critical mass. cOAlition S does have a large budget and is getting increasing support from the scientific community. That is why I think Plan S is currently the most efficient way forward, especially because the potential issues with the plan are relatively straightforward to prevent. I have faith that cOAlition S will take the responsibility that follows from initiating this ambitious plan. As a research community, let us place our trust in cOAlition S and back it on the way toward a more open science.

Open Science: The Way Forward

Blog by Michèle Nuijten for Tilburg University on the occasion of World Digital Preservation Day.

We have all seen headlines about scientific findings that sounded too good to be true. Think about the headline “a glass of red wine is the equivalent of an hour in the gym”. A headline like this may make you skeptical right away, and rightly so. In this particular case, it turned out that several journalists got carried away, and the researchers never made such claims.

However, sometimes the exaggeration of an effect already takes place in the scientific article itself. Indeed, increasing evidence shows that many published results might be overestimated, or even false.

This excess of overestimated results is probably caused by a complex interaction of different factors, but there are several leads as to what the most important problems might be.

The first problem is publication bias: studies that “find something” have a larger probability of being published than studies that don’t find anything. You can imagine that if we only present the success stories, the overall picture gets distorted and overly optimistic.

This publication bias may lead to the second problem: flexible data analysis. Scientists can start showing strategic behavior to increase their chances of publishing their findings: “if I leave out this participant, or if I try a different analysis, maybe my data will show me the result I was looking for.” This can even happen completely unconsciously: in hindsight, all these decisions may seem completely justified.

The third problem that can distort scientific results is statistical errors. Unfortunately, it seems that statistical errors in publications are widespread (see, e.g., the prevalence of errors in psychology).

The fact that we make mistakes and have human biases doesn’t make us bad scientists. However, it does mean that we have to come up with ways to avoid or detect these mistakes, and that we need to protect ourselves from our own biases.

I believe that the best way of doing that is through open science.

One of the most straightforward examples of open science is sharing data. If raw data are available, you can see exactly what the conclusions in an article are based on. This way, any errors or questionable analytical choices can be corrected or discussed. Maybe the data can even be used to answer new research questions.

Sharing data can seem as simple as posting them on your own personal website, but this has proven to be rather unstable: URLs die, people move institutions, or they might leave academia altogether. A much better way to share data is via certified data repositories. That way, your data are safely stored for the long run.

Open data is only one example of open science. Another option is to openly preregister research plans before you actually start doing the research. You can also make materials and analysis code open, publish open access, or write public peer reviews.

Of course, it is not always possible to make everything open in every research project. Practical issues such as privacy can restrict how open you can be. However, you might be surprised by how many other things you can make open, even if you can’t share your data.

I would like to encourage you to think about ways to make your own research more open. Maybe you can preregister your plans, maybe you can publish open access, maybe you can share your data. No matter how small the change is, opening things up will make our science better, one step at a time.

This blog has been posted on the website of Tilburg University: https://www.tilburguniversity.edu/current/news/blog-michele-nuijten-open-science/

statcheck – A Spellchecker for Statistics

Guest blog for LSE Impact Blog by Michèle Nuijten

If you’re a non-native English speaker (like me), but you often have to write in English (like me), you will probably agree that the spellchecker is an invaluable tool. And even when you do speak English fluently, I’m sure that you’ve used the spellchecker to filter out any typos or other mistakes.

When you’re writing a scientific paper, there are many more things that can go wrong than just spelling. One thing that is particularly error-prone is the reporting of statistical findings.

Statistical errors in published papers

Unfortunately, we have plenty of reasons to assume that copying the results from a statistical program into a manuscript doesn’t always go well. Published papers often contain impossible means, coefficients that don’t add up, or ratios that don’t match their confidence intervals.

In psychology, my field, we found a high prevalence of inconsistencies in reported statistical test results (although these problems are by no means unique to psychology). Most conclusions in psychology are based on “null hypothesis significance testing” (NHST) and look roughly like this:

“The experimental group scored significantly higher than the control group, t(58) = 1.91, p < .05”.

This is a t-test with 58 degrees of freedom, a test statistic of 1.91, and a p-value that is smaller than .05. A p-value smaller than .05 is usually considered “statistically significant”.

This example is, in fact, inconsistent. If I recalculate the p-value based on the reported degrees of freedom and the test statistic, I would get p = .06, which is not statistically significant anymore. In psychology, we found that roughly half of papers contain at least one inconsistent p-value, and in one in eight papers this may have influenced the statistical conclusion.
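To see how such a recalculation works, here is a minimal sketch in base R, using the numbers from the example above:

```r
# Recompute the two-sided p-value for the example result: t(58) = 1.91, p < .05
t_value <- 1.91
df      <- 58
p_recomputed <- 2 * pt(abs(t_value), df = df, lower.tail = FALSE)
round(p_recomputed, 3)  # ~0.061, i.e., not below .05, so the reported "p < .05" is inconsistent
```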

Even though most inconsistencies we found were small and likely the result of innocent copy-paste mistakes, they can substantively distort conclusions. Errors in papers make results unreliable, because they become “irreproducible”: if other researchers performed the same analyses on the same data, they would come to a different conclusion. This, of course, affects the level of trust we place in these results.

statcheck

The inconsistencies I’m talking about are obvious. Obvious, in the sense you don’t need raw data to see that certain reported numbers don’t match. The fact that these inconsistencies do arise in the literature means that peer review did not filter them out. I think it could be useful to have an automated procedure to flag inconsistent numbers. Basically, we need a spellchecker for stats. To that end, we developed statcheck.

statcheck is a free, open-source tool that automatically extracts reported statistical results from papers and recalculates p-values. It is available as an R package and as a user-friendly web app at http://statcheck.io.

statcheck roughly works as follows. First, it converts articles to plain-text files. Next, it searches the text for statistical results. This is possible in psychology, because of the very strict reporting style (APA); stats are always reported in the same way. When statcheck detects a statistical result, it uses the reported degrees of freedom and test statistic to recompute the p-value. Finally, it compares the reported p-value with the recalculated one, to see if they match. If not, the result is flagged as an inconsistency. If the reported p-value is significant and the recalculated one is not, or vice versa, it is flagged as a gross inconsistency. More details about how statcheck works can be found in the manual.
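As a rough illustration of how this might look in practice, here is a minimal sketch using the statcheck R package on the example sentence from above; the exact names of the output columns may differ between statcheck versions.

```r
# Minimal sketch: run statcheck on a snippet of APA-formatted text.
# install.packages("statcheck")  # if statcheck is not yet installed
library(statcheck)

txt <- "The experimental group scored significantly higher than the control group, t(58) = 1.91, p < .05."

res <- statcheck(txt)
res  # one row per detected result, with the reported and recomputed p-value and
     # flags for inconsistencies and gross inconsistencies (column names may vary
     # across statcheck versions)
```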

statcheck’s accuracy

It is important that we know how accurate statcheck is in flagging inconsistencies. We don’t want statcheck to mark large numbers of correct results as inconsistent, and, conversely, we also don’t want statcheck to wrongly classify results as correct when they are actually inconsistent. We investigated statcheck’s accuracy by running it on a set of articles for which inconsistencies were also manually coded.

When we compared statcheck’s results with the manual codings, we found two main things. First, statcheck detects roughly 60% of all reported stats. It missed the statistics that were not reported completely according to APA style. Second, statcheck did a very good job in flagging the detected statistics as inconsistencies and gross inconsistencies. We found an overall accuracy of 96.2% to 99.9%, depending on the specific settings. (There has been some debate about this accuracy analysis. A summary of this discussion can be found here.)

Even though statcheck seems to perform well, its classifications are not 100% accurate. But, to be fair, I doubt whether any automated algorithm could achieve this (yet). And again, the comparison with the spellchecker still holds; mine keeps telling me I misspelled my own name, and that it should be “Michelle” (it really shouldn’t be).

One major advantage of using statcheck (or any algorithm) for statistical checks is its efficiency. It will take only seconds to flag potential problems in a paper, rather than going through all the reported stats and checking them manually.

An increasing number of researchers seem convinced of statcheck’s merits; the R package has been downloaded more than 8,000 times, while the web app has been visited over 23,000 times. Additionally, two flagship psychology journals have started to use statcheck as a standard part of their peer review process. Testimonies on Twitter illustrate the ease and speed with which papers can be checked before they’re submitted:

Just statcheck-ed my first co-authored manuscript. On my phone while brushing my teeth. Great stuff @MicheleNuijten @SachaEpskamp @seanrife!

— Anne Scheel (@annemscheel) October 22, 2016

Automate the error-checking process

More of these “quick and dirty spellchecks” for stats are being developed (e.g., GRIM to spot inconsistencies in means, or p-checker to analyse the consistency and other properties of p-values), and an increasing number of papers and projects make use of automated scans to retrieve statistics from large numbers of papers (e.g., here, here, here, and here).

In an era where scientists are pressed for time, automated tools such as statcheck can be very helpful. As an author you can make sure you didn’t mistype your key results, and as a peer reviewer you can quickly check if there are obvious problems in the statistics of a paper. Reporting statistics can just as easily go wrong as grammar and spelling; so when you’re typing up a research paper, why not also check your stats?

More information about statcheck can be found at: http://statcheck.io

Journal Policies that Encourage Data Sharing Prove Extremely Effective

Guest blog for LSE Impact Blog by Michèle Nuijten

For science to work well we should move towards opening it up. That means sharing research plans, materials, code, and raw data. If everything is openly shared, all steps in a study can be checked, replicated, or extended. By sharing everything we let the facts speak for themselves, and that’s what science is all about.

Unfortunately, in my own field of psychology, raw data are notoriously hard to come by. Statements in papers such as “all data are available upon request” are often void, and data may get lost if a researcher retires, switches university, or even buys a new computer. We need to somehow incentivise researchers to archive their data online in a stable repository. But how?

Currently it is not in a scientist’s interest to put effort into making data and materials available. Scientists are evaluated based on how much they publish and how often they’re cited. If they don’t receive credit for sharing all details of their work, but instead run the risk that colleagues will criticise their choices (or worse: find errors!), why would they do it?

So now for the good news: incentivising researchers to share their data may be a lot easier than it seems. It could be enough for journals to simply ask for it! In our recent preprint, we found journal policies that encourage data sharing are extremely effective. Journals that require data sharing showed a steep increase in the percentage of articles with open data from the moment these policies came into effect.

In our study we looked at five journals. First, we compared two journals in decision making research: Judgment and Decision Making (JDM), which started to require data sharing from 2011; and the Journal of Behavioral Decision Making (JBDM), which does not require data sharing. Figure 1 shows a rapidly increasing percentage of articles in JDM sharing data (up to 100%!), whereas nothing happens in JBDM. The same pattern holds for psychology articles from open access publisher PLOS (with its data-sharing policy taking effect in 2014) and the open access journal Frontiers in Psychology (FP; no such data policy).

Similarly, the journal Psychological Science (PS) also contained increasing numbers of articles with open data after it introduced its Open Practice Badges in 2014. You can earn a badge for sharing data, sharing materials, or preregistering your study. A badge is basically a sticker for good behaviour on your paper. Although this may sound a little kindergarten, believe me: you don’t want to be the one without a sticker!

Figure 1: Percentage of articles per journal to have open data. A solid circle indicates no open-data policy; an open circle indicates an open-data policy. Source: Nuijten, M. B., Borghuis, J., Veldkamp, C. L. S., Alvarez, L. D., van Assen, M. A. L. M., & Wicherts, J. M. (2017) “Journal Data Sharing Policies and Statistical Reporting Inconsistencies in Psychology”, PsyArXiv Preprints. This work is licensed under a CC0 1.0 Universal license.

The increase in articles with available data is encouraging and has important consequences. With raw data we are able to explore different hypotheses from the same dataset, or combine information of similar studies in an Individual Participant Data (IPD) meta-analysis. We could also use the data to check if conclusions are robust to changes in the analyses.

The availability of research data would increase the quality of science as a whole. With raw data we have the possibility to find and correct mistakes. On top of that, the probability of making a mistake is likely to be lower once you have gone to the effort of archiving your data in such a way that another person can understand it. The process of archiving data for future users could also provide a barrier to taking advantage of the flexibility in data analysis that could lead to false positive results. Enforcing data sharing might even deter fraud.

Of course, data-sharing policy is not a “one-size-fits-all” solution. In some fields of psychological research (e.g. sexology or psychopathology) data can be very personal and sensitive, and can’t simply be posted online. Luckily there are increasingly sophisticated techniques to anonymise data, and often materials and analysis plans can still be shared to increase transparency.

It is also important to acknowledge the time and effort it took to collect the original data. One way to do this is to set a fixed period of time during which only the original researchers have access to the data. That way they get a head start in publishing studies based on the data. When this period is over and others can also use the data, the original authors should, of course, be properly acknowledged through citations, or even, in some cases, co-authorship.

There are many different ways to encourage openness in science. My hope is that more journals will soon follow and start implementing an open-data policy. But aside from merely requiring data sharing, journals should also check whether the data are actually available. To illustrate the importance of this: our study found that one third of PLOS articles claiming to have open data actually did not deliver (for similar numbers, see the data by Chris Chambers).

And many (including myself) would even like to go one step further. Datasets should not only be available, they should also be stored in such a way that others can use them (see the FAIR Data Principles). A good way to influence the usability of open data might be the use of the Open Practice Badges. It turned out that in PS, the badges not only increased the availability of data, but also the relevance, usability, and completeness of the data. Another way of ensuring data quality, but also recognition for your work, is to publish your data in a special data journal, such as the Journal of Open Psychology Data.

Even though data sharing in psychology is not yet the status quo, several journals are already helping our field take a step in the right direction. As a matter of fact, the American Psychological Association (APA) has recently announced it will give its editors the option of awarding badges. It is very encouraging that journal policies on data sharing, or even an intervention as simple as a badge to reward good practice, can cause such a surge in open data. Therefore, I hereby encourage all editors in all fields to start requiring data. And while we’re at it, why not ask for research plans, materials, and analysis code too?

I would like to thank Marcel van Assen for his helpful comments while drafting this blog.

This blog post is based on the author’s co-written article, “Journal Data Sharing Policies and Statistical Reporting Inconsistencies in Psychology”, available at http://doi.org/10.1525/collabra.102

The Replication Paradox

Guest blog for The Replication Network by Michèle Nuijten

Lately, there has been a lot of attention to the excess of false positive and exaggerated findings in the published scientific literature. In many different fields there are reports of an impossibly high rate of statistically significant findings, and studies of meta-analyses in various fields have shown overwhelming evidence for overestimated effect sizes.

The suggested solution for this excess of false positive findings and exaggerated effect size estimates in the literature is replication. The idea is that if we just keep replicating published studies, the truth will come to light eventually.

This intuition also showed in a small survey I conducted among psychology students, social scientists, and quantitative psychologists. I offered them different hypothetical combinations of large and small published studies that were identical except for the sample size – they could be considered replications of each other. I asked them how they would evaluate this information if their goal was to obtain the most accurate estimate of a certain effect. In almost all of the situations I offered, the answer was almost unanimously: combine the information of both studies.

This makes a lot of sense: the more information the better, right? Unfortunately this is not necessarily the case.

The problem is that the respondents forgot to take into account the influence of publication bias: statistically significant results have a higher probability of being published than non-significant results. And only publishing significant effects leads to overestimated effect sizes in the literature.

But wasn’t this exactly the reason to take replication studies into account? To solve this problem and obtain more accurate effect sizes?

Unfortunately, there is evidence from multi-study papers and meta-analyses that replication studies suffer from the same publication bias as original studies (see below for references). This means that both types of studies in the literature contain overestimated effect sizes.

The implication of this is that combining the results of an original study with those of a replication study could actually worsen the effect size estimate. This works as follows.

Bias in published effect size estimates depends on two factors: publication bias and power (the probability that you will reject the null hypothesis, given that it is false). Studies with low power (usually due to a small sample size) contain a lot of noise, and the effect size estimate will be all over the place, ranging from severe underestimations to severe overestimations.

This in itself is not necessarily a problem; if you would take the average of all these estimates (e.g., in a meta-analysis) you would end up with an accurate estimate of the effect. However, if because of publication bias only the significant studies are published, only the severe overestimations of the effect will end up in the literature. If you would calculate an average effect size based on these estimates, you will end up with an overestimation.

Studies with high power do not have this problem. Their effect size estimates are much more precise: they will be centered more closely on the true effect size. Even when there is publication bias, and only the significant (maybe slightly overestimated) effects are published, the distortion would not be as large as with underpowered, noisier studies.

Now consider again a replication scenario such as the one mentioned above. In the literature you come across a large original study and a smaller replication study. Assuming that both studies are affected by publication bias, the original study will probably have a somewhat overestimated effect size. However, since the replication study is smaller and has lower power, it will contain an effect size that is even more overestimated. Combining the information of these two studies then basically comes down to adding bias to the effect size estimate of the original study. In this scenario it would render a more accurate estimation of the effect if you would only evaluate the original study, and ignored the replication study.
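A small simulation sketch of this mechanism is given below; all numbers are assumptions chosen purely for illustration (a true effect of d = 0.2, a large “original” study with 100 participants per group, a small “replication” with 20 per group, and publication of significant results only).

```r
# Illustrative simulation: under publication bias, published effect sizes from
# small (low-powered) studies overestimate the true effect more than those from
# large studies. All numbers are assumptions chosen for illustration.
set.seed(1)

true_d <- 0.2    # assumed true standardized effect size
n_sims <- 20000  # number of simulated studies per condition

mean_published_d <- function(n_per_group) {
  sims <- replicate(n_sims, {
    x <- rnorm(n_per_group, mean = true_d)  # experimental group
    y <- rnorm(n_per_group, mean = 0)       # control group
    d <- (mean(x) - mean(y)) / sqrt((var(x) + var(y)) / 2)  # Cohen's d
    p <- t.test(x, y, var.equal = TRUE)$p.value
    c(d = d, p = p)
  })
  # Publication bias: only significant studies enter the literature
  mean(sims["d", sims["p", ] < .05])
}

mean_published_d(n_per_group = 100)  # large original study: modest overestimation
mean_published_d(n_per_group = 20)   # small replication: much stronger overestimation
```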

In short: even though a replication will increase the precision of the effect size estimate (a smaller confidence interval around the estimate), it will add bias if its sample size is smaller than that of the original study, but only when there is publication bias and power is not high enough.

There are two main solutions to the problem of overestimated effect sizes.

The first solution would be to eliminate publication bias; if there is no selective publishing of significant effects, the whole “replication paradox” would disappear. One way to eliminate publication bias is to preregister your research plan and hypotheses before collecting the data. Some journals will even review this preregistration, and can give you an “in principle acceptance” – completely independent of the results. In this case, studies with significant and non-significant findings have an equal probability of being published, and published effect sizes will not be systematically overestimated.  Another way is for journals to commit to publishing replication results independent of whether the results are significant.  Indeed, this is the stated replication policy of some journals already.

The second solution is to only evaluate (and perform) studies with high power. If a study has high power, the effect size estimate will be estimated more precisely and less affected by publication bias. Roughly speaking: if you discard all studies with low power, your effect size estimate will be more accurate.

A good example of an initiative that implements both solutions is the recently published Reproducibility Project, in which 100 psychological effects were replicated in studies that were preregistered and highly powered. Initiatives such as this one eliminate systematic bias in the literature and advance the scientific system immensely.

However, until preregistered, highly powered replications are the new standard, researchers who want to play it safe should change their intuition from “the more information, the higher the accuracy” to “the more power, the higher the accuracy.”

This blog is based on the paper “The replication paradox: Combining studies can decrease accuracy of effect size estimates” by Nuijten, van Assen, Veldkamp, and Wicherts (2015), Review of General Psychology, 19(2), 172-182.

Literature on How Replications Suffer From Publication Bias

  • Francis, G. (2012). Publication bias and the failure of replication in experimental psychology. Psychonomic Bulletin & Review, 19(6), 975-991.
  • Ferguson, C. J., & Brannick, M. T. (2012). Publication bias in psychological science: Prevalence, methods for identifying and controlling, and implications for the use of meta-analyses. Psychological Methods, 17, 120-128.

Data sharing not only helps facilitate the process of psychology research, it is also a reflection of rigour

Originally Published on LSE Impact Blog

Guest blog for LSE Impact Blog by Jelte Wicherts

Data sharing in scientific psychology has not been particularly successful and it is high time we change that situation. Before I explain how we hope to get rid of the secrecy surrounding research data in my field of psychology, let me explain how I got here.

 

Ten years ago, I was working on a PhD thesis for which I wanted to submit old and new IQ data from different cohorts to novel psychometric techniques. These techniques would enable us to better understand the remarkable gain in average IQ that has been documented in most western countries over the course of the 20th century. These new analyses had the potential to shed light on why it is that more recent cohorts of test-takers (say, folks born between 1975-1985) scored so much higher on IQ tests than older cohorts (say, baby boomers). In search of useful data from the millions of yearly IQ test administrations, I started emailing psychologists in academia and the test-publishing world. Although my colleagues acknowledged that indeed there must be a lot of data around, most of their data were not in any useful format or could no longer be found.

Raven Matrix – IQ Test Image credit: Life of Riley [CC-BY-SA-3.0]

After a persistent search I ended up getting five useful data sets that had been lying in a nearly-destroyed file-cabinet at some library in Belgium, were saved on old floppy disks, were reported as a data table in published articles, or were in a data repository (because data collection had been financed by the Dutch Ministry of Education under the assumption that these data would perhaps be valuable for future use). Our analyses of the available data showed that the gain in average IQ was in part an artefact of testing. So a handful of psychologists back in the 1960s kept their data, which decades later helped show that their rebellious generation was not simply less intelligent than generations X  (born 1960-1980) or Y (born 1980-2000). The moral of the story is that often we do not know about all potential uses of the data that we as researchers collect. Keeping the data and sharing them can be scientifically valuable.

 

Psychologists used to be quite bad at storing and sharing their research data. In 2005, we contacted 141 corresponding authors of papers that had been published in top-ranked psychology journals. In that study, we found that 73% of the corresponding authors of papers published 18 months earlier were unable or unwilling to share data upon request, despite the fact that they had signed a form stipulating that they would share data for verification purposes. In a follow-up study, we found that researchers who failed to share data upon request reported more statistical errors and less convincing results than researchers who did share data. In other words, sharing data is a reflection of rigour. We in psychology have learned a hard lesson when it comes to researchers being secretive about their data. Secrecy enables all sorts of problems, including biases in the reporting of results, honest errors, and even fraud.

So it is high time that we as psychologists become more open with our research data. For this reason, an international group of researchers from different subfields in psychology and I have established an open access journal, published by Ubiquity Press, that rewards the sharing of psychological research data. The journal is called Journal of Open Psychology Data and in it we publish so-called data papers. Data papers are relatively short, peer-reviewed papers that describe an interesting and potentially useful data set that has been shared with the scientific community in an established data repository.

We aim to publish three types of data papers. First, a data paper in the Journal of Open Psychology Data may describe data from research that has been published in traditional journals. For instance, our first data paper reports raw data from a study of cohort differences in personality factors over the period 1982-2007, which was previously published in the Journal of Personality and Social Psychology. Second, we seek data papers on unpublished work that may be of interest for future research because the data can be submitted to alternative analyses or enriched later. Third, we publish papers that report data from replications of earlier findings in the psychological literature. Such replication efforts are often hard to publish in traditional journals, but we consider them important for progress. So the Journal of Open Psychology Data helps psychologists find interesting data sets that can be used for educational purposes (learning statistical analyses), included in meta-analyses, or submitted to secondary analyses. More information can be found in the editorial I wrote for the first issue.

In order to remain open access, the Journal of Open Psychology Data charges authors a publication fee. But our article processing charge is currently only 25 pounds or 30 euros.  So if you are a psychologist and have data lying around that will probably vanish as soon as your new computer arrives, don’t hesitate. Put your data in a safe place in a data repository, download the paper template, describe how the data were collected (and/or where they were previously reported), explain why they are interesting, and submit your data paper to the Journal of Open Psychology Data. We will quickly review your data paper, determine whether the data are interesting and useful, and check the documentation and accessibility of the data. If all is well, you can add a data paper to your resume and let the scientific community know that you have shared your interesting data. Who knows how your data may be used in the future.

This post is part of a wider collection on Open Access Perspectives in the Humanities and Social Sciences (#HSSOA) and is cross-posted at SAGE Connection. We will be featuring new posts from the collection each day leading up to the Open Access Futures in the Humanities and Social Sciences conference on the 24th October, with a full electronic version to be made openly available then.