Meta-Research Center at ICPS Paris

From March 7 to 9, 2019, the International Convention of Psychological Science (ICPS) of the Association for Psychological Science (APS) was held in Paris, France. The Tilburg Meta-Research group (co-)organized three sessions at the ICPS. Below is a short overview of the three sessions and their presentations, including links to the presentations.

Preregistration: Common Issues and Best Practices (Chair: Marjan Bakker)

Preregistration has been lauded as one of the key solutions to the many issues in the field of psychology (Nosek, Ebersole, DeHaven, & Mellor, 2018). For example, researchers have argued that preregistration tackles the problems of publication bias, reporting bias, and the opportunistic use of researchers' degrees of freedom in data analysis (also called questionable research practices or p-hacking). However, skeptics have put forward a broad list of concerns about preregistration. For example, they have argued that preregistration stifles researchers' creativity, is not effective in the case of secondary data or qualitative data, and is suited only to confirmatory research. This symposium aimed to touch on some of these issues.

Andrea Stoevenbelt, in her talk “Challenges to Preregistering a Direct Replication - Experiences from Conducting an RRR on Stereotype Threat”, described the challenges surrounding the preregistration of direct replication studies, drawing on her experience of conducting a Registered Replication Report of the seminal study by Johns, Schmader, and Martens (2005) on stereotype threat.

Olmo van den Akker, in his talk “The Do’s and Don'ts of Preregistering Secondary Data Analyses”, presented a tutorial for a template that can be used to preregister secondary data analyses. Preregistering a secondary data analysis differs from preregistering a primary data analysis mainly because researchers already have some knowledge about the data (through their own work with the data or through reading other people's work that used the data). Olmo's take-home message from the talk: "Specify your prior knowledge of the data set from your own previous use of the data and from other researchers' previous use of the data, preferably for each author separately."

All in all, this symposium touched upon many of the issues that have been raised about preregistration and hopefully encouraged researchers from a wide range of fields to give preregistration a try.

Issues with Meta-Analysis: Bias, Heterogeneity, Reproducibility (Chair: Jelte Wicherts)

The popularity of meta-analysis has been increasing over the last decades, as reflected by the rapid rise in the relative number of published meta-analyses. A key question in meta-research is what we can learn from all these meta-analyses: about a particular research topic, about systematic biases, about meta-analytic outcomes, or about the quality of coding. All talks in this symposium addressed such meta-questions about meta-analysis.

Jelte Wicherts, in his talk “Effect Sizes, Power, and Biases in Intelligence Research: A Meta-Meta-Analysis”, presented the results of a meta-meta-analysis estimating the average effect size, the median power, and evidence of bias (publication bias, decline effect, early-extremes effect, citation bias) in the field of intelligence research.
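
To give a flavor of the kind of computation behind such power estimates, the sketch below approximates the power of each primary study's two-sided z-test, given a meta-analytic effect size and the studies' standard errors. This is only a minimal illustration with hypothetical numbers, not the analysis pipeline used in the talk.

```python
import numpy as np
from scipy.stats import norm

def power_two_sided(theta, se, alpha=0.05):
    """Approximate power of a two-sided z-test to detect a
    true effect theta in a study with standard error se."""
    z_crit = norm.ppf(1 - alpha / 2)
    return norm.sf(z_crit - theta / se) + norm.cdf(-z_crit - theta / se)

# Hypothetical meta-analytic effect size and primary-study standard errors.
theta = 0.20
se = np.array([0.15, 0.10, 0.25, 0.08, 0.20])

powers = power_two_sided(theta, se)
print("Power per study:", np.round(powers, 2))
print("Median power:", round(float(np.median(powers)), 2))
```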

Anton Olsson Collentine presented his work “Limited evidence for widespread heterogeneity in psychology”. He examined the heterogeneity in all meta-analyses of Many Labs studies and registered multi-lab replication studies, both of which are presumably unaffected by publication bias and other biases. This research is important because many researchers stress the potential effect of moderators when trying to explain failed replication studies.

Esther Maassen, in her talk “Reproducibility of Psychological Meta-analyses”, systematically assessed the prevalence of reporting errors and computational inaccuracies in meta-analyses. She documented whether coding errors affected meta-analytic effect sizes and heterogeneity estimates, as well as how issues related to heterogeneity, outlying primary studies, and signs of publication bias were dealt with.

Meta-analysis: Informative Tools (Chair: Marcel van Assen)

Meta-analysis is a statistical technique that combines effect sizes from independent primary studies on the same topic, and it is now seen as the “gold standard” for synthesizing and summarizing the results of multiple primary studies. The main research objectives of a meta-analysis are (i) estimating the average effect, (ii) assessing the heterogeneity of the true effect size, and, if the true effect size differs across studies, (iii) incorporating moderator variables in the meta-analysis to explain this heterogeneity. Many different tools, visual (e.g., the funnel plot) or purely statistical (e.g., techniques to estimate heterogeneity or to adjust for publication bias), have been developed to reach these objectives.
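
To make objectives (i) and (ii) concrete, the sketch below implements a basic random-effects meta-analysis with the DerSimonian-Laird heterogeneity estimator. The effect sizes and variances are hypothetical, and in practice one would typically use a dedicated meta-analysis package rather than hand-rolled code.

```python
import numpy as np

# Hypothetical effect sizes (e.g., standardized mean differences)
# and their sampling variances from k = 6 primary studies.
y = np.array([0.35, 0.12, 0.48, 0.20, 0.55, 0.05])
v = np.array([0.040, 0.055, 0.030, 0.025, 0.060, 0.045])
k = len(y)

# Fixed-effect weights and pooled estimate (needed for Cochran's Q).
w = 1 / v
y_fe = np.sum(w * y) / np.sum(w)

# DerSimonian-Laird estimate of tau^2, the between-study variance.
Q = np.sum(w * (y - y_fe) ** 2)
C = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - (k - 1)) / C)

# Random-effects pooling: weights incorporate between-study variance.
w_re = 1 / (v + tau2)
y_re = np.sum(w_re * y) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))

# I^2: share of total variability attributable to heterogeneity.
i2 = max(0.0, (Q - (k - 1)) / Q) * 100

print(f"Pooled effect: {y_re:.3f} "
      f"(95% CI {y_re - 1.96 * se_re:.3f} to {y_re + 1.96 * se_re:.3f})")
print(f"tau^2 = {tau2:.3f}, I^2 = {i2:.1f}%")
```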

In this symposium, four speakers explained visual and statistical tools that help researchers make sense of the information in a meta-analysis, and they provided recommendations for applying these tools in practice. The focus was more on application than on the statistical background of the tools. Xinru Li from Leiden University explained how classification and regression trees (CART) can be used to explain heterogeneity in effect size in a meta-analysis. Current meta-analysis methodology lacks appropriate methods to identify interactions between multiple moderators when no a priori hypotheses have been specified. The proposed meta-CART approach has the advantage that it can deal with many moderators and can identify interaction effects between them; a minimal sketch of the core idea follows below.
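
The sketch below conveys only the core idea behind meta-CART: fit a regression tree to study-level effect sizes, weighting each study by its inverse sampling variance, so that the splits reveal (interactions between) moderators. The full meta-CART procedure involves additional steps, such as pruning and subgroup meta-analyses within the leaves, and all data here are simulated.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)

# Hypothetical study-level moderators for k = 40 studies:
# lab vs. field setting (0/1) and mean participant age.
k = 40
setting = rng.integers(0, 2, size=k)
age = rng.uniform(18, 60, size=k)
X = np.column_stack([setting, age])

# Simulate an interaction: effects are larger in lab studies
# with young samples.
true_effect = 0.2 + 0.3 * ((setting == 0) & (age < 30))
v = rng.uniform(0.01, 0.05, size=k)      # sampling variances
y = rng.normal(true_effect, np.sqrt(v))  # observed effect sizes

# A shallow tree, weighted by inverse variance, partitions the
# studies into subgroups with relatively homogeneous effects.
tree = DecisionTreeRegressor(max_depth=2, min_samples_leaf=5)
tree.fit(X, y, sample_weight=1 / v)

print(export_text(tree, feature_names=["setting", "age"]))
```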

Hilde Augusteijn, in her talk “Posterior Probabilities in Meta-Analysis: An Intuitive Approach of Dealing with Publication Bias”, introduced a new meta-analytic method that makes use of both Bayesian and frequentist statistics. The method evaluates the probability that the true effect size is zero, small, medium, or large, and the probability that the true heterogeneity is zero, small, medium, or large, while correcting for publication bias. The approach, which gives an intuitive evaluation of the uncertainty in the estimates of effect size and heterogeneity, was illustrated with real-life examples.
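
The flavor of such category-wise posterior probabilities can be illustrated with a toy Bayesian calculation that ignores the publication-bias correction central to the actual method: place a uniform prior on representative true effects for the zero, small, medium, and large categories, and update it with a normal likelihood for the observed meta-analytic estimate. All numbers are hypothetical.

```python
import numpy as np
from scipy.stats import norm

# Representative true effects per category (Cohen's benchmarks),
# with a uniform prior over the four categories.
categories = {"zero": 0.0, "small": 0.2, "medium": 0.5, "large": 0.8}
prior = np.full(len(categories), 0.25)

# Hypothetical meta-analytic estimate and its standard error.
d_hat, se = 0.35, 0.12

# Posterior over categories: prior times the normal likelihood of the
# observed estimate under each candidate true effect, then normalized.
lik = norm.pdf(d_hat, loc=np.array(list(categories.values())), scale=se)
post = prior * lik
post /= post.sum()

for name, p in zip(categories, post):
    print(f"P(true effect is {name} | data) = {p:.2f}")
```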

Robbie van Aert, in his talk “P-uniform*: A new meta-analytic method to correct for publication bias”, presented a new method to correct for publication bias in a meta-analysis. In contrast to the vast majority of existing methods to correct for publication bias, the proposed p-uniform* method can also be applied if the true effect size in a meta-analysis is heterogeneous. Moreover, the method enables meta-analysts to estimate and test for the presence of heterogeneity while taking publication bias into account. He also presented an easy-to-use web application for applying p-uniform* and gave recommendations for assessing the impact of publication bias.
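
The core logic is easiest to see in the original p-uniform estimator for a homogeneous effect, which p-uniform* extends to heterogeneous effects and nonsignificant studies: conditional on being statistically significant, the observed effects should be uniformly distributed "in surprise" exactly when evaluated at the true effect size. The sketch below, with hypothetical data, estimates the effect by finding the value of mu for which this holds.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

# Hypothetical significant studies: observed effects and standard errors.
y = np.array([0.45, 0.60, 0.38, 0.52])
se = np.array([0.20, 0.25, 0.15, 0.22])
z_crit = norm.ppf(0.975)  # two-tailed alpha = .05
y_cv = z_crit * se        # per-study significance threshold

def moment_condition(mu):
    # Conditional "p-values": P(Y >= y_i | Y significant) at effect mu.
    q = norm.sf((y - mu) / se) / norm.sf((y_cv - mu) / se)
    # At the true mu these are uniform, so sum(-ln q) has expectation k.
    return np.sum(-np.log(q)) - len(y)

mu_hat = brentq(moment_condition, -1.0, 2.0)  # solve the moment condition
print(f"Bias-corrected effect estimate: {mu_hat:.3f}")
```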

Marcel van Assen, in his talk “The Meta-plot: A Descriptive Tool for Meta-analysis”, explained and illustrated the meta-plot using real-life meta-analyses. The meta-plot improves on the funnel plot and shows in one figure the overall effect size and its confidence interval, the quality of the primary studies with respect to their power to detect small, medium, or large effects, and evidence of publication bias.

Presentation on Teaching Open Science: Turning Students into Skeptics, not Cynics (Presenter: Michèle Nuijten)

Michèle Nuijten, in her presentation “Teaching Open Science: Turning Students into Skeptics, not Cynics”, focused on strategies to teach undergraduates about replicability and open science. Psychology’s “replication crisis” has led to many methodological changes, including preregistration, larger samples, and increased transparency. Nuijten argued that psychology students should learn these open science practices from the start. They should adopt a skeptical attitude – but not a cynical one.

Michèle Nuijten was also discussant at two sessions:

  • “What can you do with nothing? Informative null results in hard-to-reach populations” (discussant). In hard-to-reach populations, it is especially difficult and time-consuming to collect data, resulting in smaller sample sizes and inconclusive results. Therefore, it is particularly important to understand what null results can mean. In this symposium, we discussed results from our own experimental data and how meta-analyses and Bayes factors can increase informativeness.

  • “Improving the transparency of your research one step at a time” (chair & discussant). Many solutions have been proposed to increase the quality and replicability of psychological science. All these options can be a bit overwhelming, so in this symposium we focused on some easy-to-implement, pragmatic strategies and tools, including preprints, Bayesian statistics, and multi-lab collaboration.