Why I Think Open Peer Review Benefits PhD Students

This blog post was part of an initiative by Nature Human Behaviour called 'Publish or Perish', in which early career researchers give their views on the pressure to publish in academia. The original blog post can be found here.

Doing scientific research is my dream job. Unfortunately, it’s not at all certain that I can keep doing research after getting my PhD degree. Research jobs are scarce and every year the academic job market is flooded with freshly minted PhDs. In practice, this means that only the most prolific PhD students will land a job. In other words, you either ‘publish or perish’. In this blog post I will argue that the culture of ‘publish or perish’, although not a problem in theory, is a problem in practice because of the unfairness of the peer review system. In my view, opening up this system would make it fairer for all researchers, but especially for PhD students.

Based on discussions with colleagues as well as my own experiences, I have become aware that the peer review system can be random and biased. This intuition is supported by scientific studies of peer review, which find that the inter-rater reliability of reviewers is low, meaning that an editor's (often arbitrary) choice of reviewers plays a big part in whether your manuscript will be accepted (Bornmann, Mutz, & Daniel, 2010; Cicchetti, 1991; Cole, Cole, & Simon, 1981; Jackson, Srinivasan, Rea, Fletcher, & Kravitz, 2011). In addition, studies have found that reviewers are more likely to value manuscripts that report positive results (Emerson et al., 2010; Mahoney, 1977) and results consistent with their own theoretical viewpoints (Mahoney, 1977). These structural biases, together with the random element, make the peer review system unfair, as it cannot consistently distinguish good research from bad research. This is especially concerning for PhD students, who have only a few years to accrue the publications needed to secure funding for an academic job. One unfair negative review could nip their career in the bud.
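
To make 'low inter-rater reliability' concrete, here is a minimal sketch in Python of how agreement between two reviewers can be quantified with Cohen's kappa, a standard agreement measure. The accept/reject decisions are made up for illustration; they are not data from any of the studies cited above.

```python
# Minimal sketch: quantifying reviewer agreement with Cohen's kappa.
# The accept (1) / reject (0) decisions below are invented for illustration;
# they are not data from the cited studies.

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters making binary accept/reject decisions."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal accept rate.
    p_a, p_b = sum(rater_a) / n, sum(rater_b) / n
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (observed - expected) / (1 - expected)

reviewer_1 = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
reviewer_2 = [1, 1, 0, 1, 0, 0, 0, 1, 1, 1]
print(cohens_kappa(reviewer_1, reviewer_2))  # 0.0: agreement no better than chance
```

A kappa near 0 means the two reviewers agree no more often than chance would predict; the studies cited above report reliabilities well below conventional thresholds for acceptable agreement.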

In my view, the solution to the unfairness of the peer review system is straightforward: switch from a closed peer review system to an open one. Here, I define open peer review as a system in which authors and reviewers are aware of each other's identities, and review reports are published alongside the relevant article. Ross-Hellauer (2017) found that these two aspects together account for more than 95% of the mentions of 'open peer review' in the recent literature. Note that open peer review may also refer to a situation where the wider community can comment on a manuscript, but I do not use that definition here. Below, I list the potential benefits and downsides of switching to an open peer review system.

Potential benefits of open peer review for PhD students

1) In an open peer review system reviewers’ names are linked to their public reviews, which increases accountability

This accountability may cause reviewers to be more conscientious and thorough when reviewing a manuscript. Indeed, a transparent peer review process has been linked to higher-quality reviews in several studies (Kowalczuk et al., 2015; Mehmani, 2016; Walsh, Rooney, Appleby, & Wilkinson, 2000; Wicherts, 2016), although studies by Van Rooyen and colleagues (Van Rooyen, Delamothe, & Evans, 2010; Van Rooyen, Godlee, Evans, Smith, & Black, 1999) failed to find any difference in quality between open and closed reviews. Higher-quality peer reviews matter especially for PhD students, who are at a stage where feedback on their work is crucial for their development. Moreover, high-quality reviews are fairer for PhD students, as such reviews can distinguish more accurately between good and bad research (and thus between good and bad PhD students).

2) If the identities of reviewers are made public, PhD students can get credit for the reviews they conduct

McDowell, Knutsen, Graham, Oelker, & Lijek (2019) found that many PhD students do not find their names on peer review reports submitted to journal editorial staff, even though they co-wrote the report with a more senior researcher. In such instances of “ghostwriting”, the PhD student usually does most of the work while the senior researcher is the only one who profits, by gaining appreciation from the editor. An open review system would provide public credit to reviewing PhD students (for example by making reviews citable, Hendricks & Lin, 2017), but it would also provide less tangible rewards, like senior researchers acknowledging their skills as high-quality scientists (see Tweet 1 below).

Tweet 1

3) The openness of reviews may also motivate reviewers to be more friendly and constructive

Of course, this would greatly benefit PhD students because, given their junior status, they are likely to be hit hardest by scathing or harsh reviews. Indeed, some research shows that reviews are potentially more courteous and constructive when they are open (Bravo, Grimaldo, López-Iñesta, Mehmani, & Squazzoni, 2019; Walsh, Rooney, Appleby, & Wilkinson, 2000).

4) Open peer review may lessen the risk of PhD students publishing in predatory journals

In a situation with open peer review, journals with no or substandard peer review will be identified quickly and will become known as low-quality journals. Predatory journals can no longer hide behind a closed peer review system and will eventually disappear. This makes life easier for PhD students, as the publishing landscape is often difficult to navigate for those who are inexperienced with it.

5) Open peer review can help to prevent a practice called citation manipulation (Baas & Fennell, 2019), whereby a reviewer suggests that large numbers of citations to their own work be added to a submitted manuscript

These citations are often unwarranted, but researchers (especially PhD students) are often coerced into adding them because they desperately want to publish their paper. Of course, only researchers with a reasonable number of citable papers under their belt can engage in citation manipulation, which makes it harder for PhD students to compete on the academic job market. Indeed, a prominent case of citation manipulation spurred a group of early career researchers to write an open letter to voice their concern. Open peer review would clearly help here, as reviewers contemplating this unethical practice would think twice if their name and review were public.

6) Open peer review provides PhD students with insight into the mechanics of science

For example, it allows PhD students to see how other papers have developed over time or to see that landmark papers have been rejected multiple times before being published. Such insights into the peer review process are very valuable for PhD students as they can get more comfortable with the peer review system and can see that rejections are the norm rather than the exception.

7) Open peer review (or streamlined review, see Collabra, 2019) could save PhD students (and other researchers) time

Once a manuscript is rejected, it is usually sent to another journal to undergo a new round of review. The arguments raised by the first and second sets of reviewers are likely to be similar, because the first set of reviews was done behind closed doors and authors often change little between submissions. An estimated 15 million hours are spent every year restating arguments in reviews of previously rejected papers (The AJE Team, 2019). In open peer review, reviewers can build on previous reviews and see how the paper has developed, which could free up many hours for valuable research. Of course, PhD students account for only part of this wasted review time, but because they likely take longer than the average of 8.5 hours per review (Ware, 2008), an open peer review system would be especially time-saving for them.
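
As a rough back-of-the-envelope check (my own arithmetic, using only the two figures cited above), the scale of the redundancy looks like this:

```python
# Back-of-the-envelope arithmetic based on the two figures cited above:
# 15 million review hours lost per year (The AJE Team, 2019) and a mean
# of 8.5 hours per review (Ware, 2008).
redundant_hours_per_year = 15_000_000
mean_hours_per_review = 8.5

redundant_reviews_per_year = redundant_hours_per_year / mean_hours_per_review
print(f"{redundant_reviews_per_year:,.0f} redundant reviews per year")  # ~1,764,706
```

In other words, the cited estimates imply on the order of 1.8 million redundant reviews per year that open peer review could help avoid.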

Potential downsides of open peer review for PhD students

1) The main argument put forward against open peer review is that PhD students who write negative reviews may frustrate other researchers, who could then retaliate. For example, vindictive researchers could write negative reviews of the PhD student’s future work or speak badly about them to colleagues at a conference or in personal e-mails. This is plausible, but it is unclear whether a blind review system would prevent such practices, as anonymity is by no means guaranteed. Many authors at least think they can correctly identify their reviewers (see Tweets 2 and 3), and a review found that masking reviewers’ identities succeeds only about half of the time (Snodgrass, 2006). In any case, open peer review at least makes abuses of power easier to identify.

Tweet 2

Tweet 3

2) Whether or not PhD students are actually retaliated against, a fear of retaliation does exist in the academic community (see Tweets 4 and 5). This fear could cause PhD students to shy away from criticizing senior researchers in reviews, or even to decline requests to review work authored by senior researchers. The first scenario would cause suboptimal work by senior researchers to be published more often, reinforcing the academic status quo and decreasing the quality of the scientific literature. The second scenario would prevent PhD students from gaining valuable review experience and would slow down the scientific process. The second scenario seems unlikely, though, in light of findings by Bravo et al. (2019) and Ross-Hellauer, Deppe, & Schmidt (2017) that junior scholars are more willing to engage in open peer review than senior scholars.

Tweet 4

Tweet 5

3) Power dynamics can also play a problematic role when the reviewer is a senior researcher and the manuscript’s author is a PhD student. When a manuscript reports findings that run counter to the senior researcher’s self-interest, they may write a condemning review to deter the PhD student from pursuing the work further (see Tweet 6). However, this can also happen in a closed peer review system. In open peer review, at least, unfairly harsh and power-abusive reviews can be identified and followed up on. Although there is currently no system for reprimanding power abuse in peer review, Bastian (2018) argues that there are ways to do this effectively. For example, we could explicitly label power abuse in peer review as professional misconduct or even harassment in the relevant codes of conduct.

Tweet 6

4) In my view, the most problematic downside of open peer review (as I have defined it) is that all kinds of biases could creep into the peer review system. For example, papers by PhD students might be rejected more often because PhD students lack prestige or because they more often come up with ideas that challenge the status quo in the literature. Indeed, studies have shown that revealing author identities to reviewers is associated with disproportionate rejections of researchers with low prestige, like PhD students (Seeber & Bacchelli, 2017; Tomkins, Zhang, & Heavlin, 2017). These findings are worrying and should be taken seriously. Importantly, open peer review should not be a goal in itself but should only be implemented when the benefits outweigh the costs. In this case, the benefits of unmasking the identities of authors (e.g., less hassle with anonymizing your manuscripts) are marginal, while the potential costs (discrimination against low-prestige researchers) are likely high. An open peer review system in which the identities of authors remain masked therefore seems like the best solution.

Conclusion

My hope is that I won’t be the one to perish, but the simple fact is that there is not enough funding available to accommodate every PhD student aspiring to a job in academia. That need not be a problem, as a little academic competition is fine. After all, it only seems fair that the best of the best are tasked with expanding our scientific knowledge. However, the best of the best will only be selected as long as the peer review system is fair. Currently, that does not seem to be the case.

In this blog post I have therefore argued for an open peer review system. Implementing this system across the board could improve the quality and tone of peer reviews, provide PhD students with credit for their reviews, root out predatory journals, prevent citation manipulation, give PhD students insight into the mechanics of science, and lessen the peer review burden for PhD students. Even though the arguments against open peer review should be taken seriously (and addressed, for example, by masking the identities of authors), I am convinced that open peer review will create a fairer system. And, as you can see below, the European Journal of Neuroscience, one of the journals that already practice open peer review, wholeheartedly agrees.

Excerpt from the summary report of the European Journal of Neuroscience about their new open peer review system. Retrieved from https://www.wiley.com/network/researchers/being-a-peer-reviewer/transparent-review-at-the-european-journal-of-neuroscience-experiences-one-year-on

References

  • Baas, J., & Fennell, C. (2019, May). When peer reviewers go rogue-Estimated prevalence of citation manipulation by reviewers based on the citation patterns of 69,000 reviewers. SSRN Working Paper. Retrieved from https://ssrn.com/abstract=3339568.

  • Bastian, H. (2018). Signing critical peer reviews & the fear of retaliation: What should we do? https://blogs.plos.org/absolutely-maybe/2018/03/22/signing-critical-peer-reviews-the-fear-of-retaliation-what-should-we-do.

  • Bornmann, L., Mutz, R., & Daniel, H. D. (2010). A reliability-generalization study of journal peer reviews: A multilevel meta-analysis of inter-rater reliability and its determinants. PLoS ONE, 5(12), e14331.

  • Bravo, G., Grimaldo, F., López-Iñesta, E., Mehmani, B., & Squazzoni, F. (2019). The effect of publishing peer review reports on referee behavior in five scholarly journals. Nature Communications, 10(1), 322.

  • Cicchetti, D. V. (1991). The reliability of peer review for manuscript and grant submissions: A cross-disciplinary investigation. Behavioral and Brain Sciences, 14(1), 119-135.

  • Cole, S., Cole, J. R., & Simon, G. A. (1981). Chance and consensus in peer review. Science, 214(4523), 881-886.

  • Collabra (2019). Editorial Policies. Retrieved from https://www.collabra.org/about/editorialpolicies/#streamlined-review.

  • Emerson, G. B., Warme, W. J., Wolf, F. M., Heckman, J. D., Brand, R. A., & Leopold, S. S. (2010). Testing for the presence of positive-outcome bias in peer review: a randomized controlled trial. Archives of Internal Medicine, 170(21), 1934-1939.

  • Hendricks, G., & Lin, J. (2017). Making peer reviews citable, discoverable, and creditable. Retrieved from https://www.crossref.org/blog/making-peer-reviews-citable-discoverable-and-creditable.

  • Jackson, J. L., Srinivasan, M., Rea, J., Fletcher, K. E., & Kravitz, R. L. (2011). The validity of peer review in a general medicine journal. PLoS ONE, 6(7), e22475.

  • Kowalczuk, M. K., Dudbridge, F., Nanda, S., Harriman, S. L., Patel, J., & Moylan, E. C. (2015). Retrospective analysis of the quality of reports by author-suggested and non-author-suggested reviewers in journals operating on open or single-blind peer review models. BMJ Open, 5(9), e008707.

  • Mahoney, M. J. (1977). Publication prejudices: An experimental study of confirmatory bias in the peer review system. Cognitive Therapy and Research, 1(2), 161-175.

  • McDowell, G. S., Knutsen, J., Graham, J., Oelker, S. K., & Lijek, R. S. (2019). Co-reviewing and ghostwriting by early career researchers in the peer review of manuscripts. bioRxiv, 617373.

  • Mehmani, B. (2016). Is open peer review the way forward? Retrieved from https://www.elsevier.com/reviewers-update/story/innovation-in-publishing/is-open-peer-review-the-way-forward.

  • Ross-Hellauer, T. (2017). What is open peer review? A systematic review. F1000Research, 6. https://doi.org/10.12688/f1000research.11369.2

  • Ross-Hellauer, T., Deppe, A., & Schmidt, B. (2017). Survey on open peer review: Attitudes and experience amongst editors, authors and reviewers. PLoS ONE, 12(12), e0189311.

  • Seeber, M., & Bacchelli, A. (2017). Does single blind peer review hinder newcomers? Scientometrics, 113(1), 567-585.

  • Snodgrass, R. (2006). Single-versus double-blind reviewing: an analysis of the literature. ACM Sigmod Record, 35(3), 8-21.

  • The AJE Team (2019). Peer Review: How We Found 15 Million Hours of Lost Time. Retrieved from https://www.aje.com/arc/peer-review-process-15-million-hours-lost-time.

  • Tomkins, A., Zhang, M., & Heavlin, W. D. (2017). Reviewer bias in single-versus double-blind peer review. Proceedings of the National Academy of Sciences, 114(48), 12708-12713.

  • Van Rooyen, S., Delamothe, T., & Evans, S. J. (2010). Effect on peer review of telling reviewers that their signed reviews might be posted on the web: Randomised controlled trial. BMJ, 341, c5729.

  • Van Rooyen, S., Godlee, F., Evans, S., Smith, R., & Black, N. (1999). Effect of blinding and unmasking on the quality of peer review. Journal of General Internal Medicine, 14(10), 622-624.

  • Walsh, E., Rooney, M., Appleby, L., & Wilkinson, G. (2000). Open peer review: A randomised controlled trial. The British Journal of Psychiatry, 176(1), 47-51.

  • Ware, M. (2008). Peer review in scholarly journals: Perspective of the scholarly community–Results from an international study. Information Services & Use, 28(2), 109-112.

  • Wicherts, J. M. (2016). Peer review quality and transparency of the peer-review process in open access and subscription journals. PLoS ONE, 11(1), e0147913.

A Recap of the Tilburg Meta-Research Day

On Friday November 22, 2019, the Meta-Research Center at Tilburg University organized the Tilburg Meta-Research Day. Around 90 interested researchers attended the day, which featured three plenary lectures, by John Ioannidis (who received an honorary doctorate from Tilburg University a day earlier), Ana Marušić, and Sarah de Rijcke, as well as seven parallel sessions on meta-research.

Below you can find the links to the video footage of the three plenary sessions as well as summaries of all seven parallel sessions. The full program of the Tilburg Meta-Research Day can be found here. If you have any questions or comments, please contact us at metaresearch@uvt.nl.

Next up at Tilburg: The 1st European Conference on Meta-Research (July 2021).

 

Recordings of plenary talks:

Plenary talk by Sarah de Rijcke: Research on Research Evaluation: State-of-the-art and practical insights

Plenary talk by Ana Marušić: Reviewing Reviews: Research on the Review Process at Journals and Funding Agencies

Plenary talk by John Ioannidis: Meta-research in different scientific fields: What lessons can we learn from each other?

 

Parallel sessions (see below for summaries):

  • How can meta-research improve research evaluation? (Session leaders: Sarah de Rijcke & Rinze Benedictus)

  • How can we ensure the future of meta-research? (Session leader: Olmo van den Akker)

  • How can meta-research improve statistical practices? (Session leader: Judith ter Schure)

  • How can meta-research improve the Psychological Science Accelerator (PSA) and how can the PSA improve meta-research? (Session leaders: Peder Isager & Marcel van Assen)

  • How can meta-research improve peer review? (Session leader: Ana Marušić)

  • How can meta-research improve our understanding of the effects of incentives on the efficiency and reliability of science? (Session leaders: Sophia Crüwell, Leonid Tiokhin, & Maia Salholz-Hillel)

  • Many Paths: A new way to communicate, discuss, and conduct (meta-)research (Session leaders: Hans van Dijk & Esther Maassen)

How can meta-research improve research evaluation?

Session leaders: Sarah de Rijcke & Rinze Benedictus

The evaluation of research and researchers is currently based on biased metrics like the h-index and the journal impact factor. Several new initiatives have been launched in favor of indicators that correspond better to actual research quality. One of these initiatives is “Redefine excellence” from the University Medical Center (UMC) Utrecht. In this session, Rinze Benedictus briefly outlined the innovations that are being implemented at the UMC Utrecht, after which Sarah de Rijcke led a discussion on how we can properly evaluate whether these innovations are effective.

The session stimulated a productive discussion about differences and similarities between the sociology of science and meta-research. Both fields could be termed ‘research on research’, but they appear to be rather distinct, using very different language and concepts, and perhaps even springing from different concerns. However, the feeling in the session was that a lot could be gained from more interaction between the fields.

Promising ways to build bridges seem to be:

  • Shared conferences to share concepts, language, and maybe even research questions. A thematic (as opposed to method-based) approach to research questions could also facilitate interaction.

  • Identification of stakeholders: why are we doing research? For whom?

  • Shared teaching, e.g. through setting up a joint workshop by CWTS and Tilburg University/Department of Methodology

How can we ensure the future of meta-research?

Session leader: Olmo van den Akker

In this session, we set out to identify how we can ensure that the field of meta-research remains vital in the coming years. Although the original focus of the session was to identify grant opportunities for meta-research projects, the discussion quickly shifted to identifying journals that are open to submissions of meta-research studies. We drafted a list of such journals, which can be found here. The list is far from exhaustive, so please add journals if you can. It mainly pertains to journals and journal collections specifically catering to meta-research, but there are of course also general journals that welcome meta-research submissions. In that sense, we are lucky as meta-researchers that our studies are often suitable for a wide variety of journals.

That being said, one sentiment that arose in our discussion is that we are still missing a broad journal purely for meta-research papers. Such a journal would increase the visibility of our field, but there is also the danger that substantive researchers would engage less with meta-research studies published in such a journal (as opposed to journals in their substantive field). However, we concluded that this might not be so problematic, given that most researchers use Google Scholar or other databases to look for papers and are less and less committed to reading papers only from a few favorite journals. Below you can find a list of things we thought would be valuable to consider when launching a dedicated meta-research journal.

  • The journal should be broad and welcome submissions from all areas of meta-research (and even meta-meta-research), as long as the work critically studies the process and outcomes of science.

  • It would be good to have the journal link meta-research to the philosophy of science and science and technology studies (STS) as it appears that these related fields currently do not work together as much as they could.

  • It would be great if this journal incorporated the latest meta-research on the effectiveness of journal policies into its own policies.

  • The journal could even serve as a trial ground for journal innovations. For example, it could try out whether a designated statistical reviewer for each submission works (as is customary in medicine), or try out technological innovations facilitating SMART preregistration and multiverse analyses.

  • Initiating a Meta-Research Society with a dedicated conference could help fund the journal through society fees and conference fees.

  • The journal would do well to implement the CRediT authorship guidelines.

  • Preregistration, open data, open code, and open materials should be required, unless authors can convince the editorial team that it is not necessary in their case.

  • The editorial board should be paid, because a committed editorial board is crucial for the longevity and credibility of the journal. Preferably, reviewers would also be paid, but this would require substantially more funding.

In the summer of 2021, Tilburg University will organize another meta-research conference, which will probably span two days and focus more on the dissemination of meta-research studies. This conference could be a great place to launch a meta-research society and an accompanying meta-research journal.

How can meta-research improve statistical practices?

Session leader: Judith ter Schure

The conclusion we reached is that fields differ considerably in whether scientists, when designing experiments, actually feel that they are contributing to an accumulating series of studies. In some fields, there is awareness that the results of an experiment will someday end up in a meta-analysis together with existing experiments, while in other fields scientists aim to design experiments that are as 'refreshingly new' as possible. Picture a table in which studies that could be meta-analyzed together share a column: the latter approach describes scientists who mainly aim to start new columns. This pre-experimental perspective might differ from the meta-analysis perspective, in which a systematic search and inclusion criteria might still force those experiments into one column, even though they were never intended that way. This practice might erode trust in meta-analyses that try to synthesize effects from experiments that are too different.

The discussion was very hesitant about enforcing priority-setting rules on scientists (e.g., by funders or universities), such as whether a field needs more columns of 'refreshingly new' experiments or more replications of existing studies (extra rows), so that the field can settle a specific topic in one column with a meta-analysis.

In terms of statistical consequences, sequential processes might still be at play if scientists designing experiments know the results of other experiments that might end up in the same meta-analysis. Full exchangeability in meta-analysis means that no one would have decided differently about the feasibility or design of an experiment had the results of the other experiments been different. If that assumption cannot be met, we should treat studies as part of a series in our statistical meta-analysis, even without forcing this approach in the design phase.
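
To make the 'one column' idea concrete, here is a minimal sketch (in Python, with invented effect sizes and standard errors) of the fixed-effect, inverse-variance weighted pooling that a meta-analysis applies to studies it treats as exchangeable:

```python
# Minimal sketch of a fixed-effect (inverse-variance weighted) meta-analysis,
# the kind of pooling applied to studies that share one 'column'.
# Effect sizes and standard errors are invented for illustration.
import math

effects = [0.30, 0.12, 0.25, 0.05]      # per-study effect size estimates
std_errors = [0.10, 0.15, 0.08, 0.20]   # per-study standard errors

weights = [1 / se ** 2 for se in std_errors]  # inverse-variance weights
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled effect = {pooled:.3f}, SE = {pooled_se:.3f}")
```

If the exchangeability assumption fails, for instance because later studies were designed in light of earlier results, sequential analysis methods rather than this simple pooling would be the appropriate way to treat the studies as a series.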

Meta-research and the Psychological Science Accelerator

Session leaders: Marcel van Assen & Peder Isager

The Psychological Science Accelerator (PSA) is a standing network of more than 500 laboratories that collect large-scale, non-WEIRD data for psychology studies (see https://psysciacc.org and https://osf.io/93qpg). The PSA is currently running six many-lab projects, and a number of proposed future projects are under review. Importantly, the PSA has established a meta-research working group that is examining both how the PSA can best interface with the meta-research community and how meta-research can help bolster the quality of research projects conducted at the PSA (see https://docs.google.com/document/d/1D-NmvFE4qaC-dXAWQn16SBLsY9AABCrm8jDDy3-cD8w/edit?usp=sharing).

The session began with an overview of the PSA’s organization, presented by Peder, and a discussion of the importance of many-lab studies, presented by Marcel. The slides for these presentations can be found at https://osf.io/wnyga. Afterwards, the majority of the session was devoted to discussing seven predetermined topics related to how the meta-research field and the PSA may learn from each other. Participants could provide their suggestions on the seven topics independently, either in a Google Doc (https://bit.ly/2KIUHTW) or on paper. After about half an hour of independent work on the topics, we discussed the participants’ suggestions for the remainder of the session.

The following conclusions can be drawn from our discussion:

  1. There are multiple ways in which the PSA could contribute to meta-research (e.g., by providing access to lab-level and project-level data for conducted studies, and by allowing researchers to vary properties of research designs, like the measurement tools, to study effect size heterogeneity and advance theory by examining boundary conditions).

  2. There are multiple issues within the meta-research field that seem relevant to the PSA. Issues related to theory, measurement, and sample size determination were emphasized in particular.

  3. Meta-researchers seem interested in contributing to the PSA research endeavor, but emphasize a lack of both general information about the PSA organization and specific information about what contributions could or would entail (e.g., what volunteer efforts one could contribute and what studies would be relevant for the “piggy-back” submission policy).

In summary, there seems to be much enthusiasm for the PSA within the meta-research community, and there are many overlapping interests between the PSA and the meta-research community. The points raised in this session will be communicated to the PSA network of researchers, with the hope that it will help facilitate more communication between the two research communities in the future. 

Other resources

PSA Data & methods committee bylaws: https://osf.io/p65qe/ 

Proposing a theory committee at the PSA (blog post): https://pedermisager.netlify.com/post/psa-theory-committee/

How can meta-research improve peer review?

Session leader: Ana Marušić

The session started with a discussion about research approaches to different types of peer review: single-blind, double-blind, consultative, results-free, open, and post-publication peer review. In post-publication peer review, a system pioneered by F1000Research, peer review is completely open to study, as all steps in the peer review process and editorial decision making are transparent and available in the public domain. This is not possible for other types of peer review, which remain elusive to researchers. Even in journals that publish the prepublication history of an article (like the BioMed Central journals in biomedicine), information on the review process is available only for published articles, not for those that were rejected (which represent the majority of articles submitted to a journal). This is a serious hindrance to meta-research on journal peer review.

The participants discussed the possibility of gaining access to complete peer review data, as well as the recent activities of the COST Action PEERE – New Frontiers in Peer Review, which brought together researchers and publishers to establish a database on peer review in journals from different disciplines in order to study all aspects of peer review.

The participants in the session also discussed differences in peer review between disciplines, as well as the need for qualitative studies on peer review. This methodological approach would be particularly important for understanding the preferences and habits of peer reviewers. Recent findings, both from surveys and from analyses of peer review in journals, show that researchers prefer double-blind peer review when they are invited to review for a journal. A qualitative approach would be useful to understand this phenomenon and to build hypotheses for testing in a quantitative methodological approach.

How can meta-research improve research incentives?

Session leaders: Sophia Crüwell, Leonid Tiokhin, & Maia Salholz-Hillel

Everyone’s talking about “the incentives,” but what does that mean? How can we move beyond our intuitions and towards a deeper understanding of how incentives affect the efficiency and reliability of science? The aim of this session was to explore the role of incentives in science, with the goal of facilitating a broader discussion of what important questions remain unanswered.

We would like to invite both session participants and the wider community to contribute to the following library of resources on (meta)research relevant to incentives in science: https://www.zotero.org/groups/2421057/incentives_in_academic_science.

Some conclusions from our discussion:

  • We need to distinguish among incentives, stakeholders, behaviors, and outcomes.

    • Should we focus on predictors of career success rather than on incentives? However, career success is the outcome, which incentivizes the behaviors (e.g., publications).

  • We need to understand the parameters within which each incentive operates, i.e., a cost-benefit assessment with respect to outcomes. We could create a mapping or taxonomy to move the conversation forward, through an iterative, cross-stakeholder process that would then allow us to decide on next steps.

    • Rational choice theory

    • Delphi method: a cyclical process for circulating solutions between stakeholders

  • We should consider both intrinsic and extrinsic incentives.

    • Intrinsic incentives include what a person values, such as a desire to help patients or to discover something about the world. Extrinsic incentives include tenure and other career payoffs, prestige, etc. Extrinsic incentives may crowd out intrinsic ones.

    • Is it possible to separate them? Consider, for example, the proximate/ultimate distinction from biology. However, intrinsic vs. extrinsic may be a false dichotomy: extrinsic incentives shape intrinsic ones.

    • From a Mertonian sociology-of-science perspective, the drive to make a discovery should be as strong as the drive to refute a discovery. But this does not seem to be the case. So, what are researchers trying to optimize?

  • Why do incentives exist? They are used as a proxy to measure who is a good scientist, e.g., as measured by papers, publications, and citations.

    • Why do people leave science?

  • Possible definitions of incentives

    • An ontology/framework of types of incentives and the questions one should ask about them; is an incentive positive or negative?

    • Approach & avoidance approach 

    • Incentive can also be the purpose

    • Lots of theories of behavior change already exist; do we need to reinvent the wheel? 

    • Should we be talking about specific incentives?

    • Do incentivized behaviors have to be intentional?

    • Knowledge deficiency approach

Many Paths: A new way to conduct, discuss, and communicate (meta-)research

Session leaders: Hans van Dijk & Esther Maassen (in collaboration with Liberate Science)

Slides: https://github.com/emaassen/talks/blob/master/191122-mrd-many-paths.pdf

In Many Paths, we invite researchers from multiple disciplines to participate in a collaborative project to answer one research question, and we allow an emergent process to occur in the theory, data, results, and conclusion steps thereafter. Given that results are often path dependent, and *many paths* can be taken in a research process, we aim to examine what paths a research project initiates, prunes, and merges. The Many Paths model offers insight into how researchers from different disciplines approach and study the same question. We conduct and communicate the Many Paths research process in steps ("as-you-go"), instead of after the research is completed ("after-the-fact"). During our session, we also discussed the relationship of Many Paths to previous Many Projects (i.e., the Reproducibility Project Psychology, Many Labs, and Many Analysts).

Our goal for the session was to introduce the Many Paths model and to gather feedback and suggestions on the project. Reactions to the proposed model and the new way of communicating were generally positive. Many Paths appears to provide the opportunity to gather a large amount of data from various disciplines in a transparent manner, and it also allows for diversity and inclusivity. It would be interesting to find out whether and how researchers decide to collaborate across disciplines. However, they might be hesitant to do so because of the notable difference between what they are used to now (i.e., competition) and what they could do (i.e., collaboration). Whereas some people claimed that a project such as Many Paths would provide clear answers to the proposed research question, others expressed concerns about possible excessive fragmentation or disintegration of paths, and about difficulties in combining information from various conclusions and paths. Another possible issue that was mentioned relates to quality assurance for the research output of Many Paths: a threshold should be in place to ensure that contributions meet a certain quality standard. It should also be clear how the code of conduct would be enforced.
