
Announcements

We are recruiting recommenders (editors) from all research fields!

Your feedback matters! If you have authored or reviewed a Registered Report at Peer Community in Registered Reports, then please take 5 minutes to leave anonymous feedback about your experience, and view community ratings.


 

Latest recommendations

28 Feb 2024
STAGE 2
(Go to stage 1)

Genetically-modified animals as models of neurodevelopmental conditions: a review of systematic review reporting quality

Evidence for mixed quality of systematic reviews in preclinical animal studies of neurodevelopmental conditions

Recommended by Chris Chambers based on reviews by Marietta Papadatou-Pastou
Single gene alterations have been estimated to account for nearly half of neurodevelopmental conditions (NDCs), providing a crucial opportunity for animal models to illuminate the underlying mechanisms, causes and potential treatments. The use of systematic reviews (SRs) can, in principle, provide a powerful means to synthesise this evidence base; however, the reporting quality of previous SRs in preclinical animal research has been found lacking (Hunniford et al., 2021). In the current study, Wilson et al. (2023) undertook a review of systematic reviews to assess the characteristics and reporting quality of SRs that, in turn, synthesise research using genetically-modified animals to model NDCs. In particular, the authors extracted key features of the reviews (including, among others, the aim and primary research questions, the relevant animal model, and the number of studies in the SR), in addition to quality indicators such as risk of bias and completeness of reporting. In doing so, the authors aimed to enhance guidance on the conduct and reporting of systematic reviews in this area.
 
Of the twelve publications that met the preregistered search criteria, the completeness and quality of reporting were variable. Among the better reported characteristics were search strategies (9 of 12 articles), reporting of funding sources (10 of 12 articles) and use of animal data (11 of 12 articles). In contrast, only two articles reported whether the study protocol was preregistered, only three reported methods for assessing risk of bias, and just one included methods to analyse publication bias. In addition, the authors identified 19 review registrations via PROSPERO, most of which remained unpublished after their anticipated end dates. Overall, the results highlight the importance of adherence to reporting guidelines for increasing the transparency and reproducibility of SRs in this field.
 
The Stage 2 manuscript was evaluated over one round of in-depth review. Based on detailed responses by the authors, the recommender judged that the manuscript met the Stage 2 criteria and awarded a positive recommendation.
 
URL to the preregistered Stage 1 protocol: https://osf.io/952qk
 
Level of bias control achieved: Level 4. At least some of the data/evidence that was used to answer the research question already existed prior to IPA and was accessible in principle to the authors, but the authors certify that they did not access any part of that data/evidence prior to IPA.
 
List of eligible PCI RR-friendly journals:
 
References
 
1. Hunniford V. T., Montroy J., Fergusson D. A., Avey M. T., Wever K. E., McCann S. K., Foster M., Fox G., Lafreniere M., Ghaly M., Mannell S., Godwinska K., Gentles A., Selim S., MacNeil J., Sikora L., Sena E. S., Page M. J., Macleod M., Moher D., & Lalu M. M. (2021). Epidemiology and reporting characteristics of preclinical systematic reviews. PLOS Biology, 19:e3001177. https://doi.org/10.1371/journal.pbio.3001177
 
2. Wilson, E., Currie, G., Macleod, M., Kind, P. & Sena, E. S. (2023). Genetically-modified animals as models of neurodevelopmental conditions: a review of systematic review reporting quality [Stage 2]. Acceptance of Version 3 by Peer Community in Registered Reports. https://osf.io/s5xd4
Title: Genetically-modified animals as models of neurodevelopmental conditions: a review of systematic review reporting quality
Authors: Emma Wilson, Gillian Currie, Malcolm Macleod, Peter Kind and Emily S. Sena
Abstract (excerpt): Objective: Using genetically-modified animals to model neurodevelopmental conditions (NDCs) helps better our understanding of biology underlying these conditions. Animal research has unique characteristics not shared with cli...
Thematic fields: Medical Sciences
Recommender: Chris Chambers
Submission date: 2023-11-22 10:26:44
28 Feb 2024
STAGE 1

Changes in memory function in adults following SARS-CoV-2 infection: findings from the Covid and Cognition online study

Is memory affected in the long run following SARS-CoV-2 infection?

Recommended by Vishnu Sreekumar based on reviews by Phivos Phylactou, Dipanjan Ray and Mitul Mehta
COVID-19 has been suspected to have long-lasting effects on cognitive function. The SARS-CoV-2 virus may enter the central nervous system (Frontera et al., 2020; Miners, Kehoe, & Love, 2020), potentially explaining the observed detrimental effects of COVID-19 on verbal planning and reasoning (Hampshire et al., 2021; Wild et al., 2022), executive function (Hadad et al., 2022), and long-term memory (Guo et al., 2022). In particular, Guo et al. (2022) used verbal item recognition and non-verbal associative memory tasks. In the current study, Weinerova et al. (2024) propose a replication of Guo et al. (2022) that specifically disentangles the effect of COVID-19 infection status on both memory type (item vs. associative) and stimulus modality (verbal vs. non-verbal). Furthermore, Weinerova et al. (2024) propose to analyze cognitive function according to vaccination status before infection, providing a critical test of the potential protective effects of vaccination on cognitive function.

Data collection has been completed, with 325 participants remaining after exclusion criteria were applied (COVID group N = 232, No COVID group N = 93). Simulations assuming the effect size observed in Guo et al. (2022), a Bayesian t-test comparing the groups, and a Bayes factor threshold of 6 indicated that N = 320 is sufficient to detect an effect in 79% of simulations. The main analyses will be conducted using a Bayesian ANCOVA that allows for the inclusion of control variables such as age, sex, country, and education level. Both accuracy and reaction times from the item and associative recognition tasks will be analyzed as the dependent variables. In one analysis, vaccination status will be included as a between-subjects factor, to determine whether vaccination status at the time of infection influences subsequent cognitive function.
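To illustrate the kind of Bayes factor design analysis described above, here is a minimal Python sketch (using the scipy and pingouin libraries): it simulates two groups under an assumed standardized effect size and counts how often an independent-samples Bayes factor reaches the threshold of 6. The group sizes mirror the final sample described above, while the effect size, prior and number of simulations are illustrative placeholders rather than the authors' registered settings.

```python
# Minimal sketch of a Bayes factor design simulation, assuming a standardized
# effect size; not the authors' registered analysis code.
import numpy as np
from scipy.stats import ttest_ind
from pingouin import bayesfactor_ttest

rng = np.random.default_rng(2024)
n_covid, n_no_covid = 232, 93                        # group sizes reported above
effect_size, n_sims, bf_threshold = 0.35, 2000, 6    # placeholder settings

hits = 0
for _ in range(n_sims):
    covid = rng.normal(0.0, 1.0, n_covid)                # group assumed to show a deficit
    no_covid = rng.normal(effect_size, 1.0, n_no_covid)  # comparison group
    t_stat = ttest_ind(no_covid, covid).statistic
    bf10 = bayesfactor_ttest(t_stat, nx=n_no_covid, ny=n_covid)
    hits += bf10 >= bf_threshold

print(f"Proportion of simulations reaching BF10 >= {bf_threshold}: {hits / n_sims:.2f}")
```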

It is important to note that participants were recruited through long-COVID Facebook groups and clinics. Therefore, the results must be interpreted carefully to avoid generalizing to all COVID-19 infections. The data are part of a larger longitudinal study, and the current pre-registration applies only to the baseline timepoint for a cross-sectional analysis. The remaining longitudinal data collection is ongoing and is not part of the current pre-registration.  

The study plan was refined after one round of review, with input from three external reviewers who all agreed that the proposed study was well-designed and scientifically valid. The recommender then reviewed the revised manuscript and judged that the study met the Stage 1 criteria for in-principle acceptance (IPA).
 
URL to the preregistered Stage 1 protocol: https://osf.io/tjs5u (under temporary private embargo)
 
Level of bias control achieved: Level 3. At least some data/evidence that will be used to answer the research question has been previously accessed by the authors (e.g. downloaded or otherwise received), but the authors certify that they have not yet observed ANY part of the data/evidence.
 
List of eligible PCI RR-friendly journals:
 
 
References
 
1. Frontera, J., Mainali, S., Fink, E. L. et al. (2020). Global Consortium Study of Neurological Dysfunction in COVID-19 (GCS-NeuroCOVID): Study design and rationale. Neurocritical Care, 33, 25–34. https://doi.org/10.1007/s12028-020-00995-3

2. Guo, P., Benito Ballesteros, A., Yeung, S. P., Liu, R., Saha, A., Curtis, L., Kaser, M., Haggard, M. P. & Cheke, L. G. (2022). COVCOG 2: Cognitive and Memory Deficits in Long COVID: A Second Publication From the COVID and Cognition Study. Frontiers in Aging Neuroscience. https://doi.org/10.3389/fnagi.2022.804937  

3. Hadad, R., Khoury, J., Stanger, C., Fisher, T., Schneer, S., Ben-Hayun, R., Possin, K., Valcour, V., Aharon-Peretz, J. & Adir, Y. (2022). Cognitive dysfunction following COVID-19 infection. Journal of NeuroVirology, 28(3), 430–437. https://doi.org/10.1007/s13365-022-01079-y  

4. Hampshire, A., Trender, W., Chamberlain, S. R., Jolly, A. E., Grant, J. E., Patrick, F., Mazibuko, N., Williams, S. C., Barnby, J. M., Hellyer, P. & Mehta, M. A. (2021). Cognitive deficits in people who have recovered from COVID-19. EClinicalMedicine, 39, 101044. https://doi.org/10.1016/j.eclinm.2021.101044

5. Miners, S., Kehoe, P. G., & Love, S. (2020). Cognitive impact of COVID-19: looking beyond the short term. Alzheimer's research & therapy, 12, 1-16. https://doi.org/10.1186/s13195-020-00744-w 
 
6. Weinerova, J., Yeung, S., Guo, P., Yau, A., Horne, C., Ghinn, M., Curtis, L., Adlard, F., Bhagat, V., Zhang, S., Kaser, M., Bozic, M., Schluppeck, D., Reid, A., Tibon, R. & Cheke, L. G. (2024). Changes in memory function in adults following SARS-CoV-2 infection: findings from the Covid and Cognition online study. In principle acceptance of Version 2 by Peer Community in Registered Reports. https://osf.io/tjs5u

7. Wild, C. J., Norton, L., Menon, D. K., Ripsman, D. A., Swartz, R. H. & Owen, A. M. (2022). Disentangling the cognitive, physical, and mental health sequelae of COVID-19. Cell Reports Medicine, 3, 100750. https://doi.org/10.1016/j.xcrm.2022.100750 
Title: Changes in memory function in adults following SARS-CoV-2 infection: findings from the Covid and Cognition online study
Authors: Josefina Weinerova, Sabine Yeung, Panyuan Guo, Alice Yau, Connor Horne, Molly Ghinn, Lyn Curtis, Francess Adlard, Vidita Bhagat, Seraphina Zhang, Muzaffer Kaser, Mirjana Bozic, Denis Schluppeck, Andrew Reid, Roni Tibon and Lucy Cheke
Abstract (excerpt): SARS-CoV-2, the virus responsible for the Covid-19 pandemic, has been shown to have an impact on cognitive function, but the specific aspects of cognition that are affected remain unclear. In this Registered Report, we present a study aimed at ...
Thematic fields: Life Sciences
Recommender: Vishnu Sreekumar
Submission date: 2023-08-14 11:09:45
27 Feb 2024
STAGE 2
(Go to stage 1)

Revisiting the motivated denial of mind to animals used for food: Replication Registered Report of Bastian et al. (2012)

Confirmatory evidence that the denial of animal minds explains the "meat paradox"

Recommended by Chris Chambers based on reviews by Brock Bastian, Ben De Groeve and Florian Lange
The psychology of meat-eating offers a fascinating window into moral reasoning, cognition and emotion, as well as applications in the shift toward more sustainable and ethical alternatives to meat consumption. One key observation in this field is the so-called “meat paradox” – the tendency for people to simultaneously eat meat while also caring about animals. One way to resolve this conflict and reduce cognitive dissonance is for people to separate the concept of meat from animals, mentally disengaging from the origins of meat in order to make the act of consumption more ethically acceptable. Another potential explanation is a motivated “denial of mind”, in which people believe that animals lack the mental capacity to experience suffering; therefore, eating an animal is not a harm that the animal will experience. In support of the latter hypothesis, Bastian et al. (2012) found that animals judged to have greater mental capacities were also judged as less edible, and that simply reminding meat eaters that an animal was being raised for the purposes of meat consumption led to denial of its mental capacities.
 
Using a large-scale online design in 1000 participants, Jacobs et al. (2024) replicated two studies from Bastian et al. (2012): asking how the perceived mental capabilities of animals relates to both their perceived edibility and the degree of moral concern they elicit, and whether learning that an animal will be consumed influences perceptions of its mental capabilities. The original findings were successfully replicated. For study 1, attributions of mind were negatively related to animals’ edibility, positively related to negative affect towards eating animals, and positively related to moral concern for animals. For study 2, learning that an animal would be used for food led participants to attribute less mind to the animal. Overall, the results strengthen the conclusion that motivated denial of animal minds can be a mechanism for resolving the ‘meat paradox’.
 
The Stage 2 manuscript was evaluated over one round of in-depth review. Based on detailed responses to the reviewers' comments, the recommender judged that the manuscript met the Stage 2 criteria and awarded a positive recommendation.
 
URL to the preregistered Stage 1 protocol: https://osf.io/cru4z
 
Level of bias control achieved: Level 6. No part of the data or evidence that was used to answer the research question was generated until after IPA. 
 
List of eligible PCI RR-friendly journals:
 
References
 
1. Bastian, B., Loughnan, S., Haslam, N., & Radke, H. R. M. (2012). Don’t mind meat? The denial of mind to animals used for human consumption. Personality and Social Psychology Bulletin, 38, 247–256. https://doi.org/10.1177/0146167211424291
 
2. Jacobs, T. P., Wang, M., Leach, S., Loong, S. H., Khanna, M., Chan, K. W., Chau, H. T., Tam, Y. Y. & Feldman, G. (2024). Revisiting the motivated denial of mind to animals used for food: Replication and extension of Bastian et al. (2012) [Stage 2]. Acceptance of Version 2 by Peer Community in Registered Reports. https://osf.io/mwyde
Title: Revisiting the motivated denial of mind to animals used for food: Replication Registered Report of Bastian et al. (2012)
Authors: Tyler P. Jacobs, Meiying Wang, Stefan Leach, Ho Loong Siu, Mahika Khanna, Ka Wan Chan, Ho Ting Chau, Yuen Yan Tam and Gilad Feldman
Abstract (excerpt): Bastian et al. (2012) argued that the ‘meat paradox’ – caring for animals yet eating them – exemplifies the motivated moral disengagement driven by a psychologically aversive tension between people’s moral standards (caring for animals) and their b...
Thematic fields: Social sciences
Recommender: Chris Chambers
Submission date: 2023-08-10 21:19:16
26 Feb 2024
STAGE 2
(Go to stage 1)

Psychological predictors of long-term esports success: A Registered Report

Psychological predictors of long-term success in esports

Recommended based on reviews by Justin Bonny and Maciej Behnke
The competitive play of digital games known as ‘esports’ has surged in popularity over the past few decades. Millions of people now participate in esports as a hobby, and many consider becoming a professional esports athlete as a potential career path. However, the psychological factors that predict long-term success in esports remain unclear.
 
The current Registered Report by Martončik and colleagues (2024) offered a comprehensive test of potential predictors of long-term success in the two currently most impactful PC esports games, League of Legends (LoL) and Counter-Strike: Global Offensive (CSGO). A wide range of predictors were examined, including naive and deliberate practice, attention, intelligence, reaction time, and persistence. In both LoL and CSGO, deliberate practice did not meaningfully predict players' highest rank in the past 12 months, the indicator of long-term success. Younger age, however, predicted better performance in both titles. Lastly, two title-specific predictors emerged: in LoL, more non-deliberate practice hours predicted better performance, while in CSGO better attention predicted better performance.
 
To explain these findings, the authors proposed the information density theory: games differ in the amount of knowledge required to achieve long-term success. For information-heavy games such as LoL, naive practice hours may be more essential for players to acquire game-relevant information through play than for information-light games such as CSGO. This might also explain why deliberate practice did not meaningfully predict performance in LoL and CSGO. While this theory still needs to be tested further, the current results will be useful to individuals who are considering pursuing a professional career in esports, as well as to professional and semi-professional esports teams and coaches.
 
This Stage 2 manuscript was assessed over two rounds of in-depth review. The recommenders judged that the responses to the reviewers' comments were satisfactory, and that the manuscript met the Stage 2 criteria for recommendation.
 
URL to the preregistered Stage 1 protocol: https://osf.io/84zbv
 
Level of bias control achieved: Level 6. No part of the data or evidence that was used to answer the research question was generated until after IPA.
 
List of eligible PCI RR-friendly journals: 
 
References
 
Martončik, M., Karhulahti, V.-M., Jin, Y. & Adamkovič, M. (2023). Psychological predictors of long-term esports success: A Registered Report [Stage 2]. Acceptance of Version 1.7 by Peer Community in Registered Reports. https://osf.io/b6vdf
Title: Psychological predictors of long-term esports success: A Registered Report
Authors: Marcel Martončik, Veli-Matti Karhulahti, Yaewon Jin and Matúš Adamkovič
Abstract (excerpt): The competitive play of digital games, esports, has attracted worldwide attention of hundreds of millions of young people. Although esports players are known to practice in similar ways to other athletes, it remains largely unknown what factors...
Thematic fields: Social sciences
Recommender: Zhang Chen
Submission date: 2023-09-26 07:15:41
26 Feb 2024
STAGE 1

Lure of choice revisited: Replication and extensions Registered Report of Bown et al. (2003)

Replicating the "lure of choice" phenomenon

Recommended by Patrick Savage based on reviews by Hu Chuan-Peng and Gakuto Chiba
The "lure of choice" refers to the idea that we prefer to preserve the option to choose even when the choice is not helpful. In a classic study cited hundred of times, Bown et al. (2003) reported evidence for the lure of choice from a series of studies involving choices between competing options of night clubs, bank savings accounts, casino spinners, and the Monty Hall door choice paradigm. In all cases, participants tended to prefer to choose an option when paired with a "lure", even when that lure was objectively inferior (e.g., same probability of winning but lower payoff).
 
The lure of choice applies to a wide range of situations that many of us face in daily life, and Bown et al.'s findings have influenced the way organizations present choices to prospective users. Despite their theoretical and practical impact, Bown et al.'s findings have not previously been directly replicated, even as the importance of replication studies has become increasingly acknowledged (Nosek et al., 2022).
 
Here, Chan & Feldman (2024) outline a close replication of Bown et al. (2003) that will replicate and extend their original design. By unifying Bown et al.'s multiple studies into a single paradigm with which they will collect data from approximately 1,000 online participants via Prolific, they will have substantially greater statistical power than the original study to detect the predicted effects. They will follow LeBel et al.’s (2019) criteria for evaluating replicability, such that it will be considered a successful replication depending on how many of the 4 scenarios show a signal in the same direction as Bown et al.’s original results (at least 3 out of 4 scenarios = successful replication; no scenarios = failed replication; 1 or 2 scenarios = mixed results replication). They have also added additional controls including a neutral baseline choice without a lure, further ensuring the the validity and interpretability of their eventual findings.
 
One of the goals in creating Peer Community In Registered Reports (PCI RR) was to increase the availability of publishing venues for replication studies, and so PCI RR is well-suited to the proposed replication. Feldman’s lab has also pioneered the use of PCI RR for direct replications of previous studies (e.g., Zhu & Feldman, 2023), and the current submission uses an open-access template he developed (Feldman, 2023). This experience combined with PCI RR’s efficient scheduled review model meant that the current full Stage 1 protocol was able to go from initial submission, receive detailed peer review by two experts, and receive in-principle acceptance (IPA) for the revised submission, all in less than one month.
 
URL to the preregistered Stage 1 protocol: https://osf.io/8ug9m
 
Level of bias control achieved: Level 6. No part of the data or evidence that will be used to answer the research question yet exists and no part will be generated until after IPA.
 
List of eligible PCI RR-friendly journals:
 
 
References
 
Bown, N. J., Read, D. & Summers, B. (2003). The lure of choice. Journal of Behavioral Decision Making, 16(4), 297–308. https://doi.org/10.1002/bdm.447
 
Chan, A. N. Y. & Feldman, G. (2024). The lure of choice revisited: Replication and extensions Registered Report of Bown et al. (2003) [Stage 1]. In principle acceptance of Version 2 by Peer Community In Registered Reports. https://osf.io/8ug9m
 
Feldman, G. (2023). Registered Report Stage 1 manuscript template. https://doi.org/10.17605/OSF.IO/YQXTP
 
LeBel, E. P., Vanpaemel, W., Cheung, I. & Campbell, L. (2019). A brief guide to evaluate replications. Meta-Psychology, 3. https://doi.org/10.15626/MP.2018.843
 
Nosek, B. A., Hardwicke, T. E., Moshontz, H., Allard, A., Corker, K. S., Dreber, A., ... & Vazire, S. (2022). Replicability, robustness, and reproducibility in psychological science. Annual Review of Psychology, 73(1), 719-748. https://doi.org/10.1146/annurev-psych-020821-114157
 
Zhu, M. & Feldman, G. (2023). Revisiting the links between numeracy and decision making: Replication Registered Report of Peters et al. (2006) with an extension examining confidence. Collabra: Psychology, 9(1). https://doi.org/10.1525/collabra.77608
Title: Lure of choice revisited: Replication and extensions Registered Report of Bown et al. (2003)
Authors: Nga Yi (Angela) Chan and Gilad Feldman
Abstract (excerpt): [IMPORTANT: Abstract, method, and results were written using a randomised dataset produced by Qualtrics to simulate what these sections will look like after data collection. These will be updated following the data collection. For the purpose o...
Thematic fields: Social sciences
Recommender: Patrick Savage
Submission date: 2023-11-15 00:40:47
14 Feb 2024
STAGE 1

Detecting DIF in Forced-Choice Assessments: A Simulation Study Examining the Effect of Model Misspecification

Developing differential item functioning (DIF) testing methods for use in forced-choice assessments

Recommended by Amanda Montoya based on reviews by Timo Gnambs and 2 anonymous reviewers
Traditional Likert-type items are commonly used but can elicit response bias. An alternative approach, the forced-choice question, requires respondents to rank-order all items. Forced-choice questions offer some advantages but require advanced item response theory analysis to generate scores that are comparable across individuals and to evaluate the properties of those scales. However, there has been limited discussion of how to test differential item functioning (DIF) in these scales. In a previous study, Lee et al. (2021) proposed a method for testing DIF.
 
Here, Plantz et al. (2024) explore the implications of incorrectly specified anchors for DIF detection in forced-choice items. The study proposes a Monte Carlo simulation that manipulates sample size, equality of sample size across groups, effect size, percentage of differentially functioning items, analysis approach, anchor set size, and percentage of DIF blocks in the anchor set. The study aims to answer research questions about the type I error and power of DIF detection strategies under a variety of circumstances, both evaluating whether the results from Lee et al. (2021) generalize to misspecified models and expanding to new research questions. The results will provide practical guidance for DIF testing with forced-choice questions. An important limitation is that the study explores only uniform DIF, not non-uniform DIF. Additionally, as with all simulation studies, the results apply only to the conditions that are simulated and therefore rely on a realistic selection of simulation conditions. The authors have selected conditions to match reality where data are available, and relied on previous simulations where they are not.
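For readers unfamiliar with this kind of design, the sketch below shows how a fully crossed Monte Carlo grid over the manipulated factors could be enumerated in Python. Only the factor names come from the study description; the levels shown are hypothetical placeholders rather than the authors' registered conditions.

```python
# Minimal sketch of a fully crossed simulation design; factor LEVELS are invented.
from itertools import product

factors = {
    "sample_size": [500, 1000],                      # hypothetical levels
    "group_size_ratio": ["equal", "unequal"],
    "dif_effect_size": [0.25, 0.50],
    "pct_dif_items": [0.1, 0.3],
    "analysis_approach": ["all-others-as-anchors", "constant-anchor"],
    "anchor_set_size": [1, 3],
    "pct_dif_blocks_in_anchor": [0.0, 0.5],
}

design = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(f"{len(design)} simulation cells")  # 2**7 = 128 cells in this sketch

# Each cell would be replicated many times; type I error is the share of
# DIF-free items falsely flagged, and power is the share of true-DIF items detected.
```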
 
This Stage 1 manuscript was evaluated over two rounds of review by two reviewers with expertise in psychometrics. An additional round of review was completed by the recommender only. Based on the merits of the original submission and responsiveness of the authors to requests from the reviewers, the recommender judged that the manuscript met the Stage 1 criteria and therefore awarded in-principle acceptance (IPA).​
 
URL to the preregistered Stage 1 protocol: https://osf.io/p8awx
 
Level of bias control achieved: Level 6. No part of the data or evidence that will be used to answer the research question yet exists and no part will be generated until after IPA.
 
List of eligible PCI RR-friendly journals:
 

References
 
1. Lee, P., Joo, S.-H. & Stark, S. (2021). Detecting DIF in multidimensional forced choice measures using the Thurstonian Item Response Theory Model. Organizational Research Methods, 24, 739–771. https://doi.org/10.1177/1094428120959822
 
2. Plantz, J. W.,  Brown, A., Wright, K. & Flake, J. K. (2024). Detecting DIF in Forced-Choice Assessments: A Simulation Study Examining the Effect of Model Misspecification. In principle acceptance of Version 3 by Peer Community in Registered Reports. https://osf.io/p8awx
Title: Detecting DIF in Forced-Choice Assessments: A Simulation Study Examining the Effect of Model Misspecification
Authors: Jake Plantz, Anna Brown, Keith Wright and Jessica K. Flake
Abstract (excerpt): On a forced-choice (FC) questionnaire, the respondent must rank two or more items instead of indicating how much they agree with each of them. Research demonstrates that this format can reduce response bias. However, the data are ipsative, resu...
Thematic fields: Social sciences
Recommender: Amanda Montoya
Submission date: 2023-09-06 22:43:32
14 Feb 2024
STAGE 1

Restriction of researcher degrees of freedom through the Psychological Research Preregistration-Quantitative (PRP-QUANT) Template

Examining the restrictiveness of the PRP-QUANT Template

Recommended by Daniel Lakens based on reviews by Marjan Bakker and 1 anonymous reviewer
The Psychological Research Preregistration-Quantitative (PRP-QUANT) Template was created in 2022 to provide more structure and detail to preregistrations. The goal of the current study is to test whether the PRP-QUANT Template indeed restricts the flexibility of preregistered hypothesis tests more than other existing templates. This question is important because metascientific research has shown that preregistrations are often of low quality (Bakker et al., 2020) and that hypothesis tests from preregistrations are still selectively reported (van den Akker, van Assen, Enting, et al., 2023). Improving the quality of preregistrations therefore matters, and if a better template helps, promoting its wider adoption would be a cost-effective way to achieve this.
 
In the current study, Spitzer and Mueller (2024) will follow the procedure of a previous metascientific study by Heirene et al. (2021). Seventy-four existing preregistrations using the PRP-QUANT Template are available and will be compared with an existing dataset coded by Bakker and colleagues (2020). The sample size is limited but sufficient to detect differences large enough to matter, although smaller differences may not be detectable with the currently available sample. Restrictiveness will be coded on 23 items, and adherence to or deviations from the preregistration will be coded as well; because such deviations are common, whether this template reduces their likelihood is an important question. Two coders will code all studies.
 
The study should provide a useful initial evaluation of the PRP-QUANT Template and has the potential for practical impact if the template shows clear benefits. Both authors have declared conflicts of interest related to the PRP-QUANT Template, making the Registered Report format a fitting approach to prevent confirmation bias from influencing the reported results.
 
This Stage 1 manuscript was evaluated over two rounds of in-depth review by two expert reviewers and the recommender. After the revisions, the recommender judged that the manuscript met the Stage 1 criteria and therefore awarded in-principle acceptance (IPA).
 
URL to the preregistered Stage 1 protocol: https://osf.io/vhezj
 
Level of bias control achieved: Level 3. At least some data/evidence that will be used to answer the research question has been previously accessed by the authors (e.g. downloaded or otherwise received), but the authors certify that they have not yet observed ANY part of the data/evidence.
 
List of eligible PCI RR-friendly journals:
 
 
References
 
1. van den Akker, O. R., van Assen, M. A. L. M., Bakker, M., Elsherif, M., Wong, T. K., & Wicherts, J. M. (2023). Preregistration in practice: A comparison of preregistered and non-preregistered studies in psychology. Behavior Research Methods. https://doi.org/10.3758/s13428-023-02277-0
 
2. Bakker, M., Veldkamp, C. L. S., Assen, M. A. L. M. van, Crompvoets, E. A. V., Ong, H. H., Nosek, B. A., Soderberg, C. K., Mellor, D., & Wicherts, J. M. (2020). Ensuring the quality and specificity of preregistrations. PLOS Biology, 18(12), e3000937. https://doi.org/10.1371/journal.pbio.3000937
 
3. Spitzer, L. & Mueller, S. (2024). Stage 1 Registered Report: Restriction of researcher degrees of freedom through the Psychological Research Preregistration-Quantitative (PRP-QUANT) Template. In principle acceptance of Version 3 by Peer Community in Registered Reports. https://osf.io/vhezj
 
4. Heirene, R., LaPlante, D., Louderback, E. R., Keen, B., Bakker, M., Serafimovska, A., & Gainsbury, S. M. (2021). Preregistration specificity & adherence: A review of preregistered gambling studies & cross-disciplinary comparison. PsyArXiv. https://doi.org/10.31234/osf.io/nj4es
Title: Restriction of researcher degrees of freedom through the Psychological Research Preregistration-Quantitative (PRP-QUANT) Template
Authors: Lisa Spitzer and Stefanie Mueller
Abstract (excerpt): Preregistration can help to restrict researcher degrees of freedom and thereby ensure the integrity of research findings. However, its ability to restrict such flexibility depends on whether researchers specify their study plan in sufficient de...
Thematic fields: Social sciences
Recommender: Daniel Lakens
Submission date: 2023-06-01 10:39:20
10 Feb 2024
STAGE 1

Using Shakespeare to Answer Psychological Questions: Complexity and Mental Representability of Character Networks

Complexity of Shakespeare’s Social Networks

Recommended by Veli-Matti Karhulahti based on reviews by Matúš Adamkovič, James Stiller, Tomáš Lintner and Matus Adamkovic
The rapid methodological development in the digital humanities keeps opening new possibilities to better understand our cultural artifacts and, in the process, ourselves. Among the most historically influential works of literary culture are the plays of Shakespeare, which continue to be read and treasured around the world. Although the social networks of Shakespeare’s plays have attracted scientific attention for more than two decades (Stiller et al., 2003), the understanding of their complexity in terms of character networks remains limited and not fully contextualized within the larger landscape of European drama.
 
In the present Registered Report, Thurn and colleagues (2024) apply Kolmogorov complexity analysis to investigate the social networks in 37 existing plays of Shakespeare. The authors replicate the original work by Stiller et al. (2003) and situate the findings in a larger regional context by further analyzing over 3,000 plays available in the European Drama Corpus. Ultimately, the authors explore the relationship between (Kolmogorov) complexity and the size of character networks, as well as the robustness of their results to possible researcher decisions in the analytic process.
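Kolmogorov complexity is not computable exactly and is commonly approximated by compression. The Python sketch below illustrates one such approximation for a character network (compressing the flattened adjacency matrix); it is an illustrative stand-in under that assumption, not the authors' registered pipeline, and the toy network is invented for the example.

```python
# Minimal sketch of a compression-based approximation to Kolmogorov complexity
# for a character network; not the authors' analysis code.
import zlib
import networkx as nx

def approx_complexity(graph: nx.Graph) -> int:
    """Compressed length (in bytes) of the graph's flattened adjacency matrix."""
    adj = nx.to_numpy_array(graph, dtype=int)
    bit_string = "".join(str(v) for v in adj.flatten())
    return len(zlib.compress(bit_string.encode()))

# Toy 'play': characters as nodes, an edge when two characters share a scene.
toy_play = nx.Graph([("Hamlet", "Ophelia"), ("Hamlet", "Claudius"),
                     ("Claudius", "Gertrude"), ("Hamlet", "Gertrude")])
print(approx_complexity(toy_play))
```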
 
This Stage 1 manuscript was evaluated over three rounds of in-depth review by four expert reviewers from the research fields of literature, networks, and social analysis. Based on the authors’ careful revisions and responses to the reviewers’ feedback, the recommender judged that the manuscript met the Stage 1 criteria and therefore awarded in-principle acceptance (IPA).
 
URL to the preregistered Stage 1 protocol: https://osf.io/6uw27
 
Level of bias control achieved: Level 3. At least some data/evidence that will be used to answer the research question has been previously accessed by the authors (e.g. downloaded or otherwise received), but the authors certify that they have not yet observed ANY part of the data/evidence.
 
List of eligible PCI RR-friendly journals:
 
 
References
 
1. Stiller, J., Nettle, D. & Dunbar, R. I. M. (2003). The small world of Shakespeare's plays. Human Nature, 14, 397-408. https://doi.org/10.1007/s12110-003-1013-1

2. Thurn, C., Sebben, S. & Kovacevic, Z. (2024). Using Shakespeare to Answer Psychological Questions: Complexity and Mental Representability of Character Networks. In principle acceptance of Version 3 by Peer Community in Registered Reports. https://osf.io/6uw27
Title: Using Shakespeare to Answer Psychological Questions: Complexity and Mental Representability of Character Networks
Authors: Christian M. Thurn, Simone Sebben and Zoran Kovacevic
Abstract (excerpt): Theater plays are a cultural product that can be used to learn about the capacity of human cognition. We argue that Kolmogorov complexity may be suited to operationalize the demand that is put onto a recipient's cognitive system to represent...
Thematic fields: Humanities, Social sciences
Recommender: Veli-Matti Karhulahti
Submission date: 2023-06-16 12:40:14
05 Feb 2024
STAGE 2
(Go to stage 1)

Functional MRI brain state occupancy in the presence of cerebral small vessel disease -- a pre-registered replication analysis of the Hamburg City Health Study

Replicable dynamic functional connectivity and cognitive correlates of cerebral small vessel disease in the Hamburg City Health Study

Recommended by Robert McIntosh based on reviews by 1 anonymous reviewer
In a previous analysis of data from 988 participants in the Hamburg City Health Study (HCHS), Schlemm and colleagues (2022) reported significant associations between the extent of cerebral small vessel disease (cSVD) and dynamic functional connectivity measures from resting state fMRI. Specifically, the volume of white matter hyperintensities of presumed vascular origin, a structural indicator of cSVD, was negatively related to the proportion of time (‘fractional occupancy’) spent in the two most occupied functional brain states. Reduced fractional occupancy was also associated with longer times to complete part B of the Trail Making Test.
 
In the present Registered Report, Ingwersen and colleagues (2023) successfully replicated these associations between structural, functional and cognitive measures in a sample of 1651 HCHS participants not included in the earlier study. An exploratory multiverse analysis found that the associations were generally robust to different brain parcellation and confound regression strategies. These replicable patterns reinforce the idea that cSVD may disrupt the brain’s ability to enter and maintain distinct functional modes, and that these changes in functional dynamics are predictive of cognitive impairment.
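For readers unfamiliar with the measure, 'fractional occupancy' is simply the proportion of time points (fMRI volumes) assigned to each discrete brain state. A minimal Python sketch, with an invented state sequence, is shown below; it is not the authors' analysis code.

```python
# Minimal sketch of fractional occupancy from a discrete brain-state sequence.
from collections import Counter

def fractional_occupancy(state_sequence):
    """Return the proportion of time points spent in each brain state."""
    counts = Counter(state_sequence)
    total = len(state_sequence)
    return {state: n / total for state, n in counts.items()}

# Toy sequence: one state label per fMRI volume (TR); values are invented.
states = [1, 1, 2, 1, 3, 2, 1, 1, 2, 3, 1, 1]
print(fractional_occupancy(states))  # e.g. {1: 0.58, 2: 0.25, 3: 0.17}
```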
 
The Stage 2 manuscript was assessed over one round of in-depth review. The recommender judged that responses to reviewer comments were appropriate, and that the manuscript met the Stage 2 criteria for recommendation.
 
URL to the preregistered Stage 1 protocol: https://osf.io/9yhzc
 
Level of bias control achieved: Level 2. At least some data/evidence that was used to answer the research question had been accessed and partially observed by the authors prior to Stage 1 in-principle acceptance, but the authors certify that they had not yet observed the key variables within the data that were used to answer the research question AND they took additional steps to maximise bias control and rigour.
 
List of eligible PCI RR-friendly journals:
 
 
References
 
1. Schlemm, E., Frey, B. M., Mayer, C., Petersen, M., Fiehler, J., Hanning, U., Kühn, S., Twerenbold, R., Gallinat, J., Gerloff, C., Thomalla, G. & Cheng, B. (2022). Equalization of brain state occupancy accompanies cognitive impairment in cerebral small vessel disease. Biological Psychiatry, 92, 592-602. https://doi.org/10.1016/j.biopsych.2022.03.019
 
2. Ingwersen, T., Mayer, C., Petersen, M., Frey, B. M., Fiehler, J., Hanning, U., Kühn, S., Gallinat, J., Twerenbold, R., Gerloff, C., Cheng, B., Thomalla, G. & Schlemm, E. (2023). Functional MRI brain state occupancy in the presence of cerebral small vessel disease -- a pre-registered replication analysis of the Hamburg City Health Study. Acceptance of Version 2.01 by Peer Community in Registered Reports. https://github.com/csi-hamburg/HCHS-brain-states-RR/blob/f9d00adbbcf9593d8d191bf5b93912141b80ab1b/manuscript/build/main.pdf
 
Title: Functional MRI brain state occupancy in the presence of cerebral small vessel disease -- a pre-registered replication analysis of the Hamburg City Health Study
Authors: Thies Ingwersen, Carola Mayer, Marvin Petersen, Benedikt M. Frey, Jens Fiehler, Uta Hanning, Simone Kühn, Jürgen Gallinat, Raphael Twerenbold, Christian Gerloff, Bastian Cheng, Götz Thomalla and Eckhard Schlemm
Abstract (excerpt): Objective: To replicate recent findings on the association between the extent of cerebral small vessel disease (cSVD), functional brain network dedifferentiation, and cognitive impairment. Methods: We a...
Thematic fields: Life Sciences, Medical Sciences
Recommender: Robert McIntosh
Submission date: 2023-10-17 09:53:02
19 Jan 2024
STAGE 1

A systematic review of social connection inventories

Improving the measurement of social connection

Recommended by Dorothy Bishop based on reviews by Jacek Buczny, Richard James and Alexander Wilson
This is an ambitious systematic review that uses a combination of quantitative and qualitative methods to make the measurement of the construct of social connection more rigorous. Social connection is a heterogeneous construct that includes aspects of structure, function and quality. Here, Paris et al. (2024) will use predefined methods to create a database of social connection measures, and will assess heterogeneity of items using human coders and ChatGPT. This database will form the basis of a second systematic review which will look at evidence for validity and measurement properties. This study will also look at the population groups and country of origin for which different measures were designed, making it possible to see how far culturally specific issues affect the content of measures in this domain.
 
The questions asked by this study are exploratory and descriptive and so the importance of pre-registration is in achieving clear criteria for how each question is addressed, rather than evidential criteria for hypothesis-testing.
 
The authors responded comprehensively to three reviewer reports. This study will provide a wealth of useful information for those studying social connection, and should serve to make the literature in this field more psychometrically robust and less fragmented.
 
URL to the preregistered Stage 1 protocol: https://osf.io/796uv
 
Level of bias control achieved: Level 3. At least some data/evidence that will be used to answer the research question has been previously accessed by the authors (e.g. downloaded or otherwise received), but the authors certify that they have not yet observed ANY part of the data/evidence.
 
List of eligible PCI RR-friendly journals:
 
 
References
 

1. Paris, B., Brickau, D., Stoianova, T., Luhmann, M., Mikton, C., Holt-Lunstad, J., Maes, A., & IJzerman, H. (2024). A systematic review of social connection inventories. In principle acceptance of Version 3 by Peer Community in Registered Reports. https://osf.io/796uv

Title: A systematic review of social connection inventories
Authors: Bastien Paris, Debora Brickau, Tetiana Stoianova, Maike Luhmann, Christopher Mikton, Julianne Holt-Lunstad, Marlies Maes and Hans IJzerman
Abstract (excerpt): Social connection is vital to health and longevity. To date, a plethora of instruments exists to measure social connection, assessing a variety of aspects of social connection like loneliness, social isolation, or social support. For comparabil...
Thematic fields: Social sciences
Recommender: Dorothy Bishop
Reviewers: Alexander Wilson, Jacek Buczny, Richard James
Submission date: 2023-07-09 21:33:01