Submit a report

Announcements

Please note: To accommodate reviewer and recommender holiday schedules, we will be closed to submissions from 1st July to 1st September. During this time, reviewers will be able to submit reviews and recommenders will issue decisions, but no new or revised submissions can be made by authors. The one exception to this rule is that authors using the scheduled track who submit their initial Stage 1 snapshot prior to 1st July can choose a date within the shutdown period to submit their full Stage 1 manuscript.

We are recruiting recommenders (editors) from all research fields!

Your feedback matters! If you have authored or reviewed a Registered Report at Peer Community in Registered Reports, then please take 5 minutes to leave anonymous feedback about your experience, and view community ratings.


 

Latest recommendations

Id | Title | Authors | Abstract | Picture | Thematic fields | Recommender | Reviewers | Submission date
15 Oct 2023
STAGE 1

Can one-shot learning be elicited from unconscious information?

Can unconscious experience drive perceptual learning?

Recommended by Vishnu Sreekumar based on reviews by Jeffrey Saunders and 1 anonymous reviewer
Unconscious priming effects have fascinated not just psychologists but also ad-makers and consumers alike. A related phenomenon in perception is illustrated by presenting participants with two-tone images, which are degraded versions of images of objects and scenes. These two-tone images look like, and are indeed judged to be, meaningless dark and light patches. Once the actual template image has been presented, however, the two-tone image is accurately recognized. This perceptual learning is abrupt, robust, and long-lasting (Daoudi et al., 2017). Surprisingly, Chang et al. (2016) showed that such perceptual disambiguation of two-tone images can happen even in the absence of conscious awareness of having seen the template image.
 
In the current study, Halchin et al. (2023) propose a conceptual replication of Chang et al. (2016) with important modifications to the procedures that address limitations of the earlier work. Specifically, there was no explicit manipulation of levels of conscious awareness of the template images in the original study. Therefore, miscategorization of low-confidence awareness as unawareness could have led to an erroneous conclusion about unconscious priors guiding perceptual learning. Such miscategorization errors, and how to tackle them, are of interest to the broader field of consciousness studies. Furthermore, a conceptual replication of Chang et al. (2016) is also timely given that prior related work suggests that masking not only impairs conscious awareness of visual features but also blocks the processing of higher-level information about the images (e.g. object category).
 
To address the issues identified above, Halchin et al. (2023) propose to experimentally manipulate conscious awareness by masking the template image after a very short delay (i.e., a short stimulus onset asynchrony; SOA) or after a longer delay, to induce weak and strong conscious awareness, respectively. The SOAs were validated through pilot studies. Furthermore, they include a four-point perceptual awareness scale instead of the original yes/no options to gauge participants’ subjective awareness of the template images. The authors also propose multiple experiments that include different ways of testing participants’ objective ability to identify the masked template images. Last but not least, the proposed design includes a stronger control condition than the original study by using masked images created from related images (e.g. belonging to the same semantic category). Depending on the results obtained in the main experiments, the inclusion of this control allows the authors to conduct a third experiment to investigate whether the results in the first two can be explained by semantic priming. The proposed study is sufficiently powered (as demonstrated through simulations), and Bayesian statistical procedures will be used to test the main hypotheses. In summary, the proposed work offers a significant improvement in experimental procedures over the original study. If the Chang et al. (2016) results are replicated, the stronger design in the current study is likely to lead to a better understanding of the mechanisms by which unconscious priors guide perceptual learning. On the other hand, a failure to replicate not only Chang et al.’s (2016) results but also the effects across the three experiments in the current study would raise legitimate questions about the reality of unconscious information guiding perceptual learning.
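The authors' actual power simulations and Bayesian analysis plan are specified in the Stage 1 protocol. Purely as a generic illustration of how simulation-based power can be estimated for a Bayes-factor test, the minimal Python sketch below simulates many experiments and counts how often the Bayes factor clears an evidence threshold; the effect size, sample size, and threshold are arbitrary placeholders rather than values taken from Halchin et al. (2023).

# Minimal sketch of simulation-based power for a Bayes-factor test:
# the proportion of simulated experiments in which BF10 exceeds a threshold.
# Effect size, n, and threshold are illustrative placeholders only.
import numpy as np
import pingouin as pg  # provides a JZS Bayes factor for t-tests

rng = np.random.default_rng(1)
n, true_d, bf_threshold, n_sims = 40, 0.5, 6.0, 2000

hits = 0
for _ in range(n_sims):
    # one-sample (or paired-difference) data with standardized effect true_d
    x = rng.normal(loc=true_d, scale=1.0, size=n)
    t_val = x.mean() / (x.std(ddof=1) / np.sqrt(n))
    bf10 = float(pg.bayesfactor_ttest(t_val, n))  # default Cauchy prior
    hits += bf10 > bf_threshold

print(f"Estimated probability of BF10 > {bf_threshold}: {hits / n_sims:.2f}")

The same logic extends to the authors' actual design by swapping in their planned model and hypotheses; only the simulated data and the Bayes-factor computation would change.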
 
The study plan was refined across two rounds of review, with input from two external reviewers who both agreed that the proposed study is well designed, timely, and scientifically valid. The recommender then reviewed the revised manuscript and judged that the study met the Stage 1 criteria for in-principle acceptance (IPA).
 
URL to the preregistered Stage 1 protocol: https://osf.io/juckg
 
Level of bias control achieved: Level 3. At least some of the data/evidence that will be used to answer the research question already exists AND is accessible in principle to the authors BUT the authors certify that they have not yet accessed any part of that data/evidence.
 
List of eligible PCI RR-friendly journals:
 
 
References
 
1. Daoudi, L. D., Doerig, A., Parkosadze, K., Kunchulia, M. & Herzog, M. H. (2017). The role of one-shot learning in #TheDress. Journal of Vision, 17, 15-15. https://doi.org/10.1167/17.3.15 
 
2. Chang, R., Baria, A. T., Flounders, M. W., & He, B. J. (2016). Unconsciously elicited perceptual prior. Neuroscience of Consciousness, 2016. https://doi.org/10.1093/nc/niw008 
 
3. Halchin, A.-M., Teufel, C. & Bompas, A. (2023). Can one-shot learning be elicited from unconscious information? In principle acceptance of Version 3 by Peer Community in Registered Reports. https://osf.io/juckg
Title: Can one-shot learning be elicited from unconscious information? | Authors: Adelina-Mihaela Halchin, Christoph Teufel, Aline Bompas | Abstract: The human brain has the remarkable ability to make sense of highly impoverished images once relevant prior information is available. Fitting examples of this effect are two-tone images, which initially look like meaningless black-and-white patc... | Thematic fields: Life Sciences | Recommender: Vishnu Sreekumar | Submission date: 2022-11-30 00:34:07
28 Sep 2023
STAGE 1

Hormonal Contraceptive Use and Women’s Sexuality and Well-Being: Estimating Treatment Effects and Their Heterogeneity Based on Longitudinal Data

The Causal Effects of Hormonal Contraceptives on Psychological Outcomes

Recommended by Thomas Evans based on reviews by Summer Mengelkoch and 2 anonymous reviewers
Ensuring universal access to sexual and reproductive health and reproductive rights is a global concern, exemplified by goal 5.6 of the Sustainable Development Goals (UN General Assembly, 2015). Whilst the range of contraceptive options has increased, our understanding of the impacts of their use on women remains inadequate and represents a key barrier to positive change in policies and practices. In particular, there is little consensus on the expected impacts of hormonal contraceptive use on women's sexuality and wellbeing.
 
In the current programmatic submission, Botzet et al. (2023) argue that this inconclusive evidence base could be due to wide heterogeneity in responses, the impact of this heterogeneity on attrition, differences in contraceptive methods and dosage effects, confounders, and the potential for reverse causality. Tackling some of these potential factors, Botzet et al. (2023) explore whether hormonal contraceptive use influences sexuality and well-being outcomes, and whether (and to what extent) the effects vary between women. To achieve this, they propose an analysis of longitudinal data from the German Family Panel (PAIRFAM), which includes annual waves of data collection from more than 6,500 women, with separate Stage 2 submissions planned to report findings on sexuality and well-being. The proposed work will advance our understanding of the impact of hormonal contraceptives by overcoming limitations of more common research approaches in this field, and has the potential to contribute to a more contextualised view of their impacts in real-world practice.
 
The Stage 1 manuscript was evaluated over three rounds of in-depth review. Based on detailed responses to the reviewers' comments, the recommender judged that the manuscript met the Stage 1 criteria and therefore awarded in-principle acceptance (IPA).
 
URL to the preregistered Stage 1 protocol: https://osf.io/kj3h2
 
Level of bias control achieved: Level 3. At least some of the data/evidence that will be used to answer the research question already exists AND is accessible in principle to the authors BUT the authors certify that they have not yet accessed any part of that data/evidence.
 
List of eligible PCI RR-friendly journals:
 
References
 
Botzet, L. J., Rohrer, J. M., Penke, L. & Arslan, R. C. (2023). Hormonal Contraceptive Use and Women's Sexuality and Well-Being: Estimating Treatment Effects and Their Heterogeneity Based on Longitudinal Data. In principle acceptance of Version 3 by Peer Community in Registered Reports. https://osf.io/kj3h2
 
UN General Assembly (2015). Transforming our world: the 2030 Agenda for Sustainable Development, 21 October 2015, A/RES/70/1. Available at: https://www.refworld.org/docid/57b6e3e44.html [accessed 27 September 2023]
Title: Hormonal Contraceptive Use and Women’s Sexuality and Well-Being: Estimating Treatment Effects and Their Heterogeneity Based on Longitudinal Data | Authors: Laura J. Botzet, Julia M. Rohrer, Lars Penke, and Ruben C. Arslan | Abstract: Different women experience hormonal contraceptives differently, reporting side effects on their sexuality and well-being that range from negative to positive. But research on such causal effects of hormonal contraceptives on psychological outco... | Thematic fields: Social sciences | Recommender: Thomas Evans | Submission date: 2022-11-30 13:20:14
28 Sep 2023
STAGE 1

Investigating the barriers and enablers to data sharing behaviours: A qualitative Registered Report

Capability, Opportunity, and Motivation in Data Sharing Behaviour

Recommended by Veli-Matti Karhulahti based on reviews by Moin Syed, Peter Branney and Libby Bishop
In the past two decades, most academic fields have witnessed an open science revolution that has led to significant increases in open access publishing, reproducibility efforts, and scientific transparency in general (e.g., Spellman et al. 2018). One of the key areas in this ongoing change is data sharing. Although some evidence already points to progress in data sharing practices, many new datasets remain unshared (see Tedersoo et al. 2021).
 
In the present registered report, Henderson et al. (2023) empirically explore the factors that either hinder or facilitate data sharing in the UK. By means of semi-structured interviews, the team will chart researchers’ experiences of sharing and non-sharing. Thematic template analysis will be applied to organise the data into a hierarchical map of capabilities, opportunities, and motivations within a theoretical domains framework (COM-B-TDF). The research plan itself meets the highest open science standards and reflects on the authors’ own positions as researchers who will, in turn, share the qualitative interview data generated by the present study.
 
The Stage 1 manuscript was reviewed over three rounds by three experts familiar with the UK cultural context and with specializations in open science practices, qualitative research, and data infrastructures. Based on careful revisions and detailed responses to the reviewers’ comments, the recommender judged that the manuscript met the Stage 1 criteria and therefore awarded in-principle acceptance.
 
URL to the preregistered Stage 1 protocol: https://osf.io/2gm5s (under temporary private embargo)
 
Level of bias control achieved: Level 6. No part of the data or evidence that will be used to answer the research question yet exists and no part will be generated until after IPA.  
 
List of eligible PCI RR-friendly journals:
 
 
References
 
1. Henderson, E., Marcu, A., Atkins, L. & Farran, E.K. (2023). Investigating the barriers and enablers to data sharing behaviours: A qualitative Registered Report. In principle acceptance of Version 3 by Peer Community in Registered Reports. https://osf.io/2gm5s
 
2. Spellman, B. A., Gilbert, E. A. & Corker, K. S. (2018). Open Science. Stevens' Handbook of Experimental Psychology and Cognitive Neuroscience, 5, 1-47. https://doi.org/10.1002/9781119170174.epcn519
 
3. Tedersoo, L., Küngas, R., Oras, E., Köster, K., Eenmaa, H., Leijen, Ä., ... & Sepp, T. (2021). Data sharing practices and data availability upon request differ across scientific disciplines. Scientific data, 8, 192. https://doi.org/10.1038/s41597-021-00981-0
Title: Investigating the barriers and enablers to data sharing behaviours: A qualitative Registered Report | Authors: Emma L Henderson, Afrodita Marcu, Lou Atkins, Emily K Farran | Abstract: Data sharing describes the process of making research data available for reuse. The availability of research data is the basis of transparent, effective research systems that democratise access to knowledge and advance discovery. Despite a broa... | Thematic fields: Social sciences | Recommender: Veli-Matti Karhulahti | Submission date: 2023-05-11 19:18:48
25 Sep 2023
STAGE 1

Effects of Auditory Stimuli During Submaximal Exercise on Cerebral Oxygenation

Does listening to music alter prefrontal cortical activity during exercise?

Recommended by Chris Chambers based on reviews by David Mehler and 1 anonymous reviewer
The relationship between music and exercise has been studied for over a century, with implications for our understanding of biomechanics, physiology, brain function, and psychology. Listening to music while exercising is associated with a wide range of benefits, from increasing motivation and reducing perceived exertion, to inhibiting awareness of negative bodily signals, boosting mood, and ultimately improving physical performance. But while these ergogenic benefits of music are well documented, much remains to be discovered about how music alters brain function during exercise. One reason for this gap in understanding is the technical difficulty of recording brain activity during realistic exercise, as neuroimaging methods such as fMRI, EEG or MEG typically require participants to remain as still as possible.
 
In the current study, Guérin et al. (2023) will use the optical brain imaging technique of functional near infrared spectroscopy (fNIRS) to measure oxygenation of key brain areas during exercise. Unlike other neuroimaging methods, fNIRS has a high tolerance for motion artefacts, making it the ideal method of choice for the current investigation. The authors propose a series of hypotheses based on previous studies that observed a decrease in cerebral oxygenation during intense exercise, particularly within the medial prefrontal cortex (mPFC) and dorsolateral prefrontal cortex (dlPFC). If, as suggested, the prefrontal cortex is important for regulation of cognition and emotion during exercise, then the benefits of listening to music might arise by delaying or reducing this drop in prefrontal oxygenation.
 
Using a within-subjects design, Guérin et al. will combine an incremental exercise protocol involving a cycling task with three auditory conditions: asynchronous music (the active condition), listening to an audiobook (an auditory control), or silence (a baseline control). Compared to the two control conditions, they predict that music exposure will increase oxygenation in prefrontal and parietal regions and will also delay the drop in oxygenation associated with intense exercise (specifically within dlPFC and mPFC). To test whether any such changes are specific to prefrontal and parietal cortex, they will also compare the haemodynamic responses of the occipital cortex between the auditory conditions, predicting no difference.
 
The Stage 1 manuscript was evaluated over two rounds of in-depth review. Based on detailed responses to the reviewers' comments, the recommender judged that the manuscript met the Stage 1 criteria and therefore awarded in-principle acceptance (IPA).
 
URL to the preregistered Stage 1 protocol: https://osf.io/52aeb
 
Level of bias control achieved: Level 6. No part of the data or evidence that will be used to answer the research question yet exists and no part will be generated until after IPA.
 
List of eligible PCI RR-friendly journals:
 
References
 
1. Guérin, S. M. R., Karageorghis, C. I., Coeugnet, M. R., Bigliassi, M. & Delevoye-Turrell, Y. N. (2023). Effects of Auditory Stimuli During Submaximal Exercise on Cerebral Oxygenation. In principle acceptance of Version 3 by Peer Community in Registered Reports. https://osf.io/52aeb

Title: Effects of Auditory Stimuli During Submaximal Exercise on Cerebral Oxygenation | Authors: Dr Ségolène M. R. Guérin, Professor Costas I. Karageorghis, Marine R. Coeugnet, Dr Marcelo Bigliassi and Professor Yvonne N. Delevoye-Turrell | Abstract: Asynchronous music has been commonly used to reduce perceived exertion and render the exercise experience more pleasant. Research has indicated that in-task asynchronous music can reallocate an individual’s attentional focus to task-unrelated s... | Thematic fields: Life Sciences | Recommender: Chris Chambers | Submission date: 2023-01-24 12:06:32
24 Sep 2023
STAGE 1

Sensorimotor Effects in Surprise Word Memory – a Registered Report

Evaluating adaptive and attentional accounts of sensorimotor effects in word recognition memory

Recommended by Vishnu Sreekumar based on reviews by Gordon Feld and Adam Osth
Words have served as stimuli in memory experiments for over a century. What makes some words stand out in memory compared to others? One plausible answer is that semantically rich words are more distinctive and therefore exhibit a mirror effect in recognition memory experiments, whereby they are more likely to be correctly endorsed and less likely to be confused with other words (Glanzer & Adams, 1985). Semantic richness can arise from extensive prior experience with a word in multiple contexts, but also from sensorimotor grounding, i.e., direct perceptual and action-based experience with the concepts represented by the words (e.g. pillow, cuddle). However, previous experiments have revealed inconsistent recognition memory performance patterns for words based on different types of sensorimotor grounding (Dymarska et al., 2023). Most surprisingly, body-related words such as cuddle and fitness exhibited greater false alarm rates.

In the current study, Dymarska and Connell (2023) propose to test two competing theories that can explain the increased confusability of body-related words: 1) the adaptive account, whereby contextual elaboration-based strategies activate other concepts related to the body and survival, increasing confusability; and 2) the attentional account, whereby somatic attentional mechanisms automatically induce similar tactile and interoceptive experiences upon seeing body-related words, leading to less distinctive memory traces. The adaptive account leads to different predictions under intentional and incidental memory conditions. Specifically, contextual elaboration strategies are unlikely to be employed when participants do not expect a memory test; therefore, in an incidental memory task, body-related words should not lead to inflated false alarm rates (see Hintzman (2011) for a discussion of incidental memory tasks and the importance of how material is processed during memory tasks). The attentional account, by contrast, does not depend on task instructions or on knowledge of an upcoming memory test.

Here, Dymarska and Connell (2023) have designed an incidental recognition memory experiment with over 5,000 words, disguised as a lexical decision task using carefully matched pseudowords during the encoding phase. The sample size will be determined using a sequential hypothesis testing plan with Bayes factors. To test the predictions of the adaptive and attentional accounts, the authors derive a set of lexical and sensorimotor variables (including a body component) after dimensionality reduction of a comprehensive set of lexical and semantic word features. The analysis will involve running both Bayesian and frequentist hierarchical linear regressions to explain four different measures of recognition memory performance from the key sensorimotor variables and other baseline/confounding variables. While this analysis plan enables a comparison with the earlier results from an expected memory test (Dymarska et al., 2023), the current study is self-contained in that it is possible to distinguish the adaptive and attentional accounts based on the effect of body component scores on hit rate and false alarm rate.
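The authors' actual stopping rule, priors, and models are specified in the Stage 1 protocol. As a generic, hedged illustration of how a sequential Bayes-factor sampling plan works, the sketch below adds data in batches and stops once the Bayes factor crosses an upper or lower evidence bound (or a maximum sample size is reached); the batch size, bounds, and assumed effect are arbitrary placeholders, and a simple one-sample t-test stands in for the authors' hierarchical regressions.

# Generic sketch of a sequential Bayes-factor sampling plan: test after each
# batch of participants and stop when BF10 crosses an evidence bound (or a
# maximum n is reached). Bounds, batch size, and effect are illustrative only.
import numpy as np
import pingouin as pg

rng = np.random.default_rng(7)
batch, n_max = 20, 300
upper, lower = 10.0, 1 / 10.0      # evidence bounds favouring H1 and H0
true_d = 0.2                       # assumed standardized effect

data = np.empty(0)
while data.size < n_max:
    data = np.concatenate([data, rng.normal(true_d, 1.0, size=batch)])
    n = data.size
    t_val = data.mean() / (data.std(ddof=1) / np.sqrt(n))
    bf10 = float(pg.bayesfactor_ttest(t_val, n))
    if bf10 >= upper or bf10 <= lower:
        break

print(f"Stopped at n = {n} with BF10 = {bf10:.2f}")

The stopping logic is independent of the particular model that supplies the Bayes factor; only the bf10 computation would change for a regression-based analysis like the one planned here.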

The study plan was refined across two rounds of review, with input from two external reviewers after which the recommender judged that the study satisfied the Stage 1 criteria for in-principle acceptance (IPA).
 
URL to the preregistered Stage 1 protocol: https://osf.io/ck5bg
 
Level of bias control achieved: Level 6. No part of the data or evidence that will be used to answer the research question yet exists and no part will be generated until after IPA.
 
List of eligible PCI RR-friendly journals:
 
References
 
Dymarska, A. & Connell, L. (2023). Sensorimotor Effects in Surprise Word Memory – a Registered Report. In principle acceptance of Version 3 by Peer Community in Registered Reports. https://osf.io/ck5bg

Dymarska, A., Connell, L. & Banks, B. (2023). More is Not Necessarily Better: How Different Aspects of Sensorimotor Experience Affect Recognition Memory for Words. Journal of Experimental Psychology: Learning, Memory, and Cognition. Advance online publication. https://dx.doi.org/10.1037/xlm0001265

Glanzer, M., & Adams, J. K. (1985). The mirror effect in recognition memory. Memory & Cognition, 13, 8-20.

Hintzman, D. L. (2011). Research strategy in the study of memory: Fads, fallacies, and the search for the “coordinates of truth”. Perspectives on Psychological Science, 6(3), 253-271.
Title: Sensorimotor Effects in Surprise Word Memory – a Registered Report | Authors: Agata Dymarska, Louise Connell | Abstract: Sensorimotor grounding of semantic information elicits inconsistent effects on word memory, depending on which type of experience is involved, with some aspects of sensorimotor information facilitating memory performance while others inhibit it... | Thematic fields: Social sciences | Recommender: Vishnu Sreekumar | Submission date: 2023-01-31 15:21:17
15 Sep 2023
STAGE 1

Do error predictions of perceived exertion inform the level of running pleasure?

Does running pleasure result from finding it easier than you thought you would?

Recommended by Zoltan Dienes based on reviews by Jasmin Hutchinson and 1 anonymous reviewer
The reward value of a stimulus is based on an error in prediction: things going better than predicted. Could this learning principle, often tested on short-acting stimuli, also apply to a long-lasting episode, like going for a run? Could the reward value of a run be based on the run going better than predicted?
 
Understanding the conditions under which exercise is pleasurable could of course be relevant to tempting people to do more of it! Brevers et al. (2023) will ask people, before a daily run, to predict the amount of perceived exertion they will experience, and then, just after the run, to retrospectively rate the amount of perceived exertion they actually experienced. The difference between the two ratings is the prediction error.
 
Participants will also rate their remembered pleasure in running and the authors will investigate whether running pleasure depends on prediction error.
 
The study plan was refined across four rounds of review, with input from two external reviewers and the recommender, after which it was judged to satisfy the Stage 1 criteria for in-principle acceptance (IPA).
 
URL to the preregistered Stage 1 protocol: https://osf.io/xh724
 
Level of bias control achieved: Level 6. No part of the data or evidence that will be used to answer the research question yet exists and no part will be generated until after IPA.
 
List of eligible PCI RR-friendly journals:
 
 
References
 
1. Brevers, D., Martinent, G., Oz, I. T., Desmedt, O. & de Geus, B. (2023). Do error predictions of perceived exertion inform the level of running pleasure? In principle acceptance of Version 5 by Peer Community in Registered Reports. https://osf.io/xh724
Title: Do error predictions of perceived exertion inform the level of running pleasure? | Authors: Damien Brevers, Guillaume Martinent, İrem Tuğçe Öz, Olivier Desmedt, Bas de Geus | Abstract: Humans have the ability to mentally project themselves into future events (prospective thinking) to promote the implementation of health-oriented behaviors, such as the planning of daily sessions of physical exercise. Nevertheless, it is curren... | Thematic fields: Social sciences | Recommender: Zoltan Dienes | Submission date: 2023-04-21 17:40:50
11 Sep 2023
STAGE 1

Finding the right words to evaluate research: An empirical appraisal of eLife’s assessment vocabulary

Understanding the validity of standardised language in research evaluation

Recommended by Sarahanne Miranda Field and a co-recommender based on reviews by Chris Hartgerink (they/them), Veli-Matti Karhulahti, Štěpán Bahník and Ross Mounce
In 2023, the journal eLife ended the practice of making binary accept/reject decisions following peer review, instead sharing peer review reports (for manuscripts that are peer-reviewed) and brief “eLife assessments” representing the consensus opinions of editors and peer reviewers. As part of these assessments, the journal draws language from a "common vocabulary" to linguistically rank the significance of findings and strength of empirical support for the article's conclusions. In particular, the significance of findings is described using an ordinal scale of terms from "landmark" → "fundamental" → "important" → "valuable" → "useful", while the strength of support is ranked across six descending levels from "exceptional" down to "inadequate".
 
In the current study, Hardwicke et al. (2023) question the validity of this taxonomy, noting a range of linguistic ambiguities and counterintuitive characteristics that may undermine the communication of research evaluations to readers. Given the centrality of this common vocabulary to the journal's policy, the authors propose a study to explore whether the language used in the eLife assessments will be interpreted as intended by readers. Using a repeated-measures experimental design, they will tackle three aims: first, to understand the extent to which people share similar interpretations of phrases used to describe scientific research; second, to reveal the extent to which people’s implicit rankings of such phrases align with one another and with the intended ranking; and third, to test whether phrases used to describe scientific research have overlapping interpretations. The proposed study has the potential to make a useful contribution to metascience, as well as being a valuable source of information for other journals potentially interested in following the novel path forged by eLife.
 
The Stage 1 manuscript was evaluated over one round of in-depth review. Based on detailed responses to the reviewers' comments, the recommender judged that the manuscript met the Stage 1 criteria and therefore awarded in-principle acceptance (IPA).
 
URL to the preregistered Stage 1 protocol: https://osf.io/mkbtp
 
Level of bias control achieved: Level 6. No part of the data or evidence that will be used to answer the research question yet exists and no part will be generated until after IPA.
 
List of eligible PCI RR-friendly journals:
 
References
 
1. Hardwicke, T. E., Schiavone, S., Clarke, B. & Vazire, S. (2023). Finding the right words to evaluate research: An empirical appraisal of eLife’s assessment vocabulary. In principle acceptance of Version 2 by Peer Community in Registered Reports. https://osf.io/mkbtp
Title: Finding the right words to evaluate research: An empirical appraisal of eLife’s assessment vocabulary | Authors: Tom E. Hardwicke, Sarah Schiavone, Beth Clarke, Simine Vazire | Abstract: The journal eLife recently announced that it would abandon binary ‘accept/reject’ decisions and instead focus on sharing both peer review reports and short “eLife assessments” representing the consensus opinions of editors and peer reviewers. F... | Thematic fields: Life Sciences, Social sciences | Recommender: Sarahanne Miranda Field | Submission date: 2023-06-16 12:11:14
11 Sep 2023
STAGE 1

Researcher Predictions of Effect Generalizability Across Global Samples

Can psychology researchers predict which effects will generalise across cultures?

Recommended by Chris Chambers based on reviews by Michèle Nuijten, Ian Hussey, Jim Grange and Matthias Stefan
Compared to the wealth of debate surrounding replicability and transparency, relatively little attention has been paid to the issue of generalisability – the extent to which research findings hold across different samples, cultures, and other parameters. Existing research suggests that researchers in psychology are prone to generalisation bias, relying on narrow samples (drawn predominantly from US or European undergraduate populations) to draw broad conclusions about the mind and behaviour. While recent attempts to address generalisability concerns have been made – such as journals requiring explicit statements acknowledging constraints on generality – addressing this bias at root, and developing truly generalisable methods and results, requires a deeper understanding of how researchers perceive generalisability in the first place.
 
In the current study, Schmidt et al. (2023) tackle the issue of cross-cultural generalisability using four large-scale international studies that are being conducted as part of the Psychological Science Accelerator (PSA) – a globally distributed network of researchers in psychology that coordinates crowdsourced research projects across six continents. Specifically, participants (who will be PSA research members) will estimate the probability that an expected focal effect will be observed both overall and within regional subsamples of the PSA studies. They will also predict the size of these focal effects overall and by region.
 
Using this methodology, the authors plan to ask two main questions: first, whether researchers can accurately predict the generalisability of psychological phenomena in upcoming studies; and second, whether certain researcher characteristics (including various measures of expertise, experience, and demographics) are associated with the accuracy of generalisability predictions. Based on previous evidence that scientists can successfully predict the outcomes of research studies, the authors expect to observe a positive association between predicted and actual outcomes and effect sizes. In secondary analyses, the authors will also test whether researchers can predict when variables that capture relevant cultural differences will moderate the focal effects – if so, this would suggest that at least some researchers have a deeper understanding of why the effects generalise (or not) across cultural contexts.
 
The Stage 1 manuscript was evaluated over two rounds of in-depth review. Based on detailed responses to the reviewers' comments, the recommender judged that the manuscript met the Stage 1 criteria and therefore awarded in-principle acceptance (IPA).
 
URL to the preregistered Stage 1 protocol: https://osf.io/vwqsa (under temporary private embargo)
 
Level of bias control achieved: Level 6. No part of the data or evidence that will be used to answer the research question yet exists and no part will be generated until after IPA.
 
List of eligible PCI RR-friendly journals:
 
 
References
 
1. Schmidt, K., Silverstein, P. & Chartier, C. R. (2023). Registered Report: Researcher Predictions of Effect Generalizability Across Global Samples. In principle acceptance of Version 3 by Peer Community in Registered Reports. https://osf.io/vwqsa
Title: Researcher Predictions of Effect Generalizability Across Global Samples | Authors: Kathleen Schmidt, Priya Silverstein, & Christopher R. Chartier | Abstract: The generalizability of effects is an increasing concern among researchers in psychological science. Traditionally, the field has relied on university samples from Europe and North America to make claims about humans writ large. The proposed re... | Thematic fields: Social sciences | Recommender: Chris Chambers | Submission date: 2023-02-16 03:49:35
08 Sep 2023
STAGE 1

Evaluation of spatial learning and wayfinding in a complex maze using immersive virtual reality. A registered report

Evaluation of an immersive virtual reality wayfinding task

Recommended by Robert McIntosh based on reviews by Conor Thornberry, Gavin Buckingham and 1 anonymous reviewer
The Virtual Maze Task (VMT) is a digital desktop 2D spatial learning task that has been used for research into the effect of sleep and dreaming on memory consolidation (e.g. Wamsley et al., 2010). One limitation of this task has been low rates of reported dream incorporation. Eudave and colleagues (2023) have created an immersive virtual reality (iVR) version of the VMT, which they believe might be more likely to be incorporated into dreams. As an initial step in validating this task for research, they propose a within-subjects study to compare three measures of spatial learning between the 2D desktop and iVR versions. Based on a review of relevant literature, the prediction is that performance will be similar between the two task versions. The planned sample size (n = 62) is sufficient to achieve .9 power in a test of equivalence within effect size bounds of d = -.47 to .47. Additional independent variables (gender, perspective-taking ability) and dependent measures (self-reported cybersickness and sense of presence) will be recorded for exploratory analyses.
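The authors' own power analysis is documented in the Stage 1 protocol. As a rough, hedged cross-check of what such a claim involves, the sketch below estimates the power of a paired-samples equivalence test (TOST) by simulation, assuming a true effect of zero, normally distributed difference scores, and an alpha of .05 for each one-sided test; none of these assumptions are taken from the protocol itself.

# Rough Monte Carlo check of equivalence-test (TOST) power for a
# within-subjects comparison: n = 62 paired differences, equivalence bounds
# of d = +/-0.47, a true effect of zero, and alpha = .05 per one-sided test.
# This is an illustrative sketch, not the authors' power analysis code.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, bound_d, alpha, n_sims = 62, 0.47, 0.05, 5000

equivalent = 0
for _ in range(n_sims):
    diff = rng.normal(0.0, 1.0, size=n)          # standardized difference scores
    margin = bound_d * diff.std(ddof=1)          # bounds expressed in raw units
    # two one-sided tests: mean > -margin AND mean < +margin
    p_lower = stats.ttest_1samp(diff, -margin, alternative='greater').pvalue
    p_upper = stats.ttest_1samp(diff, +margin, alternative='less').pvalue
    equivalent += (p_lower < alpha) and (p_upper < alpha)

print(f"Estimated power to declare equivalence: {equivalent / n_sims:.2f}")

Under these simplified assumptions the simulated power should land at or above the .9 figure quoted above; the authors' own calculation may differ in its exact assumptions, test, or software.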
 
The study plan was refined across four rounds of review, with input from two external reviewers and the recommender, after which it was judged to satisfy the Stage 1 criteria for in-principle acceptance (IPA).
 
URL to the preregistered Stage 1 protocol: https://osf.io/wba2v
 
Level of bias control achieved: Level 6. No part of the data or evidence that will be used to answer the research question yet exists and no part will be generated until after IPA.
 
List of eligible PCI RR-friendly journals:
 
 
References
 
Eudave, L., Martínez, M., Valencia, M., & Roth, D. (2023). Evaluation of spatial learning and wayfinding in a complex maze using immersive virtual reality. A registered report. In principle acceptance of Version 5 by Peer Community in Registered Reports. https://osf.io/wba2v
 
Wamsley, E. J., Tucker, M., Payne, J. D., Benavides, J. A., & Stickgold, R. (2010). Dreaming of a learning task is associated with enhanced sleep-dependent memory consolidation. Current Biology, 20, 850–855. https://doi.org/10.1016/j.cub.2010.03.027
 
† There is one minor change that the authors should make to the Methods section, which is sufficiently small that it can be incorporated at Stage 2: "if both tests reject the null hypothesis (observed data is less/greater than the lower/upper equivalence bounds), conditions are considered statistically equivalent" >> suggest changing "less/greater" to "greater/lesser" for correct correspondence with "lower/upper".
Title: Evaluation of spatial learning and wayfinding in a complex maze using immersive virtual reality. A registered report | Authors: Eudave L., Martínez M., Valencia M., Roth D. | Abstract: Objectives: Mazes have traditionally been used as tools for evaluating spatial learning and navigational abilities in humans. They have been also utilized in sleep and dream research, as wayfinding ... | Thematic fields: Life Sciences | Recommender: Robert McIntosh | Submission date: 2023-03-31 17:21:20
18 Aug 2023
STAGE 2
(Go to stage 1)

Evaluating the Pedagogical Effectiveness of Study Preregistration in the Undergraduate Dissertation

Incorporating open research practices into the undergraduate curriculum increases understanding of such practices

Recommended by Corina Logan based on reviews by Kelsey McCune, Neil Lewis, Jr., Lisa Spitzer and 1 anonymous reviewer
At a time when open research practices are becoming more widely used to combat questionable research practices (QRPs) in academia, this Registered Report by Pownall and colleagues (2023) empirically investigated the practice of preregistering study plans, allowing us to better understand the degree to which such practices increase awareness of QRPs and whether experience with preregistration helps reduce engagement in them. This investigation is timely because results from these kinds of studies are only recently becoming available, and the conclusions are providing evidence that open research practices can improve research quality and reliability (e.g., Soderberg et al. 2021, Chambers & Tzavella 2022). The authors crucially focused on the effect of preregistering the undergraduate dissertation (of psychology students in the UK), which is a key stage in the development of an academic.
 
Pownall and colleagues found that preregistration did not affect attitudes toward QRPs, but it did improve student understanding of open research practices. Using exploratory analyses, they additionally found that the students who preregistered were those who reported greater capability, opportunity, and motivation to do so. This shows how important it is to incorporate the teaching of open research practices so that students can increase their capability, motivation, and opportunity to pursue such practices, whether preregistration or other practices that are better known to reduce QRPs (such as registered reports; Krypotos et al. 2022).
 
After four rounds of review and revisions, the recommender judged that the manuscript met the Stage 2 criteria and awarded a positive recommendation.
 
URL to the preregistered Stage 1 protocol: https://osf.io/9hjbw
 
Level of bias control achieved: Level 6. No part of the data or evidence that was used to answer the research question was generated until after IPA.
 
List of eligible PCI RR-friendly journals:
 
 
References
 
1. Chambers C. D. & Tzavella, L. (2022). The past, present, and future of Registered Reports. Nature Human Behaviour, 6, 29-42. https://doi.org/10.1038/s41562-021-01193-7
 
2. Krypotos, A. M., Mertens, G., Klugkist, I., & Engelhard, I. M. (2022). Preregistration: Definition, advantages, disadvantages, and how it can help against questionable research practices. In Avoiding Questionable Research Practices in Applied Psychology (pp. 343-357). Cham: Springer International Publishing.
 
3. Pownall, M., Pennington, C. R., Norris, E., Juanchich, M., Smaile, D., Russell, S., Gooch, D., Rhys Evans, T., Persson, S., Mak, M. H. C., Tzavella, L., Monk, R., Gough, T., Benwell, C. S. Y., Elsherif, M., Farran, E., Gallagher-Mitchell, T., Kendrick, L. T., Bahnmueller, J., Nordmann, E., Zaneva, M., Gilligan-Lee, K., Bazhydai, M., Jones, A., Sedgmond, J., Holzleitner, I., Reynolds, J., Moss, J., Farrelly, D., Parker, A. J. & Clark, K. (2023). Evaluating the pedagogical effectiveness of study preregistration in the undergraduate dissertation [Stage 2 Registered Report], acceptance of Version 4 by Peer Community in Registered Reports. https://psyarxiv.com/xg2ah
 
4. Soderberg C. K., Errington T. M., Schiavone S. R., Bottesini J., Thorn F. S., Vazire S., Esterling K. M. & Nosek B. A. (2021) Initial evidence of research quality of registered reports compared with the standard publishing model. Nature Human Behaviour, 5, 990–997. https://doi.org/10.1038/s41562-021-01142-4
Title: Evaluating the Pedagogical Effectiveness of Study Preregistration in the Undergraduate Dissertation | Authors: Madeleine Pownall, Charlotte R. Pennington, Emma Norris, Marie Juanchich, David Smaile, Sophie Russell, Debbie Gooch, Thomas Rhys Evans, Sofia Persson, Matthew HC Mak, Loukia Tzavella, Rebecca Monk, Thomas Gough, Christopher SY Benwell, Mahmoud El... | Abstract: Research shows that questionable research practices (QRPs) are present in undergraduate final-year dissertation projects. One entry-level Open Science practice proposed to mitigate QRPs is ‘study preregistration’, through which researchers outl... | Thematic fields: Life Sciences, Social sciences | Recommender: Corina Logan | Submission date: 2023-03-25 11:38:54