Announcements
=============================================================================
IMPORTANT ANNOUNCEMENT: To accommodate reviewer and recommender holiday schedules, we will be closed to ALL submissions from 1st Jul - 1st Sep. During this time, reviewers can submit reviews and recommenders can issue decisions, but no new or revised submissions can be made by authors.
The one exception to this rule is that authors using the scheduled track who submit their initial Stage 1 snapshot prior to 1st Jul can choose a date within the shutdown period to submit their full Stage 1 manuscript.
We recommend that authors submit at least 1-2 weeks before the shutdown period begins, to allow time for any required revisions prior to in-depth review.
=============================================================================
We are recruiting recommenders (editors) from all research fields!
Your feedback matters! If you have authored or reviewed a Registered Report at Peer Community in Registered Reports, then please take 5 minutes to leave anonymous feedback about your experience, and view community ratings.
Latest recommendations
10 Jan 2025
STAGE 1
Development and evaluation of a revised 20-item short version of the UPPS-P Impulsive Behavior Scale
Loïs Fournier, Alexandre Heeren, Stéphanie Baggio, Luke Clark, Antonio Verdejo-García, José C. Perales, Joël Billieux
https://osf.io/wevc4

Assessing Impulsivity Measurement (UPPS-P-20-R)
Recommended by Veli-Matti Karhulahti

Impulsivity, as a construct, has an established history, with various models and theories (Leshem & Glicksohn 2007) having accumulated evidence of its relevance, especially for mental disorders. One of the dominant models, the Impulsive Behavior Model, is conventionally measured in survey studies with UPPS-P scales, a short version of which was recently assessed in a large cross-cultural project (Fournier et al. 2024). In the present study, Fournier and colleagues (2025) aim to further test the revised 20-item scale in English via a three-phase protocol involving evaluations of construct validity, internal consistency reliability, test-retest reliability, convergent validity, and criterion validity. As such, the study contributes to the ongoing development of useful and up-to-date survey scales, which can help researchers avoid measurement issues (Flake & Fried 2020) in the various fields where impulsivity plays a role.
The study was reviewed over three rounds by two reviewers with topic and methods expertise, respectively. Based on detailed responses to reviewers’ feedback and the recommender’s comments on the construct, the recommender judged that the manuscript met the Stage 1 criteria and therefore awarded in-principle acceptance.
URL to the preregistered Stage 1 protocol: https://osf.io/wevc4 (under temporary private embargo)
Level of bias control achieved: Level 6. No part of the data or evidence that will be used to answer the research question yet exists and no part will be generated until after IPA.
List of eligible PCI RR-friendly journals:
References
1. Flake, J. K. & Fried, E. I. (2020). Measurement schmeasurement: Questionable measurement practices and how to avoid them. Advances in Methods and Practices in Psychological Science, 3, 456-465. https://doi.org/10.1177/2515245920952393
2. Fournier, L., Bőthe, … & Billieux, J. (2024). Evaluating the factor structure and measurement invariance of the 20-item short version of the UPPS-P Impulsive Behavior Scale across multiple countries, languages, and gender identities. Assessment, 10731911241259560. https://doi.org/10.1177/10731911241259560
3. Fournier, L., Heeren, A., Baggio, S., Clark, L., Verdejo-García, A., Perales, J. C., & Billieux, J. (2025). Development and evaluation of a revised 20-item short version of the UPPS-P Impulsive Behavior Scale. In principle acceptance of Version 3 by Peer Community in Registered Reports. https://osf.io/wevc4
4. Leshem, R. & Glicksohn, J. (2007). The construct of impulsivity revisited. Personality and Individual Differences, 43, 681-691. https://doi.org/10.1016/j.paid.2007.01.015
Abstract (excerpt): The UPPS-P Impulsive Behavior Scale is a well-established psychometric instrument for assessing impulsivity, a key psychological construct transdiagnostically involved in the etiology of numerous psychiatric and neu...
Thematic fields: Social sciences | Recommender: Veli-Matti Karhulahti | Reviewers: Ivan Ropovik | Submission date: 2024-06-27 17:47:17
08 Sep 2022
STAGE 1
How to succeed in human modified environments
Logan CJ, Shaw R, Lukas D, McCune KB
http://corinalogan.com/ManyIndividuals/mi1.html

The role of behavioural flexibility in promoting resilience to human environmental impacts
Recommended by Chris Chambers

Understanding and mitigating the environmental effects of human expansion is crucial for ensuring long-term biosustainability. Recent research indicates a steep increase in urbanisation – including the expansion of cities – with global urban extent expanding by nearly 10,000 km^2 per year between 1985 and 2015 (Liu et al, 2020). The consequences of these human modified environments on animal life are significant: in order to succeed, species must adapt quickly to environmental changes, and those populations that demonstrate greater behavioural flexibility are likely to cope more effectively. These observations have, in turn, prompted the question of whether enhancing behavioural flexibility in animal species might increase their resilience to human impacts.
In the current research, Logan et al. (2022) will use a serial reversal learning paradigm, first to understand how behavioural flexibility relates to success in avian species that are already successful in human modified environments. The authors will then deploy flexibility interventions based on this paradigm in more vulnerable species to establish whether behavioural training can improve success, as measured by outcomes such as foraging breadth, dispersal dynamics, and survival rate.
The Stage 1 manuscript was submitted via the programmatic track and will eventually produce three Stage 2 outputs focusing on different species (toutouwai, grackles, and jays). Following two rounds of in-depth review, the recommender judged that the manuscript met the Stage 1 criteria and awarded in-principle acceptance (IPA).
URL to the preregistered Stage 1 protocol: https://osf.io/wbsn6
Level of bias control achieved: Level 6. No part of the data or evidence that will be used to answer the research question yet exists and no part will be generated until after IPA.
List of eligible PCI RR-friendly journals:
References
1. Liu, X., Huang, Y., Xu, X., Li, X., Li, X., Ciais, P., Lin, P., Gong, K., Ziegler, A. D., Chen, A., et al. (2020). High-spatiotemporal-resolution mapping of global urban change from 1985 to 2015. Nature Sustainability, 1–7. https://doi.org/10.1038/s41893-020-0521-x
2. Logan, C.J., Shaw, R., Lukas, D. & McCune, K.B. (2022). How to succeed in human modified environments, in principle acceptance of Version 3 by Peer Community in Registered Reports. https://osf.io/wbsn6
Abstract (excerpt): Human modifications of environments are increasing, causing global changes that other species must adjust to or suffer from. Behavioral flexibility (hereafter ‘flexibility’) could be key to coping with rapid change. Behavioral research can cont...
Thematic fields: Life Sciences | Recommender: Chris Chambers | Submission date: 2022-05-06 12:12:05
14 Feb 2024
STAGE 1
Restriction of researcher degrees of freedom through the Psychological Research Preregistration-Quantitative (PRP-QUANT) Template
Lisa Spitzer & Stefanie Mueller
https://doi.org/10.23668/psycharchives.14119

Examining the restrictiveness of the PRP-QUANT Template
Recommended by Daniel Lakens

The Psychological Research Preregistration-Quantitative (PRP-QUANT) Template was created in 2022 to provide more structure and detail to preregistrations. The goal of the current study is to test whether the PRP-QUANT Template indeed restricts flexibility in studies with preregistered hypotheses more than other existing templates do. This question is important because of concerns raised about the practice of preregistration: metascientific research has shown that preregistrations are often of low quality (Bakker et al., 2020), and hypothesis tests from preregistrations are still selectively reported (van den Akker, van Assen, Enting, et al., 2023). It is important to improve the quality of preregistrations, and if a better template can help, promoting its wider adoption would be a cost-effective way to improve quality.
In the current study, Spitzer and Mueller (2024) will follow the procedure of a previous meta-scientific study by Heirene et al. (2021). Seventy-four existing preregistrations using the PRP-QUANT Template are available, and these will be compared with an existing dataset coded by Bakker and colleagues (2020). The sample size is limited, but it allows the detection of differences that would be considered large enough to matter, even though smaller differences would not be detectable with the currently available sample. Nevertheless, given that there is a need for improvement, even preliminary data might already be useful for providing tentative recommendations. Restrictiveness will be coded on 23 items, and adherence to or deviations from the preregistration will be coded as well. Because such deviations are common, whether this template reduces their likelihood is an important question. Two coders will code all studies.
The study should provide a useful initial evaluation of the PRP-QUANT Template and could have practical implications if the template shows clear benefits. Both authors have declared COIs related to the PRP-QUANT Template, making the Registered Report format a fitting approach to prevent confirmation bias from influencing the reported results.
This Stage 1 manuscript was evaluated over two rounds of in-depth review by two expert reviewers and the recommender. After the revisions, the recommender judged that the manuscript met the Stage 1 criteria and therefore awarded in-principle acceptance (IPA).
URL to the preregistered Stage 1 protocol: https://osf.io/vhezj
Level of bias control achieved: Level 3. At least some data/evidence that will be used to answer the research question has been previously accessed by the authors (e.g. downloaded or otherwise received), but the authors certify that they have not yet observed ANY part of the data/evidence.
List of eligible PCI RR-friendly journals:
References
1. van den Akker, O. R., van Assen, M. A. L. M., Bakker, M., Elsherif, M., Wong, T. K., & Wicherts, J. M. (2023). Preregistration in practice: A comparison of preregistered and non-preregistered studies in psychology. Behavior Research Methods. https://doi.org/10.3758/s13428-023-02277-0
2. Bakker, M., Veldkamp, C. L. S., Assen, M. A. L. M. van, Crompvoets, E. A. V., Ong, H. H., Nosek, B. A., Soderberg, C. K., Mellor, D., & Wicherts, J. M. (2020). Ensuring the quality and specificity of preregistrations. PLOS Biology, 18(12), e3000937. https://doi.org/10.1371/journal.pbio.3000937
3. Spitzer, L. & Mueller, S. (2024). Stage 1 Registered Report: Restriction of researcher degrees of freedom through the Psychological Research Preregistration-Quantitative (PRP-QUANT) Template. In principle acceptance of Version 3 by Peer Community in Registered Reports. https://osf.io/vhezj
4. Heirene, R., LaPlante, D., Louderback, E. R., Keen, B., Bakker, M., Serafimovska, A., & Gainsbury, S. M. (2021). Preregistration specificity & adherence: A review of preregistered gambling studies & cross-disciplinary comparison. PsyArXiv. https://doi.org/10.31234/osf.io/nj4es
Abstract (excerpt): Preregistration can help to restrict researcher degrees of freedom and thereby ensure the integrity of research findings. However, its ability to restrict such flexibility depends on whether researchers specify their study plan in sufficient de...
Thematic fields: Social sciences | Recommender: Daniel Lakens | Submission date: 2023-06-01 10:39:20
23 Jan 2025
STAGE 1
Mapping methodological variation in experience sampling research from design to data analysis: A systematic review
Lisa Peeters, Wim Van Den Noortgate, M. Annelise Blanchard, Gudrun Eisele, Olivia Kirtley, Richard Artner, Ginette Lafit
https://osf.io/8mwgu

Methodological Variation in the Experience Sampling Methods: Can We Do ESM Better?
Recommended by Thomas Evans

The replication crisis/credibility revolution has driven a vast number of changes to our research environment (Korbmacher et al., 2023), including a much-needed spotlight on issues surrounding measurement (Flake & Fried, 2020). As understanding and awareness have grown surrounding the 'garden of forking paths' or 'researcher degrees of freedom' (Simmons et al., 2011), that is, the many discretionary decisions made during the scientific process that can affect its conclusions, so too should our interest grow in meta-research that tells us more about the methodological processes we follow and how such decisions may influence the design, analysis, and reporting of a project.
Peeters et al. (2025) have proposed a systematic literature review of this nature, mapping methodological variation in experience sampling methods (ESM) from the design stage all the way to dissemination. The review begins by mapping how ESM studies vary in design, considering factors such as sample size, number of measurements, and sampling scheme. It also evaluates reporting quality and the rationales provided, and captures the extent to which open science practices are adopted. By covering the many parts of the research process that are assumed, unreported, or otherwise unjustified, the proposed work looks set to springboard an important body of work that can tell us more effectively how to design, implement, and report ESM studies.
The Stage 1 submission was reviewed over one round of in-depth review with two reviewers. Based on detailed responses to reviewers’ feedback, the recommender judged that the manuscript met the Stage 1 criteria and therefore awarded in-principle acceptance.
URL to the preregistered Stage 1 protocol: https://osf.io/ztvn3
Level of bias control achieved: Level 1. At least some of the data/evidence that will be used to answer the research question has been accessed and observed by the authors, including key variables, but the authors certify that they have not yet performed any of their preregistered analyses, and in addition they have taken stringent steps to reduce the risk of bias.
List of eligible PCI RR-friendly journals:
References
1. Flake, J. K., & Fried, E. I. (2020). Measurement schmeasurement: Questionable measurement practices and how to avoid them. Advances in Methods and Practices in Psychological Science, 3, 456-465. https://doi.org/10.1177/2515245920952393
2. Korbmacher, M., Azevedo, F., Pennington, C. R., Hartmann, H., Pownall, M., Schmidt, K., ... & Evans, T. (2023). The replication crisis has led to positive structural, procedural, and community changes. Communications Psychology, 1, 3. https://doi.org/10.1038/s44271-023-00003-2
3. Peeters, L., Van Den Noortgate, W., Blanchard, M. A., Eisele, G., Kirtley, O., Artner, R., & Lafit, G. (2025). Mapping Methodological Variation in ESM Research from Design to Data Analysis: A Systematic Review. In principle acceptance of Version 3 by Peer Community in Registered Reports. https://osf.io/ztvn3
4. Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22, 1359-1366. https://doi.org/10.1177/0956797611417632
Abstract (excerpt): Aim. The Experience Sampling Method (ESM) has become a widespread tool to study time-varying constructs across many subfields of psychological and psychiatric research. This large variety in subfields of research and constructs...
Thematic fields: Social sciences | Recommender: Thomas Evans | Submission date: 2024-09-04 10:39:37
21 Mar 2023
STAGE 1
Convenience Samples and Measurement Equivalence in Replication Research
Lindsay J. Alley, Jordan Axt, Jessica Kay Flake
https://osf.io/32unb

Does data from students and crowdsourced online platforms measure the same thing? Determining the external validity of combining data from these two types of subjects
Recommended by Corina Logan

Comparative research is how evidence is generated to support or refute broad hypotheses (e.g., Pagel 1999). However, the foundations of such research must be solid if one is to arrive at the correct conclusions. Determining the external validity (the generalizability across situations/individuals/populations) of the building blocks of comparative data sets allows one to place appropriate caveats around the robustness of their conclusions (Steckler & McLeroy 2008).
In this registered report, Alley and colleagues plan to tackle the external validity of comparative research that relies on subjects who are either university students or participants in experiments via an online platform (Alley et al. 2023). They will determine whether data from these two types of subjects have measurement equivalence, that is, whether the same trait is measured in the same way across groups. Although they use data from studies in the Many Labs replication project to evaluate this question, their results will be of crucial importance to other comparative researchers whose data are generated from these two sources (students and online crowdsourcing). If Alley and colleagues show that these two types of subjects have measurement equivalence, then equivalence is more likely to hold for other studies relying on these types of subjects as well. If measurement equivalence is not found, then it is a warning to others to evaluate their experimental design to improve validity. In either case, the study gives researchers a way to test measurement equivalence for themselves, because the code is well annotated and openly available for others to use.
The Stage 1 manuscript was evaluated over two rounds of in-depth review. Based on detailed responses to the reviewers' comments, the recommender judged that the manuscript met the Stage 1 criteria and therefore awarded in-principle acceptance (IPA).
Level of bias control achieved: Level 2. At least some data/evidence that will be used to answer the research question has been accessed and partially observed by the authors, but the authors certify that they have not yet observed the key variables within the data that will be used to answer the research question AND they have taken additional steps to maximise bias control and rigour (e.g. conservative statistical threshold; recruitment of a blinded analyst; robustness testing, multiverse/specification analysis, or other approach).
List of eligible PCI RR-friendly journals:
References
1. Alley, L. J., Axt, J., & Flake, J. K. (2023). Convenience Samples and Measurement Equivalence in Replication Research. In principle acceptance of Version 4 by Peer Community in Registered Reports. https://osf.io/7gtvf
2. Steckler, A. & McLeroy, K. R. (2008). The importance of external validity. American Journal of Public Health, 98, 9-10. https://doi.org/10.2105/AJPH.2007.126847
3. Pagel, M. (1999). Inferring the historical patterns of biological evolution. Nature, 401, 877-884. https://doi.org/10.1038/44766
Abstract (excerpt): A great deal of research in psychology employs either university student or online crowdsourced convenience samples (Chandler & Shapiro, 2016; Strickland & Stoops, 2019) and there is evidence that these groups differ in meaningful ways ...
Thematic fields: Social sciences | Recommender: Corina Logan | Submission date: 2022-11-29 18:37:54
Convenience Samples and Measurement Equivalence in Replication Research
Lindsay J. Alley, Jordan Axt, Jessica Kay Flake
https://osf.io/s5t3v

Data from students and crowdsourced online platforms do not often measure the same thing
Recommended by Corina Logan

Comparative research is how evidence is generated to support or refute broad hypotheses (e.g., Pagel 1999). However, the foundations of such research must be solid if one is to arrive at the correct conclusions. Determining the external validity (the generalizability across situations/individuals/populations) of the building blocks of comparative data sets allows one to place appropriate caveats around the robustness of their conclusions (Steckler & McLeroy 2008).

In the current study, Alley and colleagues (2023) tackled the external validity of comparative research that relies on subjects who are either university students or participants in experiments via an online platform. They determined whether data from these two types of subjects have measurement equivalence, that is, whether the same trait is measured in the same way across groups. Although they used data from studies in the Many Labs replication project to evaluate this question, their results are of crucial importance to other comparative researchers whose data are generated from these two sources (students and online crowdsourcing). The authors show that these two types of subjects do not often have measurement equivalence, which is a warning to others to evaluate their experimental design to improve validity. They provide useful recommendations for researchers on how to implement equivalence testing in their studies, and they facilitate the process by providing well-annotated code that is openly available for others to use.

After one round of review and revision, the recommender judged that the manuscript met the Stage 2 criteria and awarded a positive recommendation.

URL to the preregistered Stage 1 protocol: https://osf.io/7gtvf
Level of bias control achieved: Level 2. At least some data/evidence that was used to answer the research question had been accessed and partially observed by the authors prior to Stage 1 IPA, but the authors certify that they had not yet observed the key variables within the data that were used to answer the research question AND they took additional steps to maximise bias control and rigour.
List of eligible PCI RR-friendly journals:
References
1. Pagel, M. (1999). Inferring the historical patterns of biological evolution. Nature, 401, 877-884. https://doi.org/10.1038/44766
2. Steckler, A. & McLeroy, K. R. (2008). The importance of external validity. American Journal of Public Health 98, 9-10. https://doi.org/10.2105/AJPH.2007.126847
3. Alley, L. J., Axt, J., & Flake, J. K. (2023). Convenience Samples and Measurement Equivalence in Replication Research [Stage 2 Registered Report]. Acceptance of Version 2 by Peer Community in Registered Reports. https://osf.io/s5t3v
Abstract (excerpt): A great deal of research in psychology employs either university student or online crowdsourced convenience samples (Chandler & Shapiro, 2016; Strickland & Stoops, 2019) and there is evidence that these groups differ in meaningful ways ...
Thematic fields: Social sciences | Recommender: Corina Logan | Reviewers: Alison Young Reusser | Submission date: 2023-08-31 20:26:43
03 Mar 2025
STAGE 1
Shape of SNARC: How task-dependent are Spatial-Numerical Associations? A highly powered online experiment
Lilly Roth, Krzysztof Cipora, Annika T. Overlander, Hans-Christoph Nuerk, Ulf-Dietrich Reips
https://osf.io/gsajb

Shedding light on task influence in the SNARC effect
Recommended by Mario Dalmaso

The Spatial-Numerical Association of Response Codes (SNARC) effect (Dehaene et al., 1993) is a key phenomenon in numerical cognition. It describes the tendency for individuals to respond faster to smaller numbers with a left-side key and to larger numbers with a right-side key, suggesting a mental mapping of numerical magnitudes onto space. While this effect has been widely replicated, its precise nature remains debated, particularly regarding its task dependency.
In this Stage 1 Registered Report, Roth et al. (2025) present a highly powered study that systematically investigates whether the SNARC effect differs in two widely used numerical cognition tasks: magnitude classification (MC) and parity judgment (PJ). In the MC task, participants determine whether a presented number is smaller or larger than a reference value (typically 5). This task explicitly requires magnitude processing, making numerical magnitude directly relevant to the response. In contrast, the PJ task requires participants to judge whether a number is odd or even, a decision that does not explicitly involve numerical magnitude.
The authors address a fundamental theoretical question in numerical cognition: while the SNARC effect in PJ is often considered continuous, does it follow a categorical pattern in MC? To investigate this, the study directly compares continuous and categorical representations of the SNARC effect across these two tasks, using Bayesian statistical approaches to determine the best-fitting model. By systematically analysing the SNARC effect in these widely used paradigms, this work aims to refine our understanding of how numerical magnitudes are mapped onto space and whether this mapping depends on task demands. The findings of this study will provide crucial insights into the cognitive mechanisms underlying numerical-spatial associations, highlighting the extent to which task structure shapes the emergence of the SNARC effect.
Three expert reviewers provided valuable feedback across multiple rounds of review. Based on detailed responses to the reviewers’ and recommender’s comments, the recommender judged that the manuscript met the Stage 1 criteria and therefore awarded in-principle acceptance.
Level of bias control achieved: Level 6. No part of the data or evidence that will be used to answer the research question yet exists and no part will be generated until after IPA.
List of eligible PCI RR-friendly journals:
References
1. Dehaene, S., Bossini, S., & Giraux, P. (1993). The mental representation of parity and number magnitude. Journal of Experimental Psychology: General, 122(3), 371–396. https://doi.org/10.1037/0096-3445.122.3.371
2. Roth, L., Cipora, K., Overlander, A. T., Nuerk, H.-C., & Reips, U.-D. (2025). Shape of SNARC: How task-dependent are Spatial-Numerical Associations? A highly powered online experiment. In principle acceptance of Version 3 by Peer Community in Registered Reports. https://osf.io/968ad
Abstract (excerpt): Spatial-Numerical Associations (SNAs) are fundamental to numerical cognition. They are essential for number representation and mathematics learning. However, SNAs are highly dependent on the experimental situation and task. Understanding this d...
Thematic fields: Life Sciences | Recommender: Mario Dalmaso | Reviewers: Peter Wühr | Submission date: 2024-05-27 13:14:25
28 Nov 2023
STAGE 1
One and only SNARC? A Registered Report on the SNARC Effect’s Range Dependency
Lilly Roth, John Caffier, Ulf-Dietrich Reips, Hans-Christoph Nuerk, Krzysztof Cipora
https://osf.io/gr94f

Is the SNARC effect modulated by absolute number magnitude?
Recommended by Robert McIntosh

The Spatial-Numerical Association of Response Codes (SNARC) effect refers to the fact that smaller numbers receive faster responses with the left hand, and larger numbers with the right hand (Dehaene et al., 1993). This robust finding implies that numbers are associated with space, being represented on a mental number line that progresses from left to right. The SNARC effect is held to depend on relative number magnitude, with the mental number line dynamically adjusting to the numerical range used in a given context. This characterisation is based on significant effects of relative number magnitude, with no significant influence of absolute number magnitude. However, a failure to reject the null hypothesis, within the standard frequentist statistical framework, is not firm evidence for the absence of an effect. In this Stage 1 Registered Report, Roth and colleagues (2023) propose two experiments adapted from Dehaene et al.'s (1993) original methods, with a Bayesian statistical approach to confirm—or rule out—a small effect (d = 0.15) of absolute number magnitude in modulating the classic SNARC effect.
The study plan was refined across two rounds of review, with input from two external reviewers and the recommender, after which it was judged to satisfy the Stage 1 criteria for in-principle acceptance (IPA).
URL to the preregistered Stage 1 protocol: https://osf.io/ae2c8
Level of bias control achieved: Level 6. No part of the data or evidence that will be used to answer the research question yet exists and no part will be generated until after IPA.
List of eligible PCI RR-friendly journals:
References
1. Dehaene, S., Bossini, S., & Giraux, P. (1993). The mental representation of parity and number magnitude. Journal of Experimental Psychology: General, 122(3), 371–396. https://doi.org/10.1037/0096-3445.122.3.371
2. Roth, L., Caffier, J., Reips, U.-D., Nuerk, H.-C., & Cipora, K. (2023). One and only SNARC? A Registered Report on the SNARC Effect’s Range Dependency. In principle acceptance of Version 3 by Peer Community in Registered Reports. https://osf.io/ae2c8
Abstract (excerpt): Numbers are associated with space, but it is unclear how flexible these associations are. In this study, we will investigate whether the SNARC effect (Spatial-Numerical Association of Response Codes; Dehaene et al., 1993), which describes faste...
Thematic fields: Social sciences | Recommender: Robert McIntosh | Submission date: 2022-11-30 12:36:08
One and only SNARC? Spatial-Numerical Associations are not fully flexible and depend on both relative and absolute magnitude
Lilly Roth, John Caffier, Ulf-Dietrich Reips, Hans-Christoph Nuerk, Annika Tave Overlander, Krzysztof Cipora
https://osf.io/ajqpk

A Registered Report demonstration that the SNARC effect depends on absolute as well as relative number magnitude
Recommended by Robert McIntosh

The Spatial-Numerical Association of Response Codes (SNARC) effect refers to the fact that smaller numbers receive faster responses with the left hand, and larger numbers with the right hand (Dehaene et al., 1993). This robust finding implies that numbers are associated with space, being represented on a mental number line that progresses from left to right. The SNARC effect is held to depend on relative number magnitude, with the mental number line dynamically adjusting to the numerical range used in a given context. This characterisation is based on significant effects of relative number magnitude, with no significant influence of absolute number magnitude. However, a failure to reject the null hypothesis is not firm evidence for the absence of an effect. In this Registered Report, Roth and colleagues (2024) report two large-sample online experiments, with a Bayesian statistical approach to confirm—or refute—a role for absolute number magnitude in modulating the classic SNARC effect (smallest effect size of interest, d = 0.15).
Experiment 1 closely followed the original methods of Dehaene et al. (1993), and found strong evidence for an influence of relative magnitude, and moderate-to-strong evidence against an influence of absolute magnitude. Experiment 2 was designed to exclude some potential confounds in the original method, and this second experiment found strong evidence for both relative and absolute magnitude effects, of comparable effect sizes (in the range of d = 0.24 to 0.42). This registered study demonstrates that the SNARC effect is not ‘fully flexible’, in the sense of depending only on relative number magnitude; it is also shaped by absolute magnitude.
This Stage 2 manuscript was evaluated by the recommender and one external reviewer. Following appropriate minor revisions, the recommender judged that the manuscript met the Stage 2 criteria for recommendation.
URL to the preregistered Stage 1 protocol: https://osf.io/ae2c8
Level of bias control achieved: Level 6. No part of the data or evidence that was used to answer the research question was generated until after IPA.
List of eligible PCI RR-friendly journals:
References
1. Dehaene, S., Bossini, S., & Giraux, P. (1993). The mental representation of parity and number magnitude. Journal of Experimental Psychology: General, 122, 371–396. https://doi.org/10.1037/0096-3445.122.3.371
2. Roth, L., Caffier, J., Reips, U.-D., Nuerk, H.-C., Overlander, A. T. & Cipora, K. (2023). One and only SNARC? Spatial-Numerical Associations are not fully flexible and depend on both relative and absolute magnitude [Stage 2]. Acceptance of Version 3 by Peer Community in Registered Reports. https://osf.io/epnd4
| One and only SNARC? Spatial-Numerical Associations are not fully flexible and depend on both relative and absolute magnitude | Lilly Roth, John Caffier, Ulf-Dietrich Reips, Hans-Christoph Nuerk, Annika Tave Overlander, Krzysztof Cipora | <p>Numbers are associated with space, but it is unclear how flexible these associations are. We investigated whether the SNARC effect (Spatial-Numerical Association of Response Codes; Dehaene et al., 1993; i.e., faster responses to small/large num... | Life Sciences | Robert McIntosh | 2024-06-10 15:00:30 | View | ||
28 Jan 2025
STAGE 1
See me, judge me, pay me: Gendered effort moralization in work and careLeopold H. O. Roth, Tassilo T. Tissot, Thea Fischer, S. Charlotte Masak
https://osf.io/rz6yu
A gender difference in effort moralization?
Recommended by Adrien Fillon
Effort moralization is the well-known idea that, regardless of actual performance, people who exert more effort are judged more favourably, seen as more moral, and regarded as better collaborators than people who exert less effort. However, studies on this topic have mostly used vignettes featuring a male or gender-neutral protagonist. The current study by Roth et al. (2025) proposes to address this gap by testing the difference in moral attribution between a male and a female protagonist, across two contexts: a “care” and a “work” context, mirroring the stereotypes associated with women and men.
The authors included two different and adequate power analyses, various interpretations of the possible effects, and data filtering to ensure high-quality data collection. They also provide a supplementary repository including the Qualtrics survey, R script, and simulated data.
The Stage 1 manuscript was evaluated over two rounds of in-depth review. Based on detailed responses to the reviewers’ and the recommender’s comments, the recommender judged that the manuscript met the Stage 1 criteria and therefore awarded in-principle acceptance.
URL to the preregistered Stage 1 protocol: https://osf.io/xd87r
Level of bias control achieved: Level 6. No part of the data or evidence that will be used to answer the research question yet exists, and no part will be generated until after IPA.
List of eligible PCI RR-friendly journals:
References
Roth, L. H. O., Tissot, T. T., Fischer, T. & Masak, S. C. (2025). See me, judge me, pay me: Gendered effort moralization in work and care. In principle acceptance of Version 3 by Peer Community in Registered Reports. https://osf.io/xd87r
| See me, judge me, pay me: Gendered effort moralization in work and care | Leopold H. O. Roth, Tassilo T. Tissot, Thea Fischer, S. Charlotte Masak | <p>The display of high effort at work is commonly rewarded with more positive moral judgements and increased cooperation partner attractiveness. This effect was shown to hold, even if higher effort is unrelated to better performance. Yet, current ... | Social sciences | Adrien Fillon | 2024-09-09 15:12:30 | View |