Announcements
We are recruiting recommenders (editors) from all research fields!
Your feedback matters! If you have authored or reviewed a Registered Report at Peer Community in Registered Reports, then please take 5 minutes to leave anonymous feedback about your experience, and view community ratings.
Latest recommendations
The Effect of Brooding about Societal Problems on Conspiracy Beliefs: A Registered Report
Luisa Liekefett, Simone Sebben, Julia C. Becker
https://osf.io/3e8wc
Brooding increases conspiracy beliefs but with practical significance to be determined
Recommended by Chris Chambers
The world is seemingly awash with conspiracy theories – from well-trodden examples such as fake Moon landings, the 9/11 truth movement, and Holocaust denial, to relative newcomers including COVID as a bioweapon, QAnon, and the belief that the science of climate change has been invented or falsified. While there is a public perception that conspiracy theories are becoming more prevalent, recent evidence suggests that the rate of conspiracism is relatively stable over time (Uscinski et al., 2022). At any point in history, it seems that a certain proportion of people find themselves vulnerable to conspiracy beliefs, but what distinguishes those who do from those who don’t, and what are the causal factors?
In the current study, Liekefett et al. (2023) investigated the critical role of rumination – a perseverative and repetitive focus on negative content leading to emotional distress. In particular, the authors asked whether one component of rumination referred to as brooding (dwelling on one’s worries and distressing emotions) has a specific causal role in the formation of conspiracy beliefs. In a series of preliminary experiments, the authors first established a procedure for successfully inducing rumination, identifying various boundary conditions and requirements for a successful design. In the main study (N=1,638 to 2,007 depending on the analysis), they asked whether the induction of brooding causes a significant increase in conspiracy beliefs. Manipulation checks were also included to confirm intervention fidelity (independently of this hypothesis), and exploratory analyses tested the effect of various moderators, as well as the causal role of a complementary manipulation of reflection – a component of rumination in which attention is focused on the issue at hand rather than one’s emotions.
Consistent with the authors' preliminary work, manipulation checks independently confirmed the effectiveness of the brooding intervention. In answer to the main research question, participants who brooded over the worries and negative emotions associated with an issue were more susceptible to conspiracy beliefs than a control group. However, while this effect of brooding was statistically significant, the confidence interval of the effect size estimate included the authors' proposed smallest effect size of interest (d = 0.20), suggesting that the practical value of the effect remains to be determined.
Overall, the findings are consistent with a range of psychological theories suggesting that rumination induces negative affect and/or narrows attention to negative information, which in turn may make conspiracy theories seem more probable and render individuals more vulnerable to cognitive bias. The authors note the importance of future work to define the smallest effect of practical significance, analogous to the criteria used to determine the ‘minimal clinically important difference’ in medical research.
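To make the logic of this comparison concrete, the sketch below – which uses entirely made-up summary statistics rather than the study's data, and a large-sample normal approximation for the confidence interval – shows how an effect-size estimate can be checked against a smallest effect size of interest of d = 0.20:

```python
import numpy as np
from scipy import stats

# Hypothetical group summaries (illustrative only; not taken from the study)
n1, n2 = 900, 900          # brooding and control group sizes
mean1, mean2 = 3.45, 3.30  # mean conspiracy-belief scores
sd1, sd2 = 1.00, 1.00      # standard deviations

# Cohen's d based on the pooled standard deviation
sd_pooled = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
d = (mean1 - mean2) / sd_pooled

# Approximate standard error and 95% CI for d (large-sample normal approximation)
se_d = np.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
ci_low, ci_high = d + np.array([-1, 1]) * stats.norm.ppf(0.975) * se_d

sesoi = 0.20  # smallest effect size of interest
print(f"d = {d:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
print("CI excludes zero (statistically significant):", ci_low > 0)
print("CI excludes the SESOI (practically meaningful):", ci_low > sesoi)
```

Under this logic, an effect can be statistically significant (the interval excludes zero) while its practical value remains undetermined (the interval still includes the SESOI) – the pattern reported here.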
The Stage 2 manuscript was evaluated over one round of in-depth review. Based on detailed responses to the reviewers' comments, the recommender judged that the manuscript met the Stage 2 criteria and awarded a positive recommendation.
URL to the preregistered Stage 1 protocol: https://osf.io/y82bs
Level of bias control achieved: Level 6. No part of the data or evidence that was used to answer the research question was generated until after IPA.
References
1. Uscinski, J., Enders, A., Klofstad, C., Seelig, M., Drochon, H., Premaratne, K. & Murthi, M. (2022) Have beliefs in conspiracy theories increased over time? PLOS ONE 17: e0270429. https://doi.org/10.1371/journal.pone.0270429
2. Liekefett, L., Sebben, S. & Becker, J. C. (2023). The Effect of Brooding about Societal Problems on Conspiracy Beliefs: A Registered Report. Acceptance of Version 2 by Peer Community in Registered Reports. https://osf.io/3e8wc
Thematic fields: Humanities, Social sciences | Recommender: Chris Chambers | Submitted: 2023-10-19 17:46:59
24 Feb 2025
STAGE 1
Gold in, gold out. Quality appraisal and risk of bias tools to assess non-intervention studies for systematic reviews in the behavioural sciences: A scoping review
Lucija Batinović, Jade S. Pickering, Olmo R. van den Akker, Dorothy Bishop, Mahmoud Elsherif, Thomas Rhys Evans, Melissa Gibbs, Tamara Kalandadze, Janneke Staaks, Marta Topor
https://osf.io/7p8bm
Scoping review of quality appraisal and risk of bias tools and their relevance for behavioral sciences
Recommended by Antica Culina
Systematic reviews and meta-analyses are becoming increasingly popular across the sciences, often influencing future research, policy, and interventions. The conclusions of an evidence synthesis depend on the quality of the primary studies (i.e. the evidence) it includes. Assessing the quality of, and risk of bias in, these primary studies should therefore be an essential component of evidence synthesis. However, in many scientific fields, including the behavioural sciences, this is rarely done.
In this Stage 1 manuscript, Batinović et al. (2025) propose to conduct a systematic map of existing tools for assessing methodological quality and risk of bias across scientific fields, and to map their applicability to primary studies within the broad field of the behavioural sciences. The review will provide a comprehensive overview of how existing tools can be applied in the behavioural sciences and identify gaps to guide the future development of relevant tools in the field. The protocol and its methods were thoroughly developed and are suitable for achieving the research aims.
The Stage 1 submission was evaluated by two expert reviewers. After two rounds of revision, the recommender judged that the manuscript met the Stage 1 criteria, and the manuscript was awarded in-principle acceptance (IPA).
URL to the preregistered Stage 1 protocol: https://osf.io/4gy5b
Level of bias control achieved: Level 4. At least some of the data/evidence that will be used to answer the research question already exists AND is accessible in principle to the authors (e.g. residing in a public database or with a colleague) BUT the authors certify that they have not yet accessed any part of that data/evidence.
References
1. Batinović, L., Pickering, J. S., van den Akker, O. R., Bishop, D., Elsherif, M., Evans, T. R., Gibbs, M., Kalandadze, T., Staaks, J., & Topor, M. (2025). Gold in, gold out. Quality appraisal and risk of bias tools to assess non-intervention studies for systematic reviews in the behavioural sciences: A scoping review. In principle acceptance of Version 3 by Peer Community in Registered Reports. https://osf.io/4gy5b
Thematic fields: Social sciences | Recommender: Antica Culina | Submitted: 2024-06-30 20:24:39
27 Mar 2024
STAGE 1
Registered Report: Are anticipatory predictions enhanced in tinnitus and independent of hearing loss?
L. Reisinger, G. Demarchi, S. Rösch, E. Trinka, J. Obleser, N. Weisz
https://osf.io/t34p8
Can predictive coding explain subjective tinnitus?
Recommended by Chris Chambers
Subjective tinnitus is a common disorder in which people experience a persistent sound in the absence of any external source. The underlying causes of tinnitus are debated – although the condition is strongly associated with hearing loss resulting from auditory damage, much remains to be understood about the neural processes that give rise to the phantom perception. Various classes of neurophysiological theories have been proposed, including the “altered gain” model – in which neurons in the auditory pathway increase their responsiveness to compensate for reduced auditory input following hearing loss – and the “noise cancellation” model – in which disrupted feedback connections from limbic regions are unable to tune out phantom signals. Although these theories account for much observed data, they have not been conclusively supported, and their ability to explain tinnitus is limited by the fact that hearing loss and tinnitus can arise independently and at different times.
In the current study, Reisinger et al. (2023) will test an emerging alternative theory based on a Bayesian predictive-coding framework (Sedley et al., 2016) in which the alteration of perceptual priors leads the auditory system to expect a sound that, if functioning normally, it should not expect. Using magnetoencephalography (MEG) in a sample of tinnitus patients (and carefully-matched controls for age, gender, and level of hearing loss), they will ask whether tinnitus is associated with anticipatory brain activation, tuned to the carrier-frequency of an expected auditory stimulus. Specifically, the authors predict that if the predictive-coding framework is correct then individuals with tinnitus should show different regularity-dependent pre-activations of carrier-frequency-specific information compared to the control group, while tone carrier-frequencies should be processed normally in tinnitus patients. They also predict that any such pre-activations should not be related to levels of reported subjective tinnitus distress, as measured with the short version of the Tinnitus Questionnaire (mini-TQ).
The Stage 1 manuscript was evaluated over two rounds of in-depth review. Based on detailed responses to the reviewers' comments, the recommender judged that the manuscript met the Stage 1 criteria and therefore awarded in-principle acceptance (IPA).
URL to the preregistered Stage 1 protocol: https://osf.io/6gvpy
Level of bias control achieved: Level 3. At least some data/evidence that will be used to answer the research question has been previously accessed by the authors (e.g. downloaded or otherwise received), but the authors certify that they have not yet observed ANY part of the data/evidence.
References
1. Reisinger, L., Demarchi, G., Rösch, S., Trinka, E., Obleser, J., & Weisz, N. (2023). Registered Report: Are anticipatory predictions enhanced in tinnitus and independent of hearing loss? In principle acceptance of Version 4 by Peer Community in Registered Reports. https://osf.io/6gvpy
2. Sedley, W., Friston, K. J., Gander, P. E., Kumar, S., & Griffiths, T. D. (2016). An integrative tinnitus model based on sensory precision. Trends in Neurosciences, 39, 799-812. https://doi.org/10.1016/j.tins.2016.10.004
Thematic fields: Life Sciences | Recommender: Chris Chambers | Submitted: 2023-01-03 08:35:12
Registered Report: Are anticipatory auditory predictions enhanced in tinnitus and independent of hearing loss?
L. Reisinger, G. Demarchi, S. Rösch, E. Trinka, J. Obleser, N. Weisz
https://osf.io/9wqjh
Evidence for the role of predictive coding in subjective tinnitus
Recommended by Chris Chambers
Subjective tinnitus is a common disorder in which people experience a persistent sound in the absence of any external source. The underlying causes of tinnitus are debated – although the condition is strongly associated with hearing loss resulting from auditory damage, much remains to be understood about the neural processes that give rise to the phantom perception. Various classes of neurophysiological theories have been proposed, including the “altered gain” model – in which neurons in the auditory pathway increase their responsiveness to compensate for reduced auditory input following hearing loss – and the “noise cancellation” model – in which disrupted feedback connections from limbic regions are unable to tune out phantom signals. Although these theories account for much observed data, they have not been conclusively supported, and their ability to explain tinnitus is limited by the fact that hearing loss and tinnitus can arise independently and at different times.
In the current study, Reisinger et al. (2024) tested an emerging alternative theory based on a Bayesian predictive-coding framework (Sedley et al., 2016) in which the alteration of perceptual priors leads the auditory system to expect a sound that, if functioning normally, it should not expect. Using magnetoencephalography (MEG) in a sample of tinnitus patients (and carefully-matched controls for age, gender, and level of hearing loss), they asked whether tinnitus is associated with anticipatory brain activation, tuned to the carrier-frequency of an expected auditory stimulus. Specifically, the authors predicted that if the predictive-coding framework is correct then individuals with tinnitus should show different regularity-dependent pre-activations of carrier-frequency-specific information compared to the control group, while tone carrier-frequencies should be processed normally in tinnitus patients. They also predicted that any such pre-activations should not be related to levels of reported subjective tinnitus distress, as measured with the short version of the Tinnitus Questionnaire (mini-TQ).
The results broadly confirmed the hypotheses, with some caveats. Statistically significant differences in regularity-dependent pre-activations were observed between the tinnitus and control groups; however – curiously – the effects appear to be driven by below-chance decoding in the control group, complicating the interpretation. At the same time, consistent with expectations, frequency processing did not differ significantly between individuals with and without tinnitus, and the observed pre-activations were not significantly related to tinnitus distress. Overall, the findings cautiously support the conclusion that chronic tinnitus is associated with maladaptively upregulated predictive neural processing, and that this phenomenon is unlikely to be explained by either tinnitus distress or hearing loss.
The Stage 2 manuscript was evaluated over one round of in-depth review. Based on detailed responses to the reviewers' comments, the recommender judged that the manuscript met the Stage 2 criteria and awarded a positive recommendation.
URL to the preregistered Stage 1 protocol: https://osf.io/6gvpy
Level of bias control achieved: Level 3. At least some data/evidence that was used to answer the research question had been previously accessed by the authors (e.g. downloaded or otherwise received), but the authors certify that they had not yet observed any part of the data/evidence prior to Stage 1 IPA.
References
1. Reisinger, L., Demarchi, G., Rösch, S., Trinka, E., Obleser, J., & Weisz, N. (2024). Registered Report: Are anticipatory auditory predictions enhanced in tinnitus and independent of hearing loss? [Stage 2]. Acceptance of Version 3 by Peer Community in Registered Reports. https://osf.io/9wqjh
2. Sedley, W., Friston, K. J., Gander, P. E., Kumar, S., & Griffiths, T. D. (2016). An integrative tinnitus model based on sensory precision. Trends in Neurosciences, 39, 799-812. https://doi.org/10.1016/j.tins.2016.10.004
Thematic fields: Life Sciences | Recommender: Chris Chambers | Submitted: 2024-02-21 16:17:33
10 Jan 2025
STAGE 1
Development and evaluation of a revised 20-item short version of the UPPS-P Impulsive Behavior Scale
Loïs Fournier, Alexandre Heeren, Stéphanie Baggio, Luke Clark, Antonio Verdejo-García, José C. Perales, Joël Billieux
https://osf.io/wevc4
Assessing Impulsivity Measurement (UPPS-P-20-R)
Recommended by Veli-Matti Karhulahti
Impulsivity, as a construct, has an established history, with various models and theories (Leshem & Glicksohn 2007) having accumulated evidence of its relevance, especially for mental disorders. One of the dominant models, the Impulsive Behavior Model, is conventionally measured in survey studies with UPPS-P scales, a short version of which was recently assessed in a large cross-cultural project (Fournier et al. 2024). In the present study, Fournier and colleagues (2025) aim to further test the revised 20-item scale in English via a three-phase protocol involving evaluations of construct validity, internal consistency reliability, test-retest reliability, convergent validity, and criterion validity. As such, the study contributes to the ongoing development of useful and up-to-date survey scales, which can help researchers avoid measurement issues (Flake & Fried 2020) in the various fields where impulsivity plays a role.
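As a generic illustration of one of these evaluation steps – not the authors' planned analysis, and using hypothetical responses – internal consistency reliability for a short scale is commonly summarised with Cronbach's alpha, which can be computed directly from a respondents-by-items score matrix:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point responses from 6 respondents to a 4-item subscale
scores = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
    [4, 4, 5, 4],
])
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```

The data above are for illustration only; the study itself will evaluate reliability on responses to the actual UPPS-P-20-R items.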
The study was reviewed over three rounds by two reviewers with topic and methods expertise, respectively. Based on detailed responses to reviewers’ feedback and the recommender’s comments on the construct, the recommender judged that the manuscript met the Stage 1 criteria and therefore awarded in-principle acceptance.
URL to the preregistered Stage 1 protocol: https://osf.io/wevc4 (under temporary private embargo)
Level of bias control achieved: Level 6. No part of the data or evidence that will be used to answer the research question yet exists and no part will be generated until after IPA.
References
1. Flake, J. K. & Fried, E. I. (2020). Measurement schmeasurement: Questionable measurement practices and how to avoid them. Advances in Methods and Practices in Psychological Science, 3, 456-465. https://doi.org/10.1177/2515245920952393
2. Fournier, L., Bőthe, … & Billieux, J. (2024). Evaluating the factor structure and measurement invariance of the 20-item short version of the UPPS-P Impulsive Behavior Scale across multiple countries, languages, and gender identities. Assessment, 10731911241259560. https://doi.org/10.1177/10731911241259560
3. Fournier, L., Heeren, A., Baggio, S., Clark, L., Verdejo-García, A., Perales, J. C., & Billieux, J. (2025). Development and evaluation of a revised 20-item short version of the UPPS-P Impulsive Behavior Scale. In principle acceptance of Version 3 by Peer Community in Registered Reports. https://osf.io/wevc4
4. Leshem, R. & Glicksohn, J. (2007). The construct of impulsivity revisited. Personality and Individual Differences, 43, 681-691. https://doi.org/10.1016/j.paid.2007.01.015
Thematic fields: Social sciences | Recommender: Veli-Matti Karhulahti | Reviewers: Ivan Ropovik | Submitted: 2024-06-27 17:47:17
08 Sep 2022
STAGE 1
How to succeed in human modified environments
Logan CJ, Shaw R, Lukas D, McCune KB
http://corinalogan.com/ManyIndividuals/mi1.html
The role of behavioural flexibility in promoting resilience to human environmental impacts
Recommended by Chris Chambers
Understanding and mitigating the environmental effects of human expansion is crucial for ensuring long-term biosustainability. Recent research indicates a steep increase in urbanisation – including the expansion of cities – with global urban extent expanding by nearly 10,000 km² per year between 1985 and 2015 (Liu et al., 2020). The consequences of these human modified environments for animal life are significant: in order to succeed, species must adapt quickly to environmental changes, and those populations that demonstrate greater behavioural flexibility are likely to cope more effectively. These observations have, in turn, prompted the question of whether enhancing behavioural flexibility in animal species might increase their resilience to human impacts.
In the current research, Logan et al. (2022) will use a serial reversal learning paradigm to first understand how behavioural flexibility relates to success in avian species that are already successful in human modified environments. The authors will then deploy these flexibility interventions in more vulnerable species to establish whether behavioural training can improve success, as measured by outcomes such as foraging breadth, dispersal dynamics, and survival rate.
The Stage 1 manuscript was submitted via the programmatic track and will eventually produce three Stage 2 outputs focusing on different species (toutouwai, grackles, and jays). Following two rounds of in-depth review, the recommender judged that the manuscript met the Stage 1 criteria and awarded in-principle acceptance (IPA).
URL to the preregistered Stage 1 protocol: https://osf.io/wbsn6
Level of bias control achieved: Level 6. No part of the data or evidence that will be used to answer the research question yet exists and no part will be generated until after IPA.
References
1. Liu, X., Huang, Y., Xu, X., Li, X., Li, X., Ciais, P., Lin, P., Gong, K., Ziegler, A. D., Chen, A., et al. (2020). High-spatiotemporal-resolution mapping of global urban change from 1985 to 2015. Nature Sustainability, 1–7. https://doi.org/10.1038/s41893-020-0521-x
2. Logan, C. J., Shaw, R., Lukas, D. & McCune, K. B. (2022). How to succeed in human modified environments. In principle acceptance of Version 3 by Peer Community in Registered Reports. https://osf.io/wbsn6
Thematic fields: Life Sciences | Recommender: Chris Chambers | Submitted: 2022-05-06 12:12:05
14 Feb 2024
STAGE 1
Restriction of researcher degrees of freedom through the Psychological Research Preregistration-Quantitative (PRP-QUANT) Template
Lisa Spitzer & Stefanie Mueller
https://doi.org/10.23668/psycharchives.14119
Examining the restrictiveness of the PRP-QUANT Template
Recommended by Daniel Lakens
The Psychological Research Preregistration-Quantitative (PRP-QUANT) Template was created in 2022 to provide more structure and detail to preregistrations. The goal of the current study is to test whether the PRP-QUANT template restricts flexibility around preregistered hypotheses more effectively than other existing templates. This question is important because one concern that has been raised about the practice of preregistration is that the quality of preregistrations is often low. Metascientific research has shown that preregistrations are often of low quality (Bakker et al., 2020), and that hypothesis tests from preregistrations are still selectively reported (van den Akker, van Assen, Enting, et al., 2023). It is important to improve the quality of preregistrations, and if a better template can help, then promoting its wider adoption is a cost-effective way to improve quality.
In the current study, Spitzer and Mueller (2024) will follow the procedure of a previous meta-scientific study by Heirene et al. (2021). Seventy-four existing preregistrations using the PRP-QUANT template are available and will be compared with an existing dataset coded by Bakker and colleagues (2020). The sample size is limited but sufficient to detect differences large enough to matter, although smaller differences may not be detectable with the currently available sample. Nevertheless, given the need for improvement, even preliminary data may be useful for providing tentative recommendations. Restrictiveness will be coded on 23 items, and adherence to, or deviations from, the preregistration will be coded as well. Because such deviations are common, whether this template reduces their likelihood is an important question. Two coders will code all studies.
The study should provide a useful initial evaluation of the PRP-QUANT template, and it could have practical implications if the template shows clear benefits. Both authors have declared conflicts of interest related to the PRP-QUANT template, making the Registered Report format a fitting approach for preventing confirmation bias from influencing the reported results.
This Stage 1 manuscript was evaluated over two rounds of in-depth review by two expert reviewers and the recommender. After the revisions, the recommender judged that the manuscript met the Stage 1 criteria and therefore awarded in-principle acceptance (IPA).
URL to the preregistered Stage 1 protocol: https://osf.io/vhezj
Level of bias control achieved: Level 3. At least some data/evidence that will be used to answer the research question has been previously accessed by the authors (e.g. downloaded or otherwise received), but the authors certify that they have not yet observed ANY part of the data/evidence.
References
1. van den Akker, O. R., van Assen, M. A. L. M., Bakker, M., Elsherif, M., Wong, T. K., & Wicherts, J. M. (2023). Preregistration in practice: A comparison of preregistered and non-preregistered studies in psychology. Behavior Research Methods. https://doi.org/10.3758/s13428-023-02277-0
2. Bakker, M., Veldkamp, C. L. S., Assen, M. A. L. M. van, Crompvoets, E. A. V., Ong, H. H., Nosek, B. A., Soderberg, C. K., Mellor, D., & Wicherts, J. M. (2020). Ensuring the quality and specificity of preregistrations. PLOS Biology, 18(12), e3000937. https://doi.org/10.1371/journal.pbio.3000937
3. Spitzer, L. & Mueller, S. (2024). Stage 1 Registered Report: Restriction of researcher degrees of freedom through the Psychological Research Preregistration-Quantitative (PRP-QUANT) Template. In principle acceptance of Version 3 by Peer Community in Registered Reports. https://osf.io/vhezj
4. Heirene, R., LaPlante, D., Louderback, E. R., Keen, B., Bakker, M., Serafimovska, A., & Gainsbury, S. M. (2021). Preregistration specificity & adherence: A review of preregistered gambling studies & cross-disciplinary comparison. PsyArXiv. https://doi.org/10.31234/osf.io/nj4es
Thematic fields: Social sciences | Recommender: Daniel Lakens | Submitted: 2023-06-01 10:39:20
23 Jan 2025
STAGE 1
Mapping methodological variation in experience sampling research from design to data analysis: A systematic review
Lisa Peeters, Wim Van Den Noortgate, M. Annelise Blanchard, Gudrun Eisele, Olivia Kirtley, Richard Artner, Ginette Lafit
https://osf.io/8mwgu
Methodological Variation in Experience Sampling Methods: Can We Do ESM Better?
Recommended by Thomas Evans
The replication crisis/credibility revolution has driven a vast number of changes to our research environment (Korbmacher et al., 2023), including a much-needed spotlight on issues surrounding measurement (Flake & Fried, 2020). As general understanding and awareness of the 'garden of forking paths' or 'researcher degrees of freedom' (Simmons et al., 2011) has grown – that is, of the many discretionary decisions made during the scientific process that can affect the conclusions it produces – so too should our interest in meta-research that tells us more about the methodological processes we follow and how those decisions may influence the design, analysis and reporting of a project.
Peeters et al. (2025) have proposed a systematic literature review of this nature, mapping methodological variation in experience sampling method (ESM) research from the design stage all the way to dissemination. The review begins by mapping how ESM studies vary in design, considering factors such as sample size, number of measurements, and sampling scheme. It will also evaluate reporting quality and the rationales provided for design decisions, and capture the extent of open science practices adopted. By covering many parts of the research process that are often assumed, unreported or otherwise unjustified, the proposed work looks set to springboard an important body of work that can tell us how to design, implement and report ESM studies more effectively.
The Stage 1 submission was reviewed over one round of in-depth review with two reviewers. Based on detailed responses to reviewers’ feedback, the recommender judged that the manuscript met the Stage 1 criteria and therefore awarded in-principle acceptance.
URL to the preregistered Stage 1 protocol: https://osf.io/ztvn3
Level of bias control achieved: Level 1. At least some of the data/evidence that will be used to answer the research question has been accessed and observed by the authors, including key variables, but the authors certify that they have not yet performed any of their preregistered analyses, and in addition they have taken stringent steps to reduce the risk of bias.
References
1. Flake, J. K., & Fried, E. I. (2020). Measurement schmeasurement: Questionable measurement practices and how to avoid them. Advances in Methods and Practices in Psychological Science, 3, 456-465. https://doi.org/10.1177/2515245920952393
2. Korbmacher, M., Azevedo, F., Pennington, C. R., Hartmann, H., Pownall, M., Schmidt, K., ... & Evans, T. (2023). The replication crisis has led to positive structural, procedural, and community changes. Communications Psychology, 1, 3. https://doi.org/10.1038/s44271-023-00003-2
3. Peeters, L., Van Den Noortgate, W., Blanchard, M. A., Eisele, G., Kirtley, O., Artner, R., & Lafit, G. (2025). Mapping Methodological Variation in ESM Research from Design to Data Analysis: A Systematic Review. In principle acceptance of Version 3 by Peer Community in Registered Reports. https://osf.io/ztvn3
4. Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22, 1359-1366. https://doi.org/10.1177/0956797611417632
Thematic fields: Social sciences | Recommender: Thomas Evans | Submitted: 2024-09-04 10:39:37
21 Mar 2023
STAGE 1
Convenience Samples and Measurement Equivalence in Replication Research
Lindsay J. Alley, Jordan Axt, Jessica Kay Flake
https://osf.io/32unb
Does data from students and crowdsourced online platforms measure the same thing? Determining the external validity of combining data from these two types of subjects
Recommended by Corina Logan
Comparative research is how evidence is generated to support or refute broad hypotheses (e.g., Pagel 1999). However, the foundations of such research must be solid if one is to arrive at the correct conclusions. Determining the external validity (the generalizability across situations/individuals/populations) of the building blocks of comparative data sets allows one to place appropriate caveats around the robustness of one's conclusions (Steckler & McLeroy 2008).
In this registered report, Alley and colleagues (2023) plan to tackle the external validity of comparative research that relies on subjects who are either university students or participants recruited via an online platform. They will determine whether data from these two types of subjects have measurement equivalence – whether the same trait is measured in the same way across groups. Although they use data from studies involved in the Many Labs replication project to evaluate this question, their results will be of crucial importance to other comparative researchers whose data are generated from these two sources (students and online crowdsourcing). If Alley and colleagues show that these two types of subjects have measurement equivalence, this indicates that equivalence is more likely to hold for other studies relying on these types of subjects as well. If measurement equivalence is not found, it is a warning to others to evaluate their experimental design to improve validity. In either case, the study gives researchers a way to test measurement equivalence for themselves, because the code is well annotated and openly available for others to use.
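For readers unfamiliar with what 'measured in the same way across groups' means formally, measurement equivalence (invariance) is conventionally assessed by placing increasingly strict equality constraints on a multi-group factor model. The sketch below uses a generic one-factor formulation with standard notation; it is not taken from the authors' protocol:

```latex
% Generic one-factor, two-group measurement model (illustrative notation only)
\[
  x^{(g)}_{ij} \;=\; \tau^{(g)}_{j} + \lambda^{(g)}_{j}\,\xi_{i} + \varepsilon_{ij},
  \qquad g \in \{\text{student},\ \text{online}\}
\]
% x^{(g)}_{ij}: person i's response to item j in group g
% \lambda^{(g)}_{j}: factor loading; \tau^{(g)}_{j}: item intercept
%
% Configural invariance: the same items load on the same factor(s) in both groups.
% Metric (weak) invariance: \lambda^{(1)}_{j} = \lambda^{(2)}_{j} for all items j.
% Scalar (strong) invariance: additionally \tau^{(1)}_{j} = \tau^{(2)}_{j} for all items j.
% Each level is evaluated by comparing the fit of the constrained model
% against that of the less constrained one.
```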
The Stage 1 manuscript was evaluated over two rounds of in-depth review. Based on detailed responses to the reviewers' comments, the recommender judged that the manuscript met the Stage 1 criteria and therefore awarded in-principle acceptance (IPA).
URL to the preregistered Stage 1 protocol: https://osf.io/7gtvf
Level of bias control achieved: Level 2. At least some data/evidence that will be used to answer the research question has been accessed and partially observed by the authors, but the authors certify that they have not yet observed the key variables within the data that will be used to answer the research question AND they have taken additional steps to maximise bias control and rigour (e.g. conservative statistical threshold; recruitment of a blinded analyst; robustness testing, multiverse/specification analysis, or other approach).
References
1. Alley, L. J., Axt, J., & Flake, J. K. (2023). Convenience Samples and Measurement Equivalence in Replication Research. In principle acceptance of Version 4 by Peer Community in Registered Reports. https://osf.io/7gtvf
2. Steckler, A. & McLeroy, K. R. (2008). The importance of external validity. American Journal of Public Health 98, 9-10. https://doi.org/10.2105/AJPH.2007.126847
3. Pagel, M. (1999). Inferring the historical patterns of biological evolution. Nature, 401, 877-884. https://doi.org/10.1038/44766
Thematic fields: Social sciences | Recommender: Corina Logan | Submitted: 2022-11-29 18:37:54
Convenience Samples and Measurement Equivalence in Replication Research
Lindsay J. Alley, Jordan Axt, Jessica Kay Flake
https://osf.io/s5t3v
Data from students and crowdsourced online platforms do not often measure the same thing
Recommended by Corina Logan
Comparative research is how evidence is generated to support or refute broad hypotheses (e.g., Pagel 1999). However, the foundations of such research must be solid if one is to arrive at the correct conclusions. Determining the external validity (the generalizability across situations/individuals/populations) of the building blocks of comparative data sets allows one to place appropriate caveats around the robustness of one's conclusions (Steckler & McLeroy 2008).
In the current study, Alley and colleagues (2023) tackled the external validity of comparative research that relies on subjects who are either university students or participants recruited via an online platform. They determined whether data from these two types of subjects have measurement equivalence – whether the same trait is measured in the same way across groups. Although they used data from studies involved in the Many Labs replication project to evaluate this question, their results are of crucial importance to other comparative researchers whose data are generated from these two sources (students and online crowdsourcing). The authors show that these two types of subjects do not often have measurement equivalence, which is a warning to others to evaluate their experimental design to improve validity. They provide useful recommendations for researchers on how to implement equivalence testing in their studies, and they facilitate the process by providing well-annotated code that is openly available for others to use.
After one round of review and revision, the recommender judged that the manuscript met the Stage 2 criteria and awarded a positive recommendation.
URL to the preregistered Stage 1 protocol: https://osf.io/7gtvf
Level of bias control achieved: Level 2. At least some data/evidence that was used to answer the research question had been accessed and partially observed by the authors prior to Stage 1 IPA, but the authors certify that they had not yet observed the key variables within the data that were used to answer the research question AND they took additional steps to maximise bias control and rigour.
References
1. Pagel, M. (1999). Inferring the historical patterns of biological evolution. Nature, 401, 877-884. https://doi.org/10.1038/44766
2. Steckler, A. & McLeroy, K. R. (2008). The importance of external validity. American Journal of Public Health 98, 9-10. https://doi.org/10.2105/AJPH.2007.126847
3. Alley, L. J., Axt, J., & Flake, J. K. (2023). Convenience Samples and Measurement Equivalence in Replication Research [Stage 2 Registered Report]. Acceptance of Version 2 by Peer Community in Registered Reports. https://osf.io/s5t3v
Thematic fields: Social sciences | Recommender: Corina Logan | Reviewers: Alison Young Reusser | Submitted: 2023-08-31 20:26:43
MANAGING BOARD
Chris Chambers
Zoltan Dienes
Corina Logan
Benoit Pujol
Maanasa Raghavan
Emily S Sena
Yuki Yamada