Incorporating open research practices into the undergraduate curriculum increases understanding of such practices

Parker

They additionally found that those who preregistered were the students who reported greater capability, motivation, and opportunity. This shows how important it is to incorporate the teaching of open research practices so that students can increase their capability, motivation, and opportunity to pursue such practices, whether preregistration or other practices known to reduce QRPs (such as registered reports; Krypotos et al., 2022). After four rounds of review and revisions, the recommender judged that the manuscript met the Stage 2 criteria and awarded a positive recommendation.

URL to the preregistered Stage 1 protocol: https://osf.io/9hjbw
Level of bias control achieved: Level 6. No part of the data or evidence that was used to answer the research question was generated until after IPA.
List of eligible PCI RR-friendly journals:

Thank you for your revision and for responding to the comments, which addressed almost all of the remaining points. I found the new exploratory analysis result, that those who planned to preregister but didn't rated their capability as lower, really interesting and useful information for teachers to have.
Thank you for providing clarifications on how your Stage 1 and previous Stage 2 version show that the reduced sample size was not a deviation. However, the issue of making it clear which of your results were below or above your power threshold remains. I think the confusion comes down to two things: 1) "This means that we were only able to detect stronger effects rather than moderate effects, of which none were found." It is unclear whether "of which none were found" indicates that you found no strong effects in your results, and were therefore unable to detect any effects in your study, or whether it indicates that you found only strong effects and no moderate effects, which would indicate that you had the power to detect your strong effects. Clarifying this sentence would help resolve the issue.
2) It would make it much clearer if, in the Results section, you added text after the presentation of each result to note whether the effect size was larger or smaller than the threshold at which you were able to detect effects with 80% power.
Once this issue has been resolved, I'm happy to issue IPA. I look forward to reading your next version.

Evaluation round #3
Dear authors,

Thank you very much for your revision and point-by-point response to the comments made in the previous round. I sent the revised version back to one of the reviewers, but, unfortunately, they do not have time to re-review it. Therefore, I am making the decision based on the information I have, and I am asking for a revision to re-address some of the reviewer points in a more comprehensive way. Specifically, I would like you to better address the following comments from Round 2 in order to make the most of this excellent piece of research and add more value.

1) Neil Lewis, Jr. noted that "It could be beneficial to connect the current results with those broader calls about what is necessary for moving the open science movement forward". In your revision, you added only one general sentence that briefly cited other articles. Your results could be more meaningfully connected with the broader literature if you provided more detail and delved more into the interesting practical and theoretical aspects of these connections.
2) Lisa Spitzer expressed interest in "exploratory analyses of students who wanted to preregister but then did not. Perhaps it might be interesting to look at their results of capability, opportunity, and motivation?" In your response, you declined to conduct the analyses due to "keeping the paper within scope" and a "very long word count". At PCI RR, there are no word limits, and, while you might have a target journal in mind that imposes word limits, your article at PCI RR is independent of this. I err in favor of adding value to the research to get as much as you can out of all of your hard work. While it is entirely your choice whether you conduct a post-hoc analysis, you might consider whether this would add value to the data you were able to collect and, if you feel it relevant, go ahead with it. To be clear, you don't need to re-address this point in your revision; I just wanted to bring it up in the context of there not being any word limits at PCI RR, so you are free to conduct the analysis if you want to.
3) The anonymous reviewer had doubts about your Stage 2 sample size (n = 89: 52 in the experimental group, 37 in the control group) being much smaller than what was expected (n = 200: 100 in each group) at Stage 1. Here is the reviewer's comment: "My most serious concern with this Stage 2 report is the drastic difference in the planned and achieved sample size. While the Stage 1 proposed to collect a final sample of 200, with 100 participants in each group, the final sample comprised less than half of this planned amount, and only 37 subjects in one group. I appreciate that this study was subject to recruitment and retention issues, and that the study was conducted under time pressure, but this strikes me as a major drawback in the Registered Report context. What is the achieved power, based on the analyzed sample, for the effect size previously proposed at Stage 1? This concerns me both in terms of the reliability of the observed effects, as well as our ability to confidently interpret the null findings." This calls into question whether your Stage 2 meets the review criterion "2C. Whether the authors adhered precisely to the registered study procedures" and, consequently, "2E. Whether the authors' conclusions are justified given the evidence" (https://rr.peercommunityin.org/help/guide_for_recommenders#h_6759646236401613643390905).
The much smaller sample size was not discussed with PCI RR as this deviation was starting to unfold during the course of the research. I would like a full justification covering: 1) whether this small sample size is a deviation from the Stage 1 plan; if so, 2) an explanation of exactly how it differs from the Stage 1 plan and why you think it is still scientifically valid, using the details set out in your power analyses as well as any other evidence that can show this; 3) if your small sample size is not a deviation from the Stage 1 plan, an explanation of exactly why, using details from your Stage 1 power analysis (as well as any other evidence you can bring to bear on the issue) to show why this is the case. A summary of these details should also be included in the article to help readers understand this point, because this question will come up for future readers as it already has during the review process. I appreciate that you attempted to address this comment in your response; however, there isn't enough detail in your response for me to be able to evaluate empirically whether your article meets the above two Stage 2 review criteria.
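As an illustration of the kind of power detail being requested here, the achieved power for the analyzed group sizes can be computed with standard tools. The following is a minimal sketch using Python's statsmodels for an independent-samples comparison; the effect size d = 0.4 is an assumed placeholder for illustration, not the value from the authors' Stage 1 power analysis:

```python
# Hypothetical sketch: achieved power given the analyzed group sizes
# (52 experimental, 37 control). The effect size is a placeholder.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power to detect an assumed Cohen's d of 0.4 with the achieved sample
power = analysis.power(
    effect_size=0.4,        # assumed d (placeholder, not the Stage 1 value)
    nobs1=52,               # experimental group size
    ratio=37 / 52,          # control group size relative to nobs1
    alpha=0.05,
    alternative="two-sided",
)
print(f"Achieved power for d = 0.4: {power:.2f}")

# Minimum detectable effect at 80% power, found by solving for effect_size
mde = analysis.solve_power(nobs1=52, ratio=37 / 52, alpha=0.05, power=0.80)
print(f"Minimum detectable d at 80% power: {mde:.2f}")
```

Reporting both the achieved power for the Stage 1 effect size and the minimum detectable effect at 80% power would let readers judge which null results are interpretable.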
Additionally, I checked for your data and code and was able to find the data sheet (https://osf.io/download/zdu8f/), but I was only able to find the code for the Stage 1 power analysis (https://osf.io/download/jpmbt/) and not the code for the Stage 2 Results section. I checked your submission checklist, and you state that all analysis code has been provided and is at the following URL: https://osf.io/5qshg/?view_only=.
However, I was not able to find the code at this repository. Please provide the remaining code and a direct link to the file at OSF (rather than a general link to the broader project).
Once I receive the revision and point-by-point response to all reviewer comments, I will send it back to a subset of the reviewers. I look forward to your resubmission.

Reviewed by Kelsey McCune, 20 April 2023
In my review of this Stage 2 manuscript, I found that the authors were completely consistent with the registered report from Stage 1. The one deviation (5-item rather than 11-item scale) and the failure to meet the preregistered sample size were openly stated and logically explained in the context of their study constraints.
I was pleased and impressed with how easy it was to read the manuscript (at both stages, really) and to see the additions in the post-study write-up. While I am not in the authors' field, the discussion and conclusion points seem well founded based on the results, and present important directions for future research.
The only minor comment I have is for the authors to carefully review the text throughout for spelling and grammar errors arising as a consequence of the changes in verb tense.
Reviewed by Lisa Spitzer

• Whenever percentages are presented, I recommend presenting the percentage first and the n second (when I first read the Results section, I thought the numbers indicating the n were part of the percentages), e.g., L853: "(n = 29, 55.8%)" → "(55.8%, n = 29)"
• Some page and line numbers were incorrect/jumbled.
• L413 "inclusion criteria was": "were"
• L414 "participants confirmed they met this": "these"
• L635 "The same sample of students were": "was"
• L667 "uploaded": I recommend using "uploading" instead of "uploaded" to make it more consistent with point 1 ("creating").
• L862 "(see Supplementary Information; https://osf.io/v4fb2, for our full analysis plan)": Since you have included this table in the manuscript, please refer to the version in the manuscript instead of the supplementary material.
• L957: I did not understand what "JMMG1" means; is this a mistake perhaps?
• L8-11 (Discussion): You mention the COM-B model in the conclusions, but I would suggest also mentioning the model here when discussing the results concerning reported capability, opportunity, and motivation.
• L9 (Discussion): I recommend deleting the "then" since it might be confusing.
• L18 (Discussion): Here, the authors refer to a paper by Toth et al. (2021). We recently also surveyed psychological researchers regarding attitudes, motivations, and obstacles regarding preregistration, and came to similar conclusions about perceived obstacles. We also found that supervisors were the biggest influence on students' decision to preregister or not, which also aligns with the arguments made by the authors. Thus, it might be interesting to refer to our paper as well ("Registered report: Survey on attitudes and experiences regarding preregistration in psychological research", Spitzer & Mueller, 2023, https://doi.org/10.1371/journal.pone.0281086), but of course this is not a must.
• L52 (Discussion): delete "too"
• Table 2: I would recommend avoiding the sentence "The final planned sample size is therefore 200 participants", as this could lead to misunderstandings (even if the section in which the deviation is described is mentioned afterwards). Alternatively, the final N could also be mentioned here.
• Table 3: Please indicate the scale again.
• Table 4: I recommend using the term "fabrication of data" instead of "falsification of data" because this might be confused with Popper's Falsification Principle.
Overall, in my opinion, this Registered Report meets PCI RR's criteria for Stage 2 Registered Reports: it is clear which edits were made to the Stage 1 Registered Report, and the hypotheses, as well as the reported methods and procedures, align with what was planned a priori. The drawn conclusions are justified given the evidence. Deviations are also described and justified in the paper. The biggest deviation is the smaller sample size of only 89 instead of the targeted 200 participants. We already discussed this risk in the Stage 1 Registered Report, and the authors had implemented respective countermeasures. I find it very important to clarify to the reader that the non-significant findings are probably due to the low power, which I think the authors do to a sufficient extent. Therefore, in my opinion, the fact that the sample size is smaller than planned is not an obstacle to recommendation.
The methodological rigour of the study is commendable. Additionally, I think the authors have done a good job of describing all deviations and limitations. Overall, I believe this study is an important starting point for further discussions, which I look forward to. I hope that the authors find my comments helpful for revising their manuscript.
All the best, Lisa Spitzer

Reviewed by anonymous reviewer 1, 16 May 2023
This Stage 2 report reflects a major effort to evaluate the impacts of undergrad study pre-registration on statistics and open science attitudes.
My most serious concern with this Stage 2 report is the drastic difference in the planned and achieved sample size. While the Stage 1 proposed to collect a final sample of 200, with 100 participants in each group, the final sample comprised less than half of this planned amount, and only 37 subjects in one group. I appreciate that this study was subject to recruitment and retention issues, and that the study was conducted under time pressure, but this strikes me as a major drawback in the Registered Report context. What is the achieved power, based on the analyzed sample, for the effect size previously proposed at Stage 1? This concerns me both in terms of the reliability of the observed effects, as well as our ability to confidently interpret the null findings.
The introduction and hypotheses match the Stage 1.
The procedures seem to adhere to the Stage 1 plan, with minor deviations (e.g., the use of a 5-point COM-B scale rather than 11). However, I believe readers would benefit from an explicit 'deviations from registration' section that clearly delineates and explains any deviations, and whether or not they change anything about the interpretation of the results.
Exploratory analyses are justified and informative.
The conclusions are largely justified given the evidence, although at points I think they could adhere a bit more closely to the data. E.g., the discussion states: "Our findings suggest that the process of preregistration can bolster students' understanding of Open Science terminology more broadly, which suggests that this practice may indeed be a useful way of providing an entry point into the wider Open Science conversation." Since the study did not assess understanding of Open Science terminology, I think it is more appropriate to state that it may improve their confidence with Open Science concepts. Moreover, since most of the study hypotheses were not met, I think that warrants further discussion of why that might be the case and what implications it has for the utility of preregistration. The discussion still clearly leans in the direction of pursuing widespread adoption and investigation of Open Science practices, rather than concluding that preregistration experience has no influence on understanding of statistical rigor or attitudes toward QRPs (as the data suggest).

Evaluation round #1
DOI or URL of the preprint: https://osf.io/2fvpy
Version of the preprint: 1

Advances in Cognitive Psychology • Advances in Methods and Practices in Psychological Science • Cambridge Educational Research e-Journal

DOI or URL of the preprint: https://osf.io/4jfa7
Version of the preprint:

I'm looking forward to your response and revision.

All my best,
Corina

Evaluation round #2

DOI or URL of the preprint: https://osf.io/numr3
Version of the preprint:

I received feedback from the same four reviewers from Stage 1, and they have mostly minor comments for you to address in a revision. One reviewer has a larger concern about the much smaller than expected sample size and how it affects the results, and it will be important to make sure you carefully respond to these points and revise your manuscript accordingly.