Recommendation

Does incorporating open research practices into the undergraduate curriculum decrease questionable research practices?

Corina Logan and Chris Chambers based on reviews by Kelsey McCune, Neil Lewis, Jr., Lisa Spitzer and 1 anonymous reviewer
A recommendation of:

Evaluating the pedagogical effectiveness of study preregistration in the undergraduate dissertation: A Registered Report

Submission: posted 08 July 2021
Recommendation: posted 29 September 2021, validated 29 September 2021
Cite this recommendation as:
Logan, C. and Chambers, C. (2021) Does incorporating open research practices into the undergraduate curriculum decrease questionable research practices? Peer Community in Registered Reports. https://rr.peercommunityin.org/articles/rec?id=48

Related stage 2 preprints:

Evaluating the Pedagogical Effectiveness of Study Preregistration in the Undergraduate Dissertation
Madeleine Pownall, Charlotte R. Pennington, Emma Norris, Marie Juanchich, David Smaile, Sophie Russell, Debbie Gooch, Thomas Rhys Evans, Sofia Persson, Matthew HC Mak, Loukia Tzavella, Rebecca Monk, Thomas Gough, Christopher SY Benwell, Mahmoud Elsherif, Emily Farran, Thomas Gallagher-Mitchell, Luke T. Kendrick, Julia Bahnmueller, Emily Nordmann, Mirela Zaneva, Katie Gilligan-Lee, Marina Bazhydai, Andrew Jones, Jemma Sedgmond, Iris Holzleitner, James Reynolds, Jo Moss, Daniel Farrelly, Adam J. Parker, and Kait Clark
https://doi.org/10.31234/osf.io/xg2ah

Recommendation

At a time when open research practices are becoming more widely used to combat questionable research practices (QRPs) in academia, this Stage 1 Registered Report by Pownall and colleagues (2021) will empirically investigate the practice of preregistering study plans, allowing us to better understand to what degree such practices increase awareness of QRPs and whether experience with preregistration helps reduce engagement in QRPs. The investigation is timely because results from these kinds of studies are only recently becoming available, and they are providing evidence that open research practices can improve research quality and reliability (e.g., Soderberg et al. 2021, Chambers & Tzavella 2021). Crucially, the authors focus on the effect of preregistering the undergraduate senior thesis (of psychology students in the UK), which is a key stage in the development of an academic. These data will help shape how we teach open research practices and what effect we as teachers can have on budding research careers. The five expert peer reviews were thorough and of extremely high quality. The authors did an excellent job of addressing all of the comments in their responses and revised manuscript versions, which resulted in only one round of peer review, plus a second revision based on Recommender feedback. As such, this Registered Report meets the Stage 1 criteria and is therefore awarded in-principle acceptance (IPA). We wish the authors the best of luck with the study and we look forward to seeing the results.

URL to the preregistered Stage 1 protocol: https://osf.io/9hjbw

Level of bias control achieved: Level 6. No part of the data or evidence that will be used to answer the research question yet exists and no part will be generated until after IPA.

List of eligible PCI RR-friendly journals:

References

  1. Pownall M, Pennington CR, Norris E, Clark K (2021). Evaluating the pedagogical effectiveness of study preregistration in the undergraduate dissertation: A Registered Report. OSF, Stage 1 preregistration, in-principle acceptance of version 1 by Peer Community in Registered Reports. https://doi.org/10.17605/OSF.IO/9HJBW
  2. Chambers C, Tzavella L (2021). The past, present, and future of Registered Reports. https://doi.org/10.31222/osf.io/43298
  3. Soderberg CK, Errington TM, Schiavone SR, Bottesini J, Thorn FS, Vazire S, Esterling KM, Nosek BA (2021). Initial evidence of research quality of registered reports compared with the standard publishing model. Nature Human Behaviour, 5, 990–997. https://doi.org/10.1038/s41562-021-01142-4
Conflict of interest:
The recommender in charge of the evaluation of the article and the reviewers declared that they have no conflict of interest (as defined in the code of conduct of PCI) with the authors or with the content of the article.

Reviews

Evaluation round #2

DOI or URL of the report: https://osf.io/6heja/

Version of the report: 1

Author's Reply, 15 Sep 2021

Decision by Corina Logan and Chris Chambers, posted 14 Sep 2021

Dear Madeleine Pownall, Charlotte Pennington, Emma Norris, and Kait Clark,
Thank you very much for carefully addressing the reviewer comments in your revised manuscript and in the response document. We think you did an excellent job revising and we only have two comments to be addressed in a minor revision (points 9 and 10 from Reviewer 4).

Point 9 from Reviewer 4 (anonymous, page 16 of the authors' response) regards the many forms a preregistration can take and how this could be a confound in the results. We agree with this point because what gets included in a preregistration is open to interpretation. For example, we have seen preregistration templates that only asked authors to specify the research question. We wouldn't consider that kind of preregistration sufficient to serve as an intervention and cause a difference between T1 and T2, which could lead to no significant differences between the control and preregistration groups (point 10 from Reviewer 4). This could be addressed by having all preregistration participants send PDFs of their preregistrations, which a team member could then check. It wouldn't take too long to go through 200 preregistrations if one is only looking for the number of different sections included (e.g., research question, hypotheses, methods, analysis plan). The number of preregistration sections could then be included in the analysis, with the prediction that more sections result in a greater difference between groups at T2. Or, if you don't want to include this in the analyses, you could exclude the participants whose preregistrations did not include an analysis plan (because your RR is focused on attitudes about statistics, and this is the part of the preregistration that would prompt them to think about their statistics). A "section" could be defined loosely and refer simply to the presence of sentences that describe how the analysis will be conducted.
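To make the suggestion concrete, here is a minimal sketch in Python of how the coded section counts could enter such an analysis. This is only an illustration rather than a required analysis; the file name and column names (n_sections, t1_score, t2_score) are hypothetical, not part of the registered plan:

```python
# Illustrative sketch only: file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# One row per preregistering participant, with the number of sections
# (research question, hypotheses, methods, analysis plan) coded from their PDF.
df = pd.read_csv("prereg_section_coding.csv")
df["change"] = df["t2_score"] - df["t1_score"]  # T1-to-T2 attitude change

# Prediction: preregistrations with more sections show a larger change at T2.
model = smf.ols("change ~ n_sections", data=df).fit()
print(model.summary())
```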

We look forward to receiving your revision.

All our best,
Corina Logan and Chris Chambers

Evaluation round #1

DOI or URL of the report: https://osf.io/cnveh/

Version of the report: 1

Author's Reply, 07 Sep 2021


We attach our responses to all reviewer comments in the PDF. It has also been uploaded to our OSF page here, for full transparency: https://osf.io/hfm2z/

Decision by Corina Logan and Chris Chambers, posted 23 Aug 2021

Dear Madeleine Pownall, Charlotte Pennington, Emma Norris, and Kait Clark,

Your Stage 1 registered report titled “Evaluating the pedagogical effectiveness of study preregistration in the undergraduate dissertation: A Registered Report” has now been evaluated by five reviewers. My co-recommender, Chris Chambers, and I have made the first decision, which is: major revision. 

Your research is well planned and clearly written, and we and the reviewers think it will be a rigorous investigation that will contribute to a useful empirical understanding of whether teaching open research practices (specifically preregistration) to undergraduates can reduce questionable research practices. The reviewers raise some larger points concerning rationale, methodology (including justification of design decisions), and ethics that should be addressed in your revision, and they also provide some helpful comments/edits on where the manuscript can be even clearer (note that some edits/comments are available in PDF documents that are downloadable at the website). We would be happy to receive a revised manuscript that addresses all of the reviewer comments.

Regarding Dr. McCune's comment to modify the structure of your study design table, please keep it in its current structure: this is a PCI RR requirement at Stage 1.

Regarding Dr. Aubert Bonn’s comment about potential baseline differences between the preregistered and non-preregistered groups depending on which group/lab they come from, in case you would find it useful, Richard McElreath has an example of how to investigate such potential differences in his book Statistical Rethinking (2nd edition, page 340, section “11.1.4 Aggregated binomial: Graduate school admissions“, https://xcelab.net/rm/statistical-rethinking/).
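In case it helps, here is a minimal sketch in Python of the kind of check that example motivates: comparing a pooled baseline difference between the two groups with the same difference after stratifying by recruitment site. McElreath's own treatment is Bayesian; this frequentist analogue, and the file and variable names below (baseline_qrp, prereg, site), are purely hypothetical:

```python
# Hypothetical sketch: checks whether a baseline group difference survives
# adjustment for recruitment site (a Simpson's-paradox-style confound).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("time1_survey.csv")  # hypothetical Time 1 data

# Pooled: do students planning to preregister differ at baseline overall?
pooled = smf.ols("baseline_qrp ~ prereg", data=df).fit()

# Stratified: does the difference remain once site is accounted for?
by_site = smf.ols("baseline_qrp ~ prereg + C(site)", data=df).fit()

print(pooled.params["prereg"], by_site.params["prereg"])
```

If the two coefficients diverge noticeably, group membership is confounded with site and the site term (or a multilevel version of it) belongs in the main analysis.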

We look forward to your resubmission.

All our best,

Corina Logan and Chris Chambers

Reviewed by Kelsey McCune, 11 Aug 2021

This preregistration aims to test the effect of preregistering undergraduate dissertations on statistics confidence and awareness of questionable research practices. I think this is a necessary empirical investigation that will further the understanding and methods of the open science movement. The introduction section is thorough, with sufficient background to support the proposed study. I suggest adding a paragraph commenting on the impact the proposed study's findings would have on the open science movement. The hypotheses, predictions, and interpretation of alternative results could be more detailed, and I make specific comments on these in the attached track-changes PDF document. The planned analyses seem sufficient to address the 3 research questions; however, I do not have experience in best analysis practices for survey-based studies.


Reviewed by Neil Lewis, Jr., 07 Aug 2021

“Evaluating the pedagogical effectiveness of study preregistration in the undergraduate dissertation: A Registered Report” is an interesting manuscript that proposes an innovative design to study the effects of implementing preregistration into research methods pedagogy on student learning and attitudes. Overall, I think the manuscript is well-written and informative, asks an important question for both research and practice, and has a sound research design for testing the proposed hypotheses. The authors addressed most of the concerns that came to mind as I read the manuscript, so I only have a few minor questions and suggestions for the authors to consider prior to carrying out their study.

My first question is about why there is such a heavy focus on statistics as an outcome rather than other elements of research design. One of the things that I appreciate about preregistration is that it forces researchers (and students) to think about the relationship between many elements in the research process (strength of manipulation, effect sizes, sample characteristics that might moderate processes, etc.). The statistical elements are of course important, but the preregistration process reveals more than that—it forces us to wrestle with how all elements of research design are related. I was surprised to not see other research design related competencies (other than statistics and QRPs) being measured.

My second question is about the "forced entry" strategy for minimizing missing data... is that allowed? This could be a country-level difference, so feel free to ignore this question if it is, but in the US we are not allowed to use forced responses; it is considered an ethical violation. We can use "request response" to nudge participants toward answering, but they have to have the option not to answer questions that they do not want to.

Overall, this is a great manuscript, and I commend the authors for conducting this important research.

Reviewed by Lisa Spitzer, 11 Aug 2021

I want to congratulate the authors on this interesting and thoroughly planned Registered Report, which I enjoyed reading and reviewing very much.

 

Summary: The authors want to conduct a study among undergraduate psychology students in the UK to assess whether preregistration of the final-year dissertation influences attitudes towards statistics, QRPs, and open science. For this, a targeted sample of 200 students will be recruited who plan, or do not plan, to preregister their dissertation. The design is a 2 (preregistration: yes vs. no) between-subjects x 2 (timepoint: before vs. after dissertation) within-subjects mixed design.
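(For concreteness, a minimal sketch in Python of how such a 2 x 2 mixed design is commonly analysed, using pingouin's mixed ANOVA; the long-format file and the column names attitude, timepoint, prereg, and participant are hypothetical rather than the authors' actual variables:)

```python
# Hypothetical sketch of a 2 (between) x 2 (within) mixed ANOVA.
import pandas as pd
import pingouin as pg

# Long format: one row per participant per timepoint.
df = pd.read_csv("long_format.csv")

aov = pg.mixed_anova(dv="attitude", within="timepoint",
                     between="prereg", subject="participant", data=df)
print(aov)  # main effects of group and time, plus the key interaction
```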

 

I have added comments to the PDF of the Registered Report (see attached). Most of these concern minor points, such as:
1) Spelling.
2) Structure of the text.
3) Consistency of notation and formatting.
4) Depth and clarity of descriptions. Mostly I added questions that arose while reading, which I would find it beneficial for the authors to answer in the text to enhance the clarity and reproducibility of the study.
5) Enumeration of hypotheses. I would find it beneficial if the authors differentiated between the different sub-hypotheses and enumerated them, so they can be referenced later in the analysis section.
6) Bot protection. Since the authors plan to advertise their study via social media, I recommend implementing some kind of bot protection.
7) Definition of preregistration in the study. I recommend that the authors update their definition of preregistration, since the current definition states that preregistration is possible "before you collected your data". To also cover preregistration of secondary data analyses, I would add that preregistration is likewise possible before data are analyzed.
8) Attention check. I recommend implementing a different attention check, since participants who simply agree to everything in the study will pass the current one.
9) Addition of some questions to the study. This is only based on personal interest.
10) Combination of frequentist and Bayesian statistics. Unfortunately, I am no expert in Bayesian statistics, but my understanding is that one method (frequentist or Bayesian) would be sufficient; here, however, both are combined. Maybe the other reviewers can contribute additional input on this question.
11) One hypothesis concerns the "understanding/awareness" of open science. Since understanding is not directly tested, but rather perceived understanding is assessed, I recommend using the term "perceived understanding".
12) Sometimes an ANOVA is implemented for a comparison of two groups (preregistration: yes vs. no). From my perspective, a t-test would be the more appropriate analysis (see the sketch after this list).
(more detailed descriptions of these comments are provided in the attached PDF)
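Regarding point 12: with exactly two groups the two tests are mathematically equivalent (the ANOVA F statistic equals the squared t statistic and the p-values match), so nothing is lost by reporting the simpler t-test. A quick numerical check in Python with simulated data (the group labels are hypothetical):

```python
# Numerical check: for two groups, one-way ANOVA and an independent-samples
# t-test (equal variances) give F = t^2 and identical p-values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
a = rng.normal(0.0, 1.0, 100)  # simulated "no preregistration" group
b = rng.normal(0.3, 1.0, 100)  # simulated "preregistration" group

t, p_t = stats.ttest_ind(a, b)  # Student's t-test (equal variances assumed)
f, p_f = stats.f_oneway(a, b)   # one-way ANOVA on the same two groups

print(np.isclose(t**2, f), np.isclose(p_t, p_f))  # True True
```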

 

Additionally, I would like to discuss some major points which I think are important to consider before conducting the study:
1) I would like to discuss the data collection at Time 1 in more detail. First, I suggest that quota sampling could be used to control for different group sizes (preregistration: yes vs. no). Alternatively, if no quota sampling is used, I think the authors should discuss further what the consequences of very unequal group sizes would be.
2) Also, in general, I am unsure whether enough participants can be recruited at Time 1, since only a short time period is indicated for this data collection (September - October 2021). I would appreciate it if the authors could elaborate on how they will proceed if fewer than the targeted 240 students can be recruited by the end of October, or if only a small proportion of participants indicate planning to preregister.
(more detailed descriptions of these comments are provided in the attached PDF)

 

Overall, I think that the study addresses a valid research question with coherent and plausible hypotheses. I perceived the described methods as sound, feasible, and, for the most part, clear. Wherever open questions remain, I have marked them in the attached PDF. Additionally, I thought it was excellent that the authors had already addressed possible limitations themselves. I believe the study will be an important contribution, and I hope that my comments will help the authors improve their study and manuscript.

All the best,
Lisa Spitzer


Reviewed by anonymous reviewer 1, 21 Aug 2021

This proposed study aims to test the influence of study pre-registration on undergraduate attitudes towards statistics and questionable research practices. Given the enthusiasm for open science practices among early career researchers, and the certainty with which many of those practices are endorsed by some members of the field, it is important to empirically test the influence of those practices on meaningful research outcomes. Open science practices themselves should indeed be subject to the scrutiny of the Registered Report format, so I commend the team for taking on this sort of research. In order to meet the demands of this format, however, I believe the current report could use more detail in several key areas related to establishing the scientific premise, justifying the choice of measures and associated hypotheses, and ensuring that any results can be interpreted with confidence.

 

Overall, I think the study motivation and scientific premise should be strengthened. While undergraduate teaching and learning are inherently important, it’s a bit unclear exactly what issue this RR is tackling, or what guides the choice of measures and hypotheses. For instance, the intro lays out a number of dissertation struggles that undergraduates may face related to anxiety, disengagement, writing ability, and supervisory relationships (p. 6). Is pre-registration meant to mitigate these concerns? If so, how? It’s unclear to me whether the focus is on student well-being, research quality, or pedagogical value. If it is all of these things, I think the intro should more clearly establish why each is important to study in this context, and make a more explicit case for why/how pre-registration should be expected to improve these problem areas. Is the thinking that the mere act of preregistration will impart these benefits? Or will these students undergo some additional learning, which might have the same benefits without the actual preregistration? The intro also implies that open science practices are universally endorsed, but there is in fact a vocal pushback against many of these practices (e.g., Szollosi et al., 2020, TICS). I mention that, not to undermine the current study, but because I believe this perspective should be recognized and may offer further fuel to justify the proposed work.

 

After establishing the problem, I also think the report could clarify why these particular proposed measures were chosen, and also offer stronger justification that the measures will test what we think they’ll test. It’s not totally clear to me that these self-report questionnaires actually tap into the pedagogical effectiveness of pre-registration. For instance, is there any evidence that student attitudes toward statistics relate to their knowledge or competence with statistics? Instead of asking for confidence in open science terms, why not actually measure ability to define the terms accurately? I think the report could be strengthened overall if it included more objective measures of knowledge (stats and open science), and engagement in QRPs (i.e., did you do any of these things?), rather than just attitudes.

 

I would also like to see more details about recruitment and study criteria. What do we know right now about the uptake of pre-registration in this cohort? As in, how feasible will it be to recruit equal numbers in both groups, and for those groups to be matched? The report acknowledges this might be an issue, but I think it would be a rather serious one if all the pre-registration participants end up coming from a certain set of targeted schools, and all the non-pre-registerers are from different schools.

 

Is the *only* inclusion criterion that participants be final-year undergraduate students studying Psychology at a U.K. institution? For instance, what happens at Time 1 if they respond that they've already pre-registered their study? And is a failed attention check the only possible reason that data would be excluded? I'm just asking to ensure that all the methods are specified in enough detail to be truly reproducible and transparent.

 

Regarding the sample size, I understand that the proposal is based on time and resource considerations, but we would still need to be reasonably assured that the outcomes would be valuable. What was the input to the power analyses, and do we have reason to believe that the cited effect size (80% statistical power to detect an effect size of ηp² = .04) would constitute a meaningful effect in this context?
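For orientation, here is a rough back-of-the-envelope check in Python of what ηp² = .04 implies under a simplified two-group, between-subjects analogue of the design. This is not the authors' power analysis, and the repeated-measures structure would change the numbers:

```python
# Rough check: convert partial eta squared to Cohen's f and solve for the
# total N a two-group one-way ANOVA would need at 80% power, alpha = .05.
import numpy as np
from statsmodels.stats.power import FTestAnovaPower

eta_p2 = 0.04
f = np.sqrt(eta_p2 / (1 - eta_p2))  # ~0.20, a small-to-medium effect

n_total = FTestAnovaPower().solve_power(effect_size=f, k_groups=2,
                                        alpha=0.05, power=0.80)
print(round(f, 2), round(n_total))  # ~0.2 and ~190 participants in total
```

The resulting total of roughly 190 participants is broadly in line with the targeted sample, but whether an effect of that size is *meaningful* is a separate question from whether it is detectable.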

 

I very much appreciate that hypotheses and tests are specified in a table, but I think the interpretation, alternative outcomes, and theory portions of the table should be more explicit. For instance, the final column applies broadly to all hypotheses and tests, stating “This may call into question various situational, contextual, and personal factors that impact how preregistration may be useful to students in this context.” Ideally, this section would include straightforward, concrete predictions that are guided by theory, for each hypothesis, and specific implications if findings do or don't match expectations.

 

The “Limitations” section acknowledges that pre-registrations may differ in their rigor, but states that this issue is beyond the scope of the current study. This strikes me as an incredibly consequential issue if we truly want to understand the impact of pre-registration on student outcomes. I understand why a common, plain-language definition of preregistration is provided, but I think there needs to be some further confirmation that the participants actually conducted a valid preregistration. For instance, is it acceptable if students just saved a personal PDF with their “plan,” or do they need to have submitted it to an accepted repository? There is then WIDE variability in the detail with which a pre-registration is specified; OSF alone provides 8 possible options for preregistration. There is also WIDE variability in adherence to the registered methods/analyses. What if a large number of students submit a bare-minimum, superficial pre-registration, just because it's required of their program? Many scientists have speculated that this may become the norm, and I worry that it could set this study up for failure to find any differences between groups. Why not ask whether the participants actually followed through with the registration, or to what extent they deviated?

 

In the end, if you find no differences between groups with these methods, would you confidently declare that preregistration has no benefits? If not, why not? If there is a difference, would we know it is due to the act of pre-registering, or could it be explained by some other factor? As in, do students in a pre-registration context just get more exposure to open science issues as part of their process, and could the same benefits therefore be achieved by briefly teaching students about QRPs and Open Science? The above concerns all boil down to ensuring that any result is meaningful (null or otherwise), which should be a primary goal of the design and RR proposal. I know some of these concerns may seem persnickety, but they are aimed at the overarching goals of ensuring that any results can be confidently interpreted, and that the methods are precisely reproducible.