
Does incorporating open research practices into the undergraduate curriculum decrease questionable research practices?

Recommended by Corina Logan and Chris Chambers based on reviews by Noémie Aubert Bonn, Neil Lewis, Jr., Kelsey McCune, Lisa Spitzer and 1 anonymous reviewer
A recommendation of:

Evaluating the pedagogical effectiveness of study preregistration in the undergraduate dissertation: A Registered Report

Submitted: 08 July 2021, Recommended: 29 September 2021

Recommendation

At a time when open research practices are becoming more widely used to combat questionable research practices (QRPs) in academia, this Stage 1 Registered Report by Pownall and colleagues (2021) will empirically investigate the practice of preregistering study plans, allowing us to better understand the degree to which such practices increase awareness of QRPs and whether experience with preregistration helps reduce engagement in them. The investigation is timely: results from studies of this kind are only now becoming available, and they are providing evidence that open research practices can improve research quality and reliability (e.g., Soderberg et al. 2021, Chambers & Tzavella 2021). Crucially, the authors focus on the effect of preregistering the undergraduate dissertation (of psychology students in the UK), which is a key stage in the development of an academic. These data will help shape how we teach open research practices and what effect we, as teachers, can have on budding research careers.

The five expert peer reviews were thorough and of extremely high quality. The authors did an excellent job of addressing all of the comments in their responses and revised manuscript versions, which resulted in only one round of peer review, plus a second revision based on Recommender feedback. As such, this Registered Report meets the Stage 1 criteria and is therefore awarded in-principle acceptance (IPA). We wish the authors the best of luck with the study and look forward to seeing the results.

URL to the preregistered Stage 1 protocol: https://osf.io/9hjbw

Level of bias control achieved: Level 6. No part of the data or evidence that will be used to answer the research question yet exists and no part will be generated until after IPA.

List of eligible PCI RR-friendly journals:

References

  1. Pownall M, Pennington CR, Norris E, Clark K (2021). Evaluating the pedagogical effectiveness of study preregistration in the undergraduate dissertation: A Registered Report. OSF, Stage 1 preregistration, in-principle acceptance of version 1 by Peer Community in Registered Reports. https://doi.org/10.17605/OSF.IO/9HJBW
  2. Chambers C, Tzavella L (2021). The past, present, and future of Registered Reports. https://doi.org/10.31222/osf.io/43298
  3. Soderberg CK, Errington TM, Schiavone SR, Bottesini J, Thorn FS, Vazire S, Esterling KM, Nosek BA (2021). Initial evidence of research quality of registered reports compared with the standard publishing model. Nature Human Behaviour, 5, 990–997. https://doi.org/10.1038/s41562-021-01142-4
Cite this recommendation as:
Corina Logan and Chris Chambers (2021) Does incorporating open research practices into the undergraduate curriculum decrease questionable research practices? Peer Community in Registered Reports, 100002. https://doi.org/10.24072/pci.rr.100002

Evaluation round #2

14 Sep 2021

DOI or URL of the report: https://osf.io/6heja/

Version of the report: 1

Author's Reply

Decision by Corina Logan and Chris Chambers

Dear Madeleine Pownall, Charlotte Pennington, Emma Norris, and Kait Clark,
Thank you very much for carefully addressing the reviewer comments in your revised manuscript and in the response document. We think you did an excellent job revising and we only have two comments to be addressed in a minor revision (points 9 and 10 from Reviewer 4).

Point 9 from Reviewer 4 (anonymous, page 16 of the authors’ response) concerns the many forms a preregistration can take and how this could confound the results. We agree with this point because what gets included in a preregistration is open to interpretation. For example, we have seen preregistration templates that only asked authors to specify the research question. We wouldn’t consider that kind of preregistration sufficient to act as an intervention and cause a difference between T1 and T2, which could lead to no significant differences between the control and preregistration groups (point 10 from Reviewer 4). This could be addressed by having all preregistration participants send PDFs of their preregistrations, which a team member could then check. It wouldn’t take too long to go through 200 preregistrations if one is only looking for the number of different sections included (e.g., research question, hypotheses, methods, analysis plan). The number of preregistration sections could then be included in the analysis, with the prediction that including more sections results in a greater difference between groups at T2. Alternatively, if you don’t want to include this in the analyses, you could exclude the participants whose preregistrations did not include an analysis plan (because your RR is focused on attitudes about statistics, and this is the part of the preregistration that would cause them to think about their statistics). A “section” could be defined loosely and refer simply to the presence of sentences that describe how the analysis will be conducted.

We look forward to receiving your revision.

All our best,
Corina Logan and Chris Chambers


Evaluation round #1

23 Aug 2021

DOI or URL of the report: https://osf.io/cnveh/

Version of the report: 1

Author's Reply


We attach our responses to all reviewer comments in the PDF. It has also been uploaded to our OSF page here, for full transparency: https://osf.io/hfm2z/

Decision by Corina Logan and Chris Chambers

Dear Madeleine Pownall, Charlotte Pennington, Emma Norris, and Kait Clark,

Your Stage 1 registered report titled “Evaluating the pedagogical effectiveness of study preregistration in the undergraduate dissertation: A Registered Report” has now been evaluated by five reviewers. My co-recommender, Chris Chambers, and I have made the first decision, which is: major revision. 

Your research is well planned and clearly written, and we and the reviewers think it will be a rigorous investigation that will contribute to a useful empirical understanding of whether teaching open research practices (specifically preregistration) to undergraduates can reduce questionable research practices. The reviewers raise some larger points concerning rationale, methodology (including justification of design decisions), and ethics that should be addressed in your revision, and they also provide some helpful comments/edits on where the manuscript can be even clearer (note that some edits/comments are available in PDF documents that are downloadable from the website). We would be happy to receive a revised manuscript that addresses all of the reviewer comments.

Regarding Dr. McCune’s comment to modify the structure of your study design table, please keep it in its current structure - this is a PCI RR requirement at Stage 1.

Regarding Dr. Aubert Bonn’s comment about potential baseline differences between the preregistered and non-preregistered groups depending on which group/lab they come from, in case you would find it useful, Richard McElreath has an example of how to investigate such potential differences in his book Statistical Rethinking (2nd edition, page 340, section “11.1.4 Aggregated binomial: Graduate school admissions“, https://xcelab.net/rm/statistical-rethinking/).

We look forward to your resubmission.

All our best,

Corina Logan and Chris Chambers

Reviewed by , 11 Aug 2021

This preregistration aims to test the effect of preregistering undergraduate dissertations on statistics confidence and awareness of questionable research practices. I think this is a necessary empirical investigation that will advance the understanding and methods of the open science movement. The introduction section is thorough, with sufficient background to support the proposed study. I suggest adding a paragraph commenting on the impact the findings of the proposed study would have on the open science movement. The hypotheses, predictions, and interpretation of alternative results could be more detailed, and I make specific comments on these in the attached track-changes PDF document. The planned analyses seem sufficient to address the three research questions; however, I do not have experience in best analysis practices for survey-based studies.


Reviewed by , 07 Aug 2021

“Evaluating the pedagogical effectiveness of study preregistration in the undergraduate dissertation: A Registered Report” is an interesting manuscript that proposes an innovative design to study the effects of implementing preregistration into research methods pedagogy on student learning and attitudes. Overall, I think the manuscript is well-written and informative, asks an important question for both research and practice, and has a sound research design for testing the proposed hypotheses. The authors addressed most of the concerns that came to mind as I read the manuscript, so I only have a few minor questions and suggestions for the authors to consider prior to carrying out their study.

My first question is about why there is such a heavy focus on statistics as an outcome rather than other elements of research design. One of the things that I appreciate about preregistration is that it forces researchers (and students) to think about the relationship between many elements in the research process (strength of manipulation, effect sizes, sample characteristics that might moderate processes, etc.). The statistical elements are of course important, but the preregistration process reveals more than that—it forces us to wrestle with how all elements of research design are related. I was surprised to not see other research design related competencies (other than statistics and QRPs) being measured.

My second question is about the “forced entry” strategy for minimizing missing data…is that allowed? This could be a country-level difference so feel free to ignore this question if it is, but in the US we are not allowed to use forced responses—it is considered an ethical violation; we can use “request response” to nudge them toward answering, but participants have to have the option to not answer questions that they do not want to.

Overall, this is a great manuscript, and I commend the authors for conducting this important research.

Reviewed by , 11 Aug 2021

I want to congratulate the authors on this interesting and thoroughly planned Registered Report, which I enjoyed reading and reviewing very much.

 

Summary: The authors want to conduct a study among undergraduate psychology students in the UK to assess whether preregistration of the final-year dissertation influences attitudes towards statistics, QRPs, and open science. For this, a targeted sample of 200 students who do or do not plan to preregister their dissertation will be recruited. The design is a 2 (preregistration: yes vs. no) between-subjects × 2 (timepoint: before vs. after dissertation) within-subjects mixed design.

 

I have added comments to the PDF of the Registered Report (see attached). Most of these concern minor points, such as:
1) Spelling.
2) Structure of the text.
3) Consistency of notation and formatting.
4) Depth and clarity of descriptions. Most often I added some questions I had during reading, for which I would find it beneficial if the authors would answer them in the text to enhance clarity and reproducibility of the study.
5) Enumeration of hypotheses. I would find it beneficial if the authors would differentiate between different sub-hypotheses and enumerate them, to reference them later during the analysis section.
6) Bot protection. Since the authors plan to advertise their study via social media, I recommend implementing some kind of bot protection.
7) Definition of preregistration in the study. I recommend that the authors update their definition of preregistration, since the current definition states that preregistration is possible "before you collected your data". However, to accommodate the preregistration of secondary data analyses, I would add that preregistration is also possible before data are analyzed.
8) Attention check. I recommend implementing a different attention check, since participants who simply agree to everything in the study will pass the current one.
9) Addition of some questions to the study. This is only based on personal interest.
10) Combination of frequentist and Bayesian statistics. Unfortunately, I am no expert in Bayesian statistics, but my understanding is that one method (frequentist vs. Bayesian) would be sufficient; here, however, both are combined. Maybe the other reviewers can contribute additional input on this question.
11) One hypothesis concerns the "understanding/awareness" of open science. Since understanding is not directly tested, but rather perceived understanding is assessed, I would recommend using the term "perceived understanding".
12) Sometimes an ANOVA is used for a comparison of two groups (preregistration: yes vs. no). From my perspective, a t-test would be the more appropriate analysis.
(more detailed descriptions of these comments are provided in the attached PDF)
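[Editorial note on point 12: for exactly two groups, the two analyses are mathematically equivalent, since the one-way ANOVA F statistic equals the square of the pooled-variance t statistic, so the choice affects readability more than results. A minimal, standard-library-only sketch with made-up illustrative scores:]

```python
from math import sqrt

# Two small illustrative groups of scores (hypothetical data).
a = [3.1, 4.0, 2.8, 3.6, 4.2, 3.3]
b = [4.5, 3.9, 5.0, 4.4, 4.8, 4.1]

def mean(xs):
    return sum(xs) / len(xs)

ma, mb, gm = mean(a), mean(b), mean(a + b)
na, nb = len(a), len(b)

# Pooled-variance two-sample t statistic.
ss_a = sum((x - ma) ** 2 for x in a)
ss_b = sum((x - mb) ** 2 for x in b)
sp2 = (ss_a + ss_b) / (na + nb - 2)              # pooled variance
t = (ma - mb) / sqrt(sp2 * (1 / na + 1 / nb))

# One-way ANOVA F statistic for the same two groups.
ss_between = na * (ma - gm) ** 2 + nb * (mb - gm) ** 2   # df = 1
ss_within = ss_a + ss_b                                   # df = na + nb - 2
F = ss_between / (ss_within / (na + nb - 2))

print(abs(F - t ** 2) < 1e-9)  # True: F = t^2 when there are two groups
```

This is only a numerical illustration of the reviewer's point; the authors' actual design is a mixed 2 × 2, where the ANOVA framework is still needed for the interaction term.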

 

Additionally, I would like to discuss some major points which I think are important to consider before conducting the study:
1) I would like to discuss the data collection at Time 1 in more detail. First, I suggest that quota sampling could be used to control for different group sizes (preregistration: yes vs. no). Alternatively, if no quota sampling is used, I think the authors should discuss further what the consequences of very unequal group sizes would be.
2) Also, in general, I am unsure whether enough participants can be recruited at Time 1, since only a short time period is indicated for this data collection (September - October 2021). I would appreciate it if the authors could elaborate on how they will proceed if fewer than the targeted 240 students have been recruited by the end of October, or if only a small proportion of participants indicate planning to preregister.
(more detailed descriptions of these comments are provided in the attached PDF)

 

Overall, I think that the study addresses a valid research question with coherent and plausible hypotheses. I perceived the described methods as sound, feasible, and, for the most part, clear. Wherever open questions remain, I have marked them in the attached PDF. Additionally, I thought it was excellent that the authors had already addressed possible limitations themselves. I feel that the study will be an important contribution, and I hope that my comments will help the authors improve their study and manuscript.

All the best,
Lisa Spitzer


Reviewed by anonymous reviewer, 21 Aug 2021

This proposed study aims to test the influence of study pre-registration on undergraduate attitudes towards statistics and questionable research practices. Given the enthusiasm for open science practices among early career researchers, and the certainty with which many of those practices are endorsed by some members of the field, it is important to empirically test the influence of those practices on meaningful research outcomes. Open science practices themselves should indeed be subject to the scrutiny of the Registered Report format, so I commend the team for taking on this sort of research. In order to meet the demands of this format, however, I believe the current report could use more detail in several key areas related to establishing the scientific premise, justifying the choice of measures and associated hypotheses, and ensuring that any results can be interpreted with confidence.

 

Overall, I think the study motivation and scientific premise should be strengthened. While undergraduate teaching and learning are inherently important, it’s a bit unclear exactly what issue this RR is tackling, or what guides the choice of measures and hypotheses. For instance, the intro lays out a number of dissertation struggles that undergraduates may face related to anxiety, disengagement, writing ability, and supervisory relationships (p. 6). Is pre-registration meant to mitigate these concerns? If so, how? It’s unclear to me whether the focus is on student well-being, research quality, or pedagogical value. If it is all of these things, I think the intro should more clearly establish why each is important to study in this context, and make a more explicit case for why/how pre-registration should be expected to improve these problem areas. Is the thinking that the mere act of preregistration will impart these benefits? Or will these students undergo some additional learning, which might have the same benefits without the actual preregistration? The intro also implies that open science practices are universally endorsed, but there is in fact a vocal pushback against many of these practices (e.g., Szollosi et al., 2020, TICS). I mention that, not to undermine the current study, but because I believe this perspective should be recognized and may offer further fuel to justify the proposed work.

 

After establishing the problem, I also think the report could clarify why these particular proposed measures were chosen, and also offer stronger justification that the measures will test what we think they’ll test. It’s not totally clear to me that these self-report questionnaires actually tap into the pedagogical effectiveness of pre-registration. For instance, is there any evidence that student attitudes toward statistics relate to their knowledge or competence with statistics? Instead of asking for confidence in open science terms, why not actually measure ability to define the terms accurately? I think the report could be strengthened overall if it included more objective measures of knowledge (stats and open science), and engagement in QRPs (i.e., did you do any of these things?), rather than just attitudes.

 

I would also like to see more details about recruitment and study criteria. What do we know right now about the uptake of pre-registration in this cohort? As in, how feasible will it be to recruit equal numbers in both groups, and for those groups to be matched? The report acknowledges this might be an issue, but I think it would be a rather serious one if all the pre-registration participants end up coming from a certain set of targeted schools, and all the non-pre-registerers are from different schools.

 

Is the *only* inclusion criterion that participants be final-year undergraduate students, studying Psychology at a U.K. institution? For instance, what happens at Time 1 if they respond that they’ve already pre-registered their study? Is failing the attention check the only possible reason that data would be excluded? I’m just asking to ensure that all the methods are specified in enough detail to be truly reproducible and transparent.

 

Regarding the sample size, I understand that the proposal is based on time and resource considerations, but we would still need to be reasonably assured that the outcomes would be valuable. What was the input to the power analyses, and do we have reason to believe that the cited effect size (80% statistical power to detect an effect size of ηp² = .04) would constitute a meaningful effect in this context?
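[Editorial note: one way to sanity-check the cited figure is to convert ηp² = .04 to Cohen's f and apply a simple two-group approximation. The sketch below is a rough back-of-the-envelope check, not the authors' actual mixed-design power analysis, and uses only the Python standard library:]

```python
from math import sqrt
from statistics import NormalDist

# Convert partial eta-squared to Cohen's f, then to Cohen's d (two groups).
eta_p2 = 0.04
f = sqrt(eta_p2 / (1 - eta_p2))   # ≈ 0.204
d = 2 * f                         # ≈ 0.408, a small-to-medium effect

# Normal-approximation sample size per group for a two-sided
# two-sample t-test at alpha = .05 and power = .80.
alpha, power = 0.05, 0.80
z_a = NormalDist().inv_cdf(1 - alpha / 2)  # ≈ 1.96
z_b = NormalDist().inv_cdf(power)          # ≈ 0.84
n_per_group = 2 * (z_a + z_b) ** 2 / d ** 2

print(round(f, 3), round(d, 3), round(n_per_group))  # 0.204 0.408 94
```

Under this crude approximation, ηp² = .04 corresponds to roughly 94 participants per group, which is broadly consistent with the study's targeted sample, so the cited figure is at least plausible as a between-group effect.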

 

I very much appreciate that hypotheses and tests are specified in a table, but I think the interpretation, alternative outcomes, and theory portions of the table should be more explicit. For instance, the final column applies broadly to all hypotheses and tests, stating “This may call into question various situational, contextual, and personal factors that impact how preregistration may be useful to students in this context”. Ideally, this section would include straightforward, concrete predictions that are guided by theory for each hypothesis, and specific implications if findings do or do not match expectations.

 

The “Limitations” section acknowledges that pre-registrations may differ in their rigor, but states that this issue is beyond the scope of the current study. This strikes me as an incredibly consequential issue, if we truly want to understand the impact of pre-registration on student outcomes. I understand why a common, plain-language definition of preregistration is provided, but I think there needs to be some further confirmation that the participants actually conducted a valid preregistration. For instance, is it acceptable if students just saved a personal pdf with their “plan,” or do they need to have submitted it to an accepted repository? There is then WIDE variability in the detail with which a pre-registration is specified. OSF alone provides 8 possible options for preregistration. There is then also WIDE variability in adherence to the registered methods/analyses. What if a large number of students submit a bare minimum, superficial pre-registration, just because it’s required of their program? Many scientists have speculated that this may become the norm, and I worry that it could set this study up for failure to find any differences between groups. Why not ask whether the participants actually followed through with the registration, or to what extent they deviated?

 

In the end, if you find no differences between groups with these methods, would you confidently declare that preregistration has no benefits? If not, why not? If there is a difference, would we know it is due to the act of pre-registering, or could it be explained by some other factor? As in, do students in a pre-registration context just get more exposure to open science issues as part of their process, and could the same benefits therefore be achieved by briefly teaching students about QRPs and Open Science? The above concerns all boil down to ensuring that any result is meaningful (null or otherwise), which should be a primary goal of the design and RR proposal. I know some of these concerns may seem persnickety, but they are aimed at the overarching goals of ensuring that any results can be confidently interpreted, and that the methods are precisely reproducible.

Reviewed by , 10 Aug 2021

PCI Registered Report Peer-Review
 
Registered Report Title: Evaluating the pedagogical effectiveness of study preregistration in the undergraduate dissertation: A Registered Report
 
Authors: Madeleine Pownall, Charlotte R. Pennington, Emma Norris, Kait Clark
 
 
REVIEW
 
Thank you very much for sharing the Registered Report “Evaluating the pedagogical effectiveness of study preregistration in the undergraduate dissertation: A Registered Report” with me and for asking for feedback.
 
The report presents a study to assess whether pre-registration of undergraduate Psychology dissertations may benefit students’ attitudes towards statistics and questionable research practices (QRPs) and their understanding of open science concepts. I read the registered report with great interest and believe that the project will help answer an interesting question which is still understudied at present. The report is clearly structured and well written, and my overall impressions are positive.
 
Despite my positive appraisal, there are a few points I noticed which I believe may help the authors strengthen their work. Three points are more important, and I detail them further below. I then follow with minor points, which I present in a list at the end of this review.
 
Before starting with my comments, I would like to mention that I have not used Bayesian analyses and my statistical knowledge is unfortunately rather rusty; I therefore recommend that the authors not take my absence of comments on the statistical analysis as confirmation that the planned analysis is adequate.
 
Important points:


The first major point I would like to raise concerns the research questions and the way in which the project is described. As I was reading through the project, I was very confused by the comparison of the two groups, thinking to myself that the group that is likely to preregister their dissertation is also likely to study in a centre where awareness of open science is markedly better. In fact, if the two groups were simply compared to each other, I believe that the study would capture the influence of different institutions and research groups more than the influence of the pre-registration process itself. In an ideal scenario, randomly assigning a condition to each participant would avoid this problem, but I can understand that this may not be realistic, especially in a project on such a large scale. Nonetheless, even with the design kept as is, it is not until page 17 that I understood that the authors plan to compare the difference/interaction/progress between T1 and T2, thereby cancelling out baseline differences in prior knowledge and attitudes.
This distinction is very important and should be made clear throughout the report. For example, the hypotheses presented on page 9 fail to capture it. The first hypothesis states that “Students who preregister their dissertation will have higher positive affect towards statistics, higher self-reported competence with statistics, higher perceived value of statistics, and less difficulty with statistics at T2 compared to students who do not preregister their dissertation.” In this phrasing, hypothesis H1 appears to compare the groups only at T2, rather than comparing them on their improvement between T1 and T2. The same goes for H2, where the authors state that “Students who preregister their undergraduate dissertations will have a reduced endorsement of QRPs compared with students who do not preregister their dissertation,” whereas it should state that they will show a greater reduction in their endorsement of QRPs between T1 and T2 than the control group. And so forth for H3. The authors should make sure that the manuscript clearly explains that the analyses will compare the groups on the difference/improvement/interaction between T1 and T2, to avoid confusion and early criticism from readers.
Also related to this point: not only is it possible that the groups differ at baseline, it is also possible that, if the experimental group scores higher on the different variables at T1, it may show less improvement between T1 and T2 than a group that started at a lower point but is nonetheless in the most research-intensive learning period of its degree. While I do not have a recommendation for this potential issue, I think the authors should consider this possibility and perhaps discuss whether the analysis, as it currently stands, risks masking the full effect of preregistration by leaving less room for improvement in the experimental group. This is all hypothetical, but I thought it should be addressed, or at least reflected upon, before starting data collection. The authors should also clearly explain the limitations of this non-random assignment in the ‘Risk and mitigation’ section.
 
A second important point concerns the COM-B measure and its role in answering the research question. From the report, it was not clear to me what the COM-B will contribute nor how it will be used in interpreting the data. The authors mention that the COM-B results will be used to compare groups, but I am unsure what this finding would mean about the preregistration process per se. If I understood properly, in most cases the decision to preregister a dissertation project comes from the research laboratory or the supervisor. The COM-B could then provide information on how prepared and motivated those planning on pre-registering their study really feel, but I am unsure whether a group comparison is truly helpful. I also feel that, for the control group or those who are not planning on performing a preregistration, the COM-B questions will be highly abstract (in fact, I found most COM-B questions very abstract, but I detail this point later on when discussing the ‘Study materials’).
In this regard, I was surprised that the authors did not plan to use the COM-B to assess how prepared students feel about their final dissertation project (rather than about the preregistration), in which case they could do a T1-T2 comparison and see whether the preregistration helps improve the COM-B scores of the experimental group.
 
Finally, the last major point that I noticed in the design is the lack of knowledge about the type of dissertations that students are undertaking. It is possible that some students focus on exploratory qualitative studies or literature studies and therefore do not feel able to do a preregistration, but also do not strengthen their statistical skills and confidence in the same way as other students who conduct a more quantitative study. A few more details about the specific study type, whether the dissertation is fit for preregistration, and the training acquired between T1 and T2 (integrity, statistical, open science, etc.) would be important to capture to exclude possible confounding factors and biases.
 
Smaller points that are easier to address:

Note: For the sake of clarity, I added continuous line numbers to the document available in the OSF. I will refer to these page and line numbers (Pxlx) where relevant in the following points.
 
The abstract should mention the fact that participants are not randomly assigned to the groups, but self-assigned based on their completion of a preregistration.
 
P4l64 compat should read combat
 
P4l72 From the beginning, it is not very clear what the ‘Attitude towards statistics’ means. A few examples of points may be helpful to avoid confusion early on.
 
P6l109 The first sentence of this paragraph would benefit from a context setting to mention that this is in the UK since this differs in different settings.
 
P7l142 not clear what this sentence means. Should ‘report worries’ be ‘students worry’?
 
P7l147 The sentence starting with “Indeed, an undergraduate publication…” should be moved one sentence earlier; it is out of context where it stands.
 
P9l186 The term ‘utility’ used in this paragraph is not entirely clear, especially in the first sentence of ‘utility in…. dissertation provision’. A better term may increase readability.
 
P9l188 “to improve students' … endorsement of QRPs” doesn’t sound quite right. I believe the authors mean the opposite.
 
P9l190 It should be clear that T1 is always pre-preregistration. It becomes clear later on but I noted it as a question at this point in the manuscript.
 
P9l193 the term ‘affect of statistics’ appears to invert the roles tested, i.e., to look at how statistics impact students rather than how the students perceive statistics.
 
P9l195 Is “self-reported competence” with statistics different from “less difficulty with statistics”?
 
P9l198 ‘Endorsement’ makes it sound like the test is about self-reported behaviours, whereas the questions are mostly about acceptance/tolerance or perception of QRPs.
 
P9l200 confidence in terminology sounds like faith in. Even if wordier, it may be clearer to say confidence in their understanding of OS terminology.
 
P10l223 Remuneration or compensation?
 
P11 maybe link to the power calculation available on the OSF?
 
P11l238 name the university granting the ethics approval
 
P11l241 “ensuring they meet the inclusion criteria” → How could this be ensured? Was a registry of students consulted? Was any assurance granted?
 
P13l268 The authors should mention that the grades are self-reported. The questionnaire also allows respondents to select ‘Prefer not to answer’. What happens in this case? Will the data be included or excluded?
 
P13l278 From what I understood, respondents state whether they plan to perform a preregistration at T1, and whether they did it at T2. Will sub-groups be created to account for participants who planned to preregister but in the end did not, and those who did not plan to but in the end did? At T1, are participants allowed to answer that they do not know whether they will preregister their study?
 
P14l297 This may be a standard questionnaire, but I find the term ‘sensible’ very ambiguous in this test and potentially problematic for non-native speakers (i.e., it is often mixed with ‘sensitive’, and a ‘sensitive issue could approximate ‘problematic’).
 
P14l310 maybe add a few words to explain what an attention check is. I understood later, but it is not a term I have heard very often.
 
P15l335 “The same sample of students will be asked to complete the above measures again at Time 2” unclear that the COM-B is not included then.
 
P16l347 I feel that asking whether participants plan on publishing in open access may also be relevant here, although an option to mention that they do not for financial reason may be needed then.
 
P17l382 ‘additive’ effect is not so clear. I would also say that going through the preregistration process has an effect, rather than the process itself.
 
P18l395 reregistration → preregistration
 
P23l439 “Unlike with statistics attitudes…” This sentence contains many negatives and could be improved if rephrased slightly.
 
P24l442 The section on the qualitative analysis is not clear. What will this analysis be used for? Which question will it answer or complement?
 
OSF Study material:

  • Demographic question 8 is not discussed in the manuscript. How will this information be used?
     
  • Attitudes towards QRP
    • Study Design: “Collecting more data in order achieve significance” → Missing a ‘to’, and would be more accurate if phrased as “Collecting more data than planned…”
    • Reporting & analysis QRP: “selectively reporting studies” is jargon and may be very difficult to understand by psychology undergraduate students.

  • Understanding Open Science
    • What are the descriptors used in the 1-7 scale?
       

  • Brief COM-B measure

    • I would recommend avoiding 0s in the scale for ease of calculation, without having to log-transform the values
    • These 6 questions are extremely difficult to grasp. The ‘definition’ section should be rephrased in the second person, using simpler terms and examples. As it currently stands, I anticipate that respondents will guess more than knowingly answer these highly abstract conceptual questions.
    • As stated above, maybe consider using this scale about the dissertation itself.
       

  • Attention check

    • Are there multiple choices to this question to make sure the participant reads through?
       

  • Post-only questions

    • The question starting with “If yes” should come directly under the first question, not under the question about groups.
       

  • Value of pre-registration

    • I would highly encourage authors to repeat the definition of preregistration at this point, and maybe every now and then in the questionnaire.

 
I hope some of these points will be useful to the authors. I want to congratulate the authors once again on their efforts, and welcome any follow-up questions they may have.
 
Kind regards,
Noémie


Noémie Aubert Bonn
Postdoctoral researcher | Hasselt University (Belgium) and Amsterdam UMC (Netherlands)
Currently working from Manchester, UK
noemie.aubertbonn@uhasselt.be

