Thank you again for the detailed revisions! All comments have been addressed and I don’t have anything else to add. Best of luck with your research!
Best,
Lewend
DOI or URL of the report: https://osf.io/f25eu
Version of the report: 2
Dear Editor,
We would like to thank you for taking the time to handle the revision process of our stage 1 manuscript submission. We are happy to see that the reviewers were satisfied with our prior revision, and we found their suggestions on how to improve the manuscript further to be clear and well-informed.
Please find below our responses to each issue raised by the reviewers, along with the actions we have taken.
Best wishes on behalf of the authors,
Sebastian B. Bjørkheim
Dear authors,
Thank you for the in-depth revision of your manuscript. I received reviews from all three reviewers, and they were all satisfied with the revision. One reviewer provided several minor suggestions to implement in the next round.
Two reviewers took the time to review the R script and raised several questions and suggestions for improving it. These points need to be addressed before the Registered Report can be accepted.
Regarding the issue of the results section raised by reviewer 2: it is in fact possible to write a results section with dummy results to make the procedure of a RR easier to follow. It is up to you whether to do so, but if you do not, please ensure that all points regarding the R script and the procedure are sufficiently detailed in the manuscript/code and in your response to the reviewers in the next round.
Best regards,
Adrien Fillon
I read the revised version of the manuscript with interest. The authors have done a great job in responding to the comments of the editor and reviewers. The reformulation of the hypotheses, the planned multiverse analyses, and the steps taken to deal with the risk of bias improve the manuscript considerably.
I do, however, have a reservation about the script. I am not very familiar with fitting RI-CLPMs in R (I work more in Mplus), but the script seems somewhat different from what I am used to. In particular, it does not include the between components, which allow between-person variance to be distinguished from within-person variance. Here I am relying on Hamaker et al. (2015) and Mulder & Hamaker (2021). Perhaps the authors rely on other references? If so, it may be interesting to cite them.
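For illustration, here is a minimal lavaan sketch of the between components I have in mind, following the bivariate specification in Mulder & Hamaker (2021). The variable names (risk1-risk4, comp1-comp4, dat) are placeholders, not the authors' actual measures:

```r
library(lavaan)

riclpm <- '
  # Between components: random intercepts capturing stable,
  # trait-like differences between persons
  RI_risk =~ 1*risk1 + 1*risk2 + 1*risk3 + 1*risk4
  RI_comp =~ 1*comp1 + 1*comp2 + 1*comp3 + 1*comp4

  # Within components: wave-specific deviations from the person means
  wr1 =~ 1*risk1
  wr2 =~ 1*risk2
  wr3 =~ 1*risk3
  wr4 =~ 1*risk4
  wc1 =~ 1*comp1
  wc2 =~ 1*comp2
  wc3 =~ 1*comp3
  wc4 =~ 1*comp4

  # Autoregressive and cross-lagged paths among the within components
  wr2 ~ wr1 + wc1
  wc2 ~ wr1 + wc1
  wr3 ~ wr2 + wc2
  wc3 ~ wr2 + wc2
  wr4 ~ wr3 + wc3
  wc4 ~ wr3 + wc3

  # Covariance between the within components at wave 1,
  # and between their residuals at waves 2-4
  wr1 ~~ wc1
  wr2 ~~ wc2
  wr3 ~~ wc3
  wr4 ~~ wc4

  # (Residual) variances of the within components
  wr1 ~~ wr1
  wc1 ~~ wc1
  wr2 ~~ wr2
  wc2 ~~ wc2
  wr3 ~~ wr3
  wc3 ~~ wc3
  wr4 ~~ wr4
  wc4 ~~ wc4

  # Variances and covariance of the random intercepts
  RI_risk ~~ RI_risk
  RI_comp ~~ RI_comp
  RI_risk ~~ RI_comp
'

fit <- lavaan(riclpm, data = dat, missing = "ML",
              meanstructure = TRUE, int.ov.free = TRUE)
summary(fit, standardized = TRUE)
```

Note that with the lavaan() function (as opposed to the sem() wrapper) all parameters must be specified explicitly, so unspecified covariances, such as those between the random intercepts and the wave-1 within components, are fixed to zero by default.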
References
Hamaker, E. L., Kuiper, R. M., & Grasman, R. P. P. P. (2015). A critique of the cross-lagged panel model. Psychological Methods, 20(1), 102–116. https://doi.org/10.1037/a0038889
Mulder, J. D., & Hamaker, E. L. (2021). Three extensions of the random intercept cross-lagged panel model. Structural Equation Modeling: A Multidisciplinary Journal, 28(4), 638–648. https://doi.org/10.1080/10705511.2020.1784738
DOI or URL of the report: https://osf.io/3qhgk
Version of the report: 1
Please see the attached documents, which contain our reply to the editor and peer reviewers, and a copy of the latest manuscript with "tracked changes" turned on. These documents are also available on the OSF page of this project (https://osf.io/2af9x/).
Dear Authors,
I have now received three thorough and detailed reviews of your manuscript. This is a difficult submission to handle given its high risk of bias: it uses existing data on which you have already published studies testing related hypotheses.
The three reviewers agreed on four key points that currently prevent a recommendation:
- The first hypothesis, especially H1a, has already been tested in other published studies, and you need to detail how this test is new in terms of key measures. More precisely, explain how the operationalization differs from previous studies and how this hypothesis can inform theory beyond what has already been done with similar measures on the same dataset.
- The rationale for hypothesis 3 is not sufficiently documented (Erik Løhre also mentioned that hypothesis 2 could be explained further in the introduction).
- Crucially, not enough has been done to mitigate the risk of bias associated with the accessibility of the data. Indeed, the manuscript is a Level 1 submission (please see section 3.6 here: https://rr.peercommunityin.org/about/full_policies#h_95790490510491613309490336), which states that "Submissions at Level 1 or 2 will usually be required to include stringent countermeasures against overfitting, such as the adoption of conservative inferential statistical thresholds, recruitment of a blinded analyst, or multiverse/specification analysis". For this specific case, I would like the authors to include all three proposed countermeasures because of the high level of potential bias involved. Note that regarding the adoption of a conservative inferential statistical threshold, I am in favor of lowering the alpha level rather than conducting the power analysis mentioned by the reviewers. This is motivated by the following study: https://open.lnu.se/index.php/metapsychology/article/view/2460.
- Two reviewers asked you to provide a statistical script. This is especially important for two reasons: 1) it is unclear what kind of controls would be involved in the "positive association" hypotheses, and 2) we need to understand what will be done in the multiverse/specification analyses (see the sketch after this list).
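For illustration, the multiverse/specification part could be organized as a grid of analytic choices that is crossed and fitted in a loop. The sketch below is purely hypothetical: the choice dimensions, the fit_riclpm() wrapper, and the data object dat are placeholders for the authors' actual decisions.

```r
# Hypothetical specification grid: each row defines one analysis to run.
specs <- expand.grid(
  risk_scope = c("self", "others", "general"),    # which risk items
  behaviour  = c("distancing", "hygiene", "all"), # which compliance items
  retention  = c("all_waves", "any_wave"),        # which participants
  stringsAsFactors = FALSE
)

# fit_riclpm() is a hypothetical wrapper around the focal model,
# parameterized by the analytic choices in one row of the grid.
results <- lapply(seq_len(nrow(specs)), function(i) {
  s <- specs[i, ]
  fit_riclpm(data = dat,
             risk_scope = s$risk_scope,
             behaviour  = s$behaviour,
             retention  = s$retention)
})
```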
Beyond the four points above, reviewer 1 also asked the authors to ensure the specificity of the question: how risk is defined and operationalized, whether it concerns oneself (as stated in the introduction) or others and life in general (as operationalized in some questions), and also how compliance is distinguished from preventive behaviours.
Finally, reviewer 1 stated: "Third, it is not clear which participants would be retained (only participants who responded to all four waves?), and no power analysis has been carried out to ensure the adequacy of the available sample." The last part, again, could be addressed by providing the syntax/analytical code and by lowering the significance threshold with an accompanying rationale. However, the authors also need to be more specific about the exact sample used, especially because the sample size at T4 is roughly half that at T1.
I hope these reviews will be beneficial to you. I look forward to seeing a revision with the mentioned improvements. Note that once I receive the new version, I plan to send it to reviewers 2 and 3 again, asking them to check whether the manuscript is aligned with the four key points mentioned above.
Kind regards,
Adrien Fillon
The research question is not really new, but it is interesting and may be of some importance in establishing potential bidirectionality between risk perception and compliance with preventive behaviour using a longitudinal and (fairly) representative database.
The hypotheses are credible and precise. However, hypothesis H3 should be introduced and justified.
The data have already been collected, and the targeted measures from the first wave have been used in three manuscripts (published or preprinted). In one of these manuscripts, the authors even test the link between perceived risk and compliance. Although the authors state this, it is a concern to me. Indeed, the authors have already tested H1a, so the level of prior knowledge of the data is not satisfactory. The authors state that the items used are not exactly the same; but this creates a dilemma: if the authors consider that these are not the same concepts, this limits the measurement, and if they are conceptually the same thing, then the authors have already tested H1a.
Second, there are discrepancies between the theoretical reasoning and the methodology used. In particular, perceived risk is defined as a risk to oneself, yet it is measured using several items relating to the risk the disease poses to one's own health, to that of others, and even to life in general. The authors should specify whether and why differences in effects can be expected across these items. Similarly, compliance involves a range of different behaviours, and some authors have argued for distinguishing between different preventive behaviours (e.g., distancing versus hygiene). The authors should take this into account in order to shed more light on their research question.
Third, it is not clear which participants would be retained (only participants who responded to all four waves?), and no power analysis has been carried out to ensure the adequacy of the available sample.
To conclude, I have serious reservations about whether these hypotheses, with these data, can be tested in an RR format. This does not call into question the interest of the research question, but rather the appropriateness of the publication process.
Thank you for the opportunity to review the Stage 1 report of "Relationship between perceived risk and compliance to infection control measures during the first year of a pandemic". The registered report proposes to test the association between risk perception and compliance with COVID-19 control measures in a representative Norwegian sample, while also testing how this association changes over time. The report is concise, well-structured, easy to read, and clearly spells out its contribution to the literature. This is my first time reviewing a registered report based on secondary data. The authors are very transparent in reporting their prior knowledge of the data they plan to use. Overall, I think this report generally meets the Stage 1 criteria, but I have some minor comments and suggestions that I hope will be useful.
I wish the authors good luck with their paper!
Kind regards,
Lewend Mayiwar