The relationship between risk and compliance during the first year of the Covid-19 pandemic in Norway
Relationship between perceived risk and compliance to infection control measures during the first year of a pandemic
Abstract
Recommendation: posted 04 November 2024, validated 08 November 2024
Fillon, A. (2024) The relationship between risk and compliance during the first year of the Covid-19 pandemic in Norway. Peer Community in Registered Reports. https://rr.peercommunityin.org/PCIRegisteredReports/articles/rec?id=584
Recommendation
The Stage 1 manuscript was evaluated over three rounds of in-depth review. Based on detailed responses to reviewers’ and the recommender’s comments, the recommender judged that the manuscript met the Stage 1 criteria and therefore awarded in-principle acceptance.
The recommender in charge of the evaluation of the article and the reviewers declared that they have no conflict of interest (as defined in the code of conduct of PCI) with the authors or with the content of the article.
Reviewed by Lewend Mayiwar, 31 Oct 2024
Thank you again for the detailed revisions! All comments have been addressed and I don’t have anything else to add. Best of luck with your research!
Best,
Lewend
Evaluation round #2
DOI or URL of the report: https://osf.io/f25eu
Version of the report: 2
Author's Reply, 12 Sep 2024
Dear Editor,
We would like to thank you for taking the time to handle the revision process of our Stage 1 manuscript submission. We are happy to see that the reviewers were satisfied with our prior revision, and we found their suggestions for improving the manuscript further to be clear and well informed.
Please find in the document below our responses and the actions we have taken in response to each issue raised by the reviewers.
Best wishes on behalf of the authors,
Sebastian B. Bjørkheim
Decision by Adrien Fillon, posted 07 May 2024, validated 07 May 2024
Dear authors,
Thank you for the in-depth revision of your manuscript. I received reviews from the three reviewers, and they all were satisfied with the revision of the manuscript. One reviewer provided several minor suggestions to implement for the next round.
Two reviewers took the time to review the R script and had several questions and suggestions for improving it. These points need to be addressed before the Registered Report can be accepted.
Regarding the issue raised by reviewer 2 about the results section: it is in fact possible to write a results section with dummy results to make the procedure of an RR easier to follow. It is up to you whether to do so, but if you do not, please ensure that all points regarding the R script and the procedure are sufficiently detailed in the manuscript/code and in the response to the reviewers in the next round.
Best regards,
Adrien Fillon
Reviewed by Gaëlle Marinthe, 27 Apr 2024
I read the revised version of the manuscript with interest. The authors have done a great job in responding to the comments of the editor and reviewers. The reformulation of the hypotheses, the planned multiverse analyses, and the steps taken to deal with the risk of bias improve the manuscript considerably.
I do, however, have a reservation about the script. I am not familiar with fitting RI-CLPMs in R (I work more with Mplus), but the script seems somewhat different from what I am used to. In particular, it does not include the between components, which allow between-person variance to be distinguished from within-person variance. I am relying here on Hamaker et al. (2015) and Mulder & Hamaker (2021); perhaps the authors rely on other references? If so, it may be worth citing them.
References
Hamaker, E. L., Kuiper, R. M., & Grasman, R. P. P. P. (2015). A critique of the cross-lagged panel model. Psychological Methods, 20(1), 102–116. https://doi.org/10.1037/a0038889
Mulder, J. D., & Hamaker, E. L. (2021). Three Extensions of the Random Intercept Cross-Lagged Panel Model. Structural Equation Modeling: A Multidisciplinary Journal, 28(4), 638–648. https://doi.org/10.1080/10705511.2020.1784738
Reviewed by Lewend Mayiwar, 04 May 2024
Reviewed by Erik Løhre, 11 Apr 2024
Evaluation round #1
DOI or URL of the report: https://osf.io/3qhgk
Version of the report: 1
Author's Reply, 27 Mar 2024
Please see the attached documents containing our reply to the editor and peer reviewers, and a copy of the latest manuscript with "tracked changes" turned on. These documents are also available on the OSF page of this project (https://osf.io/2af9x/).
Decision by Adrien Fillon, posted 15 Jan 2024, validated 15 Jan 2024
Dear Authors,
I have now received three thorough and detailed reviews of your manuscript. This is a difficult submission to handle, given its high risk of bias due to the use of existing data on which you have already published studies testing related hypotheses.
The three reviewers agreed on four key points that currently prevent a recommendation:
- The first hypothesis, especially H1a, has already been tested in other published studies, and you need to explain in more detail how this test is new in terms of key measures: more precisely, how the operationalization differs from previous studies and how this hypothesis can inform theory beyond what was already done with similar measures on the same dataset.
- The rationale for hypothesis 3 is not sufficiently documented (Erik Løhre also mentioned that hypothesis 2 could be explained more fully in the introduction).
- Crucially, not enough has been done to mitigate the risk of bias associated with the accessibility of the data. Indeed, the manuscript is a Level 1 submission (please see section 3.6 here: https://rr.peercommunityin.org/about/full_policies#h_95790490510491613309490336), and in particular: "Submissions at Level 1 or 2 will usually be required to include stringent countermeasures against overfitting, such as the adoption of conservative inferential statistical thresholds, recruitment of a blinded analyst, or multiverse/specification analysis". For this specific case, I would like the authors to include all three proposed countermeasures, because of the high level of potential bias involved. Note that regarding the adoption of a conservative inferential statistical threshold, I favor lowering the alpha level over the power analysis mentioned by the reviewers. This is motivated by the following study: https://open.lnu.se/index.php/metapsychology/article/view/2460.
- Two reviewers asked that a statistical script be provided. This is especially important for two reasons: 1) it is unclear what kind of controls could be involved in the "positive association" hypotheses, and 2) we need to understand what will be done in the multiverse/specification analyses.
Beyond the four points above, reviewer 1 also asked the authors to ensure the specificity of the question: how risk is defined and operationalized, whether it concerns oneself (as stated in the introduction) or others and life in general (as operationalized in some items), and also how compliance is distinguished from preventive behaviors.
Finally, reviewer 1 stated: "Third, it is not clear which participants would be retained (only participants who responded to all four waves?) and no power analysis is carried out to ensure the relevance of the sample available." The last part, again, could be addressed by providing the analysis syntax/code and by lowering the significance threshold with an accompanying rationale. However, the authors also need to be more specific about the exact sample used, especially because the sample size at T4 is roughly half that at T1.
I hope these reviews will be beneficial to you, and I look forward to seeing a revision with the mentioned improvements. Note that once I receive the new version, I plan to send it again to reviewers 2 and 3, asking them to check whether the manuscript addresses the four key points mentioned above.
Kind regards,
Adrien Fillon
Reviewed by anonymous reviewer 1, 21 Dec 2023
The research question is not entirely new, but it is interesting and may be of some importance in establishing a potential bidirectional relationship between risk perception and compliance with preventive behaviour in a longitudinal and (fairly) representative database.
The hypotheses are credible and precise. However, hypothesis H3 should be introduced and justified.
The data has already been collected, and the targeted measures from the first wave have been used in three manuscripts (published or preprinted). In one of these manuscripts, the authors even test the link between perceived risk and compliance. Although the authors state this, it is a concern to me: the authors have already tested H1a, so the level of prior knowledge of the data is not satisfactory. The authors state that the items used are not exactly the same; but either these are not the same concepts, which limits the measurement, or they are conceptually the same thing, in which case the authors have already tested H1a.
Second, there are discrepancies between the theoretical reasoning and the methodology used. In particular, perceived risk is defined as a risk to oneself, yet it is measured using several items relating to the risk the disease poses to one's own health, to that of others, and even to life in general. If differences in effects can be expected between these, this should be specified and justified. Similarly, compliance involves a range of different behaviours, and some authors have argued for distinguishing between preventive behaviours (e.g., distancing versus hygiene). The authors should take this into account in order to shed more light on their research question.
Third, it is not clear which participants would be retained (only those who responded to all four waves?), and no power analysis has been carried out to establish the adequacy of the available sample.
To conclude, I have serious reservations about whether these hypotheses, with these data, can be tested in an RR format. This does not preclude the interest of the question, but rather the appropriateness of the publication process.
Reviewed by Lewend Mayiwar, 22 Dec 2023
Thank you for the opportunity to review the Stage 1 report of “Relationship between perceived risk and compliance to infection control measures during the first year of a pandemic”. The registered report proposes to test the association between risk perception and compliance with Covid-19 control measures in a representative Norwegian sample, while also testing how this association changes over time. The report is concise, well-structured, easy to read, and clearly spells out the contribution to the literature. This is my first time reviewing a registered report based on secondary data. The authors are very transparent in reporting their prior knowledge of the data they plan to use. Overall, I think this report generally meets the Stage 1 criteria, but I have some minor comments and suggestions that I hope will be useful.
- The authors specify using a p-value cut-off of 5%. But given the very large sample size (at each time point), I wonder whether this cut-off should be reduced. Some have argued that when the sample size is very large, p-values just under .05 (e.g., .04) might be taken as evidence for the null (e.g., see: https://journals.sagepub.com/doi/10.1177/25152459221080396).
- This brings me to my next comment: in Table 3, if I have interpreted this correctly, the authors specify that lack of support for a hypothesis will be interpreted as indicating the absence of an association. This means that p-values just below 5% would be taken as evidence for an association when in fact they might constitute evidence for its absence. I’ve never worked with sample sizes this large, and I have no experience with the methodology proposed in this study, but I wonder whether one might want to complement traditional null hypothesis significance testing with methods that quantify evidence for the null more directly (e.g., equivalence testing or Bayesian analysis).
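Purely as an illustration of the equivalence-testing idea mentioned in this comment (this is not part of the authors' registered plan, and the function name, bounds, and figures below are hypothetical), a two one-sided tests (TOST) check on a correlation coefficient can be sketched via the Fisher z approximation:

```python
import numpy as np
from scipy import stats

def tost_correlation(r, n, bounds=(-0.1, 0.1)):
    """Two one-sided tests (TOST) for a Pearson correlation.

    Returns the larger of the two one-sided p-values; if it falls
    below alpha, the correlation is statistically equivalent to zero
    within the chosen bounds (Fisher z approximation).
    """
    z = np.arctanh(r)              # Fisher z-transform of the correlation
    se = 1.0 / np.sqrt(n - 3)      # standard error of z
    lo, hi = (np.arctanh(b) for b in bounds)
    p_lower = 1 - stats.norm.cdf((z - lo) / se)  # H0: rho <= lower bound
    p_upper = stats.norm.cdf((z - hi) / se)      # H0: rho >= upper bound
    return max(p_lower, p_upper)

# Hypothetical example: r = .02 in n = 4,000 respondents, bounds of +/- .10.
p_equiv = tost_correlation(0.02, 4000)
```

With these made-up numbers both one-sided nulls are rejected at conventional alpha levels, so the association would be declared statistically negligible; a larger observed r (e.g., .15) within the same bounds would not pass the test.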
- On page 6, the authors write that they will also test the reversed relationship, that is, whether compliance predicts perceived risk. Although I think I know why the authors want to test this, I think it might be good to provide a brief justification, and perhaps to also explain why they want to test it non-directionally.
- Just a suggestion, and I don’t mean to complicate things, but it would be helpful to see the whole Results section written out based on simulated data, and to share the analytical code (by uploading it to the OSF page). This would make it easier to compare the Stage 1 report with the Stage 2 report, and would allow the authors to get feedback on their analysis script and catch potential errors.
- This is a very minor suggestion, but I think this part on page 6 can be removed: “We will use a registered report publication process to enhance the transparency and rigor of our research methodology, study design and analysis plan. This approach ensures that the significance of our study is evaluated based on the research question and methodology, rather than the outcomes.”
- On page 6 there is a missing closing parenthesis (in “(H3”).
I wish the authors good luck with their paper!
Kind regards,
Lewend Mayiwar