DOI or URL of the report: https://osf.io/f84zy
Version of the report: 2
Hi,
For the most part, the reviewers are happy with your revisions (and I thank them for their reviews). One comment is still outstanding from R1 in relation to your design table. Please could you address this comment, then I can recommend the report?
Thanks
=====
Note from PCI RR Managing Board: We are now in the July-August shutdown period. During this time, authors are generally unable to submit new or revised submissions. However, we are going to give you the opportunity to resubmit despite the shutdown. You won't be able to do this the usual way. Instead, please email us (at contact@rr.peercommunityin.org) with the following:
In the subject line of the email please state the submission number (#) and title. We will then submit the revision on your behalf.
The authors have addressed the concerns I raised in the previous round of reviews.
In particular, the introduction is now clearer, several statements have been revised, and some methodological details have been made more explicit - this will also help in setting the bias-control level.
My major concern remains on the Design Table which is a RR requirement.
In the present version, the authors have opted to list only one outcome in "Interpretation given different outcomes", while "Theory that could be shown wrong by the outcomes" contains the interpretation of the alternative ones. The first column should list all the different outcomes and their interpretations, and the second should link the various outcomes to existing theories.
I believe that the authors have sufficiently addressed my main concerns, and that this manuscript can proceed to Stage 2. I do hope that the authors will provide detailed reporting of their pre-registered results, and in the case of additional exploratory analyses, these should be outlined clearly.
DOI or URL of the report: https://osf.io/7zxkc
Version of the report: 1
I have received two reviews of high quality - apologies for the length of time it has taken. As you can see, both reviewers see some merit in your Stage 1 submission; however, there are some concerns which should be addressed before it can be recommended.
To further elaborate on some points: I would report a different measure of internal consistency than alpha (such as McDonald’s omega - https://journals.sagepub.com/doi/full/10.1177/2515245920951747).
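For a one-factor model, McDonald's omega can be computed directly from the standardized factor loadings and uniquenesses. The sketch below is a minimal illustration with hypothetical loadings for a five-item scale (the numbers are invented, not from the manuscript):

```python
import numpy as np

def mcdonalds_omega(loadings, uniquenesses):
    """McDonald's omega for a one-factor model:
    omega = (sum of loadings)^2 / ((sum of loadings)^2 + sum of uniquenesses)."""
    loadings = np.asarray(loadings, dtype=float)
    uniquenesses = np.asarray(uniquenesses, dtype=float)
    common = loadings.sum() ** 2
    return common / (common + uniquenesses.sum())

# Hypothetical standardized loadings for a five-item scale; under a
# standardized one-factor solution, uniqueness = 1 - loading^2.
loadings = np.array([0.7, 0.6, 0.8, 0.5, 0.65])
uniquenesses = 1 - loadings ** 2
print(round(mcdonalds_omega(loadings, uniquenesses), 3))  # -> 0.788
```

In practice the loadings would come from a fitted one-factor model (e.g., a confirmatory factor analysis of the scale items) rather than being hard-coded.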
How might you consider any issues with multicollinearity in your regression models?
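One standard diagnostic is the variance inflation factor, VIF_j = 1 / (1 - R²_j), where R²_j comes from regressing predictor j on the remaining predictors. A minimal numpy sketch on synthetic data (the predictors here are invented stand-ins, not the study's variables):

```python
import numpy as np

def vif(X):
    """Variance inflation factors for the columns of X:
    VIF_j = 1 / (1 - R^2_j), with R^2_j from regressing column j
    on the other columns (plus an intercept)."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        y = X[:, j]
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r2 = 1 - resid.var() / y.var()
        out[j] = 1.0 / (1.0 - r2)
    return out

# Synthetic example: x2 is nearly a duplicate of x1, x3 is independent,
# so the VIFs for x1 and x2 should be large while x3's stays near 1.
rng = np.random.default_rng(0)
x1 = rng.normal(size=500)
x2 = 0.95 * x1 + 0.1 * rng.normal(size=500)
x3 = rng.normal(size=500)
print(vif(np.column_stack([x1, x2, x3])))
```

A common (if rough) rule of thumb flags VIFs above 5-10 as a sign that coefficient estimates for those predictors will be unstable.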
The authors of this manuscript - which appears to be a pre-registered analysis of a previously collected dataset - aim to investigate how personality traits influence our outlook and choices in life, and how they may also influence how we evaluate and respond to an extreme event such as the 2020 COVID-19 pandemic.
The question in itself is interesting, but I believe the implementation as a registered report is suboptimal at best.
First and foremost, it is unclear whether the original data from either dataset were already analyzed for different purposes and/or whether they are currently not accessible to the authors - I find this rather unlikely considering the large number of respondents. As the authors mention that the two datasets have not yet been combined, how will they evaluate whether the final sample size will be sufficient to test their predictions?
In general, I find all planned analyses and predictions rather exploratory - maybe this is confounded by the large number of hypotheses. The design table does not help much with this since it is quite crowded and could probably be simplified.
The authors state in the Design Table (but, it seems, not in the main text): "Due to a large sample size, even small effects are likely to show significant effects. Due to small effects having impact on infections in large populations, we will consider effects larger than ƒ2 = 0.01 to be theoretically meaningful for our research question." How will the authors rule out that this is not just another "statistical artefact" similar to the ones mentioned in the introduction? What do they mean by "theoretically meaningful" when their earlier consideration is in fact very practical (i.e., small effects having an impact on infections in large populations)?
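For context on what the proposed threshold amounts to: Cohen's f² relates to variance explained via f² = R² / (1 - R²), so f² = 0.01 corresponds to roughly 1% of outcome variance. A quick arithmetic check:

```python
# Cohen's f^2 and variance explained are interconvertible:
#   f2 = R2 / (1 - R2)  and  R2 = f2 / (1 + f2).
# (For incremental effects the same formula is applied to delta-R2.)
def f2_from_r2(r2):
    return r2 / (1 - r2)

def r2_from_f2(f2):
    return f2 / (1 + f2)

# The threshold proposed in the Design Table, f2 = 0.01, corresponds to
# about 0.0099 -- i.e. roughly 1% of variance explained.
print(r2_from_f2(0.01))
```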
In general, it does not seem that the aim of the report and the applied methods derive clearly from the introduction - which at the moment is a rather disconnected list of previous research on various factors that might or might not correlate with each other. The report should make transparent to the reader how the predictions and planned tests answer specific questions and gaps in the literature.
Finally, I find some statements rather speculative - e.g., "The results may facilitate how future pandemics are handled, in particular in terms of adjusting public health information to be effective for reaching individuals with personalities that may otherwise be resistant to seeing the risk or to comply with infection control measures" - and these should be significantly toned down. Similarly, the statement "Perhaps due to a sense of urgency, most of the research on how personality may influence pandemic behaviour was not performed in accordance with current standards for open and transparent research, (i.e., controlling the degrees of freedom in measurement, analysis, and hypothesis development). If measuring a number of personality traits along with several different pandemic attitudes, beliefs or behaviours (which may be indexed in different ways) in large cross-sectional studies, a high number of potential relationships can be discovered. This makes it difficult to discriminate true psychological mechanisms from spurious false positives findings that may emerge from multiple comparisons and undisclosed analytic flexibility" would lead the reader to think that new data are going to be collected, but then they find out this is not the case. Rather, what they later gather from the paper is that the dataset involved in this RR was indeed collected with the same procedure it criticizes.
- The literature review in the introduction could be expanded on more. The authors should review additional work that examined personality and individual differences and pandemic responses during the Covid-19 pandemic. This can then help pinpoint why the additional examination of the Big Five traits would be relevant, especially in the face of other research that has explored different psychological mechanisms. Basically, why do personality differences matter? What can the current work contribute to the existing literature? The journal Social and Personality Psychology Compass also had a few special issues on social/personality psych and the pandemic, and it would be good for the authors to look at some of the published findings there to include in their literature review, e.g., Panish et al. (2023; https://compass.onlinelibrary.wiley.com/doi/full/10.1111/spc3.12885), etc.
- The second paragraph in the intro makes the claim that much of the existing work on personality and pandemic behavior was not performed according to current standards of open science - I don’t know if this claim can really be substantiated, or if it’s even a necessary statement to make. If the authors would like to persist with this type of statement, I recommend framing it more positively, e.g., “adopting more open practices can help clarify or further confirm previous findings” or something along these lines.
- Some of the sections describing Big Five personality and pandemic perceptions/behaviors seem a bit repetitive and the writing could be more concise/more organized.
- The authors make the argument that because much of the existing findings rely on cross-sectional studies in which personality is measured simultaneously with pandemic perceptions/behaviors, these responses can influence each other. The authors’ own data collected personality measures 1.5 years before the pandemic. While this can get around the issue the authors described, it does potentially create a new one: That is, although personality changes are generally shown to be gradual, it may be possible that a “sudden” global event such as a pandemic may lead to more drastic personality changes. In addition, other events may have occurred during the two years between personality assessment and pandemic responses data collection that also impact participants’ personality. Essentially, is it a fair assumption that personality measures obtained 1.5-2 years before are still the most accurate in reflecting participants’ actual personality characteristics during the pandemic?
- Given that the authors will use the TIPI in their study, it would be good to provide additional discussion on personality measurement differences and their impacts on related outcomes in their eventual discussion. While I understand the reason for using the TIPI, the small number of items in this measure can lead to potential issues with criterion validity. Somewhat related, see Bakker & Lelkes (2018): https://www.journals.uchicago.edu/doi/full/10.1086/698928.
- The authors should make sure to report alpha reliability for all measures used, when they arrive at the analysis stage of the report.
- It would be good to provide more details on the specific analyses - right now, it looks like there will be two regression analyses reported? Will the authors control for any covariates? Will there be analyses examining potential mediation models, e.g., personality -> perceived risk -> compliance? I assume the authors will also report basic descriptives and zero-order correlations? If so, please include these in the Analyses section.
- The table included does shed some light on the analyses to be conducted - however, I noticed that one set of analyses only included 3 of the 5 traits as predictors (Extraversion, Openness, and Neuroticism), whereas the other analysis includes all five traits. I personally feel that these analyses should be consistent across the board and examine all 5 traits. That is, there is the possibility that the two traits not included in the first set of analysis (Agreeableness and Conscientiousness) could still predict risk perception. The authors should provide more justification for why they set up the regression models as they currently have.
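The mediation question raised above (personality -> perceived risk -> compliance) can be illustrated with a simple product-of-coefficients sketch. Everything below is hypothetical - variable names and effect sizes are invented, and in practice the indirect effect would need a bootstrapped confidence interval rather than a point estimate:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Hypothetical data-generating process: a trait shifts perceived risk,
# which in turn drives compliance, plus a small direct path.
personality = rng.normal(size=n)
risk = 0.5 * personality + rng.normal(size=n)
compliance = 0.4 * risk + 0.1 * personality + rng.normal(size=n)

def slopes(y, *predictors):
    """OLS slopes for y ~ intercept + predictors (intercept dropped)."""
    X = np.column_stack([np.ones(n), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

a = slopes(risk, personality)[0]                     # path a: personality -> risk
b, c_prime = slopes(compliance, risk, personality)   # path b and direct effect c'
indirect = a * b                                     # product-of-coefficients estimate
print(indirect, c_prime)  # indirect effect near 0.2, direct effect near 0.1
```

With the mediator included, the direct effect c' is what remains of the trait's association with compliance after accounting for perceived risk; reporting a, b, c', and the indirect effect together makes the proposed mechanism testable rather than implied.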