DOI or URL of the report: https://osf.io/fph8d?view_only=5286ad5b89584a0ba7d1f238db9aa0b4
Version of the report: 2
Dear all,
Thank you very much for evaluating our revision, and thank you, Matt Williams, for spotting the typo. We have uploaded a corrected version on OSF. We are happy to see that you are otherwise satisfied with our revision.
Thanks again for a very thorough and constructive review process.
Kind regards,
Luisa Liekefett, Simone Sebben and Julia C. Becker
As you will see, both reviewers are now fully satisfied and agree that the manuscript meets the Stage 1 criteria. I concur. There is just one very minor point to address in Matt Williams' review concerning what appears to be a typo in the Stage 1 manuscript. For a common typographic error, we could proceed with IPA and correct it at Stage 2, but given that it refers to a statistical coefficient it is best that we fix it now. I look forward to receiving your final revision and issuing the Stage 1 recommendation.
My thanks for the opportunity to review this revised submission. I have now read the responses to reviews (carefully), the manuscript (more quickly, focusing on the tracked changes), and briefly previewed the two survey prototypes. I am greatly impressed by how the authors considered our reviewer queries very deeply, and made proactive and sensible changes that genuinely improved what was already an excellent Stage 1 manuscript. I enjoyed reading the response letter and felt that I learned a good deal from it. I also really appreciated the authors going to the effort of sharing English prototypes of the surveys with us.
The only (minuscule) query I was able to come up with this time is that I suspect there is a typo in "At the first look (50% of data), the alpha level is .031. At the last look (100% of data), the alpha level is 0.30" (perhaps 0.03?).
I am happy to recommend that this Stage 1 RR now be accepted, and look forward to reading the Stage 2 manuscript a little further down the track!
All my comments and concerns have been successfully addressed by the authors.
I can only wish them the best of luck with the submission of their registered report and their data collection!
DOI or URL of the report: https://osf.io/q5639?view_only=5286ad5b89584a0ba7d1f238db9aa0b4
Version of the report: 1
I have now received two very detailed and constructive evaluations of your submission. As you will see, the reviews are broadly very positive while also raising a number of points for further consideration, including core issues relating to study design (such as the suitability of controls and potential carry-over effects), calls for additional methodological detail, and clarification of the analysis plan and key aspects of the study materials.
Concerning the point raised in Matt Williams' review regarding preregistration of the pilot studies, I agree that reporting these details in the main text is less crucial given the purpose of this preliminary research; however, it is always good practice to mention preregistration (and changes from protocol) where applicable, so please do mention in the main text of the manuscript where specific pilot studies were preregistered on AsPredicted. In addition, please document all relevant deviations from those protocols in the Supplementary Information file.
I hope you will find these reviews helpful in further strengthening your (already impressive) proposal and look forward to receiving your revised manuscript in due course.
Please see my attached review.
The submitted registered report addresses an important theoretical question related to one of the potential antecedents of conspiracy beliefs: ruminative thinking. Although there is prior empirical evidence assessing the association with different types of thinking (e.g., analytical thinking) and cognitive biases, I consider the present research informative insofar as it delves into the dysfunctional cognitive and emotional elements arguably characterizing the thinking processes often associated with conspiracy beliefs.
Below, I briefly comment on the theoretical introduction, the studies already conducted, and the interpretation of the available results, and then focus on the proposed follow-up study that you plan to conduct.
Introduction
I appreciate the use of a state-of-the-art definition of conspiracy beliefs, deviating from traditional definitions that tailored the phenomenon to fake or implausible conspiracies.
The explanation of the conceptual association between rumination and conspiracy beliefs through negative affect, negative cognitive biases, and persecutory delusions is generally convincing. However, there are a few points regarding the latter two issues that could be clearer.
Regarding negative bias thinking, I think the argumentation could benefit from explaining other biases associated with conspiracy beliefs (e.g., agency perception; Douglas et al., 2016; catastrophizing; Green & Douglas, 2018), in order to establish a clearer link with those biases that rumination favours.
As for persecutory delusions, although I find the argumentation compelling, explicitly stating that “both [persecutory delusions and conspiracy beliefs] share the conviction that harm is going to occur” might not be totally accurate. It is not always the case that specific conspiracy beliefs are anticipatory of harm; rather, they are often descriptive narratives of harmful actions in the past (e.g., 9/11 conspiracy theories). Of course, this can lead people to distrust the malevolent group responsible for these past actions and engage in new conspiracy beliefs about their future behavior, but this would not apply to the conspiracy belief that originated after the original threatening event.
Pilot 1
I would appreciate it if you could justify the use of two rumination measures. Do they measure exactly the same construct? If not, how do they differ? This could be helpful for the reader, considering that you point out earlier the relevance of distinguishing between different elements of rumination (brooding vs. reflection).
I would warn the reader that, beyond this being a correlational study, your statistical power is also not that high for detecting half of the correlations you observed (the minimum correlation you can detect with your sample size is r = .2153, with 90% power and alpha .05), so these preliminary results should be interpreted cautiously.
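As a quick sanity check on this figure (my own sketch, not part of the submission), the minimum detectable correlation follows from Fisher's z approximation, and one can invert it to recover the sample size implied by r = .2153 at 90% power and two-sided alpha = .05:

```python
import math
from statistics import NormalDist

def n_for_min_detectable_r(r, power=0.90, alpha=0.05):
    """Sample size at which |r| is just detectable with a two-sided test,
    using the Fisher z approximation: atanh(r) * sqrt(n - 3) = z_{1-a/2} + z_{power}."""
    z_r = math.atanh(r)                           # Fisher z-transform of r
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2) # critical value for the test
    z_power = NormalDist().inv_cdf(power)         # quantile for desired power
    return math.ceil(((z_alpha + z_power) / z_r) ** 2 + 3)

print(n_for_min_detectable_r(0.2153))  # sample size implied by the quoted r
```

This yields a sample size in the low 220s, which readers can compare against the reported N of Pilot 1.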
Pilots 2a and 2b
In the explanation of your design, I would change “participants were randomly assigned to a rumination AND a control condition” to “participants were randomly assigned to a rumination OR a control condition” to make it clearer that your design is between-subjects.
As for the manipulation in these two pilot studies, I had some comments (i.e., the individual relevance of the topic for engaging in rumination, and the variability in how much people wrote and how much time they spent ruminating). However, you partially addressed them in Pilot 3, and certainly in your proposed study, so I will not comment further.
Pilot 3
Manipulation
It is a shame that the effect was not there. However, despite the encouraging manipulation checks, I still wonder whether this type of manipulation can induce rumination in an online survey setting. One concern is time: the window might not be sufficient for people to engage in genuinely intense rumination.
A second issue is the one on which you base the justification of your follow-up study, namely, that the manipulation could have induced a mix of brooding and reflection. In the end, some of the manipulation checks do not allow one to distinguish between brooding and reflecting on a negative event. One can analytically approach an event with negative consequences, reflect on it, and experience negative affect due to the valence of the event, but not necessarily due to the type of thinking process.
Yet, I believe that the new brooding condition you proposed in the registration of your follow-up study can help to counter this issue, by focusing participants on the negative emotional aspects associated with the societal topic and, importantly, on a reiteration of those thoughts. This will hopefully do the trick! Otherwise, you may need to think of other study settings (perhaps in the lab or an ambulatory assessment?) to provide participants with a longer time window to experience this ruminative, recursive thinking.
Interpretation of available findings
I very much appreciated the clarity with which you discussed the set of mixed results and their potential explanations. It is never easy to describe and interpret such a puzzling set of findings!
I agree with you that the prediction regarding reflection is less clear. I think it is important to mention the temporal dimension in which reflection happens within the process of internalization of conspiracy beliefs. While reflection can protect against engaging in conspiratorial thinking when a threatening event occurs, for conspiracy believers, reflection could contribute to strengthening the justifications supporting the conspiracy narrative (see van Prooijen et al., 2020).
Van Prooijen, J.-W., Klein, O., & Milošević Đorđević, J. (2020). Social-cognitive processes underlying belief in conspiracy theories. In M. Butter & P. Knight (Eds.), Handbook of Conspiracy Theories (pp. 168-180). Oxon, UK: Routledge.
Registered Report of Follow-up Study
As I said, I think this study could address some of the issues of Pilot 3.
I have some concerns about potential carry-over effects between T1 and T2, especially considering the short interval between the two assessment points (a minimum of 24h) and the emphasis placed on introducing the concept and definition of conspiracy theories within the study. I believe that these two features could artificially increase conspiracy beliefs at T2 and reduce the strength of your manipulation. In the end, participants might already anticipate that they have to think about the “societal topics” in conspiratorial terms if they read about them 24h earlier.
One suggestion would be to increase the interval between T1 and T2 (maybe a week), although I understand that this may come at the expense of higher dropout rates and therefore higher data collection costs. Alternatively, you could consider masking the target societal topics (and the related conspiracy-belief measures) among 4-6 other unrelated topics and items measuring non-conspiracy beliefs. Along similar lines, I would suggest completely omitting the detailed preamble/definition of conspiracy theories included in the study materials (i.e., “Conspiracy theories are often discussed in the media. A conspiracy means that influential people join together in secret to pursue a common goal…”). I do not clearly see the benefit of including this paragraph (maybe I missed the point). However, I see its potential risk of framing participants’ mindset for the rest of the study (including T2) regarding the identification of conspiracy explanations in subsequent material (especially if only 24h pass between T1 and T2).
I very much like the distinction between the new “Brooding” and “Reflection” conditions. However, I think that some features should be kept constant across both conditions to avoid any confound due to the current differences, i.e., in the number of questions (7 vs. 4) and in the level of justification requested (i.e., Why does this concern make you feel so bad?). Regarding this issue, you could, for example, replace Q2 in the “Reflection” condition with: Which argument do you find particularly compelling in favour of this explanation being true? Why do you find this argument compelling? / Which argument do you find particularly compelling against this explanation being true? Why do you find this argument compelling?
The instructions for the Manipulation Checks section use weird and complicated wording (i.e., think about the 5min before we asked you the questions about the conspiracies). I would simplify it with the suggested wording below:
“During the 5 minutes we gave you to think about X, to what extent have you…
· Had depressing thoughts about X?
· Ruminated about the unpleasant thoughts and feelings that X triggers in you?
· Thought a lot about how bad your worries about X make you feel?
· Etc.”
I also appreciated the detailed justification of the SESOI, and I could not think of a more compelling justification for a different effect size.
I think the detailed pattern of results you expect for the Manipulation Checks would be more easily summarized in a figure (bar plot?), highlighting the ones you finally use for your stopping rules.
Analysis Plan
I expected the “Reflection” condition to be included in the analysis plan, considering the justification given for interpreting the mixed results of Pilots 1-3. I understand that brooding is the main condition of interest, as this should conceptually account for the effect of rumination on conspiracy beliefs. However, it is important to rule out that reflection has a similar effect on conspiracy beliefs (implausible, but possible). Thus, I would like to see this included as part of the plan for the main analyses, which would entail replacing your Welch t-test with an ANOVA framework with the three conditions as a predictor of the difference in conspiracy beliefs between T1 and T2.
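As an illustration of the suggested omnibus analysis (hypothetical data, effect sizes, and condition labels of my own; the real analysis would use the observed T2 − T1 difference scores), a one-way ANOVA over the three conditions might look like:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical T2 - T1 difference scores in conspiracy beliefs per condition
brooding   = rng.normal(0.3, 1.0, 120)   # assumed small effect, for illustration only
reflection = rng.normal(0.0, 1.0, 120)
control    = rng.normal(0.0, 1.0, 120)

# Omnibus test: do the three conditions differ in their change scores?
f_stat, p_value = stats.f_oneway(brooding, reflection, control)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

A significant omnibus F would then be followed by the planned comparison of interest (e.g., brooding vs. control), so the central brooding test is retained while reflection is no longer left untested.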
As for the sequential approach, it is unclear whether you are using corrected or uncorrected effect sizes for your stopping criteria and your equivalence tests. It is recommended to correct for bias when stopping early (at 1/3 or 2/3 of the data). This should be specified in your preregistration.
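To illustrate why this correction matters, here is a toy simulation of my own (the interim alpha of .03, the true effect of d = 0.2, and the group sizes are assumed purely for illustration): conditional on crossing an early stopping boundary, the observed effect size systematically overestimates the true effect.

```python
import random
from statistics import NormalDist, mean

random.seed(42)
norm = NormalDist()

TRUE_D = 0.20        # assumed true standardized effect
N_LOOK1 = 50         # per-group n at the interim look (50% of data)
ALPHA_LOOK1 = 0.03   # assumed interim alpha level

early_effects = []   # observed effects in runs that stop early
for _ in range(4000):
    # Two groups with sd = 1, so the mean difference is the observed Cohen's d
    g1 = [random.gauss(TRUE_D, 1) for _ in range(N_LOOK1)]
    g2 = [random.gauss(0.0, 1) for _ in range(N_LOOK1)]
    diff = mean(g1) - mean(g2)
    z = diff / (2 / N_LOOK1) ** 0.5        # z-test with known variance
    p = 2 * (1 - norm.cdf(abs(z)))         # two-sided p-value
    if p < ALPHA_LOOK1:                    # boundary crossed: study stops early
        early_effects.append(diff)

print(f"true d = {TRUE_D}, mean observed d at early stops = {mean(early_effects):.2f}")
```

With a small true effect, the runs that stop early are precisely those with unusually large observed differences, so the naive estimate is inflated; this is the bias that corrected estimators for group-sequential designs address.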
Hope you find some of these comments helpful, and I wish you the best of luck for this next study!