Probing the interaction between interpretation bias and repetitive negative thinking in subclinical psychopathology
Pathway between Negative Interpretation Biases and Psychological Symptoms: Rumination as a Transdiagnostic Mediator in a Longitudinal Study
Abstract
Recommendation: posted 20 June 2022, validated 21 June 2022
Chambers, C. (2022) Probing the interaction between interpretation bias and repetitive negative thinking in subclinical psychopathology. Peer Community in Registered Reports. https://rr.peercommunityin.org/articles/rec?id=182
Recommendation
URL to the preregistered Stage 1 protocol: https://osf.io/89n7u (currently under private embargo)
List of eligible PCI RR-friendly journals:
- F1000Research
- Peer Community Journal
- PeerJ
- Psychology of Consciousness: Theory, Research and Practice
- Royal Society Open Science
- Swiss Psychology Open
The recommender in charge of the evaluation of the article and the reviewers declared that they have no conflict of interest (as defined in the code of conduct of PCI) with the authors or with the content of the article.
Reviewed by Rita Pasion, 13 Jun 2022
I think the authors have been able to address the major issues I highlighted in my review. Congratulations, and good luck with data collection!
Evaluation round #1
DOI or URL of the report: https://osf.io/89n7u
Author's Reply, 07 Jun 2022
Decision by Chris Chambers, posted 18 Apr 2022
Two reviewers have now kindly evaluated the Stage 1 manuscript. As you will see, the reviews are critical but constructive, raising a broad range of issues that will need to be addressed to achieve in-principle acceptance. The reviewers' main concerns include improving the focus of the introduction and the clarity of key theoretical concepts, further justifying and explaining the measures, increasing the sample size to achieve sufficient power (I agree that power of at least 90% would be desirable), improving the presentation, including visualisation of the model paths, specifying the research questions more precisely, clarifying appropriate baselines/controls, ensuring suitable Type I error control, and resolving ethical concerns related to suicidality. In my own reading, I noticed that the sample size calculation includes an estimated exclusion rate. This is acceptable, but please also ensure that you state a commitment to recruit a defined minimum sample size regardless of exclusions.
The level of revisions required is substantial but within scope for a Stage 1 RR. On this basis, I am happy to offer you the opportunity to submit a major revision, which I will return to the reviewers for further evaluation.
Reviewed by Ariana Castro, 14 Apr 2022
Reviewed by Rita Pasion, 15 Apr 2022
I would like to thank you for the opportunity to review this manuscript. Overall, I feel it is a good example of how pre-registered reports should be done. The core information is provided, but I offer some additional suggestions.
General comment on the introduction: It is hard to follow the main argument of the introduction because the study has many variables and tests many different things (which is also what makes the study really interesting). Before introducing subheadings, I would recommend the authors provide a brief overview of the study and the topics that will be addressed; this would also be a good opportunity to present, from the beginning, the main goals and contributions of the study. The authors could also try to relate the different sections to one another to make the argument flow more smoothly.
I think the authors could make the rationale behind the random intercept cross-lagged panel mediation model clearer (not only by focusing on how it overcomes "common" cross-lagged procedures). There are many advantages to this approach.
The authors did a great job estimating the sample size, but I wonder whether it would be possible to increase the power somewhat (90/95%) to avoid sitting exactly at the threshold of adequate power. I know it is sometimes hard to strike the right balance between time/human/funding resources and an adequate sample size, so this is just a suggestion.
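Should the authors wish to explore a higher power target, a minimal Monte Carlo sketch along the following lines could be adapted. The sample sizes, effect sizes, and the simple joint-significance test of an a×b mediated path used here are illustrative placeholders, not values or methods taken from the protocol:

```python
# Illustrative Monte Carlo power check for a simple a*b (mediation) path.
# All effect sizes, sample sizes, and the alpha level are placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)

def simulate_once(n, a=0.3, b=0.3, alpha=0.05):
    x = rng.normal(size=n)          # predictor (e.g., interpretation bias)
    m = a * x + rng.normal(size=n)  # mediator (e.g., rumination)
    y = b * m + rng.normal(size=n)  # outcome (e.g., symptoms)
    p_a = sm.OLS(m, sm.add_constant(x)).fit().pvalues[1]
    p_b = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit().pvalues[2]
    # Joint-significance criterion for the indirect effect
    return (p_a < alpha) and (p_b < alpha)

def estimated_power(n, n_sims=2000):
    return np.mean([simulate_once(n) for _ in range(n_sims)])

for n in (150, 200, 250, 300):
    print(n, round(estimated_power(n), 3))
```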
I think the CBQp is better explained than the AST-D. The main goal of the AST-D should be clarified so that readers can immediately link the instrument to the hypothesis: it is a self-report measure of interpretation bias assessing how individuals process negative information in ambiguous scenarios.
I would recommend the authors include a figure to illustrate the model paths. I found it interesting that in the introduction the negative bias was presented as preceding symptoms (i.e., interpretation bias predicting symptoms), but as far as I can understand, the opposite path can also be tested in this longitudinal design (e.g., symptoms at T1 predicting negative bias at T2). This would be an important topic in the literature, because it remains unclear whether symptoms affect the way we see the world or whether, alternatively, a negative bias is one of the main mechanisms (from the outset) contributing to the etiology of internalizing disorders. There seems to be a genetic factor for internalizing, but also for the cognitive styles associated with biased processing. Moreover, a study by Liu and colleagues (2019) shows that anxiety-induced states increase attentional bias to negative stimuli and, simultaneously, that modification of attentional bias seems to influence anxiety under stressful conditions. It is likely that a bidirectional association exists between the two, but this study could advance our knowledge on this topic, increasing its impact. I leave this decision to the authors because I recognize it would increase the complexity of the manuscript.
Finally, I wonder whether it would be advisable to use some correction of p-values for multiple comparisons (I anticipate that many p-values will be analyzed). I have had good experiences with the false discovery rate (FDR). It is a less conservative approach than traditional methods (e.g., the Bonferroni correction). Furthermore, the FDR procedure adapts to the actual p-value distribution of the data, balancing Type II against Type I error.
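As an illustration of how such a correction might be applied once the path p-values are collected, here is a minimal sketch using the Benjamini–Hochberg procedure in statsmodels; the p-values are placeholders, and the authors' actual analysis software may of course differ:

```python
# Benjamini-Hochberg FDR adjustment of a set of p-values (illustrative values only).
from statsmodels.stats.multitest import multipletests

p_values = [0.001, 0.012, 0.030, 0.048, 0.210]  # placeholder p-values for the tested paths
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")

for p, p_adj, keep in zip(p_values, p_adjusted, reject):
    print(f"raw p = {p:.3f}, FDR-adjusted p = {p_adj:.3f}, significant: {keep}")
```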