A new look at loneliness by testing hyperalertness
Loneliness in the Brain: Distinguishing Between Hypersensitivity and Hyperalertness
Abstract
Recommendation: posted 11 June 2024, validated 17 June 2024
Eisenbarth, H. and Schwarzkopf, D. (2024) A new look at loneliness by testing hyperalertness. Peer Community in Registered Reports. https://rr.peercommunityin.org/PCIRegisteredReports/articles/rec?id=598
Recommendation
URL to the preregistered Stage 1 protocol: https://osf.io/fxngv
Level of bias control achieved: Level 6. No part of the data or evidence that will be used to answer the research question yet exists and no part will be generated until after IPA.
List of eligible PCI RR-friendly journals:
1. Bathelt, J., Dijk, C., & Otten, M. (2024). Loneliness in the Brain: Distinguishing Between Hypersensitivity and Hyperalertness. In principle acceptance of Version 5 by Peer Community in Registered Reports. https://osf.io/fxngv
The recommender in charge of the evaluation of the article and the reviewers declared that they have no conflict of interest (as defined in the code of conduct of PCI) with the authors or with the content of the article.
Reviewed by Marta Andreatta, 22 May 2024
I thank the authors for having clarified all my questions.
Reviewed by anonymous reviewer 1, 06 Jun 2024
Thank you for revising the manuscript based on my and the other reviewer's comments. I have no further comments at this stage of the review process. Looking forward to seeing the results of this report!
Evaluation round #1
DOI or URL of the report: https://doi.org/10.31219/osf.io/j5v9b
Version of the report: 5
Author's Reply, 04 May 2024
Decision by Hedwig Eisenbarth and D. Samuel Schwarzkopf, posted 05 Apr 2024, validated 05 Apr 2024
In agreement with the two reviewers, we recommend a revision of the report for greater clarity (see their suggestions).
Reviewed by anonymous reviewer 1, 04 Apr 2024
Review Registered Report Stage 1
“Loneliness in the Brain: Distinguishing Between Hypersensitivity and Hyperalertness” (Bathelt et al.)
1A. The scientific validity of the research question(s).
The authors apply a new roving oddball paradigm in combination with ERP analysis to distinguish between hypersensitivity and hyperalertness to social stimuli in lonely individuals. This research question is scientifically justifiable and derived from existing theories. Furthermore, it is defined with sufficient precision as to be answerable through quantitative research. It also falls within established ethical norms.
1B. The logic, rationale, and plausibility of the proposed hypotheses, as applicable.
The authors state clear, directional hypotheses, which are based on empirical findings from a first study in individuals with “normal loneliness”. I was wondering whether the authors expect individual differences in the extent to which lonely individuals display hypersensitivity and/or hyperalertness? Perhaps the authors could clarify this issue.
1C. The soundness and feasibility of the methodology and analysis pipeline (including statistical power analysis or alternative sampling plans where applicable).
Overall, I found the methodology and analysis pipeline sound and feasible. The authors have provided a reasonable justification for the chosen sample size. Although the authors explain why they plan to include individuals with mood disorders, I remain skeptical as to whether this approach could confound potential findings. The authors may also consider including the Brief Symptom Inventory to check for effects of comorbid mental health issues, as well as questionnaires on current stress levels (e.g., the Perceived Stress Scale; Cohen et al., 1983).
1D. Whether the clarity and degree of methodological detail is sufficient to closely replicate the proposed study procedures and analysis pipeline and to prevent undisclosed flexibility in the procedures and analyses.
In general, the methodological details are described sufficiently. However, I had some difficulty understanding the averaging of ERP trials (2.4.2). Why were the ERPs averaged over the 5th repetition (and not the 6th)? How many trials were available for averaging? Finally, I wonder whether the authors might consider conducting ERP analyses that use all of the available temporal and spatial information (e.g., TANOVAs or analyses of global field power, GFP; for details, see, e.g., Murray et al., 2008, Brain Topography; Cacioppo et al., 2015, Journal of Neuroscience Methods; Schiller et al., 2023, Brain Topography) rather than disregarding specific spatial and temporal information a priori. That said, the approach proposed by the authors seems valid and controls for multiple testing (but why was alpha set to 0.02?).
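As a point of reference for the GFP suggestion above, the following is a minimal, illustrative sketch of how global field power is conventionally computed: the spatial standard deviation across electrodes at each time point of an average-referenced ERP (Lehmann & Skrandies, 1980). The channel count, sampling rate, and variable names below are placeholder assumptions for the example and are not taken from the authors' pipeline.

import numpy as np

def global_field_power(erp):
    # Global field power (Lehmann & Skrandies, 1980): the spatial standard
    # deviation across electrodes at each time point, computed on
    # average-referenced data.
    # erp: array of shape (n_channels, n_times); returns shape (n_times,).
    return erp.std(axis=0, ddof=0)

# Illustrative use with simulated data: 64 channels, a 1-s epoch at 500 Hz
# (placeholder values, not the study's montage or sampling rate).
rng = np.random.default_rng(0)
erp = rng.normal(size=(64, 500))
erp -= erp.mean(axis=0)   # re-reference to the average reference
gfp = global_field_power(erp)
print(gfp.shape)          # -> (500,)

Because GFP reduces the full electrode array to a single reference-free time course, statistics on it (or on the topographic dissimilarity used in TANOVAs) avoid pre-selecting electrode clusters, which is the point of the suggestion above.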
1E. Whether the authors have considered sufficient outcome-neutral conditions (e.g. absence of floor or ceiling effects; positive controls; other quality checks) for ensuring that the obtained results are able to test the stated hypotheses or answer the stated research question(s).
Not applicable.
Reviewed by Marta Andreatta, 03 Apr 2024
Dear Dr. Eisenbarth,
thank you for asking me to read and review this interesting study. I very much enjoyed reading it, as the study is elegantly planned and the article is clearly written.
I have only a few comments, which I would like to put to the authors.
1. In the abstract as well as in the hypotheses, the authors expect a greater electrocortical signal to angry vs. happy faces in those participants who report high levels of loneliness. Based on the literature (and as they correctly state), this pattern of response is also to be expected in those with low loneliness. I would therefore slightly reframe the hypotheses to mention the group differences.
2. Were the participants for the screening recruited from the general population or were they psychology students?
3. If I recall correctly, the DSM-5 no longer includes mood disorders as a diagnostic category.
4. I appreciated the clear list of inclusion and exclusion criteria. However, these are partially repeated in the description of the participants; I would avoid this repetition.
5. The criteria the two researchers will use to assess data quality are not mentioned, and I was wondering what exactly they consider good or bad data quality.
6. I could not find a specific description of the blocks in the report; this should be indicated precisely.
7. Participants were or will be asked to press the space bar when the red fixation cross appears superimposed on the face. Is there a time restriction for this response? If so, how long do they have? If not, how will the authors treat trials with very long reaction times?
8. The faces are presented for 0.2 s, but the epochs extend to 1 s after stimulus onset. How can the authors be sure that effects observed after face offset are related to the face?
9. The task entails 1500 trials. The report states “roughly 50 trials” per condition. I find the word “roughly” somewhat inappropriate for the number of trials, as this should be precise. Moreover, I count 12 conditions (female vs. male faces with either a happy or angry expression, for young, middle-aged, and old faces). I may be counting wrongly, but how exactly did the authors arrive at 50?
10. In the analyses of the electrocortical signal, the authors will not consider laterality as a factor, even though the cluster for the early window includes it. Why?
11. What is the rationale for considering only the first and the fifth trials, rather than all five?
12. What is the rationale for setting the alpha level to 0.02 rather than 0.05?
13. The exploratory analyses are not very clear. The report reads: “[…], we will conduct additional exploratory analyses that control for social anxiety and depression. If the conclusions change when these variables are included as predictors, […]”. What are these “additional exploratory analyses”? Regressions? If so, which type of regression? And if a regression is calculated, why do the authors need to calculate a mediation analysis?