DOI or URL of the report: https://osf.io/3z4bf/?view_only=ca95cb2546604b6ab7da562fbee68d39
Version of the report: Stage 1 Review 1: clean document: https://osf.io/sjvf3?view_only=ca95cb2546604b6ab7da562fbee68d39
The reviewers and I have now evaluated the revised manuscript. One reviewer and I found the revisions satisfactory. However, the second reviewer still requests a few changes. I think it is important to address them before in-principle acceptance of the Stage 1 manuscript, because the authors will need to incorporate these suggestions into their meta-analysis. I look forward to the revised manuscript, which I expect will not take the authors long to prepare.
I have read the response-to-reviews letter and the marked-up revised manuscript and can confirm that the authors have improved an already strong RR even further. I have no further suggestions and can recommend that the RR be accepted by PCI.
Signed,
Sebastian Ocklenburg
I would like to thank the authors for carefully integrating my comments and suggestions. I only have a few concerns remaining:
Point m (as per author reply): Thank you for clarifying the coding procedure. However, I would like to reiterate my point that the "score" should be treated as separate from the measure. A single measure may provide multiple scores (e.g., subscales for questionnaires, or both accuracy and reaction time for tasks) that capture different socio-cognitive constructs, or capture the same constructs in different ways. This might require different decisions with regard to construct classification or analysis (e.g., reverse coding) that would be important to note during data extraction.
Point r: I would recommend that the authors either exclude findings that are not standard correlations or conduct sensitivity analyses, as these effects will likely complicate the meta-analytic findings and increase heterogeneity. For example, effects from multivariate techniques are not comparable to standard correlations because their values integrate the effects of many other variables; this is quite different from a standard correlation and is highly dependent on the model. Partial correlations raise a similar concern, but could be integrated if clear patterns are identified (e.g., age and sex consistently being partialled out), again supported by sensitivity analyses.
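To make the suggestion concrete, here is a minimal sketch of the kind of sensitivity analysis I have in mind (in Python, with hypothetical effect sizes and a hypothetical flag marking partial/multivariate effects; not the authors' planned pipeline): pool Fisher-z transformed correlations with a DerSimonian-Laird random-effects model, once with all effects and once restricted to standard correlations.

```python
# Minimal sketch of a sensitivity analysis: fit a random-effects model
# (DerSimonian-Laird) on all effects, then refit after excluding effects
# that are not standard zero-order correlations.
# The effect list and the is_partial_or_multivariate flag are hypothetical.
import numpy as np

def random_effects_pool(r, n):
    """Pool Pearson correlations via Fisher-z and a DerSimonian-Laird model."""
    r, n = np.asarray(r, float), np.asarray(n, float)
    y = np.arctanh(r)              # Fisher-z transform
    v = 1.0 / (n - 3.0)            # approximate sampling variance of z
    w = 1.0 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)          # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)     # between-study variance
    w_star = 1.0 / (v + tau2)
    z_pooled = np.sum(w_star * y) / np.sum(w_star)
    return np.tanh(z_pooled), tau2              # back-transform to r

# Hypothetical extracted effects: (r, n, is_partial_or_multivariate)
effects = [(0.30, 120, False), (0.22, 85, False), (0.41, 60, True), (0.15, 200, True)]

r_all, tau2_all = random_effects_pool([e[0] for e in effects], [e[1] for e in effects])
standard = [e for e in effects if not e[2]]
r_std, tau2_std = random_effects_pool([e[0] for e in standard], [e[1] for e in standard])
print(f"All effects:     pooled r = {r_all:.3f}, tau^2 = {tau2_all:.3f}")
print(f"Standard r only: pooled r = {r_std:.3f}, tau^2 = {tau2_std:.3f}")
```

Comparing the pooled estimate and tau^2 between the two fits would indicate how strongly the non-standard effects influence the results and the heterogeneity.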
Point t: Please add "any" to specify "failing to report relevant details on any of the defined moderators...".
Addition: The addition of a measure of inter-rater reliability is welcome, but I would like to caution the authors about the use of Cohen's Kappa, which can produce low values despite high raw agreement in systematic reviews (the Kappa paradox, https://doi.org/10.1016/j.jhsa.2024.01.006) owing to the highly imbalanced prevalence of include/exclude ratings. I would recommend Gwet's AC1 statistic or the Brennan-Prediger coefficient, as these performed well in our previous scoping review (https://doi.org/10.1038/s41537-022-00219-x, Supplementary Table 1).
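To make the comparison concrete, a minimal sketch (in Python, with hypothetical screening counts; not the authors' pipeline) of how the three statistics behave on an imbalanced include/exclude table:

```python
# Minimal sketch: agreement statistics for binary include/exclude screening
# decisions, illustrating why Cohen's Kappa can be low despite high raw
# agreement when one category (exclude) dominates. Counts are hypothetical.
import numpy as np

def agreement_stats(table):
    """table: 2x2 array of counts, rows = rater 1, columns = rater 2."""
    t = np.asarray(table, float)
    n = t.sum()
    q = t.shape[0]                       # number of categories (2 here)
    p_o = np.trace(t) / n                # observed agreement
    p1 = t.sum(axis=1) / n               # rater 1 marginal proportions
    p2 = t.sum(axis=0) / n               # rater 2 marginal proportions
    pi = (p1 + p2) / 2                   # average category prevalence
    kappa = (p_o - np.sum(p1 * p2)) / (1 - np.sum(p1 * p2))   # Cohen's Kappa
    bp = (p_o - 1 / q) / (1 - 1 / q)                          # Brennan-Prediger
    pe_ac1 = np.sum(pi * (1 - pi)) / (q - 1)
    ac1 = (p_o - pe_ac1) / (1 - pe_ac1)                       # Gwet's AC1
    return p_o, kappa, bp, ac1

# Hypothetical screening counts: [[both include, 1 includes / 2 excludes],
#                                 [1 excludes / 2 includes, both exclude]]
p_o, kappa, bp, ac1 = agreement_stats([[2, 4], [4, 90]])
print(f"Agreement {p_o:.2f}, Kappa {kappa:.2f}, Brennan-Prediger {bp:.2f}, AC1 {ac1:.2f}")
# Agreement 0.92, Kappa 0.29, Brennan-Prediger 0.84, AC1 0.91
```

With 92% raw agreement but very few inclusions, Kappa drops to roughly 0.29, whereas the Brennan-Prediger coefficient and Gwet's AC1 remain high, which is why I recommend the latter for screening agreement.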
Signed,
Katie Lavigne, Ph.D.
Assistant Professor, Department of Psychiatry, McGill University, Montreal, Quebec, Canada
Researcher, Douglas Research Centre, Montreal, Quebec, Canada
Lead, Douglas Open Science Program
DOI or URL of the report: https://osf.io/e4kdw?view_only=ca95cb2546604b6ab7da562fbee68d39
Version of the report: 1
Review of Stage 1 RR “Meta-analysis: Social cognition and structural connectivity”
Predefined criteria:
1A. The scientific validity of the research question(s).
The three research aims stated in section 1.4 all have high scientific validity, and the introduction makes clear why investigating them is worthwhile.
1B. The logic, rationale, and plausibility of the proposed hypotheses, as applicable.
While section 1.4 is named "Research aims and hypotheses", it does not actually contain any hypotheses. I would encourage the authors to provide clear, directional, and testable hypotheses derived from the literature and the research aims. This is, however, not strictly required by the guidelines. If this is a fully data-driven project, I would suggest including a sentence stating so and giving the rationale for why no hypotheses are provided.
1C. The soundness and feasibility of the methodology and analysis pipeline (including statistical power analysis or alternative sampling plans where applicable)
This is generally well written and follows the standards in the field (PRISMA, etc.).
Just a few suggestions:
Screening: I would include a statistical measure of inter-rater agreement, such as Cohen's Kappa.
One thing the authors may wish to consider, though it is not a must:
It is becoming increasingly standard to include formal risk-of-bias assessments in meta-analyses, e.g. following the Newcastle-Ottawa Scale (NOS):
https://www.ohri.ca/programs/clinical_epidemiology/oxford.asp
1D. Whether the clarity and degree of methodological detail is sufficient to closely replicate the proposed study procedures and analysis pipeline and to prevent undisclosed flexibility in the procedures and analyses
Yes, I think so.
1E. Whether the authors have considered sufficient outcome-neutral conditions (e.g. absence of floor or ceiling effects; positive controls; other quality checks) for ensuring that the obtained results are able to test the stated hypotheses or answer the stated research question(s).
I think this is not likely to be an issue in this project.
Evaluation:
Altogether, this is a very well-written Stage 1 RR that follows the standards for meta-analyses very well. I think it deserves IPA.
Signed,
Sebastian Ocklenburg