DOI or URL of the report: https://osf.io/9ev3c
Version of the report: 3
I have now obtained three reviews of your revised manuscript and we are close to being able to award Stage 1 in-principle acceptance (IPA). There is just one remaining comment to address concerning the reporting of excluded data. Once this is resolved, IPA should be forthcoming without requiring further in-depth review.
The authors addressed all my comments with sufficient depth and improved their manuscript accordingly. Therefore, I have no further comments.
Best wishes,
Pia
I am satisfied with the authors' responses to all reviewers' points, and with their revisions.
I wish to thank the authors for their detailed replies and adjustments to the manuscript.
I have just one remaining minor suggestion regarding the reporting of the MEG preprocessing. I appreciate the authors' clarification that the removal and interpolation of bad channels is implemented within the Maxfilter algorithm. By analogy with my suggestion of reporting the number of removed ICA components for each group, I would also suggest reporting the number of removed and interpolated channels for each group, again to ensure that the number of removed channels does not differ substantially across groups.
I have no further questions or suggestions.
DOI or URL of the report: https://osf.io/4agp7
Version of the report: 2
I have now obtained three very helpful and constructive reviews of your Stage 1 manuscript. As you will see, all of the reviews are broadly positive while also noting a range of concerns that will need to be addressed to meet the Stage 1 criteria. Headline issues to address include consideration of additional literature in the introduction, justification of the hypotheses, justification of the target effect size in the power analysis, clarification and expansion of key methodological details (including exclusion criteria), and the consideration of additional analyses.
Overall I believe your submission is promising, and in-principle acceptance should be within reach following a comprehensive revision and response.
(I should note that I do have some prior familiarity with this line of research by the authors, and have previously had some discussion about the interpretation of findings from an earlier study. However, I do not believe this constitutes a conflict of interest.)
This manuscript details the research background, rationale, questions, hypotheses and methods for a study that has completed data collection but not yet commenced data analysis. It focuses on the timely question of whether differences in how individuals form sensory predictions are an important determinant of whether they develop tinnitus following hearing loss.
Overall, I think this is a very strong submission in a number of regards, and I have no significant concerns.
Firstly, the research question is very clearly laid out, and it follows logically from the authors' previous work. This study aims to replicate a previous finding, to ensure it holds up even after controls are matched for hearing profile (as well as age and sex), and to address the further question of whether degree of hearing loss alone can account for alterations in auditory predictive tendencies. The hypotheses are well organised and well articulated, and for each one the implications are clearly summarised for both possible outcomes: that the null hypothesis is supported or that it is refuted.
Secondly, the analysis is well-specified, as it is based on existing methods already shown to be well-suited to data from this experimental paradigm. The power calculations seem well thought through and the numbers justified.
My specific comments are very minor, as laid out below, and acceptance for publication as a Registered Report should not be contingent on any of them:
- Line 67: "a highly predictive trigger" might be better phrased as "the main risk factor" for a few reasons (avoiding multiple uses of 'predictive' throughout the manuscript, and acknowledging a possible difference between risk factors, which are likely long-term, and triggers, which may be short-term and transient)
- Lines 114-117: The previous finding of stronger anticipatory predictions in people with (compared to without) tinnitus is interpreted as indicating that these predictive tendencies are a risk factor for tinnitus. This is a reasonable preferred explanation, but other reasonable possibilities include tinnitus being the cause of the altered predictive tendencies, or a third factor being responsible for both the predictive tendencies and the development of tinnitus.
- Line 274: 'within the range of hearing' might be better phrased as 'within a region of normal audiometric thresholds'
I also have one larger point, though it is more of a suggestion for the authors to consider rather than anything that needs to be incorporated into this manuscript. The use of the time-generalised classifier to reveal anticipated stimuli is clearly very strong. However, the majority of studies examining stimulus-related predictions in tinnitus use some version of the mismatch negativity (MMN) paradigm. Therefore, to facilitate comparison of the results of this study with other studies of predictions in tinnitus, I wonder whether the authors might also perform some kind of equivalent to an MMN analysis of these data: i.e. a straightforward analysis based on the evoked field waveform itself. Whilst there are no straightforward 'standards' and 'deviants' here, it should still be possible to compare physically identical stimuli that differ according to how unexpected they were based on the auditory sequence properties of that block (and whether or not they are a repetition of the preceding stimulus, as a more trivial factor to account for).
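For illustration only, such a contrast could be computed along the following lines. This is a rough sketch in MNE-Python rather than a prescription for the authors' pipeline; the epochs object and its "expectedness" metadata column are hypothetical placeholders for however the expectedness of each tone is ultimately coded.

```python
# Sketch of an MMN-like contrast between physically identical tones that differ
# in how expected they were given the sequence statistics of the block.
# Assumes an mne.Epochs object with a hypothetical metadata column "expectedness".
import mne

def expectancy_contrast(epochs: mne.Epochs) -> mne.Evoked:
    """Unexpected-minus-expected difference waveform (deviant-minus-standard analogue)."""
    ev_unexpected = epochs["expectedness == 'unexpected'"].average()
    ev_expected = epochs["expectedness == 'expected'"].average()
    # A +1/-1 weighting yields a simple difference wave, analogous to the
    # deviant-minus-standard contrast used in conventional MMN analyses.
    return mne.combine_evoked([ev_unexpected, ev_expected], weights=[1, -1])
```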
This Stage 1 registered report details a proposed research protocol to investigate anticipatory auditory predictions in tinnitus patients compared to age-, gender- and hearing level-matched control subjects. The authors plan to analyze already available MEG data from 80 subjects in total, using both an experimental design and an analytical pipeline they have already utilized in previous studies. The research questions are well defined, address important issues in the field of tinnitus research, and correspond clearly to the proposed methodology. In addition, I would also like to commend the authors for the truly excellent overview provided in the introductory section.
The current protocol is based on earlier findings by the same group of authors, which, to the best of my understanding, have appeared only as a preprint and not in any previously published article (I am referring to Partyka et al., 2019). These previous results are clearly distinguished from the currently proposed work throughout the protocol. I agree with the authors that matching both experimental groups for hearing loss will ensure that the outcomes relate more closely to their core underlying hypotheses.
Throughout the protocol, I have identified some issues where more information should be provided by the authors to ensure both interpretability and reproducibility. I have outlined these items below in order of appearance:
L200 and following, Sampling plan: The sample size calculation yielded a minimum of 80 participants and was based on an expected effect size of 0.75. I agree with the authors that a dataset including 40 tinnitus patients and 40 control subjects is larger than average in the field. However, it is not clear to me whether the expected effect size is solely a theoretical estimate or whether it is based on their earlier findings (Partyka et al., 2019). Would the authors be able to provide the effect size obtained in their earlier work, so that the reader can more readily judge whether this effect size is justified? Or would this not be feasible due to the differences in the analytical methods used in the two studies?
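For reference, an a-priori calculation of this kind might look as follows. This is only a sketch assuming a two-sided independent-samples t-test with alpha = .05 and 90% power; these parameters are my own assumptions and may well differ from those the authors actually used.

```python
# Illustrative a-priori power analysis; the test family, alpha and power level
# are assumed here, not taken from the protocol.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.75, alpha=0.05,
                                          power=0.90, alternative="two-sided")
print(f"Required sample size per group: {n_per_group:.1f}")
# The resulting total N depends on these assumed parameters as well as on the
# expected effect size of d = 0.75.
```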
Moreover, the authors clearly state that the required sample size is at least 80. This means that data from each of the currently included 80 subjects will need to be utilized in order to answer the research questions. However, it might be the case that some of these data are of insufficient quality to be included in the final data analyses. I understand that the authors have not yet observed the data (and agree with their corresponding assessment of their registered report as Level 3). Nevertheless, is there any way to guarantee the usability of the entire dataset? Are there any quality control checks, perhaps already performed by independent researchers, that these data have passed before subjects were included in the dataset?
Line 225 and following, Participants: How did the authors perform the matching procedure based on hearing level? Specifically, were subjects matched based on their ‘hearing status’ (i.e. the different categories explained in L247-250), or based on pure tone averages at certain frequencies? If subjects are matched based on ‘hearing status’, there might still be important differences in hearing thresholds between the groups (for example, thresholds for participants with ‘high-frequency hearing loss’ might still differ substantially). Would the authors expect such potential differences to influence the results? As excluding potential confounding effects of hearing loss is crucial, something the authors also stress at different points throughout the protocol, I would strongly recommend at least including the audiometric data in the final report. This would allow readers to judge whether there are any systematic quantitative differences in hearing levels between the groups. If such differences exist, I would urge the authors to examine whether they had any effect on the final results.
Moreover, tinnitus patients often concurrently experience psychological complaints, such as elevated levels of anxiety and/or depression. For some studies examining neural activity in this patient population, these factors are also considered to have potential confounding effects. Do the authors expect these potential concurrent complaints to affect their results? If so, are the authors planning on taking any precautions to exclude potential confounding effects due to elevated psychological distress?
L256 and following, Stimuli and experimental procedure: Please add the sound intensity of the auditory stimuli that were provided to the participants.
L306 and following, MEG data acquisition and preprocessing: The authors do not mention the removal of bad trials or bad sensors (and subsequent interpolation) from the data. Are these steps not a planned element of the processing pipeline? If they are not planned, I would like the authors to point out why they chose not to remove bad sensors and/or trials; if they are planned, please add the necessary information about how this removal will be performed.
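For reference, if some of these steps are performed outside of Maxfilter, one illustrative way of implementing and documenting them in MNE-Python is sketched below. File names, channel names and rejection thresholds are placeholders of mine, not values from the authors' pipeline.

```python
# Illustrative bad-sensor interpolation and bad-trial rejection; all specific
# values below are placeholders, not taken from the authors' protocol.
import mne

raw = mne.io.read_raw_fif("subject01_meg.fif", preload=True)  # hypothetical file name
raw.info["bads"] = ["MEG1143", "MEG2212"]        # sensors flagged as bad (example)
n_bad_channels = len(raw.info["bads"])
raw.interpolate_bads(reset_bads=True)            # interpolate the flagged sensors

events = mne.find_events(raw)
epochs = mne.Epochs(raw, events, tmin=-0.2, tmax=0.5,
                    reject=dict(mag=4e-12, grad=4000e-13), preload=True)
n_bad_trials = len(events) - len(epochs)         # trials dropped by amplitude criteria
print(f"Interpolated channels: {n_bad_channels}, rejected trials: {n_bad_trials}")
```

Reporting counts such as these for each group would make it easy to verify that they do not differ substantially between groups.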
The authors describe the removal of ICA components containing unwanted artefacts. I would suggest that they report the number of components removed for each experimental group in their final report, in order to ensure that the number of removed components does not substantially differ across groups.
Would it be feasible to blind the involved researchers to the group to which the data belong (tinnitus vs. control) during MEG preprocessing and analysis? Is this planned and, if so, could this be added to the protocol?