DOI or URL of the report: https://doi.org/10.31234/osf.io/d8wes
Version of the report: 3
Dear Dr. Zoltan Dienes,
We have corrected the typo in the revised manuscript ("Registered_Report_Stage-2_Version_4.pdf"). We have also uploaded a PDF document indicating modifications in the manuscript with Tracked changes.
Thank you and the reviewers for the valuable comments and suggestions on the manuscript. We are happy that the quality of the manuscript improved significantly after the revisions based on the review suggestions.
With kind regards,
Kishore Kumar Jagini,
(On behalf of the authors).
Thank you for your thorough revision of the manuscript. The reviewer is very happy with the way you addressed their points, and there is just one typo to correct before we proceed to acceptance.
The authors have done an excellent job at addressing my concerns about the previous version. I have no further comments. Careful, though, with a small typo on page 29, line 11 (misplaced reference to Schneider & Shiffrin).
DOI or URL of the report: https://doi.org/10.31234/osf.io/d8wes
Version of the report: 2
The reviewers have thoroughly reviewed your Stage 2 manuscript and make some very useful suggestions for your consideration. There is a question of whether you have evidence that the auditory stimuli were heard at all. One reviewer recommends Bayes factors; I prefer them myself to significance tests, but they do not substitute for the pre-registered analyses of course, so you may include them in an exploratory section (you may find this useful: https://psyarxiv.com/yqaj4).
Make sure you report the actual t values in the text for the main t-tests you conduct.
In the Discussion, the issue of subjective vs objective tests could be considered in a more structured way. At the moment you first bring it up and side with objective tests; then, in a subsequent paragraph starting "The asymmetry in participants’ objective measures of awareness of statistical regularities between Experiment 1 and Experiment 2 is unclear", you deal with one issue, then revert somewhat unrelatedly to the general issue of subjective vs objective measures, this time siding with subjective measures. It would be better to discuss the weaknesses and strengths of subjective vs objective measures in one place, and then, in a way consistent with that discussion, interpret your results.
In the present version of the article, the authors report the empirical results of the two preregistered studies approved in Stage 1. The method and results sections follow the preregistered protocol and all the new analyses, not originally included in the protocol, are clearly identified as such. The general discussion is consistent with the analyses reported in the results section. Therefore, I think that the manuscript should be accepted for publication. I only have minimal comments that the authors might want to take into account for the final version.
In the general discussion the authors mention that one potential explanation of the results is that participants may have paid little attention to the task-irrelevant stimuli. Indeed, this seems like the most plausible explanation and an interesting idea for future research is to test this hypothesis again in an experimental task that ensures that sounds receive some attention. But I even wonder if participants heard the tones at all! The preregistered protocol mentions that participants would perform a test to ensure that they could discriminate the task-irrelevant stimuli, but I could not find any information about that manipulation check in the manuscript. Did I miss it? Or is it simply not reported? I think the final version of the ms needs to include some information about the results of this discrimination test.
On page 27, the authors conclude that “Moreover, these results suggest that, at least under the conditions of Experiments 1 and 2, the participants are unable to learn associations between the location of the visual distractor and the auditory stimulus.” In truth, it is arguable whether this is the case, because the awareness test of Experiment 1 suggests that participants did learn something, even if this was not translated into faster responses in valid trials. Of course, another possibility (and I find this one tempting) is that participants didn’t learn anything at all about the sounds and that the results of the awareness test in Experiment 1 are entirely driven by the test question itself. That is, if the awareness test question asks participants to rank positions when the left tone was presented, they can speculate (ad hoc) that the most likely locations might have been on the left. And the opposite for the right tone. In other words, performance in the awareness test might be entirely driven by inferences made during the test itself and not by anything learned during the visual search task. Note that this kind of inference is easier to make in Experiment 1, where participants might assume that sounds on the left side were associated with distractors on the left side. It is less clear how participants could make a similar assumption about the high/low pitch sounds in Experiment 2.
I wonder if anything could be done to test for this possibility in Experiment 1. If participants’ performance is driven by inferences made at test, then it follows that participants will be more likely to mention locations on the right-hand side for the right tone and on the left-hand side for the left tone, but within each hemifield there is very little reason to expect participants to perform above chance. In other words, if the analysis is restricted to responses on the same side as the tone, participants might not show above-chance performance. If they do, this would show quite convincingly that their responses were driven by something they learned during the first stage of the experiment and not by inferences made at test.
Minor comments
p. 2, line 7, remove comma after “known”
p. 3, lines 24-25, rewrite “(for review see, (Frost et al., 2019)” as “(for a review, see Frost et al., 2019)”. That is, remove the middle parenthesis and move the comma.
p. 4 “However, in recent studies utilizing similar probabilistic tasks, testing the awareness of statistical regularities with more sensitive measures indicated the evidence of explicit knowledge of awareness (Giménez-Fernández et al., 2020; Vadillo et al., 2020). These studies cast doubts on the implicit nature of learning distractor statistical regularities in additional singleton tasks.” → I understand that no changes should be introduced at this stage, but the authors might want to know that we have just published similar results with the additional singleton task: https://link.springer.com/article/10.3758/s13414-022-02608-x This might also be relevant in the general discussion, on page 28, when the ms states that “The relative contributions of whether the participants’ are “aware” or “unaware” of regularities on distractor suppression is not clear from the previous literature (Theeuwes et al., 2022).”
p. 9. Sample size planning for awareness is conducted on the d scale, but the reference value from Vadillo et al. is on the h scale. I am not sure ds and hs can be considered equivalent. Wouldn’t it make more sense to run the power analysis on the h scale?
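For what it’s worth, here is a minimal sketch of how the sample-size calculation could be rerun directly on the h scale, using only the Python standard library. The 0.65 vs. 0.50 proportions, alpha, and power below are illustrative placeholders I chose for the example, not the manuscript’s values:

```python
# Illustrative sketch: sample-size planning on Cohen's h scale.
# Proportions, alpha, and power are placeholders, not the ms values.
from math import asin, ceil, sqrt
from statistics import NormalDist

def cohens_h(p1: float, p2: float) -> float:
    """Cohen's h: difference between arcsine-transformed proportions."""
    return 2 * asin(sqrt(p1)) - 2 * asin(sqrt(p2))

def n_for_h(h: float, alpha: float = 0.05, power: float = 0.90) -> int:
    """Per-group n for a two-sided two-proportion test on the h scale."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    return ceil(((z_alpha + z_power) / abs(h)) ** 2)

h = cohens_h(0.65, 0.50)  # e.g., 65% correct vs. 50% chance
print(f"h = {h:.3f}, n = {n_for_h(h)}")
```

statsmodels’ NormalIndPower.solve_power accepts an effect size on the h scale and would give essentially the same numbers, if an off-the-shelf tool is preferred.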
Also related to sample size, the authors didn’t commit to a specific N. They simply preregistered that N would be larger than 121. The final N was 124 in both experiments. I assume that no analysis took place before those 124 datasets were collected. Could this be stated explicitly in the ms?
In the methods sections, some verbs are now presented in the past tense, but others remain in the present or future. Sometimes, the text sounds a bit weird, like for instance: “The target (shape singleton) was [PAST] present in all the trials, and the target can be [PRESENT] either circle or diamond with equal probability. A blank display with intertrial interval (ITI) will be [FUTURE] randomly…”. I’d suggest changing all the verbs in the method sections to the past tense.
p. 17, line 23 “HpValD: 1022.227ms ± 137.409…” and elsewhere, please explain what ± refers to. In the next paragraph it seems to stand for SD, but in the figures the error bars stand for SEM. Perhaps it would be clearer to use the same dispersion statistic throughout the text.
p. 17 and elsewhere, would it be possible to report exact p-values instead of just “p > 0.02” or “p < 0.02”?
Figure 3, left panel: it is not easy to appreciate differences across conditions. Would it make sense to rescale the figure in the y-axis? Perhaps including just values from 700 to 1200 or so? Consider the same change for figure 6.
Figure 4. Perhaps the figure caption could remind the reader of the specific text in the awareness question? That would make the figure much easier to understand without going back to the method section. Also, change “definitly” to “definitely” in the x-axis. Incidentally, I found a bit weird that there was a bimodal distribution in participants responses, surprisingly similar for both experiments. Is there any potential explanation for this? Just curiosity…
p. 27 “Indeed, prior research suggested that allocating attention to sensory events is required for statistical learning” Incidentally, a recent study from our lab provides converging evidence for this in a related visual statistical learning paradigm: https://link.springer.com/article/10.3758/s13423-020-01722-x
Signed,
Miguel Vadillo
The present study investigated whether spatial and non-spatial statistical regularities of a task-irrelevant auditory stimulus could drive suppression of a salient visual distractor. The results indicate no reliable effect of task-irrelevant cross-modal stimulus regularities on distractor suppression, irrespective of participants’ awareness of the relationship between distractor location and the predictive auditory stimulus. I have several concerns.
Would the ability to discriminate the spatial location or the two sound frequencies influence the main experimental task? If participants cannot reliably determine the spatial location of the sound or tell the sound frequencies apart, they surely cannot learn the cross-modal statistical regularities. It seems only those participants who showed a minimum of 75% accuracy were selected for participation in the experiment. But there is still a chance that they could not judge the sound location or frequencies correctly. This would bias all the results reported in this study.
The auditory stimuli were always presented simultaneously with the visual search displays. Would the SOA between the two modalities influence how the stimuli were encoded? Could the SOA be a factor influencing cross-modal statistical learning?
Given the authors mainly reported non-significant differences between valid and invalid distractor locations, the Bayes factor should also be reported to confirm the null effect.
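In case it helps, a default JZS Bayes factor for a one-sample or paired t-test (Rouder et al., 2009; Cauchy prior with scale r = 0.707) can be computed directly from a reported t and n. The sketch below uses simple numerical integration and only the Python standard library; the t = 1.5 and n = 124 fed to it at the end are placeholders, not the study’s actual statistics:

```python
# Illustrative sketch: JZS Bayes factor (BF10) for a one-sample/paired
# t-test, following Rouder et al. (2009). The values at the bottom are
# placeholders, not the study's statistics.
from math import exp, pi, sqrt

def jzs_bf10(t: float, n: int, r: float = sqrt(2) / 2,
             steps: int = 20000) -> float:
    """BF10 with a Cauchy(0, r) prior on effect size under H1."""
    nu = n - 1
    # Marginal likelihood under H0 (common constants cancel in the ratio).
    null = (1 + t * t / nu) ** (-(nu + 1) / 2)

    def integrand(g: float) -> float:
        a = 1 + n * g * r * r
        # Inverse-gamma(1/2, 1/2) mixing density on g.
        prior_g = (2 * pi) ** -0.5 * g ** -1.5 * exp(-1 / (2 * g))
        return a ** -0.5 * (1 + t * t / (a * nu)) ** (-(nu + 1) / 2) * prior_g

    # Integrate g over (0, inf) via the substitution g = u / (1 - u),
    # midpoint rule on u in (0, 1).
    total = 0.0
    for i in range(steps):
        u = (i + 0.5) / steps
        g = u / (1 - u)
        total += integrand(g) / (1 - u) ** 2
    return (total / steps) / null

bf10 = jzs_bf10(t=1.5, n=124)  # placeholder values
print(f"BF10 = {bf10:.3f}, BF01 = {1 / bf10:.3f}")
```

JASP or the BayesFactor R package would of course be the more standard route; the point is only that the quantity is cheap to obtain from the summary statistics already in the paper.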
I suggest that the authors also show the RT results for the different distractor conditions across experimental time (or epochs) in the main Results section (especially as a figure); these are currently mentioned only in the general discussion.
Please correct the citation format and typo in the text.
DOI or URL of the report: https://psyarxiv.com/d8wes
Version of the report: 1
Dear Dr. Zoltan Dienes,
Thank you for allowing us to submit the second version. We have updated the manuscript according to your suggestions. We have uploaded the tracked changes file as well.
Sincerely,
(On behalf of authors)
Kishore Kumar Jagini
Indian Institute of Technology Gandhinagar, India.
The guidelines for how the introduction can change from Stage 1 to Stage 2 say: " Aside from changes in tense (e.g. future tense to past tense), correction of typographic and grammatical errors, and correction of clear factual errors, the introduction, rationale and hypotheses of the Stage 2 submission must remain identical to those in the approved Stage 1 manuscript. "
The changes to your introduction as shown by tracked changes seem extensive. I appreciate that this may be an effort to make the writing smoother, now that you read it again, but a principle of RRs is that the way you set up the study at Stage 1 should remain almost exactly the same for Stage 2, bar the minor changes just mentioned, which should be few and far between. Could you retain as much of the Stage 1 text as you can, strictly unless there was a clear factual error or typo, and then resubmit?