Understanding the relationship between alpha oscillations and visual perception

By Chris Chambers, based on reviews by Chris Allen, Luca Ronconi and Alexander Jones
A recommendation of:

An #EEGManyLabs study to test the role of the alpha phase on visual perception (a replication and new evidence)

Submission: posted 03 August 2023
Recommendation: posted 08 December 2023, validated 08 December 2023
Cite this recommendation as:
Chambers, C. (2023) Understanding the relationship between alpha oscillations and visual perception. Peer Community in Registered Reports. https://rr.peercommunityin.org/PCIRegisteredReports/articles/rec?id=540

Recommendation

For nearly a century, rhythmic patterns in electrical brain activity have been of major interest in neuroscience and electrophysiology, but much remains to be discovered about their causal contribution to cognition and behaviour. Low-frequency oscillations in the alpha band (~8-13 Hz) have been suggested to facilitate the organisation and delivery of visual information to higher-level systems, including those involved in perception and decision-making. If so, visual perception should also operate in cycles that are synchronous with – and determined by – the phase of ongoing low-frequency oscillatory activity.
 
In this #EEGManyLabs study, Ruzzoli et al. (2023) propose a large-scale, multi-lab investigation (9 labs; N=315 human participants) of the relationship between the phase of alpha oscillations and visual perception. The authors focus in particular on replicating a formative study by Mathewson et al. (2009), which reported that during high-amplitude alpha fluctuations, stimulus visibility depended on the time the stimulus was presented relative to the phase of the pre-stimulus alpha activity. In addition, the amplitude of visual evoked potentials recorded with EEG was larger when the target was presented at peaks in pre-stimulus alpha. To explain their findings, Mathewson et al. proposed an influential pulsed inhibition hypothesis in which low alpha power boosts both cortical excitability and stimulus processing (and hence perception), while high alpha power makes stimulus processing dependent on the phase of the alpha cycle at which the stimulus is presented.
 
In the first of (up to) two studies, the authors will seek to directly replicate the key finding of Mathewson et al.: that when alpha power is high, the oscillatory phase determines perceptual performance and event-related electrophysiological correlates in a masked visual detection task. Specifically, (a) alpha oscillations are predicted to modulate the probability of perceiving a target stimulus within a single oscillatory cycle, with detection rate associated with separated (and potentially opposite) phase angles, and (b) alpha phase at the onset of the stimulus should drive electrophysiological correlates of stimulus processing (including the amplitude and/or latency of the N1 ERP component).
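To make hypothesis (a) concrete, the sketch below shows the general phase-binning logic such an analysis involves: estimate the alpha phase at stimulus onset for each trial, then compare detection rates across phase bins. This is only an illustration under assumed inputs (a trials × samples EEG array, a boolean detection vector, a known onset sample), not the authors' registered pipeline:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def alpha_phase_at_onset(epochs, fs, onset_idx, band=(8.0, 13.0)):
    """Instantaneous alpha phase at stimulus onset, one value per trial.
    epochs: (n_trials, n_samples) single-channel EEG; fs: sampling rate (Hz)."""
    b, a = butter(4, band, btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, epochs, axis=1)      # alpha-band (8-13 Hz) filter
    phase = np.angle(hilbert(filtered, axis=1))    # phase of the analytic signal
    return phase[:, onset_idx]

def detection_rate_by_phase(phase, detected, n_bins=8):
    """Mean detection rate within equal-width phase bins over [-pi, pi)."""
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    idx = np.clip(np.digitize(phase, edges) - 1, 0, n_bins - 1)
    return np.array([detected[idx == k].mean() for k in range(n_bins)])
```

Note that a real pipeline would estimate phase from pre-stimulus data only (e.g., by filtering a window that ends before onset), since filtering across the stimulus lets evoked activity leak backwards into the phase estimate.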
 
Provided the results of this first study do not conclusively disconfirm these hypotheses, the authors will then conduct a follow-up study in which the temporal predictability of the target onset (in relation to a fixation stimulus) is reduced to test the more severe hypothesis that the observed correlations between alpha phase and perception are linked directly to ongoing oscillations, independent of temporal expectations.
 
The Stage 1 manuscript was evaluated over two rounds of in-depth review. Based on detailed responses to the reviewers' comments, the recommender judged that the manuscript met the Stage 1 criteria and therefore awarded in-principle acceptance (IPA).
 
URL to the preregistered Stage 1 protocol: https://osf.io/scqj8 (under temporary private embargo)
 
Level of bias control achieved: Level 6. No part of the data or evidence that will be used to answer the research question yet exists and no part will be generated until after IPA. 
 
List of eligible PCI RR-friendly journals:
 
 
References
 
1. Ruzzoli, M., Cuello, M. T., Molinaro, N., Benwell, C. S. Y., Berkowitz, D., Brignani, D., Falciati, L., Harris, A. M., Keitel, C., Kopčanová, M., Madan, C. R., Mathewson, K., Mishra, S., Morucci, P., Myers, N., Nannetti, F., Nara, S., Pérez-Navarro, J., Ro, T., Schaworonkow, N., Snyder, J. S., Soto-Faraco, S., Srinivasan, N., Trübutschek, D., Zazio, A., Mushtaq, F., Pavlov, Y. G., & Veniero, D. (2023). In principle acceptance of Version 2 by Peer Community in Registered Reports. https://osf.io/scqj8
 
2. Mathewson, K. E., Gratton, G., Fabiani, M., Beck, D. M., & Ro, T. (2009). To see or not to see: prestimulus α phase predicts visual awareness. Journal of Neuroscience, 29, 2725-2732. https://doi.org/10.1523/JNEUROSCI.3963-08.2009
Conflict of interest:
The recommender in charge of the evaluation of the article and the reviewers declared that they have no conflict of interest (as defined in the code of conduct of PCI) with the authors or with the content of the article.

Evaluation round #2

DOI or URL of the report: https://osf.io/3mtbw/?view_only=dc9d15340eaa468dab2bdd4066d25bf4

Version of the report: 2

Author's Reply, 04 Dec 2023

COMMENT:
The authors say they will not concatenate the data across the labs in order to look at variability across the labs. I’m not sure how concatenating and not concatenating in separate analyses might be mutually exclusive. Wouldn’t the most powerful test be based on the largest available sample and look at differences between labs separately? Such a concatenation might help delineate interpretations in the event of non-significant outcomes and address the concern well voiced in the introduction that if there is an effect, it might be much smaller than previously reported. Of course, such an analysis could be explored in a non-pre-registered way, which would mean no changes to the current manuscript, but I would encourage the authors to include it and integrate it now, possibly as a secondary analysis, as I think such a test has the potential to be informative. 


RESPONSE: 
We thank the reviewer for this comment, which had us reflecting for a while. As outlined in the RR, in line with the agreed #EEGManyLabs procedure (Pavlov et al., 2021), any conclusion on the replication, and on whether the second study should be performed, will be based on a meta-analysis including the results from the single labs. In addition, as described by Lagani et al., BMC Bioinformatics, 2016 (https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-016-1038-1), it seems that merging the data would be detrimental to our study, because it would ignore the systematic variation associated with data collected in different labs, which the meta-analytic approach instead takes into account. It is unclear to us whether the reviewer has a specific procedure in mind that might address these confounding effects (Lagani's paper suggests corrections one might use, but, as the authors demonstrate, after correction the two approaches are equivalent).
Therefore, it is our understanding that the meta-analysis is indeed better suited for small effects.
Furthermore, the Reviewer seems to suggest adding this analysis as an exploration. In this regard, we would like to reply that formally comparing the meta-analytic and data-merging approaches is indeed one of the goals of the #EEGManyLabs project overall, not only within a single replication. Finally, as per RR recommendations, we think it is better to leave explorations out of the registered pipeline, especially in these kinds of projects.
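For readers unfamiliar with the distinction at issue, here is a minimal sketch of the random-effects pooling a meta-analytic approach implies (a DerSimonian-Laird estimator over per-lab effect sizes; the numbers are illustrative, not the registered analysis):

```python
import numpy as np

def random_effects_meta(d, v):
    """DerSimonian-Laird random-effects pooling of per-lab effect sizes.
    d: per-lab Cohen's d estimates; v: their sampling variances."""
    d, v = np.asarray(d), np.asarray(v)
    w = 1.0 / v                                   # fixed-effect weights
    d_fixed = np.sum(w * d) / np.sum(w)
    q = np.sum(w * (d - d_fixed) ** 2)            # Cochran's Q (heterogeneity)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(d) - 1)) / c)       # between-lab variance
    w_star = 1.0 / (v + tau2)                     # random-effects weights
    d_pooled = np.sum(w_star * d) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return d_pooled, se, tau2

# Illustrative only: nine labs, each contributing an effect size and variance.
d_labs = [0.42, 0.15, 0.30, 0.55, 0.08, 0.21, 0.37, 0.26, 0.19]
v_labs = [0.03] * 9
print(random_effects_meta(d_labs, v_labs))
```

The between-lab variance term tau² is precisely what naive concatenation discards: it keeps systematic lab differences out of the pooled estimate, which is the point of the Lagani et al. comparison.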

Decision by Chris Chambers, posted 29 Nov 2023, validated 29 Nov 2023

I now have three re-reviews of your Stage 1 submission, and the good news is that we are on the verge of being able to issue Stage 1 IPA. There is one remaining issue to address in the review of Chris Allen concerning the analysis plans. Once you have submitted a final revision and response, we should be ready to proceed.

Reviewed by Chris Allen, 20 Nov 2023

The authors have addressed all my concerns very well. I have only one suggestion:

The authors say they will not concatenate the data across the labs in order to look at variability across the labs. I’m not sure how concatenating and not concatenating in separate analyses might be mutually exclusive. Wouldn’t the most powerful test be based on the largest available sample and look at differences between labs separately? Such a concatenation might help delineate interpretations in the event of non-significant outcomes and address the concern well voiced in the introduction that if there is an effect, it might be much smaller than previously reported. Of course, such an analysis could be explored in a non-pre-registered way, which would mean no changes to the current manuscript, but I would encourage the authors to include it and integrate it now, possibly as a secondary analysis, as I think such a test has the potential to be informative.

Reviewed by Luca Ronconi, 28 Nov 2023

The Authors have satisfactorily addressed all my concerns. Good luck with this important replication project!

Reviewed by Alexander Jones, 23 Nov 2023

The authors have responded to my queries, which were only minor, and I have no further comments. I am happy for this interesting study to now go ahead. Best of luck!


Evaluation round #1

DOI or URL of the report: https://osf.io/cf7ad?view_only=dc9d15340eaa468dab2bdd4066d25bf4

Version of the report: 1

Author's Reply, 17 Nov 2023

Decision by Chris Chambers, posted 11 Oct 2023, validated 11 Oct 2023

I have now received three very helpful and constructive reviews of your submission. As you will see, the reviewers are overall very enthusiastic about this project, which in my reading (and theirs) already comes close to satisfying the Stage 1 criteria. Within the reviews you will find a range of useful comments and suggestions, including requests for clarification of the rationale and design characteristics, the sampling and analysis plans, and the potential interpretation of findings. I look forward to receiving your revised manuscript. I recognise that you are working at pace and will endeavour to process your revision as quickly as possible.

Reviewed by Chris Allen, 05 Oct 2023

Review of “To see, not to see or when to see: An #EEGManyLabs replication study with a twist”

This is one of the best pieces of work I have been asked to review! It aims to replicate an important study with foundational implications for influential theories of oscillatory brain function. There have been previous replication attempts in this area, including a Registered Report, but I am not aware of a multi-lab attempt with such potentially high power (but see point 4 below). The outcome, I expect, will be of great interest to the field.

Below I list some suggestions for adjustments in order of appearance in the manuscript, none of which I see as fundamentally critical.

1. Title: would it be possible for the title to be a bit more specific and informative? Something along the lines of ‘Testing the replicability of alpha phase determining visual perception’. I realise that’s not as catchy, but I found the current title a little vague.

Introduction:

2. The third sentence assumes subjective experience is a continuous flow, but phenomenology suggests this is not straightforwardly the case (see e.g., Husserl, The Phenomenology of Internal Time-Consciousness; Busch and VanRullen, 2014, Is visual perception like a continuous flow or a series of snapshots? In Subjective Time, MIT Press; or Dainton, 2010/2023, Stanford Encyclopedia of Philosophy, Temporal Consciousness). This could be simply resolved by dropping the first part of that sentence.

3. In the hypothesis at the end of the introduction, I thought the statement under a negative finding that “there is no evidence for visual perception to operate in cycles” was a little strong, and should perhaps be rephrased to refer to this study's evidence. This also relates to point 9 below. Relatedly, I thought it would be potentially more informative if the decision criteria at the end of the introduction (which seem primary) were based on the outcome of the Bayesian analysis rather than the frequentist statistics.

Methods:

4. I found the first and second sentences of the participants section a little ambiguous. It could be read that the 7 labs will collectively provide a 35-participant data set, i.e., 5 each, OR that each lab would aim to contribute a 35-participant data set. My confusion was not helped by other sections seeming to imply both that the full data set had n=35 (e.g., “final sample size (N=35)”) and that each lab is to produce a complete data set (e.g., “compute effect sizes (Cohen’s d) for each individual lab”). If it is the former, and given the reasons well specified in the introduction for the effect sizes reported by Mathewson et al. being larger than we might expect in replication, then I do not think the study as it stands should be described as “high-powered”. If there is a real effect but it is smaller, as the authors suggest is likely, the study would be underpowered. Would it be possible to recalculate power estimates based on smaller effect sizes? Perhaps based on the estimates of VanRullen et al. (2016) described in the introduction? I appreciate the intention behind the use of the Mathewson et al. effect sizes, but if they are used, I suggest explicitly describing the limitations in interpreting a negative outcome (also see point 9).

If, however, each lab is to contribute a 35-participant data set, then great, and I think this should be made clearer and the total minimum n stated. It might also help if the smallest effect size reliably detectable were stated; including this should be informative either way. I also wondered whether a simpler concatenation of the data across labs might complement the meta-analytic approach while being slightly more powerful. I also thought it might be possible to combine evidence across labs more efficiently by taking advantage of Bayes Factors being transitive (Morey and Rouder, Psychological Methods, 2011).
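To illustrate how sensitive the required sample size is to the assumed effect size in this point, here is a short sketch with made-up numbers, using statsmodels' standard power solver rather than the authors' actual calculation:

```python
# Required sample size for a one-sample/paired t-test at alpha = .02,
# power = .90, across three hypothetical effect sizes (illustrative only).
from statsmodels.stats.power import TTestPower

solver = TTestPower()
for d in (0.8, 0.4, 0.25):   # hypothetical large, medium, small effects
    n = solver.solve_power(effect_size=d, alpha=0.02, power=0.9,
                           alternative="two-sided")
    print(f"d = {d:.2f} -> n ≈ {n:.0f} participants")
```

Halving the assumed effect size roughly quadruples the required n, which is why the choice between the original and attenuated effect-size estimates matters so much here.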

5. I didn’t see the need to restrict the age of participants to a 12-year range.

6. Stimuli and procedure. I thought there should be a sentence qualifying the timings as approximate, or specifying two sets of timings, as the refresh rates of the monitors used across the different labs would mean these exact timings cannot be followed by all labs.
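To make the point above concrete, here is a toy illustration (hypothetical refresh rates, not tied to any participating lab's hardware) of how a monitor's refresh rate quantises the durations a lab can actually present:

```python
def achievable_duration_ms(requested_ms: float, refresh_hz: float) -> float:
    """Nearest duration a monitor can actually display: an integer
    number of frames at the given refresh rate."""
    frame_ms = 1000.0 / refresh_hz
    n_frames = max(1, round(requested_ms / frame_ms))
    return n_frames * frame_ms

# A nominal ~12 ms target on hypothetical 85 Hz vs 120 Hz monitors:
print(achievable_duration_ms(12, 85))   # ~11.76 ms (1 frame of 11.76 ms)
print(achievable_duration_ms(12, 120))  # ~8.33 ms  (1 frame of 8.33 ms)
```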

7. The sentence “The experimental session counts 16 blocks of 72 trials each”: should that be “The experimental session consists of 16 blocks of 72 trials each”?

8. Should an impedance target be prescribed? I understand this can be equipment-dependent, but then maybe it could be lab-specific.

9. I would have liked to see more weight given, in the main text, to adjudicating between support for the null and a lack of evidence in the case of a negative finding. I think differentiating between these two potential outcomes could be important information for the field. I understand that the Bayesian meta-analytic approach offers the potential to do this, so I was wondering whether the Bayesian analyses could receive greater emphasis in the decision criteria, for example at the end of the introduction. This might mean using Bayesian equivalents for the primary t-tests (which can be derived from T statistics) and recalculating the sample size estimation based on Bayesian simulations, but in my experience the outcomes of Bayesian and frequentist estimations invariably align. Alternatively, frequentist statistics can assess equivalence (Lakens et al., 2018, AMPPS). This relates to point 4: to fairly assess equivalence, a smaller expected effect size may be required.

10. Related to the above point, and based on the expectation of small effects described in the introduction, I thought the prior applied in the Bayesian analyses should probably reflect this aspect of the hypothesis and be smaller than the default 0.707 scaling factor (see Dienes, 2011, Perspectives on Psychological Science).
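As a concrete illustration of points 9 and 10, here is a self-contained sketch of a JZS Bayes factor computed directly from a t statistic (Rouder et al., 2009), with the Cauchy prior scale r exposed so the default 0.707 can be compared with a narrower prior. This is an illustration only, not the authors' registered analysis:

```python
import numpy as np
from scipy.integrate import quad

def jzs_bf10(t, n, r=0.707):
    """JZS Bayes factor BF10 from a one-sample t statistic with n subjects,
    Cauchy(0, r) prior on effect size (Rouder et al., 2009)."""
    nu = n - 1
    null_lik = (1 + t**2 / nu) ** (-(nu + 1) / 2)   # likelihood under H0
    def integrand(g):
        k = 1 + n * (r**2) * g                       # marginalise over g-prior
        return (k ** -0.5
                * (1 + t**2 / (k * nu)) ** (-(nu + 1) / 2)
                * (2 * np.pi) ** -0.5 * g ** -1.5 * np.exp(-1 / (2 * g)))
    alt_lik, _ = quad(integrand, 0, np.inf)          # likelihood under H1
    return alt_lik / null_lik

# The same hypothetical t with the default vs a narrower prior scale:
print(jzs_bf10(2.1, 35, r=0.707), jzs_bf10(2.1, 35, r=0.4))
```

For modest t values, a narrower prior typically yields a larger BF10, because the alternative hypothesis no longer predicts implausibly large effects — the point made in comment 10 above.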

11. Table 1 describes decision criteria based on p<0.05, whereas the text describes p<0.02 criteria (also true in the final table). Can this be checked or reasons for the discrepancy given? Also, I did not find the description of Hp b1 in this table very easy to understand.

12. Table 2 and the final table seem to repeat much of the same information; I realise the final one conforms to guidelines, but I still wondered whether they could be integrated. I also thought the final table would benefit from basing criteria on Bayesian tests, to more straightforwardly differentiate between negative and inconclusive outcomes, and this might help align the tables.

 

Reviewed by Luca Ronconi, 29 Sep 2023

This registered report describes a large replication project under the #EEGManyLabs initiative. The aim of this report is to describe a replication of the study by Mathewson et al. (2009), which provided some of the first evidence that the visibility of a visual target stimulus depends on the prestimulus alpha phase.

I find this multi-lab project very interesting. The report is well written and clearly describes all the details of the replication study that will be conducted.

I have only a couple of observations, which relate to the declared intent of performing a second study, conditional on the results of the first, in which the Authors would like to test whether temporal expectation plays a role in the relationship between stimulus detection and ongoing alpha. The motivation behind the second study is clear: i.e., if visual perception operates in cycles, any influence of pre-stimulus alpha phase on behavioural and/or neurophysiological measures should hold even when less (or no) temporal expectation is present. However, I would suggest the Authors consider the following observations:

1) I think the description of the rationale for Study 2 is sometimes overly simplistic. It is true that in Study 1 subjects will have strong temporal expectation because of the very short and fixed inter-stimulus interval (ISI) employed, but such expectation is very likely to also be present in Study 2, although in an attenuated form. This should be more clearly stated.

2) I invite the Authors to consider that, in case Study 1 is successfully replicated, temporal expectation is only one of several aspects that differentiate the two studies. For example, the evoked response and the concomitant 'phase reset' created by the appearance/disappearance of the fixation cross are likely to be more influential in Study 1 than in Study 2, because of the greater temporal distance between events in Study 2. How will the Authors take this factor into account in Study 2 to be confident in their conclusion about the role of temporal expectation? This should be better clarified.

 

Reviewed by Alexander Jones, 29 Sep 2023

This registered report aims to replicate, in Study 1, the effects observed in Mathewson et al. (2009) using a fixed ISI. If the results are replicated, a variable ISI will be included in Study 2. The original Mathewson et al. study has some limitations in its fixed-ISI design, which are covered nicely in the intro, and it also has some issues with the sample size used; or rather, with the lack of a sample size estimate, a common omission back in 2009. Thus, it is a good study to replicate in #EEGManyLabs. The manuscript is nicely written and it is clear. I only have one main point and a couple of minor things to clarify.

 

The prediction is that temporal expectation may be present in Study 1, with its fixed ISI, but not in Study 2, which includes a variable ISI. One thing to consider is any potential influence of the foreperiod effect/hazard function, which may be present in Study 2. That is, there is some type of temporal expectation (the effect of elapsed time) in Study 2 which is not there in Study 1. So the question is whether you have considered whether the foreperiod/hazard effect can influence the results. It would be good if you could add something regarding this.

 

 

Minor

Why pick Pz and Fz as the analysis electrodes? I get that these were the ones used in the original paper, but in that paper they also used Oz and showed an effect on the P1/P2 components.

 

Quality checks: Can you please add a bit more description of what “target only trials larger than 0dB…” means? As timing is crucial to this study and to finding the effects (if they are there), these checks are important. You don’t want to find yourself in a situation where a lab fails to replicate due to poor timing rather than due to the absence of the effect. The relatively high refresh rate will go some way to ensuring stimulus presentation is accurate, but of course other things can cause delays. If the results differ across labs, you might want to check, e.g. with a photodiode, that any differences are not due to poor timing. Just to rule that out.

 

Alexander Jones
