DOI or URL of the report: https://osf.io/zu6qv
Version of the report: 1
Dear Dr. Karakashevska,
Thank you for your thorough reply to the reviewers and recommenders. We have now received positive feedback from three of our reviewers and will now invite you to respond to the few remaining comments.
One of the reviewers highlights the necessity of a multiple-comparison correction for the sub-hypotheses under hypothesis 3, especially considering how these pairwise comparisons come together in support of hypothesis 3. We agree with the reviewer and suggest using a Bonferroni correction, with the adjusted alpha applied to each comparison. Power should be calculated for the t-tests with respect to the adjusted alpha. As your study seems very highly powered, this should not be a problem.
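To make the suggestion concrete, the adjusted alpha and the corresponding paired t-test power can be computed analytically from the noncentral t distribution. This is only a sketch (the function name is ours), assuming the d = 0.34 effect size and N = 120 from your Stage 1 plan:

```python
import numpy as np
from scipy import stats

def paired_t_power(d, n, alpha):
    """Two-sided power of a paired (one-sample) t-test, via the noncentral t."""
    df = n - 1
    nc = d * np.sqrt(n)                       # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)   # two-sided critical value
    return (1 - stats.nct.cdf(t_crit, df, nc)
            + stats.nct.cdf(-t_crit, df, nc))

alpha_adj = 0.05 / 4   # Bonferroni across the four H3 comparisons
power = paired_t_power(d=0.34, n=120, alpha=alpha_adj)
```

If power at the adjusted alpha were to drop below your target, the sample size could be revisited, but given the figures above we do not anticipate a problem.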
Furthermore, we reiterate our suggestion that you run only the pairwise comparisons to answer hypothesis 3, rather than include the RM ANOVA to examine the main effect of Block. The reason we suggest jumping straight to the four pairwise comparisons is twofold: 1) a main effect of Block adds nothing beyond the pairwise comparisons, each of which directly tests a claim of interest; 2) a Bonferroni correction already controls the familywise error rate without requiring a significant omnibus main effect (controlling familywise error rate is the only function of testing the omnibus effect before proceeding to specific comparisons). Please also be clear about which error term you will use when you specify the t-tests (presumably the error term specific to each t-test, rather than one derived from the ANOVA). The t-test-specific error term is what you would get by simply asking a package to perform a t-test, rather than requesting multiple comparisons as post-hoc tests within an ANOVA.
For clarity, we also request that you separate the H3 hypotheses and their four t-tests in the design table.
Minor
Missing word in the additional sentence: "In a recent SPN study (Karakashevska et al., forthcoming), we found polygons slightly *** perspective cost, but do not eliminate it."
Please accompany revisions with a reply to reviewers and a tracked changes version of the manuscript. We look forward to hearing from you.
Best,
Grace & Zoltan
The revised version of the article and the reply addressed my questions. I only have a few minor suggestions left.
The new Figure 4 is very useful. Try to match font sizes across panels and align the ABCD labels. In panel A, font sizes are too small and the y axis seems to have two superimposed lines.
"We powered our experiment ..." should be changed to "our line of research". Power is not defined for a single experiment. Alternatively, explain how you determined your sample size, using a power analysis...
"are significant " -> "are statistically significant"
About filtering: ideally, add details about the filter characteristics: FIR or IIR, the specific kernel... Or at least report the name of the function and the version of EEGLAB it came with. This is important for reproducibility, because for years EEGLAB shipped very poor default filter settings.
As for the 25 Hz distortions, your figure is convincing. I would just cite your 2020 paper to support the lack of signal distortions.
Channel interpolation: I didn't mean that you shouldn't remove bad channels, but simply that you shouldn't interpolate them after removal. Take your example of ICA: interpolation doesn't recover any information, so your ICs will be the same with or without it. It is up to you, but in my experience interpolation gains you nothing. For topographic maps, interpolation is built into the plotting algorithm, so again, no gain.
"However, the validity of these tests is questionable." -- I'm not sure the tests can be described as invalid. What we know from simulations is that these tests have very low power, so failure to detect a deviation from normality is not conclusive. Also, accepting normality because p>0.05 is a statistical fallacy. In the text, I would phrase your conclusion more carefully, suggesting that normality is a reasonable *approximation* to the population distribution. After all, we know that ERPs cannot be normally distributed -- the values are necessarily bounded. And if you suspect skewness, you could always run simulations using a small amount of skewness (g-and-h distributions are great for that). One-sample t-tests using trimmed means will increase power in the presence of skewness.
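To illustrate that last suggestion, here is a minimal sketch of a one-sample trimmed-mean t-test (Tukey-McLaughlin); the function name is mine, and for serious use Wilcox's implementations are the reference:

```python
import numpy as np
from scipy import stats

def trimmed_t_1samp(x, mu0=0.0, trim=0.2):
    """One-sample t-test on a trimmed mean (Tukey-McLaughlin).

    Robust to skewness: the statistic uses the trimmed mean and the
    winsorized variance instead of the ordinary mean and variance.
    """
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    k = int(np.floor(trim * n))        # observations trimmed per tail
    xw = x.copy()
    xw[:k] = x[k]                      # winsorize the tails
    xw[n - k:] = x[n - k - 1]
    tm = x[k:n - k].mean()             # trimmed mean
    se = xw.std(ddof=1) / ((1 - 2 * trim) * np.sqrt(n))
    t = (tm - mu0) / se
    df = n - 2 * k - 1                 # effective degrees of freedom
    return t, 2 * stats.t.sf(abs(t), df)
```

For the skewness simulations, you would only need to swap the sampling step for g-and-h generated data and compare the long-run error rates of this test with those of the ordinary t-test.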
The authors have done a very good job in addressing all of my earlier concerns and suggestions. In my opinion, the submission can now pass stage 1 review.
In the revision of the manuscript, entitled "Putting things into perspective: Which visual cues facilitate automatic extraretinal symmetry representation?", by Elena Karakashevska, Marco Bertamini, and Alexis D.J. Makin, the authors addressed issues raised by the reviewers. Now, I have only a few minor concerns.
p.9. Hypothesis 3, which is a set of four sub-hypotheses, is discussed on the basis of the results of statistical tests of these sub-hypotheses. How will the authors combine the results of these tests to discuss Hypothesis 3? If the authors use the logical operator “or” (at least one of the tests shows results consistent with the predictions), a Bonferroni correction of the statistical tests is necessary. If the authors use the logical operator “and” (all of the tests show results consistent with the predictions), the correction is not necessary.
p.14. “Lux” is the unit of illuminance and not of luminance. Luminance cannot be computed from the illuminance without using some additional information.
p.14. > two visual transformations, first adopting the position of the virtual camera, and then correcting for perspective distortion (Sawada & Pizlo, 2008).
I do not see any part of Sawada & Pizlo (2008) discussing such a mechanism in the visual system.
p.15. > The angles of 60 and 15 degrees were also chosen to follow recommendations in Sawada and Pizlo (2008).
Sawada & Pizlo (2008) used slant between 50 and 70 degrees for their visual stimuli but they were not making any general recommendation about the slant.
Figure 6B. I do not see any purple dots in this figure. In this figure, there is an arrow, dotted grids, circles, and a horizontally-long rectangle but none of them are explained in the caption of the figure.
Reading the authors’ reply to my comments in the last review, I see that the authors want to discuss whether the projective transformation or the perspective transformation is “superior.” As I mentioned in the last review, the perspective transformation is a sub-set of the projective transformation, so we cannot say that one is “superior.” When we use these transformations for a particular application, we can discuss which transformation is valid for that application.
DOI or URL of the report: https://osf.io/tqcdu
Version of the report: 1
Dear Dr. Karakashevska,
We have been fortunate to receive insightful and thorough comments from four expert reviewers. We agree with the reviewers that your research question is well-motivated and you have a strong proposed design. There are several areas where the reviewers suggest clarifications, to highlight a few:
Drs Apthorp and Cottereau both indicate the need for more information about task difficulty and its potential impact on the results. They suggest that if the task is too easy, participants will be able to focus on the regularity of the stimuli, which would undercut the automaticity you would like to study and could reduce the perspective cost, according to a study you previously ran (Makin et al., 2015). Please note, there seems to be some inconsistency between the mention of task-based effects in Makin et al. (2015) and the earlier assertion in the manuscript (Page 2), which suggests “The SPN is comparable when participants are classifying stimuli in terms of symmetry or in terms of a different dimension such as colour…” and “Task relevance of symmetry has a relatively small effect on SPN amplitude…”. It is plausible these statements refer to frontoparallel stimuli only, but please clarify.
Our anonymous reviewer also pointed to previous studies in which participants struggled to detect mirror-symmetry in dot-stimuli, a difficulty that was overcome with the use of contours. Please motivate the use of dot-stimuli in your Stage 1.
Dr. Rousselet suggested using the Greenhouse-Geisser correction by default, so that your power analyses do not need to include a conditional test of sphericity. They also proposed a potential design change for boosting your signal-to-noise ratio through the addition of a localizer.
From our perspective as recommenders, we have a further few items to address in relation to formatting for registered reports:
1. Hypothesis testing:
Your hypotheses are direct and well-motivated (with the addition of a couple of suggestions from the reviewers). In this case, the analyses you plan to run should be equally succinct:
Hypothesis 1: Performing a 2x4 repeated measures ANOVA (regularity x condition) will provide a main effect of regularity to test your hypothesis, as outlined in your Stage 1, but it will also provide a main effect of condition and an interaction between condition and regularity. Please report only the main effect of regularity, so you do not have to interpret the main effect of condition and the interaction without an initial hypothesis; those outputs should not appear in the pre-registered results section.
Hypothesis 2 is directly examined.
For each sub-level of hypothesis 3, it seems you need only paired t-tests between each condition and baseline, and between the static and moving frame conditions – so there is no need for the main effect of the ANOVA suggested in the study plan. Power analyses should be calculated for the smallest effect of interest across your study, which should consider each pairwise comparison rather than the main effect.
Finally, it seems that for hypothesis 4 you are performing a one-sided equivalence test (or a non-superiority test; testing that the effect is not greater than zero). Please state this explicitly in the Stage 1 text and study design table.
2. Power analyses:
You mention "An SPN modulation effect size 0.34 SD corresponds to around 0.35 microvolts (Makin et al., 2022), which is smaller than nearly all reported SPN modulations". As the 0.35 is a crucial quantity in the power and equivalence-testing analyses, you should support this claim by listing the papers you are referring to, and the range of effect sizes.
Minor comments:
- “exemple” fig 4.
- “bloc” Study Plan Table; Hypothesis 4 row.
- Please match hypotheses in main text with study plan table exactly to remove any confusion.
- You should also provide a “theory that could be shown wrong by the outcomes” for each hypothesis in the final column of the study design table.
Considering the positive comments from the reviewers, we believe your manuscript has potential for Stage 1 in-principle acceptance; we therefore request revision and resubmission. Please address each reviewer's and recommender's comments and revise your manuscript accordingly.
Best,
Grace & Zoltan
The manuscript, entitled "Putting things into perspective: Which visual cues facilitate automatic extraretinal symmetry representation?", by Elena Karakashevska, Marco Bertamini, and Alexis D.J. Makin is a registered report of a study measuring neurophysiological responses to mirror-symmetry in the frontoparallel plane and to mirror-symmetry of a planar figure that is slanted from the frontoparallel plane in a 3D scene. The current status of the study is Stage 1, so, the manuscript only includes the introduction, design, and methods sections. The study is well motivated and it is well designed, but it needs a little revision.
Sawada & Pizlo (2008), and several studies by Wagemans and his colleagues have shown that the mirror-symmetry of a slanted planar figure is hard to detect when only dot-stimuli are used. Reliable detection is possible with contours, so this will be a limitation of this study.
p. 1. Information about pictorial depth cues in the Static frame condition is missing in the manuscript.
p. 8. Hypothesis 4 is a subset of Hypothesis 3. Hypothesis 3 is a composition of four sub-hypotheses, which are based on different factors in the visual stimuli. Hypothesis 3a is about cue conflict. Hypothesis 3b is based on an additional pictorial cue. Hypothesis 3c is based on an additional motion cue. Hypothesis 3d concerns the difference between these additional pictorial and motion cues. The authors do not explain why these sub-hypotheses were combined into a single hypothesis.
p. 5. > participants perform symmetry discrimination tasks (Karakashevska et al., 2022).
There is no Karakashevska et al. (2022) in the References section. Is it Karakashevska et al. (2021)?
p. 6. > stereo cues indicate that that it is flat (Allison & Howard, 2000). …
A plane is still flat even when it is slanted in a 3D scene. Perhaps, the authors want to say “frontoparallel” here.
pp. 9-10. > For hypothesis 4 we predict an absence of an effect in the moving frame condition.
What is the effect on?
> In stage 1, we will run 4 one sample t tests against zero.
What are these t-tests about?
> we will find an effect in more than 95% of experiments
Are the authors referring to “power”? The authors will conduct the proposed experiment only once.
> Stage 2 is required to establish…
What is “stage 2”?
p.11. > … with small Gabors (approximate 0.25 dva diameter, Figure 5A).
I do not see any Gabor patterns in Figure 5A. Perhaps, the authors are actually referring to Gaussian patterns.
> … asymmetrical patterns had accidental rows and columns …
What does “accidental” mean in this sentence?
> Perspective views were produced by changing the position of the virtual camera.
What is the virtual camera?
> For frontoparallel trials, the virtual camera was on the equator and vertical meridian.
I understand that the authors are trying to explain the process used to generate their visual stimuli by making use of an analogy between the sphere and the earth, but, the authors need to first explain the orientation of their “earth” relative to a virtual scene. If this is not explained, readers cannot understand how the equator or the meridian is oriented. At this point, this paragraph does not clarify the process.
p.12 > This study therefore involved projective transformation rather than a superior perspective transformation
The perspective transformation is not superior to the projective transformation. It is a sub-set of the projective transformation. A retinal image of a planar figure in a scene is a perspective transformation of the figure based on the pinhole camera model.
p.13. Figure 5B is unclear and is also noisy.
p.14. > … to classify Gabor element luminance.
Gaussian?
p.15. > For the perspective conditions, the virtual camera will move from +/- 60 to +/-40 degrees and back again twice, …
This sentence is unclear. Does the camera oscillate between +60° and -60° first and then oscillate between +40° and -40°? Or, does it oscillate between +60° and +40° or between -60° and -40°?
This RR is relatively clear and presents well conceived hypotheses and design. There are enough experimental conditions to allow a clear interpretation of the results. A lot of code and the stimuli are already shared online, which is brilliant. I'm not an expert in symmetry perception so my comments are mostly about the structure of the article and the analyses. I look forward to seeing the results!
##Abstract
There is a very abrupt transition from a general topic to a specific goal about computational resources. At least one extra sentence is needed to explain the problem. The third sentence also introduces a new topic abruptly, with the explanation only found in the following sentences: reverse the order for better flow.
What is the meaning of "selectively reduced"? Would "reduce" suffice? Otherwise explain.
The key sentence "However, this perspective cost might be reduced when additional visual cues support extraretinal representation." is insufficient to understand the problem. How do the different blocks help answer the question?
"The task [...] they will". Rephrase to focus on task or participants, but not both in the same sentence.
"we will conclude that automatic extra-retinal symmetry representation occurs during luminance discrimination" -- luminance discrimination appears for the first time at the end of the abstract and should be explained earlier.
##Introduction
"Reflectional symmetry is everywhere in the universe." Even in black holes? Do you need this sentence?
"Both symmetrical and asymmetrical stimuli generate event related potentials (ERPs) at posterior electrodes." -- could you be more specific? Any brief visual presentation triggers ERPs.
"the symmetry wave" -- needs more explanation. Do you mean a sequence of ERPs following the presentation of a symmetric stimulus?
"This difference is called the ‘Sustained Posterior Negativity’ (SPN)" -- difference between what and what?
Figure 1: turn this image into grey levels and you will see the issue: the colourmap is neither perceptually linear nor colourblind friendly. Viridis and related colourmaps are both. There are also better divergent colourmaps you could use for the topographic plots.
Figure 2 B & C: in grey levels the contrast between conditions is poor. For accessibility, make one condition black, the other one grey.
"Stereo defined symmetry is another form of extraretinal symmetry..." -- unclear how that topic relates to the two studies mentioned in the previous sentence; be explicit.
"indicate that that it is flat" -- that x 2
"perspective cost would be zero (as in Figure 2B)" -- the two conditions actually differ in that figure. Maybe phrase as close to zero or practically equivalent, which would prepare readers for the equivalence test that you present later on.
"We predict that perspective cost will highest" -- be missing
Figure 3: contrast could be increased in that figure too. A suggestion is to have one condition in black, one in grey and one in white with a black surround.
"by covering the one eye" -- delete the
##Method
"A sample 120 participants" -- of missing
"All participants will have normal or corrected to normal vision and no history of neurological conditions." -- based on a self report?
"We thus powered our experiment" -- I know it is a shortcut, but to be accurate, power cannot be the property of an experiment; it is only defined in the long run, for a line of research. Also, power is not defined in a vacuum: it must be specified for a particular test.
"N=120 provides 92% chance of finding a significant..." This statement is inaccurate, as there is no probability associated with a single experiment. Also, you first need to explain what will be measured, how that quantity is distributed, and what test(s) will be used, before you can address power. So a bit of reorganisation of that section is needed.
"strict thresholds" -- stricter? I agree that aiming for 90% power in the long run is a big improvement over the traditional 80% (another tradition with zero foundation). I wonder why people think it is ok to miss an effect in one out of five experiments.
The power simulation is a great addition, but you need to justify the use of a normal population. Given that you have access to a large database of SPN, it would be very informative to illustrate a large n distribution. I see this is mentioned later on: "Analysis of the whole SPN catalogue suggests that individual participant SPNs are usually normally distributed around the grand average." So bring it all together, before the power section, ideally with an illustration.
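For illustration, the kind of simulation I have in mind takes only a few lines; the function name is mine, and the normal population with d = 0.34 and N = 120 (the figures from the manuscript) is exactly the assumption that needs justifying:

```python
import numpy as np
from scipy import stats

def simulated_power(d=0.34, n=120, alpha=0.05, n_sims=2000, seed=1):
    """Long-run power of a one-sample t-test, assuming a normal population."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        x = rng.normal(loc=d, scale=1.0, size=n)  # effect size in SD units
        hits += stats.ttest_1samp(x, 0.0).pvalue < alpha
    return hits / n_sims

power_est = simulated_power()
```

Replacing the normal sampling line with draws from a skewed distribution (g-and-h, for instance) would show how robust your power estimate is to the normality assumption.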
"a specified correlation of 0.5" -- correlation between what and what?
"LCD monitor" -- add specs.
"The luminance of the light and dark elements" -- report the values.
"It will thus marginally darker" -- be missing
"on the Baseline and Monocular blocks" -- on -> in.
"This feature can be seen be inspecting" -- be -> by
Figure 5B: the disk with angles is presented under the trial structure and is too small. I suggest splitting this figure in two or making it larger.
##EEG
matlab -> Matlab + add version number.
Add details about the filter characteristics. LP filter at 25 Hz seems a bit drastic, but that depends on the slope/order of the filter.
"These channels will then be replaced with spherical interpolation." -- what is the point of interpolation? It doesn't add any information to the analyses. Do you plan to include an electrode factor in the ANOVAs? If not then interpolation is a waste of time.
Cluster of electrodes: do you plan to average the ERPs across these electrodes? If the SPN varies a lot across electrodes, it would be more powerful to use a localiser to identify the best electrode(s) in each participant. Otherwise averaging over so many electrodes will necessarily lower the effect.
##Analysis plan
"We will check for violations of the normality assumption using the Kolmogorov-Smirnov test." This is a bad idea for reasons explained here:
https://garstats.wordpress.com/2022/09/30/normtest/
The KS test is extremely poor at detecting deviation from normality. More importantly, you mentioned above that you have good reasons to believe that the SPN population is normally distributed, so that makes a test of this assumption superfluous. Also, non-parametric tests are not equivalent to the parametric ones: they do not test the same hypotheses.
"If we violate the assumption of Sphericity..." -- just use the GG correction by default. Otherwise, if your tests are conditional on other tests, you need to redo the power analyses to include the extra decisional step.
##Results
For stage 2, consider how you will represent the results in sufficient detail. I would expect an article free of bar graphs and with clear representation of individual results. Here are some guidelines:
https://onlinelibrary.wiley.com/doi/full/10.1111/ejn.13400
This submission focusses on the mechanisms underlying symmetry perception in humans using scalp EEG recordings. The authors propose to characterize how different visual cues support extraretinal symmetry representation. To this aim, they will measure the sustained posterior negativity (SPN) in different viewing conditions and compute a perspective cost which corresponds to the difference between frontoparallel and perspective SPNs. They hypothesize that this perspective cost will be diminished (compared to a baseline condition) under monocular viewing (i.e., when the cue conflict between perspective and binocular disparity is removed) and when additional perspective cues (either static or moving frames) are added. In my opinion, this submission could pass stage 1 of the review process as the research question is scientifically valid and the proposed hypotheses are plausible. In addition, the experiments sound feasible and the methodology is well developed and can thus be replicated. I provide more detailed comments below.
1A. Scientific validity of the research question
The proposed scientific question stems from numerous psychophysical and neuroimaging (EEG and also fMRI) studies which suggested that extraretinal symmetry representations are not constructed automatically when attention is focused on another task (e.g., when participants are instructed to report non-symmetrical features of the stimuli). In event-related EEG recordings (‘ERPs'), these mechanisms can be reflected by a perspective cost corresponding to the difference between frontoparallel and perspective SPNs. Here, the authors wish to question whether this perspective cost is removed under more naturalistic viewing conditions, when sufficient cues are available to support 3D interpretation. This is a valid and scientifically justifiable question which was not addressed in previous works. This question is answerable through quantitative research and does not suffer from ethical issues.
1B. Logic, rationale, and plausibility of the proposed hypotheses
As preliminary hypotheses, the authors propose that their experimental protocol will allow them to measure sustained posterior negativities (SPNs) at posterior electrodes between 300 and 600 ms post stimulus onset. They also propose that in the baseline condition (symmetric/asymmetric stimuli without additional cues), these SPNs will be significantly larger for frontoparallel than for perspective stimuli, leading to measurable perspective costs. These preliminary hypotheses are supported by the results of previous studies performed by the authors using a similar experimental protocol (Makin 2022; 2015).
The main hypothesis of the study is that perspective costs will be reduced (as compared to a baseline) under more realistic viewing conditions, i.e. under monocular viewing (when the conflict between perspective and binocular disparity is removed) and when additional perspective cues (either static or moving frames) are added. The authors also hypothesize that the perspective cost will be lower with moving frames than with static frames (because the cues supporting the extraretinal representation of symmetry are weaker in this latter case). In addition to this main hypothesis, the authors also propose that the perspective cost will approximate zero with moving frames (although in this case, a conflict with binocular disparity is still present). All the hypotheses are precisely stated and follow directly from the research question.
1C. Soundness and feasibility of the methodology and analysis pipeline (including statistical power analysis or alternative sampling plans where applicable)
The methodology developed in this submission is based on previous EEG studies from the same group, which demonstrated the feasibility and soundness of the proposed experiment. The authors already measured significant SPNs (see e.g., Makin, 2022) and perspective costs (Makin, 2015) using a similar analysis pipeline. This pipeline is based on a classical pre-processing of the EEG data (data are re-referenced to the scalp average, filtered, and segmented into epochs; an independent component analysis is used to remove artefacts such as eye blinks; for each condition, event-related potentials are computed on a pre-defined posterior electrode cluster between 300 and 600 ms after stimulus onset). The authors provide a convincing statistical justification of their sample size (n = 120), which should allow them to properly test the different proposed hypotheses. It should be noted that this sample size is much larger than in previous EEG experiments that measured SPNs.
I nonetheless noted a few points that might deserve some attention:
- The authors chose the luminance values used in their task based on a pilot experiment, in order to obtain more than 90% correct responses. This value is rather high (chance level is 50% in this case). Is it possible that for some participants the task is very easy, so that they can also attend to the symmetry of the stimuli, thereby reducing the perspective cost, even in the baseline condition?
- The authors intend to replace any participant whose performance is below 80% in any block. This criterion may be a little harsh. Is there any justification for it?
- In the moving frame condition, the frame motion will stop 1000 ms before stimulus onset (in this interval, only a static frame will be displayed). Did the authors consider whether the neural responses triggered by the motion will still be observable 1000 ms later (i.e., after stimulus onset)? These possible late ERPs will be removed in the computation of the SPN (with respect to baseline) and of the perspective costs, but should be taken into account if the authors intend to show raw ERPs.
1D. Clarity and degree of methodological details, replicability of the proposed study procedures and analysis pipeline
The manuscript gives sufficient methodological detail for the experimental protocol to be reproduced. The authors notably provide OSF weblinks to their code for the power-analysis simulations, for generating the stimuli and running the experiment, and for processing the EEG data in Matlab. The proposed methodology is clearly structured and easy to follow.
1E. Consideration of outcome-neutral conditions (e.g. absence of floor or ceiling effects; positive controls; other quality checks) for ensuring that the obtained results are able to test the stated hypotheses
The main hypothesis of the study will be tested by comparing the perspective costs in different viewing conditions with a baseline. The baseline condition was already used in a previous study and is likely to lead to a significant perspective cost. The chances that the obtained measurements will permit a test of the main hypothesis are thus very high.
This report aims to investigate human processing of visual symmetry. The authors are experienced in this field. Previous research shows there is an automatic pre-attentive response to visual symmetry which is seen in EEG recordings, the Sustained Posterior Negativity (SPN). This effect is diminished when displays are shifted from the frontoparallel plane, but only when participants are not actively detecting symmetry in the displays. The authors suggest that this may be because visual displays used in experiments provide few (or no) perspective cues to indicate that the display is shifted from the frontoparallel plane, and that this “perspective cost” may be reduced if participants are given more perspective cues. The Introduction explains the gap in the literature and sets out the research question well.
The hypotheses are logically set out, especially the first three. For Hypothesis 4, I think we need to see more justification of why motion cues are expected to eliminate perspective cost. This seems to be a very central hypothesis for the study, and yet it is only justified in a very small paragraph on Page 6 (last paragraph before Study Aims and Hypotheses). Why should this cue be so much stronger than the other two?
The sampling plan is a little vague (voluntary sampling isn’t a thing - do the authors mean convenience sampling? How will participants be recruited? How much will they be compensated? Will their visual acuity and stereo vision be tested - and if so, how?). The power analysis seems sound and 120 participants is certainly a high number for an EEG study. The justification for the effect size of interest is good and is conservative, based on previous studies.
It isn’t quite clear whether 120 is the initial number and more participants will be recruited on top of this if a participant’s data need to be excluded (e.g. for poor performance on the behavioural measures). I assume this is the case, but it could be clearer.
Hypotheses 1-3 can all be tested in a single analysis (the 2x4 RM ANOVA suggested in the analysis for H1). H1 can be tested by the main effect of Symmetry, while H2 and H3 can be tested by the suggested pairwise comparisons for Block. (Since the hypotheses are directional, a one-tailed test is most appropriate for each.)
For Hypothesis 4, I would suggest an equivalence testing approach. See Lakens, Scheel & Isager (2018) for a tutorial. I really think this would be a much simpler approach than the currently suggested approach of testing against a specific value.
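To sketch what that looks like, the TOST procedure is just two one-sided tests; the function name and the ±0.34 SD bounds (taken from the smallest effect of interest mentioned in the report) are illustrative, and the tutorial covers principled bound-setting:

```python
import numpy as np
from scipy import stats

def tost_1samp(x, low=-0.34, high=0.34):
    """Two one-sided tests (TOST): the effect is declared equivalent to zero
    if it is reliably above `low` AND reliably below `high`."""
    x = np.asarray(x, dtype=float)
    p_low = stats.ttest_1samp(x, low, alternative='greater').pvalue
    p_high = stats.ttest_1samp(x, high, alternative='less').pvalue
    return max(p_low, p_high)   # equivalence claimed if this is below alpha
```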
As a minor point, Figure 5b is almost impossible to read - specifically the diagram of the degrees of rotation. Perhaps this could be a separate figure?
The task that participants will perform could be more clearly set out, perhaps in a figure. If it is designed to be relatively easy, how do we know that participants are not consciously attending to symmetry in the stimuli? It seems that the task (illustrated in figure 5B) is to say whether the stimulus is light or dark - but compared to what? Is there a standard? This isn’t clear from the figure.
In general, the methodological detail here is excellent, and I particularly commend the authors for sharing all the study code on OSF, as well as the EEG processing pipeline. One thing that was not very clear to me was why the horizontal and vertical EOG channels are recorded when they are not used in the analysis (eye movements seem to be removed via ICA, which doesn’t include those channels). Also, the authors state that a “semi-automatic” process will be used to remove bad channels (p.15) - what is this process?
Another thing that isn’t immediately clear from the report is how the SPN will be computed. The time frame and electrodes are given, which is great, but is the SPN the mean voltage difference across all electrodes of interest over the entire time period? Or are peak values calculated? How will the numbers that go into the repeated measures ANOVA for each participant be calculated? I could not determine this from the report or, indeed, from the pipeline code provided. It is important that this is clearly set out in Stage 1, because, as we all know, EEG analysis provides a large number of researcher degrees of freedom in this area!
Overall this is a very interesting study and the report is very well set out.