Recommendation

The visual cortex can maintain information for up to a second

based on reviews by Evie Vergauwe and Vincent van de Ven
A recommendation of:

Causal evidence for the role of the sensory visual cortex in visual short-term memory maintenance

Submission: posted 03 January 2023
Recommendation: posted 14 March 2023, validated 14 March 2023
Cite this recommendation as:
Dienes, Z. (2023) The visual cortex can maintain information for up to a second. Peer Community in Registered Reports, 100362. https://doi.org/10.24072/pci.rr.100362

Recommendation

According to the sensory recruitment framework, the visual cortex is at least partly responsible for maintaining information about elementary visual features in visual short-term memory. Could an early visual area, constantly taking in new information, really be responsible for holding information for up to a second? But conversely, could higher-order regions, such as frontal regions, really hold subtle sensory distinctions? The information must be held somewhere, yet the existing evidence is conflicting. Phylactou et al. (2023) addressed this question by applying transcranial magnetic stimulation (TMS) to disrupt early visual areas at intervals of up to a second after stimulus presentation, to determine the effect on visual short-term memory performance. In this way, they causally influenced the sensory cortex at the relevant times while tightening control over possible confounds in earlier research.
 
They found that TMS applied over the occipital cortex at each of 200 ms and 1000 ms after presentation of a brief visual stimulus disrupted stimulus detection in a visual short-term memory test. These findings support sensory recruitment, which claims that both perceptual and memory processes rely on the same neural substrates in the visual cortex.

The Stage 2 manuscript was evaluated by two expert reviewers. Based on detailed responses to the reviewers' comments, the recommender judged that the manuscript met the Stage 2 criteria for recommendation.
 
URL to the preregistered Stage 1 protocol: https://osf.io/empdt
 
Level of bias control achieved: Level 6. No part of the data or evidence that was used to answer the research question was generated until after Stage 1 IPA. 
 
List of eligible PCI RR-friendly journals:
 
 
References
 
1. Phylactou, P., Shimi, A. & Konstantinou, N. (2023). Causal evidence for the role of the sensory visual cortex in visual short-term memory maintenance, acceptance of Version 13 by Peer Community in Registered Reports. https://doi.org/10.31234/osf.io/64hdx
Conflict of interest:
The recommender in charge of the evaluation of the article and the reviewers declared that they have no conflict of interest (as defined in the code of conduct of PCI) with the authors or with the content of the article.

Evaluation round #4

DOI or URL of the report: https://doi.org/10.31234/osf.io/64hdx

Version of the report: 12

Author's Reply, 14 Mar 2023

Decision by Zoltan Dienes, posted 13 Mar 2023, validated 14 Mar 2023

You still haven't quite dealt with reviewer 1's point: you have kept the analyses that use an error term containing systematic variance from another factor. Remove these analyses and just report your model averaging.

In this sentence:

"a TMS site (ipsilateral vs contralateral) effect is evident and that timing differences are unlikely, as reflected by the evidence against a model which solely includes timing or a model that includes an interaction with timing, and the low posterior odds for an averaged model which includes the TMS factor."  This is for experiment 1. I take it the last "TMS factor" is "TMS timing factor"?  But when you average that BF is 0.73 for timing, which is rather far from the benchmark for evidence. The interaction BF is close enough, so you can conclude that there is moderate evidence against an interaction. Just leave it at that. Experiment 2 has good enough evidence against a timing effect.

The discussion spends a long time exploring this exploratory lack of a timing effect, taking it as established, on a par with your preregistered analyses. You need to moderate this. For example, the sentence "Experiments 1 and 2 provided evidence against a TMS timing effect, ..." should refer only to Experiment 2. You also need to explicitly label this finding as exploratory. You can keep the discussion of it, but add that, as an exploratory finding, it would be useful for future research to confirm it independently.
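To make reviewer 1's error-term point concrete, here is a minimal simulation sketch. It is not part of the manuscript's analyses: it uses an ordinary least-squares stand-in for the Bayesian repeated-measures ANOVA, and all numbers are invented. The point it illustrates is that when a model omits a factor carrying systematic variance (here, Site), that variance is absorbed into the error term used to assess the remaining factor (Timing).

```python
# Illustrative only: OLS stand-in for the rmANOVA; all numbers invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for s in range(24):                                           # 24 simulated subjects
    for site, site_eff in (("ipsi", 0.0), ("contra", 0.6)):   # real Site effect
        for timing in ("200ms", "1000ms"):                    # no Timing effect
            rows.append({"subj": s, "site": site, "timing": timing,
                         "dprime": 1.0 + site_eff + rng.normal(0, 0.4)})
df = pd.DataFrame(rows)

# Timing-only model: residual variance absorbs the systematic Site effect.
timing_only = smf.ols("dprime ~ timing", data=df).fit()
# Model with both factors: Site variance is modelled, so the error term is clean.
both = smf.ols("dprime ~ timing + site", data=df).fit()
print(f"Residual SD, Timing-only model:  {np.sqrt(timing_only.mse_resid):.2f}")
print(f"Residual SD, Timing+Site model:  {np.sqrt(both.mse_resid):.2f}")
```

The Timing-only model's residual SD is inflated by the unmodelled Site effect, which is why a test of Timing based on that error term understates the evidence.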


Evaluation round #3

DOI or URL of the report: https://doi.org/10.31234/osf.io/64hdx

Version of the report: 11

Author's Reply, 13 Mar 2023

Decision by Zoltan Dienes, posted 11 Mar 2023, validated 11 Mar 2023

We are almost there; I just have one query. In the discussion, you say "The Bayesian rmANOVA generated the highest BF10 for the model that included only the TMS site factor (BF10 = 3.46), showing that the observed data are better represented by considering the ipsilateral and contralateral differences. The model including only the TMS timing factor produced a very low BF10 (BF10 = .05)". As the first reviewer said, in effect, the last test includes variance from the first factor in its error variance; but we have just concluded that the first factor involves more than error variance. You address this here (and for the other analysis where this comes up) by averaging over models with and without the factor (if you stick with this, make it clearer in the legends for Tables 5 and 7). OK, but then your conclusions still need to follow from the same rules of inference you have been using, i.e. 1/3 < B < 3 is non-evidential, as the reviewer indicated. (I would feel slightly happier basing conclusions on a model with both main effects included at once; otherwise the average over models includes the case just mentioned, where the error variance is wrongly estimated. But as this is exploratory, I leave that detail to you.) To address this point, you could report just the one analysis that best estimates the error variance for each test, or average as you do; in any case, make sure the conclusions follow the rules of inference you have already decided on.
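For concreteness, here is a minimal sketch of the model-averaging step discussed above (an inclusion Bayes factor). The BF10 values for the Site-only and Timing-only models are the ones quoted in this letter; the Site + Timing value follows the first reviewer's reading of Table 4. The interaction model is omitted and equal prior model probabilities are assumed, so the result only approximates the manuscript's averaged BF.

```python
# Sketch of an inclusion Bayes factor for the Timing factor, by averaging
# over models with and without that factor. Model set is simplified
# (no interaction model); values as noted in the lead-in above.
models = {
    "null":        {"bf10": 1.00, "has_timing": False},  # BF10 = 1 by definition
    "site":        {"bf10": 3.46, "has_timing": False},
    "timing":      {"bf10": 0.05, "has_timing": True},
    "site+timing": {"bf10": 3.02, "has_timing": True},
}

# With equal prior model probabilities, posterior model probability is
# proportional to BF10, so the inclusion BF reduces to a ratio of sums.
with_timing = sum(m["bf10"] for m in models.values() if m["has_timing"])
without_timing = sum(m["bf10"] for m in models.values() if not m["has_timing"])
print(f"Inclusion BF for Timing: {with_timing / without_timing:.2f}")  # ~0.69
```

With these inputs the inclusion BF lands near 0.7, i.e. inside the 1/3 < B < 3 non-evidential range, which is exactly the rule-of-inference point being made above.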


Evaluation round #2

DOI or URL of the report: https://doi.org/10.31234/osf.io/64hdx

Version of the report: 10

Author's Reply, 03 Mar 2023

Decision by Zoltan Dienes, posted 16 Feb 2023, validated 16 Feb 2023

The reviewers are both very happy with your Stage 2 manuscript, with some minor points to think about. Indeed, I agree it is well written and makes a clear contribution. Concerning your discussion of the point that a stopping rule set to reach e.g. a BF of 3 (or 1/3) may leave other exploratory contrasts without evidence, or leave decisions lacking robustness: one solution is to stop at a higher threshold than the one decisions are based on; e.g. the stopping rule refers to 6 or 1/6, but decisions are based on 3 or 1/3. That simple modification allows such decisions to be robust to scientifically reasonable changes to the scale factor.
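A minimal simulation sketch of this decoupled rule: sampling stops at 6 (or 1/6), but conclusions are drawn at 3 (or 1/3). The Bayes factor below follows the JZS form of Rouder et al. (2009) for a one-sample test; the four-participant increment echoes the protocol's stopping rule, while the initial batch size, effect size, and sample cap are illustrative, not the study's design.

```python
# Illustrative sequential design: STOP at BF > 6 or < 1/6, DECIDE at 3 / (1/3).
import numpy as np
from scipy import stats
from scipy.integrate import quad

def jzs_bf10(t, n, r=0.707):
    """JZS Bayes factor BF10 for a one-sample t statistic (Rouder et al., 2009)."""
    df = n - 1
    null_like = (1 + t**2 / df) ** (-(df + 1) / 2)
    def integrand(g):
        k = 1 + n * g * r**2
        return (k**-0.5 * (1 + t**2 / (k * df)) ** (-(df + 1) / 2)
                * (2 * np.pi) ** -0.5 * g**-1.5 * np.exp(-1 / (2 * g)))
    alt_like, _ = quad(integrand, 0, np.inf)
    return alt_like / null_like

rng = np.random.default_rng(1)
data = list(rng.normal(0.4, 1.0, size=12))       # initial batch; true effect d = 0.4
while True:
    t = stats.ttest_1samp(data, popmean=0).statistic
    bf = jzs_bf10(t, len(data))
    if bf > 6 or bf < 1 / 6 or len(data) >= 100:  # stopping rule at 6 / (1/6)
        break
    data.extend(rng.normal(0.4, 1.0, size=4))     # recruit four more, as in the protocol
decision = "H1" if bf > 3 else ("H0" if bf < 1 / 3 else "no decision")
print(f"n = {len(data)}, BF10 = {bf:.2f}, decision at 3 / (1/3): {decision}")
```

Because sampling only stops once the BF clears 6 (or 1/6), a decision drawn at the weaker 3 (or 1/3) benchmark retains a margin, making it robust to reasonable changes in the prior scale factor.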

Reviewed by , 08 Feb 2023

It was very exciting to see the results of this study, and overall, I think the authors did a great job! They nicely followed the preregistered plan, and I find their conclusions related to the preregistered analyses justified. There are a few issues that, in my opinion, need to be considered before recommending this manuscript:

1) Some small changes were made to the introduction, whereby additional references were added. I do not see a particular problem with it, but I am not sure to what extent changes like these can be made to the introduction at this point in the process.

2) One of the footnotes mentions “The number of trials in Experiments 1 and 2 differ from the registered trial numbers, due to correcting a mistake in the calculation of the required trials (360 instead of 432 in Experiment 1 and 512 instead of 572 in Experiment 2). This error was corrected prior to any data collection and after receiving the recommender’s approval on 22 June 2022”. I think it would be good to specify explicitly that the modification is, in fact, an *increase* in the number of trials (and thus more data). Also, for Experiment 2, the main text mentions 576 trials, but the footnote mentions 572 trials – this should be the same number.

3) On p. 21, the following sentence is hard to understand (should the second part read "we would recruit" instead? Or should the "if" be a "because"?): "If any of the three BFs did not reach the stopping rule of > 3 or < 1/3, we recruited four more participants and repeated the analyses"

4) Figures 3 and 5: the names of the different conditions are used as the titles of the Y axes, which I find confusing. The title of the Y axis should be the measure that is displayed (here: Bayes factor); the names of the conditions can go above or below the graphs.

5) There are some model comparisons and interpretations of results that deserve some attention (these concern only exploratory analyses): 

5.1: p. 33: “The Bayesian rmANOVA generated the highest BF10 for the model that included only the TMS site factor (BF10 = 3.46), showing that the observed data are better represented by considering the ipsilateral and contralateral differences.” –> the fact that the best model of the data includes only the TMS site factor shows that the data are best explained by *only* considering the differences between the ipsilateral and contralateral sites.

5.2: p. 33: the authors conclude that the Timing factor is not adequate to explain the observed data, because the Timing-only model was much worse than the null model (Experiment 1). However, that may not be the most appropriate model comparison for assessing the evidence in the data for a main effect of Timing. Given that, for the main effect of Site, the authors start by stating the best model of the data (i.e., Site-only), it makes more sense to me to then examine how much worse this model fares once the Timing factor is added. Based on the values in Table 4, this would show that the data are inconclusive when it comes to the Timing factor: 3.46/3.02 = 1.15. There is a similar issue for Experiment 2, where the Timing-only model was compared to the null model to examine the main effect of Timing, rather than adding the Timing factor to the best model. Adding the Timing factor to the best model gives some evidence against a main effect of Timing (9.06/1.79 = 5.06) in Experiment 2. Related to these points, I think that some statements in the discussion may be too strong when it comes to the absence of a Timing main effect. Furthermore, it seems that some of the statements made in the general discussion about Timing effects are actually about the *interaction with Timing*, rather than the main effect of Timing. If that is indeed the case, the BFs of the relevant model comparisons should also be reported in the results section.
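Since every BF10 in such a table is expressed against the same null model, the comparison the reviewer describes reduces to a ratio of table entries. A minimal sketch with the values quoted in this review:

```python
# Bayes factors against a common null are transitive: the evidence for
# adding Timing to the best model is the ratio of the two models' BF10
# values. Numbers are those quoted in this review (Experiment 1 from
# Table 4; Experiment 2 values as quoted above).
bf_site = 3.46          # Site-only vs null, Experiment 1
bf_site_timing = 3.02   # Site + Timing vs null, Experiment 1
print(f"Exp 1, adding Timing: BF = {bf_site_timing / bf_site:.2f} "
      f"({bf_site / bf_site_timing:.2f} against) -> inconclusive")

bf_best = 9.06          # best model vs null, Experiment 2
bf_best_timing = 1.79   # best model + Timing vs null, Experiment 2
print(f"Exp 2, adding Timing: {bf_best / bf_best_timing:.2f} against a Timing effect")
```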

Reviewed by , 16 Feb 2023

I reviewed the latest version of a Stage 2 manuscript of the authors' study, which comprises 2 experiments that utilize double-pulse TMS to modulate visual cortical activity during the maintenance phase of a change detection task. The methodology and planned analyses were already approved at Stage 1, in which I was also involved in one of the reviewing rounds. 

At this moment in the manuscript's development, I can be brief in stating that I find the manuscript a very impressive and enjoyable read. The Introduction is authoritative and well balanced, as the authors carefully describe the background literature in a detailed and disciplined manner, with emphasis not only on previous TMS work but also on the recent debate about whether the visual cortex supports (short-term) memory formation/consolidation. The Methods are minutely described in relevant detail, and experimental design decisions are well motivated. The Results are presented in a thorough, rigorous, but also easily readable fashion, and I find the empirical evidence strong and convincing. The two experiments provide a strong combination of evidence, replicating the main results as well as extending the methodology to sham stimulation, which is an important but (in my view) also somewhat controversial procedure in TMS research. The Figures are clear to me (although the lettering is sometimes a bit hard to read in the provided PDF). The Discussion carefully considers the findings in light of previous research and points to relevant future steps to gain further insight into how we retain visual information in memory. Overall, the manuscript is long but reads very easily. In this sense, I find the manuscript a strong and important contribution to the current literature and a stepping stone to future implementations.

The only two comments I would like to make are:

1) The results for the 1000 ms stimulation timepoint are less strong (BF just above 3) in comparison to the other timepoints. While this can surely be considered evidence for a "late" consolidation window in the visual cortex (as the authors seem to do), I would have liked to see a bit more consideration of this effect, especially in light of some studies showing smaller (or null) effects for late timepoints (in TMS as well as in visual memory masking). Perhaps, at a BF just above 3, the glass is proverbially half full or half empty, depending on one's preference in this matter.

2) In the Discussion, I would have liked to see a reconsideration (however brief) of the current debate about the role of the visual cortex in short-term consolidation/retention in light of the current findings. In her review work, Xu (and colleagues) includes previous TMS findings in her argument that the visual cortex is not suited to store visual memories. One could perhaps link back to her argumentation and consider to what extent that view is modulated by the current evidence.

However, these comments are optional for the authors to consider. I do not wish to hold back this well-written and elaborately described manuscript on just these points.


Evaluation round #1

DOI or URL of the report: https://doi.org/10.31234/osf.io/64hdx

Version of the report: 10

Author's Reply, 03 Jan 2023

Download tracked changes file

Dear Recommender,

The tracked changes file is now attached.

Yours sincerely,

Phivos Phylactou

Decision by Zoltan Dienes, posted 03 Jan 2023, validated 03 Jan 2023

Thank you for your Stage 2 submission. Could you provide both a) the clean copy of the Stage 2 manuscript, as you have done, and b) a tracked-changes copy, i.e. showing the changes from the Stage 1 manuscript, so that changes to the introduction etc. can be quickly evaluated?

 

best

Zoltan
