
Functional specificity of cognitive updating in human prefrontal cortex

Chris Chambers, based on reviews by Phivos Phylactou
A recommendation of:

Causal dynamics of task-relevant rule and stimulus processing in prefrontal cortex

Submission: posted 28 September 2023
Recommendation: posted 06 June 2024, validated 06 June 2024
Cite this recommendation as:
Chambers, C. (2024) Functional specificity of cognitive updating in human prefrontal cortex. Peer Community in Registered Reports. https://rr.peercommunityin.org/PCIRegisteredReports/articles/rec?id=570

Recommendation

One of the hallmarks of cognitive control is the ability to flexibly update attention and action when goals change. The prefrontal cortex has long been identified as important for such updating, but much remains to be understood about the anatomical and temporal mechanisms that support cognitive flexibility within prefrontal networks. In the current study, Jackson et al. (2024) build upon insights from recent transcranial magnetic stimulation (TMS) and neuroimaging studies to investigate the critical role of the prefrontal cortex in updating goals and selecting behaviourally relevant stimuli.
 
To measure updating, the authors deploy an attentional switching paradigm in which participants selectively attend to one feature of a novel object (colour or form) while ignoring the other feature. On each trial, a symbol (called a rule cue) indicates whether to attend to the colour (green or blue) or to the form (X or non-X) of the upcoming object. By mapping each stimulus response to a separate button press (two buttons for the two colours; two buttons for the two forms), the authors can then categorise different types of behavioural errors – focusing especially on attending incorrectly to the task-irrelevant feature (rule error) vs. applying the correct rule but failing to correctly identify the task-relevant feature (stimulus error). If disruption of a specific cortical region causes a selective increase in one type of error, this would indicate that the stimulated region is important for either rule processing or stimulus processing.
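 
To make the error taxonomy concrete, here is a minimal sketch (in Python, with hypothetical cue symbols and button mappings, not the authors' actual task or analysis code) of how a single trial's button press could be classified:

```python
# Illustrative classification of one trial's button press into the error
# taxonomy described above. The cue symbols and the four-button mapping
# are hypothetical placeholders, not the authors' design.

CUE_TO_FEATURE = {"&": "colour", "!": "form"}  # hypothetical rule-cue symbols

BUTTON_TO_RESPONSE = {            # hypothetical one-button-per-value mapping
    1: ("colour", "green"),
    2: ("colour", "blue"),
    3: ("form", "X"),
    4: ("form", "non-X"),
}

def classify_trial(cue, stimulus, button):
    """Return 'correct', 'rule_error', or 'stimulus_error'.

    stimulus: dict of the object's features,
              e.g. {"colour": "green", "form": "non-X"}
    """
    relevant = CUE_TO_FEATURE[cue]                 # feature the rule selects
    resp_feature, resp_value = BUTTON_TO_RESPONSE[button]
    if resp_feature != relevant:
        return "rule_error"                        # attended the wrong feature
    if resp_value != stimulus[relevant]:
        return "stimulus_error"                    # right rule, wrong identification
    return "correct"

# Example: cue says attend colour, object is green, but the participant
# presses a form button -> classified as a rule error.
assert classify_trial("&", {"colour": "green", "form": "X"}, 3) == "rule_error"
```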
 
The proposal includes a number of key features that add depth and rigour to the investigation. First, to probe the anatomical specificity of cognitive control, the authors will contrast the effect of TMS delivered to different prefrontal regions that reside within different networks and may have divergent roles in cognitive control: the dorsolateral prefrontal cortex (dlPFC, part of the multiple-demand network) and the dorsomedial prefrontal cortex (dmPFC, part of the default mode network). Moreover, unlike many previous TMS studies, the authors will use electric field modelling to normalise cortical stimulation strength between regions, enabling a more controlled anatomical comparison. Second, since the task involves responding to a rule cue and then selectively attending to a task-relevant feature, it is likely that a particular brain region could be selectively critical at a specific time – for instance, if dlPFC were important for rule processing then it should only be necessary shortly after (or around) presentation of the rule cue. To capture the temporal specificity of cortical involvement, the authors will apply a short burst of TMS at different times, beginning either +150 ms after the cue or +700 ms, during stimulus processing. In a preliminary study, the authors used magnetoencephalography (MEG) in combination with the same behavioural task and multivariate pattern analysis (MVPA) to identify these epochs for TMS. Finally, the experiment includes a range of additional control conditions and quality checks to rule out alternative explanations of potential findings, such as TMS impairing perception of the rule cue rather than implementation of the rule, and the effect of peripheral TMS artefacts. Overall, the study promises to reveal a range of intriguing new insights into the time course and anatomical specificity of cognitive updating, with implications for theories of prefrontal cortical function.
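 
By way of illustration, the intensity-matching logic behind E-field normalisation can be sketched as follows (a simplified toy calculation with invented field values, not the authors' modelling pipeline, which would in practice rely on individualised head models and dedicated simulation software):

```python
# Sketch of E-field-based intensity matching between two stimulation sites.
# All numbers are invented for illustration.

def matched_intensity(e_per_mso, target_field):
    """Stimulator output (% maximum stimulator output, MSO) needed to
    induce `target_field` (V/m) at a cortical target, given the modelled
    field `e_per_mso` (V/m per 1% MSO) at that target."""
    return target_field / e_per_mso

# Hypothetical modelled fields at the two cortical targets:
e_dlpfc = 0.80   # V/m induced per 1% MSO at the dlPFC target
e_dmpfc = 0.65   # V/m per 1% MSO at the dmPFC target

target = 60.0    # desired field strength at both targets, V/m (illustrative)

print(f"dlPFC: {matched_intensity(e_dlpfc, target):.1f}% MSO")  # 75.0% MSO
print(f"dmPFC: {matched_intensity(e_dmpfc, target):.1f}% MSO")  # 92.3% MSO
```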
 
The Stage 1 manuscript was evaluated over one round of in-depth review. Based on detailed responses to the reviewers' and recommender's comments, the recommender judged that the manuscript met the Stage 1 criteria and therefore awarded in-principle acceptance (IPA).
 
URL to the preregistered Stage 1 protocol: https://osf.io/94sgu (under temporary private embargo)
 
Level of bias control achieved: Level 6. No part of the data or evidence that will be used to answer the research question yet exists and no part will be generated until after IPA.
 
List of eligible PCI RR-friendly journals:
 
 
References
 

Jackson, J. B., Lin, R., & Woolgar, A. (2024). Causal dynamics of task-relevant rule and stimulus processing in prefrontal cortex. In principle acceptance of Version 2 by Peer Community in Registered Reports. https://osf.io/94sgu

Conflict of interest:
The recommender in charge of the evaluation of the article and the reviewers declared that they have no conflict of interest (as defined in the code of conduct of PCI) with the authors or with the content of the article.

Reviews

Evaluation round #1

DOI or URL of the report: https://osf.io/72erq?view_only=b5772a79882a483380c507f67198ed6f

Version of the report: 1

Author's Reply, 13 May 2024

Decision by Chris Chambers, posted 21 Dec 2023, validated 21 Dec 2023

At the outset, please accept my apologies for the slow handling of your submission. I originally secured two reviewers. One high-quality review (enclosed) was promptly received but the other review was repeatedly delayed before the reviewer eventually became non-responsive altogether. It is very unfortunate when this happens as reviewer ghosting can drag out the evaluation process by weeks, which is particularly costly for Stage 1 RRs.
 
Given the quality of the one review obtained, in the interests of avoiding further delay I have decided to proceed with an interim decision. To substitute for the missing review, I have obtained an additional evaluation as a sanity check from the Managing Board (provided by Yuki Yamada) and I have also read your submission closely as it falls within my own specialism in cognitive neuroscience.
 
Overall I must say that I found this to be a very impressive TMS study that covers many methodological bases I often find missing in comparable designs, including careful site localisation, matching of effective stimulation intensity between sites (interestingly using a field model rather than the more practical approach of Stokes et al., 2013 -- which is perfectly fine but caught my eye), and use of MEG to inform the timing of stimulation. I also found the theoretical rationale for the design (and particularly the DLPFC component of the experiment) to be convincing. There is much to like about your proposal and I believe it is a strong candidate for eventual IPA.
 
Among the comments and Managing Board evaluation you will find some recurring themes, with the in-depth review offering a range of constructive suggestions for improving or clarifying the procedures and analysis plans. One presentational issue is the sheer quantity of hypotheses. I don't necessarily see this as a roadblock provided they are all clearly specified and justified (and you believe that all are pivotal for driving the conclusions, in which case they should all be confirmatory and prespecified rather than left for exploratory analyses). Your design table does a good job of making clear how you will interpret different outcomes. I do think, though, that there is some merit in considering whether some rationalisation of hypotheses could be beneficial, possibly with changes to the analysis plans to home in on pairwise comparisons of interest from the outset (as suggested by the reviewer). You may prefer to rebut this concern, and I am very happy to hear such a rebuttal provided you can satisfactorily resolve the various queries about clarity and rationale.
 
I include below some comments based on my own reading ("recommender comments") and the general Managing Board evaluation.
 
I am mindful of the importance of timely evaluation of Stage 1 submissions, so although the revisions required may seem moderate-to-substantial, given my familiarity with this topic I will seek further specialist review of your revised manuscript only if I feel certain points have not been thoroughly addressed. Please note that PCI RR is currently in the Dec shutdown period so the earliest you will be able to submit your revised manuscript will be 10 January.
 
Recommender comments
 
1. What happens if participants respond during or before the late TMS? Will TMS still be administered? How (if at all) will this be taken into account in the analysis? How was RT taken into account in the MEG analysis? It strikes me as a significant interpretative concern if the TMS is delivered during or after the response is executed, as it logically would be unable to influence cognition.
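 
For illustration, one simple way such trials could be flagged (a sketch with hypothetical column names and timings, not the authors' pipeline):

```python
import pandas as pd

# Hypothetical trial log; times in ms relative to rule-cue onset.
trials = pd.DataFrame({
    "rt": [650, 900, 1100],        # response time on each trial
    "tms_onset": [700, 700, 700],  # scheduled onset of the late TMS train
})

# Flag trials in which the response preceded the TMS train: on these
# trials the stimulation cannot have influenced the recorded behaviour,
# so they could be excluded or analysed separately.
trials["responded_before_tms"] = trials["rt"] <= trials["tms_onset"]
print(trials)
```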
 
2. Please clarify the timing of TMS pulse trains as there appears to be potential discrepancy between the details in Figure 2 (trains starting at 250ms or 800ms) and the description on p17 (trains starting at 150ms and 700ms).
 
3. It seems to me that if DLPFC or DMPFC TMS impairs attentional selection (or even perception) of the cue, it could produce a rule error (or RT slowing) without affecting rule processing per se. Therefore I find myself wondering if the design would benefit from an additional negative control to confirm that prefrontal stimulation leaves perception/attention of the cue unaffected. I will leave you to consider how best to achieve this, but one possibility could be to insert some trial blocks in which participants need to discriminate the cue type as quickly as possible (e.g. & vs ! or $ vs %), and a Bayesian t-test could be used to search for any effect of active vs sham TMS on RT and error rates. I note that you do give participants the option to "press a fifth button with their left hand if they did not see the stimulus or the rule symbol", which would capture a very large effect of TMS on lower-level processes, but any such disruption of attention/perception is likely to be too subtle to be detected using such a response choice. In general there is a risk, as with all TMS studies, of assuming that because the cognitive task involves quite high-level processing, any TMS-induced deficits observed on the task must originate at a similarly high level, when it is possible that lower-level disruption has knock-on effects. These potential lower-level causes need to be identified and controlled as much as possible.
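 
For illustration, the suggested control analysis could be implemented along these lines (a sketch using the pingouin library, whose default t-test output includes a JZS Bayes factor; the data here are simulated placeholders, not real measurements):

```python
import numpy as np
import pingouin as pg

rng = np.random.default_rng(0)

# Placeholder per-participant mean RTs (ms) on the hypothetical
# cue-discrimination control blocks; in a real study these would come
# from the task logs.
rt_active = rng.normal(480, 40, size=24)  # active TMS
rt_sham = rng.normal(475, 40, size=24)    # sham TMS

# Paired Bayesian t-test: BF10 < 1/3 would support the null (no effect
# of TMS on cue perception); BF10 > 3 would flag a low-level confound.
res = pg.ttest(rt_active, rt_sham, paired=True)
print(res["BF10"])
```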
 
4. Please fully specify the interpretative consequences of any differences between sham vs active TMS in the stimulation artefact analyses (H37). You note that it will weaken the interpretation of the results (which is an important starting point), but it is crucial to make clear by how much it will do so. In other words: which outcomes of this analysis (if any) would render the results of the main hypotheses completely inconclusive? Without a clear and precise interpretative plan, I fear it will be highly tempting to dismiss any artefact differences. Knowing how much work goes into such large-scale TMS studies, I know I would certainly be tempted to do so myself!
 
5. Will participants wear hearing protection (e.g. ear plugs)?
 
6. Have you done any piloting to explore the risk of blink artefacts due to facial nerve stimulation? In our own studies we have sometimes found that some participants are susceptible to these artefacts, and unfortunately timed blink artefacts could produce behavioural results that look like those produced by cognitive interference (particularly for the early TMS epoch). If you have eye tracking available, this would be an ideal use case for detecting and excluding any trials in which blinks occurred during the cue/stimulus presentation. At a minimum, it may be a good idea to check in session 2 that the active TMS doesn't cause blinks in each participant.
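 
For illustration, a minimal sketch of the kind of blink-overlap check I have in mind (hypothetical timings and data structures, assuming blink intervals exported from the eye tracker):

```python
def blink_overlaps_window(blinks, win_start, win_end):
    """True if any blink interval overlaps the cue/stimulus window.

    blinks: list of (onset, offset) pairs in ms relative to cue onset,
            as exported from the eye tracker.
    """
    return any(onset < win_end and offset > win_start
               for onset, offset in blinks)

# Example: a blink spanning 180-320 ms overlaps a hypothetical cue window
# of 0-250 ms, so the trial would be flagged for exclusion.
assert blink_overlaps_window([(180, 320)], 0, 250)
```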
 
7. A general comment: given the complexity of the design, please go through everything and check that the exclusion (and participant replacement) criteria are as comprehensive as possible, as these are generally not possible to change for confirmatory analyses after Stage 1 in-principle acceptance.
 
Managing Board review (provided by Yuki Yamada)
The methods are described in great detail, with technical and skillful information provided, and I could not detect any major problems here. However, I felt that there are too many hypotheses. In confirmatory research, hypotheses to be tested need to be theoretically justified and validated, but I doubt that all 40+ hypotheses here have such a background. I rather got the impression that this study is exploratory in nature. It would be good if the authors could clarify whether this study is exploratory or confirmatory in style. Regarding the sample size, I could not find any clear rationale for why the minimum sample size should be 24. Also, there is a discrepancy between the Participants section (N=60) and the Proposed analyses section (N=56) regarding the maximum sample size.

Reviewed by Phivos Phylactou, 12 Nov 2023