Recommendation

Evaluating the role of the centro-parietal positivity in perceptual decision-making

Based on reviews by April Shi Min Ching, Cassie Short and Caleb Stone
A recommendation of:

Is CPP an ERP marker of evidence accumulation in perceptual decision-making? A multiverse study

Submission: posted 03 September 2024
Recommendation: posted 12 May 2025, validated 12 May 2025
Cite this recommendation as:
Chambers, C. (2025) Evaluating the role of the centro-parietal positivity in perceptual decision-making. Peer Community in Registered Reports. https://rr.peercommunityin.org/articles/rec?id=892

Recommendation

One of the hallmarks of adaptive behaviour is perceptual decision-making: the ability to select and integrate sensory inputs to guide judgments and actions. For decades, evidence accumulation models (EAMs) have been influential in shaping our understanding of perceptual decision-making, proposing that evidence for different choices builds over time until a threshold is reached and a decision is triggered. At the same time, reliable biomarkers of evidence accumulation have been observed through single-unit recordings in non-human primates, particularly in parietal, frontal and premotor regions. In humans, the centro-parietal positivity (CPP) – a positive deflection in the event-related potential (ERP) waveform – has emerged as a candidate proxy of perceptual decision-making, exhibiting accumulate-to-bound dynamics, modality independence, and sensitivity to the strength of sensory evidence. However, much remains to be understood about the generalisability of the CPP across different behavioural contexts, from simple decision-making tasks (e.g. motion discrimination) to more complex judgments (e.g. emotion discrimination).
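
To make the accumulate-to-bound idea concrete, here is a minimal simulation sketch of a single-trial drift-diffusion process (all parameter values are illustrative and not drawn from the study):

```python
import numpy as np

def simulate_ddm_trial(drift=0.3, threshold=1.0, noise=1.0,
                       dt=0.001, max_t=3.0, rng=None):
    """Simulate one drift-diffusion trial: evidence accumulates at mean
    rate `drift` plus Gaussian noise until it crosses +/- `threshold`.
    Returns (choice, reaction_time); choice is None if no bound is hit."""
    rng = rng or np.random.default_rng()
    evidence, t = 0.0, 0.0
    while t < max_t:
        evidence += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if evidence >= threshold:
            return 1, t   # upper bound crossed (e.g. "rightward" choice)
        if evidence <= -threshold:
            return 0, t   # lower bound crossed (e.g. "leftward" choice)
    return None, t        # no decision within the time limit

# Simulate 500 trials; higher drift rates yield faster, more accurate choices.
trials = [simulate_ddm_trial(rng=np.random.default_rng(i)) for i in range(500)]
```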
 
In the current study, Liu et al. (2025) will test the robustness of the CPP in human decision-making through secondary analysis of existing datasets. Broadly, they hypothesise that if the CPP is a reliable and generalisable biomarker, then it should covary statistically with evidence accumulation, both at the trial level and across tasks of increasing complexity. To test this prediction, the authors will undertake joint modelling of behavioural and ERP data using drift-diffusion modelling, capturing variability within and between trials to estimate the relationship between the CPP and drift rate. In addition, the authors will use a multiverse analysis to test the robustness of the observed relationships across a range of analysis choices, with decision nodes focusing on the choice of CPP metric (build-up rate, amplitude, or peak amplitude) and the pooling method used in statistical analysis (trial-wise or bin-wise). Overall, the study promises to offer a fresh perspective – both methodologically and theoretically – on how the CPP relates to perceptual decision-making.
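
The authors' exact model specification lives in the Stage 1 protocol, but the general shape of such a trial-level linkage can be sketched with the HDDM package, in which a trial-wise covariate is entered into a regression on drift rate. In this sketch the file name and the column name `cpp_buildup` are hypothetical placeholders, not the authors' actual variable names:

```python
import hddm  # hierarchical drift-diffusion modelling package

# Hypothetical trial-level data: one row per trial, with columns
# subj_idx, rt, response, and a trial-wise CPP measure ('cpp_buildup').
data = hddm.load_csv('trial_data.csv')  # placeholder file name

# Regress drift rate (v) on the trial-wise CPP build-up rate; a reliably
# positive coefficient would indicate the hypothesised CPP-drift coupling.
model = hddm.HDDMRegressor(data, 'v ~ cpp_buildup')
model.sample(5000, burn=1000)  # draw posterior samples via MCMC
model.print_stats()
```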
 
The Stage 1 manuscript was evaluated over two rounds of in-depth review. Based on detailed responses to the reviewers' comments, the recommender judged that the manuscript met the Stage 1 criteria and therefore awarded in-principle acceptance (IPA).
 
URL to the preregistered Stage 1 protocol: https://osf.io/vwmzh
 
Level of bias control achieved: Level 3. At least some data/evidence that will be used to answer the research question has been previously accessed by the authors (e.g. downloaded or otherwise received), but the authors certify that they have not yet observed any part of the data/evidence.
 
References
 
Liu, Y., Yan, C., & Chuan-Peng, H. (2025). Is CPP an ERP marker of evidence accumulation in perceptual decision-making? A multiverse study. In principle acceptance of Version 4 by Peer Community in Registered Reports. https://osf.io/vwmzh
Conflict of interest:
The recommender in charge of the evaluation of the article and the reviewers declared that they have no conflict of interest (as defined in the code of conduct of PCI) with the authors or with the content of the article.

Reviewed by , 04 Apr 2025

I have examined the authors' responses and the revised article. The authors have addressed my comments thoroughly, providing a better explanation of the study's motivation and further detailing and improving the analysis pipeline. I thank the authors for addressing all reviewers' points and look forward to the findings.


Evaluation round #1

DOI or URL of the report: https://osf.io/p6aum

Version of the report: 1

Author's Reply, 17 Mar 2025

Decision by C. Chambers, posted 06 Dec 2024, validated 06 Dec 2024

I have now obtained three very constructive and helpful evaluations of your Stage 1 submission. As you will see, the reviews are broadly positive and align with my own opinion. It is great to see a Stage 1 RR proposing multiverse analysis, which in my view is an under-utilised technique in our field.
 
The reviewers highlight a number of areas that would benefit from revision, including the strength of the study rationale (including links to relevant background literature), the precision of the hypotheses (hypothesis 1 especially), the clarity and precision of the overall inferential chain, the level of detail describing the multiverse analysis, and the control of potential bias due to prior data observation and analysis.
 
Overall, there is work to be done to reach IPA but I believe it is readily achievable.
 
As you will be aware, we are now in the December shutdown period. From 1 Dec 2024 to 12 Jan 2025, authors are unable to submit new or revised submissions. However, given the delays in handling your submission (particularly in assigning a recommender), I am going to give you the opportunity to resubmit despite the shutdown. You won't be able to do this the usual way. Instead, if you wish to submit your revision before 12 January, please email us (at contact@rr.peercommunityin.org) with the following:
 
  • A response to reviewers (attached to the email as a PDF or Word document)
  • A tracked-changes version of the revised manuscript (attached to the email as a PDF or Word document)
  • The URL to a completely clean version of the revised Stage 1 manuscript on the OSF

In the subject line of the email please state the submission number (#892) and title. We will then submit the manuscript on your behalf.
 
Of course, there is no pressure to submit a revised manuscript and response prior to 12 January. So if you would like to take the extra time, feel free to resubmit the normal way once the shutdown period ends.
 
I hope you find the enclosed reviews helpful and look forward to receiving your revised manuscript in due course.

Reviewed by , 05 Dec 2024

Reviewed by , 08 Nov 2024

Thank you for the opportunity to contribute a Stage 1 review of the manuscript ‘Is CPP an ERP marker of evidence accumulation in perceptual decision-making? A multiverse study’. To adhere to the advice on key issues to consider at Stage 1 provided by the peer community site, I have formatted my review into two sections: (1) responses to each key issue, and (2) additional comments on the planned multiverse analysis.

Key issues as recommended by the peer community site

1. ‘Does the research question make sense in light of the theory or applications? Is it clearly defined? Where the proposal includes hypotheses, are the hypotheses capable of answering the research question?’

a. The hypothesis noted in the final paragraph of the introduction is a well-defined and testable statement that aligns with the theoretical framework. However, the summary in Table 1 loses this precision. I recommend that the authors amend the question and hypothesis within Table 1 to align with the precision provided within the introduction. For example: question - Is CPP a consistent ERP marker for evidence accumulation at the trial level across multiple perceptual decision-making tasks?; hypothesis - If CPP is a generalisable ERP marker of evidence accumulation, then CPP build-up rate will show a statistically significant positive correlation with drift rate across multiple perceptual tasks.

2. ‘Is the protocol sufficiently detailed to enable replication by an expert in the field, and to close off sources of undisclosed procedural or analytic flexibility?’

a. It would be more transparent if the authors stated the decisions taken at the following points in the workflow: unacceptable task performance (if participants were not removed based on task performance, this should be stated explicitly; if they were, please report the threshold used); whether variables were normalized and/or centered; whether adjustments were made for multiple testing; whether bad channels in the EEG datasets were identified and, if so, how they were handled; and whether bad data segments in the EEG datasets were identified and, if so, how they were handled.
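
For illustration, this is the kind of explicit bad-channel and bad-segment documentation being requested, sketched with MNE-Python (the file name, flagged channel, and rejection threshold are placeholders, not choices taken from the manuscript):

```python
import mne

raw = mne.io.read_raw_fif('sub-01_task-motion_eeg.fif', preload=True)  # placeholder

# Bad channels: document how they are identified (e.g. visual inspection)
# and how they are handled (here, spherical-spline interpolation).
raw.info['bads'] = ['P3']                      # example channel flagged as bad
raw.interpolate_bads(reset_bads=True)

# Bad segments: document the rejection criterion applied during epoching.
events = mne.find_events(raw)
epochs = mne.Epochs(raw, events, tmin=-0.2, tmax=1.0,
                    reject=dict(eeg=150e-6),   # example peak-to-peak threshold
                    preload=True)
```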

3. ‘Is there an exact mapping between the theory, hypotheses, sampling plan (e.g. power analysis, where applicable), preregistered statistical tests, and possible interpretations given different outcomes?’

a. The recommended amendment to the hypothesis at point 1 above would improve the direct mapping of the theoretical background to the hypothesis. It is noted that the authors use previously collected datasets, so an a priori power analysis is not applicable. However, the authors could report a sensitivity analysis to determine the smallest effect size that the existing sample sizes could reliably detect with a desired level of power (e.g., 80%), or commit to calculating the observed power based on the effect size obtained after conducting the analyses. The statistical tests are specified in advance and align with the hypothesis.
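
A sensitivity analysis of the kind suggested here can be run with standard power routines; a minimal sketch using statsmodels, with a placeholder sample size of 30:

```python
from statsmodels.stats.power import TTestPower

# Sensitivity analysis: solve for the smallest standardised effect size
# (Cohen's d) detectable with n = 30 observations, alpha = .05 (two-sided)
# and 80% power. The sample size of 30 is a placeholder.
analysis = TTestPower()
min_effect = analysis.solve_power(effect_size=None, nobs=30,
                                  alpha=0.05, power=0.80,
                                  alternative='two-sided')
print(f"Smallest detectable effect size: d = {min_effect:.2f}")
```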

4. ‘For proposals that test hypotheses, have the authors explained precisely which outcomes will confirm or disconfirm their predictions?’

a. Yes.

5. ‘Is the sample size sufficient to provide informative results?’

a. As explained under point 3, this remains unclear until the authors either report a sensitivity analysis or commit to calculating the observed power.

6. ‘Where the proposal involves statistical hypothesis testing, does the sampling plan for each hypothesis propose a realistic and well justified estimate of the effect size?’

a. The authors analyse preexisting datasets. While they do not report the sampling approaches, they refer readers to the original studies for further details.

7. ‘Have the authors avoided the common pitfall of relying on conventional null hypothesis significance testing to conclude evidence of absence from null results? Where the authors intend to interpret a negative result as evidence that an effect is absent, have authors proposed an inferential method that is capable of drawing such a conclusion, such as Bayesian hypothesis testing or frequentist equivalence testing?’

a. They interpret the 95% highest density interval (HDI) of the posterior distribution for the effect of CPP build-up rate on drift rate, which allows probabilistic statements about parameter estimates rather than relying on p-values. The authors specify a criterion for concluding a positive effect: if the lower bound of the 95% HDI is above zero, they interpret this as evidence of a positive correlation between CPP and drift rate. This implies that a null result would not be interpreted as the absence of an effect, but rather as insufficient evidence to support a positive correlation.
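
In code, that decision rule reduces to checking the lower bound of the 95% HDI of the posterior draws for the CPP coefficient; a minimal sketch using ArviZ, with simulated draws standing in for the fitted model's posterior:

```python
import numpy as np
import arviz as az

# Hypothetical MCMC draws for the effect of CPP build-up rate on drift
# rate; in practice these would come from the fitted joint model.
posterior_samples = np.random.default_rng(0).normal(0.15, 0.05, size=4000)

lower, upper = az.hdi(posterior_samples, hdi_prob=0.95)
if lower > 0:
    print("95% HDI excludes zero: evidence for a positive CPP-drift relation")
else:
    print("95% HDI includes zero: insufficient evidence for a positive relation")
```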

8. ‘Have the authors minimised all discussion of post hoc exploratory analyses, apart from those that must be explained to justify specific design features? Maintaining this clear distinction at Stage 1 can prevent exploratory analyses at Stage 2 being inadvertently presented as pre-planned.’

a. The authors have detailed a clear, pre-specified approach, with justification for the structured analysis plan. They report a predefined criterion for evaluating the effect of CPP build-up rate on drift rate.

9. ‘Have the authors clearly distinguished work that has already been done (e.g. preliminary studies and data analyses) from work yet to be done?’

a. It is not immediately clear which analyses have already been conducted in prior publications using these datasets. Related to this, a clear justification is required for selecting these particular datasets from those available.

10. ‘Have the authors prespecified positive controls, manipulation checks or other data quality checks? If not, have they justified why such tests are either infeasible or unnecessary? Is the design sufficiently well controlled in all other respects?’

a. This is not reported in the present Stage 1 manuscript.

11. ‘When proposing positive controls or other data quality checks that rely on inferential testing, have the authors included a statistical sampling plan that is sufficient in terms of statistical power or evidential strength?’

a. This is covered in my responses to points 3 and 10.

12. ‘Does the proposed research fall within established ethical norms for its field? Regardless of whether the study has received ethical approval, have the authors adequately considered any ethical risks of the research?’

a. Yes, the proposed research falls within established ethical norms for the field.

Multiverse analysis

It is encouraging to see that the authors wish to report uncertainty and assess the robustness of results to variations in data analysis decisions. Multiverse analyses should be systematic and their decisions transparent. Therefore, the authors should: (1) specify which elements of the workflow are subjected to the multiverse analysis (i.e. two decision nodes in the analytical procedure are forked, whereas a multiverse analysis in general could also fork behavioural and EEG data preprocessing decisions); (2) for the forked decision nodes, be transparent about the options considered at each node, including those that were excluded, and the procedure used to decide which options to include, as this will help readers to identify potential bias in the reported multiverse of results; and (3) state whether the included options are equivalent (e.g. a principled multiverse; Del Giudice & Gangestad, 2021) and, if so, on which criteria they are deemed equivalent (e.g. comparable validity, examining the same effect, or estimating the effect with comparable precision).
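
To illustrate what a transparent specification of the forked nodes could look like, here is a schematic sketch of the two decision nodes described in the manuscript (the analysis function is a stub and the option names are illustrative):

```python
from itertools import product

def run_analysis(metric, pooling):
    """Stub for the joint behavioural-ERP model; returns a dummy record
    so that the forking structure itself is runnable."""
    return {'metric': metric, 'pooling': pooling, 'estimate': None}

# The two forked decision nodes described in the Stage 1 protocol. A fully
# transparent multiverse would also list the options that were considered
# but excluded at each node, with the rationale for each exclusion.
cpp_metrics = ['buildup_rate', 'amplitude', 'peak_amplitude']
pooling_methods = ['trial_wise', 'bin_wise']

results = {(m, p): run_analysis(m, p)
           for m, p in product(cpp_metrics, pooling_methods)}
```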

Reviewed by , 21 Oct 2024
