Recommendation

Putting the Expected Value of Control (EVC) theory to the test in explaining habitual action

Chris Chambers, based on reviews by 2 anonymous reviewers
A recommendation of:

Motivational Control of Habits: A Preregistered fMRI Study

Submission: posted 05 October 2021
Recommendation: posted 08 February 2022, validated 08 February 2022
Cite this recommendation as:
Chambers, C. (2022) Putting the Expected Value of Control (EVC) theory to the test in explaining habitual action. Peer Community in Registered Reports. https://rr.peercommunityin.org/PCIRegisteredReports/articles/rec?id=140

Recommendation

What are the neurocognitive mechanisms underlying the formation of habits? In this Stage 1 Registered Report, Eder and colleagues propose an fMRI study to test a key prediction of the Expected Value of Control (EVC) theory: that the dorsal anterior cingulate cortex (dACC) – a region heavily implicated in reward processing, cognitive control, and action selection – will show increased activity during the presentation of Pavlovian cues that are associated with devalued outcomes. In combination with a series of behavioural positive controls, this observation would provide evidence in support of EVC theory, whereas its absence would instead support alternative accounts proposing that habits are independent of outcome representations.

The Stage 1 manuscript was evaluated over two rounds of in-depth specialist review. Based on detailed responses to the reviewers' comments, the recommender judged that the manuscript met the Stage 1 criteria and therefore awarded in-principle acceptance (IPA). This IPA recommendation was issued on 8 Feb 2022, and was initially provisional due to lack of ethics approval. The recommendation was then updated and confirmed on 21 Feb 2022 following confirmation that ethics approval had been granted.

URL to the preregistered Stage 1 protocol: https://osf.io/k8ygb

Level of bias control achieved: Level 6. No part of the data or evidence that will be used to answer the research question yet exists and no part will be generated until after IPA.

List of eligible PCI RR-friendly journals:

References

Eder, A. B., Dignath, D. & Gamer, M. (2022). Motivational Control of Habits: A Preregistered fMRI Study. Stage 1 preregistration, in principle acceptance of version 3 by Peer Community in Registered Reports. https://osf.io/k8ygb

Conflict of interest:
The recommender in charge of the evaluation of the article and the reviewers declared that they have no conflict of interest (as defined in the code of conduct of PCI) with the authors or with the content of the article.

Evaluation round #2

DOI or URL of the report: https://osf.io/52xpg/?view_only=551d1c39ed1f41549cbd35ec6b8bbed6

Version of the report: Prereg fMRI PIT revised_final

Author's Reply, 27 Jan 2022

Decision by Chris Chambers, posted 07 Jan 2022

The manuscript was returned to the two original reviewers, whose evaluations you will find below. As you will see, both reviewers continue to harbour major doubts concerning the theoretical framing of the study and severity of controls (Reviewer 1) and the sufficiency of the sample size (Reviewer 2).

In relation to the concern about sample size, I completely agree with the reviewer that point estimates of effect sizes from small pilot studies are likely to be unreliable, and thus the proposed sample size could very well generate inconclusive findings. On the other hand, I also do not see this as a fatal block for the current work. Given the costs involved, a sample size of N=38 is within conventional standards for fMRI studies (perhaps even toward the upper end), even if the "unit" of evidence produced by each individual study may turn out to be statistically weak. Indeed, due to the high risk of bias with fMRI studies (e.g. due to undisclosed analytic flexibility), in my view the RR model provides an especially important route for ensuring unbiased publication and a "course correction" for the neuroimaging literature.

I am, however, more worried about the sample sizes for the latter three hypotheses (manipulation checks), all of which are N=11 or fewer. That said, since they are manipulation checks, it is in some ways your risk to take as to whether you genuinely believe these samples are sufficiently large, because failure of a critical manipulation check is one of the few grounds for rejection at Stage 2 based on outcomes (see criterion 2A here). If you feel there is a risk of failure in any of these checks, I would strongly advise increasing the sample size to avoid a rare Stage 2 rejection.
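As a rough sensitivity check, the following minimal sketch (Python with statsmodels; the paired/one-sample t-test design is an assumption for illustration, not a design taken from the manuscript) shows the smallest standardized effect detectable with 80% power at the sample sizes in question:

```python
# Minimal sketch: smallest effect size detectable with 80% power in a
# paired/one-sample t-test (two-sided, alpha = .05), at the sample sizes
# discussed above. Purely illustrative; not the manuscript's design.
from statsmodels.stats.power import TTestPower

analysis = TTestPower()
for n in (11, 38):
    d_min = analysis.solve_power(nobs=n, alpha=0.05, power=0.80)
    print(f"n = {n}: minimum detectable effect dz = {d_min:.2f}")
```

Under these assumptions, N = 11 can only detect very large effects (roughly dz ≈ 0.9), which is why a failed manipulation check at that sample size would be hard to interpret.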

The concerns of Reviewer 1 are more fundamental to the acceptability of the current proposal, and I recognise they represent a strong difference of opinion between yourselves and the reviewer. In essence, the reviewer does not believe the design is capable of answering the research question, in part due to lack of an appropriate control but also a broader misalignment between the question/theory and the proposed methodology. It is difficult to see a way forward for this manuscript without resolving this disagreement in one way or another, and I am also keen to avoid overly burdening reviewers, especially when a discussion reaches a stalemate. Instead, I am going to offer you the opportunity to revise again. If you choose to simply rebut Reviewer 1's point rather than revise the design, I will seek additional specialist input to determine whether to accept or reject the proposal as submitted (with no further revision). However, if you believe you can sensibly revise the design to address the reviewer's concern once and for all, then I will invite Reviewer 1 back for a look before issuing a final decision. In either case, the next revision is pivotal and will determine whether in-principle acceptance is achievable.

Reviewed by anonymous reviewer 2, 06 Jan 2022

Reviewed by anonymous reviewer 1, 07 Jan 2022

I thank the authors for their revisions. I understand their approach and the limitations regarding the questions on the power calculations. Yet, since their target effect sizes are very large and their sample size will not allow conclusions about the likelihood of the null in case of negative results (“our planned sample size (n = 38) … clearly is insufficient for the detection of a true null effect in a Bayesian test”), there is in my view a non-negligible risk of the study being inconclusive. While the previous literature and the pilot study suggest that the effect size will indeed be very large, this evidence may still be weak/unreliable (see e.g. https://pilotfeasibilitystudies.biomedcentral.com/articles/10.1186/s40814-019-0493-7). If this is indeed the case, there is a real chance of ending up with smaller effect sizes than expected, which are unlikely to be detected with the proposed sample size (for a given alpha/power, the sample size needed to detect a large vs medium effect size varies greatly). It is thus possible that the results will be negative, which could then not be interpreted, making the study largely inconclusive. I acknowledge that this scenario is speculative and runs against both the previous study on which this research is based and the pilot data, but the probability of it taking place is, in my opinion, rather high. In addition, and critically, the relationship between the behavioral and MRI effect sizes is much more difficult to predict than suggested by the authors. There is again a non-negligible (and impossible to estimate) probability of the fMRI effect being missed, a negative result that again could not be interpreted. This would be very problematic since the novelty and originality of the study lie mainly in its fMRI part.
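To make the reviewer's Bayesian point concrete, here is a minimal sketch (Python with pingouin; the paired design and the t values are hypothetical illustrations, not figures from the manuscript) of the default JZS Bayes factor at n = 38:

```python
# Minimal sketch: JZS Bayes factors for a paired t-test at n = 38.
# By a common convention, 1/3 < BF10 < 3 is treated as inconclusive.
from pingouin import bayesfactor_ttest

n = 38  # planned sample size
for t in (0.0, 0.5, 1.0, 1.5, 2.0):  # hypothetical observed t values
    bf10 = bayesfactor_ttest(t, nx=n, paired=True)  # default Cauchy prior, r = 0.707
    if bf10 < 1 / 3:
        verdict = "evidence for H0"
    elif bf10 > 3:
        verdict = "evidence for H1"
    else:
        verdict = "inconclusive"
    print(f"t = {t:.1f}: BF10 = {bf10:.2f} ({verdict})")
```

Depending on the observed t, a non-significant result at this sample size can easily fall in the inconclusive band rather than providing evidence for the null.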

Because these issues of power cannot be solved in the authors' current setting, I have the feeling that this study is somewhat premature to be conducted as an RR; in my opinion, it cannot, for the moment, reach the minimal standards required for this publication route (i.e. all contrasts adequately powered, the possibility of interpreting null results, etc.). Perhaps an RR on the topic would be more appropriate once at least some functional data have been collected.


Evaluation round #1

DOI or URL of the report: https://osf.io/dyt96/?view_only=5414b4189b2e4880ac614ec9a27807bf

Author's Reply, 17 Dec 2021

Download author's reply | Download tracked changes file

Thank you very much for your invitation to revise & resubmit.

Please see the attached response letter for our response to the reviewers' comments and queries and the PDF with tracked changes made to the original version. We also uploaded the PDFs to our OSF repository.

https://osf.io/hbznq/?view_only=5414b4189b2e4880ac614ec9a27807bf

We hope you find our revisions appropriate!

With best wishes,

Andreas (Eder) & co-authors

Decision by Chris Chambers, posted 29 Oct 2021

Two expert reviewers have now evaluated the Stage 1 manuscript. As you will see, the assessments are constructive but also very critical of a range of aspects of the proposed study. The most serious concerns are the lack of an appropriate control condition to be able to answer the proposed research question (Reviewer 2), insufficient methodological detail, and lack of clear links between the hypotheses, sampling plans, and analysis plans (Reviewer 1).

Concerning the remarks from Reviewer 1 about statistical power and evidence thresholds: owing to its broad disciplinary remit, PCI RR does not set a minimum power requirement (see policy here), but please be aware that several of our PCI RR-friendly journals do set such requirements (e.g. Cortex requires a minimum power of 0.9 with alpha = .02 for all preregistered hypothesis tests; see the full list of journals here). Therefore, if you want to preserve the option of eventually publishing your RR in a PCI RR-friendly journal, over and above achieving a positive PCI RR recommendation, I suggest consulting the requirements of the potential outlet journals carefully. For the purposes of achieving a PCI RR recommendation, the main requirements are that the planned evidence strength is sufficient to provide a sensitive test of the hypothesis, and that the power analysis is linked precisely to the analysis plan. Reviewer 1 has significant concerns in this area, which you will need to address in revision.
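For concreteness, a minimal sketch (Python with statsmodels; the paired t-test design and the effect sizes are assumptions chosen purely for illustration) of how these thresholds translate into required sample sizes:

```python
# Minimal sketch: required n for a paired/one-sample t-test under the
# manuscript's target (power = .80, alpha = .05) versus a stricter
# journal standard (power = .90, alpha = .02, e.g. Cortex).
from statsmodels.stats.power import TTestPower

analysis = TTestPower()
for dz in (0.8, 0.5):  # hypothetical "large" and "medium" effect sizes
    n_target = analysis.solve_power(effect_size=dz, alpha=0.05, power=0.80)
    n_strict = analysis.solve_power(effect_size=dz, alpha=0.02, power=0.90)
    print(f"dz = {dz}: n = {n_target:.0f} (.80/.05) vs n = {n_strict:.0f} (.90/.02)")
```

Under these illustrative assumptions, the stricter thresholds raise the required sample size substantially, which is worth checking before committing to an outlet journal.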

The major shortcoming identified by Reviewer 2 is the lack of a suitable control condition. As a non-specialist in this particular field, it struck me that addressing this rather subtle but central concern may require additional pilot data, but I will leave this with you to consider. Addressing the issue to the reviewer's satisfaction will be critical to achieving Stage 1 in-principle acceptance (IPA).

For a regular completed manuscript, concerns of this magnitude would lead to outright rejection. But of course the benefit of the Registered Reports process is that we have the opportunity to make critical design improvements before they become outright roadblocks. For this reason I hope you will find the reviews helpful in updating your proposal and progressing toward IPA.

Reviewed by anonymous reviewer 1, 12 Oct 2021

The study addresses a valid scientific question, and the use of the RR publication route, especially for an fMRI study, should be commended. I would also like to emphasize that the authors invested substantial resources in running a pilot study. The logic behind the main functional hypothesis is strong, and the methods are appropriate to address it. Yet, my opinion is that the study does not fulfill the minimal standards for a Registered Report regarding outcome-neutral conditions, the level of detail in data (pre)processing, the power analyses, and the link between the hypotheses and the statistical contrasts used to test them. I urge the authors to check previously published (fMRI or behavioral) RRs to get a better sense of the level of detail required by this publication format, and the RR guidelines of some journals regarding the need for positive controls and power levels.

 

Power analysis:

- It is not perfectly clear which statistical designs were used for the power calculation (and throughout the study). Please include an exhaustive table listing all statistical designs and terms of interest together with the related power analyses. This table should also make it possible to verify that the statistical design from the source study (e.g. Eder & Dignath (2016)) is exactly the same as in the present study. The parameters of the power analysis should be reported in full detail to ensure its reproducibility (e.g. the G*Power parameters). A power calculation should be conducted for each term of interest, and no statistical test should be conducted that was not previously listed in the power analysis (and thus adequately powered).

- In the same vein, the specific statistical contrasts or terms should be linked to each hypothesis. This is partly done in the Data analyses section, but I think this aspect should be reported more systematically in a table that also includes the expected direction of the effects (and not simply as currently stated, e.g. p. 17: “It was hypothesized that response rates would be elevated by presentations of Pavlovian cues with common outcomes (i.e., a statistical interaction effect).”). Table 3 goes in the right direction and conforms with the PCI RR template, but is still lacking many details: many tests are conducted (e.g. Transfer Test 1, Transfer Test 2, etc.), and each of them should be subjected to a power analysis. If one of these tests is underpowered, it may limit the interpretability of the other parts of the study.

- No power analysis is conducted for the key fMRI part of the study. Since the functional investigation is the main (new) outcome of the study, this aspect should at least be discussed (cf. e.g. https://www.ohbmbrainmappingblog.com/blog/registered-reports-in-human-brain-imaging). An actual power analysis should in fact be feasible given the mostly ROI-based approach of the authors.

- A power of 0.8 with alpha = 0.05 is targeted, which appears to be below the usual standard for RRs (0.9 or 0.95 power, ideally with an alpha of 0.02). I am not sure what the PCI RR guidelines are on this point, but they are likely more stringent than 0.8/0.05, given that an IPA valid for several journals with more conservative thresholds is granted at the end of the Stage 1 review process.

- Relying on small-n pilot studies for power calculations can be problematic (see e.g. https://www.sciencedirect.com/science/article/pii/S002210311630230X); this aspect should at least be discussed. Likewise, given the typically over-inflated effect sizes in the previous literature, it is generally recommended for RRs to conduct power analyses on principled grounds, e.g. by determining a smallest effect size of interest (SESOI) and then establishing how this SESOI can be detected given the specific task/population intra- and inter-subject variance.

- Exploratory analyses are planned (e.g. p. 23). Such analyses should usually not be included in a Stage 1 RR, to avoid blurring the line between planned/confirmatory and unplanned contrasts. The authors should check the specific PCI RR guidelines or contact the editor in this regard.

- Equivalence tests (https://journals.sagepub.com/doi/10.1177/2515245918770963) or Bayes factor analyses should be planned, subjected to a power analysis, and conducted for hypotheses concerning a lack of difference (e.g. the control for the expected absence of a difference in ratings before devaluation; cf. also the point on outcome-neutral conditions regarding the decisions to take if the ratings actually differ); see the sketch after this list. See p. 3 of the PCI RR guidelines: https://rr.peercommunityin.org/about/full_policies#h_6720026472751613309075757
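A minimal sketch of such an equivalence test (Python with statsmodels TOST on simulated data; the ratings and the ±0.5 equivalence bounds are assumptions for illustration, not values from the manuscript):

```python
# Minimal sketch: two one-sided tests (TOST) for equivalence of
# pre-devaluation ratings of two outcomes, run on simulated data.
import numpy as np
from statsmodels.stats.weightstats import ttost_paired

rng = np.random.default_rng(0)
ratings_a = rng.normal(5.0, 1.0, size=38)              # simulated ratings, outcome A
ratings_b = ratings_a + rng.normal(0.0, 0.5, size=38)  # simulated ratings, outcome B

low, upp = -0.5, 0.5  # equivalence bounds: largest difference still considered negligible
p_tost, lower_test, upper_test = ttost_paired(ratings_a, ratings_b, low, upp)
print(f"TOST p = {p_tost:.3f}")  # p < .05 supports equivalence within the bounds
```

Pre-specifying the equivalence bounds (i.e., a SESOI for the rating difference) would make this check interpretable even when a conventional test is merely non-significant.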

 

Data processing:

- Please specify whether/how excluded participants will be replaced to maintain the minimally required sample size. For instance, if a participant must be excluded from one stage of the procedure (even if only for a technical reason), will they be excluded from the whole study? Likewise, details are missing on the criteria for excluding data at the group and individual levels (minimal response rate, range of interpretable mean key-press frequencies during the CS, etc.).

 

Outcome-neutral conditions:

- There is a crucial need to include outcome-neutral conditions, especially given the eight-stage procedure used in the study. The authors should include sanity checks to ensure that each stage produced the expected outcome, which in turn allows the outcome of the next stage to be interpreted. For example, what would the authors do if the devaluation procedure is not effective on the explicit ratings of the monetary outcomes? For stage 3, what would be decided if the assignment is still incorrect after several rounds of instrumental training? Etc.

 

Minor points:

- Abstract: too much jargon and overly complex/long sentences (it took me several readings to understand the last sentence); please simplify or expand to make it accessible to a broader readership.

- Please report information on the data distributions / individual data points in the figures (e.g. Fig. 2).

- I found the introduction well written and clear, but too long (2200 words!). I think a more focused review of the current literature on the very specific functional hypothesis would be more appropriate for an experimental study, and more in line with usual journal guidelines. Most notably, I am not sure it is worth describing the animal literature in so much detail in the context of the present human fMRI study.

 

Reviewed by anonymous reviewer 2, 24 Oct 2021


Download the review
