Recommendation

Charting meta-analytic evidence for the action-effect

Recommended by Chris Chambers based on reviews by Dan Quintana, Emiel Cracco and Priyali Rajagopal
A recommendation of:

Action-Inaction Asymmetries in Emotions and Counterfactual Thoughts: Meta-Analysis of the Action Effect

Submission: posted 16 July 2021
Recommendation: posted 16 September 2022, validated 16 September 2022
Cite this recommendation as:
Chambers, C. (2022) Charting meta-analytic evidence for the action-effect. Peer Community in Registered Reports. https://rr.peercommunityin.org/PCIRegisteredReports/articles/rec?id=51

Recommendation

Winston Churchill once famously quipped, “I never worry about action, but only inaction.” Churchill, however, may have been an exception to the rule, with psychological research suggesting that people are more concerned about the consequences of actions than inactions. Under the so-called “action-effect”, first reported by Kahneman and Tversky (1982), people regret an action leading to a bad outcome more than they regret an inaction leading to the same bad outcome.

In the current study, Yeung and Feldman (2022) propose a wide-ranging meta-analysis to characterise evidence for the action-effect, focusing in particular on emotions and counterfactual thoughts – that is, mental representations of alternative decisions (or “what if” thoughts). Consistent with the expected consequences of the action-effect on emotion, they predict that action will be associated with stronger negative emotions than inaction (when outcomes are negative), and with stronger positive emotions than inaction (when outcomes are positive). The authors also expect action to be associated with a greater abundance of counterfactual thought compared to inaction.

In addition to examining the overall reliability of the action-effect (plus a range of exploratory questions), the study will also examine the extent to which the action-effect is moderated by temporal distance (with more recent events or behaviours predicted to be associated with a stronger action effect), the type of study design, prior outcomes and social norms, the specificity (vs. generality) of the prior event, and whether the study employed a hypothetical scenario or a real-life event.

The Stage 1 manuscript was evaluated over two rounds of in-depth review. Based on detailed responses to the reviewers' comments, the recommender judged that the manuscript met the Stage 1 criteria and awarded in-principle acceptance (IPA).

URL to the preregistered Stage 1 protocol: https://osf.io/4pvs6

Level of bias control achieved: Level 2. At least some data/evidence that will be used to answer the research question has been accessed and partially observed by the authors, but the authors certify that they have not yet observed the key variables within the data that will be used to answer the research question
 

References 
 
1. Kahneman, D., & Tversky, A. (1982). The psychology of preferences. Scientific American, 246(1), 160-173. https://doi.org/10.1038/scientificamerican0182-160
 
2. Yeung, S. K., & Feldman, G. (2022). Action-Inaction Asymmetries in Emotions and Counterfactual Thoughts: Meta-Analysis of the Action Effect, in-principle acceptance of Version 3 by Peer Community in Registered Reports. https://osf.io/4pvs6

Conflict of interest:
The recommender in charge of the evaluation of the article and the reviewers declared that they have no conflict of interest (as defined in the code of conduct of PCI) with the authors or with the content of the article.

Evaluation round #2

DOI or URL of the report: https://osf.io/tpbaw

Version of the report: v2

Author's Reply, 10 Sep 2022


Revised manuscript: https://osf.io/ycenp

All revised materials uploaded to: https://osf.io/acm24/, updated manuscript under sub-directory "PCIRR Stage 1\PCIRR submission following R&R 2"

Decision by Chris Chambers, posted 16 Aug 2022

The three reviewers from the previous round kindly returned to evaluate your revised submission, and I'm happy to report that all are broadly positive. There remain some minor matters to resolve concerning the potential inclusion of sensitivity analyses, details of analysis plans regarding moderators, and clarification of assumptions. These should be straightforward to address in a final Stage 1 revision.

Following discussion among the Managing Board, I can now also report the bias control level that has been determined for your submission under the PCI RR taxonomy. In reaching this decision we considered carefully the arguments you put forward for Level 6 based on your correspondence of 16 July 2021. The consensus view among the Managing Board is that meta-analyses, systematic reviews, scoping reviews, and systematic maps can never achieve Level 6 under the PCI RR taxonomy because, unlike studies that will generate new data, the data that furnish these article types must already exist, even if not fully observed, analysed and interpreted. Most such submissions will achieve Level 3, 2 or 1 because at least some of the included data are likely to be in the public domain and will have been at least partially accessed by authors. In your case, because your meta-analysis includes some of your own authored work, for which you have not only accessed but necessarily observed the data at least partially, we have determined that your submission achieves Level 2 (keeping in mind that where a study includes elements at multiple levels, as your study does, it is PCI RR policy to assign the lowest level of applicable bias control). Because of the already rigorous methodological requirements for meta-analyses, systematic reviews, scoping reviews, and systematic maps at PCI RR, we are, however, waiving the usual requirement for additional stringent analytic corrections for potential bias that normally apply at Level 2. This means that you can proceed with your study as proposed and it will achieve a Level 2 designation. This decision from the Managing Board is final, but if you have any questions then feel free to contact me.

Provided you are able to address the reviewers' points in a revised manuscript and response, in-principle acceptance should be forthcoming without requiring further in-depth Stage 1 review.

Reviewed by Daniel S. Quintana, 08 Aug 2022

The authors have provided a comprehensive response to my initial queries, with which I am satisfied.

I only have one more very minor suggestion. Regarding the sunset plot analyses (page 49 and Figures 3 and 4), it should be noted in both the text and the figure captions that these power analyses assume that the respective effect sizes used (i.e., -0.13 and 0.15) are indeed the true effect sizes.
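
To illustrate the reviewer's point: in a sunset plot, the power shading is computed relative to a single assumed true effect, so the plot changes if that assumption changes. A minimal sketch in R, assuming the metaviz package (the effect sizes and standard errors below are illustrative, not values from the manuscript):

    # Power contours in a sunset plot are conditional on the assumed
    # true effect supplied via true_effect.
    library(metaviz)
    dat <- data.frame(es = c(-0.20, -0.10, -0.15, -0.08),  # illustrative effects
                      se = c(0.08, 0.10, 0.06, 0.12))      # illustrative standard errors
    viz_sunset(dat, true_effect = -0.13)  # power computed assuming -0.13 is the true effect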

Daniel S. Quintana

Reviewed by , 27 Jul 2022

See the attached Word document.


Reviewed by , 02 Aug 2022

I think that the authors have done an excellent job addressing the issues that I raised in the previous round. The revision is clearer with appropriate justification for their areas of focus and clearer boundary conditions. 


Evaluation round #1

DOI or URL of the report: https://osf.io/8etvb/

Author's Reply, 12 Jul 2022


Revised manuscript: https://osf.io/tpbaw

All revised materials uploaded to: https://osf.io/acm24/, updated manuscript under sub-directory "PCIRR Stage 1\PCIRR submission following R&R"

Decision by Chris Chambers, posted 14 Sep 2021

Three reviewers with a range of methodological and field-specific expertise have now assessed the Stage 1 manuscript. As you will see, the evaluations are broadly positive, with the majority of comments identifying aspects of the proposal that would benefit from clarification and/or elaboration, as well as strengthening of the rationale, and ensuring tight linking between the sampling plan (power analysis) and analysis plans. Based on these reviews, we are pleased to invite a revised submission along with a point-by-point response to the reviews.

As you know, the Managing Board has also been considering what level in the PCI RR bias control taxonomy is appropriate for your submission. This remains an ongoing discussion -- a unanimous position has not yet been agreed as there are arguments in favour of both positions (Level 6 vs. Level 4) -- but I wanted to let you know that we will reach a decision on this prior to the awarding of in-principle acceptance and will consult with you in due course. For now, there is no need to address this issue in your revised submission or response.

Reviewed by , 13 Aug 2021

This is the Stage 1 of a Registered Report submission describing a planned meta-analysis of the action effect literature. I would like to state upfront that I do not have experience with the action effect literature, so I cannot speak to the appropriateness of the research questions in light of the prior research. Thus, I will largely be commenting on the methodological aspects of this manuscript. Overall, this manuscript reports a comprehensive and well-considered plan for a meta-analysis. However, I have some comments that may improve future versions of this manuscript.

Introduction

"One of the most well-known effects in the action-inaction literature is the action-effect, which is the phenomenon that people imagine, associate, or experience stronger emotions for action compared to inaction" This section could be improved by providing a brief example after this sentence

"At the time of writing (July 2021), we identified 2466 citations of the article..." Using which database?

Methods

"We conducted an initial unstructured pre-search..." What was the purpose of this pre-search? To refine formal search strings?

"and posted a notice on listservs..." Provide an example or two of listservs. It sounds like this will be done so that the authors can be notified of possible related articles, but this kind of strategy is typically used to find unpublished studies—is this what the authors are intending?

"We validated and pre-tested the search pattern with 10 notable articles" Was notability defined as the number of citations? Another metric?

"Third, we included both published or unpublished studies, from 1982" State why this particular year was used as the cutoff (i.e., the Kahneman and Tversky paper)

"See Supplementary Materials template for contacting authors subsection." I appreciate the comprehensiveness of including this information

"We set up a project on ResearchGate and added all identified articles as references, where possible, to notify authors about this project, and to provide an open-access list of available studies" Why was ResearchGate chosen for this? I'm unsure about the longevity of the platform (although it seems appropriate in the short term as one way for notifying authors, assuming they actively use the platform). In other words, I think ResearchGate is useful as one approach for contacting promoting the meta-analysis, but another platform with (more or less) guaranteed longevity (e.g., OSF) should be used for providing a list of studies

"If we were not able to obtain the required statistics, we excluded the articles" Have the authors considered using plot digitiser tools to extract data if the raw data is not reported in text?

"When we could not reach agreements on certain inclusion/exclusion, a moderator would make the final decision" There is two authors on the study, has a moderator been identified yet?

"All statistics were converted to Hedges g effects." This should be "Hedges' g"

"Chi-square is Converted to Cohen d with chies function of compute.es v0.2-5 (Re, 2020)." This should be "Cohen's d"

It appears the authors are planning two different approaches to account for effect size dependencies: three-level multivariate models and effect size aggregation (via the 'agg' function). From what I can gather, three-level models will be used for moderator analyses and aggregation will be used for main-effect analyses, is that correct? Why not use three-level models for all analyses, considering that you will lose some precision with effect size aggregation? I'm not entirely opposed to effect size aggregation, but I just want to better understand the reasoning here. I may have missed something, but the description of how effect size dependencies will be dealt with is currently unclear
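
To make the contrast concrete, a hedged sketch of the two approaches in R, assuming the metafor package for model fitting and that the 'agg' in question is MAd::agg (the data frame 'dat' and its values are illustrative):

    library(metafor)
    library(MAd)
    # Illustrative data: multiple effect sizes (yi, vi) nested within studies.
    dat <- data.frame(study = c(1, 1, 2, 3, 3), es_id = 1:5,
                      yi = c(0.20, 0.15, 0.30, 0.10, 0.25),
                      vi = c(0.02, 0.02, 0.03, 0.01, 0.02))
    # (a) Three-level model: effect sizes nested within studies.
    m3 <- rma.mv(yi, vi, random = ~ 1 | study/es_id, data = dat)
    # (b) Aggregation: collapse dependent effects per study (assumed r = .5),
    #     then fit a conventional random-effects model.
    agg_dat <- agg(id = study, es = yi, var = vi, cor = 0.5, data = dat)
    m2 <- rma(yi = es, vi = var, data = agg_dat)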

"and assumed the correlation between the measures to be 0.5..." For a sensitivity analysis, I would choose two other correlations to make sure that the conclusions don't differ according to the assumed correlation.

"We stated our planned preferred effect size adjustment methods under different scenarios in Supplementary Table 5" I appreciate the comprehensiveness of this approach

A big strength of this article is the use of simulated data in the results section

"We conducted posteriori power analyses with Tiebel (2018) tool" Should this be "Tiebel's tool"?

The authors should also consider robust Bayesian meta-analysis, which addresses many limitations of frequentist approaches to publication bias (e.g., how to interpret a non-significant publication bias test, dealing with conflicting conclusions from different publication bias tests). See this primer on the RoBMA R package from Bartos et al.: https://psyarxiv.com/75bqn/
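
A minimal sketch of the suggested analysis, assuming the RoBMA package (the effect sizes and standard errors are illustrative):

    # Model-averaged robust Bayesian meta-analysis across effect,
    # heterogeneity, and publication-bias model components.
    library(RoBMA)
    fit <- RoBMA(d = c(0.20, 0.15, 0.30, 0.10),
                 se = c(0.14, 0.14, 0.17, 0.10), seed = 1)
    summary(fit)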

"With rank correlation tests and Egger’s regression tests, which are based on funnel plot asymmetry (see Figure 3 for funnel sunset plot), we found support for evidence of publication bias" Funnel plot asymmetry approaches are technically tests of small study bias, which encompasses publication bias but can also include other sources of bias

"We set this threshold arbitrarily, as no study has compared performances between MetaForest, traditional mixed effects two-level model, and traditional multivariate three-level model given different numbers of studies. We would appreciate constructive feedback from reviewers." I think this is a reasonable threshold, as long as the authors are explicit in the paper that this is arbitrary, as no comparison studies exist.

Reviewed by , 21 Aug 2021

Reviewed by , 14 Sep 2021

The proposed meta-analysis is well designed and deals with an interesting topic area – the action effect. The authors do a nice job of summarizing the current literature and proposing a set of moderators to explore the action effect. A few suggestions are noted below.

1. The authors articulate their objective as focusing on action-inaction asymmetries with respect to two specific outcomes – emotions and counterfactual thoughts. Some justification or reasoning for the selection of these two types of outcomes would be useful for the reader. 

2. Within positive and negative emotions (Table 2), are there any expected differences? Research has found that emotions can vary on many dimensions (e.g., arousal, control etc.) even when they are similarly valenced (positive or negative). Hence, within the context of the action effect, will all positive or all negative emotions respond similarly? Will the authors test for differences between specific emotions if there is sufficient data? 

3. While the authors focus on the numbers of counterfactual thoughts, it may be helpful to consider the type of counterfactual too (e.g., upward vs downward). 

4. Will H2 hold for both positive and negative emotions? The research cited for supporting the moderating role of temporal distance seems rather specific to regret alone – why would the authors expect it to replicate for all emotions?

Good luck with your research! 
