
Using gamification to improve food response inhibition training

by Mateo Leganes-Fonteneau, based on reviews by Miguel Vadillo and Daniel Phipps
A recommendation of:

The effects of isolated game elements on adherence rates in food response inhibition training

Submission: posted 28 June 2024
Recommendation: posted 09 September 2024, validated 09 September 2024
Cite this recommendation as:
Leganes-Fonteneau, M. (2024) Using gamification to improve food response inhibition training. Peer Community in Registered Reports, 100874. 10.24072/pci.rr.100874

This is a Stage 2 report based on:

The effects of isolated game elements on adherence rates in food-based response inhibition training
Alexander MacLellan, Charlotte Pennington, Natalia Lawrence, Samuel Westwood, Andrew Jones, Katherine Button
https://osf.io/jspf3

Recommendation

A poor diet has severe detrimental health effects, and attempts to reduce caloric intake often prove unsuccessful. Unhealthy foods, high in fat, sugar, and salt, tend to be highly appetitive and can undermine individuals’ ability to refrain from consuming them. Computerized cognitive retraining techniques have shown promise in curbing the intake of unhealthy foods and promoting weight loss. However, in real-world settings, adherence to such retraining programs can be suboptimal, potentially diminishing their effectiveness.
 
In the present study, MacLellan et al. (2024) investigated whether incorporating gamified elements, which transform the cognitive retraining task into a game-like experience, could enhance adherence and overall intervention effectiveness by boosting engagement and motivation.
 
Upon testing the main hypotheses, the authors found mostly non-significant effects of adding gamified elements on adherence to, motivation for, and effectiveness of food response inhibition training. These results are highly relevant, given the ongoing push to introduce gamified elements into cognitive retraining programs, and they should guide future developments in the field.
 
The Stage 2 manuscript was evaluated over one round of review and revision. Based on detailed evaluations by two expert reviewers, the recommender judged that the manuscript met the Stage 2 criteria and awarded a positive recommendation.
 
URL to the preregistered Stage 1 protocol: https://osf.io/jspf3

Level of bias control achieved: Level 6. No part of the data or evidence that was used to answer the research question was generated until after IPA.
 
List of eligible PCI RR-friendly journals:
 
References
 
MacLellan, A., Pennington, C. R., Lawrence, N., Westwood, S. J., Jones, A., Slegrova, A., Sung, B., Parker, L., Relph, L., Miranda, J. O., Shakeel, M., Mouka, E., Lovejoy, C., Chung, C., Lash, S., Suhail, Y., Nag, M., and Button, K. S. (2024). The effects of isolated game elements on adherence rates in food response inhibition training [Stage 2]. Acceptance of Version 3 by Peer Community in Registered Reports.
Conflict of interest:
The recommender in charge of the evaluation of the article and the reviewers declared that they have no conflict of interest (as defined in the code of conduct of PCI) with the authors or with the content of the article.

Evaluation round #1

DOI or URL of the report: https://doi.org/10.31234/osf.io/2e73b

Version of the report: 1

Author's Reply, 02 Sep 2024

Decision by Mateo Leganes-Fonteneau, posted 19 Aug 2024, validated 19 Aug 2024

Following the in-principle acceptance of Stage 1, MacLellan and colleagues have successfully completed the data collection and analyses, strictly adhering to the pre-registered methodology and clearly disclosing any deviations where applicable.

The same reviewers who provided feedback on Stage 1 have now reviewed Stage 2. They have noted some minor corrections related to formatting, disclosure, and the write-up. I invite the authors to address these in the next round of revisions.

Sincerely,
 
Mateo

 

Reviewed by Miguel Vadillo, 08 Jul 2024

In the Stage 2 manuscript, the authors present the results of the study, following the preregistered protocol in every detail and clarifying the few occasions on which they departed from the initial plan and why. The interpretation of the results does not go beyond what the preregistered analyses permit. Therefore, in my humble opinion, the paper could be accepted essentially as is. I have only a small number of recommendations that the authors might want to consider, although they shouldn’t feel obliged to include them in the final version of the manuscript.

I have the feeling that the authors fail to acknowledge the added value of pre-registration in their study. In the general discussion, they point out that, based on previous research, they expected gamification to have a greater impact. But how many of those previous studies were pre-registered and followed a registered report format? Plenty of research shows that RRs are far less biased than non-registered studies. Is it possible that previous research on gamification suffers from bias and that the reported effect sizes are therefore inflated to an unknown extent? Even in the literature on response inhibition training, there is some evidence that, once corrected for publication bias, the average power could be substantially lower than .50, suggesting that the actual effect sizes are lower than anticipated by researchers in their power analyses (see https://onlinelibrary.wiley.com/doi/10.1111/obr.13338).

The effect of the manipulation on weight loss is analyzed, but no descriptive information is provided, possibly because this analysis was not pre-registered. But this information can be useful in many different ways (e.g., for future meta-analyses). I’d encourage the authors to report the descriptives of weight loss either in a table or a figure.

In the analysis of RQ4 it is perhaps worth noting that the SESOI entered in the power analysis was relatively large. The conclusion that the effects of both manipulations are equivalent relies on the assumption that effect sizes below d = 0.46 are too small to matter.
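For readers unfamiliar with this equivalence logic, the sketch below illustrates a two one-sided tests (TOST) procedure with the equivalence bounds set at d = ±0.46. It is a minimal, hypothetical illustration: the group labels, sample sizes, and simulated adherence scores are assumptions, not the authors’ preregistered analysis code.

```python
# Minimal TOST equivalence sketch with a SESOI of Cohen's d = 0.46.
# All data and group labels are hypothetical, for illustration only.
import numpy as np
from statsmodels.stats.weightstats import ttost_ind

rng = np.random.default_rng(0)
group_a = rng.normal(loc=10.0, scale=3.0, size=120)  # e.g., adherence, game element A
group_b = rng.normal(loc=10.2, scale=3.0, size=120)  # e.g., adherence, game element B

# Convert the SESOI from Cohen's d into raw units via the pooled SD.
pooled_sd = np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
bound = 0.46 * pooled_sd

# Two one-sided t-tests against the bounds (-bound, +bound).
p_tost, lower, upper = ttost_ind(group_a, group_b, low=-bound, upp=bound)
print(f"TOST p = {p_tost:.3f}")  # p < .05 => equivalent within d = +/- 0.46
```

The substantive point raised above is exactly the choice of that bound: a smaller SESOI would require a larger sample before equivalence could be declared.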

Figure 4 is not referenced in the main text.

Reviewed by Daniel Phipps, 04 Aug 2024

From the previous round of review

  • The authors have done a good job of replying to my comments. My only remaining thought is that, while I agree with the authors that measures of implicit associations are often difficult to change and can be frustrating, I would still recommend noting in the discussion section of the manuscript that these were not directly assessed. It would be an interesting direction for future research. This might be particularly relevant when discussing that participants might not have been aware of the no-go associations they learned and thus may not have reported differences on self-report measures.
  • The authors have now dropped the Bayesian statistics, which is fine. But I believe there was also mention of lmer in the original version? Was this also dropped?
  • Minor comment, but the tables are a little messy and not formatted per APA or a similar style.
  • Violin plots add very little for me, as they seem very consistent with the numerical results. Consider moving these to a supplement to keep the manuscript streamlined.
  • Regarding how parametric assumptions were met: was this tested formally, using Shapiro-Wilk or similar, or just by inspection of figures? Either is fine with me, but I think it warrants mentioning (a brief illustrative sketch follows this list).
  • I would like to see more in the discussion on the argument for single vs. multiple gamified elements, to highlight why this research is needed (e.g., it is briefly touched upon on page 29). For me, if multiple gamified elements typically produce big effects, but single ones here only produce small effects, it leaves a big open question about how multiple gamified elements stack. Is it a cumulative or an interactive effect, etc.? A larger-scale study with more combinations of gamified elements would be something very interesting to suggest for future research.
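As a point of reference for the formal check mentioned above, a Shapiro-Wilk test on model residuals could look like the minimal sketch below. The variable names and simulated data are assumptions for illustration, not the authors’ analysis code.

```python
# Minimal sketch of a formal normality check (Shapiro-Wilk) on residuals.
# The residuals here are simulated placeholders, for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
residuals = rng.normal(size=200)  # e.g., residuals from the adherence model

w_stat, p_value = stats.shapiro(residuals)
print(f"Shapiro-Wilk W = {w_stat:.3f}, p = {p_value:.3f}")
# p > .05: no evidence against normality, so parametric tests remain defensible.
```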

User comments

No user comments yet