Methodological Variation in Experience Sampling Methods: Can We Do ESM Better?

by Tom Evans, based on reviews by Priya Silverstein and 1 anonymous reviewer
A recommendation of:

Mapping methodological variation in experience sampling research from design to data analysis: A systematic review

Submission: posted 04 September 2024
Recommendation: posted 23 January 2025, validated 23 January 2025
Cite this recommendation as:
Evans, T. (2025) Methodological Variation in Experience Sampling Methods: Can We Do ESM Better? Peer Community in Registered Reports. https://rr.peercommunityin.org/PCIRegisteredReports/articles/rec?id=894

Recommendation

The replication crisis/credibility revolution has driven a vast number of changes to our research environment (Korbmacher et al., 2023), including a much-needed spotlight on issues surrounding measurement (Flake & Fried, 2020). As general understanding and awareness have increased surrounding the 'garden of forking paths' or 'researcher degrees of freedom' (Simmons et al., 2011), and the various decisions made during the scientific process that could affect its conclusions, so too should our interest grow in meta-research that tells us more about the methodological processes we follow and how discretionary decisions may influence the design, analysis and reporting of a project.
 
Peeters et al. (2025) have proposed a systematic literature review of this nature, mapping the methodological variation in experience sampling methods (ESM) from the design stage all the way through to dissemination. It starts this journey by mapping how ESM studies vary in design, considering factors such as sample size, number of measurements, and sampling scheme. It also evaluates reporting quality and the rationales provided, and captures the extent to which open science practices are adopted. Covering many parts of the research process that are assumed, unreported or otherwise unjustified, the proposed work looks set to springboard an important body of work that can tell us more effectively how to design, implement and report ESM studies.
 
The Stage 1 submission was reviewed over one round of in-depth review with two reviewers. Based on detailed responses to reviewers’ feedback, the recommender judged that the manuscript met the Stage 1 criteria and therefore awarded in-principle acceptance.
 
URL to the preregistered Stage 1 protocol: https://osf.io/ztvn3
 
Level of bias control achieved: Level 1. At least some of the data/evidence that will be used to answer the research question has been accessed and observed by the authors, including key variables, but the authors certify that they have not yet performed any of their preregistered analyses, and in addition they have taken stringent steps to reduce the risk of bias.
 
 
References
 
1. Flake, J. K., & Fried, E. I. (2020). Measurement schmeasurement: Questionable measurement practices and how to avoid them. Advances in Methods and Practices in Psychological Science, 3, 456-465. https://doi.org/10.1177/2515245920952393
 
2. Korbmacher, M., Azevedo, F., Pennington, C. R., Hartmann, H., Pownall, M., Schmidt, K., ... & Evans, T. (2023). The replication crisis has led to positive structural, procedural, and community changes. Communications Psychology, 1, 3. https://doi.org/10.1038/s44271-023-00003-2
 
3. Peeters, L., Van Den Noortgate, W., Blanchard, M. A., Eisele, G., Kirtley, O., Artner, R., & Lafit, G. (2025). Mapping Methodological Variation in ESM Research from Design to Data Analysis: A Systematic Review. In principle acceptance of Version 3 by Peer Community in Registered Reports. https://osf.io/ztvn3
 
4. Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22, 1359-1366. https://doi.org/10.1177/0956797611417632
Conflict of interest:
The recommender in charge of the evaluation of the article and the reviewers declared that they have no conflict of interest (as defined in the code of conduct of PCI) with the authors or with the content of the article.

Evaluation round #2

DOI or URL of the report: https://osf.io/g2983

Version of the report: 2

Author's Reply, 13 Jan 2025

Decision by Tom Evans, posted 03 Dec 2024, validated 03 Dec 2024

I am returning this submission to the author to make a further amendment, as requested by the author.


Evaluation round #1

DOI or URL of the report: https://osf.io/7mzxd

Version of the report: 1

Author's Reply, 22 Nov 2024

Decision by Tom Evans, posted 11 Nov 2024, validated 11 Nov 2024

Thanks for your patience whilst we've been securing the feedback of two reviewers (we had a third lined up, but this didn't go to plan). I am not an expert in ESM, although I have contributed to collaborative projects using this approach, and I found this Registered Report to be an ambitious and important piece of proposed research exploring diversity in ESM practices. Covering many parts of the research process that are assumed, unreported or otherwise unjustified, I was particularly pleased to see consideration of the forking paths all the way through to the analyses. As a whole, the manuscript is well-prepared, well-focused and accessible, and the accompanying documents were clear and informative. Based upon reviews of other open practices I have contributed to, my main concerns surround the role of poor reporting standards and the implications this can have for the impact of the project; however, I can see this is acknowledged in the manuscript and that you have conducted a robust pilot to mitigate these concerns. Our two reviewers have kindly provided some helpful suggestions and factors to consider, but as a whole we are positive about the potential of the proposed work. As such, I encourage you to review their feedback and make adjustments to the proposal as you see fit. I will then be very happy to act on your submission without further review.

I do hope this is a helpful process to enrich the proposed research, and I look forward to reading a revised version of your protocol soon,

Take care,

Tom

Reviewed by Priya Silverstein, 19 Sep 2024

Thank you for the opportunity to review this Stage 1 Registered Report. To contextualise my review, I am a metascientist and psychologist interested in open science and improving methods. I have no experience conducting systematic reviews. I am currently working as data manager for one big team science project using ESM, but this is my only experience of ESM research and data. I look forward to reading the reviews from those more experienced in these areas! In general, the Stage 1 Registered Report seems to me to be very clear and well written. I have a few relatively small suggestions for improvement below.


Abstract: the abstract currently feels quite long and could be made more concise. It might be helpful at this point to add placeholders for information that you do not yet have (e.g., the number of included records, a summary and explanation of the main result, and the implications of the review), as I'm not sure how much editing is permitted at Stage 2 (feel free to ignore this if the abstract can be edited as much as desired at Stage 2).


Search dates: I don’t believe that the focus on only 2023 is currently well-justified. If you want the most recent work, would it make sense instead to search from one year before the date of the first search up to the date of the first search? Is there a contingency plan to broaden the search if there are not enough articles in this time period? Is there a minimum sample size of articles that would be deemed sufficient? Is there a maximum that can be analysed given the capacity of the authorship team? You say that citation tracking will not be carried out as there would be few additional records due to the time period; would this not just mean that it would be very quick to do, and therefore still worth doing?


Data synthesis: how many reviewers will synthesise the results? If only one reviewer will be involved, describe how the reliability of their decisions will be assessed. In order to avoid bias, more than one reviewer should contribute to this process (Topor et al., 2020).
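
For instance, if a second coder independently codes a subset of the included articles, reliability could be summarised with Cohen's kappa. A minimal sketch in Python, using hypothetical coding decisions (not data from the protocol):

    from collections import Counter

    def cohens_kappa(coder_a, coder_b):
        # Cohen's kappa for two coders' categorical decisions on the same items.
        n = len(coder_a)
        observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
        # Chance agreement under independence: sum over categories of p_A(c) * p_B(c).
        freq_a, freq_b = Counter(coder_a), Counter(coder_b)
        expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
        return (observed - expected) / (1 - expected)

    # Hypothetical decisions for ten articles: "R" = reported, "N" = not reported.
    coder_1 = ["R", "R", "N", "R", "N", "R", "R", "N", "R", "R"]
    coder_2 = ["R", "N", "N", "R", "N", "R", "R", "N", "R", "R"]
    print(round(cohens_kappa(coder_1, coder_2), 2))  # 0.78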


I always sign my reviews,

Priya Silverstein

Reviewed by anonymous reviewer 1, 29 Sep 2024

The manuscript "Mapping methodological variation in experience sampling research from design to data analysis: A systematic review " is well-written and aims to identify the amount of transparency in reporting - focusing on ESM studies published within one year.

I only have a few minor comments.

It would be beneficial if the concept of adaptivity were better explained. Neither Figure 1 nor the first mention of the concept in the text establishes it thoroughly. Yet adaptivity is one of the criteria used to assess transparency and methodological variation. I could envision a table (at least in the SOM) providing a non-exhaustive overview of adaptivity; frankly, I am not 100% sure I know what you mean by adaptivity.

Given the number of ESM studies, the restriction to one year and 150 randomly selected studies is sensible. I do wonder, though, whether one could apply a stricter exclusion criterion, or even consider two separate papers, published jointly.

My reasoning is that ESM is used in various fields, including clinical psychology/psychiatry. I am curious whether the transparency differs (is it lower or higher?) in ESM studies conducted on patients. Here I would expect very little data sharing, and data analysis sections that are too concise. Some PSA members have written a book chapter on how to improve open science in clinical psychology (see https://link.springer.com/chapter/10.1007/978-3-031-04968-2_19), as this field of psychology might be behind cognitive psychology. Note that I do not have data on this, only my own collaborations with researchers from both fields. Transparency with respect to describing the patient population is high, but method sections are often less detailed in clinical psychology papers, including ESM studies. That brings me to another point worth looking at in your analysis: where the article is published.
With the advent of OSF and the like, SOMs are a no-brainer, and necessary, as some journals have word limits! This should be taken into account, not least if the ESM study is published as a "short report" with, e.g., a 4,000-word limit. That severely hampers full transparency.

With 150 articles, it should be doable to check the journal requirements/restrictions, if any. Thanks.

Very minor: regarding Figure 2, "software" should be "software and version".

Study design may include hardware: some ESM software is only available for Android phones, and other researchers hand out mobile devices to participants (not uncommon in clinical studies). In my opinion, this has an effect on compliance. Please note that I have not checked whether anybody has investigated this.

Line 391: "... through MS forms": are you referring to having used a Microsoft survey tool? Following the FAIR principles, this acronym may not be understood in 10 years, who knows.

I am not sure you can score non-reporting as 0 (absent), as an item might be absent because it is not applicable (N/A), or absent due to a lack of transparency (i.e., the study design or analysis clearly implies that something has been done, but it is not reported). A sketch of this three-level distinction follows below.
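
To make the distinction concrete, a three-level code rather than a binary score could look like the following sketch (a hypothetical scheme for illustration, not the authors' protocol):

    from enum import Enum

    class ReportingStatus(Enum):
        REPORTED = "reported"          # explicitly described in the paper
        NOT_APPLICABLE = "n/a"         # the item does not apply to this design
        NOT_REPORTED = "not reported"  # applicable but absent: a transparency gap

    def transparency_score(codes):
        # Score only applicable items, so N/A does not deflate transparency.
        applicable = [c for c in codes if c is not ReportingStatus.NOT_APPLICABLE]
        if not applicable:
            return None  # nothing applicable to score
        return sum(c is ReportingStatus.REPORTED for c in applicable) / len(applicable)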

I wish you good luck with the systematic review and very much look forward to learning about your findings.
