FIELD Sarahanne Miranda
- Pedagogy, University of Groningen, Groningen, Netherlands
- Social sciences
- recommender
Recommendations: 2
Reviews: 2
Website
https://www.sarahannemfield.eu
Areas of expertise
**Education**
- Bachelor of Psychology (honours I; 2014; University of Newcastle, Australia)
- Research Master in Psychometrics and Statistics (2017; University of Groningen, the Netherlands)
- PhD in Behavioural and Social Sciences (topic: the science reform movement; 2022; University of Groningen, the Netherlands)
**Research Interests, Modus Operandi and Philosophy**
Broadly, I am a metascientist: I research science itself (the scientific community, its practices, etc.) using the scientific method. More specifically, I am a science reform scholar. I am interested in science reform as a social movement, the communities within the science reform movement, reform practices, and critique of the reform movement. I don't just study reform objectively, however. I am also an activist, and I advocate for most reform/open research initiatives (such as registered reports!)... as long as they're critically engaged with by users before adoption.
I am currently an assistant professor at the University of Groningen. I am on a research project with Sarah de Rijcke (CWTS), Bart Penders (University of Maastricht), Jackie Thompson (Bristol University) and Marcus Munafò (Bristol University), working on establishing a new, updated responsible research and innovation (RRI) framework, and embedding it in local contexts in the UK and parts of Europe. I am also involved in multiple smaller projects concerning replication, ethics, qualitative open science, and theory.
I am a mixed methodologist, with training and expertise in both qualitative and quantitative methods. I value reflexivity and good scientific mentorship. I am passionate about considering alternatives to traditional publishing systems, and about dismantling barriers to participation in reform (relating to, e.g., power imbalances, hierarchy in research, accessibility and inclusivity). I would argue that the registered report format, though not a panacea, might be one of the most impactful and valuable research reform initiatives to come out of my 'generation'.
I have conducted studies involving preregistration and registered reports, replication and selecting replication targets, reflexivity (especially for use in quantitative research contexts), and Bayesian analyses and reanalyses. I am (to varying degrees) familiar with digital and virtual ethnography, social network analysis, the Bayesian approach, and thematic analysis (both done 'by hand' and done using QDAS).
I am the editor-in-chief of the Journal of Trial and Error, and an associate editor for Collabra: Psychology.
Recommendations: 2
16 Oct 2024
STAGE 1
Open Scholarship and Feedback in Applied Research/Understanding the Role of Climate Change in Applied Research: A Qualitative Registered Report
Understanding how applied researchers address open scholarship, feedback and climate change in their work
Recommended by Sarahanne Miranda Field based on reviews by Crystal Steltenpohl, Lisa Hof and Jay Patel

This recommendation concerns the plans for two studies that are intended to be conducted simultaneously, using the same data collection approach, and to result in two manuscripts that will be submitted for assessment at Stage 2. The Stage 1 manuscript containing these protocols was submitted via the programmatic track.
Protocol 1 concerns “Open Scholarship and Feedback in Applied Research: A Qualitative Registered Report”. With this study, the authors aim to explore how applied researchers integrate feedback processes into their work, in relation to transparency and rigor in particular. They will investigate whether their sample is aware of and uses feedback mechanisms from the open science movement, such as registered reports, which makes this study nicely metascientific. Through interviews with 50 applied researchers from various fields, the study will examine current feedback practices. The authors intend to use the findings of this first study to inform recommendations on how open science practices can be incorporated into research workflows.
Protocol 2 concerns “Understanding the Role of Climate Change in Applied Research: A Qualitative Registered Report”. This study aims to explore how applied researchers address climate change in their work, including the ways their practices are influenced by and respond to climate challenges. It addresses how their approaches may evolve, and the authors plan to look into the barriers and opportunities climate change presents in practice. Interviews with 50 applied researchers will be analysed to help understand these dynamics. The authors aim to provide recommendations to help applied researchers and their employers adjust their priorities to align with the urgency of climate action. One reviewer did not comment on this second protocol, as the content was outside their own research area. Although it would have been ideal to find a reviewer specializing directly in this area, I could still rely on the other two reviewers and my own knowledge to assess this protocol.
General comments: As I mentioned in my initial assessment text, these studies were well planned from the get-go and the protocol nicely articulated those plans. The use of different colour highlighting clearly helped the reviewers target different elements of the protocol and give direct feedback on specific parts. It also helped prevent me from getting lost in all the details! Well done, once again, to the authors for making the distinction between the two studies so clear. I was also pleased at how well they balanced the information between the two protocols – this made it easier to see whether there were deficiencies in either one. Finally, I loved that reflexivity was considered by the authors. One suggestion from me is that the authors might consider providing a collective positionality statement to go with the trainees’ reflexivity statements (if this is already in the plan and I missed it, please forgive the oversight!) in the final studies, even if as part of an appendix. This is because the open science movement and climate change can both be controversial, and because of the nature of the qualitative approach I would like to understand a little of the stance the group takes towards these issues collectively, if the authors think it’s appropriate. I understand that with a big group that might be difficult or impossible, but if it is possible I would like to see it. I would also like to see initials used in the manuscripts to indicate who was responsible for which analysis elements where possible. This allows for accountability and attributes interpretation to specific individuals involved in the data analysis. Alternatively, individuals can be attributed in a statement at the end of each manuscript to serve the same purpose and be less awkward in the text. If this won't work for some reason, please motivate this decision.
The three reviewers who took the time to go through the reports had useful comments, most of which contributed to strengthening the plan and minimizing problematic bias later on. The authors took these comments seriously, and thoughtfully (cheerfully, even) responded to each. In my estimation, each of the reviewers’ suggestions was satisfied by the authors’ response-to-reviews letter. Other than my earlier comment about the positionality statement, I have no further comments on the Stage 1 protocol, and I wish the authors all the best with running the studies and writing up Stage 2 for each.
URL to the preregistered Stage 1 protocol: https://osf.io/jdh32
Level of bias control achieved: Level 6. No part of the data or evidence that will be used to answer the research question yet exists and no part will be generated until after IPA.
List of eligible PCI RR-friendly journals:
- Advances in Methods and Practices in Psychological Science *pending editorial consideration of disciplinary fit (RR1)
- Collabra: Psychology
- Meta-Psychology *For RR1 only
- Peer Community Journal
- PeerJ
- Swiss Psychology Open
References
1. Evans, T. R. et al. (2024). Open Scholarship and Feedback in Applied Research/Understanding the Role of Climate Change in Applied Research: A Qualitative Registered Report. In principle acceptance of Version 2 by Peer Community in Registered Reports. https://osf.io/jdh32
11 Sep 2023
STAGE 1
Finding the right words to evaluate research: An empirical appraisal of eLife’s assessment vocabulary
Understanding the validity of standardised language in research evaluation
Recommended by Sarahanne Miranda Field and Chris Chambers based on reviews by Chris Hartgerink (they/them), Veli-Matti Karhulahti, Štěpán Bahník and Ross Mounce

In 2023, the journal eLife ended the practice of making binary accept/reject decisions following peer review, instead sharing peer review reports (for manuscripts that are peer-reviewed) and brief “eLife assessments” representing the consensus opinions of editors and peer reviewers. As part of these assessments, the journal draws language from a "common vocabulary" to linguistically rank the significance of findings and strength of empirical support for the article's conclusions. In particular, the significance of findings is described using an ordinal scale of terms from "landmark" → "fundamental" → "important" → "valuable" → "useful", while the strength of support is ranked across six descending levels from "exceptional" down to "inadequate".
In the current study, Hardwicke et al. (2023) question the validity of this taxonomy, noting a range of linguistic ambiguities and counterintuitive characteristics that may undermine the communication of research evaluations to readers. Given the centrality of this common vocabulary to the journal's policy, the authors propose a study to explore whether the language used in the eLife assessments will be interpreted as intended by readers. Using a repeated-measures experimental design, they will tackle three aims: first, to understand the extent to which people share similar interpretations of phrases used to describe scientific research; second, to reveal the extent to which people’s implicit ranking of phrases used to describe scientific research aligns with each other and with the intended ranking; and third, to test whether phrases used to describe scientific research have overlapping interpretations. The proposed study has the potential to make a useful contribution to metascience, as well as being a valuable source of information for other journals potentially interested in following the novel path made by eLife.
The Stage 1 manuscript was evaluated over one round of in-depth review. Based on detailed responses to the reviewers' comments, the recommender judged that the manuscript met the Stage 1 criteria and therefore awarded in-principle acceptance (IPA).
URL to the preregistered Stage 1 protocol: https://osf.io/mkbtp
Level of bias control achieved: Level 6. No part of the data or evidence that will be used to answer the research question yet exists and no part will be generated until after IPA.
List of eligible PCI RR-friendly journals:
- Advances in Methods and Practices in Psychological Science
- F1000Research
- Peer Community Journal
- PeerJ
- Royal Society Open Science
References
1. Hardwicke, T. E., Schiavone, S., Clarke, B. & Vazire, S. (2023). Finding the right words to evaluate research: An empirical appraisal of eLife’s assessment vocabulary. In principle acceptance of Version 2 by Peer Community in Registered Reports. https://osf.io/mkbtp
Reviews: 2
27 Nov 2024
STAGE 1
Does Truth Pay? Investigating the Effectiveness of the Bayesian Truth Serum with an Interim Payment: A Registered Report
Do interim payments promote honesty in self-report? A test of the Bayesian Truth Serum
Recommended by Romain Espinosa and Chris Chambers based on reviews by Philipp Schoenegger, Sarahanne Miranda Field and Martin Schnuerch

Surveys that measure self-report are a workhorse in psychology and the social sciences, providing a vital window into beliefs, attitudes and emotions, both at the level of groups and individuals. The validity of self-report data, however, is an enduring methodological concern, with self-reports vulnerable to a range of response biases, including (among others) the risk of social desirability bias in which, rather than responding honestly, participants answer questions in a way that they believe will be viewed favourably by others. One proposed solution to socially desirable responding is the so-called Bayesian Truth Serum (BTS), which aims to incentivise truthfulness by taking into account the relationship between an individual’s response and their belief about the dominant (or most likely) response given by other people, and then assigning a high truthfulness score to answers that are surprisingly common.
Although valid in theory (under a variety of assumptions), questions remain regarding the empirical utility of the BTS. One area of concern is participants’ uncertainty regarding incentives for truth-telling – if participants don’t understand the extent to which telling the truth is in their own interests (or they don’t believe that it matters) then the validity of the BTS is undermined. In the current study, Neville and Williams (2024) aim to test the role of clarifying incentives, particularly for addressing social desirability bias when answering sensitive questions. The authors will administer an experimental survey design including sensitive questions, curated from validated scales, that are relevant to current social attitudes and sensitivities (e.g. “Men are not particularly discriminated against”, “Younger people are usually more productive than older people at their jobs”). Three groups of participants will complete the survey under different incentive conditions: the BTS delivered alone in standard format, the BTS with an interim bonus payment that is awarded to participants (based on their BTS score) half-way through the survey to increase certainty in incentives, and a Regular Incentive control group in which participants receive payment without additional incentives.
The authors will then address two questions: whether the BTS overall effectively incentivises honesty (the contrast of BTS alone + BTS with interim payment vs the Regular Incentive group), and whether interim payments, specifically, further boost assumed honesty (the contrast of BTS alone vs BTS with interim payment). Regardless of how the results turn out, the study promises to shed light on the effectiveness of the BTS and its dependence on the visibility of incentives, with implications for survey design in psychology and beyond.
The Stage 1 manuscript was evaluated over two rounds of in-depth review. Based on detailed responses to reviewers’ and the recommender’s comments, the recommenders judged that the manuscript met the Stage 1 criteria and therefore awarded in-principle acceptance.
URL to the preregistered Stage 1 protocol: https://osf.io/vuh8b
Level of bias control achieved: Level 6. No part of the data or evidence that will be used to answer the research question yet exists and no part will be generated until after IPA.
List of eligible PCI RR-friendly journals:
- Advances in Cognitive Psychology
- Advances in Methods and Practices in Psychological Science *pending editorial consideration of disciplinary fit
- Collabra: Psychology
- Experimental Psychology *pending editorial consideration of disciplinary fit
- In&Vertebrates
- Meta-Psychology
- Peer Community Journal
- PeerJ
- Royal Society Open Science
- Social Psychological Bulletin
- Studia Psychologica
- Swiss Psychology Open
References
1. Neville, C. M. & Williams, M. N. (2024). Does Truth Pay? Investigating the Effectiveness of the Bayesian Truth Serum with an Interim Payment: A Registered Report. In principle acceptance of Version 3 by Peer Community in Registered Reports. https://osf.io/vuh8b
17 Jan 2024
STAGE 1
Revisiting the Effects of Helper Intention on Gratitude and Indebtedness: Replication and extensions Registered Report of Tsang (2006)
Grateful or indebted? Revisiting the role of helper intention in gratitude and indebtedness
Recommended by Zhang Chen based on reviews by Jo-Ann Tsang, Sarahanne Miranda Field and Cong Peng

When receiving a favour, we may feel grateful and/or indebted to those who have helped us. What factors determine how much gratitude and indebtedness people experience? In a seminal paper, Tsang (2006) found that people reported feeling more gratitude when the helper's intention was benevolent (e.g., helping out of genuine concern for others) than when the helper's intention was perceived to be selfish (e.g., helping for selfish reasons). In contrast, indebtedness was not influenced by perceived helper intention. This finding highlighted the different processes underlying gratitude and indebtedness, and also inspired later work on how these two emotions may have different downstream influences, for instance on interpersonal relationships.
So far, there has been no published direct replication of this seminal work by Tsang (2006). In the current study, Chan et al. (2024) propose to revisit the effects of helper intention on gratitude and indebtedness by replicating and extending the original studies (Studies 2 and 3) of Tsang (2006). Participants will be asked to either recall (Study 2) or read (Study 3) a scenario in which another person helped them with either benevolent or selfish intentions, and rate how much gratitude and indebtedness they would experience in such situations. The authors predict that, in line with the original findings, gratitude will be more influenced by helper intention than indebtedness. To further extend the original findings, the authors will also assess people's perceived expectations for reciprocity, and their intention to reciprocate. These extensions will shed further light on how helper intention may influence beneficiaries’ experiences of gratitude and indebtedness, and their subsequent tendencies to reciprocate.
This Stage 1 manuscript was evaluated over two rounds of in-depth review by three expert reviewers and the recommender. After the revisions, the recommender judged that the manuscript met the Stage 1 criteria and therefore awarded in-principle acceptance (IPA).
URL to the preregistered Stage 1 protocol: https://osf.io/uyfvq
Level of bias control achieved: Level 6. No part of the data or evidence that will be used to answer the research question yet exists and no part will be generated until after IPA.
List of eligible PCI RR-friendly journals:
- Collabra: Psychology
- F1000Research
- International Review of Social Psychology
- Meta-Psychology
- Peer Community Journal
- PeerJ
- Royal Society Open Science
- Social Psychological Bulletin
- Studia Psychologica
- Swiss Psychology Open
References
1. Tsang, J.-A. (2006). The effects of helper intention on gratitude and indebtedness. Motivation and Emotion, 30, 199–205. https://doi.org/10.1007/s11031-006-9031-z
2. Chan, C. F., Lim, H. C., Lau, F. Y., Ip, W., Lui, C. F. S., Tam, K. Y. Y., & Feldman, G. (2024). Revisiting the Effects of Helper Intention on Gratitude and Indebtedness: Replication and extensions Registered Report of Tsang (2006). In principle acceptance of Version 3 by Peer Community in Registered Reports. https://osf.io/uyfvq