Can the sense of agency and reality be altered by our meta-cognitive models?
Can Imagining Actions as Occurring Involuntarily Cause Intentional Behaviour to Feel Involuntary?
Recommendation: posted 31 May 2024, validated 01 June 2024
Zahedi, A. (2024) Can the sense of agency and reality be altered by our meta-cognitive models? Peer Community in Registered Reports. https://rr.peercommunityin.org/articles/rec?id=605
Recommendation
Relying on cold control theory, Sheldrake and Dienes (2024) postulate in the current study that the metacognitive processes underlying these alterations can be engaged by the appropriate use of imagination. In other words, by imagining the movement or the object to be hallucinated, and further imagining that the underlying process is outside of awareness, one can elicit alterations in the sense of agency (SoA) and the sense of reality (SoR). To this end, an intervention is devised whereby participants are repeatedly asked to consider what might help or hinder them from imagining they are unaware of the relevant intention, and to adjust their imagination accordingly. A control group will be asked to increase the feeling of involuntariness or altered reality simply through repeated practice. In a subsequent test phase, participants will rate the extent to which the suggested experience felt involuntary.
URL to the preregistered Stage 1 protocol: https://osf.io/f8hsd
Level of bias control achieved: Level 6. No part of the data or evidence that will be used to answer the research question yet exists and no part will be generated until after IPA.
References
Sheldrake & Dienes (2024). Can imagining actions as occurring involuntarily cause intentional behaviour to feel involuntary? In principle acceptance of Version 4 by Peer Community in Registered Reports. https://osf.io/f8hsd
The recommender in charge of the evaluation of the article and the reviewers declared that they have no conflict of interest (as defined in the code of conduct of PCI) with the authors or with the content of the article.
Reviewed by Zoltan Kekecs, 28 May 2024
Review notes by Zoltan Kekecs, PhD:
I am now fully satisfied with the authors' responses.
I would like to thank the authors for being open to my suggestions. It makes the review process feel more worthwhile for me as a reviewer.
Evaluation round #2
DOI or URL of the report: https://osf.io/74gcn
Version of the report: Sheldrake & Dienes 2023 v2.pdf. Listed on OSF as Version: 4
Author's Reply, 20 May 2024
Decision by Anoushiravan Zahedi, posted 06 May 2024, validated 07 May 2024
I thank the authors for the revision. Both reviewers were happy with the changes the authors made. However, there is still a significant concern regarding the discrepancies between the instructions that the control and treatment groups receive, which I encourage the authors to consider thoroughly.
Further, both reviewers raised several minor points that the authors should also address.
Below, you will find the reviewers' comments.
Anoushiravan Zahedi, PhD
Recommender, Peer Community in Registered Reports
Universität Münster
Email: azahedi@uni-muenster.de
Reviewed by Zoltan Kekecs, 03 May 2024
Reviewed by Sophie Siestrup, 30 Apr 2024
I compliment the authors on the great job they did improving the clarity of the introduction and methods sections. I think the experimental protocol is now suited to target their research question. Below are three minor suggestions, which the authors could implement if they wish:
Abstract
-page 2: “If that study is successful, we will repeat but […]” – maybe “[…] repeat it […]”? Otherwise the sentence sounds incomplete to me.
Introduction
-page 4: I appreciate that the authors changed “successful participants” to “participants who were successfully responding” on page 5 – but they have now added the same expression in their new text on page 4. Maybe use “participants who were successfully responding” first and subsequently switch to “successful participants”?
Methods
-Do specific regulations apply to participants who might still be 17? Can they sign the informed consent themselves, or do they need parental approval?
Evaluation round #1
DOI or URL of the report: https://osf.io/74gcn
Version of the report: 1
Author's Reply, 19 Apr 2024
Decision by Anoushiravan Zahedi, posted 27 Feb 2024, validated 27 Feb 2024
Thank you for submitting your manuscript to Peer Community in Registered Reports (PCI RR). Your paper, referenced above, has been reviewed by two experts. Based on the comments of these reviewers, a revision would be appropriate.
Both reviewers make excellent points about issues that need clarification; hence, I strongly suggest addressing them point by point. Further, I want to highlight several points raised by the reviewers that need specific attention.
First, the reviewers were doubtful about the suitability of the proposed design for addressing the declared hypothesis. Specifically, they highlighted several points regarding the comparability of the control and intervention groups that need rigorous consideration.
Second, the reviewers had concerns about demand characteristics as a confound, which need to be addressed thoroughly. Relatedly, the instructions used in the study could be made clearer, and the reviewers offer several suggestions on how to do so.
Third, several critical points were raised regarding the power analysis. In particular, the reviewers were concerned that it does not account for the uncertainty of the effect-size estimates, given the differences between the experiment and the pilot. I encourage the authors to implement a power analysis that accounts for this uncertainty.
Finally, the reviewers wanted access to the analysis code and pilot data, which is reasonable. I strongly suggest that the authors share these via an online repository.
Below, you will find the reviewers' comments on your manuscript. We hope these suggestions help you improve it, and we encourage you to consider them and make appropriate revisions. Upon receipt, the revision will be re-reviewed promptly.
Thank you for considering PCI RR; I look forward to receiving your revision.
Anoushiravan Zahedi, PhD
Recommender, Peer Community in Registered Reports
Universität Münster
Email: azahedi@uni-muenster.de
Reviewed by Zoltan Kekecs, 08 Feb 2024
Review by Zoltan Kekecs.
This study tests the effectiveness of a suggestibility-enhancing training based on cold control theory. The experiment is transparently reported, with appendices allowing for a direct replication of the study proposal (except for the missing analysis code). I wish all research papers were like this. My main concern is that the goal of this study is not clear. It seems as if the authors want to perform a crucial test of the theory, or at least of one of its predictions. However, the proposed experiment does not do that; it would take closer matching between the control and intervention groups and additional blinding to rule out alternative explanations for any group differences. Below are specific suggestions which might improve the manuscript.
- Previous attempts at enhancing PC (suggestibility) also include using sensory deprivation (Darakjy, Barabasz & Barabasz, 2015), and using reversible inhibition of the DLPFC (for example the authors’ own work).
- Reference: Darakjy, J., Barabasz, M., & Barabasz, A. (2015). Effects of dry flotation restricted environmental stimulation on hypnotizability and pain control. American Journal of Clinical Hypnosis, 58(2), 204-214.
- Calling low suggestibles “lows” is abrupt and is not explained in the text. It is of course clear to people well-read in the hypnosis literature, but the use of this phrase could be introduced for those who are not familiar with it.
- Based on the description of the procedures of the pilot study, it seems that the control group had only one try with each suggestion, while the intervention group had five tries. Why was this difference between groups introduced? It seems to be a possible confound: group differences might arise simply due to practice/fatigue effects. Similarly, the two groups might have different response expectancies and different beliefs about the role of the practice phase. I suggest that the practice phases of the two groups be matched very carefully, with the only difference being the instructions for imagined involuntariness. That means the groups should be matched in having to imagine the enactment of the suggestions; the only difference should be that people in the intervention group imagine not only the enactment of the suggestion, but also that this enactment is involuntary.
- Importantly, the conditions should also be matched in the implied purpose of the practice run. It should be made explicit in both cases, and these statements should be matched exactly. Something like: “The role of this practice is to enable you to respond to the suggestions as well as possible. By imagining the enactment of the suggestion and by practicing the enactment, you will become more capable of responding to suggestions.” – the same for both groups. This might help match response expectancies between the groups.
- Instructions like this can create strong expectancy and/or demand characteristics if they appear in only one group: “Now, the idea is that we’re going to try to make that feel involuntary through the use of imagination. So, I’d like you to do it again, when I indicate, but this time I’d like you to also imagine that you’re not involved in the process at all, as if your hands are moving all by themselves. Can you imagine that while you do this? Okay, please do that now.” The instructions should match between the groups as much as possible, except for the manipulated mechanism (imagining involuntariness). If you say “we will try to make this feel more involuntary” to one group, you have to say the same in a credible way to the other group. For example, you could tell the other group: “Now, the idea is that we’re going to try to make that feel involuntary through the use of imagination. So, I’d like you to do it again, when I indicate, but this time I’d like you to also imagine that you are responding to the suggestion completely, just make it happen with your imagination. Can you imagine that while you do this? Okay, please do that now.”
- Also, I think the trainer can influence the outcomes by explicitly or implicitly implying the desired results. So it would be ideal if there were no human in the loop, or if the humans in the loop were blinded to either group allocation or at least to the expected study outcomes.
- One possible explanation for the different results between motor and hallucination suggestions is that enacting the hallucination suggestions might require imagination in itself, so there might be competition for imagination resources between the actual suggestion enactment and the suggestibility-enhancement strategy. If so, this particular suggestibility-enhancement strategy might not work, or might have limited utility, for (especially positive) hallucination-type suggestions. If this is true, I would predict that negative hallucination-type suggestions would benefit more from this strategy than positive hallucination-type suggestions, because negative hallucination-type suggestions might be enacted in ways other than through imagination, and also because I expect they require fewer imagination resources (I have no actual evidence to back this up). So it might be worth considering including a negative hallucination-type suggestion in the registered experiment.
- Another possible explanation is that hallucination-type suggestions are usually “hard suggestions”, and that the imagination-based strategy only works for easier suggestions. This could potentially be tested by including suggestions of similar difficulty in the experiment, or at least by also including a hard motor suggestion.
It is important to note that the difficulty of a suggestion might not depend only on the expected behavior or response itself. For example, if I am not mistaken, the arm immobilization suggestion in the SHSS:C is on the “harder side”, maybe because the test suggestion is short. In comparison, a very similar arm immobilization suggestion in the EHS is on the easier side, maybe because there is a lot of repetition and the suggestion is formulated in different ways. So it might require prior data or extensive pretesting to establish the difficulty of a given suggestion in a particular experiment.
- The authors describe their sample size rationale very clearly. This is exemplary. However, I think the calculations do not take sampling error into account. That is, the actual power to detect the effect, if it exists, is smaller than intended, because the authors did not account for the noise in the pilot estimate. I recommend running a simulation to assess power, and aiming for at least 80% power (preferably 90%). A minimal sketch of what such a simulation could look like follows below.
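The sketch assumes a two-group comparison evaluated with a Dienes-style Bayes factor (normal likelihood, half-normal prior on the effect under H1) and resamples the true effect on each iteration to propagate the uncertainty in the pilot estimate. Every number and name in it (pilot effect, SE, within-group SD, group size) is an illustrative placeholder, not a value from the manuscript or the pre-registered analysis.

```python
# Sketch: simulation-based power analysis for a two-group design evaluated
# with a Dienes-style Bayes factor. All numbers are placeholders.
import numpy as np
from scipy import stats
from scipy.integrate import quad

rng = np.random.default_rng(2024)

def bayes_factor(obs_diff, se, h1_sd):
    """BF10: H1 (effect ~ half-normal(0, h1_sd)) vs H0 (effect = 0),
    treating the observed group difference as normal with standard error se."""
    like_h0 = stats.norm.pdf(obs_diff, 0.0, se)
    like_h1, _ = quad(
        lambda theta: stats.norm.pdf(obs_diff, theta, se)
                      * 2.0 * stats.norm.pdf(theta, 0.0, h1_sd),
        0.0, 10.0 * h1_sd)
    return like_h1 / like_h0

def simulate_power(n_per_group, pilot_effect, pilot_se, within_sd,
                   n_sims=1000, bf_threshold=3.0):
    hits = 0
    for _ in range(n_sims):
        # Resample the "true" effect instead of fixing it at the pilot
        # estimate, so the pilot's sampling error is propagated.
        true_effect = rng.normal(pilot_effect, pilot_se)
        treatment = rng.normal(true_effect, within_sd, n_per_group)
        control = rng.normal(0.0, within_sd, n_per_group)
        diff = treatment.mean() - control.mean()
        se = np.sqrt(treatment.var(ddof=1) / n_per_group
                     + control.var(ddof=1) / n_per_group)
        if bayes_factor(diff, se, h1_sd=pilot_effect) >= bf_threshold:
            hits += 1
    return hits / n_sims

# Placeholder numbers: pilot effect of 1.0 scale points (SE 0.4),
# within-group SD of 2.0, 50 participants per group.
print(simulate_power(50, pilot_effect=1.0, pilot_se=0.4, within_sd=2.0))
```

Increasing n_per_group until the printed proportion reaches 0.8 (or 0.9) would give the sample size target under these assumptions.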
- It is not clear why highly suggestible individuals are included in this experiment, given the likely ceiling effect such a training would have with them. Including only lows and mediums could increase the impact of the intervention and thus statistical power (of course with the trade-off of having to do a pretest or some sort of screening before, during, or after the study session). I see how this could interfere with the practicalities of the experiment, so I don’t expect this to be adopted necessarily, but it is something the authors might consider (especially if they are running other experiments from which it is easy to pre-screen individuals). Maybe it would be good to propose an exploratory analysis of the correlation between the effect of the training and baseline performance (on the three training suggestions). This can be done at no cost and could aid future studies/trainings using similar protocols.
- I expect that with improved matching between conditions, which will bring expectancies and demand characteristics closer between the groups, and also with blinding (or automation) of the trainer, the effect size would drop substantially. The change in effect size alters the sample size target, and the changes in the procedure might produce unexpected events or participant reactions. So I suggest running a new pilot study with the new protocol, and only then engaging in a full crucial test. I know this is very expensive, but conducting the crucial test without the modifications to the protocol will not really be a crucial test of the theory (it would “only” be an efficacy study of a training that is based on the theory). And running the study with so many modifications after the initial pilot, without piloting them, is risky. Because of the time and resource costs of running an extra pilot, I would accept it if the authors declined this, but then they need to acknowledge the limitations of what the findings of this experiment imply about the theory.
- Relatedly, I would suggest that the authors pre-formulate brief conclusions for the scenario where BF ≥ 3 is achieved on the involuntariness scale, for the scenario where BF ≤ 1/3 is achieved on that scale, and for the test being inconclusive. A toy sketch of this three-way rule follows below.
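For illustration only, the three pre-formulated scenarios correspond to a simple three-way decision rule; the function below is a hypothetical example, not the authors' pre-registered analysis.

```python
# Hypothetical three-way decision rule using the BF thresholds named above.
def interpret_bf(bf: float, threshold: float = 3.0) -> str:
    if bf >= threshold:
        return "evidence for H1: the training increases felt involuntariness"
    if bf <= 1.0 / threshold:
        return "evidence for H0: no effect of the training"
    return "inconclusive: data do not discriminate between H0 and H1"

print(interpret_bf(4.2))  # -> evidence for H1
print(interpret_bf(0.2))  # -> evidence for H0
print(interpret_bf(1.1))  # -> inconclusive
```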
- It would be great to include a link to the proposed analysis code (together with simulated data, or designed to work with the raw data from the pilot study).
- “We estimated the need number of subjects in the following way.” – “ed” missing from “needed”.
- “Participants will be randomly assigned to the either the control group or the intervention group.” – no “the” is required before “either”.