31 May 2024
STAGE 1
Unveiling the Positivity Bias on Social Media: A Registered Experimental Study On Facebook, Instagram, And X
Social media positivity bias
Recommended by Veli-Matti Karhulahti based on reviews by Linda Kaye, Marcel Martončik, Julius Klingelhoefer and 1 anonymous reviewer
Both research and public debates around social media use tend to involve a premise of positivity bias, which refers to presenting one's life in an overly positive light by various means. This premise underpins multiple potentially important follow-up hypotheses, such as fear of missing out and low self-image effects arising from repeated consumption of positive social media content (e.g., Bayer et al. 2020, for a review). However, positivity bias in social media use has itself received limited research attention.
In the present study, Masciantonio and colleagues (2024) will test positivity bias in the context of three social media platforms: Facebook, Instagram, and X. The experiment involves recruiting participants into platform-specific user groups and crafting posts to be shared with friends as well as respective social media audiences. If positivity bias manifests in this context, the social media posts should introduce more positive valence in comparison to offline sharing—and if the platforms differ in their encouragement of positivity bias, they should introduce significant between-platform differences in valence.
The Stage 1 plan was reviewed by four independent experts representing relevant areas of methodological and topic expertise. Three reviewers proceeded through three rounds of review, after which the study was judged to have met all Stage 1 criteria and the recommender granted in-principle acceptance.
URL to the preregistered Stage 1 protocol: https://osf.io/9z6hm
Level of bias control achieved: Level 6. No part of the data or evidence that will be used to answer the research question yet exists and no part will be generated until after IPA.
List of eligible PCI RR-friendly journals:
- Collabra: Psychology
- International Review of Social Psychology
- Peer Community Journal
- PeerJ
- Royal Society Open Science
- Social Psychological Bulletin
- Studia Psychologica
- Swiss Psychology Open
References
1. Bayer, J. B., Triệu, P., & Ellison, N. B. (2020). Social media elements, ecologies, and effects. Annual Review of Psychology, 71, 471–497. https://doi.org/10.1146/annurev-psych-010419-050944
2. Masciantonio, A., Heiser, N., & Cherbonnier, A. (2024). Unveiling the Positivity Bias on Social Media: A Registered Experimental Study On Facebook, Instagram, And X. In principle acceptance of Version 4 by Peer Community in Registered Reports. https://osf.io/9z6hm
Responding to Online Toxicity: Which Strategies Make Others Feel Freer to Contribute, Believe That Toxicity Will Decrease, and Believe that Justice Has Been Restored?
Benevolent correction may provide a promising antidote to online toxicity
Recommended by Chris Chambers based on reviews by Corina Logan and Marcel Martončik
Social media is a popular tool for online discussion and debate, bringing with it various forms of hostile interaction – from offensive remarks and insults to harassment and threats of physical violence. The nature of such online toxicity has been well studied, but much remains to be understood about strategies to reduce it. Existing theory and evidence suggest that a range of responses – including those that emphasise prosociality and empathy – might be effective at mitigating online toxicity. But do such strategies work in practice?
In the current study, Young Reusser et al. (2023) tested the effectiveness of three types of responses to online toxicity – benevolent correction (including disagreement), benevolent going along (including joking/agreement), and retaliation (additional toxicity) – on how able participants felt to contribute to conversations, their belief that the toxicity would be reduced by the intervention, and their belief that justice had been restored.
The results showed that benevolent correction – while an uncommon strategy in online communities – was most effective in helping participants feel freer to contribute to online discussions. Benevolent correction was also the preferred approach for discouraging toxicity and restoring justice. Overall, the findings suggest that responding to toxic commenters with empathy and understanding while (crucially) also correcting their toxicity may be an effective intervention for bystanders seeking to improve the health of online interaction. The authors note that future research should examine whether benevolent correction actually discourages toxicity, which was not tested in the current experiment, and if so, how the use of benevolent corrections might be encouraged.
Following one round of review and revisions, the recommender judged that the manuscript met the Stage 2 criteria and awarded a positive recommendation.
URL to the preregistered Stage 1 protocol: https://osf.io/hfjnb
Level of bias control achieved: Level 6. No part of the data or evidence that was used to answer the research question was generated until after IPA.
List of eligible PCI RR-friendly journals:
- Collabra: Psychology
- F1000Research
- International Review of Social Psychology
- Peer Community Journal
- PeerJ
- Royal Society Open Science
- Studia Psychologica
- Swiss Psychology Open
References
1. Young Reusser, A. I., Veit, K. M., Gassin, E. A., & Case, J. P. (2023). Responding to Online Toxicity: Which Strategies Make Others Feel Freer to Contribute, Believe That Toxicity Will Decrease, and Believe that Justice Has Been Restored? [Stage 2 Registered Report]. Acceptance of Version 2 by Peer Community in Registered Reports. https://osf.io/k46e8
23 Jan 2023
STAGE 1
Responding to Online Toxicity: Which Strategies Make Others Feel Freer to Contribute, Believe That Toxicity Will Decrease, and Believe that Justice Has Been Restored?
Testing antidotes to online toxicity
Recommended by Chris Chambers based on reviews by Corina Logan and Marcel Martončik
Social media is a popular tool for online discussion and debate, bringing with it various forms of hostile interaction – from offensive remarks and insults to harassment and threats of physical violence. The nature of such online toxicity has been well studied, but much remains to be understood about strategies to reduce it. Existing theory and evidence suggest that a range of responses – including those that emphasise prosociality and empathy – might be effective at mitigating online toxicity. But do such strategies work in practice?
In the current study, Young Reusser et al. (2023) propose an experiment to test the effectiveness of three types of responses to online toxicity – Benevolent Correction (including disagreement), Benevolent Going Along (including joking/agreement), and Retaliation (additional toxicity) – on how able participants feel to contribute to conversations, their belief that the toxicity would be reduced by the intervention, and their belief that justice had been restored. The findings promise to shed light on approaches for improving the health of online discourse.
The Stage 1 manuscript was evaluated over two rounds of in-depth review. Based on detailed responses to the reviewers' comments, the recommender judged that the manuscript met the Stage 1 criteria and therefore awarded in-principle acceptance (IPA).
URL to the preregistered Stage 1 protocol: https://osf.io/hfjnb (under temporary private embargo)
Level of bias control achieved: Level 6. No part of the data or evidence that will be used to answer the research question yet exists and no part will be generated until after IPA.
List of eligible PCI RR-friendly journals:
References
1. Young Reusser, A. I., Veit, K. M., Gassin, E. A., & Case, J. P. (2023). Responding to Online Toxicity: Which Strategies Make Others Feel Freer to Contribute, Believe That Toxicity Will Decrease, and Believe that Justice Has Been Restored? In principle acceptance of Version 3 by Peer Community in Registered Reports. https://osf.io/hfjnb