LOGAN Corina's profile

LOGAN Corina

  • Comparative Behavioral Ecology, Max Planck Institute for Evolutionary Anthropology, Leipzig, Germany
  • Life Sciences
  • administrator, manager, recommender, developer

Recommendations:  4

Reviews:  2

Areas of expertise
I investigate how behavioral flexibility relates to rapid geographic range expansions in grackles (a bird associated with human-modified environments) and other species (www.CorinaLogan.com). I co-founded ManyIndividuals (https://github.com/ManyIndividuals/ManyIndividuals) - a global network of researchers with field sites investigating hypotheses that involve generalizing across many individuals. We conduct the same tests in the same way across species to determine whether the results of particular experiments are generalizable beyond that population or species. I write my articles using rmarkdown (reproducible manuscripts where the text and R code live together), post them publicly at GitHub (https://github.com/corinalogan/grackles), and get them pre- and post-study peer reviewed at PCI. If they end up going to a journal after that, it has to be ethical! (See my list here: http://corinalogan.com/journals.html) I am a Senior Researcher at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, and I co-lead the #BulliedIntoBadScience campaign where early career researchers are working to change academic culture to adopt open research practices to improve research rigor (www.BulliedIntoBadScience.org). Follow along as I learn about grackles, implicit biases, and verifiable research on Mastodon (@CorinaLogan, https://nerdculture.de/@CorinaLogan).
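
As a rough illustration of that workflow (a hypothetical sketch with simulated data, not an excerpt from the grackle repository), an R Markdown manuscript interleaves prose with R code so that every reported number is recomputed from the data each time the document is rendered, for example with rmarkdown::render("manuscript.Rmd"):

    ---
    title: "A minimal reproducible manuscript"
    output: pdf_document
    ---

    ```{r simulate-data, include = FALSE}
    # Simulated example data; a real project would read measurements from a file.
    grackles <- data.frame(mass_g = rnorm(30, mean = 105, sd = 10))
    ```

    The mean body mass in this sample was `r round(mean(grackles$mass_g), 1)` g;
    this value is regenerated from the data on every render, so the text and
    the analysis cannot drift apart.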

Recommendations:  4

13 Nov 2023
STAGE 2

Convenience Samples and Measurement Equivalence in Replication Research

Data from students and crowdsourced online platforms often do not measure the same thing

Recommended by Corina Logan based on reviews by Benjamin Farrar and Shinichi Nakagawa

Comparative research is how evidence is generated to support or refute broad hypotheses (e.g., Pagel 1999). However, the foundations of such research must be solid if one is to arrive at the correct conclusions. Determining the external validity (the generalizability across situations/individuals/populations) of the building blocks of comparative data sets allows researchers to place appropriate caveats around the robustness of their conclusions (Steckler & McLeroy 2008).

In the current study, Alley and colleagues (2023) tackled the external validity of comparative research that relies on subjects who are either university students or recruited via an online platform. They determined whether data from these two types of subjects have measurement equivalence - whether the same trait is measured in the same way across groups.

Although they used data from studies involved in the Many Labs replication project to evaluate this question, their results are of crucial importance to other comparative researchers whose data are generated from these two sources (students and online crowdsourcing). The authors show that these two types of subjects often do not have measurement equivalence, which is a warning to others to evaluate their experimental design to improve validity. They provide useful recommendations for researchers on how to implement equivalence testing in their studies, and they facilitate the process by providing well-annotated code that is openly available for others to use.
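
To make the idea concrete, measurement equivalence is commonly assessed with multi-group confirmatory factor analysis. The sketch below uses the lavaan package in R; the data frame, grouping variable, and item names are hypothetical, and this is not the authors' own code:

    library(lavaan)

    # 'dat' is a hypothetical data frame with items item1..item4 and a
    # sample_type column (e.g., "student" vs. "online").
    model <- 'trait =~ item1 + item2 + item3 + item4'

    # Fit the model across the two groups with increasingly strict
    # equality constraints.
    configural <- cfa(model, data = dat, group = "sample_type")
    metric     <- cfa(model, data = dat, group = "sample_type",
                      group.equal = "loadings")
    scalar     <- cfa(model, data = dat, group = "sample_type",
                      group.equal = c("loadings", "intercepts"))

    # If the constrained models do not fit substantially worse than the
    # configural model, the two groups measure the trait equivalently.
    lavTestLRT(configural, metric, scalar)

Because the chi-square difference test is sensitive to sample size, such comparisons are often supplemented with changes in approximate fit indices (e.g., a drop in CFI larger than about .01 is commonly taken as evidence against the more constrained model).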

After one round of review and revision, the recommender judged that the manuscript met the Stage 2 criteria and awarded a positive recommendation.

URL to the preregistered Stage 1 protocol: https://osf.io/7gtvf
 
Level of bias control achieved: Level 2. At least some data/evidence that was used to answer the research question had been accessed and partially observed by the authors prior to Stage 1 IPA, but the authors certify that they had not yet observed the key variables within the data that were used to answer the research question AND they took additional steps to maximise bias control and rigour.
 
List of eligible PCI RR-friendly journals:
 
 
References
 
1. Pagel, M. (1999). Inferring the historical patterns of biological evolution. Nature, 401, 877-884. https://doi.org/10.1038/44766
 
2. Steckler, A., & McLeroy, K. R. (2008). The importance of external validity. American Journal of Public Health, 98, 9-10. https://doi.org/10.2105/AJPH.2007.126847
 
3. Alley, L. J., Axt, J., & Flake, J. K. (2023). Convenience Samples and Measurement Equivalence in Replication Research [Stage 2 Registered Report]. Acceptance of Version 2 by Peer Community in Registered Reports. https://osf.io/s5t3v
18 Aug 2023
STAGE 2

Evaluating the Pedagogical Effectiveness of Study Preregistration in the Undergraduate Dissertation

Incorporating open research practices into the undergraduate curriculum increases understanding of such practices

Recommended by Corina Logan based on reviews by Kelsey McCune, Neil Lewis, Jr., Lisa Spitzer and 1 anonymous reviewer

In a time when open research practices are becoming more widely used to combat questionable research practices (QRPs) in academia, this Registered Report by Pownall and colleagues (2023) empirically investigated the practice of preregistering study plans, allowing us to better understand to what degree such practices increase awareness of QRPs and whether experience with preregistration helps reduce engagement in QRPs. This investigation is timely because results from these kinds of studies have only recently become available, and they provide evidence that open research practices can improve research quality and reliability (e.g., Soderberg et al. 2021, Chambers & Tzavella 2022). The authors crucially focused on the effect of preregistering the undergraduate senior thesis (of psychology students in the UK), which is a key stage in the development of an academic.
 
Pownall and colleagues found that preregistration did not affect attitudes toward QRPs, but it did improve student understanding of open research practices. Using exploratory analyses, they additionally found that the students who preregistered were those who reported greater capability, motivation, and opportunity to do so. This shows how important it is to teach open research practices in ways that increase students' capability, motivation, and opportunity to pursue them, whether preregistration or other practices that are better known to reduce QRPs (such as Registered Reports; Krypotos et al. 2022).
 
After four rounds of review and revisions, the recommender judged that the manuscript met the Stage 2 criteria and awarded a positive recommendation.
 
URL to the preregistered Stage 1 protocol: https://osf.io/9hjbw
 
Level of bias control achieved: Level 6. No part of the data or evidence that was used to answer the research question was generated until after IPA.
 
List of eligible PCI RR-friendly journals:
 
 
References
 
1. Chambers, C. D., & Tzavella, L. (2022). The past, present, and future of Registered Reports. Nature Human Behaviour, 6, 29-42. https://doi.org/10.1038/s41562-021-01193-7
 
2. Krypotos, A. M., Mertens, G., Klugkist, I., & Engelhard, I. M. (2022). Preregistration: Definition, advantages, disadvantages, and how it can help against questionable research practices. In Avoiding Questionable Research Practices in Applied Psychology (pp. 343-357). Cham: Springer International Publishing.
 
3. Pownall, M., Pennington, C. R., Norris, E., Juanchich, M., Smaile, D., Russell, S., Gooch, D., Rhys Evans, T., Persson, S., Mak, M. H. C., Tzavella, L., Monk, R., Gough, T., Benwell, C. S. Y., Elsherif, M., Farran, E., Gallagher-Mitchell, T., Kendrick, L. T., Bahnmueller, J., Nordmann, E., Zaneva, M., Gilligan-Lee, K., Bazhydai, M., Jones, A., Sedgmond, J., Holzleitner, I., Reynolds, J., Moss, J., Farrelly, D., Parker, A. J. & Clark, K. (2023). Evaluating the pedagogical effectiveness of study preregistration in the undergraduate dissertation [Stage 2 Registered Report], acceptance of Version 4 by Peer Community in Registered Reports. https://psyarxiv.com/xg2ah
 
4. Soderberg, C. K., Errington, T. M., Schiavone, S. R., Bottesini, J., Thorn, F. S., Vazire, S., Esterling, K. M., & Nosek, B. A. (2021). Initial evidence of research quality of registered reports compared with the standard publishing model. Nature Human Behaviour, 5, 990–997. https://doi.org/10.1038/s41562-021-01142-4
21 Mar 2023
STAGE 1

Convenience Samples and Measurement Equivalence in Replication Research

Do data from students and crowdsourced online platforms measure the same thing? Determining the external validity of combining data from these two types of subjects

Recommended by Corina Logan based on reviews by Benjamin Farrar and Shinichi Nakagawa

Comparative research is how evidence is generated to support or refute broad hypotheses (e.g., Pagel 1999). However, the foundations of such research must be solid if one is to arrive at the correct conclusions. Determining the external validity (the generalizability across situations/individuals/populations) of the building blocks of comparative data sets allows researchers to place appropriate caveats around the robustness of their conclusions (Steckler & McLeroy 2008).
 
In this Registered Report, Alley and colleagues plan to tackle the external validity of comparative research that relies on subjects who are either university students or recruited via an online platform (Alley et al. 2023). They will determine whether data from these two types of subjects have measurement equivalence - whether the same trait is measured in the same way across groups. Although they use data from studies involved in the Many Labs replication project to evaluate this question, their results will be of crucial importance to other comparative researchers whose data are generated from these two sources (students and online crowdsourcing). If Alley and colleagues show that these two types of subjects have measurement equivalence, this would suggest that equivalence is more likely to hold for other studies relying on such subjects as well. If measurement equivalence is not found, it is a warning to others to evaluate their experimental design to improve validity. In either case, the study gives researchers a way to test measurement equivalence for themselves because the code is well annotated and openly available for others to use.

The Stage 1 manuscript was evaluated over two rounds of in-depth review. Based on detailed responses to the reviewers' comments, the recommender judged that the manuscript met the Stage 1 criteria and therefore awarded in-principle acceptance (IPA).

URL to the preregistered Stage 1 protocol: https://osf.io/7gtvf
 
Level of bias control achieved: Level 2. At least some data/evidence that will be used to answer the research question has been accessed and partially observed by the authors, but the authors certify that they have not yet observed the key variables within the data that will be used to answer the research question AND they have taken additional steps to maximise bias control and rigour (e.g. conservative statistical threshold; recruitment of a blinded analyst; robustness testing, multiverse/specification analysis, or other approach) 
 
List of eligible PCI RR-friendly journals:
 
 
References
 
1. Alley, L. J., Axt, J., & Flake, J. K. (2023). Convenience Samples and Measurement Equivalence in Replication Research. In principle acceptance of Version 4 by Peer Community in Registered Reports. https://osf.io/7gtvf

2. Steckler, A., & McLeroy, K. R. (2008). The importance of external validity. American Journal of Public Health, 98, 9-10. https://doi.org/10.2105/AJPH.2007.126847

3. Pagel, M. (1999). Inferring the historical patterns of biological evolution. Nature, 401, 877-884. https://doi.org/10.1038/44766
29 Sep 2021
STAGE 1

Evaluating the pedagogical effectiveness of study preregistration in the undergraduate dissertation: A Registered Report

Does incorporating open research practices into the undergraduate curriculum decrease questionable research practices?

Recommended by Corina Logan based on reviews by Kelsey McCune, Neil Lewis, Jr., Lisa Spitzer and 1 anonymous reviewer

In a time when open research practices are becoming more widely used to combat questionable research practices (QRPs) in academia, this Stage 1 Registered Report by Pownall and colleagues (2021) will empirically investigate the practice of preregistering study plans, which will allow us to better understand to what degree such practices increase awareness of QRPs and whether experience with preregistration helps reduce engagement in QRPs. This investigation is timely because results from these kinds of studies are only recently becoming available, and the conclusions are providing evidence that open research practices can improve research quality and reliability (e.g., Soderberg et al. 2021, Chambers & Tzavella 2021). The authors crucially focus on the effect of preregistering the undergraduate senior thesis (of psychology students in the UK), which is a key stage in the development of an academic. These data will help shape the future of how we should teach open research practices and what effect we as teachers can have on budding research careers.

The five expert peer reviews were extremely thorough and of very high quality. The authors did an excellent job of addressing all of the comments in their responses and revised manuscript versions, which resulted in only one round of peer review, plus a second revision based on recommender feedback. As such, this Registered Report meets the Stage 1 criteria and is therefore awarded in-principle acceptance (IPA). We wish the authors the best of luck with the study and we look forward to seeing the results.

URL to the preregistered Stage 1 protocol: https://osf.io/9hjbw

Level of bias control achieved: Level 6. No part of the data or evidence that will be used to answer the research question yet exists and no part will be generated until after IPA.

List of eligible PCI RR-friendly journals:

References

1. Pownall, M., Pennington, C. R., Norris, E., & Clark, K. (2021). Evaluating the pedagogical effectiveness of study preregistration in the undergraduate dissertation: A Registered Report. OSF, Stage 1 preregistration, in principle acceptance of Version 1 by Peer Community in Registered Reports. https://doi.org/10.17605/OSF.IO/9HJBW

2. Chambers, C. D., & Tzavella, L. (2021). The past, present, and future of Registered Reports. https://doi.org/10.31222/osf.io/43298

3. Soderberg, C. K., Errington, T. M., Schiavone, S. R., Bottesini, J., Thorn, F. S., Vazire, S., Esterling, K. M., & Nosek, B. A. (2021). Initial evidence of research quality of registered reports compared with the standard publishing model. Nature Human Behaviour, 5, 990–997. https://doi.org/10.1038/s41562-021-01142-4

Reviews:  2

08 Nov 2023
STAGE 2

Responding to Online Toxicity: Which Strategies Make Others Feel Freer to Contribute, Believe That Toxicity Will Decrease, and Believe that Justice Has Been Restored?

Benevolent correction may provide a promising antidote to online toxicity

Recommended based on reviews by Corina Logan and Marcel Martončik

Social media is a popular tool for online discussion and debate, bringing with it various forms of hostile interaction – from offensive remarks and insults to harassment and threats of physical violence. The nature of such online toxicity has been well studied, but much remains to be understood about strategies to reduce it. Existing theory and evidence suggest that a range of responses – including those that emphasise prosociality and empathy – might be effective at mitigating online toxicity. But do such strategies work in practice?
 
In the current study, Young Reusser et al. (2023) tested the effectiveness of three types of responses to online toxicity – benevolent correction (including disagreement), benevolent going along (including joking/agreement), and retaliation (additional toxicity) – on how free participants felt to contribute to conversations, their belief that the toxicity would be reduced by the intervention, and their belief that justice had been restored.
 
The results showed that benevolent correction – while an uncommon strategy in online communities – was the most effective in helping participants feel freer to contribute to online discussions. Benevolent correction was also the preferred approach for discouraging toxicity and restoring justice. Overall, the findings suggest that responding to toxic commenters with empathy and understanding while (crucially) also correcting their toxicity may be an effective intervention for bystanders seeking to improve the health of online interaction. The authors note that future research should focus on whether benevolent correction actually discourages toxicity, which was not tested in the current experiment, and, if so, how the use of benevolent corrections might be encouraged.
 
Following one round of review and revisions, the recommender judged that the manuscript met the Stage 2 criteria and awarded a positive recommendation.
 
URL to the preregistered Stage 1 protocol: https://osf.io/hfjnb
 
Level of bias control achieved: Level 6. No part of the data or evidence that was used to answer the research question was generated until after IPA. 
 
List of eligible PCI RR-friendly journals:
 
References
 
1. Young Reusser, A. I., Veit, K. M., Gassin, E. A., & Case, J. P. (2023). Responding to Online Toxicity: Which Strategies Make Others Feel Freer to Contribute, Believe That Toxicity Will Decrease, and Believe that Justice Has Been Restored? [Stage 2 Registered Report]. Acceptance of Version 2 by Peer Community in Registered Reports. https://osf.io/k46e8
23 Jan 2023
STAGE 1

Responding to Online Toxicity: Which Strategies Make Others Feel Freer to Contribute, Believe That Toxicity Will Decrease, and Believe that Justice Has Been Restored?

Testing antidotes to online toxicity

Recommended based on reviews by Corina Logan and Marcel Martončik

Social media is a popular tool for online discussion and debate, bringing with it various forms of hostile interaction – from offensive remarks and insults to harassment and threats of physical violence. The nature of such online toxicity has been well studied, but much remains to be understood about strategies to reduce it. Existing theory and evidence suggest that a range of responses – including those that emphasise prosociality and empathy – might be effective at mitigating online toxicity. But do such strategies work in practice?
 
In the current study, Young Reusser et al. (2023) propose an experiment to test the effectiveness of three types of responses to online toxicity – Benevolent Correction (including disagreement), Benevolent Going Along (including joking/agreement), and Retaliation (additional toxicity) – on how free participants feel to contribute to conversations, their belief that the toxicity would be reduced by the intervention, and their belief that justice had been restored. The findings promise to shed light on approaches for improving the health of online discourse.
 
The Stage 1 manuscript was evaluated over two rounds of in-depth review. Based on detailed responses to the reviewers' comments, the recommender judged that the manuscript met the Stage 1 criteria and therefore awarded in-principle acceptance (IPA).
 
URL to the preregistered Stage 1 protocol: https://osf.io/hfjnb (under temporary private embargo)
 
Level of bias control achieved: Level 6. No part of the data or evidence that will be used to answer the research question yet exists and no part will be generated until after IPA. 
 
List of eligible PCI RR-friendly journals:
 
References
 
1. Young Reusser, A. I., Veit, K. M., Gassin, E. A., & Case, J. P. (2023). Responding to Online Toxicity: Which Strategies Make Others Feel Freer to Contribute, Believe That Toxicity Will Decrease, and Believe that Justice Has Been Restored? In principle acceptance of Version 3 by Peer Community in Registered Reports. https://osf.io/hfjnb