Understanding biases and heuristics in charity donations
Factors impacting effective altruism: Revisiting heuristics and biases in charity in a replication and extensions of Baron and Szymanska (2011)
Recommendation: posted 10 July 2023, validated 11 July 2023
Espinosa, R. (2023) Understanding biases and heuristics in charity donations. Peer Community in Registered Reports. https://rr.peercommunityin.org/articles/rec?id=413
Recommendation
Level of bias control achieved: Level 6. No part of the data or evidence that will be used to answer the research question yet exists and no part will be generated until after IPA.
List of eligible PCI RR-friendly journals:
- F1000Research
- Meta-Psychology
- Peer Community Journal
- PeerJ
- Royal Society Open Science
- Swiss Psychology Open
The recommender in charge of the evaluation of the article and the reviewers declared that they have no conflict of interest (as defined in the code of conduct of PCI) with the authors or with the content of the article.
Evaluation round #2
DOI or URL of the report: https://osf.io/8ez4q
Version of the report: 2
Author's Reply, 09 Jul 2023
Revised manuscript: https://osf.io/2w3zy
All revised materials uploaded to: https://osf.io/bep78/, updated manuscript under sub-directory "PCIRR Stage 1\PCI-RR submission following R&R 2"
Decision by Romain Espinosa, posted 04 Jul 2023, validated 04 Jul 2023
Dear authors,
Thank you very much for submitting a revised version of your manuscript. Both referees find that you did a great job in revising the manuscript. Jonathan has no further comment. Amanda gives additional comments that I think can be addressed in a minor revision.
Regarding Amanda's first comment, I find the small-telescopes analysis interesting. I do not know whether you can implement it, because I am not sure you have the effect sizes from the original paper (given in Table 5). If you can implement it, I think it would be worth mentioning that you plan to do this analysis (or some equivalence testing?).
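For illustration only, here is a minimal Python sketch of how a small-telescopes benchmark (the effect size the original study had 33% power to detect) and an equivalence (TOST) test could look. The per-cell sample size, SESOI bound, and data below are placeholders, not values from the original paper or the protocol, and the use of statsmodels is simply an assumption about tooling.

```python
# Minimal sketch, not the registered analysis: small-telescopes benchmark (d_33%)
# plus a two-one-sided-tests (TOST) equivalence test on simulated data.
import numpy as np
from statsmodels.stats.power import TTestIndPower
from statsmodels.stats.weightstats import ttost_ind

# d_33%: the effect size the original study had 33% power to detect
n_original_per_cell = 50                                 # placeholder, not the real n
d33 = TTestIndPower().solve_power(nobs1=n_original_per_cell,
                                  alpha=0.05, power=0.33,
                                  ratio=1.0, alternative='larger')
print(f"Small-telescopes benchmark d_33% = {d33:.3f}")

# Equivalence test against a symmetric SESOI expressed in raw scale units
rng = np.random.default_rng(1)
group_a = rng.normal(4.0, 1.5, 300)                      # simulated replication data
group_b = rng.normal(4.1, 1.5, 300)
sesoi = 0.5                                              # placeholder equivalence bound
p_tost, res_low, res_upp = ttost_ind(group_a, group_b, -sesoi, sesoi, usevar='pooled')
print(f"TOST p = {p_tost:.3f} (significant -> effect lies within +/- {sesoi})")
```

Under the small-telescopes logic, a replication effect estimated to be reliably smaller than d_33% would be treated as inconsistent with an effect the original study could meaningfully have detected.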
Regarding the second comment: I understand your point (overall replication), and I think Amanda is also right about what we can learn about charity giving (which hypotheses hold and which do not). In my understanding, you can address that ex post in the discussion section.
I leave it to you to decide what to do with the remaining comments. Amanda's work on the scale point order is exceptionally nice. I am sympathetic to the idea of sticking as close as possible to the paper you seek to replicate (because if you deviate and the results do not replicate, we will not know why), but her results are interesting (and supportive of your design choice).
I am looking forward to the revised version of the manuscript and a description of how you addressed Amanda's comments.
Thank you again for considering PCI-RR for your RR.
Best regards,
Romain
Reviewed by Amanda Geiser, 26 Jun 2023
Reviewed by Jonathan Berman, 02 Jun 2023
The authors have sufficiently addressed all my concerns.
Evaluation round #1
DOI or URL of the report: https://osf.io/9td7x
Version of the report: 1
Author's Reply, 31 May 2023
Revised manuscript: https://osf.io/8ez4q
All revised materials uploaded to: https://osf.io/bep78/, updated manuscript under sub-directory "PCIRR Stage 1\PCI-RR submission following R&R"
Decision by Romain Espinosa, posted 19 May 2023, validated 19 May 2023
Dear authors,
Thank you very much for submitting your Stage 1 manuscript. I have read the paper with close attention and have heard back from the reviewers. I really enjoyed reading it. Still, I believe that the paper would benefit from a revision. My comments are below, followed by the reviewers' feedback.
First, I believe that the paper would greatly benefit from improving its structure. For instance:
- Please number the sections and subsections to facilitate reading and reviewing.
- Also, please make a clear distinction between the introduction, the presentation of the paper you replicate, and the hypotheses you aim to test.
- I do not know what "exploratory directions" are. It is uncommon to have a section at the beginning of the manuscript that you leave for Stage 2. If it is not necessary, I would recommend keeping only the section "exploratory analyses".
Besides, I have recently reviewed one RR and recommended another one from Gilad Feldman (a co-author of this manuscript). If possible, it would be good to integrate the general remarks that the referees and I made on those manuscripts. These include, for instance:
- Justification of the p-value threshold used for the multiple analyses. (Why not use a standard method such as a Holm-Bonferroni correction, or set alpha = 0.005 instead? See the sketch after this list.)
- Giving more details in your Study-Design Table about the analysis plan and/or the hypotheses. For instance, the Design Table of Kroll et al. that Gilad Feldman co-authored is very good.
- There are many missing elements in the current design table. What is H0, exactly? What is the significance threshold? What would you do if you reject some of the hypotheses and not others? I think that you might replicate some biases but not others. Given that you indicated a joint interpretation of the results, what will you do in that case?
- Please discuss the potential risk of floor and ceiling effects. (I believe there is no risk for the one-sample t-tests, but there might be for the two-sample t-tests.)
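As a purely illustrative aside on the Holm-Bonferroni point above, a minimal Python sketch of how such a correction could be applied; the p-values are placeholders, not results, and statsmodels is assumed as tooling.

```python
# Minimal sketch of a Holm-Bonferroni correction across a family of planned tests.
from statsmodels.stats.multitest import multipletests

p_values = [0.001, 0.012, 0.034, 0.21, 0.48]      # placeholder p-values, one per test
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method='holm')
for p_raw, p_adj, rej in zip(p_values, p_adjusted, reject):
    print(f"raw p = {p_raw:.3f}  Holm-adjusted p = {p_adj:.3f}  reject H0: {rej}")
```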
Other comments:
1) In Roth et al. (2015), which is cited, it is written: "[...] Arkes and Blumer (1985: 124) define the sunk-cost effect as “a greater tendency to continue an endeavor once an investment in money, effort, or time has been made.”"
--> In your experiment, I am not sure that you explore the sunk-cost effect. In my view, your experiment does not capture how one "continues" investing in a charity based on previous decisions, so it is not about sunk costs. This might be an issue in the original study (or I might be mistaken here).
2) When reading the paper, I am not fully convinced that this is the best structure. When you introduce the hypotheses, you mention the studies, but you have not presented them yet, which is a bit confusing for readers. You might consider presenting the studies first and then describing the associated hypotheses and how they can be tested. (That would feel much more natural to me.)
3) I have a question regarding your tests. In the appendix (page 4), in the section about the SESOI, it seems that you are running two-sided tests. I think you have directional predictions, so I wonder why you run two-sided t-tests. (A sketch covering this and the next point follows below.)
4) Regarding your assumption checks: you explain that you will check for normality and homogeneity of variance. Please indicate precisely how you will assess the validity of these assumptions (how do you define a heavy violation?), and please indicate which non-parametric tests you will use in case the assumptions do not hold.
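To make points 3 and 4 concrete, here is a minimal Python sketch of one possible decision rule: check normality (Shapiro-Wilk) and homogeneity of variance (Levene), then run a one-sided t-test in line with the directional prediction, falling back to a one-sided Mann-Whitney U test if the assumptions look heavily violated. The data, the violation criterion, and the specific tests are placeholders and assumptions, not the registered plan.

```python
# Minimal sketch, assuming simulated data and an arbitrary violation criterion.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
treatment = rng.normal(4.5, 1.5, 200)            # simulated ratings
control = rng.normal(4.0, 1.5, 200)

_, p_norm_t = stats.shapiro(treatment)           # normality checks
_, p_norm_c = stats.shapiro(control)
_, p_levene = stats.levene(treatment, control)   # homogeneity of variance

if min(p_norm_t, p_norm_c, p_levene) > 0.05:     # placeholder definition of "heavy violation"
    # directional prediction: treatment > control, hence a one-sided test
    t_stat, p = stats.ttest_ind(treatment, control, alternative='greater')
    print(f"t-test (one-sided): p = {p:.3f}")
else:
    u_stat, p = stats.mannwhitneyu(treatment, control, alternative='greater')
    print(f"Mann-Whitney U (one-sided): p = {p:.3f}")
```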
Regarding the reviewers' comments, I would like you to consider them. More specifically:
- I agree with Amanda Geiser that the interpretation of the results should be discussed more, especially if the effect sizes (ES) are smaller than expected.
- I also agree with increasing the sample size to ensure that you can capture smaller ES than the original paper reports. For instance, the second reviewer, Jonathan Berman, suggests multiplying the original sample size by 2.5 (see the power-analysis sketch after this list).
- I believe that it might be difficult to implement, but if you can indeed include incentive-compatible donation decisions, as Amanda Geiser suggests, it would be excellent. (It is not necessary for the replication, though.)
- Jonathan's comment regarding the diversification effect is particularly interesting.
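For illustration of the sample-size point above, a minimal Python sketch of the smallest standardized effect detectable with high power when the original per-cell n is multiplied by 2.5; the original n, the target power, and the one-sided alternative are placeholders and assumptions, not the protocol's values.

```python
# Minimal sketch: smallest detectable effect size at 2.5x the original per-cell n.
from statsmodels.stats.power import TTestIndPower

n_original_per_cell = 50                          # placeholder, not the real n
n_replication = int(round(2.5 * n_original_per_cell))
detectable_d = TTestIndPower().solve_power(nobs1=n_replication,
                                           alpha=0.05, power=0.95,
                                           ratio=1.0, alternative='larger')
print(f"With n = {n_replication} per cell, 95% power is reached for d >= {detectable_d:.3f}")
```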
I am looking forward to receiving the updated version of your work!
Thank you very much for considering PCI-RR for your manuscript.
Best regards,
Romain