
218

Taking A Closer Look At The Bayesian Truth Serum: A Registered Report
Philipp Schoenegger & Steven Verheyen
2022
Over the past decade, psychology and its cognate disciplines have undergone substantial scientific reform, ranging from advances in statistical methodology to significant changes in academic norms. One aspect of experimental design that has received comparatively little attention is incentivisation, i.e. the way that participants are rewarded and incentivised monetarily for their participation in experiments and surveys. While incentive-compatible designs are the norm in disciplines like economics, the majority of studies in psychology and experimental philosophy are constructed such that individuals' incentives to maximise their payoffs in many cases stand opposed to their incentives to state their true preferences honestly. This is in part because the subject matter is often self-report data about subjective topics and the sample is drawn from online platforms like Prolific or MTurk, where many participants are out to make a quick buck. One mechanism that allows for the introduction of an incentive-compatible design in such circumstances is the Bayesian Truth Serum (BTS; Prelec, 2004), which rewards participants based on how surprisingly common their answers are. Recently, Schoenegger (2021) applied this mechanism in the context of Likert-scale self-reports, finding that the introduction of this mechanism significantly altered response behaviour. In this registered report, we further investigate this mechanism by (i) attempting to directly replicate the previous result and (ii) analysing whether the Bayesian Truth Serum's effect is distinct from the effects of its constituent parts (an increase in expected earnings and the addition of prediction tasks). We fail to find significant differences in response behaviour between participants who were simply paid for completing the study and participants who were incentivised with the BTS.
Per our pre-registration, we regard this as evidence in favour of a null effect of up to V=.1 and a failure to replicate, but reserve judgment as to whether or not the BTS mechanism should be adopted in social science fields that rely heavily on Likert-scale items reporting subjective data, seeing that smaller effect sizes might still be of practical interest and results may differ for items different from the ones we studied. Further, we provide weak evidence that the prediction task itself influences response distributions and that this task's effect is distinct from an increase in expected earnings, suggesting a complex interaction between the BTS' constituent parts and its truth-telling instructions.
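The abstract describes the BTS as rewarding answers that are "surprisingly common": each respondent both answers the question and predicts how others will answer, and the score combines an information term (is the answer more common than collectively predicted?) with a prediction-accuracy penalty. The following is a minimal illustrative sketch of that scoring rule as given in Prelec (2004), not the authors' actual implementation; the function name, the epsilon floor, and the example inputs are assumptions for illustration.

```python
import math

def bts_scores(answers, predictions, alpha=1.0, eps=1e-9):
    """Sketch of Bayesian Truth Serum scoring (Prelec, 2004).

    answers:     list of chosen option indices, one per respondent
    predictions: list of per-respondent predicted frequency vectors
                 over the answer options (each vector sums to 1)
    alpha:       weight on the prediction-accuracy term
    """
    n = len(answers)
    k = len(predictions[0])
    # Empirical endorsement frequencies x̄_j (floored to avoid log 0)
    xbar = [max(sum(1 for a in answers if a == j) / n, eps) for j in range(k)]
    # Geometric mean of the predicted frequencies ȳ_j
    ybar = [math.exp(sum(math.log(max(p[j], eps)) for p in predictions) / n)
            for j in range(k)]
    scores = []
    for a, p in zip(answers, predictions):
        # Information score: positive when the chosen answer is
        # "surprisingly common" relative to the crowd's predictions
        info = math.log(xbar[a] / ybar[a])
        # Prediction score: a KL-style penalty for misestimating
        # the empirical answer distribution
        pred = sum(xbar[j] * math.log(max(p[j], eps) / xbar[j])
                   for j in range(k))
        scores.append(info + alpha * pred)
    return scores
```

Under this rule, a respondent whose answer turns out to be more common than the crowd predicted earns a positive information score, which is what makes truthful reporting incentive-compatible in expectation.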
Incentivisation, Bayesian Truth Serum, Methods, Rewards, Bonus
None
Social sciences
2022-06-11 14:39:38
Ljerka Ostojic