Recommendation

When do perceptions of wastefulness affect how people make choices?

Doug Markant based on reviews by Travis Carter and Quentin André
A recommendation of:

Revisiting the Psychology of Waste: Replication and extensions Registered Report of Arkes (1996)

Submission: posted 11 January 2024
Recommendation: posted 04 May 2024, validated 07 May 2024
Cite this recommendation as:
Markant, D. (2024) When do perceptions of wastefulness affect how people make choices? Peer Community in Registered Reports. https://rr.peercommunityin.org/PCIRegisteredReports/articles/rec?id=657

Recommendation

How do perceptions of wastefulness affect how people make choices? In an influential set of studies examining different conceptions of wasteful behavior (overspending, underutilization, and sunk costs), Arkes (1996) found a systematic aversion to wastefulness in decision making, even when choosing to avoid wastefulness has no economic value or works against personal interest. While these findings have been influential in basic and applied research, there have been no attempts to directly replicate the results. Moreover, the original study has several methodological limitations, including the use of relatively small samples and gaps in statistical analysis and reporting.
 
In this Stage 1 manuscript, Zhu and Feldman (2024) propose to conduct a high-powered replication of Arkes (1996) using an online sample of participants. The authors will incorporate several extensions to improve methodological rigor relative to the original article, including added comprehension checks, checks of the wastefulness manipulations, a within-subjects design, and a quantitative analysis of participants’ self-reported motivations for their choices. The results of the study will provide insight into the robustness of the original findings, while also better distinguishing wastefulness aversion from other potential reasons behind participants' decisions.
 
The Stage 1 submission was evaluated by the recommender and two expert reviewers. After two rounds of revision, the recommender determined that the manuscript met the Stage 1 criteria and awarded in-principle acceptance (IPA). 
 
URL to the preregistered Stage 1 protocol: https://osf.io/r7tsw
 
Level of bias control achieved: Level 6. No part of the data or evidence that will be used to answer the research question yet exists and no part will be generated until after IPA.
 
References
 
1. Arkes, H. R. (1996). The psychology of waste. Journal of Behavioral Decision Making, 9, 213-224. https://doi.org/10.1002/(SICI)1099-0771(199609)9:3%3C213::AID-BDM230%3E3.0.CO;2-1
 
2. Zhu, Z. & Feldman, G. (2024). Revisiting the Psychology of Waste: Replication and extensions Registered Report of Arkes (1996). In principle acceptance of Version 3 by Peer Community in Registered Reports. https://osf.io/r7tsw
Conflict of interest:
The recommender in charge of the evaluation of the article and the reviewers declared that they have no conflict of interest (as defined in the code of conduct of PCI) with the authors or with the content of the article.

Evaluation round #2

DOI or URL of the report: https://osf.io/xgf4y

Version of the report: 2

Author's Reply, 30 Apr 2024


Revised manuscript:  https://osf.io/xcths

All revised materials uploaded to:  https://osf.io/gf8rc/, updated manuscript under sub-directory "PCIRR Stage 1\PCI-RR submission following R&R 2"

Decision by Doug Markant, posted 24 Apr 2024, validated 24 Apr 2024

Dear Dr. Feldman,

Thank you for submitting your revisions and response to the reviews for the Stage 1 registered report entitled “Revisiting the Psychology of Waste: Replication and Extensions Registered Report of Arkes (1996)”.

Overall the revisions were comprehensive and highly responsive to the points raised in the reviews. There are two remaining minor issues that I’d ask you to address:

  1. In my view, the procedure for attention checks should be documented in the manuscript well enough for a reader to judge the level of protection against low-effort or automated responses, especially when it results in the exclusion of some participants. You point out in your response that failing the attention checks ends the survey and prevents the participant from starting over. I would ask that you simply add this detail to the Procedure section rather than require a reader to consult the survey itself. The description also does not mention the “copy-paste” question, which I would note *does* allow a participant to correct their response if they carelessly skip past it or don’t follow the instruction, so it’s not clear that this question serves the same purpose as the others in screening out inattentive participants.
  2. The other remaining issue is the plan to conduct analyses of order effects only if you fail to find evidence for the hypotheses. I don’t think order effects are akin to other extraneous factors like age and education; indeed, you anticipated the possibility of order effects impacting the replication outcome by planning for follow-up analyses. I do, however, appreciate the concern that reporting all of these additional analyses from the outset would hurt the readability and interpretability of the paper, and agree that these costs may not be worth it given that this is a nuisance factor unrelated to the theory being tested. But part of the reason for raising this issue is that, just as you mention in your response, these analyses bring some additional analytic flexibility, and the current description of the "pre-registered" plan in the Order Effects section does little to limit that flexibility (specifically on the question of how a moderating effect of order would be tested; one possible specification is sketched below). If you can provide more details about the plan for these tests at this stage, then I would recommend including them now. If you choose not to, the expectation would be that these analyses are appropriately caveated as exploratory in the Stage 2 submission.
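
For concreteness, here is a minimal sketch of one such pre-specified test, using a mixed-effects model with a condition-by-order interaction and a random intercept per participant. The file name, column names, and model structure are hypothetical illustrations, not the authors’ plan:

```python
# Order-moderation test on hypothetical long-format data: one row per
# participant x scenario, with the condition, the position (1-3) at
# which the scenario appeared, and the response being analyzed.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.read_csv("responses_long.csv")  # hypothetical file

# The random intercept per participant accounts for the within-subjects
# design; a significant C(condition):C(order) interaction would indicate
# that the manipulation's effect depends on presentation order.
model = smf.mixedlm(
    "response ~ C(condition) * C(order)",
    data=data,
    groups=data["participant"],
).fit()
print(model.summary())
```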

After these final points are addressed, I expect to be able to quickly move on to an IPA for this submission.

Best wishes,
Doug Markant


Evaluation round #1

DOI or URL of the report: https://osf.io/tas3j

Version of the report: 1

Author's Reply, 15 Apr 2024


Revised manuscript:  https://osf.io/xgf4y

All revised materials uploaded to:  https://osf.io/gf8rc/, updated manuscript under sub-directory "PCIRR Stage 1\PCI-RR submission following R&R"

Decision by Doug Markant, posted 29 Mar 2024, validated 02 Apr 2024

Dear Dr. Feldman,

Thank you for submitting your Stage 1 registered report entitled “Revisiting the Psychology of Waste: Replication and Extensions Registered Report of Arkes (1996)” to PCI Registered Reports. I have now received comments from two expert reviewers and have also read the report myself. Overall, we’re in agreement that your submission has several strengths, including a good justification for conducting a replication of the target article, well-documented plans for the study, clear criteria for evaluating the outcome of the replication, well-justified modifications to the original design, and a number of proposed extensions that improve upon the original study’s methodological rigor.

Based on my own reading and reviewers’ comments, I’ve summarized below the main points that should be addressed in a revision prior to an IPA. 

Major points

  1. Some aspects of the planned analyses need to be clarified. The description of the analyses starting on pg. 34 does not entirely match the simulated results. As reviewer TC notes, it’s not clear how you’ll use the ANOVA to draw conclusions about the role of the different reasons (relative to utility maximization) in wastefulness judgments. In general, the “Data analysis strategy” section would be improved by reiterating the hypotheses that are targeted by each set of tests (especially for the proposed extensions). The Bayesian analyses need more explanation about the approach and justification for the choice of prior (see also reviewer QA’s suggestion about an alternative that doesn’t rely on Bayes Factors). Reviewer TC also points out some inconsistencies between descriptions of the analyses of perceived wastefulness.
  2. Given that the within-subjects design is a major deviation from the target article, I agree with reviewer TC’s suggestion that an evaluation of order effects should be carried out regardless of the outcome of any other tests. If there is an effect of order, examining the scenario that is presented first in a follow-up analysis seems sensible but will change the sensitivity to detect the target effect sizes and may leave you with a smaller sample than the minimum of 240, so an alternative analysis approach that accounts for order might be preferable.
  3. The purpose of the attention checks at the beginning of the survey is somewhat unclear. It sounds like participants will have to answer Yes to each of them in order to proceed. My impression is that if participants fail the attention checks they are given a chance to correct their response (although I couldn’t quite tell from the survey flow). Perhaps this prompts people to pay closer attention at least momentarily, but it seems there’s no plan to exclude people based on whether they initially fail these questions, correct?
  4. A similar comment applies to the comprehension checks (see point by reviewer QA). It appears that the plan is not to exclude anyone based on these checks either, but rather to give them as many opportunities as they need to answer correctly in order to move on. Given the complexity of the scenarios I would be concerned that some participants are going to simply cycle through the options without understanding the scenario (and the risk of this might increase with later scenarios). This is a concern especially with the change from an in-person study in the target article to an online replication—if the replication fails, a natural question will be whether it’s due to the online setting. In follow-up analyses it would be reasonable to use responses on the comprehension questions to divide the sample into groups. If you foresee going down that route, I’d recommend specifying at this point how you would approach that (e.g., deciding now what would be a reasonable number of incorrect responses where you would classify someone as an inattentive respondent). If not, please address how you will otherwise guard against low-effort or inattentive responses of the kind commonly seen in online samples.

Minor points

  1. The Introduction is generally well-written and clearly organized, but there are a few points that are a bit too terse and need some clarification.
    1. Pg. 8: “Our secondary goal was to build on the target’s design and add extensions to refine the target’s methods and gain further insights.” — Although this information comes at a later point, it would be helpful to give a little more of a preview here of the extensions and the overall motivation for them.
    2. Pg. 15: “Given that the coding procedure was unclear and the process noisy…” — In what way? The preceding paragraph doesn’t seem to explain these limitations, apart from reasons not being measured in Study 2.
    3. Pg. 17: “we were concerned about a possible discrepancy between the Arkes’s (1996) conceptualization of the concept of wastefulness, and the laypersons’ perspective of wastefulness.” — What was this concern?

  2. Clarify early on that the study will use a within-subjects design (see point by reviewer TC). I too found the phrasing about a “unified” design to be ambiguous and to cause some confusion at some points. In addition, I’d strongly recommend that for the planned experiment the authors describe the three tasks as separate “scenarios” (or something similar) rather than “studies” (“studies” is appropriate when describing the target article, but is confusing here given the within-subjects design).
  3. Please address the question by reviewer TC about the “Likelihood” measure, where the simulated mean appears to fall outside the range of response options.
  4. Assuming that the red text is meant to be deleted for the Stage 2 submission, I’d suggest including some or all of the text at the top of pg. 20 about pretesting and incentive pay as part of the main text (e.g., the planned pay rate and survey duration). The two sections also seem to repeat some of the same statements and could be consolidated.
  5. Comprehension checks are described in two places (first on pg. 26 then on pg. 28), but they appear to be describing the same questions. I recommend consolidating these sections so they are described in one place.
  6. See the suggestions for improving the figures made by reviewer TC. NOTE: Although I agree with both points, given that the figures will be updated in the Stage 2 submission along with the rest of the results I don’t view these as necessary changes prior to IPA.

Best wishes,
Doug Markant

Reviewed by Travis Carter, 26 Mar 2024

I think the authors of this proposed replication and extension are well prepared to produce a solid contribution. The proposed plan is a faithful replication of the original article, with well-articulated and well-thought-out deviations from the original protocol to fit the present day (e.g., adjusting for inflation). Their proposed extensions are also well considered, intended to ameliorate clear deficits in the original article's method or reporting (e.g., a manipulation check; continuous measures to complement the forced-choice measures; more robust quantitative approaches to a measure that was originally purely qualitative). The proposed sample size is also a very nice improvement upon the original; the original article's samples were clearly insufficient to be very informative, even if they were normal at the time.


I noticed a few small issues that I would suggest the authors address, but overall it appears to be a very solid plan to replicate and extend an important article that has so far not been revisited.

Here are my suggestions: 
- Recommend that you state very clearly, much earlier in the article, that you are having participants complete all three studies (the term "unified data collection" is a bit ambiguous; it could be taken to mean that participants are randomly assigned to one of the three studies, rather than completing all three in a random order). 


- Relatedly, are there concerns about fatigue or bias being introduced by having them complete all three studies? The "unified" design is certainly efficient, and obviously you are doing the right thing by having the order counterbalanced, but you'll need to build in checks of whether the order matters (and *not* only if you fail to find support for the hypotheses, as stated in the note on pg 22), along with a plan for how to handle that situation if it does. Analyzing just the first scenario each person saw is one such solution, but that would reduce your power considerably.


- Mean of "Likelihood" in Study 1 is simulated to be 1.97 (Table 9); is that meant to be the average of the three response options (coded as -1, 0 and +1), or did I misunderstand and that's a separate question? And if I did misunderstand, it's not clear which question that would be.


- Reasons: I'm not 100% sure I understand the conclusions you're aiming to draw from the repeated-measures ANOVA used to analyze the Reasons ratings. That analysis will let you see whether any of the reasons differ from one another, but your language in interpreting the (simulated) results suggests you're hoping to do much more. How are you able to make an inference about whether decisions were "influenced by considerations other than utility maximization" (p. 45)? If you are hoping to compare the other reasons to utility maximization, it seems you'd need to ask about it explicitly. Plus, that particular analysis doesn't really lend itself to interpreting the absolute magnitude of those reasons: participants could rate *all* of the listed reasons as highly important to their decision, which would nonetheless show up as a non-significant F-test. You may wish to consider interpreting the reasons on an absolute scale (high vs. low) as well as relative to each other.


- Wastefulness extension: 
    - Inconsistency between the descriptive statistics and analyses for Study 2 (only two means are listed, but a mixed-model ANOVA is planned). 
    - For Studies 2 (p. 55) and 3 (p. 56), the paragraphs describe paired-samples t-tests instead of the analyses listed in Table 16 (p. 53).


- Figure 10: This plot needs a bit more explanation. Perhaps this is a new type of plot that I'm unfamiliar with, but all of the dashed lines seem uninformative. They should at least be explained in the note.

- Figure 11: Be consistent in labeling (include "rebate" vs. "no rebate" in addition to "waste" vs. "no waste")

 


Reviewed by Quentin André, 18 Mar 2024

My overall impression of the manuscript is very positive:

  • I agree with the authors' argument that Arkes (1996) has been an influential building block, and that the paper is an interesting target for replication.
  • The authors' familiarity with replications in general, and with the Registered Report format in particular, is evident: the hypotheses are very clearly laid out, the authors present simulated results (with appropriate conditional logic describing how they would interpret significant vs. non-significant results), any differences between the original and the replication are clearly documented, and the extension that the authors are planning appears meaningful (if modest) in scope.
  • The sample size justification appears sound (using the small telescopes approach offered by Simonsohn), and the authors' analytical strategy appears properly set up for meaningful inferences, regardless of how the data turn out.

I only have minor comments and suggestions:

  • Unless I missed it, the authors do not discuss how they are planning to handle comprehension checks. Given the potential for misunderstanding the scenarios (I must admit I found the movie scenario from Study 1 a bit hard to track when I first read it, something the authors cannot be blamed for), this appears to be an important aspect to discuss. In particular, for the between-subjects designs (Studies 2 and 3), it would be valuable to discuss how differential attrition across conditions will be handled (if, for instance, participants are less likely to pass the comprehension checks in one condition than the other, and they are excluded on this basis).
  • The authors have opted for a (mostly) exact replication of the findings of Arkes, only adjusting prices for inflation. I am well aware of the "blamed-if-you-do, blamed-if-you-don't" aspect of conducting replications: conceptual replications are often dismissed on the grounds that the materials were insufficiently close to the original, while exact replications are often dismissed on the grounds that times have changed and the materials need to be updated. Given this tension, I wonder whether there would be value in considering a scenario that would be a conceptual replication, either in addition to (resources permitting) or in replacement of one of the original scenarios.
  • This is a matter of taste, but I do not find Bayesian analyses based on Bayes factors easy to interpret, given that they require a prior. If I may suggest an alternative for interpreting a null result: a likelihood ratio test comparing the likelihood of the data under H0 to the likelihood of the data under a given H1 (which could be either the original effect size, or 33% of the original effect size following the "small telescopes" approach). This statistic is directly interpretable as "how many times more likely are the data under H0 vs. H1" (see the sketch below).
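
For concreteness, here is a minimal sketch of this likelihood ratio comparison, assuming a two-sample t-test design. All numbers below (sample sizes, observed t-statistic, original effect size) are illustrative assumptions, not values from the submission or from Arkes (1996):

```python
# Likelihood ratio for an observed t-statistic under H0 (d = 0) vs. a
# "small telescopes" H1 (d = one third of the original effect size).
import numpy as np
from scipy import stats

n1 = n2 = 120           # hypothetical per-condition sample sizes
dof = n1 + n2 - 2       # degrees of freedom for a two-sample t-test
t_obs = 0.80            # hypothetical observed t-statistic
d_orig = 0.50           # hypothetical original effect size (Cohen's d)
d_small = d_orig / 3    # H1: 33% of the original effect

# Noncentrality parameter implied by d_small in a two-sample design
ncp = d_small * np.sqrt(n1 * n2 / (n1 + n2))

lik_h0 = stats.t.pdf(t_obs, dof)          # likelihood of t_obs under H0
lik_h1 = stats.nct.pdf(t_obs, dof, ncp)   # likelihood of t_obs under H1

print(f"The data are {lik_h0 / lik_h1:.2f} times more likely under H0 than H1")
```

A ratio above 1 favors the null over the small-telescopes alternative; a ratio of 4, say, would mean the observed statistic is four times more likely if the true effect is zero than if it is a third of the original effect size.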

My first point is something that I would like to see addressed, while the second and third points are suggestions/matters of taste that the authors should feel free to ignore.

Thank you for an enjoyable and very detailed read, and I wish you a smooth data collection process!

Best regards,

Quentin André
