Understanding probability assessments with partitioned framing

Based on reviews by Olivier L'Haridon and Don Moore
A recommendation of:

Revisiting Partition Priming in judgment under uncertainty: Replication and extension Registered Report of Fox and Rottenstreich (2003)


Submission: posted 18 January 2024
Recommendation: posted 05 June 2024, validated 05 June 2024
Cite this recommendation as:
Espinosa, R. (2024) Understanding probability assessments with partitioned framing. Peer Community in Registered Reports.


Decision-making based on limited information is a common occurrence. Whether it is the possibility of a cheaper product elsewhere or the unknown qualifications of election candidates, people are regularly forced to make decisions under ignorance or uncertainty. In such situations, information about certain events is unavailable or too costly to acquire, and people rely on subjective probability allocation to guide their decision-making. This allocation seems to result in what are known as ignorance priors, i.e., decision-makers assigning equal probabilities to each possible outcome within a given set. How events are grouped or partitioned is often subjective and may influence probability judgments and subsequent decisions. In such cases, the way the choices within a choice set are presented may shape the perceived likelihood of different outcomes. Understanding the impact of partitioning on probability estimation is crucial for both psychological and economic theories of judgment and decision-making.
The question of evaluating probabilities under uncertainty has received much attention in the psychology and economics literature over the past decades, given the wide range of possible applications. In the current work, Ding and Feldman (2024) seek to replicate one of the foundational works on the topic: Fox and Rottenstreich (2003). In the original work, the authors provided exploratory evidence indicating that the framing of a situation affects the way individuals perceive the probabilities of possible outcomes. They showed that people assigned uniform probabilities to the sets of events described in a problem, so that the way events were described partly determined how people partitioned those events and evaluated the probabilities of the possible outcomes. Additionally, this partitioned framing affected judgments both under ignorance (where individuals have no information and rely solely on uniform probability assignments) and under uncertainty (where individuals have some information but still rely on heuristics influenced by partitioning). This suggests that priors inferred from available evidence are sometimes partly contaminated by partitioning bias, affecting both uninformed and partially informed decision-making. As a consequence, partitioning events into different subsets might lead to varying evaluations of a single situation, resulting in inconsistencies and poorly calibrated probability assessments.
Ding and Feldman (2024) aim to replicate Studies 1a, 1b, 3, and 4 from Fox and Rottenstreich (2003). Their close replication will rely on new data (US participants recruited via Prolific, N = 600, not yet collected) with high statistical power (>95%). The replication examines whether partitioned framing affects prior formation under ignorance (Studies 1a, 1b, and 4) and under uncertainty (Study 3). In addition, the authors propose an extension contrasting estimates of the probability that an event happens with estimates of the probability that it does not happen.
The Stage 1 manuscript was evaluated by two external reviewers and the recommender. Based on detailed responses to the reviewers' and the recommender’s comments, the recommender judged that the manuscript met the Stage 1 criteria and therefore awarded in-principle acceptance (IPA).
URL to the preregistered Stage 1 protocol:
Level of bias control achieved: Level 6. No part of the data or evidence that will be used to answer the research question yet exists and no part will be generated until after IPA.
List of eligible PCI RR-friendly journals:

References:
1. Ding, K. & Feldman, G. (2024). Revisiting Partition Priming in judgment under uncertainty: Replication and extension Registered Report of Fox and Rottenstreich (2003). In principle acceptance of Version 2 by Peer Community in Registered Reports.
2. Fox, C. R. & Rottenstreich, Y. (2003). Partition priming in judgment under uncertainty. Psychological Science, 14, 195-200.
Conflict of interest:
The recommender in charge of the evaluation of the article and the reviewers declared that they have no conflict of interest (as defined in the code of conduct of PCI) with the authors or with the content of the article.

Evaluation round #1

DOI or URL of the report:

Version of the report: 1

Author's Reply, 23 May 2024


Revised manuscript:

All revised materials uploaded to: , updated manuscript under sub-directory "PCIRR Stage 1\PCI-RR submission following R&R"

Decision, posted 10 Apr 2024, validated 10 Apr 2024

Dear authors,

Thank you very much for your submission. I have read your paper with great interest and received feedback from two reviewers. Given this feedback and my own reading of the paper, I recommend a revision to address the minor concerns that the two referees raised and that I also noted while reading your manuscript.

Please note that both Olivier and I commented on the Big Ten Conference item, which is, in my view, the major element of the revision. Olivier's suggestion to use Bayesian statistics with the original paper's results as priors can be treated as a general comment on your overall replication project and as material for exploratory discussion; however, I understand that you want to stay as close as possible to the tests used in the original paper. Last, both Olivier and I commented on payments. While these comments do not challenge your design, they might call for a dedicated subsection in the paper or, at least, a bit more emphasis.

I put my comments below. (As always, consider them with caution and feel free to contradict them: I might be mistaken.)

I am looking forward to receiving the revised version of your work.

Best regards,



——Recommender’s comments——

- In the original study, for Study 1, the authors underlined some parts of the text. I checked in Qualtrics and, indeed, you have underlined these sentences. You may wish to underline them in your paper as well (Table 5).

- You write that F&R did not mention that the Big Ten Conference had 11 teams and that they "assumed their participants would know that information". In their paper, Note #3 discusses this point. The authors write that "most people believe the Big Ten has 10 teams, but, in fact, it has 11." It seems to me that the authors were aware of this uncertainty. They assumed that, overall, people would hold ignorance beliefs of 1/10 vs. 9/10 under the class formulation. They further add in the note: "Thus, for some participants, the class ignorance prior may have been 1/11 - 10/11."

Given this, you could just leave it as they did, couldn’t you? Another possibility would be, at the end of the survey, to ask a question about it. Something like: in your opinion, how many teams participate in the Big Ten Conference? (You could tailor the ignorance prior at the participant’s level.)
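Tailoring the ignorance prior at the participant level, as suggested above, could be as simple as dividing by each participant's reported team count. A minimal sketch (the function name is illustrative, not from the manuscript):

```python
def class_ignorance_prior(reported_team_count: int) -> float:
    """Prior probability assigned to the target team under a class
    partition ("the team wins" vs. "some other team wins"), based on
    the number of teams the participant believes are in the league."""
    if reported_team_count < 1:
        raise ValueError("team count must be a positive integer")
    return 1.0 / reported_team_count

# A participant who believes the Big Ten has 10 teams gets a 1/10 prior;
# one who knows it has 11 teams gets 1/11.
p10 = class_ignorance_prior(10)  # 0.1
p11 = class_ignorance_prior(11)  # ~0.0909
```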

- I do not see any issue with the replacement of GM by IBM.

- I might be too much of an economist here but, in my view, the lack of real incentives in Study 4 is an important deviation from the original study. (People are likely to exert more cognitive effort if they are paid, especially for significant amounts of money; $10 here is a large amount given the time spent.) You mention this in Table 7 but not in Table 8. (By the way: you refer to Table 8 within Table 8. Is that a typo?)

- "while others were rounded to one decimal place —> 0.3": If I am not mistaken, they use 0.03, not 0.3.

- I think that there are some rationales behind the original authors' classification choices. For item 1 of S1: the chances are 1/7, which is 14.29%. So, assuming participants had to report integer percentages, they could choose 14 or 15, and the authors considered both answers correct. For the sports items: the probability is 1/10 (= 10%) if participants believe there are 10 teams and 1/11 (= 9.09%) if they think there are eleven, so both answers were considered correct. For item 3, I agree that they should have considered both 3% and 4% if they used the same method as for item 1. For Study 3, they may have used the same rule as for item 2 from S1 (in case people thought there were 10 teams).

—> In my opinion, the original approach is very strict in determining which answers fit the class-partition beliefs. People are not very good when it comes to probabilities, and I think that your approach is more appropriate in this regard. Personally, I would have preferred a more continuous measure of the closeness/distance to one theory relative to the other, but this would take you too far from the replication. (Maybe as exploratory discussion?)
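Both the rounding-based classification described above and the continuous closeness measure I have in mind could be sketched as follows (this is an illustrative reconstruction, not the original authors' code; function names and the one-percentage-point tolerance are my assumptions):

```python
def matches_prior(response_pct: int, prior: float) -> bool:
    """True if an integer percentage response is within one percentage
    point of the exact prior, so 1/7 = 14.29% accepts both 14 and 15.
    Checking a response against several candidate priors (e.g., 1/10
    and 1/11 for the sports items) reproduces the multi-answer rule."""
    exact = prior * 100
    return abs(response_pct - exact) < 1.0

def relative_closeness(response_pct: int,
                       case_prior: float,
                       class_prior: float) -> float:
    """Continuous score in [0, 1]: 0 means the response sits exactly on
    the case prior, 1 means exactly on the class prior."""
    d_case = abs(response_pct - case_prior * 100)
    d_class = abs(response_pct - class_prior * 100)
    if d_case + d_class == 0:  # both priors identical and matched
        return 0.5
    return d_case / (d_case + d_class)
```

For example, with a case prior of 1/2 and a class prior of 1/7, a response of 14% matches the class prior under the rounding rule and scores close to 1 on the continuous measure, while a response of 50% scores 0.
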


Reviewed by , 29 Mar 2024

The replication study is carefully designed and planned; I have only minor comments.

My first comment refers to Study 1a, Item 2. Due to the changes in the league, the ignorance prior is now 1/7, as explained by the authors. My point here is that the league was chosen because it included a genuine ignorance prior of 1/10. An alternative for the replication would be to use another league with a genuine prior of 1/10 instead of replicating the design with the initial league but changing the prior.

My second comment refers to the payments. One dollar in 2002 is equivalent to approximately 1.65 dollars in 2024. From the replication design, I understand that payments will be anchored to the hourly minimum wage. The authors should provide a better justification for not simply replicating the initial payments adjusted for inflation (or some other purchasing power parity index).

My last comment is a simple suggestion on statistical methods. An additional statistical method, if relevant in the current context, would be to use Bayesian statistics (e.g., Bayes factors) with the initial paper used as a prior.

Reviewed by , 30 Mar 2024


