Recommendation

Why are there variations in perceptions of inequality?

by Moin Syed based on reviews by Mario Gollwitzer and Sa-Kiera Hudson
A recommendation of:

Justice in the Eye of the Beholder: How Comparison Framing Affects the Perception of Global Inequality Through Social Emotions and Justice Sensitivity

Submission: posted 11 December 2021
Recommendation: posted 30 June 2023, validated 30 June 2023
Cite this recommendation as:
Syed, M. (2023) Why are there variations in perceptions of inequality? Peer Community in Registered Reports. https://rr.peercommunityin.org/PCIRegisteredReports/articles/rec?id=150

Recommendation

Inequalities in income, wealth, and opportunities are rampant both between and within nations around the world. Making strides to rectify inequalities requires examining how people come to understand them as well as the psychological processes that translate those understandings into reparative actions. There is some evidence for a “comparative framing effect,” in which the group that is initially referenced impacts judgements by communicating salient information and the appropriate reference point. Research on this comparative framing effect suggests that focusing on disadvantage, relative to advantage, leads to more negative assessments of the inequality and stronger intentions to act to reduce it.
 
In two pilot studies (reported in the current proposal) focused on global inequalities (low-income vs high-income countries), Schnepf et al. (2023) did not find evidence for a main effect of framing on perceived legitimacy of the inequality or intentions to engage in action. They did, however, find some evidence for an interaction with the perceived size of the inequality. When the low-income country was the subject of the comparison, larger perceptions of the size of the inequality were associated with greater intentions to engage in action (both studies) and greater perceptions of the differences as illegitimate (Study 1 only). Moreover, they found some evidence in both studies that negative social emotions such as guilt and shame were the mechanism that explained why perceiving greater inequality in the low-income framing condition was associated with the outcomes. 
 
In the current study, Schnepf et al. (2023) build upon these two pilot studies to conduct a high-quality replication and a stronger test of their hypotheses. Most notably, the proposed Registered Report uses a much larger sample, providing adequate statistical power to detect relatively small interaction effects. Additionally, the proposed project manipulates the size of the inequality that is being evaluated, rather than relying on participants’ perceptions. Finally, the study includes “justice sensitivity” (the degree to which individuals assess inequality as unfair) as an additional hypothesized moderator, and “social dominance orientation” as an exploratory moderator. Along with the pilot studies, the proposed project will represent a strong test of several hypotheses relevant to many different areas of social and personality psychology.
 
The Stage 1 manuscript was evaluated over two rounds of in-depth peer review, both of which consisted of substantial comments from two scholars with relevant expertise. Based on detailed responses to the reviewers' comments, the recommender judged that the manuscript met the Stage 1 criteria and was therefore awarded in-principle acceptance (IPA).
 
URL to the preregistered Stage 1 protocol: https://osf.io/pgyvw
 
Level of bias control achieved: Level 6. No part of the data or evidence that will be used to answer the research question yet exists and no part will be generated until after IPA.
 
List of eligible PCI RR-friendly journals:
 
 
References
 
1. Schnepf, J., Reese, G., Bruckmüller, S., Braun, M., Rotzinger, J., & Martiny, S. E. (2023). Justice in the eye of the beholder: How comparison framing affects the perception of global inequality through social emotions and justice sensitivity. In principle acceptance of Version 3 by Peer Community in Registered Reports. https://osf.io/pgyvw
Conflict of interest:
The recommender in charge of the evaluation of the article and the reviewers declared that they have no conflict of interest (as defined in the code of conduct of PCI) with the authors or with the content of the article.

Reviews

Evaluation round #2

DOI or URL of the report: 10.31234/osf.io/n72cp

Version of the report: v3

Author's Reply, 29 Jun 2023

Decision by Moin Syed, posted 23 Feb 2023, validated 23 Feb 2023

February 23, 2023

Dear Julia Schnepf, Gerhard Reese, Susanne Bruckmüller, Maike Braun, Julia Rotzinger, and Sarah E. Martiny,

Thank you for submitting your revised Stage 1 manuscript, “Justice in the Eye of the Beholder: How Comparison Framing Affects the Perception of Global Inequality Through Social Emotions and Justice Sensitivity,” to PCI RR.

I returned your manuscript to the two reviewers who evaluated the first version, and I also read the paper closely myself. We were all in agreement that the revised manuscript is much improved, but that it still requires some revisions before it can be finalized. Accordingly, I am asking that you revise and resubmit your Stage 1 proposal for further evaluation.

Once again, the reviewers have provided thoughtful, detailed comments with which I fully agree, so I urge you to pay close attention to them as you prepare your revision. In my view, the most critical issues are as follows:

1.      Reviewer 1 raised important concerns about the inability to reproduce the analyses of the pilot studies. You should examine this carefully to determine what has led to the discrepancy.

2.      Both reviewers are still unhappy with your treatment of SDO, and I agree. Given that SDO is clearly having an impact on the results, it would be beneficial to do more with it: either conduct analyses with and without SDO as a control and try to understand why it is impacting the results, or bring SDO into the models as a moderator.

3.      I agree with Reviewer 2 that post-hoc power for the pilot studies is not meaningful; what you should report instead is a sensitivity analysis, i.e., the power your samples afford given a reasonable effect size or range of effect sizes (see the sketch after this list).

4.      I missed this in the first round, but you rely heavily on MANOVAs in your analyses. MANOVAs have the dubious distinction of being misused more often than properly used, as they are appropriate only when you are actually interested in the DVs as a multivariate set. That does not seem to be the case here. They should not be used as a “gatekeeper” for univariate tests nor to control error rates. Huberty & Morris (1989) is the classic reference, but this blog post quickly summarizes the issue: http://psychologicalstatistics.blogspot.com/2021/08/i-will-not-ever-never-run-manova.html?m=1
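To illustrate point 3, the following is a minimal sketch (in Python with statsmodels, not the authors' SPSS-based workflow) of the kind of sensitivity analysis meant there. The per-condition n of 150 and the independent-samples t-test family are placeholder assumptions; the actual calculation should use the pilot sample sizes and the test corresponding to the focal interaction term.

from statsmodels.stats.power import TTestIndPower

# Fix the (placeholder) sample size and report the power the design affords
# across a range of plausible standardized effect sizes.
power_calc = TTestIndPower()
for d in (0.10, 0.20, 0.30, 0.50):
    achieved = power_calc.power(effect_size=d, nobs1=150, ratio=1.0, alpha=0.05)
    print(f"d = {d:.2f}: power = {achieved:.2f}")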

When submitting a revision, please provide a cover letter detailing how you have addressed the reviewers’ points.

Thank you for submitting your work to PCI RR, and I look forward to receiving your revised manuscript.

Moin Syed

PCI RR Recommender

Reviewed by Mario Gollwitzer, 27 Jan 2023

The authors have addressed almost all of the issues I had raised in my original review -- nice job! I especially like the more detailed hypotheses regarding the moderating effect(s) of Justice Sensitivity in the preregistered main study (pp. 28-29).

That said, I still have trouble understanding (and, actually, reproducing) the results from the two pilot studies. What caught my attention was that, for instance, in Study 1, the framing x perceived size interaction effects were so highly significant (see Tables 2 and 3), yet the conditional ("simple") effects were not that different from each other, after all...

So I downloaded the raw data from the OSF website and tried to reproduce the results. And that left me with more questions than answers...

One thing I noticed was that the moderator variable ("perceived size") was heavily skewed in both studies. In Study 1, not a single participant chose 1 or 2 (on the 1-7 response scale), while 83% chose 6 or 7. In Study 2, the problem was even larger: again, no one chose 1 or 2, but 89% chose 6 or 7. So, I doubt whether it makes sense to treat "perceived size" as a continuous moderator here... Dichotomizing this variable might be a solution, but even so, the question is whether it makes sense to treat "perceived size" as a moderator at all if the variance is so small.

Second, I could reproduce most of the findings reported in Tables 2 and 3, but not all of them. For instance, in Study 1, the PROCESS model I ran for the DV "legitimacy" was:

PROCESS vars = Treatment Perceived_Difference Legitimacy_scale SDO_scale
    /y=Legitimacy_scale
    /x=Treatment
    /m=Perceived_Difference
    /model=1
    /center=1.

One noteworthy difference was the (unconditional) effect of "perceived size", which was B=-.19 in "my" analysis (and -.39 in the authors' analysis; see Table 2). Also, the conditional ("simple") effects of "Treatment" (+/-1SD from the sample mean on "perceived size") differ substantially from the numbers reported in the Notes below Table 2. In a similar vein, the results for the DV "Intentions" differ (sometimes only slightly, sometimes more strongly, such as for the "perceived size" effects) from the numbers reported in Table 3. This, I think, needs to be double-checked and clarified, because it has important consequences for the interpretations!
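For reference, the PROCESS call above (model 1 with SDO entered as a covariate) corresponds to an ordinary moderated regression, so the reproduction attempt can also be sketched outside SPSS. The following Python sketch assumes the variable names from the syntax above and a hypothetical data file name; it illustrates the model being estimated and is not the authors' analysis code.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("pilot_study1.csv")  # hypothetical file name

# Mean-center the focal predictor and the moderator, as /center=1 does in PROCESS.
df["treat_c"] = df["Treatment"] - df["Treatment"].mean()
df["size_c"] = df["Perceived_Difference"] - df["Perceived_Difference"].mean()

# PROCESS model 1 with a covariate is a single OLS regression with an interaction term.
model = smf.ols("Legitimacy_scale ~ treat_c * size_c + SDO_scale", data=df).fit()
print(model.summary())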

Also, I was a bit worried about the fact that controlling for SDO in all of these models obviously had a strong impact on the pattern of findings... In Study 1, for instance, the treatment x perceived size interaction effect on both DVs disappears when SDO is *not* included in the respective models... I think this should be discussed more openly and explicitly in the paper.

Finally, I would be careful saying that "The results of the second study mainly replicated the results of the first study..." (p. 21) given that the treatment x perceived size interaction effect on the DV "legitimacy" was not significant in Study 2 (see Table 4).

So, all in all, I still do like the preregistered main study! But my doubts about the two pilot studies (and what they can tell us) have actually increased.

Reviewed by , 13 Feb 2023

Evaluation round #1

DOI or URL of the report: 10.31234/osf.io/n72cp

Author's Reply, 14 Jan 2023


Dear Dr. Syed,

I hope this mail finds you well!

Owing to a stay abroad, the start of my postdoc position at a new university, and the fact that five co-authors are involved in the paper, the process has been somewhat delayed.

So I am more than happy to be able to submit the extensively revised version now.

I am looking forward to your response!

Yours sincerely,

Julia Schnepf

 

Decision by Moin Syed, posted 06 May 2022

May 6, 2022

Dear Julia Schnepf, Gerhard Reese, Susanne Bruckmüller, Maike Braun, Julia Rotzinger, and Sarah E. Martiny,

Thank you for submitting your Stage 1 manuscript, “Justice in the Eye of the Beholder: How Comparison Framing Affects the Perception of Global Inequality Through Social Emotions and Justice Sensitivity,” to PCI RR.

I apologize for the delay in sending this decision. I have had two quality reviews in hand for some time, but had been awaiting a third. As that one did not appear to be forthcoming, I elected to make a decision based on the two reviews.

The reviewers and I were all in agreement that you are pursuing an important project, but that the Stage 1 manuscript would benefit from some revisions. Accordingly, I am asking that you revise and resubmit your Stage 1 proposal for further evaluation. Please note that I will review the revision myself, and will do it as quickly as possible to make up for the delay.

The reviewers provided thoughtful, detailed comments with which I fully agree, so I urge you to pay close attention to them as you prepare your revision. In my view, the most critical issues (raised in whole or in part by the reviewers) are as follows:

1.      Please be explicit about which analyses test each hypothesis. The hypotheses are numbered in the Introduction section, and this same numbering system should be carried through to the Analysis Plan section, aligning hypotheses with the corresponding tests.

2.      Reviewer 2 raised an important point about data that suggest a potential competing hypothesis to the one you proposed. Testing this competing prediction against your own would strengthen the paper.

3.      Both reviewers suggested that SDO should be a moderator rather than a control, based on arguments that I found compelling.

4.      I agree with Reviewer 1 that some additional details regarding statistical power are needed.

When submitting a revision, please provide a cover letter detailing how you have addressed the reviewers’ points. As noted, I will handle the revision myself.

Thank you for submitting your work to PCI RR, and I look forward to receiving your revised manuscript.

Moin Syed

PCI RR Recommender

 


Reviewed by Mario Gollwitzer, 06 Feb 2022

There is much to like about this Registered Report: the research question (i.e. how global inequality is mentally represented and whether this representation affects legitimacy appraisals and action intentions) is interesting and timely, the report is very well-written, the preliminary studies reported here have shown promising results, and the proposed study makes sense and is described in sufficient detail. In particular, I appreciate the detailed methods and results sections and the fact that the pilot data are openly available. 

At first, I was admittedly a bit skeptical about how robust "comparison framing" effects actually are, but after doing a bit of literature search (for recent findings on comparison framing effects; e.g., Inbar & Evers, 2021: https://doi.org/10.1037/xge0000804), I am convinced that these effects are robust and should be taken seriously. I also learned that theorizing about the psychology of framing effects is pretty advanced by now, and I think that some of these conceptual advancements deserve to be mentioned in the present paper, too. The authors of the present report seem to base their reasoning exclusively on salience or figure/ground effects (e.g., page 6). An alternative interpretation is that perceivers draw inferences about a communicator's intentions (i.e., their "reference point") and values (e.g., McKenzie & Nelson, 2003: https://doi.org/10.3758/BF03196520; Sher & McKenzie, 2006: https://doi.org/10.1016/j.cognition.2005.11.001). Applied to the present research, this "information leakage" approach would explain the framing effects obtained in the two pilot studies more in terms of an implicit demand characteristic (such as "the researchers think that the fact that 'developing countries have a smaller share of global wealth' is problematic and that somebody should do something about it"). I would be interested to hear the authors' opinion on whether they think "information leakage" and the implicit demand it creates may be relevant for their own research. I don't think it is necessary to re-design their proposed study in order to test the "information leakage" account against a simple salience account -- but I think the authors may want to discuss "information leakage," the "reference-point hypothesis" by McKenzie, and implicit demand as a potential alternative explanation in their General Discussion (if they share my impression that these issues are relevant here).

Besides this conceptual issue, I only have a couple minor (methodological) issues, which can easily be resolved in a revision:

(1) SDO as a covariate: On page 9, the authors write: "we propose that SDO is a relevant personal-level control variable that needs to be included when investigating framing effects on the perceived legitimacy of global inequality and individual action intentions." I wondered whether SDO could also be regarded as a viable moderator variable: SDO has been conceptualized as a (dispositional) preference for inequality among social groups, so one could argue that people low in SDO should be more susceptible to a framing manipulation than people high in SDO (whose dispositional preference should have a stronger impact on their attitudes than contextual variations). Maybe the authors could discuss the plausibility of this reasoning in their paper and also test whether SDO moderated the effect of framing in the pilot studies. Also, related to this, I would like to see whether the pattern of results reported here changes when SDO is not included as a covariate in the models.

(2) Sample characteristics: As the authors discuss explicitly on page 26, the two pilot studies rely on student samples and are not representative of the general population in many respects (age, gender distribution, education level, probably also political attitudes). This is why I think a nationally representative study is actually warranted. I was a bit surprised that the authors think that "...this can be interpreted as an especially strong test of inequality-related framing effects" (p. 26), because it is possible that the framing effects are much smaller in a politically more diverse sample (especially when we assume that political attitudes covary with demand susceptibility, see the "information leakage" argument discussed above). Maybe the authors can clarify this?

(3) Power analyses: I missed a discussion of statistical power in the two pilot studies -- maybe the authors could at least report a sensitivity power analysis when they describe their samples. Later, when they determine the necessary sample size for the proposed study, they write that "...the size of the significant paths in our moderated mediations models of the pilot studies lay between .18 and .75" (p. 38). I could not find these estimates in the Results sections of the two pilot studies. I may have missed that, but if not, these estimates should be added to the results. Also, effect size estimates for the framing x perceived size interaction effects (i.e., the increase in R-square by adding the interaction term to the regression) should be explicitly reported. 
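The requested Delta R-squared for the interaction could be reported along the following lines (a sketch with hypothetical file and variable names, not the authors' code): fit the regression with and without the framing x perceived size term and compare the nested models.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("pilot_study1.csv")  # hypothetical file and variable names

base = smf.ols("Legitimacy_scale ~ Treatment + Perceived_Difference + SDO_scale",
               data=df).fit()
full = smf.ols("Legitimacy_scale ~ Treatment * Perceived_Difference + SDO_scale",
               data=df).fit()

print(f"Delta R^2 for the interaction term: {full.rsquared - base.rsquared:.3f}")
print(full.compare_f_test(base))  # F-test of the nested-model comparison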

(4) Simple effects: Even though the framing x perceived size interaction effects are significant in both pilot studies, I would like to see tests for simple effects (or "conditional effects") to back up claims such as "Participants who perceived the economic inequality between low and high-income countries to be large were more strongly affected by the less (versus more) frame than participants who perceived the inequality to be small" (p. 16) or "stronger emotional reactions for participants who perceived the size of inequality to be large and were presented with the less (versus more) frame" (p. 17). As far as I can see, only conditional *indirect effects* are tested (see Table 8). By the way, looking at the conditional indirect effects in Table 8, it seems that the indirect effects among people high in PSI (perceived size of inequality) are not significant in Study 2. This needs to be mentioned and discussed in the text.
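One way to obtain the requested conditional ("simple") effects is to re-center the moderator at -1 SD and +1 SD and read off the framing coefficient in each model. The sketch below uses hypothetical file and variable names and assumes a numerically coded Treatment variable; it is an illustration, not the authors' code.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("pilot_study1.csv")  # hypothetical file and variable names
m_mean = df["Perceived_Difference"].mean()
m_sd = df["Perceived_Difference"].std()

for label, point in [("-1 SD", m_mean - m_sd), ("+1 SD", m_mean + m_sd)]:
    df["size_at"] = df["Perceived_Difference"] - point
    fit = smf.ols("Intentions_scale ~ Treatment * size_at + SDO_scale",
                  data=df).fit()
    # With the moderator centered at `point`, the Treatment coefficient is the
    # conditional (simple) effect of framing at that level of perceived size.
    print(label, round(fit.params["Treatment"], 3), round(fit.pvalues["Treatment"], 3))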

(5) Preregistered "Main" Study: As noted already, I do like the proposed study and I think it is necessary to make the paper sufficiently strong and convincing. Also, I do like the inclusion of Justice Sensitivity (JS) as a potential moderator variable. That said, I wondered why (a) all "other-oriented" JS perspectives will be aggregated into one score, (b) victim sensitivity will not be measured, and (c) the authors are so cautious regarding the potential moderator effects of JS. Let me quickly explain each of these three issues:

(a) It is likely that people high in beneficiary sensitivity will react more strongly towards a "more-than" frame than towards a "less-than" frame (given that German respondents are likely to identify with a "developed" or economically privileged country), but this should not be the case for observer- or perpetrator-sensitive people. Therefore, I would analyze the JS perspectives separately instead of aggregating across them.

(b) Even though it is likely that victim-sensitive individuals react less sensitively to the framing manipulation than victim-insensitive individuals, being able to empirically demonstrate such an interaction may be worthwhile. Therefore, I suggest including the victim sensitivity subscale in the study.

(c) The authors write that "the role of justice sensitivity in framing research is still largely unclear, we investigate the moderating effect of this variable in an exploratory fashion" (p. 29). This is okay, but there are certainly interaction patterns including JS that are more plausible than others. For instance, I would expect that all three "other-oriented" JS perspectives should amplify (i.e., positively moderate) an effect of inequality size (i.e., 2-way interactions). Also, since framing effects appear to be driven more strongly by a "less-than" frame compared to a "more-than" frame (Inbar & Evers, 2021), JS should predict legitimacy appraisals and action tendencies more strongly in a "less-than" frame than in a "more-than" frame. I know that some effects are harder to predict, but at least the most plausible ones could and should be formulated as hypotheses here.

One minor issue: The authors refer to Schmitt et al. (2005) for the JS scales they want to use. I suggest they refer to the more recent version (Schmitt et al., 2010: https://doi.org/10.1007/s11211-010-0115-2). The German version of the scales can be found here: https://www.uni-landau.de/schmittmanfred/forschung/sbi/index.html.

Thank you for inviting me to review this report -- I really like this project and I wish the authors success with their proposed study!

Signed,
Mario Gollwitzer

Reviewed by , 07 Mar 2022