Recommendation

Successfully replicating positive evaluations of our "true selves"

Based on reviews by Andrew Christy, Cillian McHugh, Caleb Reynolds and Sergio Barbosa
A recommendation of:

Revisiting the link between true-self and morality: Replication and extension of Newman, Bloom, and Knobe (2014) Studies 1 and 2

Submission: posted 12 January 2023
Recommendation: posted 12 May 2025
Cite this recommendation as:
Chambers, C. (2025) Successfully replicating positive evaluations of our "true selves". Peer Community in Registered Reports, 100372. https://doi.org/10.24072/pci.rr.100372

Recommendation

The concept of a “true self” – the deepest and most genuine part of a person’s personality – is fundamental to many aspects of psychology, with influences that extend deep into society and culture. For decades, research in psychology has consistently found that people see their true selves as positive and virtuous. But people also positively regard (and indeed overestimate) many other characteristics related to the self, such as their abilities and achievements, prompting the question of whether there is anything special about the “true self” as a psychological concept. In an influential study, Newman et al. (2014) found that people were more likely to attribute morally good than morally bad changes in the behaviour of other people to their true selves. Crucially, they also found that our tendency to view the true self positively is shaped by our own moral values – in essence, what we regard as morally or politically good, we see in the true selves of others. Newman et al.'s findings suggest that the tendency for us to regard our true self in a positive light stems from the specific nature of the true self as a concept.
 
In the current study, Lee and Feldman (2025) replicated two key studies from Newman et al. (2014) in a large online sample. In particular, they asked whether true-self attributions are higher for changes in behaviour that are morally positive compared to morally negative or neutral, and, further, how true-self attributions are aligned with personal moral/political views. The results confirmed the original findings: morally positive changes in others were perceived as more reflective of the true self than morally negative or neutral changes, and changes that were more aligned with participants' moral/political views were perceived as more reflective of the true self (regardless of whether liberal or conservative). Additional exploratory analyses revealed that social norms were positively associated with true self attributions. Overall, the outcomes constitute a successful replication of the original findings, adding weight to the conclusion that behaviours considered more aligned with moral values are perceived as more strongly reflecting a person’s “true self”.
 
The Stage 2 manuscript was evaluated over one round of in-depth review. Based on detailed responses to the reviewers' comments, the recommender judged that the manuscript met the Stage 2 criteria and awarded a positive recommendation.
 
URL to the preregistered Stage 1 protocol: https://osf.io/v2tpf
 
Level of bias control achieved: Level 6. No part of the data or evidence that was used to answer the research question was generated until after IPA. 
 
 
References
 
1. Newman, G. E., Bloom, P., & Knobe, J. (2014). Value judgments and the true self. Personality and Social Psychology Bulletin, 40, 203–216. https://doi.org/10.1177/0146167213508791
 
2. Lee, S. C. & Feldman, G. (2025). Revisiting the link between true-self and morality: Replication and extension Registered Report of Newman, Bloom, and Knobe (2014) Studies 1 and 2 [Stage 2]. Acceptance of Version 4 by Peer Community in Registered Reports. https://osf.io/zer3d
Conflict of interest:
The recommender in charge of the evaluation of the article and the reviewers declared that they have no conflict of interest (as defined in the code of conduct of PCI) with the authors or with the content of the article.

Reviews

Evaluation round #1

DOI or URL of the report: https://osf.io/fba9m

Version of the report: v3

Author's Reply, 19 Apr 2025


Revised manuscript: https://osf.io/zer3d

All revised materials uploaded to: https://osf.io/9fvtq/ (OSF recently moved everything to the "Files" tab), updated manuscript under sub-directory "PCIRR Stage 2\PCI-RR submission following RNR"

Decision by the recommender, posted 14 Feb 2023, validated 14 Feb 2023

The four reviewers from Stage 1 kindly returned to evaluate your completed Stage 2 manuscript, and I'm happy to report that their assessments are unanimously positive. As you will see, there are some constructive points to address concerning the reporting of results, clarification of methodological details, and potential issues for inclusion in the Discussion. Provided you are able to respond comprehensively to these points in a revision, I anticipate being able to award Stage 2 acceptance without further in-depth review.

Reviewed by Andrew Christy, 08 Feb 2023

I have completed my review of the Stage 2 manuscript, which is largely favorable; see the attached Word document. I would also like to thank the authors personally for undertaking this replication project; it is very useful to others, like me, who are working on these topics!

-Andrew Christy


Reviewed by , 29 Jan 2023

The authors conducted the study in accordance with the approved Stage 1 protocol. They provide interesting results, replicating and extending the target article. I commend the authors on this work.

I have only one comment. Perhaps the authors could provide a bit more clarity on the number of participants, the exclusions, and the attention checks. The authors report that 803 took part and 44 were excluded. From the results reported, it appears that the 803 reflects the sample after the 44 were excluded? Have I understood correctly? Some clarity on this would be helpful.

Is it possible to provide a breakdown of the number of exclusions for specific reasons? Does the "verification" refer to the attention checks or are they separate? It is not clear how participants who failed the attention checks are handled in the reporting.

These questions are for clarity only. I have no real substantive concerns, I just think a bit more detail and clarity might be useful.

Reviewed by , 06 Feb 2023

Reviewed by Sergio Barbosa, 20 Jan 2023

The authors designed and carried out a well-crafted replication and expansion. As is, I believe the manuscript is almost ready for publication. I have very few comments, none of which should be much trouble for the authors.

1: I should have picked this up in the first round of review, but I just realized that stating political preferences BEFORE the main data collection might bias or skew the main data collection, through people trying to be consistent with that self-proclaimed identity, or through some sort of demand effect. The main data collection is quite long, and the effects are solid enough that they are unlikely to be significantly changed by this possible bias. I don't think this is any reason to be really worried, but one never knows with a stricter reviewer. Perhaps consider this for the limitations section, or prepare a possible response in case it is needed.

2: The authors claim that the analysis with excluded participants was not run because the hypotheses were supported: "Since we find support for all the hypotheses, rerunning analyses with exclusions is not needed." (p. 27). I would beg to differ on that point: exclusions are there to make sure suitable data are analyzed. The reason the analysis should not be run with excluded data is that you have reason to believe those data are somehow biased, irrespective of subsequent results.

3: Table 10, comparing results to hypotheses, is not particularly straightforward to read. I take it that "signal" means that the results replicated, whereas "inconsistent" means the results somehow differ from the original results, right? These terms are not easily understood, and surely whether results are a "signal" or not is linked to the amount of noise in the observed data, not to whether they replicated prior results. I suggest changing this.

4: I was surprised by the interaction effects of block × moral. Maybe a bit more discussion could be offered on this. As I understand it, this is not exactly expected and could be due to the choice of using blocks rather than the optimal full randomization procedure, which ought to be discussed.

 

Other than these comments, I believe this manuscript is readily suitable for publication and expect it to be accepted easily. I want to congratulate the authors on rigorous and interesting work, and look forward to seeing this and their subsequent projects published.

Best regards,

 

Sergio B