Recommendation

Does reducing abstractness increase attraction? A test of Uncertainty Reduction Theory

Recommended by Chris Chambers based on reviews by Zoltan Dienes and Florian Pargent
A recommendation of:

Attraction depending on the level of abstraction of the character descriptions

Submission: posted 10 October 2022
Recommendation: posted 15 November 2024, validated 15 November 2024
Cite this recommendation as:
Chambers, C. (2024) Does reducing abstractness increase attraction? A test of Uncertainty Reduction Theory. Peer Community in Registered Reports. https://rr.peercommunityin.org/PCIRegisteredReports/articles/rec?id=321

Recommendation

What determines levels of interpersonal attraction? A long history of research in social psychology has highlighted a range of important factors, such as physical attractiveness, similarity of attitudes and beliefs, reciprocity of feelings, self-disclosure of personal information, and familiarity. One theme that runs through several of these characteristics is the concept of uncertainty, and in particular how reducing uncertainty in knowledge about a person influences levels of attraction. According to the Uncertainty Reduction Theory (URT), as an individual’s uncertainty about a person diminishes, levels of attraction are expected to rise. Previous research, however, has reported a mixed and somewhat complicated relationship between uncertainty and attraction, possibly moderated by the current stage of the interpersonal relationship.
 
One limitation of this area of enquiry is that the methods used to reduce uncertainty have tended to focus on the amount of available information rather than its quality. This shortcoming has become increasingly salient with the rise of online social networking, where people have a wide range of strategies available to reduce uncertainty through passive (non-interactive) observation, for instance by studying profile details or other online information about a person. In the current study, Kuge et al. (2024) aim to partially fill this gap by examining uncertainty reduction through the abstractness (or specificity) of available information, rather than its quantity, particularly in an observational, non-interactive setting. According to the tenets of URT, the authors predict, firstly, that participants will rate a person described in more concrete terms as more attractive than one described using abstract terms and, secondly, that perceived uncertainty will mediate the effect of abstractness on levels of attraction.
 
To test these hypotheses, the authors begin with an online survey (N=250) to select pairs of sentences with varying levels of abstractness while ensuring they are matched for favourability. Then in the main study (N=1000) they will test the effect of the selected abstract vs. concrete expressions on levels of attractiveness, in addition to control variables such as how confident the participant is in predicting the person’s behaviour, as well as a manipulation check to confirm the effectiveness of the abstractness manipulation. Confirmation of these hypotheses would add support for URT, while disconfirmation may indicate that the theory is inadequate at explaining the drivers of attraction in online unilateral communication.
 
URL to the preregistered Stage 1 protocol: https://osf.io/28f4q
 
Level of bias control achieved: Level 6. No part of the data or evidence that will be used to answer the research question yet exists and no part will be generated until after IPA.
 
List of eligible PCI RR-friendly journals:
 
 
References
 
Kuge, H., Otsubo, K., Hattori, K., Urakawa, M., & Yamada, Y. (2024). Attraction depending on the level of abstraction of the character descriptions. In principle acceptance of Version 4 by Peer Community in Registered Reports. https://osf.io/28f4q
Conflict of interest:
The recommender in charge of the evaluation of the article and the reviewers declared that they have no conflict of interest (as defined in the code of conduct of PCI) with the authors or with the content of the article.

Reviews

Evaluation round #3

DOI or URL of the report: https://osf.io/rn6v8

Version of the report: 3

Author's Reply, 07 Oct 2024

Decision by Chris Chambers, posted 10 Jul 2024, validated 10 Jul 2024

I have now received two re-reviews of your submission. As you will see, Florian Pargent is now satisfied and recommends IPA while Zoltan Dienes notes several areas where additional consideration is needed to ensure clarity of theory, sampling plans, and the precision of the inferential chain. I agree that these are important issues to resolve, and addressing them comprehensively in the next revision will accelerate the submission toward IPA.
 
As you will be aware, we are now in the July-August shutdown period. During this time, authors are generally unable to submit new or revised submissions. However, given the relatively straightforward revisions required in your case, as well as the time-critical nature of Stage 1 review, I am going to give you the opportunity to resubmit despite the shutdown. You won't be able to do this the usual way. Instead, please email us (at contact@rr.peercommunityin.org) with the following:
 
  1. A response to the reviewer (attached to the email as a PDF)
  2. The URL to a completely clean version of the revised manuscript on the OSF
  3. The URL to a tracked-changes version of the revised manuscript on the OSF
     
In the subject line of the email please state the submission number (#321) and title. We will then submit the revision on your behalf.

Reviewed by Florian Pargent, 17 Jun 2024

Reviewer: Florian Pargent, LMU Munich

The authors have responded to all my previous concerns.
I have no additional comments and am looking forward to reading the stage 2 manuscript.

Reviewed by Zoltan Dienes, 03 Jul 2024

Sorry for my lengthy points below in response to the authors' thoughtful reply.  The authors raise some interesting issues that still need addressing.

1) abstract

Rewrite
"Given that concrete expressions contain richer information than abstract counterparts,"

as

"Given that concrete positive expressions contain richer information than abstract positive counterparts,"

I think the reader needs to be alerted that only positive statements will be used. If the test is passed, a simple theory could explain it: the more certain the positive information about someone, the more favourable the attitude towards them. I think the study would be more interesting if the statements were neutral; neutral statements would afford a more severe test of uncertainty reduction theory. But that is up to the authors.


2) The statements defining uncertainty reduction theory I found in the manuscript included claims such as

"The Uncertainty Reduction Theory posits that as an individual’s uncertainty diminishes, they will be evaluated as more attractive"

"URT assumes that when uncertainty is high, a person appears less attractive. Given this prediction and the fact that information-seeking contributes to reducing uncertainty, it is expected that more information will reduce uncertainty and increase attraction."

Given the assumptions of the study, the study is in a position to provide evidence against URT. The authors do qualify in places that the theory only applies to the early stage of a relationship; this should be explicitly added to the statement of the theory. The study still tests this theory.

If the authors wish to claim URT could not be falsified by the study, they need to define what URT claims more restrictively. But that would be odd. Why not posit the more general theory - the version they have actually presented (or the revised version just given) - and then have the study potentially count against it (and indicate this in the Design Table).


3) In the pilot study, equivalent favourability is defined by an equivalence test. Specifically, that means the difference in favourability is significantly less than a threshold value (0.33 Likert units). The complement of this - that is, the implication of what not equivalent means - is that the difference is significantly more than the threshold value. (I previously phrased this in terms of "inference by intervals": the two t-tests the authors perform for TOST are mathematically the same as determining whether the 90% CI lies within the equivalence interval, i.e. whether one interval lies within another. The claim that the abstractness ratings were not equivalent could thus be made if the 90% CI lies outside the equivalence interval. Thus, the authors are indeed using inference by intervals.) Put another way: if the authors claim 0.33 Likert units is only just worth taking note of as a difference, then to show a meaningful difference in abstractness, they need to show the difference in abstractness was significantly more than 0.33.
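The equivalence between TOST and the 90%-CI check described above can be verified numerically. A minimal sketch in Python with SciPy (the simulated ratings, sample sizes, and the 0.33-unit margin are illustrative placeholders, not the study's data):

```python
import numpy as np
from scipy import stats

def tost_paired(x, y, margin, alpha=0.05):
    """TOST for paired ratings: both one-sided t-tests must reject
    for the mean difference to be declared within ±margin."""
    d = np.asarray(x, float) - np.asarray(y, float)
    n = len(d)
    se = d.std(ddof=1) / np.sqrt(n)
    t_lower = (d.mean() + margin) / se   # H0: difference <= -margin
    t_upper = (d.mean() - margin) / se   # H0: difference >= +margin
    p_lower = stats.t.sf(t_lower, n - 1)
    p_upper = stats.t.cdf(t_upper, n - 1)
    return max(p_lower, p_upper) < alpha

def ci_within_interval(x, y, margin, alpha=0.05):
    """The same decision via 'inference by intervals': is the 90% CI
    of the mean difference contained in (-margin, +margin)?"""
    d = np.asarray(x, float) - np.asarray(y, float)
    n = len(d)
    se = d.std(ddof=1) / np.sqrt(n)
    tcrit = stats.t.ppf(1 - alpha, n - 1)   # 90% CI uses the .95 quantile
    lo, hi = d.mean() - tcrit * se, d.mean() + tcrit * se
    return lo > -margin and hi < margin

rng = np.random.default_rng(1)
x = rng.normal(4.0, 1.0, 200)          # favourability ratings, item A
y = x + rng.normal(0.05, 0.8, 200)     # item B, nearly identical
print(tost_paired(x, y, 0.33), ci_within_interval(x, y, 0.33))
```

Because both rules compare the same t-statistics to the same critical value, they agree for any data set, which is the reviewer's point about inferential consistency.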

I realise that in the end they will have to select whatever items they have that maximise differences in abstractness, even if some strict statistical rule is not satisfied for all items (as they detail). And this seems OK because it is a priori plausible they can create large differences in abstractness. So the points I just made won't change anything in practice. Still, it would be better to start with an inferentially mutually consistent set of rules, and only soften from there as necessary.

The average difference in abstractness relates to predictions of effect size made later; getting rather more than a minimal scientific relevant amount in abstractness turns out to be important. That increases the case for a strict statistical definition at this stage.


4) "Although all pairs will be included in the hypothesis-testing analyses regardless of whether the differences in perceived abstractness of profile turn out significant, we will conduct an additional sensitivity analysis that excludes pairs of profiles whose perceived abstractness do not differ significantly by condition."

This opens the way to inferential flexibility. Just be clear that the main conclusion won't depend on this analysis; or else, how it will depend.


5) "The assumed effect size of uncertainty on attraction (β = .51) was determined based on a previous study (Baruh & Cemalcılar, 2018)."

What was the effect tested in this study? Make clear it is relevant. Allow anyone to find the effect easily.

Further, the size of an effect found in a previous study helps fix what might be predicted; but not the smallest size one does not wish to miss out on; and it is the latter type of error that power is meant to control (in the long run). If a previous study found beta=0.51 and that was interesting, that does not mean one would be happy to miss out on beta = 0.4.

The authors have claimed the smallest size of scientific relevance for favourability and abstractness on 1-7 Likert scales is 0.33 Likert units. (A scientific justification would be good, though I realize this may be difficult.) It is also highly likely that a given difference in abstractness will produce a smaller difference in attractiveness. The question then becomes: what is the smallest ratio of the attractiveness difference to the abstractness difference that would be minimally scientifically relevant? It is hard to say, but a ratio of 0.5 still seems big (so if the difference in abstractness was 1 Likert unit, the difference in attractiveness would be 0.5 Likert units). Maybe a reduction to 20%? This is just intuition and it would be good to have a reason for it. Maybe the authors can provide one (e.g. from similar situations in social psychology).

In any case, this way of thinking shows there is a relationship between claims of smallest effect of interest for different analyses. Likewise, it also bears on the size of the indirect effect in the mediation analysis: That can be expressed in the same way (what Likert difference in attractiveness should result from the indirect pathway for a given difference in abstractness?). And the answer for the smallest effect of interest for the indirect pathway should be the same as for the smallest effect of interest for the direct effect of abstractness on attractiveness. (Note the predicted effects would be different. Getting predicted effects is an altogether easier matter. But predicted effects is not what is relevant for power or equivalence testing.)


6) The possible theory from pragmatics I was referring to in my previous review is more general than the authors take it to be. The claim is that there is a pragmatically appropriate amount of information to divulge in *any* communication - and that would include early stages of a relationship. The theory in effect states there is an optimum amount of information,  and so contrasts with uncertainty reduction theory (the more information the better). Pragmatically, more information will make communication go smoothly up to a point, and then make it worse. But I think this can be dealt with in the discussion of the Stage 2.

Evaluation round #2

DOI or URL of the report: https://osf.io/zy57w?view_only=dc1bb4d7647046ccae4d64ba44448921

Version of the report: 2

Author's Reply, 03 Jun 2024

Decision by Chris Chambers, posted 21 May 2024, validated 21 May 2024

I have now received two re-reviews of your submission. We are closer to Stage 1 IPA but some work is still needed to refine the design and inferential workflow. Florian Pargent has performed a very helpful code review and I also agree with his 2nd and 3rd points concerning the manipulation check and contingent hypothesis testing. Zoltan Dienes notes a range of additional clarifications and justifications that are needed to ensure the quality of the study design plan, as well as offering additional theoretical insights.

I hope you find these reviews useful and look forward to receiving your revision and response in due course.

Reviewed by Florian Pargent, 14 May 2024

Reviewed by Zoltan Dienes, 08 May 2024

The authors have made good progress on my comments; but they have not been completely addressed.

1)  As the preliminary study is pre-registered it needs its entries in a design table.

2) p 10
"We calculated the required sample size for a paired t-test under the assumption of α = .05 and Cohen’s d = 0.2, and the Two One-Sided Test (TOST; Schuirmann, 1987) using the PowerTOST package (Labes et al., 2024) in R. Under the assumption of α = .05, the margin (Δ) = 0.3, and a standard deviation of 1.0, our analysis showed that 156 and 215 participants, respectively, would suffice to achieve 80% statistical power."

It is not clear where these numbers come from. Why d = 0.2? Why SD = 1? Why delta = 0.3? The authors do not make explicit why minimally interesting effect sizes are 0.2 in one case, 0.3 in another. Further, to be inferentially consistent, as they use "inference by intervals" for favourability ratings, it would be consistent to do so for abstractness as well, that is, find matched stimuli whose abstractness differed by more than a minimally interesting amount.
The meaning of a d of 0.2 or 0.3 is hard to intuit for these sentences. I think it is worth running a small pilot, just to estimate very roughly the SD of the ratings, so that the minimally interesting effects can be set in raw units, and power calculated for that null region.
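One plausible reconstruction of the 156 figure, offered here purely as an illustration: it matches a one-sided paired t-test at d = 0.2 and α = .05 (an assumption, since the quoted passage does not say whether the test is one- or two-sided). Exact power via the noncentral t distribution in SciPy:

```python
from math import sqrt
from scipy import stats

def paired_t_power(n, d, alpha=0.05):
    """Exact power of a one-sided paired t-test with n pairs at
    standardized effect size d, via the noncentral t distribution."""
    df = n - 1
    tcrit = stats.t.ppf(1 - alpha, df)        # one-sided critical value
    return stats.nct.sf(tcrit, df, d * sqrt(n))

# Smallest n reaching 80% power at d = 0.2, one-sided:
n = 10
while paired_t_power(n, 0.2) < 0.80:
    n += 1
print(n, round(float(paired_t_power(n, 0.2)), 4))
```

The TOST figure of 215 is not reconstructed here, since it depends on PowerTOST's design defaults; making those explicit in the manuscript would resolve the ambiguity.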

3) Main study: There is no justification for what the minimally interesting effect is that they do not want to miss out on detecting. A scientific reason needs to be given for what effect is just worth missing out on, and that value used in power calculations. (I find *predicted* effects easier to scientifically justify and hence use Bayes factors, e.g. for mediation and testing differences: https://doi.org/10.1177/2515245919876960. But it is of course up to the authors which inferential route they take.)


4) Final column of design table: State in here with a simple proposition the most general claim that the test could find evidence against. Maybe it is something a bit more specific than uncertainty reduction theory. But why isn't uncertainty reduction theory challenged by finding no difference in attraction between abstract and concrete profiles? Why would that theory not predict a difference in this study? Be explicit about this.

 

5) I still find the following alternative theory a highly plausible competing theory that makes the same predictions for this study: the more certain one is of someone's positive nature, the more one is attracted to them. The authors make clear that this is a different theory to uncertainty reduction theory, the latter being what they are interested in. It is good to be clear about that, but it means obtaining evidence in support of uncertainty reduction theory won't actually provide much support for specifically that theory. However, falsifying the prediction would count against uncertainty reduction theory, as far as these two theories are concerned. The latter point still makes the study worthwhile. Maybe the thing to do is just be clear about this. Or else use only items that are very slightly disfavourable, as the theories can make contrasting predictions then.

6) It also occurs to me there is another relevant theory, namely relevance theory (https://en.wikipedia.org/wiki/Relevance_theory) or, more generally, pragmatics (e.g. https://plato.stanford.edu/entries/pragmatics/).
Consider the Gricean axioms (quoted from last link):
"Make your contribution as informative as is required (for the current purposes of the exchange).
Do not make your contribution more informative than is required."
In the context of these profiles, there may be presumptions about the sort of detail that is typically used or appropriate. If someone deviates from that level of detail, the pragmatics of the communication will feel strange. The authors give statements in the abstract that may appear too abstract, given they easily could have been more concrete for the same length. Conversely, the axioms predict a profile could present "more than I need to know". It would be an empirical matter where the sweet spot was, and how far from it and in which direction the authors' examples were. (This could be tested by rating the concreteness/abstractness of real profiles on the same scale the authors use and seeing whether the authors' profiles are more or less concrete than the prototype.)

This theory strikes me as plausible. If typical relevance concerns are pragmatically violated more in the abstract than in the concrete condition, the profiles may come across as odd, and hence untrustworthy. Or it may be vice versa, in which case an outcome apparently falsifying uncertainty reduction theory may not actually count against it. Something should be done about this concern - minimally, discussing it - but it may be worth doing more than that.
 

Evaluation round #1

DOI or URL of the report: https://osf.io/7peyb

Version of the report: v1

Author's Reply, 24 Apr 2024

Decision by Chris Chambers, posted 02 Dec 2022, validated 02 Dec 2022

I have now obtained two very helpful reviews of your submission, and at the outset I want to thank both reviewers for providing such high-quality evaluations on such a short timescale. As you will see, both reviews are highly critical but constructive.

Considerable work will be needed to bring this submission to the point of Stage 1 in-principle acceptance but I see a path to achieving it if you are able to revise the study design comprehensively to address the points raised. Doing so will require strengthening of the rationale, deeper consideration of the current and alternative theoretical accounts, ensuring that the design provides a severe test of uncertainty reduction theory, increasing the degree of detail and clarity in the methods, ensuring that the method is sufficiently rigorous to avoid bias and minimise error (e.g. in number of ratings per participant, consideration of confounders, and sample size of the profiles), and inclusion of appropriate manipulation checks.

On this basis, I am happy to invite a Major Revision which I will return to the reviewers in due course for another look. Please note that due to the December closure of PCI RR, you will be unable to submit your revision until 3rd Jan at the earliest.

Reviewed by Florian Pargent, 02 Dec 2022

Reviewed by Zoltan Dienes, 30 Nov 2022

The study addresses the question: What makes people attractive online? The hypotheses considered are derived from uncertainty reduction theory.

I have three areas for revision.

1) Flesh out the theory just a bit more. As it stands it seems implausible, and that may well be because I don't understand it. Anyway, making it clearer would help. I am not an expert in this area and maybe this is plain to people in the field; but it would be good to make things clear for other interested readers as well.

Some reactions to specific claims in the paper:

"URT assumes that an individual’s attraction decreases when their uncertainty increases and vice versa."
Try telling that to my first wife. Apparently I was far too predictable. "Surprise me some time!" "Yes honey, how would you like to be surprised?" "Groan."
The URT theory contradicts another theory, namely that arousal of any form can potentially be turned to personal attraction. So some uncertainty could help boost arousal, and hence attraction. Morin in his 2012 book "the erotic mind" describes uncertainty as an aphrodisiac. Cindy Meston at Texas has a sympathetic arousal transfer theory of sexual arousal - e.g. doing moderate exercise helps boost subsequent sexual arousal. I am not saying to cite these particular people; there must be many papers the authors know about that are closer to their paradigm. The authors should cite some of this literature which would make opposite predictions.

On p 5 there is the claim that "more information causes one to perceive more attraction to a target person at the beginning of building a relationship."
Is the theory restricted to the beginnings of relationships? If so, this should be stated upfront.


p 4 "people perceive higher uncertainty and lower attraction for others when they have different opinions and attitude"
I can certainly think of some people I am not attracted to because their disagreeable opinions are predictable.


p  4 "people perceive higher uncertainty and lower attraction for others when they have different opinions and attitude"
I didn't follow this sentence. I might know someone who is a keen Chinese Communist Party member/Trump supporter/Brexiteer/Scientologist, and find their beliefs highly predictable yet different from mine.


"deviated from the existing situation"

I didn't follow.

 

What the authors do is induce high or low uncertainty regarding positive statements. If the theory is that people are liked more when the estimate of their goodness has more precision, then this makes sense, but it is a somewhat different theory. It could be seen as a version of URT in a particular context. Some clarity on the theoretical relations here would be good. If the precision-of-goodness theory is found false, then so is URT.

2)  Statistical analyses
p 8
"If their abstractness would not differ significantly, that pair would be excluded from the candidates"
It may be better to use a threshold amount of difference e.g. significantly more than a 2 point difference.


Justify N for the norming study as sufficient to get a 90% CI within a null region of +-0.1 Likert units. For example, find the N that will put the 90% CI inside or outside the null region a certain percentage of times (say 80%) if H0 is true or if a plausible H1 value is true (see https://doi.org/10.1525/collabra.28202).
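The N-finding procedure suggested above can be approximated by simulation. A minimal sketch, assuming normally distributed paired differences; the SD of 0.8 and the candidate sample sizes are hypothetical placeholders to be replaced by pilot estimates:

```python
import numpy as np
from scipy import stats

def assurance(n, sd, region=0.1, alpha=0.05, true_diff=0.0, sims=4000, seed=0):
    """Fraction of simulated norming studies whose 90% CI for the mean
    paired difference lies entirely inside (-region, +region); with
    true_diff = 0 this is the assurance of a correct equivalence claim
    under H0."""
    rng = np.random.default_rng(seed)
    d = rng.normal(true_diff, sd, (sims, n))       # simulated rating differences
    m = d.mean(axis=1)
    se = d.std(axis=1, ddof=1) / np.sqrt(n)
    half = stats.t.ppf(1 - alpha, n - 1) * se      # 90% CI half-width
    return float(np.mean((m - half > -region) & (m + half < region)))

# Hypothetical SD of paired differences = 0.8; scan candidate N values:
for n in (300, 550, 800):
    print(n, assurance(n, sd=0.8))
```

The same function run with a plausible H1 value for `true_diff` gives the complementary error rate (the chance of wrongly declaring equivalence when a real difference exists).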

Design Table;
Why isn't the uncertainty reduction theory being tested? I wasn't sure why the authors thought it escaped contrary evidence.


H1: Why is the effect size chosen as d = 0.3 for power?
I don't know how subjects rated attractiveness, but let us say it is a 7-point Likert scale. How many Likert units is meaningful? PCI RR guidelines say "power analysis should be based on the lowest available or meaningful estimate of the effect size". One could look at previous studies that investigated URT and take the smallest difference those studies find (for a more rigorous approach see https://doi.org/10.1525/collabra.28202).

H2: presumably power = .08 is meant to be .80.
For mediation, the precise results that will lead to certain conclusions should be explicit. Would any degree of partial mediation do? How much would be meaningful? Best would be to think in terms of raw units and, for simplicity, to use the same minimally interesting effect size just identified for the total effect as the smallest indirect effect of interest. Work out power based on that.
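Power for a raw-unit indirect effect can be worked out by simulation. A minimal sketch using a Sobel z-test (the path values, error SDs, and sample size below are hypothetical illustrations, not the authors' planned values; a percentile bootstrap could replace the Sobel test at extra computational cost):

```python
import numpy as np

def ols_se(X, y):
    """OLS coefficients and their standard errors (X includes an
    intercept column)."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    return beta, np.sqrt(np.diag(sigma2 * XtX_inv))

def indirect_power(n, a, b, c=0.0, sd_m=1.0, sd_y=1.0, sims=1000, seed=0):
    """Power of the Sobel z-test for the indirect effect a*b, with a
    binary condition X, mediator M = a*X + e_m, outcome Y = b*M + c*X + e_y.
    a and b are in raw (Likert-unit) terms."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(sims):
        x = rng.integers(0, 2, n).astype(float)
        m = a * x + rng.normal(0, sd_m, n)
        y = b * m + c * x + rng.normal(0, sd_y, n)
        one = np.ones(n)
        (_, a_hat), (_, se_a) = ols_se(np.column_stack([one, x]), m)
        (_, b_hat, _), (_, se_b, _) = ols_se(np.column_stack([one, m, x]), y)
        z = a_hat * b_hat / np.sqrt(b_hat**2 * se_a**2 + a_hat**2 * se_b**2)
        hits += abs(z) > 1.96
    return float(hits) / sims

# Hypothetical: 1 Likert unit of uncertainty per condition (a), 0.2 Likert
# units of attraction per unit of uncertainty (b), n = 500:
print(indirect_power(500, a=1.0, b=0.2))
```

Setting `a` and `b` so that their product equals the smallest raw-unit indirect effect of interest makes the simulated power directly comparable with the power claim for the total effect.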

 

3) Methodology


"We will assign participants with even-numbered birth months to the abstract condition and those with odd-number to the concrete condition"
I suppose there could be personality differences depending on birth month? A Google Scholar search revealed titles claiming evidence for this even specifically in Japanese samples (e.g. https://www.cambridge.org/core/journals/european-psychiatry/article/abs/effect-of-month-of-birth-on-personality-traits-of-healthy-japanese/9CAD7C5E8898C29636FC64C7DD86EE4F)
So subjects should be assigned randomly.


p 10
"Afterward, participants will complete questionnaires regarding their attributional confidence"
Describe the questionnaires. How many questions? What is the rating scale?