Recommendation

Managing costs and rewards when choosing to disclose information

Recommendation by Zoltan Dienes, based on reviews by Jason Chin, Yikang Zhang and Tyler Jacobs
A recommendation of:

Managing Disclosure Outcomes in Intelligence Interviews

Submission: posted 15 September 2022
Recommendation: posted 21 April 2023, validated 21 April 2023
Cite this recommendation as:
Dienes, Z. (2023) Managing costs and rewards when choosing to disclose information. Peer Community in Registered Reports. https://rr.peercommunityin.org/articles/rec?id=315

Recommendation

An interviewee in an intelligence interview can face competing interests in disclosing information: the value of cooperating (for example, information that leads to the arrest of a narcotics gang makes the neighbourhood safer) and the risk that disclosing the information leads to reprisals from the gang. Different pieces of information will thus compete with each other for disclosure, depending on this balance of risks to self-interest. According to the disclosure-outcomes management model of Neequaye et al., information will be disclosed more readily when the probability of reward is high rather than low, as might be straightforwardly expected, but this difference will be larger when the probability of cost is low rather than high. A high probability of cost will induce a more variable response to the possible benefits.

Neequaye et al. (2023) will invite participants to assume the role of an informant, with the goal of maximizing their points according to stated probabilities of costs and benefits of disclosing pieces of information relating to given scenarios. Then the degree to which each type of information is disclosed in a subsequent interview will be assessed: this way the crucial interaction can be tested.
 
The Stage 1 manuscript was evaluated over one round of in-depth review. Based on detailed responses to the reviewers' comments, the recommender judged that the manuscript met the Stage 1 criteria and therefore awarded in-principle acceptance (IPA).
 
URL to the preregistered Stage 1 protocol: https://osf.io/ru8j5

Level of bias control achieved: Level 6. No part of the data or evidence that will be used to answer the research question yet exists and no part will be generated until after IPA.
 
References
 
Neequaye, D. A., Luke, T. J., & Kollback, K. (2023). Managing Disclosure Outcomes in Intelligence Interviews, in principle acceptance of Version 2 by Peer Community in Registered Reports. https://osf.io/ru8j5
Conflict of interest:
The recommender in charge of the evaluation of the article and the reviewers declared that they have no conflict of interest (as defined in the code of conduct of PCI) with the authors or with the content of the article.

Reviews

Evaluation round #1

DOI or URL of the report: https://psyarxiv.com/tfp2c

Author's Reply, 19 Apr 2023

Decision by Zoltan Dienes, posted 28 Mar 2023, validated 28 Mar 2023

Once again, sorry for the far too long delay in getting back to you. In the end I asked 60 potential reviewers; the three who responded were experts in relevant areas, and I was very pleased they were the ones we now have. They are largely happy with your proposal, but have various comments concerning details of the protocol and requested clarifications.

Reviewed by , 22 Mar 2023

Reviewed by , 06 Mar 2023

Reviewed by , 19 Mar 2023

In this article, the authors propose and aim to test a new framework, the Disclosure Outcomes Management Model, for understanding when people will disclose information during an intelligence interview. This model frames the decision to provide information as a self-interest dilemma in which the interviewee must balance the potential benefits of disclosing risky information (e.g., community safety, upholding morality) against the potential harms to the self (e.g., retaliation from the group being reported on). They then describe how the calculations of risk and benefit yield four categories of information, with low-stakes and guarded information (theoretically) being less likely to be disclosed, unguarded information more likely, and high-stakes information variable. The authors report a preliminary study that supported these hypotheses and then present the registered report for the planned study.

Strengths:

-The theory is described well in the Introduction, and Figure 1 displays the model clearly.

-The authors use sophisticated multilevel models to account for random effects in their complex experimental design.

-In my opinion, the design has strong internal validity. 

Concerns and Comments:

Overall, I think that this is a reasonable design with strong proposed analyses. I do not see any major issues. However, there are a few aspects that I would like the authors to consider.

-First, on the theory-side, I found the definition of “self-interest” to be unusual (“broadly encompass[ing] any outcome an interviewee may want to achieve or avoid”). In social psychology, self-interest is typically defined as the motivation to achieve outcomes that benefit the individual and avoid those that do not (Miller, 1999; Gerbasi & Prentice, 2013). Additionally, many theories specifically state that if the outcome is intended to primarily benefit others, it is not self-interested (Cropanzano et al., 2005; Holley, 1999). Thus, I would argue that “act[ing] in the best interests of other associates” is not actually self-interest. Their definition is closer to a purely economic definition of self-interest (maximizing one’s gains, minimizing losses), but I am not sure that this fits either, and the interview situation is not a purely economic one. Could authors either provide further justification and citations for their definition of self-interest, or consider if another term would fit better?

-Could the authors more clearly state their exclusion criteria (e.g., how much missing data is too much, or how many memory checks can be failed)? The authors could also consider reporting the results without exclusions in the Supplementary Materials. I would recommend this especially given the large number of excluded participants in Study 1 (transparency would be best).

-In lieu of an a priori sample size analysis in Study 1, could the authors perform a sensitivity power analysis?

-For Study 2, given the large number of excluded participants in Study 1, could the authors describe the number of participants they will recruit (before exclusions) in order to meet the minimum sample size?

-In addition to the model fit stats (AIC), could the authors report effect sizes for their models (R^2 or f^2; or ICC for random effects)?

-I appreciated the authors' discussion of internal and external validity. However, I would note that concerns about external (and construct) validity go beyond whether the interviews have psychological realism. In a real-life interrogation, the consequences go far beyond collecting points and competing for a monetary reward, and could involve fear for one's life, fear for loved ones, fear of implicating one's self in a crime, etc. Thus, despite the efforts to vary the consequences and incorporate a choice structure with incentives, this design likely does not perfectly capture the construct of these situations in the real world. That being said, this artificiality is common in psychology research and is necessary for internal validity. This idea, though, could be added to the discussion of lower external validity.

I thank the authors for their efforts, and hope that this feedback is helpful.