Recommendation

Is childhood adversity associated with a heightened response to opioids?

By Chris Chambers, based on reviews by Zoltan Dienes, Yuki Yamada and 1 anonymous reviewer
A recommendation of:

Does childhood adversity alter opioid drug reward? A conceptual replication in outpatients before surgery

Submission: posted 15 March 2022
Recommendation: posted 24 October 2022
Cite this recommendation as:
Chambers, C. (2022) Is childhood adversity associated with a heightened response to opioids? Peer Community in Registered Reports. https://rr.peercommunityin.org/articles/rec?id=194

Related stage 2 preprints:

Does childhood adversity alter opioid drug reward? A conceptual replication in outpatients before surgery
Molly Carlyle*, Malin Kvande*, Isabell M. Meier, Martin Trøstheim, Kaja Buen, Eira Nordeng Jensen, Gernot Ernst, Siri Leknes, Marie Eikemo (*denotes equal contribution)
https://doi.org/10.17605/OSF.IO/XR2VB

Recommendation

A convergence of evidence suggests that early life adversity may cause dysfunction in opioid-sensitive reward systems. Childhood adversity is associated with opioid use, potentially by altering reward and motivation networks, and experimental models in animals have found that early life adversity increases and consolidates opioid seeking behaviours. Further, in a recent controlled experiment, Carlyle et al. (2021) found that opioid administration produced stronger positive responses, and weaker negative responses, in adults with a history of childhood abuse and neglect.
 
In the current study, Carlyle et al. seek to test the generalisability of these previous findings in a pre-operative clinical setting. Using partially observed data from an existing cohort study (N=155), the authors will test whether patients with greater experience of childhood trauma exhibit a larger mood boost and report greater subjective pleasure following opioid administration. Although not a randomised experimental design, this study provides the opportunity to examine the relationship between opioid response and history of childhood adversity in a naturalistic setting, and thus has the potential to either support or cast doubt on the theory that adversity elevates risk of opioid addiction by altering sensitivity to subjectively pleasurable effects.
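To make the planned relationship concrete, here is a minimal sketch of the kind of analysis implied above: regressing the post-drug change in mood on childhood trauma (CTQ) scores in the pre-operative cohort. The data file, column names and covariates are hypothetical placeholders, not the authors' registered specification.

```python
# Illustrative sketch only (not the registered analysis). File, column names
# and covariates are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("preoperative_cohort.csv")            # hypothetical data file

# "Mood boost": post-administration rating minus pre-drug baseline rating
df["mood_change"] = df["mood_post"] - df["mood_baseline"]

# Does childhood trauma (CTQ total) predict the mood boost, adjusting for example covariates?
model = smf.ols("mood_change ~ ctq_total + age + sex", data=df).fit()
print(model.summary())
```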
 
Following three rounds of in-depth review, the recommender judged that the manuscript met the Stage 1 criteria and awarded in-principle acceptance (IPA).
 
URL to the preregistered Stage 1 protocol: https://osf.io/7ymts
 
Level of bias control achieved: Level 2. At least some data/evidence that will be used to answer the research question has been accessed and partially observed by the authors, but the authors certify that they have not yet observed the key variables within the data that will be used to answer the research question AND they have taken additional steps to maximise bias control and rigour. 
 
References
 
1. Carlyle, M., Broomby, R., Simpson, G., Hannon, R., Fawaz, L., Mollaahmetoglu, O. M., Drain, J., Mostazir, M., & Morgan, C. (2021). A randomised, double-blind study investigating the relationship between early childhood trauma and the rewarding effects of morphine. Addiction Biology, 26(6), e13047.
 
2. Carlyle, M., Kvande, M., Leknes, S., Meier, I., Buen, K., Jensen, E. N., Ernst, G. & Eikemo, M. (2022). Does childhood adversity alter opioid drug reward? A conceptual replication in outpatients before surgery, in principle acceptance of Version 4 by Peer Community in Registered Reports. https://osf.io/7ymts
Conflict of interest:
The recommender in charge of the evaluation of the article and the reviewers declared that they have no conflict of interest (as defined in the code of conduct of PCI) with the authors or with the content of the article.

Reviews

Evaluation round #3

DOI or URL of the report: https://osf.io/xr2vb/?view_only=4238d2ee3d654c4f908a94efea82a027

Version of the report: v3

Author's Reply, 17 Oct 2022


Please find the reviewer response attached as a PDF.

Decision by Chris Chambers, posted 17 Oct 2022, validated 24 Oct 2022

As promised, I returned the manuscript to Zoltan Dienes for a final evaluation. He offers some remaining suggestions for streamlining the analysis plan. The idea of including the Bayesian analyses at Stage 2 as exploratory analyses (and therefore removing them from the Stage 1 manuscript) strikes me as a sensible compromise given their lack of diagnosticity. However, I will leave you to consider these points and respond/revise. Provided you are able to respond comprehensively, we should be able to award in-principle acceptance without further in-depth review.

Reviewed by Zoltan Dienes, 17 Oct 2022

The authors have responded very thoroughly to my comments. I understand their attraction to Bayesian modelling - as a Bayesian myself - but I think the combination of frequentist and Bayesian approaches in the way suggested doesn't quite work. The Bayesian model is interpreted effectively as a significance test: whether 0 is inside or outside a (100-X)% interval is the same as being significant at the X% level (see https://psyarxiv.com/bua5n/ pp 6-8). Further, power analyses tell one whether a study is underpowered or not; that is already apparent from the frequentist analyses, and the Bayesian analysis does not add to it. Incidentally, just one point of phrasing: the authors refer to a "true non-significant" result. Significance or non-significance is a property of a particular test applied to a particular sample, not a property of the population. So what the authors mean is a "true H0".
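For illustration, a minimal numerical sketch of the equivalence described here, under the assumption of a flat prior and a normal likelihood (so the credible interval and the confidence interval coincide); the estimate and standard error below are made up:

```python
# Illustrative only: with a flat prior and normal likelihood, "0 outside the
# (100-X)% interval" and "two-sided p < X" are the same decision.
from scipy import stats

estimate, se = 0.18, 0.10          # hypothetical coefficient and standard error
alpha = 0.05

z = estimate / se
p_two_sided = 2 * (1 - stats.norm.cdf(abs(z)))

crit = stats.norm.ppf(1 - alpha / 2)
lower, upper = estimate - crit * se, estimate + crit * se
zero_outside = (lower > 0) or (upper < 0)

print(p_two_sided < alpha, zero_outside)   # the two checks always agree under these assumptions
```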

Using the original study as a prior means the Bayesian posterior is a type of meta-analysis. That's good, but does not tell us whether this study is underpowered.
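As an illustration of this point, a minimal sketch of a normal-normal update in which the original study's estimate acts as the prior; the numbers are placeholders, not values from either study. The posterior is algebraically an inverse-variance (fixed-effect) pooling of the two estimates:

```python
# Illustrative normal-normal update: original study as prior, new data as likelihood.
prior_mean, prior_se = 0.30, 0.12    # hypothetical estimate (SE) from the original study
data_mean, data_se = 0.15, 0.10      # hypothetical estimate (SE) from the new cohort

prior_prec, data_prec = 1 / prior_se**2, 1 / data_se**2
post_prec = prior_prec + data_prec

post_mean = (prior_prec * prior_mean + data_prec * data_mean) / post_prec
post_se = post_prec ** -0.5

print(f"posterior: {post_mean:.3f} (SE {post_se:.3f})")
# Identical to a fixed-effect inverse-variance meta-analysis of the two estimates,
# which is why it summarises the combined evidence rather than this study's power.
```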

I would remove the Bayesian analyses from the pre-registration, as they do not actually influence conclusions; but the authors would of course be free to add them in an exploratory analysis section in the Stage 2, e.g. to get meta-analytic posterior estimates (though I wouldn't check whether 0 is inside or outside an HDI, see previous ref).

Evaluation round #2

DOI or URL of the report: https://osf.io/qcj5m?view_only=4238d2ee3d654c4f908a94efea82a027

Version of the report: v2

Author's Reply, 27 Sep 2022


Reviewer response attached as PDF in addition to being uploaded to the OSF page.

Decision by Chris Chambers, posted 25 Aug 2022, validated 17 Oct 2022

The three reviewers who assessed your initial submission have now evaluated the revised manuscript, and the good news is that we are getting close to Stage 1 acceptance. You will find some remaining methodological points to address in two of the reviews, including a key point about streamlining the analysis (and consequently the logical chain of inference), and the suggestion to remove exploratory analyses from the Stage 1 manuscript (with which I agree).

I will consult swiftly with Zoltan Dienes concerning your further revised submission to ensure that his points have been adequately addressed (especially his points 3 and 5, which are most important).

Reviewed by anonymous reviewer 1, 20 Aug 2022

The authors provided a thoughtful consideration of, and response to, all of the concerns raised.

Reviewed by Yuki Yamada, 13 Aug 2022

I would like to thank the authors for revising the manuscript based on the review comments. My opinion is that IPA could be granted for this proposed revised plan.

The following points are minor and should be confirmed by the recommender:

- In multiple regression equations, β usually represents the partial regression coefficient, and x (etc.) would represent the predictor variables. Perhaps the brackets themselves are meant to represent the predictors, but \( \hat{Y} \) also contains a bracketed name, which can be confusing, so I think it would be better to write the equation in the least misleading way possible (one conventional form is sketched after this list).

- Since there was no cleaned manuscript, many typos, etc. may be present. It is recommended that the cleaned manuscript file be checked by multiple third-party eyes in the final version before IPA, if possible.
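For reference, one conventional way to write the prediction equation (coefficients as b's or β's, predictors as named variables rather than bracketed labels), using the study's two predictors as an example, would be:

\[ \hat{Y} = b_0 + b_1 x_1 + b_2 x_2, \qquad \text{e.g. } \widehat{\text{Mood}} = b_0 + b_1\,\text{CTQ} + b_2\,\text{MSSS} \]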

 

Reviewed by Zoltan Dienes, 25 Aug 2022

The authors have addressed many of my points. There remain a few issues to resolve, the last one listed being most important.

1) "if any of the two tests were significant (p>.01 for the Shapiro-Wilk and p>.05 for the Kolmogorov-Smirnov)": the ">"s should be "<"s.


2) "Outliers for the CTQ scores were assessed using boxplots"
State how outlier is defined.


3) For the Bayesian analysis, why specifically 89% CIs? Why "AND HDIs"? But the bigger point is that I don't know what role the Bayesian analyses play in the planned inference. What would count as the Bayesian analysis "concurring" with the frequentist one? A CI/HDI doesn't in itself allow rejecting or accepting H1 or H2. In fact the posterior distribution is guaranteed to give 100% probability to the claim that the relevant effect exists. I suggest picking one analysis and sticking with it.


4) To keep things clean, don't list exploratory analyses at this stage.


5) Most importantly, past relevant work found small to medium effect sizes, and the current study calculates power for small to medium effect sizes. That means the study is not powered to detect *all* plausible effects of interest. Thus a non-significant result would not count against the hypothesis of an effect being there. The authors cannot change N, so they should temper their conclusions such that a non-significant result simply means reserving judgment.
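To illustrate point 5, a rough power sketch under stated assumptions: it treats the key test as a simple correlation with N = 155 (the cohort size given in the recommendation) and uses the Fisher-z approximation, so the power of the actual regression will differ somewhat.

```python
# Rough illustration of point 5: with N = 155, medium correlations are well powered
# but small ones are not, so a non-significant result cannot rule them out.
from math import atanh, sqrt
from scipy import stats

def corr_power(r, n, alpha=0.05):
    """Approximate power of a two-sided test of a correlation via Fisher's z."""
    z_crit = stats.norm.ppf(1 - alpha / 2)
    noncentrality = abs(atanh(r)) * sqrt(n - 3)
    return 1 - stats.norm.cdf(z_crit - noncentrality)

for r in (0.1, 0.2, 0.3):
    print(f"r = {r:.1f}: power ~ {corr_power(r, 155):.2f}")
```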
 

Evaluation round #1

DOI or URL of the report: https://osf.io/98wmk?view_only=4238d2ee3d654c4f908a94efea82a027

Author's Reply, 11 Aug 2022


Cover letter with reviewer response, and manuscript document (in tracked changes) attached as PDFs.

Decision by Chris Chambers, posted 17 May 2022

I now have three very helpful and constructive reviews of your submission. As you will see, the reviewers are broadly positive about the prospects of your manuscript, although some significant work will be needed to meet the Stage 1 criteria and achieve in-principle acceptance (IPA).

Among the main concerns are:

1. The logical coherence of the introduction and rationale, including making clear how reduced mu-opioid receptor density is related to increased reward sensitivity (a point raised in slightly different ways by two of the reviewers).

2. Considering the potentially confounding effects of expectancy.

3. Clarifying the precise details of the analysis plans and contingencies. For a revised manuscript, I would recommend generating and including analysis code on simulated data to verify suitability of the plans.

4. Clarifying the precise conditions that will confirm or disconfirm the predictions (which may entail the removal of redundant analyses). At present, the design plan does not sufficiently prespecify the conditions under which different conclusions will be drawn. This will require revision to both the main text and the study design table (while keeping the design table as succinct as possible).

5. Clarification of the level of bias control in the manuscript. In the submission checklist you selected Level 2: At least some data/evidence that will be used to answer the research question has been accessed and partially observed by the authors, but the authors certify that they have not yet observed the key variables within the data that will be used to answer the research question AND they have taken additional steps to maximise bias control and rigour (e.g. conservative statistical threshold; recruitment of a blinded analyst; robustness testing, multiverse/specification analysis, or other approach). Please add a section to the manuscript that makes clear the level of prior data observation that has taken place (and confirms the corresponding level of bias control achieved under the PCI RR taxonomy). The second part of the Level 2 definition does not appear to be tackled in your plans: additional steps to maximise bias control and rigour (e.g. conservative statistical threshold; recruitment of a blinded analyst; robustness testing, multiverse/specification analysis, or other approach). This will need to be comprehensively addressed to achieve IPA.
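As one concrete way of meeting the "additional steps" requirement, here is a minimal sketch of a specification (multiverse) check: re-running the key regression across a small grid of defensible analysis choices and reporting the spread of the coefficient of interest. The file, variable names and the particular choices are hypothetical placeholders, not a prescription.

```python
# Illustrative specification/multiverse check (hypothetical names and choices).
import itertools
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("preoperative_cohort.csv")                  # hypothetical data file
df["mood_change"] = df["mood_post"] - df["mood_baseline"]    # hypothetical columns

covariate_sets = ["", " + age + sex", " + age + sex + baseline_pain"]
outlier_cutoffs = [None, 3.0]        # None = keep all; 3.0 = drop |z(CTQ)| > 3

rows = []
for covs, cutoff in itertools.product(covariate_sets, outlier_cutoffs):
    data = df
    if cutoff is not None:
        z = (df["ctq_total"] - df["ctq_total"].mean()) / df["ctq_total"].std()
        data = df[z.abs() <= cutoff]
    fit = smf.ols("mood_change ~ ctq_total" + covs, data=data).fit()
    rows.append({"covariates": covs.strip(" +") or "none",
                 "outlier_rule": cutoff,
                 "beta_ctq": fit.params["ctq_total"],
                 "p": fit.pvalues["ctq_total"]})

print(pd.DataFrame(rows))   # how stable is the CTQ coefficient across specifications?
```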

Overall, I believe the manuscript is sufficiently promising to invite a Major Revision. Your proposal addresses a scientifically valid question, and (from my own reading) strikes me as an innovative and valuable use of pre-existing data. Should you wish to revise, please ensure that you respond comprehensively to all of the issues raised above and in the reviews, including a point-by-point response to every comment of the reviewers, and a fully tracked-changes version of the revised manuscript.

Reviewed by anonymous reviewer 1, 10 May 2022

In this manuscript, the authors describe a study in which they explore to what extent childhood adversity predicts acute subjective responses (“reward”) to mu-opioid agonists administered in a medical setting. This is a very interesting and important topic, a nice follow-up from the authors’ previous study, and well-written start to a manuscript. The study has good scientific validity, and the hypotheses seem rational. However, there are some small matters that require clarification as described below.
 
Introduction
In the introduction, the authors state that early adversity is associated with reduced mu-opioid receptor density. It is not clear, however, how reduced mu-opioid receptor density relates to increased reward after exogenously administered opioids.
 
The authors write, "Here, we examined whether childhood adversity increases risk of opioid misuse via enhanced positive drug effects." Are the authors actually planning to measure opioid misuse? Otherwise, this statement should be modified, as it does not accurately describe the experimental question.
 
In the introduction, describe the mechanism of action of the two drugs (i.e., do they act as pure mu agonists? How do the doses used compare to one another?).
 
Methods:
One potential difficulty with this design is that it is not clear what role expectancy effects play in this study. What were the patients told about the medication they would be receiving? Did they know they would be receiving an opioid? There is some evidence that childhood adversity predicts placebo response in the context of analgesia, so one concern is that differences in expectancy effects between subjects with low and high adversity could confound the analysis.
 
Another potential pitfall is that the authors’ sample of patients will not have sufficient variability on the CTQ to be able to conduct the planned analyses. Presumably most participants will not have any history of childhood trauma. How will the authors ensure a substantial enough range on the CTQ to obtain meaningful results?
 
For the future submission: in the analysis, the group that got remifentanil and the group that got oxycodone should be compared on subjective effects to make sure that the doses of the different drugs were matched on this metric.

Reviewed by Yuki Yamada, 06 May 2022

I read this study with interest even though I am not a complete expert on the topic. It attempts to test, as a natural experiment, the hypothesis that detrimental childhood circumstances, for which indirect evidence has accumulated in highly controlled settings, are associated with later opioid effects. Since this is an observational study in a natural setting, I am cautious about whether the authors can draw causal conclusions for their hypothesis here, but there is no doubt that the present study will still provide useful findings. Below is a list of points that I believe should be addressed in advance to strengthen the protocol.

  • I understand that the present study is designed to analyze data that already exist. In such cases, I think the authors need to clarify how much specific knowledge of the data they already have. PCI RR has defined levels of bias control, so please refer to those.
  • Are there findings from previous studies that socioeconomic status affects opioid misuse? I think it needs to be explicitly explained why the authors are focusing on SES here. Furthermore, it is unclear at what point in the subjects' "childhood" the authors expect SES to have an effect, and it is also unclear which period of SES the subjects will report on. As SES can vary over time, shouldn't this point be specified explicitly?
  • There are two predictor variables, CTQ and MSSS, which will not always show consistent results. In what cases does this mean that the hypothesis is supported?
  • Data are examined in a number of ways before the multiple regression analysis, but I am not sure how and when each of these decisions will be made. For example, the Shapiro-Wilk test and the Kolmogorov-Smirnov test do not return the same results, and the criteria for visual judgments are unclear and can be arbitrary. As for outliers, there is no indication of how they will be detected.
  • In the multiple regression analysis, it is stated that another regression analysis is performed using the product of the two predictor variables, but I did not understand this clearly. This is said to examine the "combined effect", but including a product term is usually done to examine interactions (a conventional form of such a model is sketched after this list). What is this "combined effect", and on what theoretical grounds is it hypothesised? Also, since there are only two predictors, I thought this could be discussed to some extent just by looking at the multiple correlation coefficient and the coefficient of determination; is there any reason not to? Also, what would the (single?) correlation analysis between this combined variable and the outcome variables be? Is it a partial correlation analysis? What is the interpretation if there is no significant effect of the individual variables and only the combined effect is significant, or vice versa?
  • The description of the baseline is ambiguous and it is unclear how it is to be set.
  • How do the results of a multinomial logistic regression analysis for changes in mood ratings support the hypothesis? Also, how do you reconcile results that are inconsistent with the multiple regression analysis that preceded it?
  • Two types of opioid analgesics are used; does this difference affect the testing of the hypothesis? If so, I think it needs to be clearly stated in the manuscript.
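For reference, a regression that includes the product of the two predictors is conventionally read as a moderation (interaction) model, for example:

\[ Y = \beta_0 + \beta_1\,\text{CTQ} + \beta_2\,\text{MSSS} + \beta_3\,(\text{CTQ} \times \text{MSSS}) + \varepsilon \]

Here \( \beta_3 \) indexes whether the association between CTQ and the outcome depends on the level of MSSS (and vice versa), which is distinct from a simple "combined effect" of the two predictors.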


Many of the questions I have raised here about the analysis would not be particularly problematic if it were all exploratory analysis. However, if it is to be registered as a confirmatory analysis, please clarify each hypothesis and the criteria for evaluating and interpreting the results.

Reviewed by Zoltan Dienes, 02 May 2022

This is a very interesting study making good use of a naturalistic situation to look at whether childhood adversity affects how people respond subjectively to opioids.
I didn't see any discussion of how bias is controlled, but I will presume the editor has this in hand.
My main point is that there is still plenty of scope for analytic flexibility.  Specifically: 
1. Normality is to be checked in a range of ways. Under what conditions will normality be presumed good enough to proceed? If it is not good enough, what will be the exact bootstrapping procedure?
2. Childhood adversity is to be measured using three IVs. If any one is significant in predicting a DV, will there be presumed to be a relationship between adversity and that DV? This gives one three shots at that conclusion. Either pick one main predictor or adjust with Bonferroni (etc.), and adjust the power calculation accordingly.
3. Specify exactly how demographic variables will be coded.
4. Specify exactly how ratings will be adjusted for baseline - e.g. will baseline ratings be entered as IVs?
5. For clarity, specify the full regression equation that will be used.
6. A lower-powered back up analysis is suggested by collapsing change scores into three categories. This gives another shot at the cherry. I suggest deleting this analysis.
7. Subjective effects will be measured in three different ways (feeling good, liking, feeling high). This gives three shots at getting the effect. I suggest averaging these ratings together (or else adjusting the familywise error rate). Averaging will increase the reliability of the measure and give more power to detect a given raw effect size (i.e. difference in rating units).
8. Determine what difference in rating units would be just meaningful, given the purpose to which the study could be used. How many units of feeling high is enough to care about? Put another way, a previous study found the bottom limit of the 95% CI for euphoria was 7 units on a 100-point scale. This corresponds to 0.7 units on a 10-point scale. Is this still enough to care about? (See p 10 here: https://psyarxiv.com/yc7s5/ ). If so, the fact that it is the bottom of a CI could be used to indicate it is roughly the lower limit of what is plausible; and if it is an effect one would care about, it is a minimal meaningful effect size that is just plausible. That means it is appropriate to be the effect size used for a power analysis. Note that when converting from a raw to a standardized effect size, take into account whether the DV is averaged, which will increase the standardized effect size for a given raw effect size.
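To illustrate points 7 and 8, a minimal sketch of converting the just-meaningful raw effect (7 points on a 0-100 scale, i.e. 0.7 on a 0-10 scale) into standardized units, and of how averaging the three subjective ratings changes that standardized value. The single-rating SD and the inter-rating correlation below are assumed placeholders, not study values.

```python
# Illustrative only: assumed SD and inter-rating correlation, not study values.
from math import sqrt

raw_effect = 0.7      # 7 points on a 0-100 scale = 0.7 on a 0-10 scale (from point 8)
sd_single = 2.0       # assumed SD of a single subjective rating on the 0-10 scale
rho = 0.6             # assumed correlation among the three ratings

d_single = raw_effect / sd_single

# SD of the mean of three equally variable, equally correlated ratings
sd_mean = sd_single * sqrt((1 + 2 * rho) / 3)
d_averaged = raw_effect / sd_mean

print(f"standardized effect, single rating:   {d_single:.2f}")
print(f"standardized effect, averaged rating: {d_averaged:.2f}")
# The same raw effect corresponds to a larger standardized effect after averaging,
# which is the conversion point 8 asks to take into account in the power analysis.
```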


Minor point from Introduction: Why would a reduction in mu-opioid receptor density create heightened reward sensitivity (as it is associated with a reduced analgesic response to the drug)?

Zoltan Dienes