Stage 1 acceptance (IPA)

Based on reviews by Joshua Tasoff, Bence Palfi and Alaa Aldoh
A recommendation of:

Removing barriers to plant-based diets: assisting doctors with vegan patients


Submission: posted 16 August 2021
Recommendation: posted 05 December 2021, validated 05 December 2021
Cite this recommendation as:
Dienes, Z. (2021) Stage 1 acceptance (IPA). Peer Community in Registered Reports.

Related stage 2 preprints:


Thank you for your careful response to the points of myself and the reviewers. I am now happy to award in principle acceptance (IPA). As requested, your submission is being awarded a private Stage 1 acceptance, which will not appear yet on the PCI RR website. Your Stage 1 manuscript has also been registered under the requested 4-year private embargo on the OSF (link below).

URL to the preregistered Stage 1 protocol:

Level of bias control achieved: Level 6. No part of the data or evidence that will be used to answer the research question yet exists and no part will be generated until after IPA.

List of eligible PCI RR-friendly journals:

Conflict of interest:
The recommender in charge of the evaluation of the article and the reviewers declared that they have no conflict of interest (as defined in the code of conduct of PCI) with the authors or with the content of the article.

Evaluation round #2

DOI or URL of the report:

Version of the report: v1.2

Author's Reply, 23 Nov 2021

Decision by Zoltan Dienes, posted 08 Nov 2021

Dear Romain


Thank you for your revision, which has addressed a number of issues. The inclusion of the pilot is useful. A key problem with the revision is the justification of why a Cohen's d of 0.5 is the smallest meaningful effect size: why wouldn't a d of 0.4 be practically very important? The choice of this value potentially has major implications for which way statistical conclusions go; thus the choice needs to be motivated by the scientific context in order for the conclusions to be relevant to that context. For some ideas on how to motivate this choice see:

Having justified a smallest meaningful effect size, call it S, your decision rule could be simple: e.g. for a case where the DV should be larger than S, is the sample effect significantly larger than S (conclude that there is an effect of interest); significantly smaller than S (conclude that there is not an effect of interest); or neither, in which case suspend judgment. The relevant power would then be the probability of the first outcome given a predicted effect size and, equally relevant, the probability of the second outcome given an effect of 0.
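As a rough illustration of this three-outcome rule (the normal outcome model, the t-test, and all numbers below are my own assumptions for the sketch, not the study's protocol):

```python
# Illustrative decision rule against a smallest meaningful effect S.
# The normal outcome model and all parameter values are assumptions
# made for this sketch, not the study's actual protocol.
import numpy as np
from scipy import stats

def decide(sample, S, alpha=0.05):
    """Three-outcome rule: effect of interest / no effect of interest / suspend."""
    # Significantly larger than S?  H0: mean <= S  vs  H1: mean > S
    p_above = stats.ttest_1samp(sample, popmean=S, alternative="greater").pvalue
    # Significantly smaller than S?  H0: mean >= S  vs  H1: mean < S
    p_below = stats.ttest_1samp(sample, popmean=S, alternative="less").pvalue
    if p_above < alpha:
        return "effect of interest"
    if p_below < alpha:
        return "no effect of interest"
    return "suspend judgment"

def simulated_power(true_mean, sd, n, S, target, alpha=0.05, nsim=1000, seed=0):
    """Probability of reaching `target` when the true effect is `true_mean`."""
    rng = np.random.default_rng(seed)
    hits = sum(decide(rng.normal(true_mean, sd, n), S, alpha) == target
               for _ in range(nsim))
    return hits / nsim
```

Power in the sense above is then `simulated_power(predicted_effect, sd, n, S, "effect of interest")`, and the equally relevant quantity is `simulated_power(0.0, sd, n, S, "no effect of interest")`.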




Evaluation round #1

DOI or URL of the report:

Author's Reply, 06 Nov 2021

Decision by Zoltan Dienes, posted 23 Sep 2021

Dear Dr Espinosa

I now have three reviews for your paper; all reviewers are overall very positive, both about the motivation for the study and about the methods broadly. However, they raise specific concerns, particularly about showing whether the psychometric properties of your scales are up to the job, and about inferential procedures, specifically concerning the effect size of interest and whether evidence could thus be obtained for no effect.

One question, raised by Aldoh, is whether you would want to pilot your scales to establish their reliability or validity. Or else, as suggested by Palfi, introduce into the main study "outcome neutral tests" -  or estimates -  of their scale properties, so that main conclusions are conditional on these tests showing adequate psychometric properties.  A minor point: For the first scale, how about asking the question in this form "How likely are you to..." with options 0%, 10%...100%. Would this make the meaning of the response options clearer to subjects (and hence easier for us to interpret)?

The other main issue concerns statistical inference. First, a point of clarification. You say:
"We observe that we have a probability of over 80% of detecting an effect if it is greater than or equal to 0.10." What are the units? Likert units or Cohen's d?

In terms of power, you have taken your sample size as a starting point, then asked what effect size that implies for 80% power. As both Palfi and Aldoh asked, why should we be concerned specifically about an effect of 0.1? This point is important in terms of whether a non-significant result would refute your hypothesis. Would a non-significant result allow the conclusion you state follows from it ("the information campaign fails to improving doctors’ views of plant-based diets")? Only if power was calculated with respect to a minimally interesting effect for your research problem.

Aldoh also points out that conclusions based only on the logic of power do not take into account the data as they are actually observed; for example, a significant result may correspond to an effect less than 0.1, and a non-significant result may come with a confidence interval that extends beyond 0.1. It is up to you which inferential approach you wish to adopt (i.e. you can stick with a Neyman-Pearson power approach), but some comment on this would be helpful. In effect Aldoh is raising the possibility of an equivalence region approach; it would still require justifying a minimally interesting effect size. Palfi wonders if Bayes factors may be helpful in this regard. Then one needs to say not what the minimally interesting effect is, but what effect is predicted by a theory. (The theory could be, e.g., that any difference within the range of the scale is possible, though smaller effects are more likely than bigger ones.) (Some ideas here may help for any of these approaches: )

If justifying a minimally interesting effect or a predicted effect seems difficult, that may be because this is a situation where one should just estimate the effect size with its 95% CI. Approached in this way, your conclusion would not be that the intervention does or does not work, but that the estimate of how well it works is such and such.
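A minimal sketch of this estimation approach, assuming a simple two-group comparison of means with a Welch-type interval (the data, scale, and group sizes below are placeholders):

```python
# Sketch of the estimation alternative: report the group difference with
# a 95% CI instead of a binary verdict. Data, scale and group sizes are
# placeholders, not the study's actual design.
import numpy as np
from scipy import stats

def mean_diff_ci(treatment, control, conf=0.95):
    """Welch-type difference in means with a confidence interval."""
    t_arr = np.asarray(treatment, dtype=float)
    c_arr = np.asarray(control, dtype=float)
    diff = t_arr.mean() - c_arr.mean()
    v1 = t_arr.var(ddof=1) / t_arr.size
    v2 = c_arr.var(ddof=1) / c_arr.size
    se = np.sqrt(v1 + v2)
    # Welch-Satterthwaite degrees of freedom
    df = (v1 + v2) ** 2 / (v1**2 / (t_arr.size - 1) + v2**2 / (c_arr.size - 1))
    tcrit = stats.t.ppf((1 + conf) / 2, df)
    return diff, (diff - tcrit * se, diff + tcrit * se)
```

The conclusion is then phrased as "the campaign shifts the index by this much, 95% CI [lo, hi]", which stays informative whether or not the interval excludes zero or any candidate smallest meaningful effect.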

The exact resolution of this inferential issue could go in several directions. Both myself and the reviewers thought this needed more work however.

Tables S1 and S2: the column labeled "Beta" - do you mean "power"?

I look forward to receiving a revised manuscript that addresses these issues, and the other points that the reviewers raised.


Zoltan Dienes

Reviewed by Joshua Tasoff, 10 Sep 2021

1A. The scientific validity of the research question(s)

Valid, very interesting and very important.

1B. The logic, rationale, and plausibility of the proposed hypotheses (where a submission proposes hypotheses)

The project is plausible, logical, and presented clearly.

1C. The soundness and feasibility of the methodology and analysis pipeline (including statistical power analysis or alternative sampling plans where applicable)

The approach is sound and feasible.


1D. Whether the clarity and degree of methodological detail is sufficient to closely replicate the proposed study procedures and analysis pipeline and to prevent undisclosed flexibility in the procedures and analyses

Everything necessary for replication is available.


1E. Whether the authors have considered sufficient outcome-neutral conditions (e.g. absence of floor or ceiling effects; positive controls; other quality checks) for ensuring that the obtained results are able to test the stated hypotheses or answer the stated research question(s).

I think the authors have considered reasonable scenarios.  Unfortunately, a messy world produces messy data.   There may be unforeseen factors that may get in the way of logistics or make the desired tests less ideal than they appear now.  But I believe the authors have done their due diligence.

Reviewed by Bence Palfi, 22 Sep 2021

The manuscript aims to test whether providing information about the health benefits of plant-based diets to physicians can change their attitudes and behaviour towards veganism. I believe that the proposed research question is valid and very promising; it would certainly be intriguing to explore the potential of the introduced information campaign. I applaud the authors for choosing the RR format and for the level of transparency regarding their materials. However, I have a few comments concerning the introduction, and I've found some major issues in the methods/proposed analyses that I believe should be addressed before in principle acceptance is secured.



- Some important pieces of information are missing from the Methods section, and its clarity could also be improved.

  - Crucially, more information on recruitment is needed. It is unclear how the mentioned company will recruit the participants: are they approaching physicians in their database, or can any physician join the study? What will they know about the study before signing up? Are they compensated at the same level as their typical hourly rate? These should be made explicit, as selection bias can challenge the generalisability of the results.

  - Will the participants be randomly assigned to the conditions? This is critical to the comparability of the conditions.

  - There are some redundancies: the size of the sample is mentioned multiple times, and information about the variables and the control/experimental groups is scattered around the Methods section.

  - Have the proposed variables been used before, or are they newly developed? Also, it is unclear why T9 and T10 are not included in the PMPI score. Are these filler items?

- The section on power has some factual errors and needs more clarity.

  - For instance, when the authors say that they use an a priori power of 0.95, I suspect they mean the significance threshold of 0.05 and not power, as they later power their design to find statistically significant tests in 80% of cases, not 95%.

  - The correction of the alpha level is mentioned in the table, but it should be explained in the main text as well.

  - "Based on previous data, we assume that the average probability of a positive event in the control group is 0.495". It is unclear what previous data the authors refer to. Are they referring to pilot data or to previous studies in the field?

  - The minimally interesting effect sizes (e.g., 0.1 on the VDI) should be justified or put in context by comparing them to previous studies.

- Outcome-neutral tests play an important role in RRs to ensure the quality of the data. I think all three central tests could have a corresponding outcome-neutral test to ensure that there is no ceiling or floor effect in the control group. You could run a two-sided Wilcoxon test in the control group for all three variables.

- The first central test, investigating the extent of attitude change, has a potentially important follow-up test that, in my opinion, could also be preregistered. Elevating attitudes towards plant-based diets is useful; however, the intervention may not be good enough if attitudes remain negative in the experimental group (VDI smaller than 0.5). This is, of course, given that attitudes were negative in the control group and the intervention managed to elevate them.

- The authors may find it difficult to interpret some of their results if they are non-significant. I recommend the inclusion of Bayesian analyses (the Bayes factor) so that the authors can distinguish between inconclusive results and clear evidence for the null. Bayes factors can be included conditionally (in case a test is non-significant) or run for every statistical test. JASP offers a simple way to run Bayesian Mann-Whitney and Wilcoxon tests.


- In general, I've found the introduction to be very informative, concise, and well-argued. However, it would be great to see some paragraphs on the psychology of attitude/behaviour change via information campaigns.

- I think that the end of the introduction does not need to mention the specific statistical analyses that will be used to test the hypotheses. It is enough to specify the research question and the hypotheses; the statistical analyses are described in detail later.

- "Physician" may be a more precise term than "doctor". More importantly, am I right in thinking that the participants will be primary care physicians (general practitioners) and not any kind of doctors? I think this should be clarified.

- In the description of Figure 1, the authors claim that the p-values in a previous study were significant at the 1% level (in fact, p < 0.001 is also significant at the 0.1% level), but they use the 5% level in their own study. Alpha levels should be specified before data collection, and I suspect that the authors of the mentioned study did not intend to use the 1% threshold, so it is better to stick with the traditional threshold of 0.05 when describing that study.
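The outcome-neutral check suggested above (a two-sided Wilcoxon test in the control group) could be sketched as follows; testing against the scale midpoint is one interpretation of that suggestion, and the 0-1 scale with midpoint 0.5 is a placeholder rather than the study's actual scaling:

```python
# Sketch of the suggested outcome-neutral check: a two-sided one-sample
# Wilcoxon signed-rank test of control-group scores against the scale
# midpoint. The 0-1 scale and 0.5 midpoint are placeholder assumptions.
import numpy as np
from scipy import stats

def midpoint_check(control_scores, midpoint=0.5, alpha=0.05):
    """Two-sided signed-rank test of whether scores sit away from the midpoint."""
    diffs = np.asarray(control_scores, dtype=float) - midpoint
    res = stats.wilcoxon(diffs)  # two-sided by default
    return {"p_value": res.pvalue, "off_midpoint": bool(res.pvalue < alpha)}
```

A significant shift towards a scale bound, together with inspection of how many scores lie at the bounds, would flag a floor or ceiling problem before interpreting the central tests.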

Reviewed by Alaa Aldoh, 20 Sep 2021


Doctors’ possible misperceptions of plant-based diets may compromise their relationships with patients, and the willingness of newly vegetarian/vegan patients to continue eating plant-based foods. The article suggests that doctors in France are not always willing to learn about plant-based nutrition as a result of various barriers, cognitive and otherwise. The article outlines plans for a randomised controlled trial (N = 400) in which participating doctors are assigned to one of two conditions: a) information campaign, or b) no-information control. The authors hypothesize that the information campaign will: positively influence doctors’ opinions of plant-based diets, increase the likelihood of prescribing the correct medical test, and increase recommendations to follow plant-based diets.

I am not particularly aware of research in this area, but all things considered, I think this study is valuable and could have real consequences on the use of information campaigns to facilitate doctors’ knowledge and promotion of plant-based diets. Some methodological decisions require further justification or elaboration to ensure the collected data adequately answers your research questions.


Re: Introduction


The introduction provides an excellent overview of doctors’ perspectives on plant-based diets in France, though certain parts can be strengthened. I could not access a few of the references provided to justify the research questions, so I cannot easily evaluate this myself. I think it would be helpful to include links to theses available on the web, especially when DOIs or other identifiers are not available. It is my understanding that there has been no research so far exploring the effect of information campaigns used to improve doctors’ knowledge of plant-based diets, so I can understand if it is not possible to use existing evidence to support the proposed study. There is good discussion of how doctors may deter patients from pursuing a plant-based diet, but I think some coverage of how doctors might positively influence people’s uptake of plant-based diets would be good too (see Cramer et al., 2017; McIntosh et al., 1995).

The supporting figure (Figure 1) reports results from a previous study, but only includes p-values obtained from statistical tests. Please report the statistical tests fully, including sample size, in either the figure or its notes.


Re: Methods

Psychometric properties of measures

The proposed measures are interesting, but it isn’t clear to me why existing measures of attitudes towards plant-based diets were not used. Corrin and Papadopoulos’s (2017) review of the literature on attitudes towards a vegetarian diet may be helpful in finding existing measures that can be adapted for the purpose of your research. Otherwise, if these measures have been used before in a pilot/unpublished study, I would suggest adding the psychometric properties found in the past to support your use of these measures. If these are completely new measures, it may be beneficial to pre-test them and examine their internal reliability (e.g., using Cronbach’s alpha or factor analysis). If this is not possible, I would acknowledge it as a limitation.

I have some concerns about the “veganism promotion index” (VPI). Doctors may disagree with one aspect of the message that there are “no health risks in following a well-balanced vegan diet”, despite willingness to promote a well-balanced vegan diet generally. Distinguishing between those two may be beneficial conceptually, or you could include a question of some sort to check doctors’ understanding of the question. It is also not clear to me to what extent the charity giving game is an adequate measure of actual or “active” behaviour. Some discussion of the VPI’s convergent validity is needed to establish its adequacy for the intended purpose.

On screen #6, the question “what is your opinion regarding the level of consumption by the French population of…?” may be interpreted differently by different participants. For example, if a doctor chooses “slightly excessive” for eggs, does it refer to people’s consumption of eggs, or their opinion of people’s said consumption? I may be wrong about this, so it may be preferable to consider the feedback of the editor/reviewers on this point. Otherwise, I believe these are conceptually valuable outcomes to test the efficacy of the intervention.
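As an illustration of the internal-reliability pre-test mentioned above, Cronbach's alpha can be computed from a pilot item-score matrix in a few lines (the matrix shape and any data fed to it here are hypothetical):

```python
# Hypothetical pilot reliability check: Cronbach's alpha for an
# (n_respondents, k_items) matrix of item scores.
import numpy as np

def cronbach_alpha(items):
    """Internal consistency of a multi-item scale."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    # Sum of individual item variances vs variance of the total score
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)
```

Values near 1 indicate the items move together; values near 0 suggest the items do not measure a single construct.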


Presentation of measures

Regarding the scales presented on screen #4, I think it might be confusing to vary the valence of the anchors across questions. I think it would be better to keep the negative anchors on the same side for each question. The booklet used to provide information relates to both vegetarian and vegan diets, but the measures used refer to vegan diets specifically. I think it would be better either to adjust the prompts, or to clarify in your discussion that this research applies to vegan diets specifically. Perhaps discuss this limitation in your report, as doctors may have more favourable attitudes towards vegetarian diets in comparison to vegan ones.



In your power analyses, it seems like you expect a difference of 0.1 on the VDI and VPI measures, and 6 percentage points on the PMPI. Please add an explanation for why those are the expected effect sizes. There may be previous research justifying them, or it may just be that this is what you consider practically meaningful; in all cases, I would recommend adding an explanation. It is also not clear to me whether these thresholds are used for making inferences about the data, or only for the purpose of the power analysis. For example, if you obtain a significant difference between conditions, but the difference is less than 0.1 on a 0-1 scale, would you still infer that the intervention was effective? I am unfamiliar with one-sided (unilateral) Wilcoxon tests, so I cannot comment on their use as an inferential tool. My personal preference would be to use methods that do not rely on null hypothesis significance testing (e.g., Bayesian estimation of parameter values, Bayes factors, equivalence testing), but I leave this to the editor, who may be experienced with other methods.
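One way to connect the threshold to the planned rank-based test is to simulate the power of a one-sided Mann-Whitney test at the smallest effect considered meaningful; the normal outcome distributions and all parameter values below are placeholders, not the study's actual design:

```python
# Simulation sketch: power of a one-sided Mann-Whitney test at the
# smallest effect considered meaningful. The normal outcome model,
# delta, and sample sizes are placeholders, not the study's design.
import numpy as np
from scipy import stats

def mw_power(delta, n_per_group, alpha=0.05, nsim=500, seed=0):
    """Proportion of simulated studies yielding a significant one-sided test."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(nsim):
        control = rng.normal(0.0, 1.0, n_per_group)
        treated = rng.normal(delta, 1.0, n_per_group)
        res = stats.mannwhitneyu(treated, control, alternative="greater")
        hits += res.pvalue < alpha
    return hits / nsim
```

Running this at the minimally interesting effect (rather than at whatever effect the fixed sample size happens to detect) makes the power claim directly relevant to the inference drawn from a non-significant result.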


Thank you very much for this interesting read.


Best wishes,

Alaa Aldoh
