DOI or URL of the report: https://osf.io/t4akf?view_only=66eab29c7acb4aebbcec4631cbcb9217
Version of the report: V2.3
Dear recommender,
We have made the suggested changes.
Thank you very much for your help,
Best regards,
Romain Espinosa
DOI or URL of the report: https://osf.io/w65hz?view_only=66eab29c7acb4aebbcec4631cbcb9217
The original reviewers of the Stage 1 manuscript have now evaluated your Stage 2 manuscript. Each reviewer raised some points of clarification or elaboration that I urge you to address.
It is great to see this interesting project completed, and the results are very promising. The analyses were mostly executed as stated in the Stage 1 manuscript, but I believe that the manuscript could be improved by expanding the Discussion section, changing some of the exploratory analyses, and clarifying the result of the outcome-neutral test. See my comments below:
Have the authors provided a direct URL to the approved protocol in the Stage 2 manuscript? Did they stay true to their protocol? Are any deviations from protocol clearly justified and fully documented?
- Link is provided
- There were two minor changes in the protocol. These changes were transparently reported and were approved before data collection. I find these changes well justified as they improved the clarity of the questionnaire and added a data quality check.
Is the Introduction in the Stage 1 manuscript (including hypotheses) the same as in the Stage 2 manuscript? Are any changes transparently flagged?
The Introduction and Methods sections remained the same. The only modifications are a new summary-of-results paragraph at the end of the Introduction and the change from future to past tense in the Methods section.
Did any prespecified data quality checks, positive controls, or tests of intervention fidelity succeed?
- Partly. The third pre-registered hypothesis (the test of VPI scores) was not tested because statistical power was insufficient.
- However, the pre-registered outcome-neutral tests that were meant to check for ceiling and floor effects are not reported (they are reported only for the pilot data), so this cannot be assessed.
Are any additional post hoc analyses justified, performed appropriately, and clearly distinguished from the preregistered analyses? Are the conclusions appropriately centered on the outcomes of the preregistered analyses?
- The exploratory section introduces post-hoc comparisons, but these are based only on descriptive statistics, not on statistical tests. I agree that these comparisons are interesting, as they may reveal that the information campaign did not improve the understanding of all issues (e.g., the Zinc test), but I think these comparisons need proper post-hoc statistical tests to substantiate any claims made in that section or later. That is, if you want to claim that the information campaign improved the understanding of all medical tests except the Zinc test, you need to back this claim up with statistical analyses.
- The rest of the exploratory analyses look good
- I believe that the Discussion section could be expanded, mostly focusing on the results of the confirmatory analyses and their interpretation.
Are the overall conclusions based on the evidence?
I agree with the conclusions regarding the first hypothesis.
However, regarding the second hypothesis, I have two issues:
- You should bear in mind that you ran a vignette-based study rather than measuring actual performance in general practice. Hence, I would refrain from saying that the information campaign improves medical practice; instead, I would say that it has the potential to improve it. This is especially relevant for the abstract, which does not reveal that this is a vignette-based study (it would be nice to add this to the abstract).
- The observed effect size was half of the pre-registered SESOI. I had the impression that this is not made sufficiently clear in the abstract and the Discussion section. More attention should be given to this point, and you should explain why the observed effect is still interesting (see also my later comment about the lack of guidance on how the observed effect sizes should be interpreted).
Issues/requests:
- Data sharing link (https://github.com/EspinosaRomain/DoctorsVeganDiets) is not accessible to me so I could not verify the data and analyses
- It was a bit unfortunate that the VPI scores were not analysed. I think the power analysis of the VPI scores may have used an unfairly large effect size as the smallest effect size of interest. Doubling an observed effect size is likely to overestimate a realistic effect, so this analysis was bound not to meet the 80% power criterion. In my opinion, even a 6-percentage-point improvement in donations could be worthwhile. I understand that the approved protocol should be followed, but it would be nice to see this analysis in the exploratory section, now that you have collected the data.
- It would be great if the Discussion connected the findings to the existing literature on information interventions; there are currently no references in the Discussion section. I believe that the Discussion could also provide an interpretation of the observed effect sizes of the confirmatory analyses. The smallest effect sizes of interest were defined, so it would be great to put the observed effects in context.
- The analyses presented in the Discussion section should be moved to the exploratory analyses section, and the results could benefit from more interpretation and links to other findings within this study or in other studies.
- It would be great to expand on the limitations of the current study (e.g., the use of vignettes; how the results may generalise to French doctors [is the sample truly representative of French doctors?]) and on future directions.
Minor issues:
- r 150: “diet” is missing after plant-based
- r 177: the control condition is called the baseline condition here
- There is no reference to Figure 5 in the text.
- The caption of Figure 5 should clarify that the presented data only includes doctors from the control group
The study is well executed and the authors have carefully considered my prior feedback. The reporting of the statistical analyses could be clearer: a) report the statistical results fully (not just p-values), and b) report the SDs/SEs alongside every mean. I am slightly confused about why the authors conclude that information had a weakly positive effect on PMPI scores when the scores were lower than the SESOI. My understanding is that you would suspend judgement in this case, but it is likely that I have misunderstood. Although the study did not have enough power to test the effects of the intervention on VPI, I think it is worth including the results of that analysis either in the exploratory analyses section or in the supplementary materials. The report ends somewhat abruptly, and I think a concluding paragraph is needed to tie all the findings together. Lastly, the analysis code is not available. Otherwise, the results of the study are promising and the authors have done a good job visualising their data. Well done.