Defacing biases in manual and automated quality assessments of structural MRI with MRIQC
Céline Provins, Yasser Alemán-Gómez, Jonas Richiardi, Russell A. Poldrack, Patric Hagmann, Oscar Esteban
<p>A critical requirement before data-sharing of human neuroimaging is removing facial features to protect individuals’ privacy. However, this process not only redacts identifiable information about individuals but also removes non-identifiable information. This may introduce undesired variability into downstream analysis and interpretation. Here, we pre-register a study design to investigate the degree to which so-called “defacing” alters the quality assessment of T1-weighted images of the human brain from the openly available “IXI dataset”. The effect of defacing on manual quality assessment will be investigated on a single-site subset of the dataset (N=185). By means of repeated-measures analysis of variance (rm-ANOVA), or linear mixed-effects models if the data do not meet rm-ANOVA’s assumptions, we will determine whether four trained human raters’ perception of quality is significantly influenced by defacing, by comparing their ratings on the same set of images in two conditions: “non-defaced” (i.e., preserving facial features) and “defaced”. Relatedly, we will also verify whether defaced images are systematically assigned higher quality ratings. In addition, we will investigate these biases in automated quality assessments by applying multivariate rm-ANOVA (rm-MANOVA) to the image quality metrics extracted with MRIQC on the full IXI dataset (N=580; three acquisition sites). The analysis code, tested on simulated data, is made openly available with this pre-registration report. This study seeks evidence of the deleterious effects of defacing on data quality assessments by human and machine agents.</p>
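As a rough illustration of the pre-registered manual-rating analysis: with only two conditions (non-defaced vs. defaced), a repeated-measures ANOVA on per-image mean ratings is equivalent to a paired t-test (F = t²). The sketch below uses simulated numbers only (the subset size N=185 is from the abstract; the rating scale, effect size, and noise levels are illustrative assumptions, not study data).

```python
# Hedged sketch of the two-condition comparison described in the abstract.
# All rating values are simulated; only n_images = 185 comes from the text.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
n_images = 185  # single-site IXI subset size stated in the abstract

# Simulated mean rating per image (averaged across four raters) per condition.
# A small positive shift mimics the hypothesised bias towards defaced images.
nondefaced = rng.normal(2.5, 0.5, n_images)
defaced = nondefaced + 0.15 + rng.normal(0.0, 0.1, n_images)

# Paired comparison across the same images in both conditions.
t, p = ttest_rel(defaced, nondefaced)
print(f"paired t = {t:.2f}, p = {p:.2e}, equivalent rm-ANOVA F = {t**2:.2f}")
```

The registered analysis additionally checks rm-ANOVA assumptions (falling back to linear mixed-effects models if violated) and extends to rm-MANOVA over MRIQC's image quality metrics; this fragment only conveys the core within-image contrast.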
Keywords: biases, defacing, quality control, quality assessment, image quality metrics, IQMs, manual ratings, MRIQC, MRI, structural MRI
Specialised expertise required for review: None
Field: Medical Sciences
Submitted: 2022-11-28 10:59:32
Recommender: D. Samuel Schwarzkopf
Reviewers: Cassandra Gould van Praag, Catherine Morgan, Abiola Akinnubi