Defacing biases in manual and automated quality assessments of structural MRI with MRIQC
Céline Provins, Yasser Alemán-Gómez, Jonas Richiardi, Russell A. Poldrack, Patric Hagmann, Oscar Esteban
2022
A critical requirement before data-sharing of human neuroimaging is removing facial features to protect individuals’ privacy. However, not only does this process redact identifiable information about individuals, but it also removes non-identifiable information. This may introduce undesired variability into downstream analysis and interpretation. Here, we pre-register a study design to investigate the degree to which this so-called “defacing” alters the quality assessment of T1-weighted images of the human brain from the openly available IXI dataset. The effect of defacing on manual quality assessment will be investigated on a single-site subset of the dataset (N=185). By means of repeated-measures analysis of variance (rm-ANOVA), or linear mixed-effects models if the data do not meet rm-ANOVA’s assumptions, we will determine whether four trained human raters’ perception of quality is significantly influenced by defacing, by comparing their ratings on the same set of images in two conditions: “non-defaced” (i.e., preserving facial features) and “defaced”. Relatedly, we will also test whether defaced images are systematically assigned higher quality ratings. In addition, we will investigate these biases in automated quality assessments by applying multivariate rm-ANOVA (rm-MANOVA) to the image quality metrics extracted with MRIQC on the full IXI dataset (N=580; three acquisition sites). The analysis code, tested on simulated data, is made openly available with this pre-registration report. This study seeks evidence of the deleterious effects of defacing on data quality assessments by human and machine agents.
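For illustration only, the sketch below shows how the manual-ratings contrast described above could be set up as a linear mixed-effects model in Python with statsmodels. This is not the authors’ registered analysis code (which is released with the report); the file name ratings_long.csv and its column names are hypothetical placeholders.

```python
# Minimal sketch of the manual-ratings comparison, assuming a hypothetical
# long-format table with one row per (image, rater, condition) rating.
import pandas as pd
import statsmodels.formula.api as smf

ratings = pd.read_csv("ratings_long.csv")  # hypothetical columns: image, rater, condition, rating

# Fixed effects for defacing condition and rater; random intercept per image,
# since the same images are rated repeatedly across raters and conditions.
model = smf.mixedlm("rating ~ condition + rater", data=ratings, groups="image")
result = model.fit()
print(result.summary())  # the coefficient for the defacing condition estimates the hypothesized bias
```

A mixed-effects model is sketched rather than rm-ANOVA because, per the registered design, it is the fallback analysis when the data violate rm-ANOVA’s assumptions.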
biases, defacing, quality control, quality assessment, image quality metrics, iqms, manual ratings, MRIQC, MRI, structural MRI
Methods requiring specialised expertise: None
Medical Sciences
2022-11-28 10:59:32
D. Samuel Schwarzkopf
Abiola Akinnubi, Cassandra Gould van Praag, Catherine Morgan