Submit a report







Test-Retest Reliability in Functional Magnetic Resonance Imaging: Impact of Analytical Decisions on Individual and Group Estimates in the Monetary Incentive Delay Task
Michael I. Demidenko, Jeanette A. Mumford, Russell A. Poldrack
Empirical studies reporting low test-retest reliability of individual neural estimates in functional magnetic resonance imaging (fMRI) data have resurrected interest among cognitive neuroscientists in methods that may improve reliability in fMRI. Over the last decade, several individual studies have reported that modeling decisions, such as smoothing, motion correction and contrast selection, may improve estimates of test-retest reliability of neural estimates. However, it remains an empirical question whether certain analytic decisions consistently improve individual and group level reliability estimates in an fMRI task across multiple large, independent samples. This study uses three independent samples (approximate Ns: 65, 150 & 2,000) in which the same task (Monetary Incentive Delay task) was collected across two runs and two sessions to evaluate the effects of analytic decisions on the individual (continuous) and group (binary/continuous) reliability estimates of neural activity in task fMRI. The analytic decisions in this study vary across four categories: smoothing kernel (five options), motion correction (six options), task parameterization (three options) and task contrasts (four options), totaling 360 different modeling permutations. Continuous and binary reliability estimates of neural activity are calculated within and between sessions, and associations between modeling decisions and reliability estimates (e.g., intraclass correlation (ICC), Jaccard similarity) are reported using specification curve analyses and hierarchical linear modeling. In addition to examining whether specific modeling decisions result in higher reliability, this study also evaluates an underexplored issue: how modeling decisions impact within- and between-subject variance, and at which sample size the ICC stabilizes in fMRI data.
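The abstract names two reliability metrics, intraclass correlation and Jaccard similarity, without specifying the exact estimators used. As a rough illustration only (the study may use a different ICC variant, such as ICC(2,1), and applies these metrics voxel- or region-wise to fMRI estimates), a consistency-type ICC(3,1) and a Jaccard overlap for thresholded binary maps can be sketched as:

```python
import numpy as np

def icc_3_1(Y):
    """ICC(3,1): two-way mixed, single measure, consistency.
    Y is an n_subjects x k_sessions array of one estimate per subject/session."""
    n, k = Y.shape
    grand = Y.mean()
    ss_subj = k * ((Y.mean(axis=1) - grand) ** 2).sum()   # between-subject SS
    ss_sess = n * ((Y.mean(axis=0) - grand) ** 2).sum()   # between-session SS
    ss_err = ((Y - grand) ** 2).sum() - ss_subj - ss_sess # residual SS
    bms = ss_subj / (n - 1)                 # between-subject mean square
    ems = ss_err / ((n - 1) * (k - 1))      # error mean square
    return (bms - ems) / (bms + (k - 1) * ems)

def jaccard(a, b):
    """Jaccard similarity of two binary (e.g., thresholded activation) maps."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else np.nan

# Perfectly consistent sessions yield an ICC of 1.
Y = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
print(icc_3_1(Y))                            # 1.0
print(jaccard([1, 1, 0, 0], [1, 0, 1, 0]))   # 1/3
```

The ICC here contrasts between-subject variance with error variance, which is why the abstract's question about how modeling decisions shift within- and between-subject variance bears directly on the reliability estimates.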
Test-retest Reliability, Intraclass Correlation, Jaccard Similarity, Functional Magnetic Resonance Imaging, Monetary Incentive Delay task, Individual Differences
None
Life Sciences, Social sciences
2023-04-17 22:27:54
Dorothy Bishop