
551

Convenience Samples and Measurement Equivalence in Replication Research
Lindsay J. Alley, Jordan Axt, Jessica Kay Flake
2023
A great deal of research in psychology employs either university student or online crowdsourced convenience samples (Chandler & Shapiro, 2016; Strickland & Stoops, 2019), and there is evidence that these groups differ in meaningful ways (Behrend et al., 2011). This practice could result in the presence of unaccounted-for measurement differences across convenience sample sources, which may bias results when these groups are compared or the resulting data are pooled. In this registered report, we used the openly available data from the Many Labs replication projects to test for measurement equivalence across different convenience sample sources. We examined 89 measures that showed acceptable baseline model fit and tested them for non-equivalence across convenience samples from different sources, including university participant pools, MTurk, and Project Implicit. We then examined whether replication results are robust to non-equivalence by fitting partial invariance models and conducting sensitivity analyses of replication results. Many of the measures examined were not equivalent across student and crowdsourced convenience samples, or across different types of convenience samples. Only two tests, comparing lab and online student samples, retained strict equivalence, while 14 of 30 tests rejected configural equivalence. However, correcting for non-equivalence changed the estimated effect sizes of the replication effects very little. Based on these results, we advise researchers to test for measurement equivalence when combining or comparing data from different convenience samples. At the same time, due to a lack of validity evidence for many of the measures and variable power of our tests, we interpret results with caution.
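To illustrate the core concern in the abstract, the following minimal Python sketch (not the authors' analysis code; all loadings, intercepts, sample labels, and effect sizes are illustrative assumptions) simulates two convenience samples with identical latent trait distributions but one non-equivalent item intercept. Comparing raw sum scores then produces a spurious group difference, which disappears under a partial-invariance-style correction (here, simply dropping the non-equivalent item before comparing).

import numpy as np

rng = np.random.default_rng(seed=1)

n_per_group = 5000
loadings = np.array([0.8, 0.7, 0.6, 0.5])           # common factor loadings (both groups)
intercepts_students = np.array([3.0, 3.0, 3.0, 3.0])
intercepts_mturk = np.array([3.0, 3.0, 3.0, 3.6])    # item 4 intercept shifted: non-equivalence

def simulate(intercepts, n):
    """Generate item responses from a one-factor model with a unit-variance latent trait."""
    theta = rng.normal(0.0, 1.0, size=n)                      # same latent distribution in both groups
    noise = rng.normal(0.0, 0.5, size=(n, len(intercepts)))
    return intercepts + np.outer(theta, loadings) + noise

students = simulate(intercepts_students, n_per_group)
mturk = simulate(intercepts_mturk, n_per_group)

def cohens_d(a, b):
    """Standardized mean difference between two vectors of scores."""
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (a.mean() - b.mean()) / pooled_sd

# Full sum score: the intercept shift in item 4 masquerades as a latent group difference.
print("d, all items:     ", round(cohens_d(mturk.sum(axis=1), students.sum(axis=1)), 3))

# Partial-invariance-style correction: exclude the non-equivalent item before comparing.
print("d, item 4 dropped:", round(cohens_d(mturk[:, :3].sum(axis=1), students[:, :3].sum(axis=1)), 3))

Running this prints a nontrivial standardized difference for the full sum score and a near-zero difference once the non-equivalent item is excluded, mirroring the logic of testing for equivalence before pooling or comparing convenience samples.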
Keywords: measurement, psychometrics, equivalence, invariance, metascience, replication
Methods requiring specialised expertise during peer review: None
Social sciences
Submitted: 2023-08-31 20:26:43
Corina Logan
Alison Young Reusser