MONTOYA Amanda's profile

MONTOYA Amanda

  • Psychology, UCLA, Los Angeles, United States of America
  • Life Sciences, Social sciences
  • recommender

Recommendation: 1

Reviews: 0

Website: akmontoya.com
Areas of expertise
Psychology; Methodology; Quantitative Psychology; Statistics; Quantitative Methods

Recommendation: 1

14 Feb 2024
STAGE 1

Detecting DIF in Forced-Choice Assessments: A Simulation Study Examining the Effect of Model Misspecification

Developing differential item functioning (DIF) testing methods for use in forced-choice assessments

Recommended by Amanda Montoya based on reviews by Timo Gnambs and 2 anonymous reviewers
Traditional Likert-type items are commonly used but can elicit response biases. An alternative format, the forced-choice question, requires respondents to rank-order all items within a block. Forced-choice questions offer some advantages but require advanced item response theory (IRT) analyses to generate scores that are comparable across individuals and to evaluate the properties of the resulting scales. However, there has been limited discussion of how to test for differential item functioning (DIF) in these scales. In a previous study, Lee et al. (2021) proposed a method for testing DIF.
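
For readers less familiar with the terminology, the sketch below illustrates uniform DIF in a deliberately simplified setting: a single 2PL-style item in Python, not the Thurstonian IRT model used for forced-choice data. Two groups share the same latent trait distribution, but the item's location parameter is shifted for the focal group, so the item behaves differently at every fixed trait level. All parameter values are illustrative.

import numpy as np

rng = np.random.default_rng(0)

def p_endorse(theta, a, b):
    # 2PL-style endorsement probability for one statement
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Both groups share the same latent trait distribution...
theta = rng.normal(size=10_000)

# ...but the focal group's item location is shifted by a constant.
# This is what "uniform DIF" means: a group difference in how the
# item behaves at every fixed level of the trait.
a, b_reference, dif_shift = 1.2, 0.0, 0.5
p_reference = p_endorse(theta, a, b_reference)
p_focal = p_endorse(theta, a, b_reference + dif_shift)

print(round(p_reference.mean(), 3), round(p_focal.mean(), 3))

A DIF test should flag this item, because the groups differ in endorsement probability even after conditioning on the trait.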
 
Here, Plantz et al. (2024) explore the implications of incorrectly specified anchors for DIF detection in forced-choice items. The study proposes a Monte Carlo simulation that manipulates sample size, equality of sample sizes across groups, effect size, percentage of differentially functioning items, analysis approach, anchor set size, and percentage of DIF blocks in the anchor set. The study aims to answer research questions about the Type I error and power of DIF detection strategies under a variety of circumstances, both evaluating whether the results of Lee et al. (2021) generalize to misspecified models and addressing new research questions. The results will provide practical guidance for DIF testing with forced-choice questions. An important limitation is that the study examines only uniform DIF, not non-uniform DIF. Additionally, as with all simulation studies, the results apply only to the conditions that are simulated, so their usefulness depends on a realistic selection of those conditions. The authors selected conditions to match real data where such data were available and relied on previous simulation studies where they were not.
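
To make the crossed design concrete, here is a minimal Python sketch of such a simulation factor grid and of the power and Type I error summaries a DIF simulation typically reports. The factor names follow the paragraph above, but every level shown is a hypothetical placeholder rather than the registered values; the protocol at https://osf.io/p8awx specifies the actual design.

import itertools

# Hypothetical factor levels for illustration only; the registered
# protocol (https://osf.io/p8awx) specifies the actual design.
factors = {
    "n_per_group":        [500, 1000],
    "equal_group_sizes":  [True, False],
    "dif_effect_size":    [0.25, 0.50],
    "pct_dif_items":      [0.10, 0.30],
    "analysis_approach":  ["approach_A", "approach_B"],
    "anchor_set_size":    [1, 4],
    "pct_dif_in_anchors": [0.00, 0.25, 0.50],  # 0.00 = correctly specified anchors
}

# Fully crossed design: every combination of factor levels is one condition.
conditions = [dict(zip(factors, combo))
              for combo in itertools.product(*factors.values())]
print(len(conditions), "simulation conditions")

def summarize(dif_item_flags, clean_item_flags):
    # Power: how often the test flags items simulated with DIF.
    # Type I error: how often it flags items simulated without DIF.
    power = sum(dif_item_flags) / len(dif_item_flags)
    type_i_error = sum(clean_item_flags) / len(clean_item_flags)
    return power, type_i_error

The anchor-misspecification factor (pct_dif_in_anchors above) is the study's focus: when anchor blocks themselves contain DIF, Type I error and power can degrade relative to a correctly specified anchor set.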
 
This Stage 1 manuscript was evaluated over two rounds of review by two reviewers with expertise in psychometrics. An additional round of review was completed by the recommender only. Based on the merits of the original submission and the responsiveness of the authors to requests from the reviewers, the recommender judged that the manuscript met the Stage 1 criteria and therefore awarded in-principle acceptance (IPA).
 
URL to the preregistered Stage 1 protocol: https://osf.io/p8awx
 
Level of bias control achieved: Level 6. No part of the data or evidence that will be used to answer the research question yet exists and no part will be generated until after IPA.
 
List of eligible PCI RR-friendly journals:
 

References
 
1. Lee, P., Joo, S.-H. & Stark, S. (2021). Detecting DIF in multidimensional forced choice measures using the Thurstonian Item Response Theory Model. Organizational Research Methods, 24, 739–771. https://doi.org/10.1177/1094428120959822
 
2. Plantz, J. W., Brown, A., Wright, K. & Flake, J. K. (2024). Detecting DIF in Forced-Choice Assessments: A Simulation Study Examining the Effect of Model Misspecification. In principle acceptance of Version 3 by Peer Community in Registered Reports. https://osf.io/p8awx