No probifactor model fit index bias, but a propensity toward selecting the best model.
Martina Bader, Morten Moshagen. Published in: Journal of Psychopathology and Clinical Science (2022)
Based on an extensive Monte Carlo simulation study, Greene et al. (2019) investigated the behavior of various measures of model fit for competing types of confirmatory factor analysis models of psychopathology, the correlated factors model and the bifactor model. Greene et al. (2019) found that fit indices mostly favored a bifactor model over a correlated factors model, which led them to conclude that there is a "probifactor fit index bias." Here we show that this conclusion is misleading for conditions without complexities in the data-generating model and in fact incorrect for conditions with complexities (cross-loadings or correlated residuals) in the data-generating model. Specifically, we demonstrate that the very same data Greene et al. (2019) generated from a correlated three-factor model can likewise be obtained from a higher-order or a bifactor model, so that there is no basis for maintaining that the "true" to-be-recovered model conformed to a correlated factors structure. Moreover, we show that a standard bifactor model was in fact more closely aligned with the data generated in conditions with added complexities. As such, fit indices necessarily and correctly favored the bifactor model in most conditions. We explain the observed behavior of several fit indices, thereby showing that the results were not characterized by bias, but were in line with the expected and desired behavior.
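To make the equivalence argument concrete, the following is a brief sketch in standard confirmatory factor analysis notation; the symbols are introduced here for illustration (they are not Greene et al.'s, 2019, notation), and standardized factors with positive factor correlations are assumed. A correlated factors model implies the covariance structure

$$\Sigma = \Lambda \Phi \Lambda^{\top} + \Theta,$$

with factor loadings $\Lambda$, factor correlation matrix $\Phi$, and diagonal residual matrix $\Theta$. A higher-order model decomposes the factor correlations as

$$\Phi = \gamma \gamma^{\top} + \Psi, \qquad \Psi = \operatorname{diag}\!\left(1 - \gamma_1^2,\, 1 - \gamma_2^2,\, 1 - \gamma_3^2\right),$$

so that $\phi_{ij} = \gamma_i \gamma_j$ for $i \neq j$. With exactly three first-order factors this system is just-identified (three correlations, three second-order loadings), for example $\gamma_1 = \sqrt{\phi_{12}\phi_{13}/\phi_{23}}$, so any admissible $\Phi$ is reproduced exactly and both models imply the identical $\Sigma$. Substituting the decomposition gives

$$\Sigma = (\Lambda\gamma)(\Lambda\gamma)^{\top} + \left(\Lambda\Psi^{1/2}\right)\left(\Lambda\Psi^{1/2}\right)^{\top} + \Theta,$$

which is precisely a bifactor structure with general-factor loadings $\Lambda\gamma$ and orthogonal group-factor loadings $\Lambda\Psi^{1/2}$ (a Schmid-Leiman-constrained bifactor model, which an unconstrained bifactor model nests). In the population, data generated from a correlated three-factor model therefore fit all three structures equally well, which is why none of them can be singled out as the "true" model to be recovered.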