Bi-cross-validation for factor analysis

11 Mar 2015  ·  A. B. Owen, J. Wang

Factor analysis is over a century old, but it is still problematic to choose the number of factors for a given data set. The scree test is popular but subjective. The best performing objective methods are recommended on the basis of simulations. We introduce a method based on bi-cross-validation, using randomly held-out submatrices of the data to choose the number of factors. We find it performs better than the leading methods of parallel analysis (PA) and Kaiser's rule. Our performance criterion is based on recovery of the underlying factor-loading (signal) matrix rather than identifying the true number of factors. Like previous comparisons, our work is simulation based. Recent advances in random matrix theory provide principled choices for the number of factors when the noise is homoscedastic, but not for the heteroscedastic case. The simulations we choose are designed using guidance from random matrix theory. In particular, we include factors too small to detect, factors large enough to detect but not large enough to improve the estimate, and two classes of factors large enough to be useful. Much of the advantage of bi-cross-validation comes from cases with factors large enough to detect but too small to be well estimated. We also find that a form of early stopping regularization improves the recovery of the signal matrix.

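To make the idea concrete, below is a minimal sketch of generic bi-cross-validation for rank selection, not the authors' exact procedure: a randomly held-out block of rows and columns is predicted from the held-in block via a rank-k truncated SVD, and the k with the smallest average held-out error is chosen. Function names, the hold-out fraction, and the number of folds are illustrative assumptions.

```python
# Hedged sketch of bi-cross-validation for choosing the number of factors k.
# Not the paper's exact algorithm; hold-out scheme and defaults are assumptions.
import numpy as np

def bcv_error(Y, k, heldout_rows, heldout_cols):
    """Held-out squared error for one (row, column) fold at rank k."""
    I = np.zeros(Y.shape[0], dtype=bool); I[heldout_rows] = True
    J = np.zeros(Y.shape[1], dtype=bool); J[heldout_cols] = True
    A = Y[np.ix_(I, J)]      # held-out block to predict
    B = Y[np.ix_(I, ~J)]     # held-out rows, held-in columns
    C = Y[np.ix_(~I, J)]     # held-in rows, held-out columns
    D = Y[np.ix_(~I, ~J)]    # held-in block, used to fit the rank-k model
    if k == 0:
        return np.sum(A ** 2)
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    # Pseudo-inverse of the rank-k truncation of D
    Dk_pinv = Vt[:k].T @ np.diag(1.0 / s[:k]) @ U[:, :k].T
    A_hat = B @ Dk_pinv @ C
    return np.sum((A - A_hat) ** 2)

def choose_k_bcv(Y, k_max, n_folds=20, holdout_frac=0.5, seed=None):
    """Pick the rank k minimizing average bi-cross-validated error."""
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    errs = np.zeros(k_max + 1)
    for _ in range(n_folds):
        rows = rng.choice(m, size=int(holdout_frac * m), replace=False)
        cols = rng.choice(n, size=int(holdout_frac * n), replace=False)
        for k in range(k_max + 1):
            errs[k] += bcv_error(Y, k, rows, cols)
    return int(np.argmin(errs))
```

Because the criterion is prediction of an entirely held-out submatrix, k = 0 (no factors) is a valid candidate, which is what lets the procedure ignore factors too small to be useful.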