Assessment center construct validity
Siminovsky, Allison Bari
Assessment centers (ACs) have remained a popular selection and development tool for years. Although ACs often demonstrate criterion-related validity, researchers have struggled to establish construct validity for AC dimensions. This problem has two sources: nonconvergence and inadmissibility issues when using the multitrait-multimethod (MTMM) framework, and the dominance of dimension variance by exercise variance. The current study reexamined previously reported AC MTMM matrices, reclassifying dimensions under three different schemes of broad dimension factors in order to increase each model's indicator-to-factor ratio and thereby promote convergence and admissibility. Additionally, several design modifications were examined for their influence on dimension variance. Results show a significant increase in convergence and admissibility rates as the indicator-to-factor ratio increases. The remaining analyses, however, yielded inconclusive, nonsignificant results. Possible explanations for these results are discussed, along with the ramifications of the findings.