Impact of interrater reliability on the construct validity of assessment centers' post-exercise dimension ratings (PEDRs) using single versus multiple raters
The purpose of the present research was to re-examine the traditional and experimental rating methods used by Kolk, Born, and van der Flier (2002) for their impact on the construct validity of assessment centers (ACs). Data for this study were AC ratings for law enforcement officers. I calculated the reliability of the multiple raters for each dimension within an exercise, and then used these reliabilities to correct the correlations in the multitrait-multimethod (MTMM) matrix for attenuation due to unreliability in the single ratings for the different-dimension, same-exercise correlations. Results indicated no differences between the multiple-rater model and any of the single-rater models. Results are discussed in terms of the construct validity of ACs and future directions for investigating the construct validity problems of ACs.
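The correction for attenuation mentioned above is Spearman's classical disattenuation formula, which divides an observed correlation by the square root of the product of the two measures' reliabilities. A minimal sketch, using hypothetical reliability and correlation values (not figures from this study):

```python
import math

def correct_for_attenuation(r_observed, rel_x, rel_y):
    """Spearman's disattenuation: estimate the correlation between two
    measures if both were perfectly reliable.

    r_corrected = r_observed / sqrt(rel_x * rel_y)
    """
    return r_observed / math.sqrt(rel_x * rel_y)

# Hypothetical example: an observed same-exercise correlation of .40
# between two dimension ratings, with rater reliabilities of .70 and .80.
r_corrected = correct_for_attenuation(0.40, 0.70, 0.80)
print(round(r_corrected, 3))  # prints 0.535
```

With perfectly reliable measures (reliabilities of 1.0) the correction leaves the observed correlation unchanged; lower reliabilities inflate the corrected estimate, which is why same-exercise correlations in the MTMM matrix can look stronger after disattenuation.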