A Reevaluation of Assessment Center Construct-Related Validity

  •  Milton Cahoon    
  •  Mark Bowler    
  •  Jennifer Bowler    


Recent Monte Carlo research (Lance, Woehr, & Meade, 2007) has questioned the primary analytical tool used to
assess the construct-related validity of assessment center post-exercise dimension ratings (PEDRs) – a
confirmatory factor analysis of a multitrait-multimethod (MTMM) matrix. Using a hybrid of Monte Carlo
data generation and univariate generalizability theory, we examined three primary sources of variance (i.e.,
persons, dimensions, and exercises) and their interactions in 23 previously published assessment center MTMM
matrices. Overall, the person, dimension, and person by dimension effects accounted for a combined 34.06% of
variance in assessment center PEDRs (16.83%, 4.02%, and 13.21%, respectively). However, the largest single
effect came from the person by exercise interaction (21.83%). Implications and suggestions for future
assessment center research and design are discussed.
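To make the variance-partitioning logic concrete, the following is a minimal sketch (not the authors' code) of a univariate generalizability analysis for a fully crossed persons x dimensions x exercises design with one rating per cell, as in a single-rater PEDR matrix. The design sizes and true variance components below are illustrative assumptions, not values from the study; the estimation uses the standard random-model expected-mean-squares equations, in which the residual is confounded with the three-way interaction.

```python
# Sketch: simulate PEDR-like ratings and recover variance components via
# univariate generalizability theory (fully crossed p x d x e, n = 1 per cell).
# Sizes and true components are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
P, D, E = 1000, 5, 4  # persons, dimensions, exercises (assumed)

# True variance components used to generate the data (arbitrary choices).
true = {"p": 1.0, "d": 0.25, "e": 0.25, "pd": 0.8, "pe": 1.3, "de": 0.2, "res": 1.0}

y = (rng.normal(0, np.sqrt(true["p"]), (P, 1, 1))
     + rng.normal(0, np.sqrt(true["d"]), (1, D, 1))
     + rng.normal(0, np.sqrt(true["e"]), (1, 1, E))
     + rng.normal(0, np.sqrt(true["pd"]), (P, D, 1))
     + rng.normal(0, np.sqrt(true["pe"]), (P, 1, E))
     + rng.normal(0, np.sqrt(true["de"]), (1, D, E))
     + rng.normal(0, np.sqrt(true["res"]), (P, D, E)))

g = y.mean()
mp = y.mean((1, 2), keepdims=True)   # person means
md = y.mean((0, 2), keepdims=True)   # dimension means
me = y.mean((0, 1), keepdims=True)   # exercise means
mpd = y.mean(2, keepdims=True)       # person x dimension cell means
mpe = y.mean(1, keepdims=True)       # person x exercise cell means
mde = y.mean(0, keepdims=True)       # dimension x exercise cell means

def ss(dev):
    # Broadcasting the deviation to the full array supplies the
    # level-count multipliers in each sum of squares.
    return float((np.broadcast_to(dev, y.shape) ** 2).sum())

SS = {
    "p":   ss(mp - g),
    "d":   ss(md - g),
    "e":   ss(me - g),
    "pd":  ss(mpd - mp - md + g),
    "pe":  ss(mpe - mp - me + g),
    "de":  ss(mde - md - me + g),
    "res": ss(y - mpd - mpe - mde + mp + md + me - g),
}
df = {"p": P - 1, "d": D - 1, "e": E - 1,
      "pd": (P - 1) * (D - 1), "pe": (P - 1) * (E - 1), "de": (D - 1) * (E - 1),
      "res": (P - 1) * (D - 1) * (E - 1)}
MS = {k: SS[k] / df[k] for k in SS}

# Variance-component estimates from the expected mean squares
# (one observation per cell, so p x d x e merges into the residual).
var = {
    "res": MS["res"],
    "pd": (MS["pd"] - MS["res"]) / E,
    "pe": (MS["pe"] - MS["res"]) / D,
    "de": (MS["de"] - MS["res"]) / P,
    "p":  (MS["p"] - MS["pd"] - MS["pe"] + MS["res"]) / (D * E),
    "d":  (MS["d"] - MS["pd"] - MS["de"] + MS["res"]) / (P * E),
    "e":  (MS["e"] - MS["pe"] - MS["de"] + MS["res"]) / (P * D),
}
total = sum(var.values())
pct = {k: 100 * v / total for k, v in var.items()}
for k in ("p", "d", "e", "pd", "pe", "de", "res"):
    print(f"{k:>3}: est = {var[k]:6.3f}  ({pct[k]:5.1f}% of variance)")
```

With effect sizes chosen this way, the person by exercise component dominates the decomposition, mirroring the qualitative pattern the abstract reports; the percentages themselves depend entirely on the assumed generating values.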

This work is licensed under a Creative Commons Attribution 4.0 License.