The estimation of domain scores through IRT methods on a mathematics placement test
Abstract
This study applied item response theory (IRT) estimation methods to a
mathematics placement test containing multiple-choice items. The issues examined included the following: 1) the selection of the best-fitting model for the data from among the three most widely used IRT models; 2) the estimation of ability and item parameters; 3) the effect of the number of items on domain score estimation; and 4) the comparison of IRT-estimated domain scores with classical test theory (CTT) domain scores. The two-parameter
logistic (2PL) and three-parameter logistic (3PL) models fit the data better than the Rasch model on the basis of the -2 log-likelihood values. Three alternative IRT ability estimation methods were considered. The ratios of the root mean squared errors were calculated for the classical test score and the IRT scale score. The results, displayed graphically, illustrate that CTT was more accurate than IRT in estimating an individual’s domain score.
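To make the RMSE-ratio comparison concrete, the following is a minimal simulation sketch in Python (NumPy). The 2PL item bank, the 30-item test form, and the grid-based maximum-likelihood ability estimate shown here are illustrative assumptions for the sketch, not the study's actual data or estimators.

import numpy as np

rng = np.random.default_rng(0)

def p_2pl(theta, a, b):
    # 2PL item response probability for each (person, item) pair.
    return 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b[None, :])))

# Hypothetical domain of 100 items and a 30-item test form drawn from it.
n_persons, n_domain, n_test = 1000, 100, 30
a = rng.lognormal(mean=0.0, sigma=0.3, size=n_domain)   # discriminations
b = rng.normal(loc=0.0, scale=1.0, size=n_domain)       # difficulties
theta = rng.normal(size=n_persons)                       # true abilities

# True domain score: expected proportion correct over the whole domain.
true_domain = p_2pl(theta, a, b).mean(axis=1)

# Simulate dichotomous responses on the 30-item test form.
test_idx = rng.choice(n_domain, size=n_test, replace=False)
responses = rng.binomial(1, p_2pl(theta, a[test_idx], b[test_idx]))

# CTT estimate: observed proportion correct on the test form.
ctt_est = responses.mean(axis=1)

# Simple IRT estimate: maximum-likelihood theta on a grid, then the
# model-implied expected domain score (a stand-in for the study's methods).
grid = np.linspace(-4, 4, 161)
grid_probs = p_2pl(grid, a[test_idx], b[test_idx])        # (grid points, items)
loglik = (responses[:, None, :] * np.log(grid_probs)
          + (1 - responses[:, None, :]) * np.log(1 - grid_probs)).sum(axis=2)
theta_hat = grid[loglik.argmax(axis=1)]
irt_est = p_2pl(theta_hat, a, b).mean(axis=1)

def rmse(est, true):
    return np.sqrt(np.mean((est - true) ** 2))

# A ratio above 1 indicates larger error for the CTT domain-score estimate.
print("RMSE ratio (CTT / IRT):", rmse(ctt_est, true_domain) / rmse(irt_est, true_domain))

The same ratio can be computed per test length or per score level, which is how graphical displays of relative accuracy such as those described above are typically built.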