
Citation:

Josh Cutler and Jacob M. Montgomery. 2013. “Computerized Adaptive Testing for Public Opinion Surveys.” Political Analysis 21 (2): 141-171.

Survey researchers avoid using large multi-item scales to measure latent traits due to both the financial costs and the risk of driving up nonresponse rates. Typically, investigators select a subset of available scale items rather than asking the full battery. Reduced batteries, however, can sharply reduce measurement precision and introduce bias. In this article, we present computerized adaptive testing (CAT) as a method for minimizing the number of questions each respondent must answer while preserving measurement accuracy and precision. CAT algorithms respond to individuals’ previous answers to select subsequent questions that most efficiently reveal respondents’ positions on a latent dimension. We introduce the basic stages of a CAT algorithm and present the details for one approach to item selection appropriate for public opinion research. We then demonstrate the advantages of CAT via simulation and an empirical comparison of dynamic and static measures of political knowledge.
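To illustrate the general idea of adaptively selecting items based on previous answers, here is a minimal Python sketch of a CAT loop. It assumes a two-parameter logistic (2PL) IRT model, maximum Fisher-information item selection, and an expected a posteriori (EAP) trait estimate; these modeling choices, the simulated item bank, and all function names are illustrative assumptions, not the item-selection approach developed in the article.

```python
# Minimal CAT sketch: 2PL IRT model, Fisher-information item selection,
# EAP trait estimation on a grid. Illustrative only.
import numpy as np

def p_correct(theta, a, b):
    """2PL probability of a positive response given latent trait theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def fisher_information(theta, a, b):
    """Item information at theta under the 2PL model: a^2 * P * (1 - P)."""
    p = p_correct(theta, a, b)
    return a**2 * p * (1.0 - p)

def eap_estimate(responses, grid=np.linspace(-4, 4, 161)):
    """Expected a posteriori estimate of theta with a standard normal prior."""
    prior = np.exp(-0.5 * grid**2)
    like = np.ones_like(grid)
    for (a, b), y in responses:
        p = p_correct(grid, a, b)
        like *= p**y * (1.0 - p) ** (1 - y)
    post = prior * like
    post /= post.sum()
    return float(np.sum(grid * post))

def run_cat(item_bank, answer_fn, n_items=5):
    """Repeatedly ask the most informative remaining item at the current estimate."""
    responses, remaining = [], list(item_bank)
    theta_hat = 0.0  # start at the prior mean
    for _ in range(n_items):
        item = max(remaining, key=lambda ab: fisher_information(theta_hat, *ab))
        remaining.remove(item)
        y = answer_fn(*item)          # 0/1 response from the respondent
        responses.append((item, y))
        theta_hat = eap_estimate(responses)
    return theta_hat

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    bank = [(rng.uniform(0.8, 2.0), rng.uniform(-2, 2)) for _ in range(30)]
    true_theta = 1.2                  # simulated respondent
    simulate = lambda a, b: int(rng.random() < p_correct(true_theta, a, b))
    print("estimated theta:", run_cat(bank, simulate, n_items=8))
```

In this toy setup, eight adaptively chosen items typically recover the simulated respondent's position far more precisely than eight items chosen at random, which is the efficiency gain the abstract describes.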


Written by Rex Deng
