Dissertation/Thesis Abstract

Differential Item and Distractor Functioning Between Computer-Based and Paper-and-Pencil Testing Within Demographic Groups on a Statewide Mathematics Assessment
by Sandersfeld, Tyler Jay, Ph.D., The University of Iowa, 2020, 211 pages; Publication No. 28029142
Abstract (Summary)

Although many large-scale, high-stakes assessments are transitioning from traditional paper-and-pencil tests to computer-based tests, some of these assessments still offer both test delivery modes to school districts. Test developers must therefore maintain standards of fairness and comparability of scores between the two modes when both are used. Differential item functioning (DIF) analysis is a popular method for identifying items that appear to favor one group of examinees over another group of comparable ability, and it can be used to compare test delivery modes.
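As a concrete illustration, the Mantel-Haenszel procedure pools 2x2 tables (group by correct/incorrect) across total-score strata into a common odds ratio, then rescales it to the ETS delta metric. The Python sketch below is a minimal version with invented counts; the function name and the choice of computer-based examinees as the reference group are assumptions for illustration, not details taken from the dissertation.

```python
# Minimal sketch of the Mantel-Haenszel DIF statistic; toy data only.
import numpy as np

def mantel_haenszel_dif(ref_correct, ref_wrong, foc_correct, foc_wrong):
    """Common odds ratio and MH D-DIF pooled across score strata.

    Each argument is an array with one count per matching stratum
    (e.g., per total-score level): reference/focal examinees who
    answered the studied item correctly or incorrectly.
    """
    ref_correct = np.asarray(ref_correct, dtype=float)
    ref_wrong = np.asarray(ref_wrong, dtype=float)
    foc_correct = np.asarray(foc_correct, dtype=float)
    foc_wrong = np.asarray(foc_wrong, dtype=float)
    n = ref_correct + ref_wrong + foc_correct + foc_wrong  # stratum sizes
    alpha = (ref_correct * foc_wrong / n).sum() / (ref_wrong * foc_correct / n).sum()
    delta = -2.35 * np.log(alpha)  # ETS delta metric; positive favors the focal group
    return alpha, delta

# Toy example: three score strata, computer-based = reference, paper = focal.
alpha, delta = mantel_haenszel_dif(
    ref_correct=[40, 55, 70], ref_wrong=[60, 45, 30],
    foc_correct=[35, 50, 68], foc_wrong=[65, 50, 32],
)
print(f"alpha_MH = {alpha:.3f}, MH D-DIF = {delta:.3f}")
```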

However, most DIF analyses do not take heterogeneity within the reference and focal groups into account, nor do they investigate why DIF occurs. Differential distractor functioning (DDF) analysis can help identify distractors that attract examinees in one group more than score-matched examinees in the other group, potentially revealing why an item was flagged for DIF.

This dissertation examines items from Grades 3, 6, and 9 of the 2018-2019 administration of the Iowa Statewide Assessment of Student Progress (ISASP) mathematics test. Each item was analyzed for Mantel-Haenszel DIF between examinees taking the computer-based test and examinees taking the paper-and-pencil test, both within the total population of examinees and within a selection of demographic groups. Additionally, each item’s distractors were analyzed for DDF between examinees of the two test delivery modes, again within the total population and within the selected demographic subgroups. These DDF analyses used an odds ratio approach under the nominal response model, which operates analogously to the Mantel-Haenszel method for DIF.
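Continuing the sketch above, the same pooled odds ratio can be formed at the distractor level. One common formulation conditions on incorrect responses, comparing selection of the studied distractor against the remaining wrong options in each stratum; whether this matches the dissertation's exact specification under the nominal response model is an assumption, and all counts below are invented.

```python
# DDF via the same pooled odds ratio: among examinees answering
# incorrectly in each score stratum, tally who chose the studied
# distractor versus some other wrong option. Reuses mantel_haenszel_dif
# from the sketch above; all counts are invented for illustration.
alpha_ddf, delta_ddf = mantel_haenszel_dif(
    ref_correct=[12, 9, 5],    # computer-based: chose studied distractor
    ref_wrong=[48, 36, 25],    # computer-based: chose another wrong option
    foc_correct=[20, 16, 10],  # paper-and-pencil: chose studied distractor
    foc_wrong=[45, 34, 22],    # paper-and-pencil: chose another wrong option
)
# A positive MH D-DDF here would indicate the distractor attracts
# paper-and-pencil examinees more than score-matched computer-based examinees.
print(f"alpha_MH = {alpha_ddf:.3f}, MH D-DDF = {delta_ddf:.3f}")
```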

Only one item across all three grade levels showed evidence of DIF between test delivery modes within the total population of students taking this test. However, seven additional items across the three grade levels showed evidence of DIF between test delivery modes when the analyses were restricted to a single demographic group. Furthermore, a total of eleven items showed evidence of DDF between modes for at least one distractor within the total population, and a total of thirty-one items showed evidence of DDF between modes for at least one distractor within at least one demographic group.

The most important trend among the results is that the average Mantel-Haenszel delta-DDF (MH D-DDF) values for Option 4 at each grade level and for Option 5 in Grade 9 were significantly positive, suggesting that paper-and-pencil examinees were more likely than score-matched computer-based examinees to incorrectly select these options. Other trends, implications for test development and future research, and limitations of the dissertation are also discussed.
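For reference, the reported MH D-DDF values come from the standard ETS rescaling of the Mantel-Haenszel common odds ratio; the 2x2 layout below is generic, and treating paper-and-pencil examinees as the focal group is an assumption consistent with the sign interpretation above.

```latex
\mathrm{MH\;D\text{-}DDF} = -2.35\,\ln\hat{\alpha}_{\mathrm{MH}},
\qquad
\hat{\alpha}_{\mathrm{MH}} = \frac{\sum_{k} A_k D_k / N_k}{\sum_{k} B_k C_k / N_k}
```

where, within score stratum k, A_k and B_k count reference-group examinees who do and do not select the studied distractor, C_k and D_k are the focal-group counterparts, and N_k is the stratum total; positive values indicate the distractor attracts focal-group examinees more strongly.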

Indexing (document details)
Advisor: Welch, Catherine J.
Committee: Ankenmann, Robert D., Harris, Deborah J., Hong, Dae S., Templin, Jonathan
School: The University of Iowa
Department: Psychological & Quantitative Foundations
School Location: United States -- Iowa
Source: DAI-A 82/3(E), Dissertation Abstracts International
Source Type: DISSERTATION
Subjects: Educational tests & measurements, Educational evaluation, Mathematics education
Keywords: Computer-based testing, Differential distractor functioning, Differential item functioning, Mathematics assessment, Paper-and-pencil testing
Publication Number: 28029142
ISBN: 9798672127736