"Terminating the test after answering three consecutive items incorrectly is a common practice with individually-administered intelligence tests. Obviously, treating the not reached items as incorrect (which is the typical interpretation - that the examinee would not have answered any of the remaining items correctly) is not a good idea. This practice results in Rasch ability estimates that are biased. On the other hand, if you treat the data as missing, the bias is not as extreme, but it is still there."
Edward W. Wolfe, Michigan State University
Is this bias a serious problem? Ed Wolfe further mentions that he is conducting authoritative simulation studies based on real data. But let's imagine a simple scenario. We have a uniform item bank of Rasch-behaving items. Start by administering an item three logits below the examinee. Then advance up the item bank, administering items equally spaced in difficulty. Stop after three consecutive failures. What examinee measures would we expect to obtain for different item spacings?
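For concreteness, here is a minimal sketch of such a simulation in Python. The three-logit starting offset, the equal spacing, and the three-consecutive-failure stopping rule follow the scenario above; the maximum-likelihood estimation treating not-reached items as missing, the extreme-score adjustment, and all function names are assumptions of this sketch, not necessarily the procedure behind the Figure.

```python
import math
import random

def rasch_prob(theta, b):
    """Probability of a correct response under the dichotomous Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def administer(theta, spacing, rng, start_offset=-3.0, stop_after=3, max_items=200):
    """Administer equally spaced items, stopping after `stop_after` consecutive failures."""
    difficulties, responses = [], []
    b = theta + start_offset          # first item is three logits below the examinee
    consecutive_wrong = 0
    while consecutive_wrong < stop_after and len(responses) < max_items:
        x = 1 if rng.random() < rasch_prob(theta, b) else 0
        difficulties.append(b)
        responses.append(x)
        consecutive_wrong = 0 if x else consecutive_wrong + 1
        b += spacing                  # advance up the item bank
    return difficulties, responses

def mle_ability(difficulties, responses, tol=1e-6, max_iter=100):
    """Newton-Raphson ML ability estimate; not-reached items are simply absent (missing)."""
    score = sum(responses)
    n = len(responses)
    # Crude adjustment so extreme scores (0 or perfect) give a finite estimate --
    # an assumption of this sketch, not necessarily the convention behind the Figure.
    adj_score = min(max(score, 0.5), n - 0.5)
    theta = 0.0
    for _ in range(max_iter):
        p = [rasch_prob(theta, b) for b in difficulties]
        expected = sum(p)                          # expected raw score
        info = sum(pi * (1.0 - pi) for pi in p)    # test information
        step = (adj_score - expected) / info
        step = max(-1.0, min(1.0, step))           # damp large Newton steps
        theta += step
        if abs(step) < tol:
            break
    return theta

def simulate(spacing, true_theta=0.0, replications=2000, seed=1):
    """Replicate the stopping-rule test and summarize the ability estimates."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(replications):
        difficulties, responses = administer(true_theta, spacing, rng)
        estimates.append(mle_ability(difficulties, responses))
    bias = sum(estimates) / len(estimates) - true_theta
    return bias, min(estimates), max(estimates)

for spacing in (0.25, 0.5, 1.0, 2.0):
    bias, lo, hi = simulate(spacing)
    print(f"spacing {spacing:4.2f} logits: mean bias {bias:+.2f}, range [{lo:+.2f}, {hi:+.2f}]")
```

Treating the not-reached items as missing here corresponds to the second treatment Wolfe describes; scoring them as incorrect would simply append zeros to the response string for the rest of the bank.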
The Figure shows the results of a simple simulation study conducted according to these criteria. Values close to -4 logits correspond to no correct answers at all. Although the overall bias is negligible, the impact on individual examinees can be huge! Definitely do not use the "3-error" stopping rule with a test containing closely spaced items!
How Much Bias? Wolfe, E.W., Linacre, J.M. Rasch Measurement Transactions, 2000, 14:2 p.750