"I have a instrument of 13 items. I used the Partial Credit model. There are 3 items whose INFIT "t" statistic (ZSTD) is outside -9.9, and so significantly misfitting my Rasch model. But these items' score correlations are greater than 0.83. The discrimination of these three items' is higher than the other 10 items, so in classical test theory (CTT) and much of IRT, these are my best three items. Should I say that the Rasch model is not suitable for this instrument, and maybe a Generalized (2-parameter) Partial Credit model analysis is indicated?
Desperate Test Constructor
t-statistics (ZSTD) are tests of the hypothesis "these data fit the model perfectly." In statistics this is called a "false null hypothesis," because it can never be true: no empirical data fit the Rasch model perfectly. So the more crucial questions are "Do the data fit the model usefully?" and "Do they distort the measures more than they contribute to measurement accuracy and precision?"
Here are some steps to take in your investigation:
1. Those three items are over-discriminating from a Rasch perspective. Are these items really good items or are they substantively flawed? Look at the content of the items and refer to Geoff Masters (1988) "Item discrimination: when more is worse", Journal of Educational Measurement, 25:1, 15-29, and www.rasch.org/rmt/rmt72f.htm - RMT 7:2, 289.
2. Don't be intimidated by the statistics. What is your sample size? Is it making the hypothesis test too sensitive? See www.rasch.org/rmt/rmt171n.htm - RMT 17:1, p. 918. Are the mean-squares (chi-squares divided by their degrees of freedom) so close to their expectations that the differences have no substantive implications, despite being significantly unexpected? (A numerical sketch of this follows the list.)
3. Are the three items contributing to accurate measurement, or are they distorting measurement? The usual way to check is to measure the persons with and without these 3 suspect items and cross-plot the two sets of person measures (see the cross-plot sketch after this list). Who is off the diagonal? Which set of measures better represents the abilities of your sample?
4. If these 3 items really are substantively "bad", changing the analytical model will not make them "good". A different model will merely hide the symptoms. So omitting the items is preferable to changing the analytical model.
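To put some numbers on the sample-size question in step 2, here is a minimal Python sketch. It assumes the infit mean-square behaves like a chi-square divided by its degrees of freedom, takes the degrees of freedom (purely for illustration) to be the number of persons responding to the item, and uses the Wilson-Hilferty cube-root approximation to turn the mean-square into an approximate ZSTD. This is a textbook approximation, not any particular program's exact formula.

```python
# Sketch: how the same mean-square turns into ever-larger ZSTD values
# as the sample grows. Assumption: infit mean-square ~ chi-square / df,
# with df approximated by the number of persons (illustrative only).
import math

def mean_square_to_zstd(mean_square: float, df: int) -> float:
    """Wilson-Hilferty cube-root normal approximation for chi-square / df."""
    variance = 2.0 / (9.0 * df)          # variance of (chi2/df)**(1/3)
    expectation = 1.0 - variance         # expectation of (chi2/df)**(1/3)
    return (mean_square ** (1.0 / 3.0) - expectation) / math.sqrt(variance)

if __name__ == "__main__":
    for n_persons in (30, 100, 500, 2000):
        z = mean_square_to_zstd(0.80, n_persons)   # an over-discriminating item
        print(f"mean-square 0.80, {n_persons:>4} persons -> ZSTD about {z:+.1f}")
```

Under these assumptions, a mean-square of 0.80 gives a ZSTD of roughly -0.7 with 30 persons but roughly -7 with 2000 persons: the same amount of over-discrimination, very different "significance."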
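For the cross-plot in step 3, here is a minimal sketch, assuming you have already run two analyses (all 13 items, and the 10 retained items) and exported the person measures; the measure values and the 0.5-logit flagging threshold are placeholder assumptions, not recommendations.

```python
# Sketch: cross-plot person measures from the 13-item analysis against
# measures from the 10 well-fitting items, and flag persons who move.
import numpy as np
import matplotlib.pyplot as plt

# Placeholder person measures in logits -- replace with your own output.
with_13_items = np.array([-1.2, -0.4, 0.1, 0.6, 1.3, 2.0])
with_10_items = np.array([-1.0, -0.5, 0.3, 0.4, 1.5, 1.8])

shift = with_13_items - with_10_items
flagged = np.abs(shift - shift.mean()) > 0.5      # "off the diagonal" (illustrative cut-off)

plt.scatter(with_10_items, with_13_items)
lims = [min(with_10_items.min(), with_13_items.min()),
        max(with_10_items.max(), with_13_items.max())]
plt.plot(lims, lims, linestyle="--")              # identity line
plt.xlabel("Person measure without the 3 suspect items (logits)")
plt.ylabel("Person measure with all 13 items (logits)")
plt.show()

print("Persons off the diagonal:", np.nonzero(flagged)[0])
```

Persons far from the identity line are the ones whose measures depend most on the three suspect items; ask which of their two measures better matches what you know about those persons.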
My best items don't fit! Rasch Measurement Transactions, 2004, 18:3, p. 992