We have a problem ordering students who have the same raw score, which we must do to meet selection requirements. Here are 5 persons who obtained the same raw score of 26 on the same items. Some students answered more difficult questions correctly, while others answered only the easier questions correctly. How can we order these students?
Kyung-Jin Cho
5 Persons with Raw Score 26, Measure .73 logits, Model S.E. .35

| Outfit Mean-square | Infit Mean-square | Point-biserial | Student |
|---|---|---|---|
| .77 | .86 | .48 | A |
| .85 | .94 | .39 | B |
| .94 | 1.01 | .30 | C |
| 1.09 | 1.03 | .28 | D |
| 1.58 | 1.08 | .18 | E |
The scientific principle on which measurement is based makes a point of specifying that for a fixed set of items all occurrences of the same raw scores must imply the same measures ("Sufficiency of raw scores on the same set of items"). So a scientist would not differentiate among the same raw scores on the same tests. However, a scientist would have already eliminated whatever responses were found to be irrelevant or contradictory to constructing the best measures.
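As a brief sketch of why sufficiency forces this (standard dichotomous Rasch notation, not part of the original exchange): for a fixed set of L items with difficulties δ<sub>i</sub>, the maximum-likelihood estimate of a person's measure θ solves

```latex
% The raw score r is a sufficient statistic for the person measure \theta:
% with the item difficulties \delta_i fixed, the estimating equation below
% involves the responses only through r, so every response string with the
% same raw score yields the same estimate \hat{\theta}.
r = \sum_{i=1}^{L} \frac{\exp(\theta - \delta_i)}{1 + \exp(\theta - \delta_i)}
```

The right-hand side increases strictly with θ, so each raw score from 1 to L-1 corresponds to exactly one finite measure.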
Some psychometricians feel that a correct answer to a hard item merits a higher measure than a correct answer to an easy item. But then, does an incorrect answer to an easier item merit a lower measure? Remember, all these students achieved the same raw score, so success on an unusually hard item implies failure on at least one easier item. If you really think success on a hard item merits a higher measure, then you must disregard the concomitant failure on an easier item. If so, you could treat the response to that disregarded easier item as missing. Now that student has taken a shorter, more difficult test, but obtained the same raw score - the reported measure will be higher, resolving the ordering problem. Such students will have high outfit statistics in the original analysis, so student E is the likely winner! Student A is the loser.
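To see the effect numerically, here is a minimal sketch (not from the original note) using hypothetical item difficulties and a made-up 10-item test: it estimates the measure by Newton-Raphson and shows that treating a failed easy item as missing leaves the raw score unchanged but raises the reported measure.

```python
# A minimal sketch, not from the article: maximum-likelihood person-measure
# estimation for the dichotomous Rasch model with hypothetical item
# difficulties. It illustrates Wright's point that treating a failed easy
# item as missing keeps the raw score the same but raises the measure.
import math

def rasch_p(theta, delta):
    """Probability of success on an item of difficulty delta."""
    return 1.0 / (1.0 + math.exp(-(theta - delta)))

def mle_measure(raw_score, difficulties, tol=1e-6):
    """Solve raw_score = sum of expected item scores by Newton-Raphson."""
    theta = 0.0
    for _ in range(100):
        p = [rasch_p(theta, d) for d in difficulties]
        info = sum(pi * (1.0 - pi) for pi in p)   # test information at theta
        step = (raw_score - sum(p)) / info
        theta += step
        if abs(step) < tol:
            break
    return theta

# Hypothetical 10-item test (difficulties in logits).
difficulties = [-2.0, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0, 2.5]

# Any student scoring 6 on all 10 items gets this measure,
# whichever items were answered correctly (sufficiency).
print(round(mle_measure(6, difficulties), 2))

# Disregard a failed easy item (the -1.0 logit item): 9 items remain,
# the raw score is still 6, and the reported measure is higher.
shorter = [d for d in difficulties if d != -1.0]
print(round(mle_measure(6, shorter), 2))
```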
In general, however, better fitting response strings are more reproducible and also easier to understand. We know what these students can do and what they cannot do. Consequently their performance on the next item is more predictable. Who is better, an erratic genius or a consistently high performer? The most consistent student, A, has the smallest mean-square fit statistics. This makes student A the winner and student E the loser!
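For readers who want to see how the mean-squares separate a consistent string from an erratic one, here is a hedged sketch (hypothetical difficulties and an illustrative measure, not the article's data): outfit is the plain average of squared standardized residuals, and infit is the information-weighted average, so it is less dominated by surprises on far-off-target items.

```python
# A hedged sketch, not the article's data: outfit and infit mean-squares for
# one response string, using hypothetical item difficulties and an
# illustrative person measure.
import math

def rasch_p(theta, delta):
    return 1.0 / (1.0 + math.exp(-(theta - delta)))

def fit_mean_squares(responses, theta, difficulties):
    p = [rasch_p(theta, d) for d in difficulties]
    var = [pi * (1.0 - pi) for pi in p]                  # model variances
    z2 = [(x - pi) ** 2 / v for x, pi, v in zip(responses, p, var)]
    outfit = sum(z2) / len(z2)                           # unweighted mean-square
    infit = sum(z * v for z, v in zip(z2, var)) / sum(var)  # weighted mean-square
    return round(outfit, 2), round(infit, 2)

difficulties = [-2.0, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0, 2.5]
theta = 0.55   # illustrative measure for a raw score of 6 on these items

# "Student A"-like string: the six easiest items answered correctly.
consistent = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
# "Student E"-like string: same raw score, but an easy item failed
# and the hardest item answered correctly.
erratic = [1, 1, 0, 1, 1, 1, 0, 0, 0, 1]

print(fit_mean_squares(consistent, theta, difficulties))  # small mean-squares
print(fit_mean_squares(erratic, theta, difficulties))     # inflated, especially outfit
```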
Clearly, mathematics can take us only so far. In the end, we must also think again about how to use measurement wisely as well as correctly.
Benjamin D. Wright
Note: If lucky guesses or careless mistakes are to be eliminated, then use a technique similar to "Guessing and Measurement" www.rasch.org/rmt/rmt62a.htm
Who is awarded First Prize when the raw scores are the same? Wright B.D. Rasch Measurement Transactions, 1998, 12:2 p. 629.