"I have a question about the limits of variability in the difficulty or challenge posed by different elements of the facets analyzed in a performance. Let us say that a data set derived from a large-scale performance assessment that has the following characteristics:
1. 5,000+ examinees.
2. each of whom performs two tasks drawn at random from a pool of 20 speeches.
3. each examinee is rated on both tasks by a random pair of raters from a pool of 50.
4. each task is rated on 4 assessment items with a common 6-point scale.
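A judging plan like this is easy to simulate. The Python sketch below generates ratings under a many-facet rating scale model; the uniform generating distributions and the Andrich thresholds tau are illustrative assumptions chosen to match the ranges reported below, not values from the actual assessment.

import numpy as np

rng = np.random.default_rng(0)

N_EXAMINEES, N_TASKS, N_RATERS, N_ITEMS = 5000, 20, 50, 4
CATEGORIES = 6  # scores 0..5 on the common 6-point scale

# Illustrative generating parameters (assumptions, matched to the reported ranges)
ability  = rng.uniform(-8.0, 8.0, N_EXAMINEES)   # examinee logits
severity = rng.uniform(-2.0, 2.0, N_RATERS)      # rater harshness, logits
task_d   = rng.uniform(-0.5, 0.5, N_TASKS)       # task difficulty, logits
item_d   = rng.uniform(-1.0, 1.0, N_ITEMS)       # item challenge, logits
tau      = np.array([-2.0, -1.0, 0.0, 1.0, 2.0]) # assumed Andrich thresholds

def category_probs(theta):
    """Rating scale model: P(score = x), x = 0..5, at combined logit theta."""
    x = np.arange(CATEGORIES)
    log_num = x * theta - np.concatenate(([0.0], np.cumsum(tau)))
    p = np.exp(log_num - log_num.max())
    return p / p.sum()

rows = []
for n in range(N_EXAMINEES):
    tasks  = rng.choice(N_TASKS, size=2, replace=False)   # two random tasks
    raters = rng.choice(N_RATERS, size=2, replace=False)  # one random rater pair
    for t in tasks:
        for r in raters:              # both raters rate both tasks
            for i in range(N_ITEMS):  # on all four items
                theta = ability[n] - severity[r] - task_d[t] - item_d[i]
                score = rng.choice(CATEGORIES, p=category_probs(theta))
                rows.append((n, t, r, i, score))

print(len(rows), "ratings")  # 5000 x 2 x 2 x 4 = 80,000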
"According to the results of a Rasch analysis,
the examinee abilities spread from -8 to +8 logits;
the raters vary in harshness from -2 to +2 logits, with Infit Mean-Squares between 0.7 and 1.5;
the tasks range in difficulty from -0.5 to +0.5 logits, with Infit MnSq between 0.9 and 1.2;
the assessment items range in challenge from -1 to +1 logit, with Infit MnSq between 0.8 and 1.2.
"The biggest problem seems to be rater variability. Can Rasch analysis produce fair ability estimates with these large measure and fit variations?"
Tom Lumley, Hong Kong Polytechnic University
In this example, variability in rater severity could be an asset. The range of task and item difficulties is small relative to the examinee range. The wide range of rater severity causes candidates of the same ability to be evaluated against different levels of the rating scale, producing both better examinee measures and better validation of rating-scale functioning. As long as the raters are self-consistent (across time and across examinees), I can't imagine how variability in severity would ever be a problem.
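To see how the severity spread exercises different parts of the scale, consider one examinee of ability +1 logit rated by a lenient, an average, and a harsh rater. The short sketch below uses an assumed set of Andrich thresholds; the point holds for any reasonable threshold values.

import numpy as np

tau = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])  # assumed Andrich thresholds

def expected_score(theta):
    """Expected rating (0-5) under the rating scale model at combined logit theta."""
    x = np.arange(len(tau) + 1)
    log_num = x * theta - np.concatenate(([0.0], np.cumsum(tau)))
    p = np.exp(log_num - log_num.max())
    p /= p.sum()
    return float(p @ x)

ability = 1.0
for severity in (-2.0, 0.0, 2.0):
    print(f"severity {severity:+.1f} -> expected rating "
          f"{expected_score(ability - severity):.2f}")
# The lenient rater works the top of the scale, the harsh one the bottom,
# so the same examinee probes different category thresholds.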
The variation in your rater fit statistics indicates that some part of your data may be of doubtful quality. This could be due to raters with different rating styles (e.g., halo effect, extremism). If so, you can discover the amount of mis-measurement this causes by allowing each rater to define their own rating scale. The examinee measures from this model can then be compared with those from the shared rating-scale model. I have a paper using these methods, "Unmodelled Rater Discrimination Error", given at IOMW, 1998.
Peter Congdon
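Congdon's comparison might be summarized like this: estimate examinee measures twice, once with the shared rating scale and once with rater-specific scales, then ask how far each measure moves relative to its standard error. The sketch below uses randomly generated stand-in measures and SEs, since it does not run the two analyses itself.

import numpy as np

rng = np.random.default_rng(2)
n = 5000

# Stand-ins for the two sets of estimates (in practice, read from the two runs)
shared_m    = rng.uniform(-8, 8, n)
shared_se   = rng.uniform(0.2, 0.5, n)
by_rater_m  = shared_m + rng.normal(0, 0.3, n)  # assumed mild model disagreement
by_rater_se = rng.uniform(0.2, 0.5, n)

diff = by_rater_m - shared_m
joint_se = np.hypot(shared_se, by_rater_se)

print("correlation:", round(float(np.corrcoef(shared_m, by_rater_m)[0, 1]), 3))
print("mean |shift| (logits):", round(float(np.abs(diff).mean()), 3))
# Shifts large relative to the joint SE mark examinees whose measures depend
# on how individual raters used the scale - the "mis-measurement" above.
print("shifted > 2 joint SEs:", int((np.abs(diff) > 2 * joint_se).sum()))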
Looking at your quality-control fit statistics, your tasks and items are performing well. Raters with noisy fit statistics near 1.5 are the problem. Perhaps the misfitting raters encountered idiosyncratic examinees. Temporarily drop the idiosyncratic examinees and judges from the data. Analyze the remaining examinees, items, tasks and raters. Verify that the 6-category rating scale is working as intended for all items, tasks and raters by allowing each of these in turn to have its own rating-scale definition. Finally, anchor all measures at their most defensible values and reintroduce the dropped examinees and judges for the final measurement report.
John Michael Linacre
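Linacre's screening step has a compact form: a rater's Infit MnSq is the sum of that rater's squared score residuals divided by the sum of the model variances of those ratings, so raters can be flagged directly from the residual file of a fitted model. The sketch below uses random stand-in residuals (a real analysis would supply them) and treats the 0.7-1.5 range from the question as an assumed screening band.

import numpy as np

def infit_mnsq(resid_sq, model_var, group, n_groups):
    """Infit MnSq per element: sum (x - E)^2 / sum Var, grouped by element."""
    num = np.bincount(group, weights=resid_sq, minlength=n_groups)
    den = np.bincount(group, weights=model_var, minlength=n_groups)
    return num / den

rng = np.random.default_rng(1)
rater_id  = rng.integers(0, 50, 80_000)           # which rater gave each rating
model_var = rng.uniform(0.5, 2.0, 80_000)         # model variance of each rating
resid_sq  = model_var * rng.chisquare(1, 80_000)  # stand-in squared residuals

fit = infit_mnsq(resid_sq, model_var, rater_id, 50)
keep = (fit >= 0.7) & (fit <= 1.5)  # assumed screening band (from the question)
# With well-fitting stand-ins, few or no raters are flagged.
print("raters flagged for temporary removal:", np.flatnonzero(~keep))
# Re-analyze without the flagged elements, verify the rating scale, anchor the
# stable measures, then reintroduce the dropped elements for the final report.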
Rater Variability. Lumley T., Congdon P., Linacre J.M. Rasch Measurement Transactions, 1999, 12:4 p.