Misfit Statistics for Rating Scale Categories

Analysis of ordinal observations has seldom included investigation of whether each category of the rating scale is performing as intended. Even in Rasch analysis, such investigation has not been routine, partly because diagnostically useful fit statistics were not available. Two useful sets of statistics, implemented in recent versions of BIGSTEPS, have now been formulated.

Average Measure Difference

Implicit in the use of ordinal observations is the specification that the higher the category number, the more of the latent variable is evidenced. Thus, on average, "better" performers should produce higher ratings than "worse" performers, and "easier" items should manifest higher ratings than "harder" items.

The observation Xni=x is modelled as governed by the difference between person n's ability Bn and item i's difficulty Di. The effect of the measure difference Bn-Di is observed as Xni=x. Examining the entire data set, the average measure difference (AMD) modelled to produce an observation in category x is

$$\mathrm{AMD}_x = \frac{\sum_{X_{ni}=x} (B_n - D_i)}{\sum_{X_{ni}=x} 1}, \qquad n = 1, \ldots, N;\ i = 1, \ldots, L$$

The AMD for each category can be computed. Since "more" of the rating scale is modelled to reflect "more" of the underlying variable, the AMDs are expected to increase up the rating scale. This pattern can be seen in Example 1, a well-behaved rating scale. As the categories ascend from 0 to 4, the AMDs increase from -2.34 to 2.21 logits.

Category       AMD, Example 1:      AMD, Example 2:
 Number        Well-behaved         Problematic
               rating scale         rating scale
----------------------------------------------------
   0               -2.34                -2.34
   1               -1.56                -1.56
   2                0.12                 1.57
   3                1.57                 0.12
   4                2.21                 2.21
----------------------------------------------------

Example 2, a problematic scale, shows a different pattern. The AMDs are ascending for the most part, but the AMD for category 3, at 0.12, is less than that for category 2, at 1.57. This suggests that category 2 does not signify "less" of the variable than category 3 in practice, despite the scale designer's intention. A common cause of this is using the central option of a five-category Likert scale to signify "No Opinion" or "Don't Know". "Don't know" either probes a dimension different from that of the other categories or enables the respondent to escape from answering the question.
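As a concrete illustration, here is a minimal sketch of the AMD computation in Python, assuming a persons-by-items ratings matrix `X` (with negative codes for missing data) and previously estimated measures `B` and `D`; these names are hypothetical, not taken from BIGSTEPS:

```python
import numpy as np

def average_measure_difference(X, B, D, m):
    """AMD for each category x = 0..m: the mean of (B_n - D_i)
    over all observations with X_ni = x (negative codes = missing)."""
    diff = B[:, None] - D[None, :]        # B_n - D_i for every person-item pair
    return np.array([diff[X == x].mean() for x in range(m + 1)])

# For a well-behaved scale like Example 1, the AMDs ascend with category number:
# amd = average_measure_difference(X, B, D, m=4)
# np.all(np.diff(amd) > 0)   # True when no category is disordered
```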

Here are some remedies for disordered AMDs, such as those in Example 2:

1) Some or all of the observations in categories 2 or 3 can be treated as missing. Indeed, if category 2 is off-dimension or used idiosyncratically, then it is not measuring the desired dimension and all observations in category 2 could be treated as missing.

2) Closer examination of the definitions of categories 2 and 3 may indicate that reversing their order would maintain the ordinally ascending meaning of the scale. Simply recode all 2's as 3's and all 3's as 2's.

3) The difference between a "2" and a "3" may not be clear to respondents, e.g., the difference between "often" and "nearly always". Then categories 2 and 3 can be joined into one category, numbered 2, so that category 4 is renumbered 3.

When combining or deleting categories, aim to equalize the category frequencies as much as possible, so that each category contributes about equally to the measurement process. All three remedies are sketched in code below.
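These remedies amount to simple recodes of the data matrix. A minimal sketch, assuming the same hypothetical ratings matrix `X` with -1 as the missing-data code:

```python
import numpy as np

# Remedy 1: treat all category-2 observations as missing
X1 = np.where(X == 2, -1, X)

# Remedy 2: reverse categories 2 and 3
X2 = X.copy()
X2[X == 2], X2[X == 3] = 3, 2

# Remedy 3: collapse 2 and 3 into one category, renumbering 4 as 3
recode = {0: 0, 1: 1, 2: 2, 3: 2, 4: 3, -1: -1}
X3 = np.vectorize(recode.get)(X)
```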

Observed / Expected Mean-Square Fit Ratios

AMDs can be correctly ordered, yet the categories themselves may still be used haphazardly. The modelled raw-score variance of an observation Xni on a rating scale with categories 0 to m is

$$V_{ni} = \sum_{k=0}^{m} (k - E_{ni})^2 \, P_{nik}$$

where Eni is the expected value of Xni and Pnik is the modelled probability of observing Xni = k.

The observed squared residual of Xni is

$$(X_{ni} - E_{ni})^2$$

Summing these variances across the data and partitioning by rating scale category, the variance explained by ratings in category x is modelled to be

$$M_x = \sum_{\text{all } X_{ni}} (x - E_{ni})^2 \, P_{nix}$$

The observed residual sum of squares due to ratings of Xni=x is

$$O_x = \sum_{X_{ni}=x} (x - E_{ni})^2$$

When the data fit the model, the modelled variance approximates the residual sum of squares. Differences are diagnostic of misfit.

The INFIT statistic, Vx, summarizes their agreement for category x:

$$V_x = \frac{O_x}{M_x}$$

This fit ratio has a mean-square form with expectation 1.0, and range 0 to infinity. Values greater than 1.0 indicate improbable category use. Values less than 1.0 indicate overly predictable category use.
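As a sketch of how Vx might be computed, assuming the Andrich rating scale model for the category probabilities Pnik (the ratings matrix `X`, measures `B` and `D`, and step thresholds `F` are hypothetical inputs, not quantities given in this article):

```python
import numpy as np

def category_probs(b, d, F):
    """Category probabilities P_nik under the Andrich rating scale model:
    log P_nik is proportional to k*(b - d) - (F_1 + ... + F_k), zero at k = 0."""
    k = np.arange(len(F) + 1)
    logits = k * (b - d) - np.concatenate(([0.0], np.cumsum(F)))
    p = np.exp(logits - logits.max())     # numerically stabilized softmax
    return p / p.sum()

def category_infit(X, B, D, F):
    """INFIT mean-square V_x = O_x / M_x for each category x = 0..m."""
    m = len(F)
    k = np.arange(m + 1)
    O = np.zeros(m + 1)                   # observed residual sums of squares, by category
    M = np.zeros(m + 1)                   # modelled variance, partitioned by category
    for n, b in enumerate(B):
        for i, d in enumerate(D):
            x = X[n, i]
            if x < 0:                     # skip missing observations
                continue
            p = category_probs(b, d, F)
            E = k @ p                     # expected rating E_ni
            O[x] += (x - E) ** 2          # only the observed category accumulates
            M += (k - E) ** 2 * p         # (x - E_ni)^2 P_nix for every category x
    return O / M
```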

The squared standardized residual for an observation of Xni=x is

$$Z_{nix}^2 = \frac{(x - E_{ni})^2}{V_{ni}}$$

Summing these terms across the data and partitioning by rating scale category, the contribution of category x is modelled to be

$$M'_x = \sum_{\text{all } X_{ni}} Z_{nix}^2 \, P_{nix}$$

The observed sum of squared standardized residuals for observations of Xni=x is

$$O'_x = \sum_{X_{ni}=x} Z_{nix}^2$$

Again, when the data fit the model, the observed sum approximates the modelled sum.

The OUTFIT mean-square for observations in category x is the ratio of observed to expected sum-of-squared standardized residuals, Ux:

$$U_x = \frac{O'_x}{M'_x}$$

This fit ratio is also a mean-square with expectation 1.0 and range 0 to infinity. Values greater than 1.0 indicate improbable category use. Values less than 1.0 indicate overly predictable category use.
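The OUTFIT counterpart follows the same pattern; this sketch reuses the hypothetical `category_probs` helper and numpy import from the INFIT sketch above:

```python
def category_outfit(X, B, D, F):
    """OUTFIT mean-square U_x = O'_x / M'_x for each category x = 0..m
    (requires numpy and category_probs from the INFIT sketch)."""
    m = len(F)
    k = np.arange(m + 1)
    Op = np.zeros(m + 1)                  # observed sums of Z_nix^2, by category
    Mp = np.zeros(m + 1)                  # modelled sums, weighted by P_nix
    for n, b in enumerate(B):
        for i, d in enumerate(D):
            x = X[n, i]
            if x < 0:                     # skip missing observations
                continue
            p = category_probs(b, d, F)
            E = k @ p                     # expected rating E_ni
            V = (k - E) ** 2 @ p          # modelled variance V_ni
            z2 = (k - E) ** 2 / V         # Z_nik^2 for every possible rating k
            Op[x] += z2[x]                # only the rating actually observed
            Mp += z2 * p                  # every rating, weighted by its probability
    return Op / Mp
```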

The table below shows the results for the familiar "Liking for Science" data set. The AMDs exhibit the desired monotonically ascending pattern. The OUTFIT mean-squares, however, show some unwanted behavior. The bottom category, with mean-square 1.02, is used approximately as modelled. The central category, with mean-square .69, is overly predictable. This suggests that some children responding to this survey avoided making any but the obvious choices; one child responded in the central category to every item. The top category, with mean-square 1.47, manifests improbable observations: a few children liked activities that they were expected to dislike, such as "watching rats" and "finding old bottles". From the perspective of measuring "Liking for Science", these idiosyncratic ratings are off-dimension and so perturb the measuring system. Measurement would be improved by recoding these inconsistent ratings as missing.

The INFIT mean-squares, which are more sensitive to idiosyncratic usage of adjacent categories, are within their typical range.

Category        Count    AMD    INFIT          OUTFIT
                                Mean-square    Mean-square
----------------------------------------------------------
0 "dislike"      378     -.87      1.09           1.02
1 "neutral"      620      .13       .86            .69
2 "like"         852     2.21      1.00           1.47

Misfit Statistics for Rating Scale Categories. Linacre JM. … Rasch Measurement Transactions, 1995, 9:3 p.450


