More Objections to the Rasch Model

More objections have been raised to the application of the Rasch model to empirical data.

1. "The purpose of the Rasch model is to describe the data, so a poor fit of the Rasch model to the data invalidates the use of the Rasch model."

Describing the data is the purpose of many statistical models, such as regression models, but it is not the purpose of the Rasch model. The purpose of the Rasch model is to use the data to construct additive measures on a latent variable. These measures may or may not be a good description of the data. For instance, if the data contain lucky guesses, a Rasch model will deliberately describe those data badly: the lucky guesses contradict the Rasch measures and are detected with misfit statistics. For more, see "Rasch model as Additive Conjoint Measurement" www.rasch.org/memo24.htm
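As a minimal illustration (not from the original article) of how such a guess shows up, consider a hypothetical low-ability respondent who succeeds on a hard item under the dichotomous Rasch model. The ability and difficulty values below are invented for the example:

    import numpy as np

    # Hypothetical illustration: how a misfit statistic can flag a "lucky guess".
    # A low-ability person (theta = -2 logits) answers a hard item (difficulty
    # b = +2 logits) correctly. Under the dichotomous Rasch model:
    theta, b = -2.0, 2.0
    p = 1.0 / (1.0 + np.exp(-(theta - b)))   # expected probability of success
    observed = 1                             # the lucky guess: an unexpected "right"

    # Standardized residual: (observed - expected) / model standard deviation
    z = (observed - p) / np.sqrt(p * (1.0 - p))
    print(round(p, 3), round(z, 2))          # p is about 0.018, z is about +7.4 -> flagged as misfitting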

2. "The Rasch-Andrich Rating-Scale model and the Rasch-Masters Partial Credit model assume that the respondent is making a series of consecutive choices between neighboring categories."

Those polytomous models specify that the respondent is making a choice from all categories simultaneously. Consecutive choices are specified in other models such as the Glas-Verhelst "Steps" ("Success") Model or the "Failure" model; see RMT 5:2, 155 www.rasch.org/rmt/52j.htm. However, experience indicates that even in situations where consecutive decisions are made, the Andrich or Masters models are often a better basis for measurement than consecutive-choice models. This may be because the respondent is aware of the other choices, even if they are not currently available for selection.

3. "Empirical items never measure in the same scale units. Real items have different discriminations. Consequently the Rasch model cannot be used."

This is true about real items, but not about the Rasch model. We do not need exact concordance between items; we need usable concordance. Then we need to be alerted when the lack of concordance becomes a threat to useful measurement. Rasch analysis constructs as-concordant-as-possible additive measures from items with different scale units (discriminations), and then reports the degree of non-concordance of each item using misfit statistics. Items with exceedingly high or exceedingly low discrimination are usually defective items for other reasons; see RMT 7:2, 289 www.rasch.org/rmt/rmt72f.htm
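For readers who want to see how the reported misfit statistics respond to discrepant discrimination, here is a rough sketch of the usual infit and outfit mean-square computations for one dichotomous item. The responses and abilities below are hypothetical, and real software estimates the parameters rather than taking them as given:

    import numpy as np

    # Sketch (hypothetical data): infit/outfit mean-squares for one item under
    # the dichotomous Rasch model. Values near 1.0 indicate discrimination
    # consistent with the model; values well above 1.0 suggest low discrimination
    # (noise), values well below 1.0 suggest over-high discrimination.
    def item_fit(x, theta, b):
        """x: 0/1 responses of N persons to one item; theta: N abilities; b: item difficulty."""
        p = 1.0 / (1.0 + np.exp(-(theta - b)))    # expected scores
        var = p * (1.0 - p)                       # model variances
        z2 = (x - p) ** 2 / var                   # squared standardized residuals
        outfit = z2.mean()                        # unweighted mean-square
        infit = ((x - p) ** 2).sum() / var.sum()  # information-weighted mean-square
        return infit, outfit

    theta = np.array([-1.5, -0.5, 0.0, 0.5, 1.5])
    x = np.array([0, 0, 1, 1, 1])                 # orderly (Guttman-like) responses -> mean-squares below 1
    print(item_fit(x, theta, b=0.0))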

4. "The responses by each respondent to each item must be independent for Rasch analysis to be successful."

The Rasch ideal is local independence. Each item has a difficulty, a location on the latent variable. Each respondent has an "ability", also a location on the same latent variable. A Rasch model predicts the expected response for each respondent to each item based on those locations. When the expected responses are subtracted from the observed responses, the resulting residuals are modeled to be independent. Of course, they never are! Again, misfit analysis comes to our rescue. Is the lack of local independence in the data sufficiently large and sufficiently pervasive to be a threat to the meaning of the additive measures? Experience indicates that thoughtfully-constructed instruments produce observations that are locally independent enough for the additive Rasch measures to be useful for inference.
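A minimal sketch of this residual check, using simulated data and assuming the dichotomous Rasch model with known parameters (real analyses estimate them), might look like this. Off-diagonal residual correlations near zero are consistent with local independence (a Q3-style check):

    import numpy as np

    # Sketch: subtract Rasch-expected responses from observed responses, then
    # examine how strongly the residuals of pairs of items correlate.
    def rasch_residuals(X, theta, b):
        """X: N persons x L items of 0/1 responses; theta: N abilities; b: L difficulties."""
        P = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))  # expected responses
        return X - P                                              # residuals

    rng = np.random.default_rng(0)
    theta = rng.normal(0, 1, 500)
    b = np.array([-1.0, 0.0, 1.0])
    P = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
    X = (rng.random((500, 3)) < P).astype(float)   # data simulated to fit the model

    R = rasch_residuals(X, theta, b)
    print(np.corrcoef(R, rowvar=False).round(2))   # off-diagonal values near 0 = local independence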

5. "Rasch analysis can cause unidimensional data to appear multidimensional."

No empirical data are strictly unidimensional. Imagine a perfectly constructed test. Each item implements the intended unidimensional latent variable, but each item also differs from every other item. The ways in which an item differs must be independent of the ways every other item differs; otherwise the items will be locally dependent. Thus each item must implement the intended dimension and also its own "difference" dimension, unique to the item and uncorrelated with the "difference" dimension of any other item. Of course, empirical items fall short in both regards: they do not exactly implement the intended variable, and their "difference" dimensions are somewhat correlated with the "difference" dimensions of other items.

The choice of variant of the Rasch model, and other decisions made by the analyst, can alter the impact of the inherent multidimensionality of the items. For instance, if polytomous items are rescored as dichotomies, the choice of cut-point in the rating scale may exacerbate or ameliorate the unwanted correlations in the data. The analyst must be aware of this and may adjust the scoring accordingly. See, for instance, "Communication validity and rating scales", RMT 10:1, 482 www.rasch.org/rmt/rmt101k.htm
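To make the rescoring decision concrete, here is a hypothetical sketch of dichotomizing a 0-4 rating scale at two different cut-points. The ratings are invented; the point is only that the resulting dichotomous patterns, and hence the inter-item correlations that feed any dimensionality analysis, differ with the cut-point:

    import numpy as np

    # Hypothetical 3 persons x 5 items, rating-scale categories 0-4
    ratings = np.array([[0, 1, 2, 3, 4],
                        [1, 2, 2, 3, 3],
                        [0, 0, 1, 4, 4]])

    dichot_low  = (ratings >= 1).astype(int)   # cut between categories 0 and 1
    dichot_high = (ratings >= 3).astype(int)   # cut between categories 2 and 3
    print(dichot_low)
    print(dichot_high)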

6. "Factor Analysis of the original responses is more accurate for investigating possible multidimensionality than unidimensional Rasch analysis."

Factor analysis (FA) can report too many factors; see RMT 8:1, 347, www.rasch.org/rmt/rmt81p.htm. But let us consider a practical situation: suppose that FA reports one substantial factor in the inter-item correlation matrix (according to Kaiser's rule or some similar criterion), but the Rasch analysis (PCA of residuals) reports a sizable secondary dimension in the inter-item correlation matrix of the Rasch residuals (or vice versa). Which is correct?

An obvious solution is to split the set of items into two subsets based on their dimensionality in the analysis which reports two possible dimensions. Then cross-plot the person raw scores or Rasch measures on the two subsets. If the correlation is close to 1.0 (especially when disattenuated for measurement error - RMT 10:1, 479 www.rasch.org/rmt/rmt101g.htm) then we have falsified the empirical two-dimensional finding for this sample.
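The disattenuation referred to here is Spearman's classical correction for attenuation: divide the observed correlation between the person measures on the two subsets by the square root of the product of their reliabilities. A small sketch with invented numbers:

    import math

    # Spearman's correction for attenuation: observed correlation divided by the
    # square root of the product of the two subsets' (person) reliabilities.
    def disattenuated(r_observed, rel_a, rel_b):
        return r_observed / math.sqrt(rel_a * rel_b)

    # Hypothetical values: an observed correlation of 0.80 between subsets whose
    # person reliabilities are 0.85 and 0.80 disattenuates to about 0.97,
    # close enough to 1.0 to suggest a single dimension for this sample.
    print(round(disattenuated(0.80, 0.85, 0.80), 2))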

If the correlation between the two subsets is close to 0.0, then clearly there are two dimensions. Two different dimensions have been combined into one instrument. Inferences based on either dimension are weakened by the other. Suppose that the correlation of person scores or measures is not close to 1.0, but is, say, 0.8. Then is this one dimension or two? For instance, suppose the dimensions are reading and arithmetic for grade-school children. We see immediately that, for the purposes of instruction, they are different dimensions, but for the purposes of school administration, such as advancing the child to the next grade, they are different strands within the same "educational achievement" variable.

Consequently, from the Rasch perspective, the more accurate method for investigating multidimensionality is the method which provides the best guidance about the threat to the validity of the additive measures. FA may identify (or fail to identify) dimensions, but it provides uncertain information on which to base decisions about the threat to additive measurement.

John Michael Linacre


More Objections to the Rasch Model, J.M. Linacre ... Rasch Measurement Transactions, 2010, 24:3 p. 1298-9



