"Likert or Rasch? Nothing is more applicable than good theory" proclaim A. van Alphen, R. Halfens, A. Hasman and T. Imbos (Journal of Advanced Nursing, 1994, 20, 196-201). Let us benefit from their comparison of these methodologies.
Rensis Likert's method of summed ratings is widely used for analyzing and reporting questionnaire responses. "Distances on the Likert [raw score] scale are interpreted as equal over the full range of the scale. The scale is treated as an interval scale based on ordinal level item scoring" (p. 200). "All items are assumed to be replications of each other or in other words items are considered to be parallel instruments" (p. 197). "In Likert scaling, it is assumed that the trace lines [ICCs] of all items of the questionnaire coincide approximately. This implies that in Likert scaling no attention is paid to item 'strengths' [difficulties]" (p. 198). Further, Likert scaling limits its fit analysis to the computation of a reliability coefficient and inter-item correlations.
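To make the summed-ratings approach concrete, here is a minimal sketch (our illustration, not the paper's; the response matrix is invented): the ordinal ratings are simply added into a raw score, and the only fit evidence reported is internal consistency and the inter-item correlations.

```python
import numpy as np

# Hypothetical data: 5 respondents x 4 items, each rated 1-5
# (strongly disagree ... strongly agree). Values are invented for illustration.
responses = np.array([
    [4, 5, 4, 3],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
])

# The Likert "measure" is the raw summed rating, treated as if interval-level
summed_scores = responses.sum(axis=1)

# Fit analysis is limited to internal consistency and inter-item correlations
k = responses.shape[1]
item_variances = responses.var(axis=0, ddof=1)
alpha = (k / (k - 1)) * (1 - item_variances.sum() / summed_scores.var(ddof=1))
inter_item_r = np.corrcoef(responses, rowvar=False)

print("Summed scores:", summed_scores)        # e.g. [16  9 19 11  6]
print("Cronbach's alpha: %.2f" % alpha)
print("Inter-item correlations:\n", np.round(inter_item_r, 2))
```

The summed score is then read as an interval measure, even though nothing in this arithmetic justifies equal distances between adjacent raw-score points.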
The paper continues with a discussion of Rasch scaling from the standpoint of the dichotomous model, a choice dictated by the software available to the authors. Most of its remarks, however, also apply to polytomous Rasch models.
"The aim of using the Rasch model is not only to scale subjects but also to scale items on the same continuum... The scaling procedure starts with the estimation of the parameters of the Rasch model... The next step is to test the fit between the model assumptions [specifications] and the data. When the items obey the Rasch assumptions, scale values for the items can be published and used for other [samples]. The researcher has shown explicitly that the scale is a unidimensional test with good measurement properties" (p. 199).
"Testing the fit between the data and the model, however, can lead to negative results [i.e., misfit]. In that situation three strategies are recommended in the literature. The first [Draco's] strategy is to start all over again, design a new instrument with new items and gather new data. The second [for people more interested in fit than meaning] is to look for another latent model which offers a better explanation of the pattern of item responses. The third strategy is to analyze the data and to remove items' or subjects' scores [or individual aberrant responses]. It may be that one or more items or a certain subsample of subjects are responsible for the misfit between the model and the data. In this [truly scientific] strategy, statistical arguments and arguments of content both have to govern item or subject reduction" (p. 199).
"Rasch models have very strong points. First, the probabilistic nature of the model, in contrast with deterministic models like the Guttman scale, takes into account that human [or any other] responses are subject to fluctuations. Second the assumptions [specifications] of the Rasch model can be tested statistically. Third, a Rasch scale is a psychometrically proven interval scale: you know better what you are measuring [surely the whole point of the process!]. Fourth, the estimates of the person and item parameters are sample-free. This means they will hold for every sample and not merely the sample under consideration. A final strong point is the availability of [fit] information about the various items [and persons]" (p.200-1).
"One of the disadvantages lies in the fact that applying the Rasch model requires some knowledge of and acquaintance with mathematics [of the analyst, but not of the intended audience]... Second, a disadvantage of the Rasch model is the great number of observations or replications that are needed to estimate the parameters of the model [but Wright & Stone manage fine with 35 children]. Third, the Rasch model holds strong assumptions which are not easy to meet by the observations. [In fact, Rasch specifications can never be met perfectly, but are nearly always met usefully by thoughtfully collected data]" (p.201).
This paper identifies the theoretical issues admirably. All the Rasch proponent need add is a worked example.
Likert or Rasch? Linacre JM. Rasch Measurement Transactions, 1994, 8:2 p.356