EDITORIAL
This special issue is devoted to Rasch models for measurement: models that are being applied increasingly often in quantitative educational and psychological research. One of the key features of these models is that they emphasize the study of individual persons rather than populations. Another is that, when data accord with the models, they permit the measurement of each person to be independent of the particular test questions chosen from a well-defined class of questions. Early studies of these models emphasized procedures for estimating the parameters of the simplest model, which is appropriate for dichotomously scored test questions.
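To fix ideas, a minimal sketch of that simplest dichotomous model, in the notation now conventional in the Rasch literature (the symbols here are illustrative rather than drawn from the papers in this issue), is

$$\Pr\{X_{ni} = 1\} = \frac{\exp(\theta_n - \delta_i)}{1 + \exp(\theta_n - \delta_i)},$$

where $\theta_n$ is the location (ability) of person $n$ and $\delta_i$ the location (difficulty) of question $i$. Comparing two persons $n$ and $m$ on any question $i$ gives

$$\frac{\Pr\{X_{ni}=1\}/\Pr\{X_{ni}=0\}}{\Pr\{X_{mi}=1\}/\Pr\{X_{mi}=0\}} = \exp(\theta_n - \theta_m),$$

an expression free of $\delta_i$; this is the sense in which the measurement of each person can be independent of the particular questions chosen.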
Recently, emphasis has turned, firstly, to generalizing the principles of the simple model to models appropriate for more complex data collection designs and, secondly, to examining methods for checking the accordance, or fit, between the chosen model and the observed data.
One paper which deals with models appropriate for responses in multiple categories is that by Masters and Wright, who outline the various steps required in defining a variable from the perspective of Rasch models. Latimer applies a Rasch model to test hypotheses about the psychological processes involved in reading comprehension. The model he applies uses only dichotomously scored responses, but the difficulty of each reading task is taken to be composed of various combinations of the difficulties of only four more elementary tasks.
The paper by Kissane offers new insights into the controversial topic of the measurement of change. He argues that the main emphasis in the study of change should be on the rate of change and that, as a result, at least three measurements across time are required. He also demonstrates how the principles of Rasch models can be applied to the study of the rate of change.
Andrich's paper shows how the concept of reliability found in classical test theory (CTT) can be accommodated within the framework of Rasch models, and how this framework helps one appreciate more clearly the uses and limitations of the traditional KR-20 index of internal consistency.
The other four papers deal explicitly with tests of fit between the model and the data. Consistent with the emphasis on persons, a feature of the analysis of test data from the perspective of Rasch models is the study of the internal consistency of the responses of each person, that is, the study of 'person-fit'. Smith and Hedges provide evidence that some of the more frequently used tests of fit associated with items can be applied equally well to persons. Douglas reinforces this point with his comments on the central place of the residual between each person's observed response to each item and the response predicted by the model. In Bell's paper, statistics for item-fit and person-fit are shown to be symmetrical, and these are related respectively to the ideas of item discrimination and person reliability. The paper shows explicitly that the values of the person-fit indices correlate highly with the values of the person reliabilities. Rost's paper demonstrates that, for most practical purposes, indices of fit based on the numerically simpler unconditional likelihood ratio statistics are just as powerful as those obtained from the more complex conditional likelihood ratio tests. Rennie's research note also deals with fit, but not from a straightforwardly statistical approach. She shows how an examination of the asymmetry of parameter estimates that might reasonably be expected to be symmetrical reveals certain response sets.
Important principles in choosing a Rasch model are brought into focus by the fact that these papers deal with both the generalization of the simplest of Rasch models for measurement and the tests of fit between empirical data and the chosen model. Firstly, while models may be postulated for more complex data collection designs than those involving a dichotomous response, the elaborations retain the separation of the person parameters from the item or question parameters. Thus the elaboration of models is not simply an ad hoc exercise designed to improve the modelling of the data. Secondly, the emphasis on checking the ways in which the data and the chosen model might not accord with one another demonstrates a concern for understanding the principles behind the data. The papers in this issue show that the approach to test construction and analysis, for both applied and research purposes, involves a constant interplay between these strong models for measurement and the data collection designs which are employed.
David Andrich and Graham Douglas, Guest Editors
Education Research and Perspectives, Vol. 9, No. 1, June 1982, pp. 5-6.
Reproduced with permission of The Editors, The Graduate School of Education, The University of Western Australia. (Clive Whitehead, Oct. 29, 2002)