Scores, Reliabilities and Assumptions

Testing literature is rife with incomprehensible papers and reports whose aim seems to be not communication but obscurantism. Contributing to this sorry spectacle are the terms "score", "reliability", and "assumption".

Call a Measure a "Measure", Not a "Score":
Even in our best writing we sometimes confuse ourselves and our readers by using the term "score" to convey two seriously different meanings.

The first meaning, sometimes specialized as "raw score", is useful. This "score" refers to a count of observed right answers, rating scale categories or partial credit steps. We count concrete events - however different in qualitative detail - as exchangeable replications of a single idea.

The second meaning, as a measure, is misleading. This version of "score" is often applied to awkward concoctions of raw counts that are mistaken for genuine measures. When such "scores" are taken for measures and subjected to linear statistical analysis, the results are always wrong to some unknown extent.

We began moving away from nonlinear, test-dependent raw scores to their transformations into linear, test-free measures long ago. Our work is distinguished by the care we take to avoid the error of mistaking scores for measures. Why not be equally careful to use the noble term "measure" when we write and talk about the product of our analyses? Let's not hide our lovely light under that old decrepit barrel, the misleading term "score"!
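To make the distinction concrete, here is a minimal illustrative sketch (an addition for this reprint, not part of the original article) of how a raw score might be converted into a linear measure under a dichotomous Rasch model. It assumes item difficulties have already been calibrated in logits; the function name, the five item values and the tolerance are invented for the example.

import math

def raw_score_to_measure(raw_score, item_difficulties, tol=1e-6):
    # Convert a raw score on a dichotomous test to a Rasch measure in logits
    # by solving sum_i P_i(theta) = raw_score with Newton-Raphson, where
    # P_i(theta) = 1 / (1 + exp(-(theta - b_i))).
    n_items = len(item_difficulties)
    if not 0 < raw_score < n_items:
        raise ValueError("extreme scores (0 or perfect) have no finite measure")
    theta = math.log(raw_score / (n_items - raw_score))  # crude starting value
    for _ in range(100):
        p = [1.0 / (1.0 + math.exp(-(theta - b))) for b in item_difficulties]
        expected = sum(p)                                 # expected raw score at theta
        info = sum(q * (1.0 - q) for q in p)              # test information at theta
        step = (raw_score - expected) / info
        theta += step
        if abs(step) < tol:
            break
    return theta

# Hypothetical 5-item test: measures for raw scores 1 through 4
difficulties = [-1.0, -0.5, 0.0, 0.5, 1.0]
for r in range(1, 5):
    print(r, round(raw_score_to_measure(r, difficulties), 2))

The unequal logit distances between adjacent raw scores are exactly the nonlinearity warned about above: equal score differences do not correspond to equal differences on the underlying variable.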

Think of "Measurement Error", Not "Reliability":
Test reliabilities are not useful indicators of the precision, accuracy or reproducibility of test measures. Reliabilities are sample-specific and therefore limited as general characterizations of tests. They only tell how well a test worked on some particular past occasion with some particular past sample. Reliabilities are no more than bits of local history about "once upon a time" applications to long-vanished samples.

The standard error of measurement (SEM), however, is sample-free and hence test-specific. When sample and test are mismatched, the SEMs for that sample are larger than the SEMs for a sample which matches the test. But this variation of the SEM with test score extremeness is a fixed, sample-free property of the test and can be deduced precisely for any anticipated application. The test-specific pattern of SEMs specifies exactly how well the test can be expected to perform on any application to any sample - past, present or future.
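That deduction can be carried out directly from the model. The sketch below (an illustration added here, not the article's own) computes the model SEM of a fixed dichotomous test at several person measures: the SEM is the reciprocal square root of the test information, so it is smallest where the test is well targeted and grows as measures become extreme. The item difficulties are hypothetical.

import math

def model_sem(theta, item_difficulties):
    # Standard error of measurement at ability theta for a fixed test:
    # SEM = 1 / sqrt(test information), information = sum of p*(1-p).
    info = 0.0
    for b in item_difficulties:
        p = 1.0 / (1.0 + math.exp(-(theta - b)))
        info += p * (1.0 - p)
    return 1.0 / math.sqrt(info)

difficulties = [-1.0, -0.5, 0.0, 0.5, 1.0]   # hypothetical item calibrations
for theta in (-3, -1, 0, 1, 3):
    print(f"theta = {theta:+d}   SEM = {model_sem(theta, difficulties):.2f}")

The same pattern holds whatever sample happens to be tested, which is why the SEM, unlike a reliability coefficient, characterizes the test rather than one particular administration.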

"Specifications", Not "Assumptions":
Poor, weak, speculative "assumptions" have no useful place in discussions of models. "Assumptions" give a profoundly wrong impression about models and their use. "Assumptions", and the ever-popular "violations" they lead to, make a model seem a helpless maid on a reckless blind date with dangerous data.

The purpose of a model is to enforce the discipline of a strong theory by applying the demanding and precisely expressed "specifications" the theory calls for. The scientific questions are:

NOT:"Does the model fit the data?"
"Is the model violated?"

BUT:"Can the data fit the model?"
"Are the data useful?"

The "specifications" of a model are its raison d'etre and its modus operandi. The scientific value of the Rasch model is what it specifies - and hence requires - for data. The Rasch model specifies that, for data to be useful for the construction of measurement, they must be collected and organized so that they can stochastically approximate:

a. a single invariant conjoint order of item and person parameters,
i.e. unidimensionality.
b. item and person parameter separability,
i.e. sample-free item calibration and test-free person measurement,
i.e. sufficient statistics.
c. local independence of the observations,
i.e. independence among the residual differences between the observed and estimated data.

Analysis of the fit of data to these specifications is the statistical device by which data are evaluated for their measurement potential - for their measurement validity. Only a model which implements well-defined intentions through its definitive "specifications" can show us which data can serve our purposes and contribute to knowledge and which data cannot.
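One common way such fit analysis is operationalized (a sketch added for illustration; the article itself prescribes no particular statistic) is through standardized residuals: each observation is compared with its model expectation, and an unweighted ("outfit") mean-square near 1.0 indicates data consistent with the specifications, while large values flag responses suggesting local dependence or multidimensionality. Person measures and item calibrations are assumed to have been estimated already; the data below are invented.

import math

def rasch_prob(theta, b):
    # Rasch probability of a correct response: person at theta, item at b
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def outfit_mean_squares(responses, measures, calibrations):
    # responses[n][i] is the 0/1 response of person n to item i;
    # measures and calibrations are estimated parameters in logits.
    # Returns the unweighted ("outfit") mean-square for each item.
    fits = []
    for i, b in enumerate(calibrations):
        z_squared = []
        for theta, row in zip(measures, responses):
            p = rasch_prob(theta, b)
            z = (row[i] - p) / math.sqrt(p * (1.0 - p))   # standardized residual
            z_squared.append(z * z)
        fits.append(sum(z_squared) / len(z_squared))
    return fits

# Tiny worked example: 3 persons, 2 items, invented data
print(outfit_mean_squares(
    responses=[[1, 0], [1, 1], [0, 0]],
    measures=[0.5, 1.5, -1.0],
    calibrations=[-0.5, 0.5]))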



Scores, Reliabilities and Assumptions, B Wright … Rasch Measurement Transactions, 1991, 5:3 p. 157-158