KR-20 / Cronbach Alpha or Rasch Person Reliability: Which Tells the "Truth"?

"The reliability of any set of measurements is logically defined as the proportion of their variance that is true variance... We think of the total variance of a set of measures as being made up of two kinds of variance: true variance and error variance... The true measure is assumed to be the genuine value of whatever is being measured." (Guilford, 1965, p. 488). So,

Reliability = true variance / observed variance

and, when measures and errors are uncorrelated,

Observed variance = true variance + error variance

Thus "reliability" is not an index of quality ("Is this a good measure of ...?"), but of relative reproducibility ("How repeatable is this measure?"). The most popular estimator of raw-score reliability is the Kuder-Richardson 20, a special case of Cronbach's Alpha:

KR-20 = (L / (L-1)) × (1 - Σ piqi / σ²t)

where L is the length of the test, σ²t is the observed variance of test raw scores across subjects, pi is the proportion of subjects who succeeded on item i, and qi = 1 - pi is the proportion who failed. Σpiqi estimates a binomial error variance. KR-20 is thus an index of the repeatability of raw scores, which are often misinterpreted as linear measures.
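As a concrete illustration, here is a minimal Python sketch (not part of the original article) that computes KR-20 from a persons-by-items matrix of 0/1 responses; the function and variable names are illustrative:

    import numpy as np

    def kr20(X):
        # X: persons-by-items array of 0/1 responses
        L = X.shape[1]                  # test length
        raw = X.sum(axis=1)             # raw score for each subject
        var_t = raw.var(ddof=1)         # observed raw-score variance
        p = X.mean(axis=0)              # proportion succeeding on each item
        q = 1.0 - p                     # proportion failing
        return (L / (L - 1)) * (1.0 - (p * q).sum() / var_t)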

For Rasch reliability, the focal question is again "Does this test produce repeatable measures for this sample?", but now the observed variance is the variance of the Rasch measures. Each observation is modeled to include error:

Xni = Pni + εni,  where the variance of εni is Pni(1-Pni)

Xni = 0,1 is the response by subject n to item i. Pni is the probability that subject n succeeds on item i. Pni(1-Pni) is the binomial variance of an observation like Xni. The error variance of subject n's Rasch measure is estimated from the sum of the modeled variances of that subject's observations: S.E.²(Bn) = 1 / Σi Pni(1-Pni). This "model" error variance requires the data to conform stochastically to the Rasch model. Since there is always additional noise in the data, a more "real" error variance is:

"Real" error variance = model variance * MAX(1.0, INFIT mean-square)

Rasch reliability = (observed measure variance - "real" error variance) / observed measure variance
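A minimal Python sketch of these two formulas, assuming each subject's measure, model S.E., and INFIT mean-square have already been estimated by a Rasch analysis program (names are illustrative):

    import numpy as np

    def rasch_real_reliability(measures, model_se, infit_ms):
        obs_var = measures.var(ddof=1)          # observed measure variance
        # "real" error variance: model variance inflated by INFIT when > 1.0
        real_err = (model_se**2 * np.maximum(1.0, infit_ms)).mean()
        return (obs_var - real_err) / obs_var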

How does test targeting affect reliability? A test of 50 dichotomous items, with difficulties uniformly distributed from -2 to +2 logits, was simulated for hypothetical samples of 1,000 subjects with abilities distributed N(0, 1). The sample was initially targeted on the test, then mistargeted one-half logit farther from the test in five successive steps. Each simulation was repeated three times and the mean results are reported.
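A sketch of how such data can be generated under the dichotomous Rasch model (the original study's exact procedure may differ; the seed and names are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    items = np.linspace(-2.0, 2.0, 50)          # 50 items, -2 to +2 logits

    def simulate(offset, n=1000):
        # abilities ~ N(offset, 1); responses ~ Bernoulli(Pni)
        b = rng.normal(offset, 1.0, size=n)
        p = 1.0 / (1.0 + np.exp(-(b[:, None] - items[None, :])))
        return (rng.random((n, items.size)) < p).astype(int)

    for offset in np.arange(0.0, 3.0, 0.5):     # targeting offsets in half-logit steps
        X = simulate(offset)                    # then estimate KR-20, Rasch reliability, ...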

For each targeting, KR-20 and the "true" raw-score S.D. were computed:

"True" score S.D. = observed S.D. × √(KR-20)

In Figure 1, the raw-score S.D.s (plotted against the right-hand Y-axis) are highly offset-dependent, but approximate the values predicted from the generating measures over the entire range.

For Rasch reliabilities, the adjusted S.D. estimates the generating ("true") S.D.: Adj. S.D. = observed measure S.D. × √(reliability). Zero and perfect scores were replaced by scores 0.5 score points more central, and the corresponding measures and standard errors imputed. In Figure 1, the success of the recovery of the generating measures from the simulated data is shown by the Adjusted S.D. curve. The recovery is reasonably accurate for offsets in the range 0 to 3 logits.

Rasch reliabilities can also be computed directly from the generating measures and item difficulties, without data. These generator-based reliabilities are the maximum possible. Figure 2 plots all three reliability coefficients against the target offsets. As offset increases, the proportion of extreme scores increases and all reliabilities decrease. Rasch data-based reliabilities are less than the generator-based reliabilities because (i) measures are estimated from discrete (not continuous) raw scores; (ii) measures for extreme scores are biased towards the test center as targeting becomes more offset; and (iii) Rasch S.E.s are INFIT-inflated (a minor effect). Thus Rasch data-based reliability understates measure reliability, assuring the test-user that the test has performed at least as well as its Rasch "real" reliability reports.
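A sketch of such a data-free calculation in Python, using the test information of the dichotomous Rasch model (an illustrative reconstruction, not the article's own code): each generating ability implies a model error variance 1/Σi Pni(1-Pni), and reliability follows as true variance over (true + error) variance.

    import numpy as np

    def generator_reliability(abilities, difficulties):
        # Pni for every generating ability and item difficulty
        p = 1.0 / (1.0 + np.exp(-(abilities[:, None] - difficulties[None, :])))
        info = (p * (1.0 - p)).sum(axis=1)      # test information per subject
        err_var = 1.0 / info                    # model error variance per subject
        true_var = abilities.var(ddof=1)        # variance of generating measures
        return true_var / (true_var + err_var.mean())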

KR-20 (Cronbach Alpha) always exceeds the maximum reliability possible for the measures underlying these simulated data. This misleads the test-user into believing a test has better measurement characteristics than it actually has. Yet KR-20 has met its design criteria, because estimated raw-score "true" S.D.s in Figure 1 match their predicted values. It reports the reliability of raw scores accurately, but these are local, test-dependent rankings. KR-20 overstates the reliability of the test-independent, generalizable measures the test is intended to imply. For inference beyond the test, Rasch reliability is more conservative and less misleading.

There is much more in RMT 13:2, "Relating Cronbach and Rasch Reliabilities".

John M. Linacre

Guilford, J.P. (1965). Fundamental Statistics in Psychology and Education. New York: McGraw-Hill.

KR-20 / Cronbach Alpha or Rasch Person Reliability: Which Tells the "Truth"? Linacre J.M. Rasch Measurement Transactions, 1997, 11:3, p. 580-581.
