Standard Errors: Means, Measures, Origins and Anchor Values

Statistics textbooks explain the "standard error of the mean", but are generally silent about the "standard error of a measure". How do they relate?

The standard error is the modeled standard deviation of the observed estimate around the unobservable "true" value. In practice, the observed estimate substitutes for the "true" value, and we think of the standard error as centered on the observed estimate.

Both the observed estimate and its standard error are computed from the data. Each data point gives us an estimate of the mean or the measure, and the accumulation of the estimates provides the final best estimate along with its precision, its standard error. Thus:

Accumulation of estimates (one per observation) => mean parameter estimate ± S.E. of estimate

For a typical "textbook" normal distribution, the parameter of interest is the mean, which is the sum of all perfectly-precise observations divided by their count. Its standard error is the sample standard deviation of the observations divided by the square-root of the count.
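As a sketch, the textbook computation can be written out directly (the observation values here are invented purely for illustration):

```python
import math

# Illustrative observations, assumed perfectly precise.
observations = [3.1, 2.8, 3.4, 3.0, 2.9, 3.2]

n = len(observations)
mean = sum(observations) / n

# Sample standard deviation (n - 1 in the denominator).
sample_sd = math.sqrt(sum((x - mean) ** 2 for x in observations) / (n - 1))

# Standard error of the mean: sample S.D. divided by square-root of the count.
se_mean = sample_sd / math.sqrt(n)

print(f"mean = {mean:.3f} +/- {se_mean:.3f}")
```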

A Rasch measure has parallels to a sample mean. Conceptually, each qualitative observation ("Right", "Wrong", etc.) provides an estimate of the relevant measure, so

Accumulation of estimates (one per observation) => measure estimate ± S.E. of estimate

Implementing this directly is awkward. It is more convenient to rearrange the computation:

Estimate of (accumulation of observations) => measure estimate ± S.E. of estimate

Here, the statistical model variance is summed across the observations, and the standard error is then the square-root of the inverse of the summed variance. For example, consider 1000 reasonably targeted observations of a dichotomous item. Experience shows that a reasonable p-value for such an item is .8, so the binomial variance per observation = p-value*(1 - p-value) = .8*.2 = .16, and the summed variance of 1000 observations = 1000 * .16 = 160. Standard error of the logit estimate = 1 / square-root(variance) = 1 / square-root(160) = .08 logits. The ease of this type of computation is one reason the Rasch model is formulated in logits, rather than in log10 units, probits, etc.
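The worked numbers above can be verified in a few lines (using the text's p-value of .8 and 1000 observations):

```python
import math

# Worked example from the text: 1000 dichotomous observations, p-value .8
n_obs = 1000
p = 0.8

binomial_variance = p * (1 - p)              # .8 * .2 = .16 per observation
summed_variance = n_obs * binomial_variance  # 1000 * .16 = 160

# Standard error of the logit estimate = 1 / square-root of summed variance
se_logits = 1 / math.sqrt(summed_variance)

print(f"SE = {se_logits:.3f} logits")        # about .08 logits, as in the text
```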

Local Origins and Standard Errors

The standard error of the mean is usually computed in an absolute frame of reference in which the zero point is defined external to the data. Rasch measures are defined relative to a local zero point. How does this impact standard error computations?

In the same way as the zero point on a temperature scale is an arbitrary point, chosen according to some definition, e.g., "the freezing point of water", the zero point (local origin) of a Rasch measurement scale is an arbitrary point on the latent variable, defined in some manner. Typical choices are "the average difficulty measure of all items", "the difficulty of a specific item" or "the average ability measure of all respondents".

In general, the Rasch local origin is considered to be the absolute location on the latent variable with which the empirically-derived location happens to coincide. Thus the measures and standard errors are considered to be in an absolute frame of reference.

Standard Error of an Average

Imagine we measure the lengths of three pieces of wood: 1 m with precision 2 mm, 3 m with precision 3 mm, and 5 m with precision 3 mm. If we sum the lengths (putting the pieces of wood end-to-end), then: total = 1+3+5 = 9 m with precision = sqrt( 2*2 + 3*3 + 3*3 ) = sqrt(22) mm

Now we want the "average length" = 9/3 = 3 m with precision = sqrt(22/3) mm. So if we put the three "average lengths" end-to-end, we reconstruct the total length and its precision.

So, for measures Mi with precision (standard error) SEi, where i = 1, ..., L:
Average = sum(Mi)/L = M (where M = 0 for the local origin)
Precision = sqrt( sum(SEi*SEi)/L ) = root-mean-square error (RMSE) of the measures.

However, when comparing measures across parallel analyses, shifts in the locations of local origins might be crucial. Accordingly the standard error of the empirical zero could be included. This suggests that the most stable possible choice of local origin be made to minimize the need for this computation. In general, if the mean of the item difficulties is chosen, and the same set of items is administered a second time, then the standard error of the mean-item "origin" is the average standard error (root-mean-square-error, RMSE) of the items. Typically, this would be much smaller than the standard error of a person measure.

So the joint standard error of the difference between two measures across test forms comprising the same items would approximate:

SE(measure1 - measure2) = square-root( SE(measure1)² + SE(origin1)² + SE(measure2)² + SE(origin2)² )
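As a numerical sketch of this formula (all four standard errors below are hypothetical, with the origin terms much smaller than the measure terms, as the text suggests):

```python
import math

# Hypothetical values: SEs of two person measures, and SEs of the two
# local origins (the RMSE of the common items in each analysis).
se_measure1, se_origin1 = 0.30, 0.05
se_measure2, se_origin2 = 0.32, 0.06

# Joint standard error of the difference across the two test forms.
se_diff = math.sqrt(se_measure1 ** 2 + se_origin1 ** 2
                    + se_measure2 ** 2 + se_origin2 ** 2)

print(f"SE(measure1 - measure2) = {se_diff:.3f} logits")
```

Note that with origin SEs this small, omitting them changes the joint standard error only slightly, which is why a stable choice of local origin minimizes the need for this computation.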

Anchor Values and Standard Errors

An anchored (fixed) measure is treated as though it is an estimate of the "true" value of the parameter, so it is reported along with the standard error around the "true" value. If the corresponding local empirical value is also computed, it can be compared with the anchor value, using its standard error, in order to test the hypothesis that the data were generated by the true (anchor) value.
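One common way to carry out such a test is a unit-normal (z) statistic comparing the empirical value with the anchor; the values below are hypothetical:

```python
# Hypothetical values, in logits.
anchor_value = 1.00     # fixed (anchored) difficulty, treated as "true"
empirical_value = 1.25  # locally estimated difficulty from the data
se_empirical = 0.10     # standard error of the empirical estimate

# Unit-normal test statistic: does the empirical value differ from the anchor?
z = (empirical_value - anchor_value) / se_empirical

print(f"z = {z:.2f}")   # |z| much greater than 2 casts doubt on the anchor value
```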

John Michael Linacre

Standard Errors: Means, Measures, Origins and Anchor Values. Linacre J.M. … Rasch Measurement Transactions, 2005, 19:3 p. 1030
