Repeated Measure Designs (Time Series) and Rasch

"Subjects in repeated-measures designs provide more than one score" (Introduction to Analysis of Variance, J. R. Turner, Sage, 2001). This motivates well-intentioned statisticians to proclaim that "since the repeated measures cannot be independent, the Rasch model is not appropriate." But this statement is paradoxical.

If the analysis of repeated measures is based on raw scores, then the analysis is treating the raw scores as though they represent all the relevant information in the underlying observations. In other words, the raw scores are the "sufficient statistics" for the underlying observations. Further, if a raw score is to represent one quantitative ability, then it must not matter which pattern of observations on a given set of items generated that raw score. If we apply these considerations to a set of responses, we discover that they require the raw scores to fit the Rasch model. See "Rasch Model from Raw Scores as Sufficient Statistics", RMT 3:2, p. 62.
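The link between raw-score sufficiency and the Rasch model can be sketched for dichotomous responses (a standard factorization, stated here informally):

```latex
P(x_1,\dots,x_L \mid \theta)
  = \prod_{i=1}^{L} \frac{e^{x_i(\theta - b_i)}}{1 + e^{\theta - b_i}}
  = \frac{e^{r\theta}\, e^{-\sum_i x_i b_i}}{\prod_{i=1}^{L}\bigl(1 + e^{\theta - b_i}\bigr)},
  \qquad r = \sum_{i=1}^{L} x_i .
```

The ability θ enters the likelihood only through the raw score r, so r is sufficient for θ. Conversely, requiring the raw score to be a sufficient statistic for ability (under mild regularity conditions) essentially forces the response model to take the Rasch form.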

If reviewers doubt that the Rasch model is applicable, they are also doubting that the raw score is an accurate summary of the observations.

This suggests that the statisticians have their logic backwards. If we have raw scores from a repeated-measures design, we need to submit their underlying observations to Rasch analysis in order to discover whether the raw scores are locally independent enough that they can form the basis for valid statistical analysis.

What if the Rasch analysis does indicate that the raw scores are defective? One approach is to select a subset of the observations that do fit the Rasch model. Then the dependency among the repeated measures will not distort the Rasch measures of the subjects.

For instance, select at random the observations for one time-point for each subject. Use this subset of observations to generate definitive item measures and definitive rating-scale structures (Rasch-Andrich thresholds). Then perform an analysis of the entire dataset with the items and thresholds anchored (fixed) at their definitive values.

Overly predictable data, appearing as overfit to a Rasch model, are rarely a problem: the redundant observations add little new information, but the measures still correspond to the data. Underfit (noisy misfit) to the Rasch model is a problem, because then the measures do not accurately correspond to the data.

If the time-point data are dependent, then:
1a) choose one time-point as definitive, or
1b) select at random across time points, so that each person is in the selection only once.
2) Rasch-analyze the data from 1a) or 1b), then output the item difficulties and Rasch-Andrich thresholds.
3) Anchor (fix) the item difficulties and Rasch-Andrich thresholds at their values from 2) and analyze all the data. The anchored values prevent the dependency from distorting the estimated measures.
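The three steps above can be sketched in Python for dichotomous data. Everything here is an illustrative assumption: the data are simulated, and the estimator is a toy gradient-ascent (JMLE-style) routine, not a substitute for dedicated software such as Winsteps with item anchoring.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical repeated-measures data: the same persons answer the same
# dichotomous items at several time points, so responses are dependent
# across time. Sizes and parameters are illustrative assumptions.
n_persons, n_times, n_items = 50, 3, 10
abilities = rng.normal(0.0, 1.0, n_persons)
difficulties = np.linspace(-1.5, 1.5, n_items)

def simulate(abil, diff):
    """Draw dichotomous Rasch responses for given abilities/difficulties."""
    p = 1.0 / (1.0 + np.exp(-(abil[:, None] - diff[None, :])))
    return (rng.random(p.shape) < p).astype(int)

# Abilities drift upward over time; each time point reuses the same persons.
data = [simulate(abilities + 0.3 * t, difficulties) for t in range(n_times)]

# Step 1b): select one time point at random per person, so each person
# appears in the calibration subset only once.
chosen = rng.integers(0, n_times, n_persons)
subset = np.array([data[chosen[p]][p] for p in range(n_persons)])

# Step 2): estimate definitive item difficulties from that subset.
def estimate_items(X, iters=500, lr=0.5):
    theta = np.zeros(X.shape[0])  # person measures
    b = np.zeros(X.shape[1])      # item difficulties
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
        theta = np.clip(theta + lr * (X - p).sum(axis=1) / X.shape[1], -6, 6)
        b -= lr * (X - p).sum(axis=0) / X.shape[0]
        b -= b.mean()  # identify the scale by centering item difficulties
    return b

anchors = estimate_items(subset)

# Step 3): measure every person at every time point with the item
# difficulties anchored, so the cross-time dependency cannot distort them.
def measure_person(x, b, iters=200, lr=0.5):
    th = 0.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(th - b)))
        th = float(np.clip(th + lr * (x.sum() - p.sum()) / len(b), -6, 6))
    return th

measures = [[measure_person(data[t][p], anchors) for p in range(n_persons)]
            for t in range(n_times)]
```

Because the item difficulties are held fixed at their subset values in step 3, the repeated persons in the full dataset can no longer pull the calibrations; only the person measures at each time point are re-estimated.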

(Suggested by Tsair-Wei Chien, Chi-Mei Medical Center, Tainan, Taiwan.)

See also:
1. Rasch Analysis of Repeated Measures
2. Rack and Stack: Time 1 vs. Time 2: Repeated Measures

Repeated Measure Designs (Time Series) and Rasch … T.-W. Chien, Rasch Measurement Transactions, 2008, 22:3, 1171
