Reporting Candidate Performance on Computer Adaptive Tests: IPARM

What can be reported to a CAT examinee? Raw scores are meaningless. Percentiles provide no substantive guidance on strengths and weaknesses. Even measures have little value without a context.

Instructionally useful diagnostic information can, however, be provided through a variation of KIDMAP output (Wright, Mead, and Ludlow, 1980), implemented in the IPARM program (Smith, 1991). IPARM performs item and person analysis at a higher level of detail than calibration programs. Its "IPARMM" option creates detailed maps of examinee performance for fixed-length or computer-adaptive tests.

Abbreviated versions of two person maps are shown. The top of each map summarizes the person's test performance: raw score, number of items attempted, logit ability, and standard error of measurement. Conventionally, the center of the scale, 0.0, represents the average item difficulty for the bank of test items. In practice, logits are usually rescaled into positive integers for published reports.
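Such a rescaling is just a linear transformation of the logit measures. A minimal sketch, assuming one common convention (mean 500, 100 units per logit) that the article does not specify:

```python
# Hypothetical constants: the article does not state the reporting scale,
# so a mean of 500 and 100 units per logit are assumed for illustration.
def rescale(logit: float, mean: float = 500.0, units_per_logit: float = 100.0) -> int:
    """Convert a logit measure to a positive-integer reporting scale."""
    return round(mean + units_per_logit * logit)

print(rescale(-1.2))  # -1.2 logits -> 380
print(rescale(0.0))   # bank-average difficulty -> 500
```

Any choice of mean and spacing works, provided the same constants are used for every examinee and item in the bank.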

The center section of each map shows item-level performance. The items answered correctly are shown in the top half, those answered incorrectly in the bottom half. Items are identified by five-character item names supplied by the user. Here, item names are textbook chapter numbers followed by objective numbers within chapters.

The band that separates the correct responses from the incorrect ones marks the person's ability estimate with "®", flanked by a ±1 standard-error band marked with "-".

The line of symbols beneath the ability estimate indicates the expected success rate for this examinee. The probabilities of correctly answering an item in the indicated ranges are: 80%-100%: "///"; 65%-80%: "<<<"; 35%-65%: "==="; 20%-35%: ">>>"; 0%-20%: "\\\". These symbols provide a frame of reference for the person's mastery of the material and for the consistency (fit) of the person's performance.
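Under the Rasch model these expected success rates follow directly from the difference between person ability and item difficulty. A sketch of the band assignment (the cut-points are from the article; the function names are my own):

```python
import math

def p_correct(ability: float, difficulty: float) -> float:
    """Rasch model: probability of a correct response (both in logits)."""
    return 1.0 / (1.0 + math.exp(difficulty - ability))

def band_symbol(p: float) -> str:
    """Map an expected success rate to the IPARM band symbols."""
    if p >= 0.80:
        return "///"
    if p >= 0.65:
        return "<<<"
    if p >= 0.35:
        return "==="
    if p >= 0.20:
        return ">>>"
    return "\\\\\\"   # three backslashes: the 0%-20% band
```

For example, an item 1 logit below the person's ability has an expected success rate of about 73%, so it falls in the "<<<" band.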

The histogram below the item section shows the performance of all persons taking the examination. Each "*" represents one person. This provides a normative interpretation of the measure.

In the first example, Person 1 has a low score, falling in the lower quartile of class performance. The CAT covered a wide range of item difficulties, from -4.0 to +2.0 logits. There were no surprising responses. The map of wrong responses shows which objectives this person missed and forms a guide for further study.

In the second example, Person 38 has a much higher ability estimate, third in the class. This test also covered a wide range of item difficulties, -3.0 to +3.0 logits. Here there are three unexpected incorrect responses to items that should be easy for this person: Person 38 had more than an 80 percent chance of answering items 02-33, 02-39, and 02-40 correctly. It seems that even this able examinee would benefit from further study of parts of Chapter 2!
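A misfit flag of this kind can be computed directly: a wrong answer is "surprising" when the model-expected success rate exceeds 80%. A sketch with hypothetical numbers, since the article does not give the actual calibrations for Person 38 or these items:

```python
import math

def p_correct(ability: float, difficulty: float) -> float:
    """Rasch model: probability of a correct response (both in logits)."""
    return 1.0 / (1.0 + math.exp(difficulty - ability))

# Hypothetical values chosen only for illustration.
ability = 1.8                                             # Person 38's estimate (assumed)
wrong_items = {"02-33": 0.2, "02-39": 0.0, "02-40": 0.3}  # item difficulties (assumed)

# Flag incorrect responses where the expected success rate exceeded 80%.
surprising = [name for name, d in wrong_items.items()
              if p_correct(ability, d) > 0.80]
print(surprising)  # -> ['02-33', '02-39', '02-40']
```

With these assumed values, all three wrong answers are flagged, matching the pattern described in the text.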

Richard M. Smith (1994) Reporting candidate performance on computer-adaptive tests: IPARM. Rasch Measurement Transactions, 8:1, 344-5.

Smith RM (1991) IPARM Computer Program. Chicago: MESA Press.

Wright BD, Mead RJ, Ludlow LH (1980) KIDMAP: Person-by-Item Interaction Mapping. MESA Memorandum #29. Chicago: MESA Press.

The URL of this page is www.rasch.org/rmt/rmt81m.htm