What kind of feedback is helpful to failing examinees after they take a computer-adaptive test (CAT)?
In certification and licensure, successful examinees usually receive only notification that they passed. Unsuccessful examinees, however, need feedback to guide them in their studies. Typical feedback from paper-and-pencil examinations includes the examinee's score and the passing score, or an indication of how many more correct responses the examinee needed to pass. Such feedback is meaningless in CAT, where item targeting ensures that everyone answers approximately the same percentage of items correctly.
A straightforward approach is to report the examinee's estimated ability measure and the difficulty calibration of the passing standard. But this is too abstract, because examinees do not know what falling short by .05 units means in terms of additional study. Norm-referenced standardizations of ability estimates are even less helpful, because they differ for each test administration.
A domain-referenced index may be the solution. Each examinee's ability estimate can always be converted into a percent-correct score on the entire item pool. The same can be done with the passing standard. Moreover, when carefully constructed, the CAT item pool defines the domain, so a percent-correct on the item pool is equivalent to percent-success on the domain. A CAT report can state that the examinee's performance level corresponds to success on XX% of the domain, with the passing level corresponding to YY% success. Since the Examination Board makes public a general definition of the domain, such a CAT report could indicate to the examinee how much study is needed. Care would still be required to ensure that examinees do not confuse percent-correct on their own tests with percent-success on the domain.
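The conversion described above can be sketched in a few lines. Under the dichotomous Rasch model, the probability of success on an item of difficulty b for a person of ability theta (both in logits) is 1/(1 + exp(b - theta)); averaging these probabilities over the whole calibrated pool yields the expected percent-correct on the domain. The pool difficulties, ability estimate, and passing standard below are illustrative values, not taken from any actual examination:

```python
import math

def expected_percent_correct(theta, item_difficulties):
    """Expected percent-correct on a pool of dichotomous Rasch items.

    For each item of difficulty b, the Rasch model gives
    P(correct) = 1 / (1 + exp(b - theta)).
    Averaging these probabilities over the pool converts a logit
    ability estimate into a domain-referenced percent.
    """
    probs = [1.0 / (1.0 + math.exp(b - theta)) for b in item_difficulties]
    return 100.0 * sum(probs) / len(probs)

# Hypothetical item difficulties (logits) defining the domain
pool = [-2.0, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0]

examinee_theta = 0.3  # examinee's estimated ability (logits)
passing_theta = 0.8   # difficulty calibration of the passing standard (logits)

print(f"Examinee success level: {expected_percent_correct(examinee_theta, pool):.0f}% of domain")
print(f"Passing success level:  {expected_percent_correct(passing_theta, pool):.0f}% of domain")
```

The same function applied to the passing standard's calibration gives the YY% figure, so the report can show both numbers on the same domain-referenced scale.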
What do you think of this approach? Have you a better alternative? Please tell me your thoughts, experiences or suggestions on feedback to CAT examinees:
CAT: What feedback?, E Julian Rasch Measurement Transactions, 1993, 6:4 p. 246