CAT: Taking Items Twice?

Computer-adaptive testing (CAT) offers wonderful flexibility. Certification tests can be administered at the candidate's convenience, as often as the candidate wants, until the candidate passes. But this raises a bookkeeping problem: how are we to keep track of all the items each candidate has seen, so that no candidate is asked the same question twice? And does it matter if we ask the same question twice?

On paper-and-pencil tests, we don't want questions from earlier administrations to reappear on later ones. Reusing items could enable repeat candidates to raise their raw scores if they had taken the trouble to discover the answers to the items they had already seen.

On CAT tests, it is different. Everyone's raw score is about the same. Failing candidates see mostly easy items, i.e., those with difficulties below the pass-fail point. On these items, the failing candidate has a pre-set success rate (say 60%). Passing candidates have the same pre-set success rate, but on items that at least straddle the pass-fail point.

The concern is that a candidate might gain an unfair advantage if items from the first CAT test reappear in the second. But if a candidate sees the same items twice, the adaptive algorithm is targeting the same ability level as before, i.e., the candidate is failing again! To pass, a candidate who previously failed must answer correctly questions harder than those previously administered. Answering previously administered questions correctly will improve that candidate's measure, but it will turn failure into success only for a candidate who just barely failed last time.

The Figures show what happened on a 90-item CAT test that was readministered to failing candidates some months later. The number of items common to the two tests is shown for each candidate. Figure 1 shows the performance of the 104 candidates who failed both tests. Their measures barely improve, or even worsen. Apparently they had not made the effort even to study the items they saw the first time. Only the 5 very poorly performing candidates, who effectively took the same test over again, improve noticeably, but they remain a long way from passing.

Figure 2 shows the 74 candidates who succeeded on their second attempt. Only those already close to the pass-fail line may have benefited from seeing items a second time.

Since the CAT success rate is set at 60% and each candidate is administered 90 items, these candidates failed about 36 items the first time. There is no advantage in being readministered items passed the first time: the credit for success is identical both times. The advantage would lie in changing wrong answers to right ones. On average, the 178 candidates changed 4 wrong answers to right answers, but also 3 right answers to wrong!

Imagine that, during the second 90-item CAT test, a candidate whose ability was unchanged happened to be readministered the 36 items answered wrongly the first time. Answering all of these items correctly would raise the candidate's measure by about 0.5 logits. This could explain why the 2 candidates who received 41-50 repeat items passed. In practice, one would check for this before making the final pass-fail decision: to verify the accuracy of the candidate's measure, an ability estimate would be computed solely from the responses to non-repeated items. If this estimate is above the pass-fail criterion, the candidate is a genuine pass.



Mary Lunz, Institute for Objective Measurement

Tom O'Neill, American Society of Clinical Pathologists

CAT: Taking items twice? Lunz M.E., O'Neill T.R. Rasch Measurement Transactions, 1998, 12:3, p. 656-7.



