Evaluating a ROC Screening Test

Short, cheap screening tests are used to decide which examinees must take time-consuming, expensive tests. A good screening test allocates testing resources efficiently. A poor one wastes examinee time and lowers testing efficiency.

Colliver, Vu and Barrows (CVB), in AERA (Division I) award-winning research, evaluate the screening effectiveness of a standardized-patient (SP) examination for a medical clerkship by means of Receiver Operating Characteristic (ROC) signal-detection analysis. All of CVB's students took the 3-day examination of 18 SP cases. CVB consider using the first day as a screening test for the remaining two days. They want to minimize testing effort for clear passes, so they investigate how results differ when a pass on the first day is accepted as a pass on all three days, while a fail on the first day requires the examinee to complete the remaining two days. CVB's "true positive" success rate is the proportion of examinees passing the full examination who pass the screening test. The higher the "true positive" rate, the more resources the screening test saves. Their "false positive" failure rate is the proportion of examinees failing the full examination who pass the screening test. The lower the "false positive" rate, the fewer unqualified students are passed.
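
For concreteness, here is a minimal sketch of how these two rates are computed from paired pass-fail results. The data and the function name are illustrative assumptions, not CVB's:

    # Sketch with hypothetical data (not CVB's): computing the "true positive"
    # and "false positive" rates of a screening test.
    # screen_pass[i] is True if examinee i passes the screening test;
    # full_pass[i] is True if examinee i passes the full examination.
    def screening_rates(screen_pass, full_pass):
        full_passers = [s for s, f in zip(screen_pass, full_pass) if f]
        full_failers = [s for s, f in zip(screen_pass, full_pass) if not f]
        true_positive = sum(full_passers) / len(full_passers)    # screen-passes among full-test passers
        false_positive = sum(full_failers) / len(full_failers)   # screen-passes among full-test failers
        return true_positive, false_positive

    # Made-up results for 8 examinees:
    screen = [True, True, False, True, False, True, False, True]
    full   = [True, True, True,  True, False, False, False, False]
    tp, fp = screening_rates(screen, full)
    print(f"true positive rate = {tp:.0%}, false positive rate = {fp:.0%}")
    # -> true positive rate = 75%, false positive rate = 50%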

Empirical "true positive" and "false positive" rates for screening tests of 2, 4, 6, 8, and 10 standardized patients are reported by CVB in ROC format (ROC Rate % plot). Each curve describes the ROC of a screening test. The five points along each curve mark 5 pass-fail cut-points. From lower left to upper right of each curve, these cut-points range from one standard error of measurement (SEM) above the mean pass-fail level of the SP cases in the screening test to one SEM below. The center point marks a cut-point at the mean. ROC curves are interpolated by straight lines between these five points.

In principle, the closer a ROC curve lies to the top left corner, the better the screening test; since longer screening tests produce curves nearer the top left, longer tests screen better. The best cut-point for a test is a point near the top left, where the "true positive" rate is much greater than the "false positive" rate. This is usually close to the cut-point 0.5 SEM above the screening test's mean SP pass-fail level.
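
One simple way to make "much greater than" operational is to choose the cut-point maximizing the difference TP - FP (the Youden index, a standard ROC summary, not a rule CVB name). A sketch with illustrative rates, not CVB's:

    # Sketch: picking the cut-point whose "true positive" rate most exceeds
    # its "false positive" rate, i.e. maximizing TP - FP (the Youden index).
    # The (cut-point, TP, FP) triples are illustrative, not CVB's values.
    points = [
        ("+1.0 SEM", 0.60, 0.05),
        ("+0.5 SEM", 0.78, 0.15),
        (" 0.0 SEM", 0.87, 0.30),
        ("-0.5 SEM", 0.93, 0.50),
        ("-1.0 SEM", 0.98, 0.75),
    ]
    best = max(points, key=lambda p: p[1] - p[2])
    print(f"best cut-point: {best[0]} (TP - FP = {best[1] - best[2]:.2f})")
    # -> best cut-point: +0.5 SEM (TP - FP = 0.63)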

Unfortunately, ROC curves are plotted in a non-linear percentage metric. Consequently, it is difficult to "measure" distances between ROC curves or to calculate how much "nearness" costs in terms of extra SP cases administered. The ROC curves, however, can be linearized by converting each rate p to its log-odds, ln(p/(1-p)) (Log-Odds plot). Now the ROC curves are seen to be empirical manifestations of parallel straight lines that relate success on the screening test to success on the full test in a simple way. The unequal spacing of the five cut-points across tests exposes the arbitrariness of cut-points based on the SP (or item) distribution.
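
A sketch of the conversion, applied to the same illustrative rates as above: in the log-odds metric the TP and FP values differ by a roughly constant separation, which is why each test's curve straightens into a line, and the lines for different test lengths run parallel:

    # Sketch: converting percentage rates to log-odds, logit(p) = ln(p/(1-p)).
    # Rates are illustrative, not CVB's.
    import math

    def logit(p):
        return math.log(p / (1.0 - p))

    for fp, tp in [(0.05, 0.60), (0.15, 0.78), (0.30, 0.87), (0.50, 0.93), (0.75, 0.98)]:
        print(f"FP {fp:.0%} -> {logit(fp):+.2f} logits, "
              f"TP {tp:.0%} -> {logit(tp):+.2f} logits, "
              f"separation = {logit(tp) - logit(fp):+.2f}")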

The line nearest the top left still marks the "best" test, and the "best" cut-point has not changed. Nevertheless, the misleading geometry of the ROC plot is now exposed. In the log-odds metric there are many reasonable "nearness" rules: any diagonal line from top left to bottom right could serve.

The ROC rule specifies that the credit for a correct selection and the debit for an incorrect selection are equal. But this is seldom so. For the screening test that CVB describe, the penalty for a "false positive" (passing an incompetent examinee) is much greater than the benefit of a "true positive" (a shorter test administration for a competent examinee); otherwise the test would long since have been shortened. An Examination Board's preferred trade-off between debits and credits can be expressed by "best" cut-point contours on either plot. But the log-odds plot has important advantages over the ROC plot: 1) screening test lines are parallel, straight and easily extrapolated; 2) reasonable lines for screening tests of other lengths are simple to interpolate between the existing lines; 3) alternative "nearness" rules can be expressed and evaluated as simple straight lines. One such line is shown for a Board that specifies that the penalty for passing an unqualified candidate is twice the benefit of passing a qualified candidate on the screening test.
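
A sketch of such a 2-to-1 rule applied to the illustrative rates used above: each cut-point is scored as benefit x TP - penalty x FP, with penalty = 2 x benefit. The weights and rates are assumptions, but they show how the heavier penalty pulls the preferred cut-point from +0.5 SEM up to the stricter +1.0 SEM:

    # Sketch: evaluating cut-points under a Board rule in which the penalty
    # for a "false positive" is twice the benefit of a "true positive".
    # Rates are illustrative; the benefit/penalty units are arbitrary.
    BENEFIT = 1.0             # credit per qualified examinee passed by the screen
    PENALTY = 2.0 * BENEFIT   # debit per unqualified examinee passed by the screen

    points = [
        ("+1.0 SEM", 0.60, 0.05),
        ("+0.5 SEM", 0.78, 0.15),
        (" 0.0 SEM", 0.87, 0.30),
        ("-0.5 SEM", 0.93, 0.50),
        ("-1.0 SEM", 0.98, 0.75),
    ]
    for cut, tp, fp in points:
        print(f"{cut}: net = {BENEFIT * tp - PENALTY * fp:+.2f}")
    best = max(points, key=lambda p: BENEFIT * p[1] - PENALTY * p[2])
    print(f"preferred cut-point under the 2:1 rule: {best[0]}")
    # -> preferred cut-point under the 2:1 rule: +1.0 SEM

Under this utility definition, cut-points of equal net value lie along straight contours TP = 2 x FP + constant on the rate plot, which is one way to draw the Board's trade-off rule.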

John Michael Linacre

Colliver JA, Vu NV, Barrows HS. Screening test length for sequential testing with a standardized-patient examination: a Receiver Operating Characteristic (ROC) analysis. Academic Medicine, 1992, 67(9), 592-595.

Evaluating a ROC screening test. Linacre JM. Rasch Measurement Transactions 1994 7:4 p.317-8

