"Underlying the concept of achievement testing is the notion of a continuum of knowledge acquisition." Glaser, 1963
"The idea of a measure requires an idea of a variable on which the measure is located... Our intention is to show how calibrated items can be used to define a variable." Wright & Stone, Best Test Design, 1979
Practitioners of modern measurement have recognized the hazards of using untransformed raw scores in every area except one: the setting of performance standards. There, the employment of antiquated models, such as Angoff (1971), seems to be the rule rather than the exception.
Objective standard setting (Wright & Grosse, RMT 7:3, p. 315-316, 1993) has demonstrated its psychometric effectiveness at providing testing bodies with reasonable ways to develop stable standards that also produce acceptable examinee pass rates. The traditional models, like Angoff, have not produced useful results without a variety of "adjustments" which alter and corrupt whatever standard does emerge. The differences between the Objective and Angoff-style approaches are more fundamental than pass-rate stability, however. Indeed, the meaning, and even the existence, of the construct upon which the standard is based differ considerably between the two. [Another attempt at objective standard setting is the IRT-based Bookmark standard-setting procedure of Lewis, Mitzel and Green (1996).]
The Angoff method attempts to quantify only one point on a construct by asking panels of judges to define "minimal competence". This "quantification" is generated solely from predictions of examinee success and is expressed as a proportion of correct responses on the entire test (e.g., a raw score of 100 out of 150). Such simple, untransformed proportions are useless for the construction of meaning, however, because no variable is defined.
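In its usual form, the Angoff computation amounts to nothing more than summing judges' probability estimates, which is precisely why it defines no variable. The sketch below (Python, with entirely hypothetical judge ratings) shows how little structure the method imposes.

    # Hypothetical Angoff ratings: judge_ratings[j][i] is judge j's estimate of the
    # probability that a minimally competent examinee answers item i correctly.
    judge_ratings = [
        [0.70, 0.55, 0.80, 0.40, 0.90],   # judge 1
        [0.65, 0.60, 0.75, 0.50, 0.85],   # judge 2
        [0.75, 0.50, 0.85, 0.45, 0.95],   # judge 3
    ]

    # Each judge's cut score is the sum of his or her item ratings (an expected raw
    # score); the panel's standard is simply the average of those sums.
    judge_cuts = [sum(ratings) for ratings in judge_ratings]
    angoff_cut = sum(judge_cuts) / len(judge_cuts)

    print("Judge cut scores:", [round(c, 2) for c in judge_cuts])
    print("Panel Angoff cut score (raw): %.2f out of %d"
          % (angoff_cut, len(judge_ratings[0])))

The result is a raw-score fraction on one particular test form; it says nothing about where on a variable the standard lies.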
The Objective model asks judges to define required knowledge directly through item selection. Wright and Stone (1979) demonstrate that item calibrations define the variable and quantify the standard. The Figure illustrates a judge-defined standard from a recent objective standard-setting exercise conducted at the National Certification Corporation. The items are placed at their empirical logit difficulties. Inspection of their content discloses the stratification indicated. The judges then located the "Standard" at a defensible transition point between basic and advanced items. The Figure demonstrates how the Objective model allows for a clear and meaningful description of the standard. Such a description requires the adequate construction of a variable. The construct itself quantifies the qualitative understanding.
The vagaries of the Angoff method are replaced in the Objective approach with clear definitions, descriptions and quantifications. Whereas Angoff may begin with content, that content ends up atomized into hundreds of contentless score fractions. Only an Objective approach retains the full richness of content understanding throughout the process, synthesizing it into a useful definition of the meaning of the standard.
------------------------------------------------
 Item                          Item    Item
 Descriptors                   Logit   Map
------------------------------------------------
                               7.00    x
                                       x
 Items in this range:                  x
                                       xxx
   Advanced Physiology                 x
   New Medical Advances                xxx
   Drug Dosage Calculations    6.0     xxxx
   Psycho-Social Questions             xxxxx
                                       xxxx
------------------------------------------------
                                       xxxxxxxxxx
 STANDARD                              xxxxxxxxx
                                       xxxxxxx
 Items in this range:          5.0     xxxxxxx
                                       xxxxxx
   Gen Anatomy/Physiology              xx
   Intake/Evaluation                   xxxx
   Routine Patient Care                xxxx
   Drug Usage                          x
   Diagnostic Tools            4.0     x
------------------------------------------------
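Because the standard is located on the same logit scale as the item calibrations, it has a direct operational consequence: the Rasch model converts the logit cut point into an expected raw score on any calibrated form. The sketch below (Python) illustrates that conversion; the item difficulties and the 5.5-logit cut are invented values loosely patterned on the Figure, not the National Certification Corporation's actual calibrations.

    import math

    # Invented item difficulties (logits), loosely echoing the Figure: routine items
    # around 4-5 logits, advanced items around 6-7 logits.
    item_difficulties = [4.2, 4.5, 4.8, 5.0, 5.1, 5.3, 6.1, 6.4, 6.8, 7.0]

    # Judge-located standard on the same logit scale (illustrative value).
    standard_logit = 5.5

    def expected_raw_score(ability, difficulties):
        # Expected number-correct score for a person at 'ability' under the Rasch
        # model: sum over items of P(correct) = exp(ability - b) / (1 + exp(ability - b)).
        return sum(1.0 / (1.0 + math.exp(-(ability - b))) for b in difficulties)

    cut = expected_raw_score(standard_logit, item_difficulties)
    print("Logit standard %.1f -> expected raw cut score %.1f out of %d"
          % (standard_logit, cut, len(item_difficulties)))

Because the cut point lives on the variable rather than on a particular test form, the same logit standard can be re-expressed as a raw cut score on any future form built from calibrated items, which is one source of the stability noted above.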
Angoff, W.H. (1971). Scales, norms and equivalent scores. Chapter 15 in R.L. Thorndike (Ed.), Educational Measurement, 2nd Ed. Washington, DC: American Council on Education.
Glaser, R. (1963). Instructional technology and the measurement of learning. American Psychologist, 18, 519-521.
Lewis, D.M., Mitzel, H.C., & Green, D.R. (1996). Standard setting: A Bookmark approach. In D.R. Green (Chair), IRT-Based Standard-Setting Procedures Utilizing Behavioral Anchoring. Symposium presented at the Council of Chief State School Officers National Conference on Large Scale Assessment, Phoenix, AZ.
Standard setting methods. Stone GE. Rasch Measurement Transactions, 1995, 9:3 p.452