Standard Setting Methods

"Underlying the concept of achievement testing is the notion of a continuum of knowledge acquisition." Glaser, 1963

"The idea of a measure requires an idea of a variable on which the measure is located... Our intention is to show how calibrated items can be used to define a variable." Wright & Stone, Best Test Design, 1979

Practitioners of modern measurement have recognized the hazards of using untransformed raw scores in every area except one: the setting of performance standards. There, the employment of antiquated models, such as Angoff (1971), seems to be the rule rather than the exception.

Objective standard setting (Wright & Grosse, RMT 7:3 p. 315-6, 1993) has demonstrated its psychometric effectiveness at providing testing bodies with reasonable ways to develop stable standards that also produce acceptable examinee pass rates. The traditional models, such as Angoff, have not produced useful results without a variety of "adjustments" which alter and corrupt whatever standard does emerge. The differences between the Objective and Angoff-style approaches are more fundamental than pass-rate stability, however. Indeed, the meaning, and even the existence, of the construct upon which the standard is based differ considerably. [Another attempt at objective standard setting is the IRT-based Bookmark standard-setting procedure of Lewis, Mitzel and Green (1996).]

Angoff attempts to quantify only one point on a construct by asking panels of judges to define "minimal competence". This "quantification" is generated solely from predictions of examinee success and is expressed as a proportion of correct responses on the entire test (e.g., a raw score of 100 out of 150). Such simple, untransformed proportions are useless for the construction of meaning, however, because no variable is defined.
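For illustration only, the arithmetic behind an Angoff cut score can be sketched as follows (the ratings, three-judge panel, and five-item test below are hypothetical): each judge estimates, item by item, the probability that a minimally competent examinee will answer correctly, and the per-judge sums are averaged into a raw-score standard.

```python
# Hypothetical sketch of the Angoff arithmetic (illustrative ratings only,
# not data from any real panel). Each judge estimates the probability that
# a minimally competent examinee answers each item correctly.
judge_ratings = [
    [0.60, 0.75, 0.50, 0.80, 0.65],  # Judge 1, five items
    [0.55, 0.70, 0.45, 0.85, 0.60],  # Judge 2
    [0.65, 0.80, 0.55, 0.75, 0.70],  # Judge 3
]

# Sum the ratings within each judge to get that judge's raw-score cut point,
# then average across judges to get the panel's standard.
judge_cuts = [sum(ratings) for ratings in judge_ratings]
angoff_cut = sum(judge_cuts) / len(judge_cuts)

print([round(c, 2) for c in judge_cuts])  # [3.3, 3.15, 3.45]
print(round(angoff_cut, 2))               # 3.3 raw-score points out of 5
```

The result is a raw-score fraction of one particular test form; nothing in the procedure defines a variable on which that standard could be located or interpreted.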

The Objective model asks judges to define required knowledge directly through item selection. Wright and Stone (1979) demonstrate that calibrated items define the variable, and here the item calibrations also quantify the standard. The Figure illustrates a judge-defined standard from a recent objective standard setting conducted at the National Certification Corporation. The items are placed at their empirical logit difficulties. Inspection of their content discloses the stratification shown. The judges then located the "Standard" at a defensible transition point between basic and advanced items. The Figure demonstrates how the Objective model allows a clear and meaningful description of the standard. Such a description requires the adequate construction of a variable. The construct itself quantifies the qualitative understanding.
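Once the judges have located the standard on the logit scale, the Rasch model carries that standard onto any set of calibrated items. Here is a minimal sketch of that step (the ten item calibrations and the 5.5-logit standard are hypothetical, loosely echoing the range in the Figure):

```python
import math

def p_correct(theta, b):
    """Rasch probability that a person at ability theta succeeds on an item of difficulty b (logits)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Hypothetical item calibrations in logits, loosely spanning the Figure's 4-7 range.
item_calibrations = [4.2, 4.5, 4.8, 5.0, 5.1, 5.3, 5.6, 5.9, 6.2, 6.6]

# Judge-defined standard: the transition point between basic and advanced items.
standard_logit = 5.5

# Expected raw score of a borderline examinee (ability equal to the standard)
# on this particular set of items.
expected_raw = sum(p_correct(standard_logit, b) for b in item_calibrations)
print(round(expected_raw, 1))  # about 5.4 of 10 items
```

Because the standard is a point on the variable rather than a raw-score fraction, the corresponding raw cut can be recomputed for any new form built from calibrated items while the meaning of the standard stays fixed.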

The vagaries of the Angoff method are replaced in the Objective approach with clear definitions, descriptions and quantifications. Although Angoff may begin with content, that content ends up atomized into hundreds of contentless score fractions. Only an Objective approach retains the full richness of content understanding throughout the process, synthesizing it into a useful definition of the meaning of the standard.

------------------------------------------------
Item     Item         Item Descriptors
Logit    Map
------------------------------------------------
7.0      x
         x            Items in this range:
         x
         xxx          Advanced Physiology
         x            New Medical Advances
         xxx          Drug Dosage Calculations
6.0      xxxx         Psycho-Social Questions
         xxxxx
         xxxx
------------------------------------------------
         xxxxxxxxxx   STANDARD
         xxxxxxxxx
         xxxxxxx      Items in this range:
5.0      xxxxxxx
         xxxxxx       Gen Anatomy/Physiology
         xx           Intake/Evaluation
         xxxx         Routine Patient Care
         xxxx         Drug Usage
         x            Diagnostic Tools
4.0      x
------------------------------------------------

Angoff, W.H. (1971). Scales, norms and equivalent scores. Chapter 15 in R.L. Thorndike (Ed.), Educational Measurement, 2nd Ed. Washington, DC: American Council on Education.

Glaser, R. (1963). Instructional technology and the measurement of learning. American Psychologist, 18, 519-521.

Lewis, D.M., Mitzel, H.C., & Green, D.R. (1996). Standard setting: A Bookmark approach. In D.R. Green (Chair), IRT-Based Standard-Setting Procedures Utilizing Behavioral Anchoring. Symposium presented at the Council of Chief State School Officers National Conference on Large Scale Assessment, Phoenix, AZ.


Standard setting methods. Stone GE. … Rasch Measurement Transactions, 1995, 9:3 p.452


