Applying The Rasch Rating-Scale Model
To Set Multiple Cut-Offs

Setting multiple cut-offs, where score points are required to mark the transitions from one ability level to the next, is a challenge in educational contexts. One such context is the Common European Framework of Reference for Languages (CEF). The CEF is a six-point proficiency scale with descriptors for each band in the form of 'can-do statements'. The levels of the CEF are A1, A2, B1, B2, C1 and C2, with A1 the lowest level and C2 the highest. Linking tests to the CEF, and validating the claims of such links, is a problem that European language testers are wrestling with. This paper suggests a methodology for linking tests to the CEF or to any similar proficiency scale.

Procedures
The material was a reading comprehension test comprising 75 items. Twenty raters familiar with the CEF proficiency scale and its descriptors were asked to rate the 75 items, indicating the minimum ability level on the six-point CEF scale a student would need in order to answer each item correctly. The items were rated from 1 to 6, with 1 indicating the lowest CEF level and 6 the highest. The items were then calibrated on the basis of these ratings using Andrich's (1978) rating scale model. The next step was to calibrate the items on the basis of actual student performances. The following table shows the descriptive statistics for the item measures from the two analyses.
Table 1. Item measure summary statistics.

Statistic              Rater-based analysis   Student-based analysis
N                      75                     75
Mean                   .26                    -.00
Median                 .16                    -.05
Std. Deviation         2.72                   1.80
Range                  10.40                  8.42
Minimum                -5.97                  -3.28
Maximum                4.43                   5.14
Reference difficulty   0.00                   item mean

Figure 1. Cross-plot of item measures from rater-based and student-based analyses.

The cross-plot of the item calibrations from the two analyses is shown in Figure 1. The two sets of item calibrations, i.e., those based on the raters and those based on the students' performances, correlated at 0.80. There are a few conspicuous outliers, and there may be two trendlines, one for the upper half of the plot and one for the lower, but the overall pattern is clear. The slope of an empirical joint "best fit" line (drawn through the point of the two means, and the point one standard deviation above each mean) is 0.66, i.e., 1.80/2.72, the ratio of the student-based to the rater-based S.D. The difference between the mean item measures is 0.26 logits. The person measures were therefore converted into the rater frame-of-reference by means of the equating formula:

M2 = (M1 - Mean(1)) * SD(2)/SD(1) + Mean(2)

where frame 1 is the student-based analysis and frame 2 is the rater-based analysis, so that

Adjusted measure = (measure - .00)/0.66 + 0.26
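
As a minimal sketch (not from the original article), this mean-and-sigma transformation can be written as a short Python function; the function name is invented here, and the defaults are the summary values from Table 1:

    def equate_to_rater_frame(measure,
                              student_mean=0.00, student_sd=1.80,
                              rater_mean=0.26, rater_sd=2.72):
        """Mean-and-sigma equating: map a logit measure from the
        student-based frame into the rater-based frame-of-reference."""
        return (measure - student_mean) * (rater_sd / student_sd) + rater_mean

    # Check against Table 2: the unequated person mean (.07) and S.D. (1.34)
    # should land near the "intercept & slope" column values (.37 and 2.03).
    print(round(equate_to_rater_frame(0.07), 2))   # -> 0.37
    print(round(1.34 * (2.72 / 1.80), 2))          # -> 2.02 (2.03 with the slope rounded to 0.66)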

Student analysis equated with the rater analysis
When the person measures are equated for both the intercept and the slope of the trendline, they are mapped into the framework of the rater-based analysis. Table 2 shows the descriptive statistics for the 160 person measures in three different modes: (1) unequated, (2) equated with the rater-based analysis by correction for intercept only, and (3) equated with the rater-based analysis by correction for both intercept and slope.

Table 2. Descriptive statistics for 160 person measures in three different modes.

Statistic        Unequated   Equated for intercept   Equated for intercept & slope
N                160         160                     160
Mean             .07         .33                     .37
Median           .26         .52                     .65
Mode             -1.51       -1.25                   -2.02
Std. Deviation   1.34        1.34                    2.03
Range            5.78        5.78                    8.73
Minimum          -3.28       -3.02                   -4.70
Maximum          2.50        2.76                    4.04

Setting cut-points
The cut-off scores are set by the half-score-point thresholds of the reference item, located at zero logits in the rater-based analysis. Since the person measures have been brought into the framework of the rater-based analysis, these half-score-point thresholds apply directly to the equated person measures. The expected score ICC for the reference item is shown in Figure 2, with the half-score-point intervals marked on the latent variable and the locations of the six proficiency levels indicated by their codes, A1, etc. A sketch of how these thresholds can be located numerically follows the figure.

Figure 2: Expected score ICC: means.
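
The article does not report the Rasch-Andrich thresholds of the rating scale, so the sketch below uses invented tau values; it only illustrates the mechanics: compute the expected-score curve of the reference item under Andrich's rating scale model, then locate each half-score-point threshold by bisection.

    import math

    # Hypothetical Rasch-Andrich thresholds (taus) for the six rating categories;
    # the article does not report them, so these values are only illustrative.
    TAUS = [-3.0, -1.5, -0.5, 1.0, 4.0]
    DELTA = 0.0   # the reference item sits at zero logits in the rater analysis

    def category_probs(theta, delta=DELTA, taus=TAUS):
        """Andrich rating-scale model probabilities for categories 1..6."""
        log_num = [0.0]                      # log-numerator for category 1
        for tau in taus:                     # accumulate for categories 2..6
            log_num.append(log_num[-1] + (theta - delta - tau))
        denom = sum(math.exp(v) for v in log_num)
        return [math.exp(v) / denom for v in log_num]

    def expected_score(theta):
        """Expected rating (1..6) at ability theta: the expected score ICC."""
        return sum((k + 1) * p for k, p in enumerate(category_probs(theta)))

    def half_point_threshold(target, lo=-10.0, hi=10.0, tol=1e-6):
        """Ability at which the expected score crosses `target` (e.g. 1.5).
        Bisection works because the expected score is monotonic in theta."""
        while hi - lo > tol:
            mid = (lo + hi) / 2.0
            if expected_score(mid) < target:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2.0

    # Cut-offs between adjacent CEF levels lie at expected scores 1.5 .. 5.5
    for k, name in enumerate(["A1/A2", "A2/B1", "B1/B2", "B2/C1", "C1/C2"]):
        print(f"{name} cut-off at {half_point_threshold(k + 1.5):+.2f} logits")

With the tau estimates from the actual rater-based calibration in place of the placeholders, the printed values would be the logit cut-offs at which the level transitions occur.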

Cross validation
To check the accuracy of the link, a small sample of students at different locations along the ability scale can be selected. It is best to select some students whose ability measures on the test (after equating with the rater-based analysis) fall well within the bands, and some whose measures fall very close to the transition points. The group of expert raters who rated the items can then interview these students and rate them on the proficiency scale on which they rated the items. Agreement between the raters' judgments of where the students fall on the proficiency scale and the students' measures, which empirically place them at certain levels on the scale, confirms the equating. Disagreements can be examined in case they indicate the need for slight adjustments to the criterion-level thresholds. A toy agreement check is sketched below.
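
As an illustration of this check (all numbers invented for the example), the comparison reduces to classifying each equated measure into a band and tallying agreement with the raters' interview judgments:

    # Hypothetical cut-offs (logits) between A1/A2, A2/B1, B1/B2, B2/C1, C1/C2
    CUTS = [-4.0, -2.0, -0.3, 1.2, 3.5]
    LEVELS = ["A1", "A2", "B1", "B2", "C1", "C2"]

    def level_from_measure(measure):
        """CEF level implied by an equated person measure."""
        for cut, level in zip(CUTS, LEVELS):
            if measure < cut:
                return level
        return LEVELS[-1]

    # (equated measure, level judged by the expert raters in interview)
    students = [(-2.5, "A2"), (-0.5, "B1"), (0.8, "B2"),
                (1.25, "B2"),   # borderline case just above the B2/C1 cut-off
                (3.9, "C2")]
    hits = sum(level_from_measure(m) == judged for m, judged in students)
    print(f"exact agreement: {hits}/{len(students)}")   # -> 4/5

Borderline cases such as the fourth student above are exactly the ones worth examining for possible adjustments to the criterion-level thresholds.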

Purya Baghaei


Applying The Rasch Rating-Scale Model To Set Multiple Cut-Offs, Baghaei, P. … Rasch Measurement Transactions, 2007, 20:4 p. 1075-6
