Setting multiple cut-offs in educational contexts, where score points are required to indicate transitions from one ability level to the next, is a challenging problem. One such context is the Common European Framework of Reference for Languages (CEF). The CEF is a six-point proficiency scale with descriptors for each band in the form of 'can-do statements'. The levels on the CEF are A1, A2, B1, B2, C1 and C2, with A1 the lowest level and C2 the highest. Linking tests to the CEF, and validating the claims made for such links, is an important issue that European language testers are wrestling with. This paper suggests a methodology for linking tests to the CEF or to any similar proficiency scale.
Procedures
The material was a reading comprehension test comprising 75 items. A group of 20 raters familiar with the CEF proficiency scale and its descriptors was asked to rate the 75 items, indicating the minimum ability level on the six-point CEF proficiency scale a student should exhibit to get each item right. The items were rated from 1 to 6, with 1 indicating the lowest CEF level and 6 the highest. The items were then calibrated on the basis of these ratings using Andrich's (1978) rating scale model. The next step was to calibrate the items on the basis of actual student performances. Table 1 shows the descriptive statistics for the item measures from the two analyses.
Table 1. Item measure summary.

| Item measure summary statistics | Rater-based analysis | Student-based analysis |
|---|---|---|
| N | 75 | 75 |
| Mean | .26 | -.00 |
| Median | .16 | -.05 |
| Std. Deviation | 2.72 | 1.80 |
| Range | 10.40 | 8.42 |
| Minimum | -5.97 | -3.28 |
| Maximum | 4.43 | 5.14 |
| Reference difficulty | 0.00 | item mean |
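As an aside for readers who want a feel for the student-based calibration step, here is a minimal PROX-style sketch in Python. It is not the model used in this study (the analyses above use Andrich's rating scale model), and the response matrix below is simulated; the log-odds approximation merely illustrates how 0/1 responses are turned into centred logit difficulties.

```python
import numpy as np

# PROX-style sketch: place items on a logit scale from 0/1 responses.
# Simulated data; the study used 160 students and 75 items, and the
# rater data were analysed with Andrich's rating scale model instead.
def prox_difficulties(responses):
    """responses: persons-by-items 0/1 matrix -> centred logit difficulties."""
    p = responses.mean(axis=0)      # proportion correct for each item
    d = np.log((1.0 - p) / p)       # log-odds of failure: harder -> higher logit
    return d - d.mean()             # centre at the item mean, as in Table 1

rng = np.random.default_rng(1)
demo = (rng.random((160, 75)) < 0.6).astype(int)   # fake response matrix
print(np.round(prox_difficulties(demo)[:5], 2))
```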
Figure 1. Cross-plot of item measures from the rater-based and student-based analyses.
The cross-plot of the item calibrations from the two analyses is shown in Figure 1. The two sets of item calibrations, i.e., those based on the raters and those based on the students' performances, correlate at 0.80. There are a few conspicuous outliers, and there may be two trend lines, one for the upper half of the plot and one for the lower, but the overall pattern is clear. The slope of an empirical joint "best fit" line (through the two means, and through the two means + 1 S.D.) is 0.66, i.e., the ratio of the student-based to the rater-based standard deviations (1.80/2.72). The difference between the mean item measures is 0.26 logits. The person measures were therefore converted into the rater frame of reference by means of the equating formula:
M2 = (M1 - Mean1) * SD2/SD1 + Mean2

i.e., Adjusted measure = (measure - 0.00) / 0.66 + 0.26

(dividing by the slope 0.66 is the same as multiplying by SD2/SD1 = 2.72/1.80).
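In code, the conversion is a single linear map. The following Python sketch uses the item-measure means and standard deviations from Table 1; applied to the unequated minimum, mean and maximum person measures, it reproduces the "Intercept & Slope" column of Table 2 below.

```python
import numpy as np

# Frames of reference defined by the item-measure summaries in Table 1.
STUDENT_MEAN, STUDENT_SD = -0.00, 1.80   # frame 1: student-based analysis
RATER_MEAN, RATER_SD = 0.26, 2.72        # frame 2: rater-based analysis

def to_rater_frame(m1):
    """M2 = (M1 - Mean1) * SD2/SD1 + Mean2; note 2.72/1.80 = 1/0.66."""
    return (m1 - STUDENT_MEAN) * (RATER_SD / STUDENT_SD) + RATER_MEAN

# Unequated min, mean and max person measures from Table 2 ...
print(np.round(to_rater_frame(np.array([-3.28, 0.07, 2.50])), 2))
# ... -> [-4.70, 0.37, 4.04], matching the last column of Table 2.
```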
Student analysis equated with rater analysis
When the person measures are equated for both the
intercept and the slope of the trendline, they are mapped
into the framework of the rater-based analysis. Table 2
shows the descriptive statistics for the 160 person
measures in three different modes: (1) unequated, (2)
equated with the rater-based analysis by correction for
intercept only, and (3) equated with the rater-based
analysis by correction for both intercept and slope.
Table 2. Descriptive statistics for 160 persons in three different modes.

| | Person measures, unequated | Person measures, equated for intercept | Person measures, equated for intercept & slope |
|---|---|---|---|
| N | 160 | 160 | 160 |
| Mean | .07 | .33 | .37 |
| Median | .26 | .52 | .65 |
| Mode | -1.51 | -1.25 | -2.02 |
| Std. Deviation | 1.34 | 1.34 | 2.03 |
| Range | 5.78 | 5.78 | 8.73 |
| Minimum | -3.28 | -3.02 | -4.70 |
| Maximum | 2.50 | 2.76 | 4.04 |
Setting cut-points
The cut-off scores are set at the half-score-point thresholds of the reference item, located at zero logits in the rater analysis. Since the person measures have been brought into the framework of the rater-based analysis, these half-score-point thresholds apply directly to the equated person measures. The expected score ICC for the reference item is shown in Figure 2. The half-score-point intervals are indicated on the latent variable, and the locations of the six proficiency levels are marked by their codes, A1, etc.
Figure 2. Expected score ICC: means.
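A sketch of how such cut-points can be located: under the rating scale model, the expected score on the reference item (difficulty 0 logits) increases monotonically with ability, so each half-score point can be found by bisection on the expected score ICC. The thresholds below are hypothetical placeholders, not the values estimated in this study; ratings 1-6 are coded as scores 0-5.

```python
import numpy as np

# Hypothetical Andrich thresholds for six categories (five thresholds);
# the real values would come from the rating-scale analysis of the raters.
TAU = np.array([-3.0, -1.5, 0.0, 1.5, 3.0])

def expected_score(theta, b=0.0, tau=TAU):
    """Expected score (0-5) on a rating scale item of difficulty b."""
    log_num = np.concatenate(([0.0], np.cumsum(theta - b - tau)))
    p = np.exp(log_num - log_num.max())   # category probabilities ...
    p /= p.sum()                          # ... via a stable softmax
    return float(np.dot(np.arange(len(p)), p))

def cut_point(half_score, lo=-8.0, hi=8.0, tol=1e-6):
    """Bisection: the theta at which the expected score crosses half_score."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if expected_score(mid) < half_score else (lo, mid)
    return 0.5 * (lo + hi)

# Five cut-offs (expected scores 0.5 ... 4.5) separating the six CEF levels.
cuts = [round(cut_point(k + 0.5), 2) for k in range(5)]
print(cuts)
```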
Cross validation
In order to check the accuracy of the link, a small sample of students at different locations along the ability scale can be selected. It is best to select both students whose ability measures on the test (after equating with the rater-based analysis) fall well within the middle of the bands and students who fall very close to the transition points. The group of expert raters who rated the items can then interview these students and rate them on the same proficiency scale on which they rated the items. Agreement between the raters' judgments of where the students fall on the proficiency scale and the students' measures, which empirically place them at certain levels on the scale, confirms the equating. Disagreements can be examined in case they indicate a need for slight adjustments to the criterion-level thresholds.
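For the bookkeeping of such a check, the comparison reduces to agreement between the level implied by each student's equated measure and the level the raters assign in interview. The cut-offs, measures and judgments below are invented purely for illustration.

```python
import numpy as np

LEVELS = ["A1", "A2", "B1", "B2", "C1", "C2"]

def level_from_measure(measure, cuts):
    """Classify an equated measure into a CEF band using the cut-offs."""
    return LEVELS[int(np.searchsorted(cuts, measure))]

cuts = [-4.0, -2.0, 0.0, 2.0, 4.0]        # placeholder cut-offs (logits)
measures = [-2.5, -0.3, 1.8, 3.1]         # equated measures, rater frame
rater_levels = ["A2", "B1", "C1", "C1"]   # invented interview judgments

test_levels = [level_from_measure(m, cuts) for m in measures]
agreement = np.mean([t == r for t, r in zip(test_levels, rater_levels)])
print(test_levels, f"exact agreement = {agreement:.0%}")
# The one disagreement (measure 1.8 vs the 2.0 cut-off) is exactly the kind
# of borderline case worth examining when reviewing the criterion thresholds.
```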
Purya Baghaei
Applying the Rasch Rating-Scale Model to Set Multiple Cut-Offs. Baghaei, P. Rasch Measurement Transactions, 2007, 20:4, p. 1075-6.