Comparisons Require Stability

Between last year's test and this year's test, or the pre-test and the post-test, or any pair of tests, there have always been changes. Examinees have changed. Item difficulties have drifted or been modified by instructional effects. The raters have changed, even if minutely. The empirical definitions of the rating scale categories have altered. How can changes across time in examinee performance be investigated when everything else is simultaneously in flux?

Comparisons require a stable frame of reference. In order to compare performance across time, all other changes across time must be eliminated or controlled. There are several methods:

a) Assertion of constancy.

Most test items retain approximately stable difficulties over the testing period. For these items, the pre-test, post-test and pre-post joint calibrations are statistically stable. Their calibrations may be anchored (fixed) at whichever set of calibrations makes the most sense to the test consumers. Constancy may also be expressed in terms of groups of items or raters (group means) or demographic groups of examinees, for which individual fluctuation, but no overall shift, is asserted.
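The anchoring step can be sketched in code. This is a minimal illustration, not any particular Rasch program's procedure: the item names, the calibration values, and the 0.5-logit drift threshold are all hypothetical. Items whose pre-test and post-test calibrations agree within the threshold are anchored at the pre-test values; the rest are left free to be re-estimated.

```python
# Sketch: flag items whose pre/post calibrations are statistically stable,
# then anchor (fix) them at the pre-test values. All names, values, and
# the drift threshold below are hypothetical.
pre_cal  = {"item1": -1.2, "item2": 0.3, "item3": 1.0, "item4": -0.4}
post_cal = {"item1": -1.1, "item2": 0.4, "item3": 0.1, "item4": -0.5}

DRIFT_LIMIT = 0.5  # logits; items drifting more than this are left unanchored

def build_anchors(pre, post, limit=DRIFT_LIMIT):
    """Return {item: anchored difficulty} for items with stable calibrations."""
    return {item: pre[item]                  # anchor at the pre-test value
            for item in pre
            if abs(pre[item] - post[item]) <= limit}

anchors = build_anchors(pre_cal, post_cal)
# item3 drifted 0.9 logits, so it is left free; the rest are anchored
print(sorted(anchors))   # ['item1', 'item2', 'item4']
```

Anchoring at the pre-test values is only one choice; as the text says, the anchors may be fixed at whichever set of calibrations makes the most sense to the test consumers.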

b) Assertion of difference.

Some items change difficulty noticeably from pre-test to post-test. Each of these items can be asserted to be acting like two different items, a pre-test item and a post-test item. The data set can be reformatted so that each of these items is split into two items. Responses to the pre-test item are missing for the post-test. Responses to the post-test item are missing for the pre-test.
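The reformatting described above can be sketched as follows. The records and item names are hypothetical; `None` stands for a missing response, and the split item becomes two items, one observed only on the pre-test and one observed only on the post-test.

```python
# Sketch: split a drifting item into a pre-test item and a post-test item.
# Each record is a {item: score} dict; None marks a missing response.
# Item names and scores are hypothetical.
def split_item(record, item, occasion):
    """Replace `item` with `item_pre`/`item_post`; the other half is missing."""
    out = dict(record)
    score = out.pop(item)
    out[item + "_pre"]  = score if occasion == "pre"  else None
    out[item + "_post"] = score if occasion == "post" else None
    return out

pre_record  = {"item1": 1, "item3": 0}
post_record = {"item1": 1, "item3": 1}

print(split_item(pre_record,  "item3", "pre"))
# {'item1': 1, 'item3_pre': 0, 'item3_post': None}
print(split_item(post_record, "item3", "post"))
# {'item1': 1, 'item3_pre': None, 'item3_post': 1}
```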

c) Assertion of compromise.

Rating scales often present a challenge. Scale structure often changes from pre-test to post-test. Indeed, on the pre-test, the highest categories of the scale may not be observed, while on the post-test, the lowest categories may be missing. It is impossible to conceptualize, compare and communicate measures based on two different rating scales, the pre-test version and the post-test version. Consequently, however much the scale structure may have changed, measures must be based on a compromise set of rating scale step calibrations obtained from a joint analysis of the combined pre-test and post-test data.
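The motivation for the joint analysis can be illustrated numerically. The ratings below are hypothetical, and the final log-frequency-ratio step estimate is only a rough first approximation, not the calibration any particular program would report: the point is that only the combined data observe every category, so only the combined data can support one common set of step calibrations.

```python
# Sketch: only a joint (pre + post) analysis observes every rating scale
# category. Ratings are hypothetical, on a 0-3 scale.
from collections import Counter
import math

pre_ratings  = [0, 0, 1, 1, 2, 2, 1]   # top category (3) never observed
post_ratings = [1, 2, 2, 3, 3, 3, 2]   # bottom category (0) never observed

pre_counts   = Counter(pre_ratings)
post_counts  = Counter(post_ratings)
joint_counts = pre_counts + post_counts

print(sorted(pre_counts))    # [0, 1, 2]      -- category 3 missing
print(sorted(post_counts))   # [1, 2, 3]      -- category 0 missing
print(sorted(joint_counts))  # [0, 1, 2, 3]   -- all categories present

# Rough first approximation to the compromise step calibrations:
# log-ratio of adjacent category frequencies in the combined data.
steps = {k: math.log(joint_counts[k - 1] / joint_counts[k])
         for k in range(1, 4)}
```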

d) Assertion of meaning.

To an examinee unfamiliar with the Greek alphabet, all Greek words are exceedingly difficult. After learning the Greek alphabet, however, some words become easy, while others remain difficult. If the purpose of the test is to measure improvement in Greek reading comprehension, then the post-test item difficulties (which differentiate between easy and hard words) are more useful than the pre-test difficulties (in which the distinction between easy and hard words is muted). On the other hand, if instruction in, say, safety procedures is intended to give examinees complete mastery of all tested material, then all items on the post-test will be very easy for most examinees. Then it will be the set of pre-test calibrations that distinguishes between new and familiar material.

In practice, there will be different, but equally-reasonable, assertions for establishing a stable frame of reference. There will also be different sets of assertions for examining performance improvement and for examining item difficulty drift. The criteria for choosing the definitive assertions are meaningfulness and ease of communication to the user of the results.

Benjamin D. Wright

Wright B.D. (1996) Comparisons require stability. Rasch Measurement Transactions 10:2 p. 506.
