Test validity, the extent to which a test measures what it is intended to measure, is critical. Although many researchers review content, construct, and statistical aspects of validity, even conscientious researchers usually take for granted that respondents understood the tasks they were asked to perform and then performed them in a coherent way.
Although rating scales and response formats are the media of communication with respondents, researchers routinely ignore "communication validity". Did the rating scale categories perform as intended? Did respondents converse with the test developer in a common language, free of idiosyncratic category usage, response sets, and ambiguous terminology? Were respondents able to distinguish the response levels of each rating scale? How did they order the levels? It is pointless to examine any other form of validity until we have established that we have listened carefully to what test respondents have told us about our variable.
We want our respondents to manifest a clear definition of the variable and to be located at distinct positions along it. Their use of the rating categories is crucial: we need respondents to provide an unambiguous hierarchical ordering of our categories, and their response behavior may not concur with our original presentation of those categories.
Rasch analysis provides a statistical method for ascertaining and verifying respondents' perceptions of the ordering of category meanings (RMT 9:3 450-451, 9:4 464-465). Categories labeled "Don't know", "No opinion", and "Does not apply" are prime candidates for misplacement in the category hierarchy. Such category labels provoke irrelevant and evasive responses. Usually they do not belong in the hierarchy at all. It is often better not to use them or, when used, to treat their selection as missing data.
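A minimal sketch of this data-handling step, assuming hypothetical response codes (plain Python; the codes and data are invented for illustration, not taken from the article): selections of a "Don't know"-type category are recoded to missing before any Rasch analysis is attempted.

```python
# Sketch: treat "Don't know" / "Does not apply" selections as missing data
# rather than as a scored category. The response codes (1-4 substantive,
# 9 = "Don't know") are hypothetical, not from the article's instrument.

DONT_KNOW_CODES = {9}          # codes to treat as missing
responses = [
    [3, 4, 9, 2],              # one row per respondent, one column per item
    [1, 9, 2, 2],
]

cleaned = [[None if r in DONT_KNOW_CODES else r for r in row]
           for row in responses]

print(cleaned)
# [[3, 4, None, 2], [1, None, 2, 2]]
```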
Each category is intended to increase the discrimination of the rating scale and so to increase the information in all responses. Confronting respondents with too many response alternatives, however, muddles them. Respondents rarely make stable discriminations among more than 6 levels; sometimes 2 or 4 levels are all they can negotiate. Excess categories introduce more noise than information by forcing respondents to make their fine choices idiosyncratically, such as by a preference for even or odd numbering.
Responses to excess categories can be combined with those of adjacent categories in a "collapsing" process. When we collapse adjacent categories, we construct new categorizations. Rasch analysis provides the opportunity to study how well these new categories function. The optimal categorization is that which
a) provides the best construct definition,
b) best separates respondents along the variable,
c) produces the best fit of data to model.
These criteria usually cooperate to identify an optimal scoring solution.
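A minimal sketch of the collapsing mechanics (plain Python; the response matrix and recoding templates are invented for illustration, not the article's data): each candidate collapsing is written as a template giving the new code for each original category, using the same notation as the Figures described below. Each recoded matrix would then be submitted to a Rasch rating-scale analysis and judged on criteria a)-c).

```python
# Sketch of category collapsing. A scheme like "1123" means: original
# categories 1 and 2 are merged into new category 1, original 3 becomes 2,
# and original 4 becomes 3. The response matrix below is invented; real
# data would be the responses to the 19-item teacher survey.

def collapse(responses, scheme):
    """Recode 1-based categories according to a scheme string."""
    mapping = {old + 1: int(new) for old, new in enumerate(scheme)}
    return [[mapping[r] for r in row] for row in responses]

responses = [
    [1, 2, 3, 4],
    [2, 2, 3, 3],
    [1, 3, 4, 4],
]

for scheme in ("1234", "1123", "1222"):
    recoded = collapse(responses, scheme)
    # Each recoded matrix would be refit with a Rasch rating-scale model and
    # compared on construct definition, person separation, and data-model fit.
    print(scheme, recoded)
```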
The Figures summarize different categorizations of the responses of teachers to 19 items about reading instruction. The printed rating scale was:
No Emphasis . . . . . Major Emphasis
1 . . . . 2 . . . . 3 . . . . 4
This scale suffered from the common flaw of unlabelled (and hence not clearly defined) intermediate categories.
The Figures show the statistical implications of different collapsings. "1234" means the categories are assigned their printed ordering. "1222" means that original category "1" is retained as "1", while original categories "2", "3", and "4" are collapsed into the single category "2". The statistics are almost unanimous in declaring that collapsing categories "1" and "2" provides the most informative categorization. Thus, our respondents tell us that they can discriminate only three levels of emphasis in this context. The most valid communication with our respondents is therefore not our printed scale of four theoretical categories, but their experiential scale of three empirical categories. It is on the basis of their scale that investigation of the other forms of validity is best pursued.
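Continuing the earlier sketch (reusing its collapse() function and invented response matrix), the preferred recoding can be applied directly: scheme "1123" merges printed categories "1" and "2" and leaves three observed levels.

```python
# Usage of the collapse() sketch above with the categorization the article
# prefers: "1123" merges printed categories 1 and 2, leaving three levels.
preferred = collapse(responses, "1123")
levels = sorted({r for row in preferred for r in row})
print(levels)   # [1, 2, 3] -- three empirical levels
```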
Communication validity and rating scales. Lopez WA. Rasch Measurement Transactions, 1996, 10:1 p.482