A rating scale is an aid to disciplined dialogue. Its precisely defined format focuses the conversation between the respondent and the questionnaire on the relevant areas. All respondents are invited to communicate in the shared language of the specified option choices (Low 1988).
Ambiguity and uncertainty, however, remain. First, some respondents may not use the rating scale as it was intended to be used. Choosing socially acceptable responses, or falling into a response set, defeats the purpose of the questionnaire. Second, respondents can only interpret a rating scale in terms of their own understandings of the category labels. A lack of clear, shared category definitions invites ambiguity and idiosyncratic category use. Different interpretations lead to inconsistent patterns of use.
Traditional statistical analysis, however, treats all rating scale observations as though they were precise and accurate communications. Researchers seldom provide for differences in perspective among respondents. These differences cannot be overlooked if our objective is the pursuit of useful knowledge and sound decision-making. We must recognize the various ways in which rating scale categories might be used and identify those which enable the maximum extraction of meaning. While this involves choice on the part of the analyst, "selective emphasis, choice, is inevitable whenever reflection occurs" (Dewey 1925). Because there can be no knowledge without choice, it becomes the responsibility of the analyst to develop criteria by which those choices can be made.
"Meanings do not come into being without language and language implies two selves in a conjoint or shared understanding" (Dewey 1925). Some level of ambiguity is unavoidable because language can never be exact. Nevertheless, shared meaning cannot be extracted from individual responses unless analysis can identify a common, cooperative mode of communication among all parties concerned.
Rating scale analysis must take the perspective that while a rating scale offers respondents a common language, a tool for "categorizing, ordering and representing the world" (Halliday 1969), it does not by itself make for meaningful communication. Since "meaning is located neither in the text nor in the reader but in their interaction" (Bloome & Green 1984), we must include a step concerned with discovering, rather than asserting, meaning as we conduct our statistical analyses. Just as readers "must choose between competing interpretations of text" (Bloome & Green 1984), so must the analyst choose between different interpretations of the rating scale in order to find a coherent, shared representation of what is investigated.
A rating scale, like any other tool, "is defined by how it is used" (Halliday 1969). A focus of our analysis must be how the rating scale is actually used by respondents. We must discover which transformation of the initial rating scale categorization extracts the "maximum amount of useful [shared] meaning from the responses observed" (Wright & Linacre 1992).
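One family of such transformations is the combining of adjacent categories. As a minimal illustrative sketch (not the authors' actual procedure), the recoding below merges rarely used extreme categories of a five-point scale with their neighbours, so that the resulting category frequencies can be compared with the original ones:

```python
# Sketch: comparing category usage before and after collapsing a rating scale.
# The data and the collapsing map are hypothetical, for illustration only.
from collections import Counter

responses = [1, 2, 2, 3, 3, 3, 4, 5, 5, 3, 2, 4, 4, 3, 1]

def recode(responses, mapping):
    """Apply a category-collapsing map to a list of responses."""
    return [mapping[r] for r in responses]

# Merge the sparse extreme categories (1 with 2, and 5 with 4),
# turning the five-point scale into a three-point scale.
collapse = {1: 1, 2: 1, 3: 2, 4: 3, 5: 3}

print(Counter(responses))                    # frequencies on the original scale
print(Counter(recode(responses, collapse)))  # frequencies after collapsing
```

In practice the choice among candidate recodings would rest on analytic criteria such as fit and category ordering, not on frequencies alone.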
As shared meaning develops, we establish criteria so that we do not ignore the individual, but rather provide a scoring medium through which the dissenting individual's voice may be heard more clearly. We set the stage so that individuals who do not subscribe to our construction of shared meaning can stand out and be noticed. By establishing an explicit commonality among most respondents, we enable the meaning which stems from an individual's unique interaction with an item or a group of items to emerge.
The constructive analysis of rating scale data can promote both general dialogue with the group and specific dialogue with the individual.
Bloome D, Green G (1984) Directions in the socio-linguistic theory of reading. In PD Pearson (Ed.), Handbook of Reading Research (pp. 395-421). White Plains, NY: Longman.

Dewey J (1925) Experience and nature. Republished in JA Boydston (Ed.), John Dewey: The Later Works, 1925-1953, Vol. 1, 1981. Carbondale, IL: Southern Illinois University Press.

Halliday M (1969) Relevant models of language. Educational Review, 22, 1-128.

Low GD (1988) The semantics of questionnaire rating scales. Evaluation and Research in Education, 2(2), 69-70.

Wright BD, Linacre JM (1992) Combining and splitting categories. Rasch Measurement Transactions, 6:3, 233.
Rating scales and shared meaning. Lopez WA. Rasch Measurement Transactions, 1995, 9:2 p.434
The URL of this page is www.rasch.org/rmt/rmt92k.htm