As I have worked with colleagues and students just starting to learn how and why to measure with Rasch, one topic has come up over and over. When I draw a vertical line for a test construct and label one end "Easy" and the other "Difficult", students and colleagues have no problem thinking of test items as differing in difficulty. But when I discuss a rating scale survey, there is often confusion about how an item can be "Easier to Agree With" or "Harder to Agree With". Usually I draw an analogy to a test and ask whether it makes sense that some items of a survey (e.g., a traditional Likert scale) can be easier to agree with than other items. Sometimes I think they get it, but it often becomes clear later on that the idea of a continuum for a survey remains hard for them to grasp.
What are some techniques I have used to help them?
Sometimes I will return to a plot of a vertical line for a test, but instead of labeling the end points "Easy" and "Difficult" I will label them "Less Difficult" and "More Difficult", as a reminder that we are talking about the "difficulty" of items.
[Figure: three vertical lines with end points labeled "Difficult" / "Easy", "More Difficult" / "Less Difficult", and "Less Easy" / "Easier".]
Next, I will draw another line for the same test and label the ends "Easier" and "Less Easy". Even though the phrase "Less Easy" is very awkward, these two newly labeled lines seem to help learners understand that we are talking about a variable of difficulty. I point out that the end terms "Easy" and "Difficult" are also fine, but that the issue is sometimes easier to grasp when the same sort of words describe both ends of the variable.
I then move on to a rating scale. I ask students to consider a scale with just two possible ratings: "Agree" and "Not Agree". I ask them to imagine answering a 10-item survey and to think of "Agree" as a correct answer and "Not Agree" as a wrong answer. This helps them see the link to a dichotomous test in which items are right/wrong, and that survey items can be of differing levels of "difficulty" (in this case, differing levels of "ease to agree with"!). This really seems to help them see that a survey with a rating scale can mark out a construct.
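The "Agree as a right answer" idea can also be shown with a few lines of code. Below is a minimal sketch (my addition, with invented responses, not part of the classroom activity) that scores a dichotomous survey exactly like a test: the proportion of people agreeing with an item plays the role a p-value plays for a test item, and its log-odds locates the item along the "Easier/Harder to Agree With" line.

```python
# A minimal sketch: scoring a dichotomous "Agree"/"Not Agree" survey
# exactly like a right/wrong test. The responses are invented.
import math

responses = {              # item -> 0/1 codes from eight people (Agree = 1)
    "Item A": [1, 1, 1, 1, 1, 1, 1, 0],   # almost everyone agrees
    "Item B": [1, 1, 1, 0, 1, 0, 1, 0],
    "Item C": [1, 0, 0, 0, 0, 1, 0, 0],   # few people agree
}

for item, codes in responses.items():
    p = sum(codes) / len(codes)            # proportion agreeing
    logit = math.log(p / (1 - p))          # log-odds of agreement
    # A lower logit marks an item that is "harder to agree with",
    # just as a low proportion-correct marks a harder test item.
    print(f"{item}: p = {p:.2f}, agreement logit = {logit:+.2f}")
```

Item C, endorsed by only two of eight people, lands at the "Harder to Agree With" end of the line, just as an item that few people answer correctly lands at the "Difficult" end of a test.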
I continue my work with the 10-item survey by drawing a vertical line and labeling the two end points "Easier to Agree With" and "Harder to Agree With".
[Figure: two vertical lines with end points labeled "Harder to Agree With" / "Easier to Agree With" and "Easier to Disagree With" / "Harder to Disagree With".]
Then I might ask my students what happens if the end points are instead labeled "Easier to Disagree With" and "Harder to Disagree With". Usually they are able to place the words correctly, and they see that both labelings carry the same message. This activity seems to help them understand not only that a survey can have a construct, but also that a survey can define a continuum, just as a right/wrong test can.
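The equivalence of the two labelings can be made concrete with the same kind of invented data. In this sketch (again my illustration), reverse-coding the responses so that "Disagree" rather than "Agree" is scored 1 simply flips the sign of an item's log-odds: it is the same line read in the other direction.

```python
# Sketch with invented data: the "Disagree" labeling is the "Agree"
# labeling read in the opposite direction.
import math

agree_codes = [1, 1, 1, 0, 1, 0, 1, 0]         # Agree scored 1
disagree_codes = [1 - x for x in agree_codes]  # Disagree scored 1

p_agree = sum(agree_codes) / len(agree_codes)
p_disagree = sum(disagree_codes) / len(disagree_codes)

# Equal in size, opposite in sign: an item hard to agree with
# is exactly as easy to disagree with.
print(math.log(p_agree / (1 - p_agree)))         # +0.51
print(math.log(p_disagree / (1 - p_disagree)))   # -0.51
```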
The next step that I take involves a rating scale of "Strongly Agree", "Agree", "Disagree", and "Strongly Disagree", so now the rating scale is not dichotomous. Students now seem to "get it" that they can think of this four-category scale as an "Agree" scale showing different levels of agreement. Often I will ask them first to re-label the scale with similar words; some will write something like "Strongly Agree", "Agree", "Agree Less Than Agree", "Hardly Agree At All". They understand that even though "Strongly Disagree" does not at first sound like a level of agreement, they can think of it as a very low level of agreement.
[Figure: the four-category scale laid out along a line, first labeled "Strongly Agree / Agree / Disagree / Strongly Disagree", then relabeled "Strongly Agree / Agree / Agree Less Than Agree / Hardly Agree At All".]
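For readers who like to see the bookkeeping, a small sketch (my own illustration; the integer codes are one common convention, not the only one) shows that the four labels are simply ordered amounts of agreement and can be recorded as ordered integers before any Rasch analysis:

```python
# Sketch: the four categories are ordered amounts of agreement,
# so they map onto ordered integer codes.
agreement_codes = {
    "Strongly Disagree": 0,   # "Hardly Agree At All"
    "Disagree":          1,   # "Agree Less Than Agree"
    "Agree":             2,
    "Strongly Agree":    3,
}

answers = ["Agree", "Strongly Disagree", "Strongly Agree", "Disagree"]
scores = [agreement_codes[a] for a in answers]
print(scores)        # [2, 0, 3, 1]
print(sum(scores))   # raw amount of agreement across the four answers
```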
The next step is for them to draw a line and label the end points "More Strongly Agree" and "Really Less Strongly Agree". I point out that they can think of "Really Less Strongly Agree" as "Strongly Disagree". Even though the words are awkward, this seems to work. My point is to help them understand the continuum and not be tripped up by words that at first blush might seem to involve different issues (e.g., Agree, Disagree).
[Figure: two vertical lines with end points labeled "More Strongly Agree" / "Really Less Strongly Agree" and "Really Less Strongly Disagree" / "More Strongly Disagree".]
Now the grand finale is to talk about rating scales in which a wide mix of words describes the categories. My favorite is "Always", "Often", "Sometimes", "Seldom", and "Never". In this case none of the categories are linked in meaning by a shared word appearing in each step (as in a scale of "Very Often", "Often", ... or a scale of "Very Important", "Important", ...).
In this case we also draw a vertical line, and I make use of the reasoning presented above. I point out that the scale could have been "Often" or "Not Often", and that a line could be labeled "More Often" and "Less Often", or (very awkward!) "More Sometimes" and "Less Sometimes".
[Figure: two vertical lines with end points labeled "More Always" / "Less Always" and "More Sometimes" / "Less Sometimes".]
I think that by the end of the activity I have helped them better understand that a rating scale can be expressed on a line, just as item difficulty can. The students also better understand how to think about the meaning of going up or going down the line of the continuum. That understanding is very important as they later learn how to interpret person measures and item measures.
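For completeness, one last sketch may help connect the line to the measures themselves. Assuming the standard dichotomous Rasch model (my addition, with invented values; B is a person measure and D an item measure, both in logits on the same line), moving a person up the line raises the probability of agreement with every item:

```python
# Sketch: the dichotomous Rasch model, which is what makes
# "going up the line" precise. All values are invented.
import math

def p_agree(b, d):
    """Rasch probability that a person at B endorses an item at D."""
    return math.exp(b - d) / (1 + math.exp(b - d))

items = {"easier to agree with": -1.0, "harder to agree with": +1.0}
for b in (-2.0, 0.0, 2.0):                    # moving up the line
    for label, d in items.items():
        print(f"B = {b:+.1f}, item {label}: P(agree) = {p_agree(b, d):.2f}")
```

A person low on the line rarely agrees with either item; a person high on the line usually agrees with both; and for every person, the "easier to agree with" item is the more probable one to endorse.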
William J. Boone
Miami University (Ohio)
Teaching Rasch Measurement: Students' Confusion with Differing Levels of Agreement. W.J. Boone Rasch Measurement Transactions, 2014, 27:4 p. 1445-6