Teaching Rasch Measurement: Students' Confusion with Differing Levels of Agreement

As I have worked with colleagues and students who are just starting to learn how and why to measure with Rasch, one topic has come up over and over. When I draw a vertical line for a test construct and label one end of the line "Easy" and the other end "Difficult", students and colleagues have no problem thinking of test items as differing in difficulty. But when I discuss a rating scale survey, there is often confusion about how an item can be "Easier to Agree With" or "Harder to Agree With". Usually I draw an analogy to a test and ask my students and colleagues whether it makes sense to them that some items of a survey (e.g., one using a traditional Likert scale) can be easier to agree with than other items. Sometimes I think they get it, but honestly, it often becomes clear later on that this continuum for a survey is hard for them to understand.

What are some techniques I have used to help them?

Sometimes I will return to a plot of a vertical line for a test, but instead of labeling the end points "Easy" and "Difficult" I will label them "Less Difficult" and "More Difficult", as a reminder that we are talking about the "Difficulty" of items.

[Figure: three vertical lines for the same test construct, with end points labeled (top / bottom): "Difficult" / "Easy"; "More Difficult" / "Less Difficult"; "Less Easy" / "Easier".]
Next, I will draw another line for the same test and label its ends "Easier" and "Less Easy". Even though the phrase "Less Easy" is very awkward, these two newly labeled lines seem to help learners understand that we are talking about a variable of difficulty. I point out that the end terms "Easy" and "Difficult" are also fine, but that the issue is sometimes easier to grasp when the same sort of words are used to describe both ends of the variable.

I then move on to a rating scale. I ask students to consider a rating scale with just two possible ratings: "Agree" and "Not Agree". I ask my learners to imagine they are answering a 10-item survey and that they can think of "Agree" as a correct answer and "Not Agree" as a wrong answer. This seems to help them see the link to a dichotomous test in which items are right/wrong, and to see that survey items can be of different levels of "difficulty" (in this case, differing levels of "Ease to Agree With"!). This really seems to help them see that the items of a rating scale survey can lie along a construct (a small numerical sketch follows the figures below).

[Figure: a vertical line running from "More Difficult Dichotomous Test Items" (top) to "Less Difficult Dichotomous Test Items" (bottom); Q3 Not Correct and Q1 Not Correct sit near the top, Q4 Correct and Q2 Correct near the bottom.]

[Figure: the same layout for the survey, running from More "Difficult to Agree With" Dichotomized Rating Scale Items (top) to Less "Difficult to Agree With" Dichotomized Rating Scale Items (bottom); Q3 Not Agree and Q1 Not Agree sit near the top, Q4 Agree and Q2 Agree near the bottom.]

I continue my work with the 10-item survey by drawing a vertical line and labeling the two end points "Easier to Agree With" and "Harder to Agree With".

[Figure: two vertical lines for the survey construct, with end points labeled (top / bottom): "Harder to Agree With" / "Easier to Agree With"; "Easier to Disagree With" / "Harder to Disagree With".]

Then I might ask my students what would happen if the end points were instead labeled "Easier to Disagree With" and "Harder to Disagree With". Usually they are able to place the words correctly, and they see that the two labelings carry the same message. This activity seems to help them understand not only that a survey can have a construct, but also that a survey can define a continuum, just as a right/wrong test can.
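The two labelings are mirror images of one another, and a line or two of algebra makes this concrete: rescoring "Disagree" as the keyed response turns each probability into its complement and flips the sign of every measure, so the reversed line is the same line read upside down. A minimal sketch, continuing with invented logit values:

    import math

    def p_agree(theta, delta):
        return math.exp(theta - delta) / (1 + math.exp(theta - delta))

    # An item hard to agree with (delta = +1.5) is easy to disagree with:
    # P(Disagree | theta, delta) = 1 - P(Agree | theta, delta)
    #                            = P(Agree | -theta, -delta)
    theta, delta = 0.5, 1.5
    print(round(1 - p_agree(theta, delta), 2))   # 0.73
    print(round(p_agree(-theta, -delta), 2))     # 0.73, the same value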

The next step that I take involves a rating scale of "Strongly Agree", "Agree", "Disagree", and "Strongly Disagree"; now the rating scale is no longer dichotomous. Students now seem to "get it": they can think of this four-step scale as an "Agree" scale showing different levels of agreement. Often I will first ask them to re-label the scale with similar words. Some will write something like "Strongly Agree", "Agree", "Agree Less Than Agree", "Hardly Agree At All". They come to understand that even though "Strongly Disagree" does not at first sound like a level of agreement, they can think of "Strongly Disagree" as simply a very low level of agreement.

    Original scale        Re-labeled scale
    Strongly Agree        Strongly Agree
    Agree                 Agree
    Disagree              Agree Less Than Agree
    Strongly Disagree     Hardly Agree At All

The next step is for them to draw a line and label the end points "More Strongly Agree" and "Really Less Strongly Agree". I point out that they could think of "Really Less Strongly Agree" as "Strongly Disagree". Even though the words are awkward, this seems to work. My point is to help them understand the continuum and not be tripped up by words that at first blush might seem to involve different issues (e.g., Agree, Disagree). (A small model sketch follows the figure below.)

[Figure: two vertical lines, with end points labeled (top / bottom): "More Strongly Agree" / "Really Less Strongly Agree"; "Really Less Strongly Disagree" / "More Strongly Disagree".]
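For instructors who want to connect the relabeling exercise to a model, here is a minimal sketch of Andrich's rating scale model for these four categories (the threshold values are invented for illustration). The categories are simply scored 0-3, so "Strongly Disagree" really is just the lowest level of agreement:

    import math

    def rsm_probs(theta, delta, taus):
        # Andrich rating scale model: probabilities of categories
        # 0..len(taus) for person theta, item delta, thresholds taus.
        logits = [0.0]  # category 0 has an empty sum in the numerator
        total = 0.0
        for tau in taus:
            total += theta - delta - tau
            logits.append(total)
        exps = [math.exp(v) for v in logits]
        return [e / sum(exps) for e in exps]

    taus = [-1.8, 0.2, 1.6]  # invented thresholds
    labels = ["Strongly Disagree", "Disagree", "Agree", "Strongly Agree"]
    for score, (label, p) in enumerate(zip(labels, rsm_probs(0.5, 0.0, taus))):
        print(f"{score} = {label:17s} P = {p:.2f}")

For a person at 0.5 logits on an item at 0.0 logits, this prints probabilities of roughly 0.03, 0.34, 0.47, and 0.15 for the four ordered levels of agreement.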

Now the grand finale is to talk about rating scales in which a wide mix of words describes the steps. My favorite is "Always", "Often", "Sometimes", "Seldom", and "Never". In this case none of the steps appear linked in meaning through a shared word (compare a scale of "Very Often", "Often", ..., or a scale of "Very Important", "Important", ...).

In this case we also draw a vertical line, and I make use of the reasoning I have presented above. I point out that the scale could have been "Often" or "Not Often", and that a line could be labeled "More Often" and "Less Often", or (very awkwardly) "More Sometimes" and "Less Sometimes".

[Figure: two vertical lines, with end points labeled (top / bottom): "More Always" / "Less Always"; "More Sometimes" / "Less Sometimes".]
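Whatever the labels sound like, the analysis only needs them to be ordered steps of one variable. A sketch of the recoding step (the 0-4 scoring is my choice for illustration):

    # Frequency labels share no common word, but they are still ordered
    # levels of a single variable; the analysis only needs the ordering.
    scale = ["Never", "Seldom", "Sometimes", "Often", "Always"]
    score = {label: k for k, label in enumerate(scale)}  # Never=0 ... Always=4

    responses = ["Often", "Always", "Sometimes", "Never", "Seldom"]
    print([score[r] for r in responses])  # [3, 4, 2, 0, 1]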

I think that by the end of the activity I have helped them better understand that a rating scale can be expressed on a line, just as item difficulty can. The students also better understand what it means to move up or down the line of the continuum. That understanding is very important as they later learn how to interpret person measures and item measures.
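One way to close the activity (a sketch with invented measures, not the output of any particular program) is to print persons and items on the same vertical logit line, in the spirit of a Wright map, so that "going up the line" is visible for both:

    # Persons (left) and items (right) on one vertical logit line;
    # all measures are invented for illustration.
    persons = {"P1": 1.2, "P2": 0.1, "P3": -1.4}
    items = {"Q3": 1.5, "Q1": 0.8, "Q4": -0.6, "Q2": -1.2}

    marks = [(m, name, "person") for name, m in persons.items()]
    marks += [(m, name, "item") for name, m in items.items()]

    print(" harder to agree with (items) / more of the trait (persons)")
    for measure, name, kind in sorted(marks, reverse=True):
        row = f"{name:>4} |" if kind == "person" else f"     | {name}"
        print(f"{measure:+5.1f} {row}")
    print(" easier to agree with (items) / less of the trait (persons)")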

William J. Boone
Miami University (Ohio)


Teaching Rasch Measurement: Students' Confusion with Differing Levels of Agreement. W.J. Boone … Rasch Measurement Transactions, 2014, 27:4 p. 1445-6



