Teaching Rasch Measurement: Students' Confusion with Differing Levels of Agreement

As I have worked with colleagues and students just starting to learn how and why to measure with Rasch, one topic has come up over and over. When I draw a vertical line for a test construct and label one end of the line "Easy" and the other end "Difficult", students and colleagues have no problem thinking of test items as having different difficulties. But when I discuss a rating scale survey, there is often confusion about how an item can be "Easier to Agree With" or "Harder to Agree With". Usually I try to draw an analogy to a test and ask my students and colleagues whether it makes sense to them that some items of a survey (e.g., a traditional Likert scale) can be easier to agree with than other items. Sometimes I think they get it, but honestly, later on it is clear that this continuum is hard for them to understand for a survey.

What are some techniques I have used to help them?

Sometimes I will return to a plot of a vertical line for a test, and then, instead of labeling the end points with "Easy" and "Difficult", I will label the end points "Less Difficult" and "More Difficult" as a reminder that we are talking about the "difficulty" of items.

[Figure: three vertical lines for the same test, with end points labeled (top/bottom): "Difficult"/"Easy", "More Difficult"/"Less Difficult", and "Less Easy"/"Easier".]
Next, I will draw another line for the same test and label the ends of the line "Easier" and "Less Easy". Even though the phrase "Less Easy" is very awkward, these two newly labeled lines seem to help the learners understand that we are talking about a variable of difficulty. I point out to them that the end terms "Easy" and "Difficult" are also fine, but sometimes it is easier to grasp the issue if the same sort of words are used to describe both ends of the variable.

I then move on to a rating scale. I ask students to consider a rating scale with just two possible ratings: "Agree" and "Not Agree". I ask my learners to imagine they are answering a 10-item survey and that they can think of "Agree" as a correct answer and "Not Agree" as a wrong answer. This seems to help them see a link to a dichotomous test in which items are right/wrong, and to see that survey items can be of different levels of "difficulty" (in this case, differing levels of "ease to agree with"!). This really seems to help them see that a survey with a rating scale can lie along a construct.

[Figure: a vertical line for dichotomous test items, from "Less Difficult" at the bottom to "More Difficult" at the top; Q2 Correct and Q4 Correct sit low on the line, Q1 Not Correct and Q3 Not Correct sit high.]
[Figure: the same vertical line for dichotomized rating scale items, from less "Difficult to Agree With" at the bottom to more "Difficult to Agree With" at the top; Q2 Agree and Q4 Agree sit low on the line, Q1 Not Agree and Q3 Not Agree sit high.]
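The link between the two maps can also be made concrete with a little arithmetic. In the dichotomous Rasch model, the probability of a score of 1 ("Correct" or "Agree") depends only on the difference between the person measure and the item difficulty. The sketch below uses hypothetical logit values for the four items; it is an illustration of the model, not output from any particular program. An item that is "easier to agree with" simply has a lower difficulty, and therefore a higher probability of "Agree" for any given person.

```python
import math

def p_agree(theta, delta):
    """Dichotomous Rasch model: probability of a score of 1
    ("Correct" or "Agree") for a person at theta and an item at delta,
    both expressed in logits."""
    return math.exp(theta - delta) / (1 + math.exp(theta - delta))

# Hypothetical difficulties ("difficulty to agree with") for the four items:
deltas = {"Q2": -1.5, "Q4": -0.5, "Q1": 0.5, "Q3": 1.5}

theta = 0.0  # a hypothetical person at the middle of the continuum
for item in sorted(deltas, key=deltas.get):
    print(f"{item}: delta = {deltas[item]:+.1f}, P(Agree) = {p_agree(theta, deltas[item]):.2f}")
```

For this person, the items low on the line (Q2, Q4) have P(Agree) above 0.5 and the items high on the line (Q1, Q3) fall below 0.5, mirroring the response pattern shown in the maps.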

I continue my work with the 10-item survey by drawing a vertical line and labeling the two end points "Easier to Agree With" and "Harder to Agree With".

[Figure: two vertical lines, with end points labeled (top/bottom): "Harder to Agree With"/"Easier to Agree With" and "Easier to Disagree With"/"Harder to Disagree With".]

Then I might ask my students what would happen if "Easier to Disagree With" and "Harder to Disagree With" were used to label the end points. Usually they are able to place the words in the right places, and they see that both labelings carry the same message. This activity seems to help them understand not only that a survey can have a construct, but also that a survey can define a continuum, just as a right/wrong test can define a continuum.
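This symmetry between the two labelings can be checked numerically. Under the dichotomous Rasch model (with hypothetical logit values below), the probability of "Not Agree" is one minus the probability of "Agree", which is exactly what the model gives when the signs of the person measure and item difficulty are both flipped. In other words, an item that is hard to agree with is, by the same amount, easy to disagree with.

```python
import math

def p_agree(theta, delta):
    # Dichotomous Rasch model: P(score of 1) for person theta, item delta (logits)
    return math.exp(theta - delta) / (1 + math.exp(theta - delta))

theta = 0.4   # hypothetical person measure
delta = 1.2   # hypothetical item that is hard to agree with

p_not_agree = 1 - p_agree(theta, delta)

# Relabeling the line from "agree" to "disagree" just reverses its direction:
# flip the signs of theta and delta, and P("Not Agree") comes out the same.
assert abs(p_not_agree - p_agree(-theta, -delta)) < 1e-12
print(f"P(Agree) = {p_agree(theta, delta):.2f}, P(Not Agree) = {p_not_agree:.2f}")
```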

The next step that I take involves a rating scale of "Strongly Agree", "Agree", "Disagree", and "Strongly Disagree". So now I move to surveys in which the rating scale is not dichotomous. Students now seem to "get it": they can think of this four-step scale as an "Agree" scale showing different levels of agreement. Often I will ask them first to re-label the scale with similar words...some will write something like "Strongly Agree", "Agree", "Agree Less Than Agree", "Hardly Agree At All". They understand that even though "Strongly Disagree" does not at first sound like a level of agreement, they can really just think of "Strongly Disagree" as a very low level of agreement.





[Figure: the rating scale "Strongly Agree", "Agree", "Disagree", "Strongly Disagree" re-labeled as "Strongly Agree", "Agree", "Agree Less Than Agree", "Hardly Agree At All".]
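One way to drive this point home is to show that, for analysis, the four labels are simply recoded as ordered scores, whichever wording is used. The recoding below is a hypothetical illustration, not any particular program's syntax; all that matters is the ordering of the categories.

```python
# Two wordings of the same four ordered categories; both recode to 0..3.
ORIGINAL = ["Strongly Disagree", "Disagree", "Agree", "Strongly Agree"]
RELABELED = ["Hardly Agree At All", "Agree Less Than Agree", "Agree", "Strongly Agree"]

score_original = {label: i for i, label in enumerate(ORIGINAL)}
score_relabeled = {label: i for i, label in enumerate(RELABELED)}

# "Strongly Disagree" is just the lowest level of agreement:
assert score_original["Strongly Disagree"] == score_relabeled["Hardly Agree At All"] == 0

# Collapsing the scale at "Agree" recovers the earlier Agree / Not Agree dichotomy:
def dichotomize(score):
    return 1 if score >= score_original["Agree"] else 0  # 1 = "Agree", 0 = "Not Agree"

print([dichotomize(score_original[r]) for r in ["Strongly Agree", "Disagree"]])  # → [1, 0]
```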

The next step is for them to draw a line and label the end points with "More Strongly Agree" and "Really Less Strongly Agree". I point out that they could think of "Really Less Strongly Agree" as "Strongly Disagree". Even though the words are awkward, this seems to work. My point is to help them understand the continuum and not be tripped up by words that at first blush might seem to involve different issues (e.g., Agree, Disagree).

[Figure: two vertical lines, with end points labeled (top/bottom): "More Strongly Agree"/"Really Less Strongly Agree" and "Really Less Strongly Disagree"/"More Strongly Disagree".]

Now the grand finale is to talk about rating scales in which a wide mix of words is used to describe the steps. My favorite is "Always", "Often", "Sometimes", "Seldom", and "Never". In this case none of the steps are linked in meaning by a shared word appearing in each step (contrast a scale of "Very Often", "Often"... or a scale of "Very Important", "Important"...).

In this case we also draw a vertical line, and I make use of the reasoning that I have presented above. I point out that the scale could have been "Often" and "Not Often", and that a line could be labeled with "More Often" and "Less Often", or (very awkwardly!) with "More Sometimes" and "Less Sometimes".

[Figure: two vertical lines, with end points labeled (top/bottom): "More Always"/"Less Always" and "More Sometimes"/"Less Sometimes".]

I think that by the end of the activity I have helped them better understand that a rating scale can be expressed on a line, just as one can do for item difficulty. The students also better understand how to think about the meaning of going up or down the line of the continuum. That understanding is very important as they later learn how to interpret person measures and item measures.

William J. Boone
Miami University (Ohio)


Teaching Rasch Measurement: Students' Confusion with Differing Levels of Agreement. W.J. Boone … Rasch Measurement Transactions, 2014, 27:4 p. 1445-6





The URL of this page is www.rasch.org/rmt/rmt274e.htm
