Rasch Measures Hamlet

Does objective expertise in judging acting ability exist? What distinguishes experts' judgments from the judgments of others? To investigate this, I compared the ratings given by experts, theater buffs, and novices to high school students' videotaped performances of Shakespearean monologues. The experts were casting directors from Chicago theaters or high school drama teachers who had spent many hours evaluating actors' abilities. The theater buffs were not formally trained in drama but frequented professional theater, read acting reviews, and enjoyed talking about drama. Novices seldom attended the theater, rarely read reviews, and had little experience with drama.

These judges rated the videotapes using the Judging Acting Ability Inventory, a 36-item rating instrument designed to measure both the technical aspects of acting, such as vocal technique and body movement, and the emotional and creative aspects needed to build an effective characterization. The judges repeated the rating task one month later so that the stability of aesthetic judgments over time could also be investigated. I hypothesized that there would be significant differences among the judge groups in their item calibrations, measures of actors' abilities, and judge severities.
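Since each rating involves several facets at once (an actor, an item, a judge), the parameters in that hypothesis are those of a many-facet Rasch model. As a point of reference, a standard statement of the model (not quoted from the study) is

$$ \log \frac{P_{nijk}}{P_{nij(k-1)}} = B_n - D_i - C_j - F_k $$

where P_nijk is the probability that judge j awards actor n a rating in category k on item i, B_n is the actor's ability, D_i the item's difficulty, C_j the judge's severity, and F_k the step calibration of rating category k. Each hypothesized group difference corresponds to a shift in one of these parameter sets when the groups are calibrated separately.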

The data were analyzed with the FACETS program. Ray Adams devised a chi-square test for rating consistency, analogous to the homogeneity test of Hedges and Olkin (1985). The advantage of this technique over ANOVA is that the chi-square takes into account not only each calibration but also its standard error.
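Adams's exact formulation is not reproduced here, but a homogeneity chi-square in the Hedges-Olkin style weights each group's estimate by its precision. Here is a minimal sketch; the function name, calibrations, and standard errors are hypothetical illustrations, not the study's data:

```python
# Sketch of a Hedges-Olkin-style homogeneity chi-square for one item's
# calibration estimated separately in several judge groups.
# All numbers below are hypothetical, not taken from the study.

def homogeneity_chi_square(calibrations, standard_errors):
    """Weight each calibration by its information (1/SE^2); under
    homogeneity the statistic is approximately chi-square with
    (number of groups - 1) degrees of freedom."""
    weights = [1.0 / se ** 2 for se in standard_errors]
    pooled = sum(w * c for w, c in zip(weights, calibrations)) / sum(weights)
    return sum(w * (c - pooled) ** 2 for w, c in zip(weights, calibrations))

# One item's calibration (logits) for experts, buffs, and novices:
chi_sq = homogeneity_chi_square([0.42, 0.35, 0.18], [0.10, 0.12, 0.15])
print(f"chi-square = {chi_sq:.2f} on 2 d.f.")  # 5% critical value: 5.99
```

Because each squared deviation from the pooled calibration is weighted by 1/SE^2, an imprecisely estimated calibration can stray from the pooled value without inflating the statistic; an ANOVA on the raw calibrations would treat all estimates as equally precise.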

To my surprise, I found that the three groups shared a common understanding of nearly all the items and employed those items consistently when judging performances. Only four items had significantly different calibrations across the three groups. Moreover, the items performed stably for each group across the two rating occasions. Buffs and novices used the rating criteria in the same way as experts when those criteria were explicit and couched in understandable language.

Yet there were also noticeable differences among the three groups. First, experts were the most severe judges, while novices were the most lenient. Second, the groups rated certain performances differently: experts and buffs gave three actors significantly lower measures than novices did. Those three actors portrayed characters in mourning, and their characterizations were emotionally charged. Novices seemed to base their judgments on a single criterion, the actor's ability to display intense emotion, and were unaware of the performances' technical shortcomings. By contrast, experts and buffs seemed to view an actor from a number of perspectives and were not overwhelmed by the emotionalism on display. Third, experts were better able than buffs or novices to replicate their ratings one month later. All three groups showed some change over time, but the change for buffs was nearly twice that for experts, and the change for novices was nearly twice that again.

This study breaks new ground by examining aesthetic judgment in the performing arts. It is a step toward the construction of an objective measurement system that drama teachers can use to assess student growth in acting ability. And through the behavior of an intermediate group of judges, the theater buffs, it gives insight into the transition from novice to expert judge.

Hedges LV, Olkin I 1985 Statistical methods for meta-analysis. New York: Academic Press

Myford CM 1989 The nature of expertise in aesthetic judgment. Ph.D. dissertation, University of Chicago. Dissertation Abstracts International, 50, 3562A



Rasch Measures Hamlet, C Myford … Rasch Measurement Transactions, 1990, 4:2 p.105



