MEASUREMENT RESEARCH ASSOCIATES
TEST INSIGHTS
May 2009
Greetings
 
Since time is quite literally money in computer-based testing, it is advisable to know how the difficulty of the multiple-choice items on a test affects the time candidates need to complete it.

Phil Higgins
Manager, Computer Based Testing


Item Difficulty and Time Usage
Computer-based testing provides the opportunity to track the amount of time spent on an item: how long candidates take to respond initially and, later, how long they take to review the item. To better understand candidates' use of time, the relationship between item difficulty and time usage was studied.
 
For purposes of this study, the items were divided into three groups based on their percent correct (p-value). Percent correct is usually considered a measure of item difficulty. Group 1 included the difficult items, which fewer than 40% of the candidates answered correctly; Group 2 included the items that 40% to 80% answered correctly; and Group 3 included the items that over 80% of the candidates answered correctly. The ANOVA found significant differences among groups for both the initial amount of time used per item (F = 7.05, p < .001) and the time used for review per item (F = 15.13, p < .001). More time was required to answer and review the more difficult items. The details of the analysis are shown in the table.
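The grouping-and-ANOVA procedure described above can be sketched in a few lines of Python. The one-way F statistic is computed from scratch; the per-item response times below are illustrative values, not the study's actual data.

```python
# Pure-Python one-way ANOVA, a sketch of the analysis described above.
# The response-time samples are illustrative, not the study's data.

def f_oneway(groups):
    """Return the one-way ANOVA F statistic for a list of samples."""
    all_values = [x for g in groups for x in g]
    n_total = len(all_values)
    grand_mean = sum(all_values) / n_total

    # Between-group sum of squares: variation of group means around
    # the grand mean, weighted by group size
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Within-group sum of squares: variation of observations around
    # their own group mean
    ss_within = sum((x - sum(g) / len(g)) ** 2
                    for g in groups for x in g)

    df_between = len(groups) - 1
    df_within = n_total - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical mean response times (seconds) by difficulty group
difficult = [60, 62, 64]
moderate = [54, 55, 56]
easy = [44, 45, 46]
print(f_oneway([difficult, moderate, easy]))  # → 109.5
```

A large F, as here, indicates that the differences among group means are large relative to the variation within groups; the significance of a given F value would then be read from the F distribution with the corresponding degrees of freedom.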
 
While this is a logical outcome, it provides some insight into the amount of time needed for an examination. When an examination is composed primarily of items in the 40%-80% range of difficulty, more time is required than when the test includes primarily easy items in the 80%-99% range. With criterion-referenced testing, item difficulty tends to be targeted to the pass point, which, when expressed as a percent, may often be around 60% correct. Thus, a test with mostly easy items will require less time to complete than a test that is well targeted or contains mostly difficult items.
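The practical implication can be made concrete with a rough calculation from the mean initial response times in the table below. The 200-item test length is an assumed figure for illustration only.

```python
# Rough testing-time estimate from the study's mean initial response
# times (seconds per item). The 200-item test length is hypothetical.
MEAN_SECONDS = {"difficult": 61.47, "moderate": 55.56, "easy": 43.93}

def estimated_minutes(n_items, difficulty):
    """Estimate total initial-response time, in minutes, for a test
    composed entirely of items at the given difficulty level."""
    return n_items * MEAN_SECONDS[difficulty] / 60

for level in MEAN_SECONDS:
    print(f"{level}: {estimated_minutes(200, level):.0f} minutes")
# → difficult: 205 minutes
# → moderate: 185 minutes
# → easy: 146 minutes
```

On these figures, a well-targeted 200-item test of moderate items would need roughly 40 minutes more seat time than one built from mostly easy items, before any review time is added.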
 
Not all candidates reviewed all items; in fact, many candidates reviewed very few items. However, similar patterns of time usage were found for all candidates: easier items required less review time than moderate or difficult items.
 
This study used only one data set, so the results may not generalize. It does, however, provide an indication of the time candidates need to complete an examination, based on the difficulty of the items it contains.


Descriptive Statistics for Time Usage by % Correct Item Groups (in Seconds)

Initial time to respond
  Group   Percent Correct                   Mean    Std. Dev.   Minimum   Maximum
  1       less than 40% (difficult items)   61.47   24.57       26.38     164.86
  2       40% to 80% (moderate items)       55.56   22.60       18.28     131.26
  3       80% or higher (easy items)        43.93   18.42       16.53     100.64
  Total                                     54.22   22.97       16.53     164.86

Review time
  Group   Percent Correct                   Mean    Std. Dev.   Minimum   Maximum
  1       less than 40% (difficult items)   12.67    4.87        3.48      29.88
  2       40% to 80% (moderate items)       11.11    5.36        3.11      27.83
  3       80% or higher (easy items)         7.18    3.33        2.15      18.92
  Total                                     10.54    5.19        2.15      29.88


Measurement Research Associates, Inc.
505 North Lake Shore Dr., Suite 1304
Chicago, IL  60611
Phone: (312) 822-9648     Fax: (312) 822-9650
