One-step (concurrent) equating of General Medical Knowledge, L Shen
Responses of 5886 persons to 2647 items are used to equate, in a single
concurrent calibration, three medical certification examinations
measuring basic science knowledge, clinical science knowledge, and
clinical practice-related knowledge.
Factors influencing form-development items, G Kramer
Data for 50 items from 6062 Dental Admission Test examinees were used
to study 11 design characteristics that contribute to the difficulty
of form-development items. Rasch item difficulties were regressed on
the 11 characteristics to estimate the variance explained by each
characteristic. The importance of complexity was confirmed,
especially for irregular items.
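By way of illustration only (this is not Kramer's code; the data and variable names below are hypothetical), such an analysis amounts to regressing a vector of Rasch item difficulties on a matrix of coded design characteristics:

    import numpy as np

    # Hypothetical data: Rasch difficulties (logits) for 50 items and an
    # 11-column matrix coding each item's design characteristics.
    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(50, 11)).astype(float)  # 0/1 design codes
    b = rng.normal(0.0, 1.0, size=50)                     # item difficulties

    # Multiple regression of difficulties on all 11 characteristics.
    X1 = np.column_stack([np.ones(len(b)), X])            # add intercept
    coef, *_ = np.linalg.lstsq(X1, b, rcond=None)
    resid = b - X1 @ coef
    r2_full = 1.0 - resid.var() / b.var()
    print(f"R^2, all 11 characteristics: {r2_full:.2f}")

    # Variance explained by each characteristic on its own.
    for j in range(X.shape[1]):
        r = np.corrcoef(X[:, j], b)[0, 1]
        print(f"characteristic {j + 1:2d}: R^2 = {r * r:.2f}")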
Review and decision confidence on CAT, G Stone & M Lunz
The effect on ability estimates, decision confidence, and pass-fail
decisions of allowing computer-adaptive test examinees to review items
and alter their responses is explored. Ability measures before and
after review were highly correlated. Review did not interact with
specific ability groupings. The validity benefits of review outweigh
psychometric concerns.
CAT with successive intervals, W Koch & B Dodd
The Rasch "successive intervals" model, proposed by Jurgen Rost
following Thurstone, was applied to CAT-administered Likert items.
The properties of the model with different item pool sizes, methods of
item selection and sizes of item dispersion parameters are reported.
Assessing rater behavior, W Zhu, C Ennis, A Chen
298 physical education teachers rated each item of an inventory
assessing the value of physical education. A six-facet model was used
to analyze these ratings. Although rater group membership did not
affect model-data fit, its impact on item ratings was detected.
Levels of constructivism among math teachers, L Roberts
A Teacher Belief Scale was administered to "Beyond Activities Project"
exemplary program elementary school teachers and a control group to
assess project teachers' levels of constructivism. The project
teachers were more constructivist.
Quick norms, R Schumacker
Rasch quick norming, a method that overcomes the dependency of "true
score" norming on a particular set of items and sample of examinees,
is applied to simulated data sets of varying test length, sample size
and distribution.
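As an illustration of the simulation side only (this is not Schumacker's norming procedure; the function below is a generic sketch), dichotomous Rasch data sets of varying test length and sample size can be generated as follows:

    import numpy as np

    def simulate_rasch(n_persons, n_items, ability_sd=1.0, seed=0):
        """Generate 0/1 responses from the dichotomous Rasch model."""
        rng = np.random.default_rng(seed)
        theta = rng.normal(0.0, ability_sd, n_persons)        # person abilities
        delta = np.linspace(-2.0, 2.0, n_items)               # item difficulties
        p = 1.0 / (1.0 + np.exp(-(theta[:, None] - delta)))   # success probabilities
        return (rng.random((n_persons, n_items)) < p).astype(int)

    # Vary sample size and test length, as in the study design.
    for n_persons, n_items in [(100, 20), (500, 50), (1000, 100)]:
        data = simulate_rasch(n_persons, n_items)
        print(data.shape, round(data.mean(), 2))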
DIF detection stability, C Parshall, R Smith, J Kromrey
A Rasch procedure for detecting biased items is investigated with
Monte Carlo reorganization of real data. Reported are: 1) sensitivity
and stability of a Rasch DIF statistic, 2) comparison of results with
different numbers of replications, and 3) comparison with a
Mantel-Haenszel study.
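For readers unfamiliar with the comparison procedure (standard Mantel-Haenszel notation, not quoted from the paper), the MH statistic is built from the common odds ratio across total-score levels j, often reported on the ETS delta scale:

\[
\hat{\alpha}_{MH} = \frac{\sum_j A_j D_j / N_j}{\sum_j B_j C_j / N_j},
\qquad
\Delta_{MH} = -2.35 \,\ln \hat{\alpha}_{MH},
\]

where, at score level j, A_j and B_j are the numbers of correct and incorrect responses in the reference group, C_j and D_j the corresponding counts in the focal group, and N_j the total number of examinees at that level.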
Detecting item bias with separate calibration and between-fit, R Smith
The separate calibration t-test approach is compared with the common
calibration between-fit approach to detecting item bias. Detection of
non-existent bias and failure to detect existing bias are examined for
different sample sizes, bias sizes, numbers of biased items, and
ability differences between reference and focal groups.
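For orientation (a standard Rasch formulation, not quoted from the paper), the separate calibration approach contrasts each item's two difficulty estimates:

\[
t_i = \frac{d_{i1} - d_{i2}}{\sqrt{s_{i1}^2 + s_{i2}^2}},
\]

where d_{i1} and d_{i2} are the difficulties of item i calibrated separately in the reference and focal groups and s_{i1}, s_{i2} are their standard errors, while the between-fit approach asks whether both groups' responses fit a single common calibration.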
Understanding performance assessments, D Kenyon
Charles Stansfield will lead audience members in rating ESL speakers.
Robert Hess will discuss writing assessment. Carol Myford and Robert
Mislevy will discuss art portfolio assessment. Dorry Kenyon will then
present a facet analysis of the audience's ratings.
Scoring Model for partial credit data, P Pedler
The probability function of the Scoring Model for polytomous items is
generalized from the Rasch dichotomous model. Parameter estimation
from real data is illustrated.
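For context (the dichotomous model is standard; Pedler's Scoring Model itself is not reproduced here), the Rasch dichotomous model and one familiar polytomous generalization, the partial credit form, can be written:

\[
P_{ni1} = \frac{\exp(B_n - D_i)}{1 + \exp(B_n - D_i)},
\qquad
P_{nix} = \frac{\exp \sum_{k=0}^{x} (B_n - D_{ik})}{\sum_{h=0}^{m_i} \exp \sum_{k=0}^{h} (B_n - D_{ik})},
\]

where B_n is person ability, D_i item difficulty, D_{ik} the k-th step difficulty of item i (the k = 0 term defined as 0), and m_i the maximum score on item i.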
Measuring change with graphical techniques, B Sheridan & B Hands
A study investigated change in attitudes of teachers exposed to a
novel teaching strategy. Simple graphical techniques are used to
assess the quality of the variable and the precision of measurement.
These techniques detect local interactions and changes in measures at
the group and individual levels.
Computerized clinical simulation testing (CST), A Bersky
A CST examinee works on a simulated nursing case, entering actions at
any time and in any sequence. The actions are scored for
problem-solving and decision-making competence. The concurrent validity of
this CST is investigated by comparisons with examinee performance on
an MCQ licensure examination.
Generalizability Theory, G Boodoo, L Bachman, G Marcoulides
Performance appraisal is an important practice in education, and
performance is typically assessed by means of ratings. Human
judgement, however, is fallible, so evaluators are obliged to provide
evidence of the psychometric quality of the ratings they use. This
presentation introduces generalizability (G) theory as an approach to
assessing the quality of ratings and exemplifies it with performance
assessment data. The basic concepts of G theory are reviewed, followed
by an analysis of actual performance data. The aim is an
understandable picture of G theory that will enable this technique to
contribute to future performance evaluations.
Gwyneth Boodoo: Explanation of G theory, Universe of observations, Random & fixed facets, Variance components, G-studies
Lyle Bachman: D-studies, Absolute & relative error, Norm and criterion dependability indicators, "What if?" projections
George A. Marcoulides: Design optimization, Estimation of variance components, Multivariate G theory
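As a minimal sketch of the machinery the panel covers (standard G-theory notation, not taken from the presenters' materials), a one-facet persons-by-raters design decomposes observed-score variance into components and summarizes rating dependability with a generalizability coefficient:

\[
\sigma^2(X_{pr}) = \sigma^2_p + \sigma^2_r + \sigma^2_{pr,e},
\qquad
E\rho^2 = \frac{\sigma^2_p}{\sigma^2_p + \sigma^2_{pr,e}/n_r},
\]

where \sigma^2_p is the person (object of measurement) variance, \sigma^2_r the rater variance, \sigma^2_{pr,e} the person-by-rater interaction confounded with error, and n_r the number of raters in a D-study; the absolute-error coefficient \Phi also carries \sigma^2_r/n_r in its denominator.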
Many-facet Rasch measurement, J Linacre, M Lunz, C Myford
The Rasch approach to judged tests will be explained. The inevitably
non-linear ordinal ratings are used to construct linear examinee
measures adjusted for the difficulty of the specific items and the
severity of the specific judges each examinee encounters. Success of
construction is evaluated through the meaning (construct validity of
the variable definition), consistency (quality-control fit) and
utility ("reliability" separation of performance spread vs.
replication-dependent precision) of the measures. The Rasch approach
features practical, flexible, minimum-effort judging plans with
predictable characteristics. Since judge severity is measured and
removed, rather than included as a source of "error" variance, judge
training emphasizes rater self-consistency and a shared understanding
of the judging task across raters, rather than rating uniformity
across judges.
Michael Linacre: Explanation of Rasch approach, Rating scales, Judge training (intra-rater consistency), Precision, quality and sample size
Mary Lunz: Judging plans, Feedback to judges, Judge effect on raw scores
Carol Myford: Variable definition, Judge expertise
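For reference, the many-facet Rasch model underlying this approach is usually written, for a rating of examinee n on item i by judge j in category k:

\[
\log\!\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = B_n - D_i - C_j - F_k,
\]

where B_n is the examinee's ability, D_i the item's difficulty, C_j the judge's severity, and F_k the difficulty of the step from category k-1 to category k. Judge severity C_j is thus estimated and removed from the examinee measures rather than left in as error variance.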
Rasch SIG brief abstracts for AERA 1993. Rasch Measurement Transactions, 1993, 6:4, p. 252-253.