Refining an Instrument to Measure the Effectiveness of School-based
Partnerships (1.38)
Deborah L. Bainer, Ohio State University-Mansfield, & Richard M.
Smith, Rehabilitation Foundation Inc.
The purpose of this study is to refine an instrument designed to measure a single construct, the effectiveness of school-based partnerships. The instrument was designed to measure the "health" of partnership teams and to identify specific problems for which intervention might be appropriate. The items were based on four theoretical models of partnering efforts. The partnerships studied were based on the elementary school science curriculum and involved elementary teachers and resource professionals in a school-based program that covered a four-year period. The results show how Rasch analysis using the traditional item and person fit statistics, bias analysis using separate calibration groups for contrasts of interest, and principal component analysis can be used to evaluate the unidimensionality of a scale.
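The abstract refers to the traditional item and person fit statistics without stating them. For orientation only (standard Rasch notation, not reproduced from the paper), they are computed from standardized residuals:

z_{ni} = \frac{x_{ni} - E_{ni}}{\sqrt{W_{ni}}}, \qquad \text{Outfit}_i = \frac{1}{N}\sum_{n} z_{ni}^{2}, \qquad \text{Infit}_i = \frac{\sum_{n} W_{ni}\, z_{ni}^{2}}{\sum_{n} W_{ni}}

where x_{ni} is the observed response of person n to item i, E_{ni} its model expectation, and W_{ni} its model variance. Person fit statistics are formed the same way, summing over items rather than persons.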
Hierarchical Structure of Memory Beliefs (1.38)
Everett V. Smith Jr., Hartford Hospital, Scott W. Brown, and
Bethany B. Silver, University of Connecticut; Maryanne Garry,
Victoria University (NZ); Elizabeth Loftus, University of
Washington
This paper examines data presented by Brown, Garry, Silver, and Loftus (1997). Their presentation evaluated responses to the Beliefs about Memory Survey (BMS; Brown, Garry, Loftus, Silver, DuBois, & DuBreuil, 1996) using principal component analysis. Application of the Rasch Rating Scale Model supports a unidimensional interpretation of a memory beliefs construct, whereas the previously reported principal component evidence suggested three dimensions. Investigation of item bias between males and females identified one item exhibiting bias. Replication with an independent sample from the same target population resulted in a similar item hierarchy. Implications for rating scale construction and potential applications of the BMS are outlined.
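For readers unfamiliar with it, the Rasch Rating Scale Model named in this abstract specifies the log-odds of responding in adjacent categories as (conventional notation, not taken from the paper):

\log\!\left(\frac{P_{nik}}{P_{ni(k-1)}}\right) = B_n - D_i - F_k

where B_n is the person measure, D_i the item difficulty, and F_k the threshold between categories k-1 and k, shared by all items on the survey.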
Using Rasch Analysis to Bridge the Theory-Practice Divide 1:
Children's Understanding of Area Concepts (1.38)
Trevor G. Bond & Kellie Parkinson, James Cook University
A preliminary investigation of the claim that children's math achievement is, in part, related to their level of cognitive development used Rasch modelling to quantify children's performance on a school-based test of area concepts with a sample of 42 primary school students. Rasch partial credit analysis was also applied to qualitative data derived from interview tasks based on Piaget's theory of intellectual development. The results are discussed in terms of the light they shed on the theory/practice nexus and suggest that more comprehensive investigations of this sort, based on Rasch modelling, have the potential to better inform developmentalists.
Rasch Analysis of the Measurement Level of Three Health Utility
Scales (1.38)
Karon F. Cook, VA/Baylor College of Medicine
Single-item scales of patient preference are employed in medical research and used in decision analyses. Their scores have been assumed to be interval level, as required by decision analysis models. Three such scales were used to elicit patient preferences for eight health states and then analyzed as a 24-item scale (3 scales × 8 health states) using the partial credit model. Good data-to-model fit was obtained for 22 of the items. Distances between pairs of Thurstone thresholds were compared to evaluate the degree to which the raw scores were interval. The results demonstrate that the raw scores from the single-item scales are not interval level.
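As background (conventional notation, not reproduced from the paper), the partial credit model used here specifies

\log\!\left(\frac{P_{nik}}{P_{ni(k-1)}}\right) = B_n - D_{ik}

where D_{ik} is the difficulty of the k-th step of item i, and the Thurstone threshold T_{ik} is the person measure at which the probability of responding in category k or above reaches 0.5. Presumably the logic of the comparison is that interval-level raw scores would require roughly equal spacing of successive thresholds, T_{i(k+1)} - T_{ik}, on the logit scale; unequal spacing is the evidence against intervalness.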
An ANOVA-like Rasch Analysis of Differential Item Functioning
(5.63)
Wen-chung Wang, National Chung Cheng University
The differential item functioning (DIF) analysis between a focal group and a reference group is analogous to a t-test of two means. Just as the t-test extends to simple or factorial analysis of variance (ANOVA), DIF analysis can be extended to multiple groups and multiple factors. In this paper, an ANOVA-like Rasch DIF analysis is proposed. The proposed analysis has two major advantages. First, as ANOVA is statistically more powerful than the t-test, the ANOVA DIF analysis is more powerful than the conventional DIF analysis. In addition, main effects and interaction effects can be partitioned and investigated in the ANOVA DIF analysis. Several models are proposed. Results of simulation studies show that various kinds of DIF parameters were well recovered. A real data set was analyzed. Implications and applications are addressed.
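The conventional two-group contrast that this paper generalizes is commonly computed as (standard notation, offered as orientation rather than the author's formulation):

t_i = \frac{d_{iF} - d_{iR}}{\sqrt{SE_{iF}^{2} + SE_{iR}^{2}}}

where d_{iF} and d_{iR} are the difficulties of item i estimated separately in the focal and reference groups and the SE terms are their standard errors. The ANOVA-like approach replaces this pairwise contrast with partitioned main and interaction effects across multiple grouping factors.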
Person Fit and Its Relationship with Other Measures of Response Set
(14.08)
Dorothy L. Swearingen, University of Denver
Response set is a pattern of responding thought to be unrelated to item content, and it threatens accurate interpretation of survey results. Associations between person fit and response set were examined using infit, outfit, and three response sets. High correlations between infit and outfit suggested that the two statistics were largely redundant. Person fit was significantly associated with response range and an extreme responding style for controversial and non-controversial topics in two of three item formats, but not with acquiescence or directional bias. Closer inspection of misfitting persons' responses is expected to demonstrate the usefulness of person fit for detecting several response sets.
Children's Construction of the Operation of Addition (14.08)
Betsey Grobecker, Auburn University
Six- to eight-year-old children (N=42) participated in three mathematics tasks: (a) a flashcard task; (b) nonverbal replication of button equality placed under a box; and (c) determination of length equality of string segments that varied in length, number of cuts, and spatial orientation (associativity of length). Rasch statistics indicated that within each of the tasks there existed a sequential construction of increasingly complex cognitive abilities. Further, a comparison of the flashcard, nonverbal, and associativity-of-length tasks revealed a developmental relationship between the ability to generate more sophisticated strategies for solving mathematics problems and the evolution of operational structures.
An Investigation of Factors Affecting Test Equating (36.05)
Surintorn Suanthong & Randall E. Schumacker, University of North
Texas
Test equating using the Rasch measurement model is presented and discussed. Test equating places item difficulties on a common metric. A common metric is necessary to compare the results of students taking different sets of items (tests). Test equating was done after two tests had been administered, to provide a common scale for ability estimates from the two tests. The number of common items, the range of common item difficulties, and the stability of the link constant under common-item misfit are investigated to determine their effects on person ability estimates. A practical example is presented.
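In the usual common-item approach (general practice, not necessarily the exact procedure of this paper), the link constant is the mean difference between the common items' difficulties as calibrated on the two tests:

G = \frac{1}{K}\sum_{i=1}^{K}\left(d_i^{(2)} - d_i^{(1)}\right)

Subtracting G from the Test 2 item difficulties and person measures places them on the Test 1 scale. Misfitting common items distort G, and hence the equated ability estimates, which is the sensitivity examined here.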
Influence of Outward Bound on Post Traumatic Stress Disorder in War
Veterans: A Longitudinal Study (36.05)
Everett V. Smith Jr., Hartford Hospital, Steven V. Owen, University
of Connecticut, John M. Parsons, Veterans Affairs Medical Center
This study evaluated the influence of an Outward Bound program on Post Traumatic Stress Disorder in war veterans. Five item sets were identified, one for each of five outcome measures, that functioned in a consistent manner across experimental groups and the five data collection periods. Using anchoring methods, person measures were obtained for each outcome measure at each of the five measurement occasions. A series of repeated-measures ANCOVAs indicated group differences in favor of the Outward Bound treatment group on all outcome measures. These differences appeared immediately following the intervention and persisted over the one year of data collection.
The Measurement of Morality (36.05)
John M. Linacre, University of Chicago
Kohlberg posits six levels of moral development as the person's focus shifts from the external to the internal. Rasch measurement is used to construct a "morality" variable. An investigation is conducted into whether there are clearly marked "Piagetian" stages of moral development or a smooth progression. Implications for behavior management are discussed.
The Diagnostic Utility of a Study Strategies Self-Efficacy
Instrument for Use with Community College Students (46.39)
Barbara A. Greene, University of Oklahoma, Everett V. Smith Jr.,
Hartford Hospital, Bethany Silver, University of Connecticut
The purpose of this study is to examine the utility of an instrument for measuring the study strategies self-efficacy of community college students. The Rasch Rating Scale Model was used to obtain interval-level person measures. A MANOVA demonstrated that these measures reliably differentiate between students reporting A's and B's and those reporting C's and D's as their frequently obtained grades. Using expected score maps, the potential diagnostic utility of study strategies self-efficacy appraisals is outlined for two situations: identifying students who are academically at risk, and planning individual study strategy interventions.
Rasch Measurement Abstracts, AERA 1998. Rasch Measurement Transactions, 1998, 11:4, p. 592-3.