| CFA | Rasch |
---|---|---|
1 Fundamental and theoretical issues of measurement | ||
Concept of Measurement | · Based on classical test theory (CTT) · Numbers are assigned to respondents' attributes (Stevens 1946, 1951) | · The measure of a magnitude of a quantitative attribute is its ratio to the unit of measurement; the unit of measurement is that magnitude of the attribute whose measure is 1 (Michell 1999, p. 13) · Measurement is the process of discovering ratios rather than assigning numbers · The Rasch model is in line with the axiomatic framework of measurement · Principle of specific objectivity |
Model | x_i = τ_i + λ_ij ξ_j + δ_i, where x_i ... manifest item score, τ_i ... item intercept parameter, λ_ij ... factor loading of item i on factor j, ξ_j ... factor score of factor j, δ_i ... stochastic error term | For dichotomous data: P(a_νi = 1) = e^(β_ν − δ_i) / [1 + e^(β_ν − δ_i)], where a_νi ... response of person ν to item i, β_ν ... person location parameter, δ_i ... item location parameter (endorsability) |
Relationship of measure and indicators (items) | · Measure is directly and linearly related to the indicators · Hence, the weighted raw score is considered to be a linear measure | · Probability of a response is modeled as a logistic function of two measures, the person parameter β_ν and the item location (endorsability) δ_i · Raw score is not considered to be a linear measure; raw scores are transformed into logits (Wright 1996, p. 10) |
In/dependence of samples and parameters | Parameters are sample dependent, representative samples are important | Item parameters are independent of sample used (subject to model fit and sufficient targeting) |
2 Item selection and sampling (scale efficiency) issues | ||
Item selection | · Items are selected to maximize reliability, which leads to items that are equivalent in terms of endorsability, although endorsability plays no explicit role in CTT · Favors items that are similar to each other (see the bandwidth-fidelity problem, Singh 2004) | · Items are selected to cover a wide range of the dimension (see 'bandwidth', Singh 2004) · Endorsability of an item plays a key role |
Item discrimination | · Discrimination varies from item to item but is considered fixed within an item | · Discrimination is equal for all items to retain a common order of all items in terms of endorsability for all respondents · Discrimination varies within an item (concept of information, which equals P(a_νi=1) · P(a_νi=0) in the dichotomous case); it reaches its maximum at β_ν = δ_i |
Targeting | Items that are off-target may even increase reliability and feign a small standard error that is actually quite large | Items that are off-target provide less information, so standard errors increase and the power of the test of fit decreases |
Standard error of measurement | Based on reliability, assumed to be equal across the whole range | Based on the information the items yield for a specific person |
Sample size | The required sample size mirrors recommendations for structural equation modeling (SEM). SEM is not appropriate for sample sizes below 100. As a rule of thumb, sample sizes greater than 200 are suggested (Boomsma 1982; Marsh, Balla, and McDonald 1988). Bentler and Chou (1987) recommend a minimum ratio of 5:1 between sample size and the number of free parameters to be estimated. | In general, the sample sizes used in structural equation modeling are sufficient, but insufficient targeting increases the sample size needed. According to Linacre (1994), the minimum sample size ranges from 108 to 243 depending on targeting, with n = 150 sufficient for most purposes (for item calibrations stable within ±0.5 logits at .99 confidence) |
Distribution of persons | Commonly assumed to be normal | Irrelevant due to specific objectivity (subject to sufficient targeting) |
Missing data | Problematic, missing data has to be imputed, deleting persons may alter the standardizing sample, deleting items may alter the construct, pairwise deletion biases the factors (Wright 1996, p.10) | Estimation of person and item parameters not affected by missing data (except for larger standard errors) |
Interpretation of person measures | Usually in reference to sample mean | In reference to the items defining the latent dimension |
3 Dimensionality issues | ||
Multi-dimensionality | Multi-dimensionality easily accounted for | A priori multi-dimensional constructs are split up into separate dimensions |
Directional factors | Sensitivity to directional factors (Singh 2004) in case of items worded in different directions | Low sensitivity to directional factors (Singh 2004) |
4 Investigation of comparability of measures across groups | ||
Assessment of scale equivalence | · Multi-group analysis · Equivalence statements of parameters estimated across groups | · Differential item functioning analysis (DIF) capitalizing on the principle of specific objectivity · Analysis of residuals in different groups |
Incomplete equivalence | Partial invariance (for group specific items separate loadings and/or intercepts are estimated) | Item split due to DIF (for group specific items separate item locations are estimated) |
Typical sequence and principal steps of analysis | · Estimation of baseline model (group-specific estimates of loadings and item intercepts) · Equality constraints imposed on loadings (metric invariance) · Equality constraints imposed on intercepts (scalar invariance) · Selected constraints lifted if necessary (partial invariance) | · Estimation of model across groups · Collapsing of categories if necessary · Assessment of fit · Assessment of DIF · Items displaying DIF are split up if necessary |
Etic (external) versus emic (internal) | · In principle an etic-oriented approach. A common set of invariant items is indispensable. · Concept of partial invariance allows for equal items functioning differently. · Emic items, i.e. items confined to one group, can be considered, but the technical set-up is complicated compared to Rasch analysis | · In principle an etic-oriented approach. A common set of invariant items is indispensable. · Accounting for DIF by splitting the item allows for equal items functioning differently. · Emic items, i.e. items confined to one group, can be considered very easily because the handling of missing data is unproblematic compared to CFA |
Table 1 in Ewing, Michael T., Thomas Salzberger, and Rudolf R. Sinkovics (2005), "An Alternate Approach to Assessing Cross-Cultural Measurement Equivalence in Advertising Research," Journal of Advertising, 34 (1), 17-36.
Courtesy of Rudolf Sinkovics, with permission.
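The two measurement models in the table can be contrasted in a few lines of code. This is an illustrative sketch, not code from the article; the function names are hypothetical.

```python
import math
import random

def cfa_item_score(tau_i, lambda_ij, xi_j, sd_delta=1.0):
    """CFA measurement model: x_i = tau_i + lambda_ij * xi_j + delta_i.
    The manifest item score is a LINEAR function of the factor score."""
    return tau_i + lambda_ij * xi_j + random.gauss(0.0, sd_delta)

def rasch_p(beta_nu, delta_i):
    """Dichotomous Rasch model: P(a_vi = 1) = e^(b - d) / [1 + e^(b - d)].
    The response probability is a LOGISTIC function of beta - delta."""
    return math.exp(beta_nu - delta_i) / (1.0 + math.exp(beta_nu - delta_i))
```

When the person and item locations coincide (β_ν = δ_i), the Rasch model gives an endorsement probability of exactly 0.5; a person located above the item endorses it with probability greater than 0.5.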
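The information concept underlying the discrimination, targeting, and standard-error rows can be sketched as follows (hypothetical function names; an illustration under the dichotomous Rasch model, not the authors' code).

```python
import math

def rasch_p(beta, delta):
    """Dichotomous Rasch probability of endorsement."""
    return math.exp(beta - delta) / (1.0 + math.exp(beta - delta))

def item_information(beta, delta):
    """Information an item yields for a person: P(a=1) * P(a=0).
    Maximal (0.25) when the item is on target, i.e. beta == delta."""
    p = rasch_p(beta, delta)
    return p * (1.0 - p)

def person_standard_error(beta, item_deltas):
    """SE of the person measure = 1 / sqrt(sum of item information).
    Off-target items contribute little information, inflating the SE."""
    return 1.0 / math.sqrt(sum(item_information(beta, d) for d in item_deltas))
```

A ten-item test targeted at the person (all δ near β) yields a visibly smaller standard error than the same number of items located three logits away, which is the point of the targeting row.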
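Wright's (1996) point that the raw score is not a linear measure can be illustrated with the log-odds of the raw score. Note this is only a first approximation to a Rasch person estimate (actual estimates also depend on the item locations); the function name is hypothetical.

```python
import math

def raw_score_log_odds(r, L):
    """Log-odds of raw score r on L dichotomous items: ln(r / (L - r)).
    Defined only for 0 < r < L (zero and perfect scores are extreme)."""
    return math.log(r / (L - r))

# Equal raw-score steps are NOT equal logit steps:
middle_step = raw_score_log_odds(6, 10) - raw_score_log_odds(5, 10)   # ~0.41 logits
extreme_step = raw_score_log_odds(9, 10) - raw_score_log_odds(8, 10)  # ~0.81 logits
```

A one-point gain near the extremes of the raw-score range corresponds to roughly twice the logit distance of a one-point gain in the middle, which is why raw scores must be transformed before being treated as linear measures.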
For more information:
The Impact of Rasch Item Difficulty on Confirmatory Factor Analysis, S.V. Aryadoust, Rasch Measurement Transactions, 2009, 23:2, p. 1207
Confirmatory factor analysis vs. Rasch approaches: Differences and Measurement Implications, M.T. Ewing, T. Salzberger, R.R. Sinkovics, Rasch Measurement Transactions, 2009, 23:1, p. 1194-5
Conventional factor analysis vs. Rasch residual factor analysis, B.D. Wright, Rasch Measurement Transactions, 2000, 14:2, p. 753
Rasch Analysis First or Factor Analysis First? J.M. Linacre, Rasch Measurement Transactions, 1998, 11:4, p. 603
Factor analysis and Rasch analysis, R.E. Schumacker, J.M. Linacre, Rasch Measurement Transactions, 1996, 9:4, p. 470
Too many factors in Factor Analysis? T.G. Bond, Rasch Measurement Transactions, 1994, 8:1, p. 347
Comparing factor analysis and Rasch measurement, B.D. Wright, Rasch Measurement Transactions, 1994, 8:1, p. 350
Factor analysis vs. Rasch analysis of items, B.D. Wright, Rasch Measurement Transactions, 5:1, p. 134
The URL of this page is www.rasch.org/rmt/rmt231g.htm