The Consequences for Measurement of Mindsets

Dweck's (2000, 2006) social-cognitive model of motivation, personality, and development leads to the realization that individuals' behaviors and beliefs are often shaped by implicit theories about the malleability of intelligence. For example, implicit self-theories of intelligence have been shown to influence

• individuals' responses to achievement challenges (Dweck & Leggett, 1988; Henderson & Dweck, 1990),

• their conceptions of morality as duty-based versus human-rights-based, and their overall assumptions about the kind of person someone is and how they will behave (Chiu, Dweck, Tong, & Fu, 1997),

• differences in degree of social stereotyping (Levy, Stroessner, & Dweck, 1998), and

• individuals' predictions of the future behavior of another individual (Chiu, Hong, & Dweck, 1997).

In the course of conducting research aimed at the quantification of Dweck's constructs, it became apparent that the different ways self-theories influence responses to achievement challenges suggest an illuminating approach to understanding the history of measurement in psychology.

The implicit self-theory research demonstrates that an individual usually responds to life experiences through the unconscious lens of one of two theories about intelligence, fixed or malleable (Dweck, 2000, 2006). Dweck and colleagues carefully articulate that, though the fixed "Entity" and malleable "Growth" mindsets present different ways of approaching a variety of experiences, neither is right or wrong; they simply differ in their consequences.

The fixed theory sees intelligence as a quantity received through biological inheritance that cannot be enlarged by experience. In other words, a person's ability is set in stone at birth: although one may learn new things, the amount of available intelligence remains the same. The malleable theory (referred to here as the "growth theory") sees intelligence as a function of experience and effort. A person's ability may be increased over the course of a lifetime through effort and experience; one can increase intelligence through trial-and-error problem solving in the course of learning new things. Research has shown that holding one or the other of these theories has specific consequences when the individual faces an achievement challenge.

Self-Theories of Intelligence

Fixed theorists usually exhibit a need to protect their perception of the amount of intelligence they possess. When given a choice, fixed theorists select achievement challenges they perceive as not likely to threaten their image of themselves as successful. So, they rarely choose to take a risk in order to learn. Instead, they prefer to repeat a challenge known to be within their capacity to succeed. Fixed theorists center their concerns on how they appear as they perform; they are more likely to be concerned with looking smart than with entertaining the possibility of failure in order to learn something new.

Growth theorists usually associate their intelligence with the amount of effort they invest in achievement challenges. Consequently, when given a choice, they more often select achievement challenges that offer an opportunity to learn something new rather than ones that repeat a previous success. They see trial-and-error problem solving as a confirmation of intelligence. When faced with achievement challenges, growth theorists' concerns center on matching effort to the challenge: big challenges require big efforts.

Dweck and colleagues (Dweck, 2000, 2006) find differences in the way these two implicit self-theories influence individuals' explanations of, and responses to, failure. Fixed theorists are likely to explain failure in terms of external circumstances beyond their control. Typical comments in the face of failure might be "I've never been good at that," "I'm just not that smart," or "That teacher is just too demanding," blaming failure on their fixed amount of intelligence or on the level of the challenge and exhibiting a "helpless" response. Growth theorists, on the other hand, typically say, "Okay, so that way doesn't work," or "If I had just worked a little harder," suggesting that failure can be averted by moving beyond what doesn't work to calculate another approach, predicting success as a function of effort.

Fixed theorists are known to develop maladaptive behaviors when confronted with achievement challenges that keep them from solving problems they had previously thought soluble (Dweck, 2000, 2006). These maladaptive behaviors bear striking resemblances to the responses of some theorists to the problems of psychological measurement.

Measurement in the Human Sciences: The View from Two Mindsets

Psychology and the human sciences faced a particularly salient measurement puzzle when fundamental measurement as defined by Campbell (1920) seemed impossible for these fields.

Without fundamental measurement there could be no derived measurement and so, according to Campbell's theory, psychology was without measurement. What is more, the task of locating analogues of numerical addition pertinent to psychological measurement appeared grim. (Michell, 1990)

How might implicit self-theories of intelligence help us understand the ways in which these problems were addressed in the history of science? In other words, construing the situation as an "achievement challenge," how might the historical approaches taken by different individuals to the problem of psychological measurement be interpreted in light of the implicit self-theory constructs? Let's look at two such individuals.

S. S. Stevens: The Road Most Traveled

The problem of measurement filled many in the field of psychology with consternation, and none more than S. S. Stevens. Stevens attacked the problem and constructed the road most traveled by human scientists since. His approach was to redefine measurement operationally.

In an autobiographical article Stevens wrote, "My own central problem throughout the 1930's was measurement, because the quantification of the sensory attributes seemed impossible unless the nature of measurement could be properly understood" (1974, p. 409). The theory of measurement that he came to propose owed much to Campbell's, but it also owed a lot to what was then a fledgling philosophical movement, operationalism. (Michell, 1990, p. 15)

In drawing on operationalism, Stevens' response to the measurement challenge exhibited the characteristics of a "fixed theory of intelligence": a fixed mindset.

Remember, the fixed mindset looks for a way to appear smart or rigorous with the least risk. Failure would be a flagrant denial of innate intelligence or, in this case, of the value of a putative science. So Stevens' approach to the achievement challenge was to retreat to a challenge posing a diluted risk of failure.

Thus, in his 1935 Psychological Review article, "The operational definition of psychological terms," Stevens set the stage for success in measuring by establishing new guidelines. His proposal set forth a program holding that as long as you define your operations and keep true to those definitions, you have a "measurement." Fundamental measurement as defined by Campbell was abandoned, due to circumstances beyond the control of psychologists, in favor of a solution that guaranteed the appearance of success.

As in Dweck's mindset research, the consequence of choosing the immediately solvable problem is that it unnecessarily limits the learning boundaries of those involved. This is seen in the way research reporting results defined through Stevens' model consistently restricts itself to descriptive statistics with context-specific applications. The consequence of this mindset in human science research is that the utility of reported findings is restricted to instrument- and sample-specific results. The alternative, although a road less traveled, yields quite different results.

G. Rasch: The Road Less Traveled

Rasch (1960) was also concerned with the problem of measurement in the human sciences, particularly as it affected the measurement of individuals. In the preface to his Probabilistic Models for Some Intelligence and Attainment Tests, he presents the problem as one of requiring models that demand that the result of an encounter between an instrument and an individual depend only on the individual's ability and the instrument's difficulty.

Symmetrically, it ought to be possible to compare stimuli belonging to the same class - "measuring the same thing" - independent of which particular individuals within a class considered were instrumental for comparison.

This is a huge challenge, but once the problem has been formulated it does seem possible to meet it. (Rasch, 1960, p. xx)

In other words, he advocated models that maintain the requirements of fundamental measurement: the "calibration of the measuring instruments must be independent of those objects that happen to be used for calibration. Second, the measurement of objects must be independent of the instrument that happens to be used for measuring" (Wright & Stone, 1979, p. xii). And it is apparent that Rasch did so with full knowledge of the scope of the challenge.

Embracing the challenge like a growth mindset theorist, he saw the solution as a matter of effort, a trial-and-error endeavor. Unlike Stevens, Rasch engaged the challenge at its heart rather than changing the nature of the problem to something less formidable. Rasch's effort produced a set of probabilistic models making human science variables amenable to fundamental measurement requirements. Results can be reported in invariant units, creating a common language among interested parties. This releases results from context-specific applications and is crucial to meaningful, linked conversations among various interested parties, such as teachers, students, parents, administrators, researchers, and accreditors.
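The separability Rasch demanded can be sketched numerically. In his dichotomous model, the probability that a person succeeds on an item depends only on the difference between the person's ability and the item's difficulty, so the log-odds of success is simply that difference, and comparing two items via their log-odds cancels the person parameter entirely. The sketch below (plain Python, with illustrative ability and difficulty values chosen for this example) shows that person-free item comparison:

```python
import math

def rasch_p(theta, b):
    """Dichotomous Rasch model: probability that a person of
    ability theta succeeds on an item of difficulty b."""
    return math.exp(theta - b) / (1 + math.exp(theta - b))

def log_odds(p):
    """Log-odds of a probability; under the Rasch model this
    recovers theta - b exactly."""
    return math.log(p / (1 - p))

# Two items of different difficulty (illustrative values)
b1, b2 = -0.5, 1.5

# The log-odds difference between the two items is the same for
# every person who takes them: the person parameter cancels, so
# the item comparison is sample-free.
for theta in (-2.0, 0.0, 3.0):
    diff = log_odds(rasch_p(theta, b1)) - log_odds(rasch_p(theta, b2))
    print(round(diff, 6))  # always b2 - b1 = 2.0
```

This algebraic cancellation is what frees calibrations from the particular sample at hand and, symmetrically, frees person measures from the particular items used.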

In this context, the historian of science, Bruno Latour, remarks, "Every time you hear about a successful application of science, look for the progressive extension of a network" (Latour, 1987, p. 249). Choosing methods capable of supporting expanding networks of people communicating in common languages about the same things breaks the silence of non-connected networks enforced by analyses that contextually imprison research results.

When a common language is mobilized within a network of shared signification, with its terms and symbols everywhere recognized and accepted by those trained in reading them, meaningful communication is achieved, shared understandings and histories are more easily accumulated, and collective productivity is markedly enhanced. (Fisher, 2003, p. 801)

Those following Rasch's model (and other models like his) open learning boundaries. They extend what is possible for communities or networks of human science researchers to accomplish. They participate in the fascinating power of shared knowledge by way of preferring effort over retreat - a road too long less traveled.

Sharon G. Solloway

Campbell, N. R. (1920). Physics, the elements. Cambridge: Cambridge University Press.

Chiu, C., Dweck, C. S., Tong, J. Y., & Fu, J. H. (1997). Implicit theories and conceptions of morality. Journal of Personality and Social Psychology, 73(5), 923-940.

Chiu, C., Hong, Y., & Dweck, C. S. (1997). Lay dispositionism and implicit theories of personality. Journal of Personality and Social Psychology, 73(1), 19-30.

Dweck, C. S. (2000). Self-theories: Their role in motivation, personality, and development. Philadelphia, PA: Psychology Press.

Dweck, C. S. (2006). Mindset: The new psychology of success. New York: Random House.

Dweck, C. S., & Leggett, E. L. (1988). A social-cognitive approach to motivation and personality. Psychological Review, 95(2), 256-273.

Fisher, W. P., Jr. (2003, December). Mathematics, measurement, metaphor, metaphysics: Part II. Accounting for Galileo's "fateful omission." Theory & Psychology, 13(6), 790-828.

Henderson, V., & Dweck, C. S. (1990). Motivation and achievement. In S. S. Feldman & G. R. Elliott (Eds.), At the threshold: The developing adolescent (pp. 308-329). Cambridge, MA: Harvard University Press.

Latour, B. (1987). Science in action: How to follow scientists and engineers through society. New York: Cambridge University Press.

Levy, S. R., Stroessner, S. J., & Dweck, C. S. (1998). Stereotype formation and endorsement: The role of implicit theories. Journal of Personality and Social Psychology, 74(6), 1421-1436.

Michell, J. (1990). An introduction to the logic of psychological measurement. Hillsdale, New Jersey: Lawrence Erlbaum Associates.

Rasch, G. (1960). Probabilistic models for some intelligence and attainment tests. Copenhagen: Danmarks Paedagogiske Institut. [Reprinted, 1980, Chicago, IL: University of Chicago Press.]

Wright, B. D., & Stone, M. H. (1979). Best test design. Chicago: MESA Press.

The Consequences for Measurement of Mindsets, Solloway, S.G. … Rasch Measurement Transactions, 2006, 20:3 p. 1066-8
