Reliability
Internal Consistency
Cronbach alpha coefficients for the eight domain scales of the SDQ-I range from .80 to .90 in the normative sample as reported in the manual (N = 3,562; Marsh, 1992a). The alpha coefficient for the global scale was .81, based on 739 participants (Marsh, 1992a). The range of alpha coefficients reported in the test manual for the 10 domain scales of the SDQ-II was .83 to .90, and the coefficient for the global self-esteem estimate was .88 (N = 4,494; Marsh, 1992b). For the SDQ-III, the range of alpha coefficients reported for the 12 domain scales was .75 to .95 (Marsh & O’Neill, 1984). Similar coefficients for the SDQ scales (ranging from .76 to .95 with a mean of .90) were reported in Byrne (1996, pp. 199–200). In addition, Boyle (1994) provided a favorable evaluation of the SDQ-II and suggested that the item content was not so narrow or redundant as to generate concerns about attenuated predictive validity coefficients (see Boyle, 1991 for a detailed discussion of this issue).
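For readers who want to reproduce this kind of internal consistency estimate from raw item responses, a minimal sketch of coefficient alpha is given below. The data and the eight-item scale are simulated placeholders, not SDQ data; only the formula follows the standard definition, alpha = k/(k − 1) × (1 − Σ item variances / variance of total score).

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha for a respondents-by-items matrix of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical example: 200 respondents answering an 8-item domain scale on a
# 5-point Likert-type format (simulated responses, for illustration only).
rng = np.random.default_rng(0)
true_score = rng.normal(size=(200, 1))
responses = np.clip(np.round(3 + true_score + rng.normal(scale=0.8, size=(200, 8))), 1, 5)
print(round(cronbach_alpha(responses), 2))
```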
Test–Retest
Test–retest reliability information for the SDQ-I is available in Blascovich and Tomaka (1991, p. 145). Stability coefficients for the SDQ-I scales across an interval of about six months were reported in Marsh, Smith, Barnes, and Butler (1983). The mean stability coefficient was .61 for a combined sample of 671 students in fourth to sixth grade. Stability coefficients for the SDQ-II scales across a 7-week interval for 137 female students were reported in Marsh and Peart (1988), ranging from .72 to .88 (median = .79). Similar levels of reliability were reported for the SDQ-II short-form (Marsh et al., 2005a,b) using a sample of 3,731 adolescents who completed assessments at three time points during the academic year. The average stability coefficients for the short-form ranged from .62 (Time 1 to Time 3) to .72 (Time 2 to Time 3), and these values remained largely unchanged when coefficients were corrected for measurement error (.68 and .79). Short-term stability coefficients for the SDQ-III have been reported for 361 participants in an Outward Bound course (Marsh, Richards, & Barnes, 1986), with stability coefficients across a one-month interval ranging from .77 to .94 (median = .87).
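The ‘corrected for measurement error’ values above reflect the standard correction for attenuation, in which an observed stability coefficient is divided by the square root of the product of the reliabilities of the two measurements. As a purely illustrative calculation (the reliability of .91 is an assumed value, not one reported by Marsh and colleagues):

$$
r_{\text{corrected}} = \frac{r_{\text{observed}}}{\sqrt{r_{xx}\, r_{yy}}},
\qquad \text{e.g.,}\quad \frac{.62}{\sqrt{.91 \times .91}} \approx .68 .
$$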
Validity
Marsh and colleagues have accumulated much validity information for the SDQ instruments. Good starting points for this literature are the extensive reviews by Boyle (1994), Byrne (1996), Marsh (1990a), and Marsh and Craven (2006).
Convergent/Concurrent
Scores on the general self-esteem scale of the SDQ-III are strongly associated with the RSE. For example, Marsh, Byrne, and Shavelson (1988) reported a correlation of .79 between the RSE and the SDQ-III general scale based on a dataset of 991 Canadian high school students (.87 when adjusted for attenuation due to measurement error). Marsh (1992a) reported that the SDQ-I general scale correlated .57 with the Harter global scale in a sample of 508 Australian students in 7th through 9th grades (.69 when corrected for attenuation). Further evidence based on specific domain correlations suggests that the SDQ and Harter scales measure similar content: the corresponding scales measuring physical self-concept correlated .67 and the scales measuring peer relations correlated .74 (Marsh, 1992a). The correlation between self-reported and inferred general self-esteem (i.e., self-esteem rated by an outside informant) was .41, and .58 for the domain scales of the SDQ-III (Marsh, 1992b). Marsh and his colleagues have linked academic achievement to specific domains of the SDQs (e.g., Marsh & Craven, 2006). For example, Marsh et al. (1988) reported a correlation of .55 between mathematics self-concept and mathematics achievement test scores and a correlation of .24 between verbal self-concept and verbal achievement. Marsh and Peart (1988) reported that a performance-based composite of physical fitness correlated positively with physical ability self-concept scores from the SDQ-II (r = .45) in an Australian sample of 137 8th grade girls.
Divergent/Discriminant
As an example of divergent validity, the SDQ general self scale correlates negatively with measures of negative mood states, including anxiety as measured by Spielberger’s State–Trait Anxiety Inventory (r = −.51, N = 130, Marsh, 1992b).
Construct/Factor Analytic
Marsh initially used item pairs or parcels in all exploratory and confirmatory factor analyses (see Byrne, 1996) on the logic that item pairs are more reliable than single items and offer some convenience to researchers by reducing the number of indicators. Marsh (1992a) reported the results of an initial exploratory factor analysis of the SDQ-I with an oblique rotation based on 3,562 responses from his normative database. Factor coefficients were large for item parcels connected to the target factor (ranging from .46 to .85, median = .73) and small for item parcels connected to non-target factors (ranging from −.02 to .19, median = .03). Higher-order factor analysis of the intercorrelations of the SDQ-I domain scales indicated that the non-academic domains were separate from the reading and math domains, and the reading and math factors were virtually uncorrelated (r = .05). Exploratory factor analysis results for the SDQ-II (with oblique rotation) based on a sample of 901 students (Marsh, 1992b) were similar to those for the SDQ-I (Marsh, 1992a). Factor coefficients were large for item parcels connected to the target factor (ranging from .48 to .80, median = .68) and small for item parcels connected to non-target factors (ranging from −.12 to .27, median = .03). Correlations between latent factors ranged from −.03 to .39 with a median of .15. The structure of a French language translation of the SDQ-II was also tested with a confirmatory factor analysis on a sample of 480 French students aged 15 to 17 years. The results with the French versions were replicated using a separate sample of 903 French adolescents (Marsh, Parada, & Ayotte, 2004). Findings for the SDQ-III are also in line with the results for the SDQ-I and SDQ-II scales, supporting the factorial integrity of the measure (see Byrne, 1996; Vispoel, 1995). The overall findings from the extensive series of exploratory and confirmatory factor analyses provide strong support for the internal structure of the SDQ measures.
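To make the parceling logic concrete, the sketch below averages adjacent item pairs into parcels and submits the parcels to an exploratory factor analysis with an oblique (oblimin) rotation. It is illustrative only: the data are simulated, the three-scale layout is invented, and it assumes the third-party factor_analyzer package rather than the software Marsh used.

```python
import numpy as np
from factor_analyzer import FactorAnalyzer  # third-party: pip install factor_analyzer

rng = np.random.default_rng(1)

# Simulated responses: 500 respondents, three hypothetical 8-item domain scales
# (this layout is illustrative and does not reproduce the actual SDQ item sets).
n, items_per_scale, n_scales = 500, 8, 3
factors = rng.normal(size=(n, n_scales))
items = np.concatenate(
    [factors[:, [f]] + rng.normal(size=(n, items_per_scale)) for f in range(n_scales)],
    axis=1)

# Form item-pair parcels by averaging adjacent items within each scale,
# mirroring the logic that parcels are more reliable than single items.
parcels = items.reshape(n, -1, 2).mean(axis=2)   # 12 parcels, 4 per scale

# Exploratory factor analysis of the parcels with an oblique rotation.
fa = FactorAnalyzer(n_factors=n_scales, rotation="oblimin")
fa.fit(parcels)
print(np.round(fa.loadings_, 2))  # pattern coefficients for each parcel
```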
Criterion/Predictive
Marsh and Craven (2006) found that academic outcomes were predicted by academic self-concept scores.
Marsh (1990b) previously had reported that academic self-concept predicted future grades after controlling for
prior grades (β values of about .20). These predictive effects were replicated in Germany (Marsh et al., 2005a,b) and were con-
sistent with the meta-analytic review by Valentine, DuBois, and Cooper (2004) , who reported an overall standard-
ized regression coefficient of .12 for measures of academic self-belief predicting later achievement, with initial
achievement as a covariate (p. 127). Similar findings have been reported for research focused on physical self-
concept and physical fitness composites ( Marsh & Craven, 2006 ).
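A minimal sketch of the type of analysis behind these claims: later achievement regressed on academic self-concept while controlling for prior achievement, with all variables standardized so that the self-concept coefficient plays the role of the β values cited above. The data and effect sizes are simulated, not Marsh’s.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000

# Simulated variables (illustrative only): self-concept and achievement covary.
prior_ach = rng.normal(size=n)
self_concept = 0.5 * prior_ach + rng.normal(scale=0.9, size=n)
later_ach = 0.55 * prior_ach + 0.20 * self_concept + rng.normal(scale=0.8, size=n)

def z(x):
    """Standardize a variable so regression weights are standardized betas."""
    return (x - x.mean()) / x.std(ddof=1)

X = np.column_stack([np.ones(n), z(prior_ach), z(self_concept)])
betas, *_ = np.linalg.lstsq(X, z(later_ach), rcond=None)
print({"prior_achievement": round(float(betas[1]), 2),
       "academic_self_concept": round(float(betas[2]), 2)})
```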
Location
Marsh, H.W. (1992a). Self Description Questionnaire (SDQ) I: A theoretical and empirical basis for the measurement of
multiple dimensions of preadolescent self-concept. An interim test manual and research monograph. Macarthur, New
South Wales, Australia: University of Western Sydney.
Marsh, H.W. (1992b). Self Description Questionnaire (SDQ) II: A theoretical and empirical basis for the measurement
of multiple dimensions of adolescent self-concept. A test manual and research monograph. Macarthur, New South Wales,
Australia: University of Western Sydney.
Marsh, H.W. (1992c). Self Description Questionnaire (SDQ) III: A theoretical and empirical basis for the measurement
of multiple dimensions of late adolescent self-concept. An interim test manual and research monograph. Macarthur, New
South Wales, Australia: University of Western Sydney.
Note: Items, Test Manuals, and information about permission are available: www.uws.edu.au/cppe/research/
instruments (Retrieved May 20, 2014).
Global items are listed in Marsh et al. (1985a, p. 586) .
Results and Comments
Byrne (1996) noted that the three SDQ measures are the most ‘well-validated’ measures for their respective
age groups (SDQ-I, p. 117; SDQ-II, p. 153; SDQ-III, p. 204). Likewise, Boyle (1994) noted that the SDQ measures
‘should be among the instruments of choice for researchers wanting to measure well-defined multiple measures
of self-concept’ (p. 641). The SDQ measures are extraordinarily well researched and documented. Each SDQ mea-
sure has a large and detailed manual that summarizes a number of studies using impressive sample sizes and
sophisticated statistical analyses. The most likely alternative to the SDQ would be the family of measures created
by Harter and her colleagues. However, as noted previously, the response format for the SDQ items is probably
easier to use for most participants than the Harter response format.
The time needed to administer the SDQ-I and the Harter scales for children is roughly equivalent, despite the added length of the SDQ-I: the two-step forced-choice procedure associated with the Harter measures adds time to the administration of those inventories. This two-step format may also create problems for some respondents, perhaps contributing to measurement error. Beyond this concern with response format, the SDQ
produces separate reading and mathematics self-concept scores. The multiple domains of the academic self-
concept assessed by the SDQ may be an advantage over the Harter inventories. Finally, the sheer amount of psy-
chometric detail on the SDQ measures tends to overshadow that for the Harter measures.
The one area where the Harter measures may hold an advantage over the SDQ concerns the multidimensional
assessment of the adult self-concept for individuals who are no longer embedded within an academic context.
The Harter adult measure captures a wider range of domains than the SDQ-III and thus might be more
suitable for non-college students. These additional domains include scales related to work and parenting, con-
cerns which are relevant for many adults past their college years. We also acknowledge that the Harter measures
appear to have generated more citations than the SDQ measures.
As with the Harter scales, there are several notes of caution to consider before deciding to use the SDQ scales. First, the number of response scale options changes across the SDQ-I, SDQ-II, and SDQ-III, making longitudinal research across a wide range of age groups more challenging; for researchers studying more than one developmental period, there are potential advantages to adopting a standard response set across measures. Second, some of the items and their specific wording vary across the SDQ-I, SDQ-II, and SDQ-III. Third, there might be important domains not well captured by the SDQ, and there are domain-specific scales on the SDQ that need more attention. Similar to
the Harter scales, there might be too many domains on some SDQ versions. These domains add length without neces-
sarily providing critically important information. Thus, more work is needed to establish the validity and necessity of
all the domains. Much of the existing work has focused on achievement-related domains. Fourth, there seems to be
room to develop an SDQ-IV to tap more of the domains of life relevant to adults outside of a college or university set-
ting. Fifth, the length of the SDQ is a potential concern but development of the short-form of the SDQ-II seems prom-
ising (see Marsh et al., 2005a,b ). The final concern is that if researchers only want to measure global self-esteem, then
it would be hard to recommend the SDQ over the RSE, given the latter’s ongoing popularity.
SCALES OF THE SELF-DESCRIPTION QUESTIONNAIRE

Sample items for each scale, by version of the Self Description Questionnaire (SDQ-I: preadolescent; SDQ-II: adolescent; SDQ-III: late adolescent/college age).

General
  SDQ-I: A lot of things about me are good.
  SDQ-II: Overall, I have a lot to be proud of.
  SDQ-III: Overall, I am pretty accepting of myself.

Physical abilities
  SDQ-I: I can run fast.
  SDQ-II: I am good at things like sports, gym, and dance.
  SDQ-III: I am a good athlete.

Physical appearance
  SDQ-I: I like the way I look.
  SDQ-II: I am good looking.
  SDQ-III: I have a physically attractive body.

Peer relationships
  SDQ-I: I am easy to like.

Same-sex relationships
  SDQ-II: I make friends easily with members of my own sex.
  SDQ-III: I share lots of activities with members of the same sex.

Opposite-sex relationships
  SDQ-II: People of the opposite sex whom I like, don’t like me.
  SDQ-III: I get a lot of attention from members of the opposite sex.

Parent relationships
  SDQ-I: My parents like me.
  SDQ-II: I get along well with my parents.
  SDQ-III: My parents understand me.

Emotional stability
  SDQ-II: I get upset easily.
  SDQ-III: I worry a lot.

Honesty/trustworthiness
  SDQ-II: Honesty is very important to me.
  SDQ-III: I am a very honest person.

Spiritual values/religion
  SDQ-III: I am a spiritual/religious person.

Reading/verbal
  SDQ-I: I am good at reading.
  SDQ-II: I am hopeless in English classes.
  SDQ-III: I have good reading comprehension.

Mathematics
  SDQ-I: I learn things quickly in mathematics.
  SDQ-II: I am good at mathematics.
  SDQ-III: I am quite good at mathematics.

General school or academics
  SDQ-I: I enjoy doing work in all school subjects.
  SDQ-II: I am too stupid at school to get into a good university.
  SDQ-III: I am good at most academic subjects.

Problem solving
  SDQ-III: I am an imaginative person.

Note: Reproduced with permission.
INSTRUCTIONS FOR THE SDQ-II
PLEASE READ THESE INSTRUCTIONS FIRST
This is not a test - there are no right or wrong answers.
This is a chance for you to look at how you think and feel about yourself. It is important that you:
are honest
give your own views about yourself, without talking to others
report how you feel NOW (not how you felt at another time in your life, or how you might feel
tomorrow)
Your answers are confidential and will only be used for research or program development. Your answers will not
be used in any way to refer to you as an individual.
Use the six-point scale to indicate how true (like you) or how false (unlike you) each statement over the page is as a description of you. Please do not leave any statements blank.
1 = False (Not like me at all; it isn’t like me at all)
2 = Mostly false
3 = More false than true
4 = More true than false
5 = Mostly true
6 = True (This statement describes me well; it is very much like me)
Statement False True
01. MATHEMATICS is one of my best subjects 1 2 3 4 5 6
02. Nobody thinks that I am good looking 1 2 3 4 5 6
03. Overall, I have a lot to be proud of 1 2 3 4 5 6
Notes:
A scoring key is provided in the user manual: www.uws.edu.au/cppe/research/instruments/sdqii
The entire SDQ-II is available here: www.uws.edu.au/__data/assets/pdf_file/0005/361886/SDQII_Inst.pdf
The entire SDQ-I is available here: www.uws.edu.au/data/assets/pdf_file/0008/361871/SDQI_Inst.pdf
Sample items from the SDQ-I:
In general, I like being the way I am. (General-Self Scale)
I am good looking. (Physical Appearance Scale)
I am good at MATHEMATICS. (Mathematics Scale)
The entire SDQ-III is available here: www.uws.edu.au/data/assets/pdf_file/0020/361901/SDQIII_Inst.pdf
Sample items from the SDQ-III:
Overall, I have a lot of respect for myself. (General Esteem Scale)
I have a good body build. (Physical Appearance Scale)
I like most academic subjects. (Academic Scale)
Source: Reproduced with permission.
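The published scoring keys are available at the URLs above. As a generic illustration of how SDQ-style responses are usually handled, the sketch below reverse-scores negatively worded items (e.g., item 02 in the excerpt, ‘Nobody thinks that I am good looking’) on the 6-point scale and then averages items into scale scores; the item-to-scale mapping in the code is hypothetical, not the published key.

```python
from statistics import mean

# Hypothetical item metadata; the real assignments come from the published key.
NEGATIVE_ITEMS = {2}                                        # e.g., item 02 above
SCALES = {"math": [1], "appearance": [2], "general": [3]}   # illustrative only

def reverse(response: int, points: int = 6) -> int:
    """Reverse-score a response on a 1..points scale (1 -> 6, 6 -> 1)."""
    return points + 1 - response

def score(responses: dict[int, int]) -> dict[str, float]:
    keyed = {item: reverse(r) if item in NEGATIVE_ITEMS else r
             for item, r in responses.items()}
    return {scale: mean(keyed[i] for i in items) for scale, items in SCALES.items()}

print(score({1: 5, 2: 2, 3: 6}))   # {'math': 5, 'appearance': 5, 'general': 6}
```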
FUTURE RESEARCH DIRECTIONS
Although the current self-report measures of self-esteem appear to have strong psychometric support (especially the RSE and the SDQ family of instruments), there are important future directions for assessing self-esteem. Foremost, more work is needed on creating and evaluating implicit measures of self-esteem (see Buhrmester et al., 2011). It would be valuable to have a set of implicit measures that were stable and showed good convergent validity with each other and good predictive validity. Such instruments could allow researchers to address interesting and important questions such as whether individuals with heightened levels of narcissism have underlying (and perhaps unacknowledged) sets of concerns about their levels of self-worth. Likewise, ‘good’ implicit measures could help researchers better understand the psychological correlates of potential discrepancies and interactions between explicit and implicit self-esteem. On a purely practical level, implicit measures may help researchers address concerns with response biases with explicit self-report measures of self-esteem.
It might be reasonable to ask whether there is a need for another survey-based measure of self-esteem.
One possibility might be to create a single inventory that has a global scale and taps the major domains of life
such as physical appearance, achievement contexts (school and work), and relationships (parents, friends, roman-
tic partners) using the same set of items and response formats for all ages. Such a standardized inventory could
facilitate more life-span research into self-esteem and the self-concept because it would be straightforward to test
measurement invariance and there would be no need to adopt different forms for different ages. Such a measure
could help address issues such as when during human development a global sense of the self first emerges.
We have started such an effort and we refer interested readers to our website for more details: www.selflab.org/lse (Retrieved January 7, 2014).
References
Ackerman, R. A., Brecheen, C., Corker, K. A., Donnellan, M. B., & Witt, E. A. (2013). [The College Life Study]. Unpublished raw data. University of Texas, Dallas. Further information is available from Dr. Robert Ackerman ([email protected]) or M. Brent Donnellan ([email protected]).
Ackerman, R. A., & Donnellan, M. B. (2013). Evaluating self-report measures of narcissistic entitlement. Journal of Psychopathology and
Behavioral Assessment ,35, 460/C0474.
Ahadi, S., & Diener, E. (1989). Multiple determinants and effect size. Journal of Personality and Social Psychology ,56, 398/C0406.
Aidman, E. V. (1998). Analyzing global dimensions of self-esteem: Factorial structure and reliability of the Self-Liking/Self-Competence Scale.
Personality and Individual Differences ,24, 735/C0737.
Ames, D. R., Rose, P., & Anderson, C. P. (2006). The NPI-16 as a short measure of narcissism. Journal of Research in Personality ,40, 440/C0450.
Barrick, M. R., & Mount, M. K. (1996). Effects of impression management and self-deception on the predictive validity of personality con-
structs. Journal of Applied Psychology ,81, 261/C0272.
Baumeister, R. F., Campbell, J. D., Krueger, J. I., & Vohs, K. E. (2003). Does high self-esteem cause better performance, interpersonal success,
happiness, or healthier lifestyles? Psychological Science in the Public Interest ,4,1/C044.
Blascovich, J., & Tomaka, J. (1991). Measures of self-esteem. In J. P. Robinson, P. R. Shaver, & L. S. Wrightsman (Eds.), Measures of personality
and social psychological attitudes (pp. 115 /C0160). New York: Academic.
Boivin, M., Vitaro, F., & Gagnon, C. (1992). A reassessment of the self-perception profile for children: Factor structure, reliability, and con-
vergent validity of a French version among second through six grade children. International Journal of Behavioral Development ,15,
275/C0290.
Bornstein, M. H., Hahn, C., & Haynes, O. M. (2010). Social competence, externalizing, and internalizing behavioral adjustment from early
childhood through early adolescence: Developmental cascades. Development and Psychopathology ,22, 717/C0735.
Bosson, J. K., & Swann, W. B. (1999). Self-liking, self-competence, and the quest for self-verification. Personality and Social Psychology Bulletin ,
25, 1230/C01241.
Bosson, J. K., Swann, W. B., Jr., & Pennebaker, J. W. (2000). Stalking the perfect measure of implicit self-esteem: The blind men and the
elephant revisited? Journal of Personality and Social Psychology ,79, 631/C0643.
Boyle, G. J. (1991). Does item homogeneity indicate internal consistency or item redundancy in psychometric scales? Personality and Individual
Differences ,12, 291/C0294.
Boyle, G. J. (1994). Self-Description Questionnaire II. In D. J. Keyser, & R. C. Sweetland (Eds.), Test critiques (Vol. 10, pp. 632 /C0643). Kansas
City, MO: Test Corporation of America.
Brown, R. P., & Zeigler-Hill, V. (2004). Narcissism and the non-equivalence of self-esteem measures: A matter of dominance?. Journal of
Research in Personality ,38, 585/C0592.
Buhrmester, M. D., Blanton, H., & Swann, W. B., Jr. (2011). Implicit self-esteem: Nature, measurement, and a new way forward. Journal of
Personality and Social Psychology ,100, 365/C0385.
Burwell, R. A., & Shirk, S. R. (2006). Self-processes in adolescent depression: The role of self-worth contingencies. Journal of Research on
Adolescence ,16, 479/C0490.
Byrne, B. M. (1988). Measuring adolescent self-concept: Factorial validity and equivalency of the SDQ III across gender. Multivariate Behavioral
Research ,23, 361/C0375.
Byrne, B. M. (1996). Measuring self-concept across the lifespan . Washington, DC: American Psychological Association.
Chen, Q., Hughes, J. N., Liew, J., & Kwok, O. (2010). Joint contributions of peer acceptance and peer academic reputation to achievement in
academically at risk children. Journal of Applied Developmental Psychology ,31, 448/C0459.
Cicero, D. C., & Kerns, J. G. (2011). Is paranoia a defense against or an expression of low self-esteem? European Journal of Personality ,25,
326/C0335.
Coopersmith, S. (1967). The antecedents of self-esteem . San Francisco: W. H. Freeman.
Coopersmith, S. (1981). Self-esteem inventories. Palo Alto, CA: Consulting Psychologists Press Inc.
Corwyn, R. F. (2000). The factor structure of the global self-esteem among adolescents and adults. Journal of Research in Personality ,34,
357/C0379.
Crowne, D. P., & Marlowe, D. (1960). A new scale of social desirability independent of psychopathology. Journal of Consulting Psychology ,24,
349/C0354.
DiStefano, C., & Motl, R. W. (2006). Further investigating method effects associated with negatively worded items on self-report surveys.
Structural Equation Modeling ,13, 440/C0464.
Dollinger, S. J., & Malmquist, D. (2009). Reliability and validity of single-item self-reports: With special relevance to college students’ alcoho l
use, religiosity, study, and social life. Journal of General Psychology ,136, 231/C0242.
Donnellan, M. B., Kenny, D. A., Trzesniewski, K. H., Lucas, R. E., & Conger, R. D. (2012). Using trait-state models to evaluate the longitudinal
consistency of self-esteem from adolescence to adulthood. Journal of Research in Personality ,46, 634/C0645.
Donnellan, M. B., & McAdams, K. A. (2013). [The College Student Development Study]. Unpublished raw data. Michigan State University. Further information is available from M. Brent Donnellan ([email protected]).
Donnellan, M. B., Trzesniewski, K. H, Conger, K. J., & Conger, R. D. (2007). A three-wave longitudinal study of self-evaluations during young
adulthood. Journal of Research in Personality ,41, 453/C0472.
Donnellan, M. B., Trzesniewski, K. H., & Robins, R. W. (2011). Self-esteem: Enduring issues and controversies. In T. Chamorro-Premuzic,
S. von Stumm, & A. Furnham (Eds.), The Wiley-Blackwell handbook of individual differences (pp. 718 /C0746). New York: Wiley-Blackwell.
Donnellan, M. B., Trzesniewski, K. H., Robins, R. W., Moffitt, T. E., & Caspi, A. (2005). Low self-esteem is related to aggression, antisocial
behavior, and delinquency. Psychological Science ,16, 328/C0335.
Eagly, A. H. (1967). Involvement as a determinant of response to favorable and unfavorable information. Journal of Personality and Social
Psychology ,7. (3, Pt. 2) [Whole No. 643].
Egberink, J. L., & Meijer, R. R. (2011). An item response theory analysis of Harter’s Self-Perception Profile for children or why strong clinical
scales should be distrusted. Assessment ,18, 201/C0212.
Eiser, C., Eiser, J. R., & Havermans, T. (1995). The measurement of self-esteem: Practical and theoretical considerations. Personality and
Individual Differences ,18, 429/C0432.
Elfering, A., & Grebner, S. (2012). Getting used to academic public speaking: Global self-esteem predicts habituation in blood pressure
response to repeated thesis presentations. Applied Psychophysiology and Biofeedback ,37, 109/C0120.
Ferrier, A. G., & Martens, M. P. (2008). Perceived incompetence and disordered eating among college students. Eating Behaviors ,9, 111/C0119.
Fleming, J. S., & Courtney, B. E. (1984). The dimensionality of self-esteem: II. Hierarchical facet model for revised measurement scales. Journal
of Personality and Social Psychology ,46, 404/C0421.
Gana, K., Saada, Y., Bailly, N., Joulain, M., Herve ´, C., & Alaphilippe, D. (2013). Longitudinal factorial invariance of the Rosenberg Self-Esteem
Scale: Determining the nature of method effects due to item wording. Journal of Research in Personality ,47, 406/C0416.
Gebauer, J. E., Riketta, M., Broemer, P., & Maio, G. R. (2008). ‘How much do you like your name?’ An implicit measure of global self-esteem.
Journal of Experimental Social Psychology ,44, 1346/C01354.
Granleese, J., & Joseph, S. (1993). Self-perception profile of adolescent girls at a single-sex and a mixed-sex school. Journal of Genetic Psychology ,
60, 210.
Gray-Little, B., Williams, V. S. L., & Hancock, T. D. (1997). An item response theory analysis of the Rosenberg Self-Esteem Scale. Personality
and Social Psychology Bulletin ,23, 443/C0451.
Greenberger, E., Chen, C., Dmitrieva, J., & Farruggia, S. P. (2003). Item-wording and the dimensionality of the Rosenberg Self-Esteem Scale:
Do they matter? Personality and Individual Differences ,35, 1241/C01254.
Greenwald, A. G., & Farnham, S. D. (2000). Using the Implicit Association Test to measure self-esteem and self-concept. Journal of Personality
and Social Psychology ,79, 1022/C01038.
Greenway, A. P., Milne, L. C., & Clarke, V. (2003). Personality variables, self-esteem and depression and an individual’s perception of God.
Mental Health, Religion & Culture ,6,4 5/C058.
Harter, S. (1982). The perceived competence scale for children. Child Development ,53,8 7/C097.
Harter, S. (1985). The self-perception profile for children: Revision of the perceived competence scale for children . Unpublished manuscript, Colorado,
USA: University of Denver.
Harter, S. (1988). Manual for the Self-Perception Profile for Adolescent . Denver, CO: University of Denver.
Harter, S. (2012a). The self-perception profile for children: Manual and questionnaires . Unpublished manuscript, Colorado, USA: University of
Denver.
Harter, S. (2012b). The self-perception profile for adolescents: Manual and questionnaires . Unpublished manuscript, Colorado, USA: University of
Denver.
Harter, S., & Pike, R. (1984). The pictorial scale of perceived competence and social acceptance for young children. Child Development ,55,
1969/C01982.
Heatherton, T. F., & Polivy, J. (1991). Development and validation of a scale for measuring state self-esteem. Journal of Personality and Social
Psychology ,60, 895/C0910.
Heatherton, T. F., & Wyland, C. L. (2003). Assessing self-esteem. In S. J. Lopez, & C. R. Snyder (Eds.), Positive psychological assessment: A hand-
book of models and measures (pp. 219 /C0233). Washington, DC: American Psychological Association.
Heise, D. R. (1969). Separating reliability and stability in test /C0retest correlation. American Sociological Review ,34,9 3/C0101.
Helmreich, R., & Stapp, J. (1974). Short forms of the Texas Social Behavior Inventory (TSBI), an objective measure of self-esteem. Bulletin of the Psychonomic Society, 4, 473–475.
Hess, R. S., & Petersen, S. J. (1996). Reliability and validity of the self-perception profile for children with Mexican American elementary-aged
children. Journal of Psychoeducational Assessment ,14, 229/C0239.
Hofmann, W., Gawronski, G., Gschwendner, T., Le, H., & Schmitt, M. (2005). A meta-analysis on the correlation between the Implicit
Association Test and explicit self-report measures. Personality and Social Psychology Bulletin, 31, 1369–1385.
Holm-Denoma, J. M., & Hankin, B. L. (2010). Perceived physical appearance mediates the rumination and bulimic symptom link in adolescent
girls. Journal of Clinical Child & Adolescent Psychology ,39, 537/C0554.
James, W. (1985/1892). Psychology: The briefer course . Notre Dame, IN: University of Notre Dame Press.
Janis, I. L., & Field, P. B. (1959). Sex differences and factors related to persuasibility. In C. I. Hovaland, & I. L. Janis (Eds.), Personality and per-
suasibility (pp. 55 /C068). New Haven, CT: Yale University Press.
Jonason, P. K., & Webster, G. D. (2010). The dirty dozen: A concise measure of the dark triad. Psychological Assessment ,22, 420/C0432.
Judge, T. A., Erez, A., Thoresen, C. J., & Bono, J. E. (2002). Are measures of self-esteem, neuroticism, locus of control, and generalized self-
efficacy indicators of a common core construct?. Journal of Personality and Social Psychology ,83, 693/C0710.
Kansi, J., Wichstrom, L., & Bergman, L. R. (2005). Eating problems and their risk factors: A 7-year longitudinal study of a population sample
of Norwegian adolescent girls. Journal of Youth and Adolescence ,34, 521/C0531.
Kim-Spoom, J. K., Ollendick, T. H., & Seligman, L. D. (2012). Perceived competence and depressive symptoms among adolescents: The moder-
ating role of attributional style. Child Psychiatry and Human Development ,43, 612/C0630.
Koole, S. L., Dijksterhuis, A., & van Knippenberg, A. (2001). What’s in a name: Implicit self-esteem and the automatic self. Journal of Personality
and Social Psychology ,80, 669/C0685.
Krizan, Z., & Suls, J. (2008). Are implicit and explicit measures of self-esteem related? A meta-analysis for the Name /C0Letter Test. Personality
and Individual Differences ,44, 521/C0531.
Kuster, F., & Orth, U. (2013). The long-term stability of self-esteem: Its time-dependent decay and nonzero asymptote. Personality and Social
Psychology Bulletin ,39, 677/C0690.
Kuster, F., Orth, U., & Meier, L. L. (2013). High self-esteem prospectively predicts better work conditions and outcomes. Social Psychological
and Personality Science ,4, 668/C0675.
Li, A., & Bagger, J. (2006). Using the BIDR to distinguish the effects of impression management and self-deception on the criterion validity of
personality measures: A meta-analysis. International Journal of Selection and Assessment ,14, 131/C0141.
Lovibond, S. H., & Lovibond, P. F. (1995). Manual for the Depression Anxiety Stress Scales (2nd ed.). Sydney, Australia: Psychology Foundation
of Australia.
Mar, R. A., DeYoung, C. G., Higgins, D. M., & Peterson, J. B. (2006). Self-liking and self-competence separate self-evaluation from self-
deception: Associations with personality, ability, and achievement. Journal of Personality ,74, 1047/C01078.
Marsh, H. W. (1988). Self Description Questionnaire: A theoretical and empirical basis for the measurement of multiple dimensions of preadolescent self-
concept: A test manual and research monograph . San Antonio, TX: The Psychological Corporation.
Marsh, H. W. (1990a). A multidimensional, hierarchical model of self-concept: Theoretical and empirical justification. Educational Psychology
Review ,2,7 7/C0172.
Marsh, H. W. (1990b). Causal ordering of academic self-concept and academic achievement: A multiwave, longitudinal panel analysis. Journal
of Educational Psychology ,82, 646/C0656.
Marsh, H. W. (1992a). Self Description Questionnaire (SDQ) I: A theoretical and empirical basis for the measurement of multiple dimensions of preadoles-
cent self-concept. An interim test manual and research monograph . Macarthur, New South Wales, Australia: University of Western Sydney.
Marsh, H. W. (1992b). Self Description Questionnaire (SDQ) II: A theoretical and empirical basis for the measurement of multiple dimensions of adoles-
cent self-concept. A test manual and research monograph . Macarthur, New South Wales, Australia: University of Western Sydney.
Marsh, H. W. (1992c). Self Description Questionnaire (SDQ) III: A theoretical and empirical basis for the measurement of multiple dimensions of late
adolescent self-concept. An interim test manual and research monograph . Macarthur, New South Wales, Australia: University of Western
Sydney.
Marsh, H. W. (1994). Using the National Longitudinal Study of 1988 to evaluate theoretical models of self-concept: The Self-Description
Questionnaire. Journal of Educational Psychology ,86, 439/C0456.
Marsh, H. W. (1996). Positive and negative global self-esteem: A substantively meaningful distinction or artifactors? Journal of Personality and
Social Psychology ,70, 810/C0819.
Marsh, H. W., Barnes, J., Cairns, L., & Tidman, M. (1984). The Self Description Questionnaire (SDQ): Age and sex effects in the structure and
level of self-concept for preadolescent children. Journal of Educational Psychology ,76, 940/C0956.
Marsh, H. W., Byrne, B. M., & Shavelson, R. J. (1988). A multifaceted academic self-concept: Its hierarchical structure and its relation to aca-
demic achievement. Journal of Educational Psychology ,80, 366/C0380.
Marsh, H. W., & Craven, R. G. (2006). Reciprocal effects of self-concept and performance from a multidimensional perspective: Beyond seduc-
tive pleasure and unidimensional perspectives. Perspectives on Psychological Science ,1, 133/C0163.
Marsh, H. W., Craven, R. G., & Debus, R. (1991). Self-concepts of young children 5 to 8 years of age: Measurement and multidimensional
structure. Journal of Educational Psychology ,83, 377/C0392.
Marsh, H. W., Craven, R. G., & Debus, R. (1998). Structure, stability, and development of young children’s self-concepts: A multicohort-
multioccasion study. Child Development ,69, 1030/C01053.
Marsh, H. W., Ellis, L., & Craven, R. G. (2002). How do preschool children feel about themselves? Unraveling measurement and multidimen-
sional self-concept structure. Developmental Psychology ,38, 376/C0393.
Marsh, H. W., Ellis, L. A., Parada, R. H., Richards, G., & Heubeck, B. G. (2005a). A short version of the Self Description Questionnaire II:
Operationalizing criteria for short-form evaluation with new applications of confirmatory factor analyses. Psychological Assessment ,17,
81/C0102.
Marsh, H. W., & Gouvernet, P. J. (1989). Multidimensional self-concepts and perceptions of control: Construct validation of responses by chil-
dren. Journal of Educational Psychology ,81,5 7/C069.
Marsh, H. W., & Holmes, I. W. M. (1990). Multidimensional self-concepts: Construct validation of responses by children. American Educational
Research Journal ,27,8 9/C0117.
Marsh, H. W., Parada, R. H., & Ayotte, V. (2004). A multidimensional perspective of relations between self-concept (Self Description
Questionnaire II) and adolescent mental health (Youth Self-Report). Psychological Assessment, 16, 27–41.
Marsh, H. W., Parker, J., & Barnes, J. (1985a). Multidimensional adolescent self-concepts: Their relationship to age, sex, and academic mea-
sures. American Educational Research Journal ,22, 422/C0444.
Marsh, H. W., & Peart, N. D. (1988). Competitive and cooperative physical fitness training programs for girls: Effects on physical fitness and
on multidimensional self-concepts. Journal of Sports Psychology ,10, 390/C0407.
Marsh, H. W., Richards, G. E., & Barnes, J. (1986). Multidimensional self-concepts: The effect of participation in an Outward Bound program.
Journal of Personality and Social Psychology ,50, 195/C0204.
Marsh, H. W., Scalas, L. F., & Nagengast, B. (2010). Longitudinal tests of competing factor structures for the Rosenberg Self-Esteem Scale:
Traits, ephemeral artifacts, and stable response styles. Psychological Assessment ,22, 366/C0381.
Marsh, H. W., Smith, I. D., & Barnes, J. (1985b). Multidimensional self-concepts: Relations with sex and academic achievement. Journal of
Educational Psychology ,77, 581/C0596.
Marsh, H. W., Smith, I. D., Barnes, J., & Butler, S. (1983). Self-concept: Reliability, stability, dimensionality, validity, and the measurement of
change. Journal of Educational Psychology ,75, 772/C0790.
Marsh, H. W., Trautwein, U., Lu ¨dtke, O., Ko ¨ller, O., & Baumert, J. (2005b). Academic self-concept, interest, grades, and standardized test
scores: Reciprocal effects models of causal ordering. Child Development ,76, 397/C0416.
McElhaney, K. B., Antonishak, J., & Allen, J. P. (2008). . ‘They like me, they like me not’: Popularity and adolescents’ perceptions of acceptance
predicting social functioning over time. Child Development ,79, 720/C0731.
Meagher, B. E., & Aidman, E. V. (2004). Individual differences in implicit and declared self-esteem as predictors of response to negative performance evaluation: Validating implicit association test as a measure of self-attitudes. International Journal of Testing, 4, 19–42.
Messer, B., & Harter, S. (1986). Manual for the Self-Perception Profile for College Adults . Denver, CO: University of Denver.
Messer, B., & Harter, S. (2012). The self-perception profile for adults: Manual and questionnaires . Unpublished manuscript, Colorado, USA:
University of Denver.
Meyer, G. J., Finn, S. E., Eyde, L. D., Kay, G. G., Moreland, K. L., Dies, R. R., et al. (2001). Psychological testing and psychological assessment.
American Psychologist ,56, 128/C0165.
Miller, H. A. (2000). Cross-cultural validity of a model of self-worth: Application to Finnish children. Social Behavior and Personality ,28,
105/C0118.
Moorman, R. H., & Podsakoff, P. M. (1992). A meta-analytic review and empirical test of the potential confounding effects of social desirability
response sets in organizational behaviour research. Journal of Occupational and Organizational Psychology ,65, 131/C0149.
Muris, P., Meesters, C., & Fijen, P. (2003). The Self-Perception Profile for Children: Further evidence for its factor structure, reliability, and
validity. Personality and Individual Differences ,35, 1791/C01802.
Neeman, J., & Harter, S. (1986). Manual for the Self-Perception Profile for College Students . Denver, CO: University of Denver.
Neeman, J., & Harter, S. (2012). The self-perception profile for college students: Manual and questionnaires . Unpublished manuscript, Colorado, USA:
University of Denver.
Nuttin, J. M., Jr. (1985). Narcissism beyond Gestalt and awareness: The name letter effect. European Journal of Social Psychology ,15, 353/C0361.
Oakes, M. A., Brown, J. D., & Cai, H. (2008). Implicit and explicit self-esteem: Measure for measure. Social Cognition ,26, 778/C0790.
Orth, U., Robins, R. W., & Meier, L. L. (2009a). Disentangling the effects of low self-esteem and stressful events on depression: Findings from
three longitudinal studies. Journal of Personality and Social Psychology ,97, 307/C0321.
Orth, U., Robins, R. W., & Roberts, B. W. (2008). Low self-esteem prospectively predicts depression in adolescence and young adulthood.
Journal of Personality and Social Psychology ,95, 695/C0708.
Orth, U., Robins, R. W., Trzesniewski, K. H., Maes, J., & Schmitt, M. (2009b). Low self-esteem is a risk factor for depression across the lifespan.
Journal of Abnormal Psychology ,118, 472/C0478.
Orth, U., Robins, R. W., & Widaman, K. F. (2012). Life-span development of self-esteem and its effects on important life outcomes. Journal of
Personality and Social Psychology ,102, 1271/C01288.
Postmes, T., Haslam, S. A., & Jans, L. (2012). A single-item measure of social identification: Reliability, validity, and utility. British Journal of
Social Psychology .
Quilty, L. C., Oakman, J. M., & Risko, E. (2006). Correlates of the Rosenberg Self-Esteem Scale method effects. Structural Equation Modeling ,13,
99/C0117.
Ramsdel, G. H. (2008). Differential relations between two dimensions of self-esteem and the Big Five? Scandinavian Journal of Psychology ,49,
333/C0338.
Raskin, R., & Terry, H. (1988). A principal-components analysis of the Narcissistic Personality Inventory and further evidence of its construct
validity. Journal of Personality and Social Psychology ,54, 890/C0902.
Riketta, M., & Zieglet, R. (2006). Self-ambivalence and self-esteem. Current Psychology ,25, 192/C0211.
Robins, R. W., Hendin, H. M., & Trzesniewski, K. H. (2001a). Measuring global self-esteem: Construct validation of a single-item measure and
the Rosenberg Self-Esteem Scale. Personality and Social Psychology Bulletin ,27, 151/C0161.
Robins, R. W., Tracy, J. L., Trzesniewski, K., Potter, J., & Gosling, S. D. (2001b). Personality correlates of self-esteem. Journal of Research in
Personality ,35, 463/C0482.
Robins, R. W., Tracy, J. L., & Trzesniewski, K. H. (2008a). Naturalizing the self. In O. P. John, R. W. Robins, & L. A. Pervin (Eds.), Handbook of
personality: Theory and research (3rd ed., pp. 421 /C0447). New York: Guilford.
Robins, R. W., Trzesniewski, K. H., & Schriber, R. A. (2008b). Assessing self-esteem. In F. T. L. Leong (Ed.), Encyclopedia of Counseling .
Thousand Oaks, CA: Sage.
Rose, E., Hands, B., & Larkin, D. (2012). Reliability and validity of the self-perception profile for adolescents: An Australian sample. Australian
Journal of Psychology ,64,9 2/C099.
Rosenberg, M. (1989). Society and the adolescent self-image (Revised ed.). Middletown, CT: Wesleyan University Press.
Rosenthal, S. A., Matthew Montoya, R., Ridings, L. E., Rieck, S. M., & Hooley, J. M. (2011). Further evidence of the Narcissistic Personality
Inventory’s validity problems: A meta-analytic investigation /C0Response to Miller, Maples, and Campbell (this issue). Journal of Research in
Personality ,45, 408/C0416.
Salafia, E. H. B., Gondoli, D. M., Corning, A. F., Bucchianeri, M. M., & Godinez, N. M. (2009). Longitudinal examination of maternal psycho-
logical control and adolescents’ self-competence as predictors of bulimic symptoms among boys and girls. International Journal of Eating
Disorders ,42, 422/C0428.
Sasaki, T., Hazen, N. L., & Swann, W. B., Jr. (2010). The supermom trap: Do involved dads erode moms’ self-competence? Personal
Relationships ,17,7 1/C079.
Scheff, T. J., & Fearon, D. S. (2004). Cognition and emotion? The dead end in self-esteem research. Journal for the Theory of Social Behaviour ,34,
73/C090.
Schimmack, U., & Diener, E. (2003). Predictive validity of explicit and implicit self-esteem for subjective well-being. Journal of Research in
Personality ,37, 100/C0106.
Schmitt, D. P., & Allik, J. (2005). Simultaneous administration of the Rosenberg Self-Esteem Scale in 53 nations: Exploring the universal and
culture-specific features of global self-esteem. Journal of Personality and Social Psychology ,89, 623/C0642.
Schwerdtfeger, A. R., & Scheel, S. M. (2012). Self-esteem fluctuations and cardiac vagal control in everyday life. International Journal of
Psychophysiology ,83, 328/C0335.
Silvera, D. H., Neilands, T., & Perry, J. A. (2001). A Norwegian translation of the self-liking and competence scale. Scandinavian Journal of
Psychology ,42, 417/C0427.
Sinclair, S. J., Blais, M. A., Gansler, D. A., Sandberg, E., Bistis, K., & LoCicero, A. (2010). Psychometric properties of the Rosenberg Self-Esteem
Scale: Overall and across demographic groups living within the United States. Evaluation & The Health Professions ,33,5 6/C080.
Slocum-Gori, S. L., Zumbo, B. D., Michalos, A. C., & Diener, E. (2009). A note on the dimensionality of quality of life scales: An illustration
with the Satisfaction With Life Scale (SWLS). Social Indicators Research ,92, 489/C0496.
Song, H., Thompson, R. A., & Ferrer, E. (2009). Attachment and self-evaluation in Chinese adolescents: Age and gender differences. Journal of
Adolescence ,32, 1267/C01286.
Sowislo, J. F., & Orth, U. (2013). Does low self-esteem predict depression and anxiety? A meta-analysis of longitudinal studies. Psychological
Bulletin ,139, 213/C0240.
Supple, A. J., Su, J., Plunkett, S. W., Peterson, G. W., & Bush, K. R. (2012). Factor structure of the Rosenberg Self-Esteem scale. Journal of Cross-
Cultural Psychology ,44, 748/C0764.
Tafarodi, R. W., & Milne, A. B. (2002). Decomposing global self-esteem. Journal of Personality ,70, 443/C0483.
Tafarodi, R. W., & Swann, W. B., Jr. (2001). Two-dimensional self-esteem: Theory and measurement. Personality and Individual Differences ,31,
653/C0673.
Tafarodi, R. W., Tam, J., & Milne, A. B. (2001). Selective memory and the persistence of paradoxical self-esteem. Personality and Social
Psychology Bulletin ,27, 1179/C01189.
Tafarodi, R. W., Wild, N., & Ho, C. (2010). Parental authority, nurturance, and two-dimensional self-esteem. Scandinavian Journal of Psychology ,
51, 294/C0303.
Tafarodi, RW, & Swann, WB, Jr. (1995). Self-liking and self-competence as dimensions of global self-esteem: Initial validation of a measure.
Journal of Personality Assessment ,65, 322/C0342.
Thomson, N., & Zand, D. (2002). The Harter Self-Perception Profile for Adolescents: Psychometrics for an early adolescent African-American
sample. International Journal of Testing ,2, 297/C0310.
Trzesniewski, K. H., Donnellan, M. B., Moffitt, T. E., Robins, R. W., Poulton, R., & Caspi, A. (2006). Low self-esteem during adolescence pre-
dicts poor health, criminal behavior, and limited economic prospects during adulthood. Developmental Psychology ,42, 381/C0390.
Trzesniewski, K. H., Donnellan, M. B., & Robins, R. W. (2003). Stability of self-esteem across the lifespan. Journal of Personality and Social
Psychology ,84, 205/C0220.
Trzesniewski, K. H., Donnellan, M. B., & Robins, R. W. (2008). Is ‘Generation Me’ really more narcissistic than previous generations? Journal of
Personality ,76, 903/C0918.
Valentine, J. C., DuBois, D. L., & Cooper, H. (2004). The relation between self-beliefs and academic achievement: A meta-analytic review.
Educational Psychologist ,39, 111/C0133.
Van Dongen-Melman, J. E. W. M., Koot, H. M., & Verhulst, F. C. (1993). Cross-cultural validation of Harter’s Self-Perception Profile for
Children in a Dutch sample. Educational and Psychological Measurement ,53, 739/C0753.
Vandromme, H., Hermans, D., Spruyt, A., & Eelen, P. (2007). Dutch translation of the Self-Liking/Self-Competence Scale /C0Revised: A confir-
matory factor analysis of the two-factor structure. Personality and Individual Differences ,42, 157/C0167.
Vazire, S., Naumann, L. P., Rentfrow, P. J., & Gosling, S. D. (2008). Portrait of a narcissist: Manifestations of narcissism in physical appearance.
Journal of Research in Personality ,42, 1439/C01447.
Verschueren, K., Buyck, P., & Marcoen, A. (2001). Self-representations and socioemotional competence in young children: A 3-year longitudi-
nal study. Developmental Psychology ,37, 126/C0134.
Vispoel, W. P. (1995). Self-concept in artistic domains: An extension of the Shavelson, Hubner, and Stanton (1976) model. Journal of Educational
Psychology ,87, 134/C0153.
Ware, J. E., Kosinski, M., Dewey, J. E., & Gandek, B. (2001). A manual for users of the SF-8 Health Survey . Lincoln, RI: Quality Metric
Incorporated.
Watson, D., Suls, J., & Haig, J. (2002). Global self-esteem in relation to structural models of personality and affectivity. Journal of Personality and
Social Psychology ,83, 185/C0197.
Wichstrom, L. (1995). Harter’s Self-Perception Profile for Adolescents: Reliability, validity, and evaluation of the question format. Journal of
Personality Assessment, 65, 100–116.
Wilkinson, R. B. (2010). Best friend attachment versus peer attachment in the prediction of adolescent psychological adjustment. Journal of
Adolescence ,33, 709/C0717.
Worth Gavin, D. A., & Herry, Y. (1996). The French Self-Perception Profile for Children: Score validity and reliability. Educational and
Psychological Measurement ,56, 678/C0700.
Zeigler-Hill, V. (2010). The interpersonal nature of self-esteem: Do different measures of self-esteem possess similar interpersonal content?
Journal of Research in Personality ,44,2 2/C030.
Zeigler-Hill, V. (Ed.). (2013). Self-esteem. Psychology Press.
CHAPTER
7
Measures of the Trait of Confidence
Lazar Stankov1, Sabina Kleitman2 and Simon A. Jackson2
1Australian Catholic University, Strathfield, NSW, Australia
2University of Sydney, Sydney, NSW, Australia
There are two main kinds of assessments in contemporary studies of individual differences in confidence:
(1) Personality-like, self-report questionnaires designed to assess one’s belief in his/her ability to accomplish
different tasks; and (2) Judgments of accuracy, or likelihood of success, after the completion of a task. Like
personality items, the first class includes general measures of self-perceptions that assess one’s views of habitual
tendencies or dispositions to do something in a given field (e.g., academic activity of some kind). The second
class of measures closely follows a particular cognitive or behavioral act. Importantly, their veracity can be exam-
ined by a comparison with this act. Therefore, self-report measures reflect one’s view of himself/herself without
the need to provide proof, whereas measures that follow a cognitive act are said to be ‘online’ (see Moore &
Healy, 2008 ;Koriat, 2000 ). These two classes of confidence measures have evolved independently and empirical
studies relating them directly are scarce.
Both kinds of confidence capture cognitive aspects (i.e., the probability of being correct, which is higher if the person has high ability or the task is easy), personality (self-beliefs about the competencies related to performance), and, implicitly, motivation (i.e., to make accurate self-appraisals in a given situation and the intention to initiate action). Thus, the difference is largely in terms of the relative emphasis on either personality-like or cognitive aspects of confidence.
Prior to the development of self-report scales specifically designed to assess confidence, the construct was inferred from traditional personality inventories. For example, a confidence scale may be derived from a Big Five personality inventory by assessing facets of Emotional Stability (i.e., the obverse of Neuroticism) and, say, Extraversion. The first self-report scales specifically designed to measure confidence appeared in the 1980s and will be described in the first part of this chapter.
Online confidence measures were employed in the studies of early psychophysicists (see Stankov, Lee, Luo, & Hogan, 2012). Johnson (1939) introduced a percentage scale to obtain confidence ratings. This approach has since been followed in virtually every empirical study of online, performance-based measures of confidence. Interestingly, Johnson’s (1939) factor analyses indicated the existence of a ‘confidence trait’ across different ability measures, a finding supported by our own research, including many recent studies (see Stankov, 1999; Stankov, Pallier, Danthiir, & Morony, 2012; Stankov et al., 2012).
The main impetus for more contemporary work on confidence can be traced to a study reported by
Lichtenstein and Fischhoff (1977; Do those who know more also know more about how much they know? ), which initi-
ated psychological interest in individuals’ ability to monitor their judgments of accuracy. The study of confidence
was linked to the emerging field of decision-making and subsequently to research on metacognition. Within this
tradition, confidence is seen as an important dependent measure similar, in principle, to measures of accuracy
and speed. That is, confidence, accuracy and speed provide different kinds of information about the just-
completed cognitive act (see Stankov, 2000 ).
With regard to confidence versus self-efficacy judgments, as mentioned above, confidence overlaps with self-
beliefs and this overlap is particularly pronounced with self-efficacy judgments. For item-based measures of self-
efficacy, participants are simply asked to state how confident they are that they can solve a particular problem.
Thus, both self-efficacy and confidence measures use the term ‘confidence’ within the item stems. Indeed, in the
Morony, Kleitman, Lee, and Stankov (2013) study, the correlation between self-efficacy and an online measure of
confidence was .54. Clearly, the correlation between these two constructs is far from perfect and confidence judg-
ments tend to have a considerably higher predictive validity than self-efficacy, suggesting that the differences are
not trivial.
There is a major distinction between self-efficacy and confidence in terms of their definitions, broadness, pre-
dictive power and practical applications. Self-efficacy refers to the belief that if one is engaged in a particular
behavior, s/he will achieve a positive outcome within that specific task/domain ( Bandura, 1997 ). Thus, self-
efficacy is domain specific – i.e., it is limited to a particular task/domain (e.g., mathematics or verbal). In con-
trast, confidence measures define a broad factor that extends across different tasks/domains. It follows that there
are important differences with respect to the generality of predictions each construct makes and their practical
applications. While predictions based on self-efficacy judgments are constrained to a particular domain, predic-
tions/applications of self-confidence (and its accuracy) extend to include broad educational, and social psycho-
logical realms ( Kleitman & Moscrop, 2010; Schraw, 2006 ).
Some ongoing debates in the field of decision-making concern the question of whether the biases people show while performing cognitive tasks result from researchers’ lack of attention to ecological factors, such as the type of questions asked (e.g., misleading or tricky questions; Gigerenzer, Hoffrage, & Kleinbölting, 1991; Juslin, 1994). The issue of particular interest was whether normative theory should be used to interpret confidence ratings (Bayesian or Thurstonian; see Gigerenzer, 1996) or whether biases reflect general tendencies of human irrationality (Kahneman & Tversky, 1996). Irrespective of the researcher’s stance, research has demonstrated that, while manipulating item selection can affect bias, there are pronounced individual differences in levels of confidence (Kleitman & Stankov, 2001; Pallier et al., 2002; Soll, 1996). Some people tend to be more rational than others irrespective of environmental influences (Kleitman, 2008).
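Because the online tradition compares confidence ratings with actual performance, a common summary of such data is the bias (over/underconfidence) score: mean percentage confidence minus the percentage of items answered correctly, with positive values indicating overconfidence. A minimal sketch with invented data:

```python
import numpy as np

# Percentage confidence ratings given after each of 10 items, and whether each
# item was actually answered correctly (invented data, for illustration only).
confidence = np.array([90, 80, 70, 100, 60, 85, 75, 95, 65, 80])  # percent
correct = np.array([1, 1, 0, 1, 0, 1, 0, 1, 1, 0])                # 1 = correct

mean_confidence = confidence.mean()      # average stated confidence
accuracy = 100 * correct.mean()          # percentage of items correct
bias = mean_confidence - accuracy        # > 0 indicates overconfidence
print(f"confidence = {mean_confidence:.0f}%, accuracy = {accuracy:.0f}%, "
      f"bias = {bias:+.0f} points")
```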
MEASURES REVIEWED HERE
This chapter is organized in three subsections. First, we overview self-report measures of confidence which, on the basis of their item content and intended use, can be grouped into two broad categories: Cognitive and Physical Confidence. The cognitive measures relate to academic and vocational tasks, and the physical scales relate to sports performance. Empirical studies based on these scales, as well as results based on judgments of accuracy described in the second section, suggest that confidence for cognitive and physical activities may be unrelated. The Cognitive/Physical distinction is therefore, both conceptually and empirically, a useful one to make. Second, we briefly describe the use of confidence in studies of self-efficacy. Third, we focus on the online, performance-based measurement of confidence and consider different indices of the calibration between confidence and accuracy. Given the scope of this book, we focus on individual differences and psychometric perspectives. A list of all reviewed measures is provided below.
Self-Report – Cognitive Confidence
1. Personality Evaluation Inventory (Shrauger & Schohn, 1995)
2. Individual Learning Profile (Pulford & Sohal, 2006)
3. Academic Behavioral Confidence Scale (Sander & Sanders, 2003)
4. CAPA Confidence Inventory (Betz & Borgen, 2010)
Self-Report – Physical Confidence
1. Trait-Robustness of Self-Confidence Inventory (Beattie, Hardy, Savage, Woodman, & Callow, 2011)
2. Trait Sport-Confidence Inventory (Vealey, 1986)
Confidence and Item Level Measures of Self-Efficacy
1. Mathematics Self-Efficacy Scale (OECD, 2005)
Online, Performance Based, Measures
1. Proverbs Matching Test (Stankov, unpublished)
2. Future Life Events Scale (Kleitman & Stankov, 2007)
OVERVIEW OF THE MEASURES
Each of the first four scales was developed to assess a sense of confidence in one's abilities – typically cognitive in nature. However, they differ in important ways, making their selection a relatively complex process.
The Personality Evaluation Inventory (PEI) assesses confidence in terms of a sense of competence/skill across domains important to college students. Unlike the other cognitive-based measures described in this chapter, the PEI assesses confidence in a range of behaviors, such as academic, social, and athletic confidence. With the addition of a general confidence scale, it is probably the best self-report scale for assessing confidence across student-relevant behaviors. For this reason, the PEI is likely to be used frequently in the future.
Assessing confidence in educational settings is an area of much interest, and the Individual Learning Profile (ILP) was constructed for this purpose. However, unlike the PEI, the ILP is dedicated to assessing only student confidence in academic abilities. That is, the ILP is a much more specific measure than the PEI. This allows the ILP to tap deeper into skills concerned with academic outcomes such as achievement or dropout rates. Unfortunately, despite its appeal and apparently robust internal structure, the ILP has not been utilized much. We recommend the use of this scale in educational settings when the assessment of academic-related competencies is of interest.
The Academic Behavioral Confidence Scale (ABC), as its name would suggest, assesses confidence in the ability to
conduct and plan behaviors relevant to academic success. It is notably shorter than the PEI or ILP and, by assessing
confidence in Grades, Studying, Verbalizing and Attendance, the ABC sits between the PEI and ILP in terms of the
range of behaviors it assesses. However, the scale’s overall validity, and particularly its reliability, do not compare
well. At this stage, we would recommend using the ABC in educational settings when a shorter scale is preferred.
Similar to the other three scales, the CAPA Confidence Inventory (CCI) is intended for use in educational settings. However, unlike the others, the CCI is a component of a larger testing system – the CAPA System – intended to help college students select college majors. In line with its purpose, the CCI assesses confidence across Holland's (1997) six confidence themes with items targeting activities (e.g., 'Write articles about pets or nature') and/or school subjects (e.g., 'Pass a course in algebra') across 27 vocational domains. The CCI is perhaps the broadest self-report measure available in terms of its item content. This range of domains affords it great utility for applied purposes. However, considering that much research has shown that confidence is a general construct, this range might also lead to redundancy in certain cases. We would therefore recommend that the CCI be used as intended – for educational/career guidance – and when fine-grained behavioral distinctions are important.
The following two scales assess confidence in physical or athletic abilities. Such confidence has largely been
studied as distinct from cognitive and academic confidence. Indeed, validation studies using the PEI and confi-
dence judgments described later suggest that this distinction is appropriate.
Unlike the other scales described here, the Trait-Robustness of Self-Confidence Inventory (TROSCI) was designed to assess an athlete's 'ability to maintain confidence in the face of adversity' (Beattie et al., 2011, p. 184). That is, rather than assessing confidence in one's skill, the emphasis is on the ability to maintain that confidence. The scale's novelty means that there is little to say about its utility as yet. However, despite being very short, it has been found to be internally consistent, and the item content seems sound. Indeed, the item stems can be used to build short and reliable questions to assess confidence in domains outside of athletic ability. We believe that this scale will be of much use in the future.
The Trait Sport-Confidence Inventory (TSCI) is also a unique scale, as it assesses athletes' sense of confidence in their ability to perform successfully in their sport under adverse conditions, relative to the most confident athlete they know. That is, it is the only scale that asks individuals to compare their own confidence to another person's confidence. A parallel state version is readily accessible, the scale has been available for much longer than the TROSCI, and it is used frequently. We recommend using the TSCI when scales with well-documented psychometric properties are preferred.
The next scale, the Mathematics Self-efficacy Scale (MSS), consists of eight items from Question 31 of the PISA
(Programme for International Student Assessment ) 2003 Student Questionnaire. Each item assesses students’ self-
perceived ability to solve a maths problem. It differs from the above questionnaire measures in that it is item-
specific. However, it also differs from the judgment of accuracy measures (or online confidence) described below
in that participants are not asked to provide answers to any of the questions. Thus, the scale measures one’s
belief that she/he will be able to solve a particular problem. The MSS item stem provides a different, albeit
robust, approach to the study of confidence/self-efficacy.
The final section of this chapter covers online confidence judgments of accuracy and their calibration indices. This is not a typical self-report scale, but rather a broadly applicable methodology for studying confidence and its calibration that has been used extensively in research spanning numerous domains. The confidence rating format that is most common, and that has received the most discussion, is a percentage scale accompanied by verbal anchors. Here, individuals indicate their confidence in the accuracy of a cognitive act as a percentage, say from 0% (absolutely uncertain) to 100% (completely certain). This method has demonstrated excellent reliability and validity in the research to date. While this methodology has not been compared directly with the self-report measures, we believe that it should be the preferred method for measuring confidence when individuals are asked to carry out cognitive activities.
The scales and methodological approaches described in this chapter have been selected for being reliable and
valid measures of the trait of confidence. While we believe that certain scales should be preferred, each may be
most useful given certain circumstances. It is important that researchers consider the utility of each option for
their purposes.
SELF-REPORT COGNITIVE CONFIDENCE SCALES
Personality Evaluation Inventory (PEI)
(Shrauger & Schohn, 1995 ).
Variable
Shrauger and Schohn (1995) defined confidence as a self-perceived sense of competence and/or skill to deal with various situations effectively.
Description
The PEI was designed to assess confidence, or sense of competence/skill, across a range of domains important to college students. The scale consists of 54 self-report items that are grouped into eight subscales. Six subscales are content specific, assessing self-perceived confidence to perform in the following domains: Academic, Appearance, Athletics, Romantic, Social, and Speaking. These domains were selected after being most frequently reported by 483 students as important determinants of self-confidence. Additionally, a general subscale assesses one's confidence to perform competently in general, and a mood subscale is included to assess and account for mood states that might affect confidence at the time of testing. Scores across the content-specific subscales can be summed to give an alternate measure of general confidence. Each subscale is assessed with seven items except Athletic confidence, which has five items. Each item is rated on a 4-point Likert-type scale (A = Strongly Agree; B = Mainly Agree; C = Mainly Disagree; D = Strongly Disagree). Scores are summed across items, with higher scores indicating greater perceived confidence.
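To make the scoring procedure concrete, the sketch below (in Python) illustrates summing letter-coded responses within subscales. The letter-to-number mapping (A = 4 through D = 1), the item identifiers, and the absence of reverse-scored items are illustrative assumptions only; the published scoring key in Shrauger and Schohn (1995) should be consulted for actual use.

```python
# Illustrative sketch of PEI-style subscale scoring (not the official key).
# Assumptions: A = 4 ... D = 1, no reverse-scored items, hypothetical item IDs.

LETTER_TO_SCORE = {"A": 4, "B": 3, "C": 2, "D": 1}

# Hypothetical groupings: seven items per subscale (five for Athletics);
# only two subscales are shown here for brevity.
SUBSCALE_ITEMS = {
    "General": ["gen1", "gen2", "gen3", "gen4", "gen5", "gen6", "gen7"],
    "Athletics": ["ath1", "ath2", "ath3", "ath4", "ath5"],
}

def score_pei(responses):
    """Sum letter responses within each subscale; higher sums = greater confidence."""
    return {
        subscale: sum(LETTER_TO_SCORE[responses[item]] for item in items)
        for subscale, items in SUBSCALE_ITEMS.items()
    }

answers = {"gen1": "A", "gen2": "B", "gen3": "A", "gen4": "C", "gen5": "B",
           "gen6": "A", "gen7": "B", "ath1": "D", "ath2": "C", "ath3": "C",
           "ath4": "B", "ath5": "D"}
print(score_pei(answers))  # {'General': 23, 'Athletics': 9}
```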
Sample
Shrauger and Schohn (1995) reported means separately for 106 female and 105 male introductory psychology students, as shown below.
PEI SUBSCALE MEANS AND STANDARD DEVIATIONS

                 Female            Male
              Mean     SD      Mean     SD
General       18.50    3.17    19.38    3.24
Speaking      18.34    4.49    18.27    4.16
Romantic      18.69    4.43    17.26    4.69
Athletics     12.89    3.75    15.74    3.29
Social        20.42    3.58    19.52    3.63
Appearance    18.54    3.73    19.67    3.82
Academic      18.42    3.88    19.52    3.31
Mood          17.75    4.26    18.48    3.64
Combined     107.28   13.05   109.97   13.75
In this sample, males scored significantly higher than females on the Athletics, Appearance, Academic, and
General subscales. Females scored significantly higher on the Romantic subscale. In terms of means and standard
deviations, Cheng and Furnham (2002) reported similar results with 90 adolescents. However, the only statistically
significant gender difference they found was that males were more confident than females in their athletic ability.
Reliability
Internal Consistency
Cronbach alpha coefficients for all subscales ranged from .71 to .90 (Shrauger & Schohn, 1995).
Test–Retest
One-month test–retest reliabilities ranged from .73 to .90, with the exception of the mood subscale (r = .49; Shrauger & Schohn, 1995). This was expected, given that the items of the mood subscale refer to specific contents and mood itself is supposed to be state-like. The one-month test–retest reliability for the combined content-specific score was also high (r = .87; Shrauger & Schohn, 1995). Test–retest reliabilities did not differ significantly after correcting for mood.
Validity
Convergent/Concurrent
Shrauger and Schohn (1995) demonstrated convergent validity via a consistent pattern of correlations of the combined content-specific score, and the subscales, with Self-esteem, Mood, and Personality measures. Specifically, combined PEI scores shared significant and positive correlations with measures of self-esteem (rs = .59 and .58), Life Orientation (r = .53), Extraversion and Conscientiousness (rs = .55 and .20), and experiencing happy moods (r = .23). Openness correlated significantly only with Speaking and Social confidence (rs = .30 and .33). In general, the PEI subscales demonstrated a similar pattern with these variables, with the exception of the Athletics subscale, which shared weaker relationships overall.
Cheng and Furnham (2002) similarly reported PEI subscale and total scores correlating significantly and positively with positive affect (r = .58) and happiness (r = .52). Cramer, Neal, DeCoster, and Brodsky (2010) also found that the general subscale correlated significantly and positively with measures of general, social, and witness self-efficacy (rs = .76, .52, and .34 to .32, respectively), self-esteem (r = .70), and Extraversion (r = .40). Canter (2008) found that the academic subscale was positively and significantly correlated with a self-esteem scale (r = .48). He also reported significant positive correlations with student perceptions of their maximum attainable Grade Point Average (GPA; r = .44), student and parent satisfaction with final GPA (rs = .42 and .43), final student GPA (r = .22), and whether the student was eligible to apply for honors (r = .26). Furthermore, academic confidence was one of only four significant predictors of students' adaptive perfectionism after accounting for all these variables, gender, and race.
Shrauger and Schohn (1995) also reported various external indicators of convergent validity. For example, content subscales generally correlated positively and significantly with peer PEI ratings (rs = .23 to .66) and peer behavioral ratings of domain competence, comfort, and involvement (rs = .09 to .66). The academic subscale shared positive and significant correlations with three course examinations, final grades, and self-reported SAT scores (rs = .37 to .45). Individuals higher in academic than social confidence were more likely to select an intellectual problem-solving task than to meet with and talk to someone new, and vice versa. Furthermore, highly confident individuals were more certain than low-confident individuals (combined scores within the top and bottom third of the distribution) that positive events would happen to them in the future. However, these groups did not differ significantly in terms of their certainty that such events would occur to others.
Moderate and mostly significant intercorrelations between the PEI subscales have been observed (ranging between .03 and .42) and were described by the authors as support for an 'additive model of self-confidence, rather than a hierarchical structure' (Shrauger & Schohn, 1995, p. 262).
Divergent/Discriminant
Combined PEI scores correlated significantly and negatively with Depression in three studies (rs = −.35, −.35, and −.52, respectively; Cheng & Furnham, 2002; Cramer et al., 2010; Shrauger & Schohn, 1995). Similarly, PEI scores correlated negatively with Anxiety (r = −.50), Hopelessness (r = −.49), and experiencing sad moods (r = −.43) (Shrauger & Schohn, 1995), with measures of loneliness (rs = −.53 and −.48) and negative affect (r = −.46) (Cheng & Furnham, 2002), and with feelings of shame (r = −.38) and procrastination (r = −.43) (Canter, 2008). PEI scores did not share significant relationships with socioeconomic status, degree of religious affiliation and involvement, whether people had lived with their parents at the age of 15, family size, birth order, or measures of social desirability (Shrauger & Schohn, 1995). Furthermore, no significant correlation was evident with Agreeableness (r = .05).
Construct/Factor Analytic
Shrauger and Schohn (1995) reported the results of a principal components analysis with varimax rotation (N = 211 undergraduates). They found that the content-specific items were loaded appropriately by their intended components, with only two of the 200 possible non-target loadings marginally exceeding .30. It was found that the general subscale correlated .63 and .68 with the combined score from the six content domains for men and women, respectively, and within the principal components analysis, items from the general subscale did not account for any additional variance.
Criterion/Predictive
Cheng and Furnham (2002) reported a series of path analytic models in which total PEI scores significantly and negatively predicted two Loneliness factors of intimacy with others (β = −.33) and socializing with others (β = −.28), but did not significantly predict happiness (β = .18), after controlling for demographics, personality dimensions measured by the Eysenck Personality Questionnaire, school grades, and friendship.
Location
Shrauger, J.S., & Schohn, M. (1995). Self-confidence in college students: Conceptualization, measurement, and behavioral implications. Assessment, 2(3), 255–278.
Results and Comments
The PEI has a number of desirable properties: the number of items is not excessive and the scale therefore does not require a significant amount of time to complete; it lends itself to domain-specific and domain-general assessment of confidence; and the scale appears to have demonstrated adequate reliability and validity thus far. Furthermore, while this scale was designed for college populations, it seems reasonable to use the general subscale as a short and reliable measure for other populations, considering its strong relationship with the combined content-specific scores.
The research conducted with the PEI suggested that: (1) Self-reported confidence shared robust positive corre-
lations with positive and sociable self-report attributes and negative correlations with negative attributes; (2)
Athletic (physical) confidence diverged from confidence related to cognitive activities; and (3) Significant and
positive subscale intercorrelations were indicative of the existence of a general confidence factor.
PEI-LIKE ITEM
Indicate your degree of agreement with the following statement:
Each question is accompanied by the following rating scale:
A = Strongly agree
B = Mainly agree
C = Mainly disagree
D = Strongly disagree
'I feel more confident in my abilities than most people.'
Note: Contrived example of an item from the General subscale.
Individual Learning Profile (ILP)
(Pulford & Sohal, 2006 ).
Variable
Pulford and Sohal defined confidence as the level at which individuals assess their own skill or ability.
Description
The ILP assesses how often students feel confident in their academic abilities across six domains: Reading and Writing, Hard Information Technology (Hard IT), Numeracy, Time Management, Speaking, and Easy Information Technology (Easy IT), assessed with 12, 5, 8, 6, 4, and 5 self-report items, respectively. The complete scale consists of 40 items scored on a 4-point rating scale (1 = never; 2 = sometimes; 3 = mostly; 4 = always). Scores are summed across items in each domain, and higher scores are indicative of greater self-perceived confidence.
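As an illustration of this scoring scheme, the sketch below (Python) sums 1–4 ratings within each domain; the domain layout follows the item counts above (12, 5, 8, 6, 4, and 5), but the response values are invented for the example.

```python
# Illustrative ILP-style domain scoring: sum 1-4 ratings within each domain.
# Domain sizes follow the text (12, 5, 8, 6, 4, 5); responses below are invented.

DOMAIN_SIZES = {
    "Reading and Writing": 12,
    "Hard IT": 5,
    "Numeracy": 8,
    "Time Management": 6,
    "Speaking": 4,
    "Easy IT": 5,
}

def score_ilp(ratings_by_domain):
    """Return a dict of domain sums, checking item counts and the 1-4 range."""
    scores = {}
    for domain, expected_n in DOMAIN_SIZES.items():
        ratings = ratings_by_domain[domain]
        assert len(ratings) == expected_n, f"{domain}: expected {expected_n} items"
        assert all(1 <= r <= 4 for r in ratings), f"{domain}: ratings must be 1-4"
        scores[domain] = sum(ratings)
    return scores

example = {
    "Reading and Writing": [3, 4, 3, 2, 3, 4, 3, 3, 2, 4, 3, 3],
    "Hard IT": [2, 1, 2, 3, 2],
    "Numeracy": [3, 3, 2, 4, 3, 2, 3, 3],
    "Time Management": [4, 3, 3, 2, 3, 3],
    "Speaking": [2, 3, 2, 3],
    "Easy IT": [4, 4, 3, 4, 3],
}
print(score_ilp(example))
```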
Sample
Although the ILP was constructed using a sample of 3003 first-year undergraduate students, means were reported only for 53 undergraduate students who completed the ILP in their first and second years of university. These means are reproduced below.
ILP SUBSCALE MEANS AND STANDARD DEVIATIONS

                        1st Year          2nd Year
                      Mean     SD       Mean     SD
Reading and writing   33.56    4.13     34.97    5.14
Hard IT skills        12.07    3.95     14.77    3.84
Numeracy skills       21.99    4.61     21.83    5.80
Time management       17.38    2.56     16.53    3.27
Speaking              10.00    2.32      9.08    2.02
Easy IT skills        17.23    2.78     18.91    1.77
Students were significantly more confident in their IT skills, and significantly less confident in their Speaking
and Time-management ability, in their second year than their first. Furthermore, males were significantly more
confident in their Speaking ability than females.
Reliability
Internal Consistency
Cronbach alpha coefficients for the six domains were .88, .87, .93, .74, .74, and .80, respectively (Pulford & Sohal, 2006). Alpha coefficients for each domain were almost identical in a follow-up study with 130 first- and second-year undergraduate students (Pulford & Sohal, 2006).
Test–Retest
One-year test–retest coefficients for 53 undergraduate students were as follows (Pulford & Sohal, 2006): Reading and Writing = .61; Hard IT Skills = .50; Numeracy Skills = .73; Time Management = .67; Speaking = .75; and Easy IT Skills = .49.
Validity
Convergent/Concurrent
The only Convergent/Concurrent validity findings available for the ILP must be derived from regression analyses
conducted by Pulford and Sohal (2006) . These are discussed in detail in the Predictive validity section below. After
controlling for a range of self-report variables, it is possible to conclude that the following related positively: Reading
and Writing with Openness and Conscientiousness; Numeracy Skills with Agreeableness; Time Management with
Conscientiousness, Extraversion and Organization; and Speaking with Conscientiousness and Extraversion.
Divergent/Discriminant
The only Divergent/Discriminant validity findings available for the ILP must be derived from regression analyses conducted by Pulford and Sohal (2006). These are discussed in detail in the Predictive validity section below. After controlling for a range of self-report variables, it is possible to conclude that the following related negatively: Numeracy Skills with Organization and Concerns over Mistakes and Doubts; and Speaking with Organization.
Construct/Factor Analytic
The domains assessed by the ILP were identified via principal components analysis with oblique rotation, rather
than predetermined sets, with a sample of 3003 first-year undergraduate students. The six components accounted for
54.06% of the variance. However, a pattern of significant correlations between these components emerged. To clarify
these relationships, the authors performed a second-order principal components analysis with oblique rotation. At
the second-order level, Hard and Easy IT skills converged on one component, and the remaining first-order compo-
nents converged on another. These two higher-order components correlated r = .17 with each other. The authors
interpreted these components as level of experience with computers, and a general sense of academic confidence.
Criterion/Predictive
Each factor, along with gender and age, was entered as a simultaneous predictor in regression analyses predicting the four first-year psychology module grades and first-year Grade Point Average (GPA). Hard IT Skills significantly and negatively predicted Introduction to Psychology grades; Numeracy Skills significantly and positively predicted the Introduction grades and Methods in Psychology grades; and Time Management significantly and positively predicted Introduction grades, Psychology Practicals, and overall GPA. However, no significant effects emerged when the same analyses were run with the second-year cohort.
Pulford and Sohal (2006) regressed each factor simultaneously on gender, the Big Five personality dimensions, perfectionism, and self-esteem scales. After controlling for the effect of each independent variable, Reading and Writing confidence was significantly and positively predicted by Openness and Conscientiousness (βs = .24 and .34, respectively); Numeracy Skills confidence by Agreeableness (β = .22) and three perfectionism subscales of Organization (β = −.36), Personal Standards (β = .29), and Concerns Over Mistakes and Doubts (β = −.36); and Time Management and Speaking by Conscientiousness, Extraversion, and Organization (βs = .43, .27, and .25; and βs = .31, .47, and −.28, respectively).
Location
Pulford, B.D., & Sohal, H. (2006). The influence of personality on HE students' confidence in their academic abilities. Personality and Individual Differences, 41(8), 1409–1419.
Results and Comments
The ILP has undergone adequate scale refinement and appears internally robust. The authors note, however, that larger samples of male students will be required for adequate validation. The scale has promising utility for identifying students' self-perceived strengths and weaknesses for assessment and development purposes. The results of the convergent and discriminant validity examinations described above are consistent with those of the other cognitive self-report scales. Specifically, confidence measured with the ILP was positively correlated with desirable personality dimensions, and its subscales intercorrelated to a degree suggesting that a more general confidence factor may emerge.
INDIVIDUAL LEARNING PROFILE
This is not a test. It is confidential and will be seen by a restricted number of people. Please answer honestly. Do
not worry if any section does not seem to apply to you. Please complete it anyway. Please cross the appropriate box.
Each Question is accompanied by the following rating scale:
Always        Mostly        Sometimes        Never
  □             □               □              □
Section 1: Speaking
Are you confident about talking* to people you don’t know?
Do you join in class or group discussions?
Do you ask questions when you don’t understand something?
Do you feel comfortable giving a 'talk' or presentation to a group?
Section 2: Numeracy Skills
Are you confident working with:
Numbers
Fractions
Decimals
Percentages
Ratios
Statistics
Graphs
Charts
Section 3: Reading and Writing
Are you confident about your reading skills?
Are you able to read fast and understand what you are reading?
Are you confident in the use of punctuation and grammar?
Are you able to make sense of a text on first reading?
Are you confident about your spelling?
Can you find information easily by reading?
Can you get your own ideas onto paper easily, and find the right words?
Can you put information into your own words without copying big chunks?
Are you confident about taking notes in lectures?
Are you confident about using a dictionary and/or thesaurus?
Do you enjoy writing?
Do you find it easy to explain what you mean (e.g. find the right words*)?
Section 4: Time Management
Do you consider yourself well organized?
Do you work to deadlines or hand work in on time?
Do you know when you study best (e.g. early morning, evening etc)?
Do you complete tasks before your friends?
Do you use a diary/timetable to help you plan your work?
Do you leave time to check and/or proof read your work?
Section 5: IT skills
Do you have access to a computer outside of the university?
Have you used computers to support your studies or at work?
Are you confident using computers for:
Word processing
Email
Internet information (the Web)
Spreadsheets
Accessing library catalogues and stock
Databases
Presentations (e.g. PowerPoint)
Statistics
Note: Easy IT = first 5 items of section 5; Hard IT = last 5 items of section 5; * = or signs/signing if BSL is your preferred language.
Academic Behavioral Confidence Scale (ABC)
(Sander & Sanders, 2003 ).
Variable
Sander and Sanders defined confidence as the strength of one’s belief, trust, or expectation, related to task
accomplishment.
Description
The ABC (originally ACS) assesses students' global academic confidence, or confidence in the ability to conduct and plan behaviors relevant to academic success. The scale consists of 24 self-report items, each preceded with 'How confident are you that you will be able to ...', scored on a 5-point Likert-type scale (from 1 = Not at all confident to 5 = Very confident). The total score is an average across all items, ranging from 1 to 5, with higher scores indicating greater confidence. Factor analytic methods have produced four-factor solutions, yielding factors labeled Grades, Studying, Verbalizing, and Attendance.
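A minimal sketch of this scoring rule (Python) is shown below; the total is simply the mean of the 24 ratings, and the four-factor subscale means would be computed the same way over the relevant item subsets (the item-to-factor assignment is not reproduced here). The response values are invented.

```python
# Illustrative ABC-style scoring: the total score is the mean of 24 ratings
# made on a 1-5 scale; higher means indicate greater academic confidence.

def score_abc(ratings):
    """Return the mean of the item ratings (expected range 1-5)."""
    if not all(1 <= r <= 5 for r in ratings):
        raise ValueError("ABC ratings must be on the 1-5 scale")
    return sum(ratings) / len(ratings)

ratings = [4, 3, 2, 4, 3, 5, 4, 3, 3, 2, 4, 4, 3, 4, 4, 3, 5, 4, 4, 4, 3, 3, 4, 5]
print(round(score_abc(ratings), 2))  # 3.62 for this invented response set
```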
Sample
The ABC was developed on a sample of 102 undergraduate psychology students and 182 undergraduate medical students. The overall mean for these two samples was 3.84 (SD = .43). The mean for the psychology students was 3.78 (SD = .39) and for the medical students 3.87 (SD = .46); this difference was statistically significant based on the results of a one-tailed z-test. Descriptive statistics for each factor have also been reported based on large university samples, with mean scores ranging from 3.52 to 4.18 for Grades, 3.76 to 3.87 for Studying, 3.10 to 3.52 for Verbalizing, and 4.19 to 4.43 for Attendance (Ochoa & Sander, 2012). Sander (2009) also summarized the results of two doctoral theses (Asquith, 2008; Barrett, 2005), which showed that dyslexic university students reported significantly lower ABC scores, particularly for Grades, Verbalizing and Studying, than their non-dyslexic peers.
Reliability
Internal Consistency
An overall Cronbach alpha coefficient of .88 has been reported (Sanders & Sander, 2007). For each factor, the alpha coefficients were .78 for Grades, .72 for Studying, .78 for Verbalizing and .74 for Attendance (Sanders & Sander, 2009). Slightly lower Cronbach alpha coefficients were obtained using a Spanish translation of the scale in a sample of Mexican psychology undergraduate students (Ochoa & Sander, 2012).
Test–Retest
One-year test–retest correlations have been reported at the item level, with only three items demonstrating significant positive correlations. These correlations ranged from r = .23 to .31 (Sander & Sanders, 2003).
Validity
Convergent/Concurrent
Sander (2009) provided a comprehensive overview of the scale's convergent validity based on a doctoral thesis (Berbén, 2008), and reanalysed data from Sander and Sanders (2003) and from Sanders, Sander, and Mercer (2009). Although no correlation coefficients were reported, Sander (2009) indicated that the Grades, Studying and Verbalizing subscales correlated positively with a Deep approach to learning as measured by the R-SPQ-2F (Biggs, Kember, & Leung, 2001).
Divergent/Discriminant
Sander (2009) reported that the ABC total score correlated negatively with Vinegrad’s dyslexia scale using
unpublished data from Sanders et al. (2009) . Again, this coefficient was not reported.
Construct/Factor Analytic
Principal components analysis with oblique rotation has suggested six components on two occasions, and confirmatory factor analysis has verified at least four factors after dropping seven of the original items (Sander & Sanders, 2003, 2009). All these results have been obtained with reasonably large samples (Ns > 400). When a Spanish translation of the scale administered to Mexican undergraduates (N = 97) was submitted to confirmatory factor analysis, Ochoa and Sander (2012) found that the four-factor model fit best after dropping 10 of the original items (χ²(71) = 96.231, CFI = .933, TLI = .914, RMSEA = .061, ECVI = 2.002). Ochoa and Sander also presented model comparisons across various university samples (Ns = 97 to 1468), suggesting that 14 of the original 24 items converge on the most optimal solution. The results of these confirmatory factor analyses yielded the factors labeled Grades, Studying, Verbalizing and Attendance.
Criterion/Predictive
In a summary report, total ABC scores were reported to have correlated positively and significantly (p < .05) with students' predicted exam marks (N = 88) (Sanders & Sanders, 2006). Sander (2009) also reported that Berbén found that ABC scores correlated significantly – and presumably positively – with aspects of student learning, such as approach to learning and self-regulation, and satisfaction with teaching and final results. However, the magnitudes of these correlation coefficients were not reported.
Location
Sander, P., & Sanders, L. (2003). Measuring confidence in academic study: A summary report. Electronic Journal of Research in Educational Psychology and Psychopedagogy, 1(1), 1–17.
Results and Comments
The item content of the ABC lends itself well to targeting specific academic behaviors and student confidence associated with achieving them. Furthermore, given some refinement, its factorial structure appears stable. However, the scale's test–retest reliability and validity require further investigation and careful, independent scrutiny. In its current form, the ABC may be of greatest use as a measure of confidence related to discrete academic behaviors rather than as a combined scale of general academic confidence.
ACADEMIC BEHAVIORAL CONFIDENCE SCALE
How confident are you that you will be able to:
Each question is accompanied by the following rating scale:
Very confident                              Not at all confident
    □           □           □           □           □
Study effectively on your own in independent/private study.
Produce your best work under examination conditions.
Respond to questions asked by a lecturer in front of a full lecture theater.
Manage your work load to meet coursework deadlines.
Give a presentation to a small group of fellow students.
Attend most taught sessions.
Attain good grades in your work.
Engage in profitable academic debate with your peers.
Ask lecturers questions about the material they are teaching, in a one-to-one setting.
Ask lecturers questions about the material they are teaching, during a lecture.
Understand the material outlined and discussed with you by lecturers.
Follow the themes and debates in lectures.
Prepare thoroughly for tutorials.
Read the recommended background material.
Produce coursework at the required standard.
Write in an appropriate academic style.
Ask for help if you don’t understand.
Be on time for lectures.
Make the most of the opportunity of studying for a degree at university.
Pass assessments at the first attempt.
Plan appropriate revision schedules.
Remain adequately motivated throughout.
Produce your best work in coursework assignments.
Attend tutorials.
CAPA Confidence Inventory (CCI)
(Betz & Borgen, 2010 ).
Variable
The authors (Betz et al., 2003) defined confidence as a self-perceived ability to accomplish tasks – a definition derived from Bandura's (1986) self-efficacy theory.
Description
The CCI is a component of the CAPA system – an online questionnaire system for college students that suggests major clusters based on confidence and interest across a broad range of domains (Betz & Borgen, 2010). In line with the above definition, the CAPA Confidence Inventory (CCI) assesses confidence across Holland's (1997) six confidence themes with items targeting activities (e.g., 'Write articles about pets or nature') and/or school subjects (e.g., 'Pass a course in algebra') across 27 vocational domains. These domains, and their associated Holland theme in brackets, are: Mechanical Activities, Information Technology, Protective Services, and Outdoors (Realistic); Science, Medical Science, and Math (Investigative); Visual Arts and Design, Dramatic Arts, Music, Writing, and Artistic Creativity (Artistic); Helping, Teaching, Cultural Sensitivity, Human Resources and Training, and Medical Service (Social); Marketing and Advertising, Sales, Management, Entrepreneurship, Public Speaking, Politics, and Law (Enterprising); and Accounting and Finance, Office Management, and Personal Computing (Conventional). In addition to these, the CCI includes items to assess confidence in activities related to six Engagement Styles: Extraversion, Leadership, Teamwork, Motivation, Academic Achievement, and Risk Taking. Each self-report item is preceded with the phrase 'Indicate your confidence in your ability to ...' and scored on a standard 5-point Likert-type scale (1 = No confidence at all; 5 = Complete confidence), with 190 items in total. Scores are averaged for each domain, ranging from 1 to 5, with higher scores suggestive of greater confidence.
The CCI is the product of a lengthy investigation by the authors and their colleagues into career confidence and self-efficacy. The preceding scales from which the CCI has been derived include the Skills Confidence Inventory (Betz, Harmon, & Borgen, 1996) and the Expanded Skills Confidence Inventory (Betz et al., 2003). All were designed, tested and have evolved with an applied utility in mind: to guide career decision-making in tandem with measurements of vocational interests.
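For illustration only, the sketch below (Python) shows the domain-level averaging described above and one way theme-level scores might be derived from it (averaging domain means within a Holland theme); whether the CAPA system aggregates themes in exactly this way is not specified here, and the item assignments and response values are invented, with only a few of the 27 domains shown.

```python
# Illustrative CCI-style scoring: average 1-5 ratings within each domain,
# then (as an assumption) average domain scores within their Holland theme.
# Item assignments and responses are invented; only a few domains are shown.

DOMAIN_TO_THEME = {"Science": "Investigative", "Math": "Investigative",
                   "Writing": "Artistic", "Helping": "Social"}

def score_cci(ratings_by_domain):
    """Return (domain means, theme means) from 1-5 confidence ratings."""
    domain_scores = {d: sum(r) / len(r) for d, r in ratings_by_domain.items()}
    theme_lists = {}
    for domain, score in domain_scores.items():
        theme_lists.setdefault(DOMAIN_TO_THEME[domain], []).append(score)
    theme_scores = {t: sum(s) / len(s) for t, s in theme_lists.items()}
    return domain_scores, theme_scores

ratings = {"Science": [4, 5, 3, 4], "Math": [3, 3, 4, 2],
           "Writing": [5, 4, 4], "Helping": [4, 4, 5, 5]}
domains, themes = score_cci(ratings)
print(domains)
print(themes)
```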
Sample
The CCI has been studied in a number of large university student samples (e.g., N = 960; Betz & Borgen, 2009) and descriptive statistics can be found in the scale manual (Betz & Borgen, 2006). However, Borgen and Betz (2008) reported gender differences for 160 psychology students in the overall Holland Themes, specific domains and Engagement Styles. Males scored significantly higher than females in Realistic and Investigative confidence, as well as confidence in Mechanical Activities, Information Technology, Protective Services, Science, Math, Accounting, Personal Computing and Risk Taking. Furthermore, females scored significantly higher than males in Social confidence, as well as confidence in Helping, Cultural Sensitivity, and Teamwork.
Reliability
Internal Consistency
Based on a sample of 644 university students, Cronbach alpha coefficients for the Holland Themes, Domains
and Engagement Styles ranged between .76 and .94 ( Betz & Borgen, 2009 ).
Test–Retest
Test–retest reliability for the CCI has not been assessed. However, the stability of the scale's predecessor, the Expanded Skills Confidence Inventory, was assessed in a student sample (N = 160), with three-week test–retest coefficients ranging from .77 to .89 (Robinson & Betz, 2004).
Validity
Convergent/Concurrent
Betz and Borgen (2008) reported correlations (rs = −.13 to .65), based on a sample of 160 university students, for the Holland Theme and Engagement Style scores with 17 content-based personality scales that form the Healthy Personality Inventory (HPI): Trustworthy, Generous, Confident, Organized, Detail Oriented, Goal Directed, Outgoing, Energetic, Adventurous, Assertive, Relaxed, Happy, Decisive, Rested, Creative, Intellectual and Analytical. These HPI scales are intended to capture healthy personality traits and to 'identify strengths and adaptive personal dispositions' (Betz & Borgen, 2008, p. 27). Correlations were apparent for 11 of the 17 HPI scales with the Motivation and Academic Achievement Styles – e.g., the Styles correlated .58 and .56, respectively, with being Goal Directed, and both Styles correlated .40 with being Organized. These results demonstrated that Social, Enterprising and Conventional confidence scores correlated significantly (p < .001) and positively with nine, ten and eight of the HPI scales, respectively, together correlating significantly with all HPI scales except Relaxed. For example, Social confidence correlated with being Trustworthy, Generous and Energetic; Enterprising confidence with being Outgoing, Decisive and Creative; and Conventional confidence with being Organized, Detail Oriented and Analytical. In contrast, however, Realistic and Artistic confidence correlated significantly (p < .001) with only the Analytical (r = .35) and Creative (r = .61) HPI scales, respectively. Similarly, Investigative confidence correlated significantly (p < .001) with only the Confident (r = .29), Intellectual (r = .35), and Analytical (r = .65) HPI scales. For the Engagement Styles, Extraversion, Leadership, Teamwork, Motivation and Academic Achievement correlated significantly (p < .001) and positively with 9, 12, 12, 15 and 15 of the HPI scales, respectively. Overall, the strongest of these correlations were with being Confident, Outgoing, Energetic, Assertive and Happy (rs = .29 to .68). The Motivation and Academic Achievement Styles also shared notably strong correlations with being Organized, Detail Oriented, Goal Directed, Intellectual and Analytical (rs = .38 to .65). Risk Taking, however, correlated significantly with being Analytical only (r = .37).
Divergent/Discriminant
Realistic, Investigative, Artistic and Risk Taking Style confidence appeared largely unrelated to healthy person-
ality traits. No significant negative correlations with the HPI scales have emerged (Betz & Borgen, 2008).
Construct/Factor Analytic
The factorial structure has not been fully described outside of the scale manual. Yet research on the CCI’s pre-
decessor, the Expanded Skills Confidence Inventory (ESCI; Betz et al., 2003 ) is available and cited by Betz and
Borgen (2009) as contributing data. Betz et al. demonstrated that the factor structure of the ESCI was somewhat supported using exploratory factor analysis (principal axis factoring with varimax rotation; N = 934 undergraduate psychology students). They found that items readily defined nine of the intended subscales, while the remaining items were loaded by a large single factor. With the exception of the Teamwork and Leadership dimensions, these items separated reasonably well into their expected factors when submitted to a separate exploratory factor analysis with an oblique rotation method (principal axis factoring with promax rotation). As expected, however, these factors shared moderate to strong positive intercorrelations (rs = .13 to .72).
Location
Betz, N.E., & Borgen, F.H. (2010). The CAPA Integrative Online System for College Major Exploration. Journal of Career Assessment, 18(4), 317–327.
Results and Comments
The CCI offers the advantage of assessing multiple domains, but there is evidence of redundancy in making such fine distinctions. Although there is no published report on the subscales' factorial structure (excluding the manual), shared covariance with related constructs suggests that the CCI might benefit from future refinement. Nonetheless, the CCI is the only confidence scale reported in the literature that specifically addresses vocational competences relevant to adult populations. In its present state, the CCI may be appropriate to use with adult populations. Future work should focus on factor analysis of the available data instead of relying on predetermined subscale scores.
CAPA CONFIDENCE INVENTORY (SAMPLE ITEMS)
Indicate your confidence in your ability to ...
Each question is accompanied by the following rating scale:
No confidence at all                              Complete confidence
       1          2          3          4          5
Artistic Theme, Writing domain example item
Communicate your ideas through writing
Enterprising Theme, Marketing & Advertising domain example item
Develop a clever TV commercial
Academic Achievement Engagement Style example item
Concentrate for several hours on a difficult topic
Note: Reproduced with permission.
SELF-REPORT PHYSICAL CONFIDENCE SCALES
Trait-Robustness of Self-Confidence Inventory (TROSCI)
(Beattie et al., 2011 ).
Variable
Beattie et al. defined confidence, in terms of Bandura's (1997) self-efficacy theory, as a self-belief related to one's ability to perform tasks, which can vary on three dimensions: level, generality, and strength.
Description
The TROSCI was designed to assess an athlete's 'ability to maintain confidence in the face of adversity' (Beattie et al., 2011, p. 184) with eight self-report items such as, 'If I perform poorly, my confidence is not badly affected.' Each item is rated on a 9-point Likert-type scale (1 = Strongly Disagree; 5 = Neutral; 9 = Strongly Agree).
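For illustration, the sketch below (Python) reverse-scores the starred items (see the reproduced scale later in this entry) on the 9-point scale and sums the eight responses. Treating the total as a simple sum of reverse-keyed and positively keyed items is an assumption made here for the example; Beattie et al. (2011) should be consulted for the exact scoring rule.

```python
# Illustrative TROSCI-style scoring: reverse-score the starred items on the
# 9-point scale (reversed = 10 - rating) and sum all eight items.
# Treating the total as a simple sum is an assumption, not the published key.

REVERSE_KEYED = {1, 2, 7}  # item positions marked * in the reproduced scale

def score_trosci(ratings):
    """ratings: list of eight integers from 1 to 9, in item order."""
    assert len(ratings) == 8 and all(1 <= r <= 9 for r in ratings)
    total = 0
    for position, rating in enumerate(ratings, start=1):
        total += (10 - rating) if position in REVERSE_KEYED else rating
    return total

print(score_trosci([3, 4, 7, 6, 5, 6, 2, 7]))  # higher totals = more robust confidence
```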
Sample
The TROSCI was developed with samples of 268 university athletes involved in their respective sport for an average of 6.42 years (148 male; Mage = 19.2 years), and 176 male adult athletes involved in their respective sport for an average of 9.8 years (Mage = 20.4 years). Item means ranged between 3.88 and 4.71, with standard deviations between 1.87 and 2.19.
Reliability
Internal Consistency
A Cronbach alpha coefficient of .88 has been reported ( Beattie et al., 2011 ).
Test–Retest
One-week test–retest reliability has been reported with r = .90 (Beattie et al., 2011).
Validity
Convergent/Concurrent
TROSCI scores correlated significantly and positively with the Trait Sport-Confidence Inventory (r = .44), a measure of athletes' general sense of confidence described below.
Divergent/Discriminant
Divergent validity has not been assessed to-date.
Construct/Factor Analytic
Separate confirmatory factor analyses were conducted on the two samples described above (Beattie et al., 2011). Both demonstrated that a single latent factor provided a good fit to the data and did not differ significantly with respect to model fit from a two-factor model. Sample 1 fit: S-B χ²(54) = 115.00, CFI = .97, RMSEA = .07, SRMR = .05; Sample 2 fit: S-B χ²(20) = 29.36, CFI = .97, RMSEA = .05, SRMR = .04.
Criterion/Predictive
TROSCI scores obtained five days prior to sporting competition were significantly and negatively correlated with state confidence variability leading up to competition (r = −.37, p < .01). Furthermore, these TROSCI scores predicted post-competition state confidence incrementally over state confidence measured one hour prior to competition and a measure of performance experience measured immediately post competition (from −3 = perform much worse than usual, to +3 = perform much better than usual).
Location
Beattie, S., Hardy, L., Savage, J., Woodman, T., & Callow, N. (2011). Development and validation of a trait measure of robustness of self-confidence. Psychology of Sport and Exercise, 12(2), 184–191.
Results and Comments
The TROSCI is a short, well-constructed scale that lends itself to adaptation for the study of confidence in
other specific domains. Given its novelty, further investigation regarding its incremental utility over existing
scales, such as the Trait Sports Confidence Inventory (below) is recommended before using it in isolation.
Furthermore, considering the large proportion of male athletes used in its construction, a larger female sample
will be needed to examine gender differences.
TRAIT-ROBUSTNESS OF SELF-CONFIDENCE INVENTORY
Please read the instructions carefully before responding to the statements.
Think about your confidence and how your performance may affect your confidence generally.
The statements below describe how you may feel generally about your confidence. Answer each statement by circling the number that corresponds to how strongly you agree or disagree generally. Please try and respond to each item separately.
The term competition refers to matches, tournaments or other competitive events.
Please answer the item as honestly and accurately as possible. There are no right or wrong answers. Your
response will be kept confidential.
Each question is accompanied by the following rating scale:
Strongly disagree               Neutral               Strongly agree
   1      2      3      4      5      6      7      8      9
A bad result in competition has a very negative effect on my self-confidence.*
My self-confidence goes up and down a lot.*
Negative feedback from others does not affect my level of self-confidence.
If I perform poorly, my confidence is not badly affected.
My self-confidence is stable; it does not vary much at all.
My self-confidence is not greatly affected by the outcome of competition.
If I make a mistake it has quite a large detrimental effect on my self-confidence.*
My self-confidence remains stable regardless of fluctuations in fitness level.
Note: * Reverse-scored item.
Trait Sport-Confidence Inventory (TSCI)
(Vealey, 1986 ).
Variable
Vealey (1986) defined sport-confidence as ‘the belief or degree of certainty individuals possess about their abil-
ity to be successful in sport’.
Description
The TSCI is intended to measure sport-specific trait self-confidence (sport-confidence), rather than a general sense of confidence. The TSCI therefore assesses athletes' sense of confidence in their ability to perform successfully in their sport under adverse conditions, relative to the most confident athlete they know. The scale consists of 13 items, such as, 'compare your confidence in your ability to be successful to the most confident athlete you know', rated on a 9-point Likert-type scale (1 = Low; 5 = Medium; 9 = High). Scores are summed, ranging from 13 to 117, with higher scores indicative of greater confidence.
This scale was developed along with the State Sport-Confidence Inventory (SSCI). The SSCI consists of identi-
cal item content to the TSCI, worded in terms of one’s present sport-confidence state. Although not included
here, the SSCI can be found at the same location below.
Sample
Vealey (1986) reported means and standard deviations for three athlete samples: 92 high school students = 77.66 (14.81); 91 college students = 77.77 (17.09); and 48 elite gymnasts = 99.79 (13.65). In total, these three groups had a mean score of 82.30 (SD = 17.88).
Reliability
Internal Consistency
Vealey (1986) reported item-total correlations exceeding .50, and a Cronbach alpha coefficient of .93.
Test–Retest
Test–retest coefficients for high school and university student athletes were found to be .86 after one day, .89 after one week, and .83 after one month (Vealey, 1986).
Validity
Convergent/Concurrent
Vealey (1986) reported that TSCI scores shared significant positive correlations with two measures of state sport-confidence (rs = .64 and .48), physical self-presentation confidence (r = .30), and self-esteem (r = .31). It also shared a low, yet statistically significant, correlation with perceived physical ability (r = .18). Furthermore, TSCI scores correlated significantly and positively with the Trait-Robustness of Self-Confidence Inventory (r = .44), a measure of athletes' confidence in the face of adversity described above (Beattie et al., 2011).
Divergent/Discriminant
Vealey (1986) reported significant negative correlations with external locus of control (r = −.18) and trait competitiveness (rs = −.28, −.30, and −.18).
Criterion/Predictive
Gayton and Nickless (1987) found that TSCI scores of marathon runners, collected prior to running, correlated significantly and negatively with their actual marathon finishing times (r = −.43) in a small sample (N = 25). That is, more confident runners were faster, as expected.
Location
Vealey, R.S. (1986). Conceptualization of sport-confidence and competitive orientation: Preliminary investigation and instrument development. Journal of Sport Psychology, 8(2), 221–246.
Results and Comments
The TSCI differs from the other scales mentioned here as its scoring method involves a self-comparison to the most confident athlete the test taker knows. Whether this alters the construct being measured remains an empirical question. From a psychometric perspective, it would be of benefit to further investigate this scale's convergent validity with other measures of sport-confidence. Furthermore, the TSCI is likely to benefit from larger validation studies and factor analytic investigation of its structure. Nonetheless, the TSCI is short, internally consistent, has demonstrated excellent test–retest reliability, and a state confidence version is readily available for comparisons. Moreover, its relationships with other self-report measures, such as self-esteem and locus of control, mirror those of the cognitive confidence scales. In its present state, the TSCI is therefore likely to be of considerable use for the measurement of athletic confidence.
TSCI-LIKE ITEM
Think about how self-confident you are when you compete in sport.
Answer the questions below on how confident you generally feel when you compete in sport. Compare your self-
confidence to the most self-confident athlete you know.
Please answer as you really feel, not how you would like to feel. Your answers will be kept completely
confidential.
When you compete, how confident do you generally feel ? (circle number)
Each question is accompanied by the following rating scale:
Low                    Medium                    High
  1      2      3      4      5      6      7      8      9
Compare your confidence in your ability to perform successfully to the most confident athlete you know.
Note: Contrived example of a TSCI item.
CONFIDENCE AND ITEM LEVEL MEASURES OF SELF-EFFICACY
Mathematics Self-Efficacy Scale (MSS)
(OECD, 2005 ).
Variable
For the purpose of this book, the items described below are titled the Mathematics Self-Efficacy Scale (MSS); however, no official title exists. Similar to some measures considered in the preceding section, the MSS defines confidence, or self-efficacy, in terms of Bandura's (1977) theory: as one's perceived ability to complete or solve a specific task, such as an item on a test (see Schunk & Pajares, 2002).
Description
The MSS is intended to target a sense of confidence/self-efficacy in one’s ability to solve mathematical pro-
blems. It differs from the other questionnaire measures in that, like the judgment of accuracy paradigm consid-
ered in the next section, it is item-specific. However, it also differs from the judgment of accuracy measures in
that participants are not asked to provide answers to any of the questions. Thus, the scale measures one’s belief
that she/he will be able to solve a particular problem.
The scale consists of eight items from Question 31 of the PISA (Programme for International Student Assessment) 2003 Student Questionnaire (SQ). Each question describes a mathematical problem preceded with 'how confident do you feel about,' rated on a 4-point scale (1 = Not Very Confident; 4 = Very Confident). Scores are summed across all eight items, with higher scores indicative of greater maths self-efficacy. Total scores are frequently divided by the number of items to obtain an average reflective of the 4-point Likert scale.
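A minimal sketch of this scoring rule (Python), using invented responses, is shown below; it returns both the item sum and the 1–4 average that is frequently reported.

```python
# Illustrative MSS-style scoring: sum eight 1-4 confidence ratings and also
# report the average on the original 4-point metric. Responses are invented.

def score_mss(ratings):
    """Return (total, mean) for eight ratings on the 1-4 scale."""
    assert len(ratings) == 8 and all(1 <= r <= 4 for r in ratings)
    total = sum(ratings)
    return total, total / len(ratings)

total, mean = score_mss([4, 3, 4, 2, 3, 4, 3, 4])
print(total, round(mean, 2))  # 27 3.38
```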
Sample
The MSS was employed and tested in the PISA 2003 study. Five out of the eight MSS items were also used in the Stankov, Lee et al. (2012) study based on 7167 secondary school students (15-year-olds) from four Confucian and five European countries. The mean and standard deviation for the overall sample were 3.73 and .78, respectively.
Reliability
Internal Consistency
Morony et al. (2013) reported a mean Cronbach alpha coefficient of .85, ranging from .78 (Latvia) to .91
(Taiwan).
Test–Retest
Stankov and Lee (2008) reported test–retest reliabilities ranging from .68 to .85.
Validity
Convergent/Concurrent
The self-efficacy factor correlated positively with factors measuring Self-concept (r = .57) and Confidence as assessed in the judgment of accuracy paradigm described below (r = .61) (Stankov, Lee et al., 2012). Lee (2009) also found evidence for the existence of a separate self-efficacy factor defined by these same items.
Divergent/Discriminant
The self-efficacy factor correlated negatively with Mathematics Anxiety (r = −.39) (Stankov, Lee et al., 2012). Stankov, Lee et al. (2012) also reported the results of a principal components analysis with promax rotation (N = 1605) based on a broader selection of constructs. These included the self-efficacy, mathematics self-concept and anxiety measures, as well as measures of self-concepts related to memory, reasoning, accuracy and self-evaluation. The results of this analysis led to a solution where self-efficacy split its variance between a self-beliefs component and another component with loadings on Accuracy, Self-evaluation and Confidence measures. Morony et al. (2013) found correlations close to zero between self-efficacy measures and Big Five personality scores.
Construct/Factor Analytic
When submitted to exploratory factor analysis, all eight items defined a single Self-efficacy factor ( Morony
et al., 2013 ).
Criterion/Predictive
Stankov, Lee et al. (2012) reported that Mathematics Self-Efficacy correlated .45 with Mathematics performance, higher than any other self-beliefs construct with the exception of Confidence, which is assessed with the judgment of accuracy paradigm described below. The standardized beta coefficient for predicting mathematics accuracy from this self-efficacy measure was .38 without Confidence included in the predictor set, and it was reduced to .18 when Confidence was included.
Location
OECD (2005). PISA 2003 Technical Report, PISA, OECD Publishing. doi:10.1787/9789264010543-en.
Results and Comments
The MSS is a short scale that has demonstrated adequate reliability and good predictive validity. Given that
self-efficacy assessed at the item level is one of the best non-cognitive predictors of cognitive ability, followed
closely by mathematics anxiety ( Morony et al., 2013; Stankov, Lee et al., 2012 ), the MSS approach to the measure-
ment of self-efficacy is likely to be employed successfully in other content areas. For example, Stankov, Lee et al.
reported similar findings with English achievement test items. Presently, however, the scale is limited by the use of a sample drawn from a specific age bracket and the lack of investigation into its test–retest reliability. Nevertheless, the MSS appears to be a useful cross-cultural scale for the measurement of Mathematics Self-Efficacy/Confidence in secondary school students.
MATHEMATICS SELF-EFFICACY SCALE
How confident do you feel about:
Each Question is accompanied by the following rating scale:
Not very confident                              Very confident
     □              □              □              □
Calculating how many square meters of tile you need to cover a floor.
Calculating how much cheaper a TV would be after a 30% discount.
Using a train timetable to work out how long it would take to get from one place to another.
Understanding graphs presented in newspapers.
Finding the actual distance between two places on a map with a 1:100 scale.
Calculating the petrol consumption rate of a car.
Solving an equation like 3x + 5 = 17.
Solving an equation like 2(x + 3) = (x + 3)(x − 3).
ONLINE PERFORMANCE-BASED MEASURES
Proverbs Matching Test (PMT)
(Subtest from Stankov's Test of Cognitive Abilities – STOCA)
and
Future Life Events Scale (FLES)
(Kleitman & Stankov, 2007 ).
Variable
Confidence judgments (including online measures such as the PMT and FLES) are an integral part of metacognitive self-monitoring and experience processes, as they reflect one's belief in the accuracy of a decision following a particular cognitive act (e.g., Keren, 1991; Kleitman, 2008; Schraw, Dunkle, Bendixen, & Roedel, 1995; Stankov, 2000). Self-monitoring is defined as the ability to watch, check and appraise the quality of one's own cognitive work in the course of doing it (Schraw & Moshman, 1995). Allwood and Granhag (2000) referred to these confidence judgments as deliberately derived feelings of confidence that occur in connection with decision making and action regulation (see also Koriat, 2012; Koriat & Goldsmith, 1996). Efklides (2006, 2008) and Stankov (2000) pointed out that these confidence ratings capture key metacognitive experiences closely tied to decision-making and self-regulation.
Description
Measures of subjective confidence in one’s own judgments and knowledge have been employed in many studies (e.g., Crawford & Stankov, 1996; Dunlosky & Metcalfe, 2009; Koriat, 2012; Moore & Healy, 2008; Perfect & Schwartz, 2002; Howie & Roebers, 2007; Stankov & Crawford, 1996a,b, 1997). Moore and Healy (2008) reviewed the different types of confidence judgments, including online confidence judgments carried out immediately after responding to a test item. They reported that immediate confidence judgments can be given as: (1) discrete estimates, such as unique probabilistic numbers along a ‘confidence scale’ and/or a verbal category along a typical Likert-type scale (e.g., ranging from ‘Unsure’ to ‘Very Sure’); and (2) interval, or confidence-interval, estimates, asking participants to estimate, for instance, 90% confidence intervals around their answers. Discrete confidence judgments have been more prevalent than confidence intervals around answers (Moore & Healy, 2008) and are the main focus here.
Participants in this paradigm rate how confident (or ‘sure’) they are that their chosen answer is correct imme-
diately after responding to an item in a test. Confidence levels are usually expressed in terms of percentages
(numerical method) and/or verbally. When percentages are used with multiple-choice items, the lowest rating
point (the starting point) depends on the number of alternatives ( k) given with a question. Specifically, the lowest
point is defined by 100/k. For example, in a 4-option multiple-choice question, the probability of answering the
question correctly by chance is 25%. Similarly, in a 2-option multiple-choice question, 50% is the probability of
answering the question correctly by chance. For open-ended questions, or ‘constructed’ answers, the starting
point is 0%.
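To make the arithmetic concrete, here is a minimal Python sketch of the 100/k convention for the lowest scale point; the function name and defaults are illustrative assumptions, not code from any published scoring protocol.

```python
def chance_level(k_alternatives=None):
    """Lowest point (in %) of an online confidence scale.

    For a k-option multiple-choice item the probability of a correct guess
    is 100 / k; for open-ended ('constructed') answers it is 0.
    """
    if k_alternatives is None:        # open-ended / constructed answer
        return 0.0
    return 100.0 / k_alternatives

print(chance_level(4), chance_level(2), chance_level())   # 25.0 50.0 0.0
```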
Confidence scales may employ specified intervals (e.g., 10%, 20%) or be open-ended (‘Assign any level
between 0% and 100%’). To assist with comprehension of a numerical scale, verbal anchors may also be used.
This is especially helpful with younger participants. For instance, the starting point of a scale is marked as
‘Guessing’ or ‘Not Certain’, while the end of the scale typically contains ‘Absolutely Certain’ and/or ‘Sure’
anchors.
Use of verbal anchors on their own is problematic. First, the translation of such cues into numerical values is
arbitrary and rests on the assumption that all participants understand those cues to mean the same level of cer-
tainty. Second, there are no universally accepted verbal anchors to express different levels of certainty. Verbal
expressions vary across different research conditions, making it difficult to draw comparisons. Males and females
may react differently to verbal and numerical scales. It is therefore important to combine numerical and verbal
anchors when assessing online confidence levels and to explain clearly the reasons for using the lowest confi-
dence point as well as the correspondence between different certainty levels and anchors (see examples below).
Allwood, Granhag, & Jonsson (2006, Allwood, Innes-Ker, Homgren, and Fredin; 2008) examined four different
types of confidence scales with children aged 11 /C012 years and 8 /C09 years, incorporating several theories of proba-
bility and different numerical and verbal anchors. In the so-called ‘picture scale’, pictures of ‘frowning’ or
‘smiley’ faces were accompanied by verbal and numerical expressions (e.g., I’m very unsure, just guessing, 20%
up to I’m very sure/Absolutely sure, 100%). In the ‘line’ scale, participants had to make a mark reflective of their
level of certainty on the non-shaded (50% in Figure 7.1 ) area of a line. Figure 7.1 provides an example of the line
scale for a binary (Yes/No) multiple-choice question. For this scale, the area marked is translated into the rele-
vant confidence level (here 80%).
[FIGURE 7.1 Example of a Line confidence rating scale (Allwood et al., 2006). Prompt: ‘How sure are you that this answer is right?’ The line runs from ‘Absolutely unsure’ (50%) to ‘Absolutely sure’ (100%), with the respondent’s mark shown at 80%.]
The authors reported no differences between these scales, suggesting equivalence in the ability to capture con-
fidence levels and their biases. Recent research with younger children (aged 5–7 years) used a new child-friendly numerical and verbal confidence scale accompanied by cartoon-like stimuli.
Using online assessment, participants rate how confident they are that their answer is correct immediately after performing a cognitive act, such as responding to a test item. This assessment differs from self-report measures, as well as from prospective judgments, where the individual is asked to make a prediction about their performance prior to the task. Online confidence ratings are averaged over attempted test items to give an overall confidence score. This online method – probing the actual cognitive act rather than relying on subjective self-report questionnaire measures – appears to be a good way to assess the Confidence trait in an individual (see Kleitman, 2008; Stankov et al., 1999; Stankov & Kleitman, 2008, for reviews).
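As an illustration of this scoring convention, the following minimal Python sketch averages per-item confidence ratings into an overall confidence score; the function and the example ratings are hypothetical and not taken from any of the instruments described here.

```python
def confidence_score(ratings):
    """Average per-item confidence ratings (in %) over attempted items only."""
    attempted = [r for r in ratings if r is not None]   # None marks an unattempted item
    return sum(attempted) / len(attempted)

# Hypothetical ratings for a 10-item test, with one item skipped
ratings = [60, 80, 100, 40, None, 70, 90, 50, 80, 60]
print(confidence_score(ratings))   # 70.0
```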
Sample
Much online confidence assessment has been carried out with undergraduates. Additionally, Stankov and
Crawford (1996) have studied people over the age of 65 years and studies with adolescents and children have
been reported ( Allwood et al., 2006; Buratti, Allwood, & Johanson, 2013; Buratti, Allwood, & Kleitman 2013;
Kleitman et al., 2010; Kleitman & Gibson, 2011; Kleitman et al., 2013; Morony et al., 2013; Roebers, Krebs, &
Roderer 2014; Stankov, Lee et al., 2012 ; Stankov, Morony, & Lee, 2013; Stankov et al., 2008, 2014).
Reliability
There is overwhelming empirical evidence showing pronounced individual differences in confidence ratings
(see Kleitman, 2008; Stankov et al., 1999; Stankov, 2000; Stankov, Lee et al., 2012; Stankov, Pallier et al., 2012).
Internal Consistency
Cronbach alpha coefficients have ranged between .75 and .90, usually being closer to the upper estimates
(e.g., Jonsson & Allwood, 2003 ;Kleitman & Stankov, 2007 ;Stankov & Crawford, 1996a, b ;Stankov et al., 2008 ;
Stankov & Lee, 2008 ). These tend to be higher than estimates for the corresponding accuracy scores and slightly
lower than for speed measures. These results have been replicated across different cognitive domains and despite
variations in the number of test items employed.
Test–Retest
Test–retest reliability coefficients for confidence judgments acquired from parallel tests completed two and four weeks apart ranged from .85 to .87 (Jonsson & Allwood, 2003). Kleitman and Costa (2014) reported a test–retest coefficient of .94 for confidence scores used in formative assessments across the semester.
Validity
Convergent/Concurrent
Correlations between accuracy and confidence scores from the same test tend to range between .40 and .60 (see Stankov, 1999, 2013, for a review). In other words, on average, smarter people tend to be more confident about their performance. These findings support Koriat’s (2000) suggestion that both variables depend, at least in part, on the online feedback from the cognitive process of answering a question.
Self-concept about the competencies of one’s own memory and reasoning abilities (Kleitman, 2008; Kleitman & Stankov, 2007; Stankov & Lee, 2008) also predicts confidence levels after controlling for accuracy of performance. Recent studies with both adolescents and 9- to 11-year-old children clearly show that confidence has variance in common with metacognitive self-beliefs and, in particular, with academic beliefs, self-efficacy and domain-relevant anxiety (Kleitman & Gibson, 2011; Kleitman & Costa, 2014; Stankov et al., 2012; Morony et al., 2013). There is some recent evidence that these measures of self-beliefs, together with measures of confidence, tend to define a separate Self-beliefs factor (Morony et al., 2013; Stankov et al., 2012). To our knowledge, however, there have been no investigations into the relationships between online metacognitive Confidence and the self-belief Confidence measured by the self-report measures described earlier in this chapter.
Divergent/Discriminant
Empirical studies have shown only small or non-existent correlations between the Confidence trait and all but the Openness to Experience (r = .30) personality dimension (Buratti et al., 2013; Dahl, Allwood, Rennemark, & Hagberg, 2010; Kleitman, 2008; Pallier et al., 2002; Schaefer, Williams, Goodie, & Campbell, 2004). Openness to Experience, however, also tends to correlate about .30 with cognitive performance. Given the robust relationship between Confidence and cognitive performance, it is not surprising that people who score high on Openness tend to have somewhat higher levels of confidence. Thus, the Confidence factor is not a part of the personality taxonomy but lies on the ‘no-man’s-land’ between personality and cognitive abilities (Stankov et al., 1999).
Construct/Factor Analytic
When measured across different items, cognitive tests, and knowledge domains, a Confidence factor tends to emerge – using both exploratory and confirmatory factor analytic techniques – reflecting the stability of those confidence judgments (see Stankov, Lee et al., 2012; Stankov, Pallier et al., 2012). Some studies have also included atypical tasks, such as the Sureness scale of Kleitman and Stankov (2007) described above. Despite this diversity of tasks – whether the task is to solve a problem, predict future events, or simply state one’s views – a Confidence factor tends to appear, signifying the habitual nature and consistency of people’s confidence. This factor has been equally pronounced among children aged 9–12 years (Kleitman & Gibson, 2011; Kleitman et al., 2010; Kleitman et al., 2013), adolescents (Stankov, Lee et al., 2012; Morony et al., 2013) and adults (e.g., Kleitman, 2008; Stankov & Crawford, 1996a,b; Pallier et al., 2002; Stankov, 2000; Stankov & Lee, 2008).
Criterion/Predictive
School Achievement: Higher confidence, measured with confidence ratings, has been a strong predictor of academic achievement. For example, in a sample of primary-school children (9- to 12-year-olds; N = 183), higher levels of confidence predicted higher school grades after controlling for age, gender, intelligence, school fees and parent–child family dynamics (Kleitman et al., 2010). That is, teachers who were naïve to the research objectives assigned higher grades to more confident children as compared with children who were less confident in their performance.
Three large-scale studies carried out in Singapore, one of the best performing countries in PISA surveys, found similar results. In the Stankov, Lee et al. (2012) study, the raw correlation between Mathematics Exam scores at the end of the year and confidence was .55, which was higher than the correlations with mathematics anxiety (−.39), self-concept (.25) and self-efficacy (.16). In the Morony et al. (2013) study, mathematics achievement scores correlated .60 with confidence, −.27 with anxiety, .35 with self-concept and .45 with self-efficacy, and a very similar pattern of correlations was obtained in every country. Finally, in another Singaporean sample (N = 600) the correlation between confidence and achievement was .68, whereas the corresponding correlations for self-efficacy (.41), self-concept (.30) and anxiety (−.33) were all lower in size (Stankov et al., 2014). There can be no doubt that confidence is the best known non-cognitive predictor of academic achievement in education. Furthermore, in all three studies, regression and SEM analyses have shown that confidence captured most of the predictive variance of the other three self-beliefs constructs. This suggests that, in many instances, capturing online confidence levels ‘absolves the researcher from employing separate scales of Self-efficacy, Self-concept and Anxiety’ (Kleitman et al., 2013). In our studies with university students and adults, we employed measures of personality, thinking dispositions and social attitudes in addition to measures of confidence. In all instances, confidence proved to be the best non-cognitive predictor of cognitive performance (Kleitman, 2008; Crawford & Stankov, 1996).
In contrast, there is limited evidence for the predictive validity of Confidence for different types of maladjusted behavior. Want and Kleitman (2006) reported a discrepancy between confidence and accuracy levels for people suffering from imposterism feelings (intense feelings of phoniness experienced by some individuals who have achieved a certain level of success; Clance & Imes, 1978), such that confidence ratings, but not accuracy, shared a negative relationship with detrimental self-evaluations. In other words, people high on imposterism feelings showed a pronounced ‘gap’ between their confidence and actual performance levels.
Confidence levels are also central to many real-life decision-making processes (Bruine de Bruin, Parker, & Fischhoff, 2007; DeMarree, Petty, & Brinol, 2007; Koriat & Goldsmith, 1996; Slovic, Fischhoff, & Lichtenstein, 1977). For example, Jackson and Kleitman (2014) found confidence levels to be a strong incremental predictor of decision-making tendencies in their novel Medical Decision-making Test (MDMT) using an undergraduate psychology sample (N = 193). In this test, participants diagnose patients with fictitious, yet supposedly fatal, illnesses and indicate their confidence in the accuracy of each diagnosis. For each patient, participants decide whether to administer a treatment matching their diagnosis (direct) or request a blood test to make an accurate diagnosis. Based on Koriat and Goldsmith’s (1996) model, the MDMT captured individual differences in the way people make decisions based on their own levels of confidence. This allowed for the assessment of five novel individual decision-making tendencies: optimal (patients cured outright); realistic (patients cured outright or tested appropriately); hesitant (patients risking death due to unnecessary testing); incompetent (patients dying due to incorrect diagnosis and treatment); and congruent (proportion of patients treated). Confidence was a strong incremental predictor of these tendencies, after taking diagnostic accuracy, intelligence, personality, cognitive styles, gender, and age into account. In support of this finding, Parker, De Bruin, Yoong, and Willis (2012) reported that confidence judgments measured with four different tests – financial knowledge, financial sophistication, a hypothetical investment task and a general knowledge test – all predicted actual financial retirement planning behavior after accounting for test accuracy, age, gender, whether participants had a Bachelor’s degree, and income (aged 18 to 88 years; N = 491). These findings imply that confidence judgments are valid measures of the confidence construct that, in turn, shares a meaningful relationship with habitual decision-making tendencies.
Results and Comments
Much of the work employing confidence ratings to assess online judgments of accuracy has been carried out with cognitive tests. In our empirical work, we have employed virtually all types of cognitive tests used in studies of fluid (Gf) and crystallized (Gc) intelligence (see Carroll, 1993). These included measures of higher mental processes, such as memory, creative and critical thinking, and perceptual tests from visual, auditory, tactile, kinesthetic and movement/sport, olfactory and gustatory modalities. Perceptual tests, like the Line Length test (Kleitman & Stankov, 2001; Stankov, Pallier et al., 2012), can be used in studies of developmental changes during childhood, since measures of Gf and Gc may be much more sensitive to age-related changes during this period.
Although psychometric properties of online measures of confidence are satisfactory, some educational psycholo-
gists have been reluctant to embrace their use. A common reason appears to be the perceived close temporal
proximity between the cognitive activity of solving a problem and confidence in the accuracy of the solution
itself. It seems that those holding such views fail to appreciate empirical evidence showing that typical correla-
tions of .40 to .60 between accuracy and confidence are similar in size to the correlations between measures of
fluid and crystallized intelligence and that a separate confidence factor has been repeatedly reported (e.g.,
Stankov, 2000 ).
Below we present two scale examples employing different online measures of confidence. The first, the Proverbs Matching Test – a subtest of the Stankov Test of Cognitive Ability (STOCA) battery that measures both crystallized and fluid intelligence – uses a discrete, categorical, numerical scale (Stankov & Dolph, 2000). The second, the Future Life Events Scale (Kleitman, 2008; Kleitman & Stankov, 2007), employs a discrete verbal scale based on Sureness, rather than confidence itself. The Sureness scale is presented here in order to illustrate that cognitive processing may be minimal, and yet the validity of the Sureness scale is comparable to the online assessment used in the Proverbs Matching Test.
PROVERBS MATCHING TEST
Directions:
In this test you will be given proverbs. Your task is to choose a proverb that is the closest in meaning to the first.
Here is an example:
‘Birds of a feather flock together.’
(a) Opposites attract
(b) Tell me what company you keep and I will tell you who you are
(c) There is little friendship in the world and least of all between equals
(d) To check an elephant, inspect its tail
(e) Shared joy is doubled joy
In this example the correct answer is (b), since ‘Tell me what company you keep and I will tell you who you are’ is closer in meaning to ‘Birds of a feather flock together’ than any other alternative answer.
After each item you will be asked to state how confident you are that your answer is correct. A guess corresponds closely to 20% confidence, so you should give this as your rating. Absolute certainty corresponds to 100% confidence. Please make your choice from the ratings provided on the sheet. Please work as quickly and accurately as you can.
1. The truth is immortal, but the man who tells the truth will become dead.
Truth lies at the bottom of a well.
Better a lie that heals than a truth that wounds.
One is always wrong, but with two, truth begins.
Truth is mighty and will prevail.
The truth of a word depends on how you understand it.
How confident are you that your answer is correct?
20% 30% 40% 50% 60% 70% 80% 90% 100%
2. A fisherman of shallow seas uses a short line; a fisherman of deeper seas uses a long line.
No bird soars too high, if she soars with her own wings.
Those who say it cannot be done are usually interrupted by others doing it.
You will only reach as far as you aim and prepare yourself to reach.
Vision is not seeing things as they are, but as they will be.
One can never consent to creep when one feels an impulse to soar.
How confident are you that your answer is correct?
20% 30% 40% 50% 60% 70% 80% 90% 100%
3. Empty vessels make the most sound.
Tall trees often have shallow roots.
Still waters run deep.
A tiger hides its claws.
Better to remain silent and be thought a fool than to speak out and remove all doubt.
If the beard were all, goats might preach.
How confident are you that your answer is correct?
20% 30% 40% 50% 60% 70% 80% 90% 100%
4. Virtue is its own reward.
Some rise by sin, others by virtue fall.
There are no fans in hell.
In social life, we please more often by our vices than our virtues.
Be good and you will be lonesome.
Virtue is goodness, not material or money.
How confident are you that your answer is correct?
20% 30% 40% 50% 60% 70% 80% 90% 100%
5. The journey of a thousand miles begins with one step.
To travel hopefully is better than to arrive.
Traveler, there is no trail: you blaze the trail as you travel.
A man travels the world over in search of what he needs and returns home to find it.
One may not reach the dawn save by the path of night.
He who is outside the door already has a good part of the trip behind him.
How confident are you that your answer is correct?
20% 30% 40% 50% 60% 70% 80% 90% 100%
6. Better to understand little than to misunderstand a lot.
The difference between genius and stupidity is that genius has its limits.
The opinion of the intelligent is better than the certainty of the ignorant.
A great many people think they are thinking when they are merely rearranging their prejudices.
What he doesn’t know would make a library anybody would be proud of.
It isn’t what a man doesn’t know that makes him a fool, but what he does know that isn’t so.
How confident are you that your answer is correct?
20% 30% 40% 50% 60% 70% 80% 90% 100%
7. A careless watch invites the thief.
A full cup must be carried steadily.
A greedy eye never got a good bargain.
He that shows his purse longs to be rid of it.
Everyone carries a fool under his coat, but some hide it better than others.
Great possessions depend on fate; small possessions come from diligence.
How confident are you that your answer is correct?
20% 30% 40% 50% 60% 70% 80% 90% 100%
8. Silence is one great art of conversation.
Silence is the only thing that can’t be misquoted.
When the mouth stumbles, it is worse than the foot.
When you are arguing with an idiot, make sure the other person isn’t doing the same thing.
Silence is the ultimate weapon of power.
You can win more friends with your ear than you can with your mouth.
How confident are you that your answer is correct?
20% 30% 40% 50% 60% 70% 80% 90% 100%
9. The smallest leak sinks the largest ship.
A chain is only as strong as its weakest link.
Do not draw your sword to kill a gnat.
The fish which you did not catch is always big.
The pitcher goes so often to the well that it is broken at last.
The last straw breaks the camel’s back.
How confident are you that your answer is correct?
20% 30% 40% 50% 60% 70% 80% 90% 100%
10. In prosperity our friends know us; in adversity we know our friends.
Never speak ill of yourself; your friends will always say enough on that subject.
A real friend is one who walks in when the rest of the world walks out.
He who whips the dog of a friend whips the friend himself.
A good friend is worth more than money in your pocket.
A friend is someone that won’t begin to talk behind your back the minute you leave the room.
How confident are you that your answer is correct?
20% 30% 40% 50% 60% 70% 80% 90% 100%
FUTURE LIFE EVENTS SCALE
This scale will ask you to state what you believe to be the chance of a particular thing happening in the future. You will also be asked to indicate how sure you are about your opinion.
The following statements describe various events that may or may not happen. On a scale between 0 and 100, please indicate how likely each event is to occur. Thus, if you felt that an event was very likely, you should write a number close to 100; if you felt an event was very unlikely, you’d write a number close to 0; and if you felt an event was about equally likely and unlikely, you’d write a number close to 50.
We also want you to indicate how sure you are of your opinion. Please circle one of the options next to the sentence after you have completed it.
Each question is accompanied by the following rating scale:
Not sure at all □   Slightly sure □   Moderately sure □   Quite sure □   Very sure □
The chances that you’ll be successful in your chosen career are about_______ in 100.
The probability that a cure for cancer will be eventually found is about ________ in 100.
The chance that if you put some effort into mathematical training, you’d be able to do well in mathematics is
about _______ in 100.
On average, the chance of passing a driving test at the first attempt is about _______ in 100.
The chances that the problem of terrorism will be solved are about _______ in 100.
The chance that if you put your mind into something, your goals would come true is about ______ in 100.
The chances that virtual reality will become the main entertainment in the future are about _______ in 100.
The probability that the human race will survive for another thousand years is about _____ in 100.
The chance that if you put your mind and effort into solving a problem, you would succeed is about _______
in 100.
The chance that if you open a business it would succeed is about _______ in 100.
CALIBRATION AND THE JUDGMENT OF ACCURACY
Current interest in confidence ratings was motivated by a desire to compare confidence and accuracy of per-
formance. This comparison can be expressed either graphically, in the form of calibration curves, or using several
different scoring formulas ( Stankov & Crawford, 1996a,b ; also see Keren, 1991 ;Harvey, 1997 ; and Yates, 1990 , for
reviews), and provides a powerful window into metacognition and cognitive biases.
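As a rough illustration of the graphical approach, the sketch below bins items by their confidence ratings and computes the proportion correct within each bin, giving the points of a calibration curve; the bin boundaries and example data are illustrative assumptions, not a prescribed procedure from the cited studies.

```python
import numpy as np

def calibration_points(confidence, correct, bin_edges=(20, 40, 60, 80, 100)):
    """Return (mean confidence %, percent correct, n) per confidence bin --
    the points that would be plotted as a calibration curve."""
    confidence = np.asarray(confidence, dtype=float)
    correct = np.asarray(correct, dtype=float)
    points, lower = [], 0.0
    for upper in bin_edges:
        in_bin = (confidence > lower) & (confidence <= upper)
        if in_bin.any():
            points.append((confidence[in_bin].mean(),
                           100.0 * correct[in_bin].mean(),
                           int(in_bin.sum())))
        lower = upper
    return points

# Hypothetical data: perfect calibration would put every point on the identity line
conf = [30, 50, 50, 70, 90, 90, 100, 100]
acc = [0, 1, 0, 1, 1, 0, 1, 1]
for mean_conf, pct_correct, n in calibration_points(conf, acc):
    print(f"confidence {mean_conf:5.1f}%  accuracy {pct_correct:5.1f}%  (n={n})")
```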
A comprehensive overview of different calibration indices is available in Schraw’s (2009) review. Since many
of these (e.g., measures of resolution, calibration, etc.) have split-half reliability coefficients lower than .50 (see
Stankov & Crawford, 1996a ), we focus only on two derived scores: (a) The over/underconfidence bias score (or
simply Bias); and (b) The Discrimination score.
Bias
The Bias score has been used extensively in calibration research. It is calculated as the difference between the
average of the confidence ratings over all attempted items and the percentage of items that were answered cor-
rectly. Thus,
Bias = Average confidence over all items − Percentage of correctly solved items
The resulting score indicates an individual’s tendency to judge the accuracy of his/her performance, on aver-
age. Over-confidence is reflected via a positive bias score, and under-confidence by a negative bias score.
Confidence judgments are considered to be more realistic when Bias approaches zero. As a rule of thumb, if Bias
lies within a ±10 limit, it is assumed to have little psychological significance and to reflect reasonably good calibration (Stankov, 1999).
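The following minimal Python sketch computes the Bias score from item-level confidence ratings (in %) and 0/1 accuracy scores; the variable names and example data are hypothetical and are included only to show how the score is computed and interpreted.

```python
def bias_score(confidence, correct):
    """Bias = mean confidence (%) over attempted items
              minus the percentage of items answered correctly."""
    mean_confidence = sum(confidence) / len(confidence)
    percent_correct = 100.0 * sum(correct) / len(correct)
    return mean_confidence - percent_correct

# Hypothetical data: mean confidence 73%, 60% correct -> Bias = +13 (over-confidence)
conf = [60, 80, 100, 40, 70, 90, 50, 80, 60, 100]
acc = [1, 1, 1, 0, 0, 1, 0, 1, 0, 1]
print(bias_score(conf, acc))   # 13.0
```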
Discrimination Score
The Discrimination score is traditionally calculated as the difference between the average of the confidence rat-
ings assigned to correctly solved items and the average of the confidence ratings assigned to incorrect items.
Thus,
Discrimination = Confidence for correct items − Confidence for incorrect items
The obtained score indicates to what degree an individual has discriminated between correct and incorrect
answers. Positive scores indicate that an individual has discriminated appropriately (i.e., greater confidence for
correct rather than incorrect items), with an increasing magnitude indicative of greater discrimination. Negative
scores, although possible, are seldom seen, as they indicate that greater confidence has been assigned to incorrect
rather than correct items.
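The corresponding sketch for the Discrimination score, again with hypothetical data, is shown below.

```python
def discrimination_score(confidence, correct):
    """Discrimination = mean confidence on correct items
                        minus mean confidence on incorrect items."""
    right = [c for c, ok in zip(confidence, correct) if ok]
    wrong = [c for c, ok in zip(confidence, correct) if not ok]
    return sum(right) / len(right) - sum(wrong) / len(wrong)

# Same hypothetical data as above: 85% on correct vs. 55% on incorrect items
conf = [60, 80, 100, 40, 70, 90, 50, 80, 60, 100]
acc = [1, 1, 1, 0, 0, 1, 0, 1, 0, 1]
print(discrimination_score(conf, acc))   # 30.0
```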
A ubiquitous finding, known as the Hard/Easy effect, is the presence of over-confidence for difficult tasks and either under-confidence or good calibration for easy tasks. In analyses based on Item Response Theory (IRT), the Hard/Easy effect has shown an interaction between ability level and the size of Bias scores (see Paek, Lee, Stankov, & Wilson, 2013; Stankov, Lee, & Paek, 2009; Stankov, Lee et al., 2012; Stankov & Lee, 2014).
PSYCHOMETRIC EVIDENCE FOR DERIVED SCORES
Bias Scores
Reliability
Internal Consistency
No internal reliability evidence is currently available.
Test–Retest
Jonsson and Allwood (2003) reported test–retest coefficients for bias scores collected over three time intervals, each two weeks apart, which correlated .53 (T1&T2), .59 (T2&T3) and .53 (T1&T3) for 79 high school students. Similarly, Buratti and Allwood (2013) reported significant (p < .001) overall test–retest Goodman-Kruskal gamma correlations of bias scores collected over three time intervals, each one week apart, of .38 (T1&T2), .38 (T2&T3) and .35 (T1&T3) for 30 adults and 61 children (8 to 11 years old).
Parallel Forms and Odd/Even
Stankov and Crawford (1996a,b) reported Parallel Forms and Odd/Even reliabilities for the Bias scores (labeled ‘overconfidence scores’ in their paper) for five cognitive tests with undergraduate students (N = 114). The lowest reliability (corrected by the Spearman-Brown formula) was .70.
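For readers unfamiliar with the correction, the following one-line sketch shows the standard split-half Spearman-Brown step-up formula; the value of .54 is an illustrative assumption (chosen simply because it steps up to roughly .70), not a figure from the cited study.

```python
def spearman_brown(split_half_r):
    """Step a split-half correlation up to an estimate of full-test reliability."""
    return 2 * split_half_r / (1 + split_half_r)

# e.g., a split-half correlation of about .54 steps up to roughly .70
print(round(spearman_brown(0.54), 2))   # 0.7
```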
Validity
Convergent/Concurrent
No convergent validity evidence is currently available.
Divergent/Discriminant
No divergent validity evidence is currently available.
Construct/Factor Analytic
The available findings clearly point out that, as with confidence ratings, bias scores from different cognitive tasks obtained with undergraduate samples (N > 150) converged to define a broad Bias dimension using a principal components analysis with promax rotation (Jackson & Kleitman, 2014; Kleitman, 2008). Irrespective of the nature of the tasks (e.g., Gf, Gc, MDMT) and their difficulty levels, people who were more under- or over-confident on one type of task tended to be more under- or over-confident on any other type of task relative to others. This finding strongly supports the importance of taking into account habitual individual differences in the realism of confidence judgments.
Results and Comments
Although the reliability of Bias scores is adequate and tends to be higher than any other score derived from the calibration paradigm, we do not recommend the use of these scores in correlational studies. Bias scores, however, are a very convenient way to depict group differences – e.g., overall, females tend to be better calibrated (have bias scores close to zero) than males.
Discrimination Scores
Reliability
Internal Consistency
No internal reliability evidence is currently available.
Test–Retest
Test–retest reliability estimates for discrimination have been poor. Buratti and Allwood (2013) reported non-significant (p > .05) overall test–retest Goodman-Kruskal gamma correlations of slope scores collected over three time intervals, each one week apart, of .00 (T1&T2), .11 (T2&T3) and .02 (T1&T3) for 30 adults and 61 children (aged 8 to 11 years).
Parallel Forms and Odd/Even
Stankov and Crawford (1996a,b) also reported Parallel Forms and Odd/Even reliabilities for the Discrimination scores (labeled ‘Slope’ scores, following the work of Ronis and Yates, 1987) for five cognitive tests. They found that the Odd/Even reliability coefficient for Discrimination scores on the Raven’s Progressive Matrices test was .65. The other four tests exhibited coefficients lower than .55.
Validity
Convergent/Concurrent
No convergent validity evidence is currently available.
Divergent/Discriminant
No divergent validity evidence is currently available.
Construct/Factor Analytic
Few studies have investigated the factorial structure of discrimination scores. The available evidence, obtained with undergraduate student populations using principal components analyses, does not adequately indicate whether discrimination scores generalize across cognitive domains. For example, two studies have demonstrated that discrimination scores converge only when the cognitive requirements of the tasks they are derived from are similar (N = 134 to 192; Schraw et al., 1995; Schraw & Nietfeld, 1998). In contrast, Jackson and Kleitman (2014) found that discrimination scores derived from various cognitive domains (Gf, Gc, MDMT) all defined a single latent dimension (N = 193).
Results and Comments
Given that Discrimination scores have had lower reliability coefficients than Bias scores, we have focused on the latter in calibration studies carried out over the past 15 years. However, discrimination scores remain popular in experimental studies that focus on overall group differences. The work of Jackson and Kleitman (2014) indicates that these scores may still hold promise in correlational studies of individual differences.
FUTURE RESEARCH DIRECTIONS
In this chapter we have elaborated on two types of measures of confidence – six questionnaire-based self-report assessments that tap confidence in the academic and sport domains, and online on-task assessments that have been used in studies of test-taking and decision making. These two areas have different origins and there is no information at present about their mutual relationship.
Overall, most questionnaire measures have satisfactory psychometric properties, although the evidence for their validity is sketchy. They can be used profitably in the specific areas for which they were intended. However, much additional work needs to be carried out to establish their usefulness within a broader context. Of particular importance will be: (a) examination of the relationships among the six questionnaire measures themselves; (b) study of the relationship between questionnaire measures of confidence and other non-cognitive psychological constructs such as personality, social attitudes and self-beliefs; (c) examination of their predictive validity within a broader context; and (d) exploration of the relationship between questionnaire measures and online assessment of confidence.
There is overwhelming psychometric evidence that online confidence judgments that follow a cognitive act or
decision-making, such as an answer to a test item, are good measures of the confidence trait. In this chapter we
have summarized findings from the studies of individual differences in online measures of confidence that were
carried out over the past 15 years. In this line of research, confidence ratings are treated as an assessment in their
own right and it is clear that the psychometric properties of confidence scores are excellent. For example, their
reliability surpasses reliability of accuracy scores themselves. Also, in studies that used accuracy scores obtained
from typical achievement and intelligence tests as criteria, their predictive validity has been second to no other
non-cognitive measure. It approached the predictive validity of ability measures themselves. Their predictive
validity for school grades and decision acts in general has also been established. Importantly, confidence scores
in several of our studies define a confidence factor, and it appears that there is a general confidence factor similar to Spearman’s ‘g’. This trait reflects the habitual way in which people assess the accuracy of their cognitive performance. Some of our recent work indicates that confidence is related to self-beliefs measures – anxiety, self-concept and self-efficacy. Self-efficacy in particular is both conceptually and empirically the closest to confidence.
Contemporary interest in online confidence measures was sparked by a desire to compare confidence and accuracy. This led to a proliferation of calibration studies. Several indices have been proposed to assess the correspondence between accuracy and confidence, but the most commonly used have been Bias and, less frequently, Discrimination scores. Being derived measures, both tend to have lower reliability estimates than ability and achievement tests themselves. Their use in studies of individual differences is therefore limited. While they may be useful in studies that focus on group differences, future research is needed to clarify the reliability and generality of discrimination scores.
Future work with online confidence measures should focus on further studies of their predictive validity, bio-
logical markers, developmental changes and the examination of the effects of intervention. It should also address
the relationship between confidence and new measures that are suggested by those working in the areas of judg-
ment and decision-making. These new measures are related to real-life behaviors faced not only by professionals
such as business managers and entrepreneurs, medical doctors and lawyers, but also touch upon many decisions
we all make in everyday life.
References
Allwood, C. M., & Granhag, P. A. (2000). Realism in confidence judgments of performance based on implicit learning. European Journal of
Cognitive Psychology ,12(2), 165 /C0188.
Allwood, C. M., Granhag, P. A., & Jonsson, A. -C. (2006). Child witnesses’ metamemory realism. Scandinavian Journal of Psychology ,47(6),
461/C0470.
Allwood, C. M., Innes-Ker, A ˚., Homgren, J., & Fredin, G. (2008). Children’s and adults’ realism in their event-recall confidence in responses to
free recall and focused questions. Psychology, Crime & Law ,14(6), 529 /C0547.
Asquith, C. (2008). Dyslexia: Academic confidence, self-esteem and support in higher education (Unpublished doctoral dissertation) . United
Kingdom: University of Wales.
Bandura, A. (1997). Self-efficacy . New York, NY: Freeman.
Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review ,84(2), 191 /C0215.
Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory . Englewood Cliffs, NJ: Prentice-Hall.
Barrett, A. (2005). Dyslexia and confidence in university undergraduates (Unpublished doctoral dissertation) . United Kingdom: University of
Wales.
Beattie, S., Hardy, L., Savage, J., Woodman, T., & Callow, N. (2011). Development and validation of a trait measure of robustness of self-
confidence. Psychology of Sport and Exercise ,12(2), 184 /C0191.
Berbén, A. (2008). Proceso de enseñanza y aprendizaje en educación superior (Unpublished doctoral dissertation). Universidad de Granada, Spain.
Betz, N. E., & Borgen, F. H. (2006). Manual for the Career Confidence Inventory . Ames, IA: CAPA.
Betz, N. E., & Borgen, F. H. (2009). Comparative effectiveness of CAPA and FOCUS online: Career assessment systems with undecided college
students. Journal of Career Assessment ,17(4), 351 /C0366 . Available from http://dx.doi.org/doi:10.1177/ 1069072709334229.
Betz, N. E., Borgen, F. H., Rottinghaus, P., Paulsen, A., Halper, C. R., & Harmon, L. W. (2003). The Expanded Skills Confidence Inventory:
Measuring basic dimensions of vocational activity. Journal of Vocational Behavior ,62(1), 76/C0100.
Betz, N. E., Harmon, L. W., & Borgen, F. H. (1996). The relationships of self-efficacy for the Holland Themes to gender, occupational group
membership, and vocational interests. Journal of Counselling Psychology ,43(1), 90/C098.
Betz, N. E., & Borgen, F. H. (2010). The CAPA integrative online system for college major exploration. Journal of Career Assessment ,18(4),
317/C0327.
Biggs, J., Kember, D., & Leung, D. (2001). The revised two-factor Study Process Questionnaire: R-SPQ-2F. British Journal of Educational
Psychology ,71(1), 133 /C0149.
Borgen, F., & Betz, N. (2008). Career self-efficacy and personality: Linking the Career Confidence Inventory and the Healthy Personality
Inventory. Journal of Career Assessment ,16(1), 22/C043.
Bruine de Bruin, W., Parker, A. M., & Fischhoff, B. (2007). Individual differences in adult decision-making competence. Journal of Personality
and Social Psychology ,92(5), 938 /C0956.
Buratti, S., Allwood, C. M., & Johanson, M. (2014). Stability in metamemory realism of eyewitness confidence judgments. Cognitive Processing ,
15(1), 39/C053.
Buratti, S., Allwood, C. M., & Kleitman, S. (2013). On the relation between personality variables and first- and second-order
judgments of the correctness of recalled semantic memory information. Metacognition and Learning ,8(1), 79/C0102 doi 10.1007/s11409-013-
9096-5.
Buratti, S., MacLeod, S., & Allwood, C. M. (2013). The effects of question format and co-witness peer discussion on the confidence accuracy of
children’s testimonies. Social Influence doi 10.1080/15534510.2013.804434.186 7. MEASURES OF THE TRAIT OF CONFIDENCE
Canter, D. E. (2008). Self-Appraisals, Perfectionism, and Academics in College Undergraduates . Richmond, VA: Doctoral dissertation, Virginia
Commonwealth University.
Carroll, J. B. (1993). Human cognitive abilities: a survey of factor-analytic studies . New York, NY, US: Cambridge University Press.
Cheng, H., & Furnham, A. (2002). Personality, peer relations, and self-confidence as predictors of happiness and loneliness. Journal of
Adolescence ,25(3), 327 /C0339.
Clance, P. R., & Imes, S. A. (1978). The imposter phenomenon in high achieving women: Dynamics and therapeutic intervention.
Psychotherapy: Theory, Research and Practice ,15(3), 241 /C0247.
Cramer, R. J., Neal, T. M., DeCoster, J., & Brodsky, S. L. (2010). Witness self-efficacy: development and validation of the construct. Behavioral
Sciences and the Law ,28(6), 784 /C0800.
Crawford, J., & Stankov, L. (1996). Age differences in the realism of confidence judgements: A calibration study using tests of fluid and crystal-
lized intelligence. Learning and Individual Differences ,8(2), 82/C0103.
Dahl, M., Allwood, C. M., Rennemark, M., & Hagberg, B. (2010). The relation between personality and the realism in confidence judgements
in older adults. European Journal of Ageing ,7(4), 283 /C0291.
DeMarree, K. G., Petty, R. E., & Brinol, P. (2007). Self-certainty: Parallels to attitude certainty. International Journal of Psychology & Psychological
Therapy ,7(2), 159 /C0188.
Dunlosky, J., & Metcalfe, J. (2009). Metacognition . Thousand Oaks, CA: Sage.
Efklides, A. (2008). Metacognition. European Psychologist ,13(4), 277 /C0287.
Efklides, A. (2006). Metacognition and affect: What can metacognitive experiences tell us about the learning process? Educational Research
Review ,1(1), 3/C014.
Gayton, W. F., & Nickless, C. J. (1987). An investigation of the validity of the trait and state sport-confidence inventories in predicting mara-
thon performance. Perceptual and Motor Skills ,65(2), 481 /C0482.
Gigerenzer, G., Hoffrage, U., & Kleinbo ¨lting, H. (1991). Probabilistic mental models: a Brunswikian theory of confidence. Psychological Review ,
98(4), 506.
Harvey, N. (1997). Confidence in judgment. Trends in Cognitive Sciences ,1(2), 78/C082.
Holland, J. L. (1997). Making vocational choices: A theory of vocational personalities and work environments (3rd ed). Odessa, FL: Psychological
Assessment Resources.
Howie, P., & Roebers, C. M. (2007). Developmental progression in the confidence-accuracy relationship in event recall: insights provided by a
calibration perspective. Applied Cognitive Psychology ,21(7), 871 /C0893.
Jackson, S. A., & Kleitman, S. (2014). Individual differences in decision-making and confidence: capturing decision tendencies in a fictitious
medical test. Metacognition and Learning ,9(1), 25/C049.
Johnson, D. M. (1939). Confidence and speed in the two-category judgement. Archives of Psychology ,241,1/C052.
Jonsson, A. C., & Allwood, C. M. (2003). Stability and variability in the realism of confidence judgments over time, content domain, and gen-
der. Personality and Individual Differences ,34(4), 559 /C0574.
Juslin, P. (1994). The overconfidence phenomenon as a consequence of informal experimenter-guided selection of almanac items.
Organizational Behavior and Human Decision Processes ,57(2), 226 /C0246.
Kahneman, D., & Tversky, A. (1996). On the reality of cognitive illusions. Psychological Review ,103(3), 582 /C0591.
Keren, G. (1991). Calibration and probability judgments: Conceptual and methodological issues. Acta Psychologica ,77(3), 217 /C0273.
Kleitman, S. (2008). Metacognition in the rationality debate: Self-confidence and its calibration. Saarbrücken, Germany: VDM Verlag Dr. Müller.
Kleitman, S., & Costa, D. (2014). The role of a novel formative assessment tool (stats-miq) and individual differences in real-life academic per-
formance. Learning and Individual Differences , 29, 150 /C0161.
Kleitman, S., & Gibson, J. (2011). Metacognitive beliefs, self-confidence and primary learning environment of sixth grade students. Learning
and Individual Differences ,21(6), 728 /C0735.
Kleitman, S., & Moscrop, T. (2010). Self-confidence and academic achievements in primary-school children: Their relationships and links to
parental bonds, intelligence, age, and gender. In A. Efklides, & P. Misailidi (Eds.), Trends and prospects in metacognition research
(pp. 293 /C0326). New York, US: Springer.
Kleitman, S., & Stankov, L. (2001). Ecological and person-oriented aspects of metacognitive processes in test-taking. Applied Cognitive
Psychology ,15(3), 321 /C0341.
Kleitman, S., & Stankov, L. (2007). Self-confidence and metacognitive processes. Learning and Individual Differences ,17(2), 161 /C0173.
Kleitman, S., Stankov, L., Allwood, C. M., Young, S., & Mak, K. (2013). Metacognitive self-confidence in school-aged children. In M. M. Mok
(Ed.), Self-directed Learning Oriented Assessments in the Asia-Pacific (pp. 139 /C0153). Springer.
Koriat, A. (2000). Control processes in remembering. In E. Tulving, & F. Craik (Eds.), The Oxford handbook of memory (pp. 333 /C0346). New York:
Oxford University Press.
Koriat, A. (2012). The Self-Consistency Model of Subjective Confidence. Psychological Review ,119(1), 80/C0113.
Koriat, A., & Goldsmith, M. (1996). Monitoring and control processes in the strategic regulation of memory accuracy. Psychological Review ,103
(3), 490 /C0517.
Lee, J. (2009). Universals and specifics of math self-concept, math self-efficacy, and math anxiety across 41 PISA 2003 participating countries.
Learning and Individual Differences ,19(3), 355 /C0365.
Lichtenstein, S., & Fischhoff, B. (1977). Do those who know more also know more about how much they know? Organizational Behavior and
Human Performance ,20(2), 159 /C0183.
Moore, D., & Healy, P. J. (2008). The trouble with overconfidence. Psychological Review ,115(2), 502 /C0517.
Morony, S., Kleitman, S., Lee, Y. P., & Stankov, L. (2013). Predicting achievement: Confidence vs. self-efficacy, anxiety, and self-concept in Confucian and European countries. International Journal of Educational Research, 58, 79–96. Available from http://dx.doi.org/10.1016/j.ijer.2012.11.002.
OECD (2005). PISA 2003 Technical Report . PISA: OECD Publishing.
Paek, I., Lee, J., Stankov, L., & Wilson, M. (2013). Rasch modeling of accuracy and confidence measures from cognitive tests. Journal of Applied
Measurement ,14(3), 232 /C0248.
Pallier, G., Wilkinson, R., Danthir, V., Kleitman, S., Knezevic, G., & Stankov, L. (2002). The role of individual differences in the accuracy of
confidence judgments. Journal of General Psychology ,129(3), 257 /C0299.
Parker, A. M., De Bruin, W. B., Yoong, J., & Willis, R. (2012). Inappropriate confidence and retirement planning: Four studies with a national
Sample. Journal of Behavioral Decision Making ,25(4), 382 /C0389.
Perfect, T. J., & Schwartz, B. L. (2002). Applied Metacognition . Cambridge, UK: Cambridge University Press.
Pulford, B. D., & Sohal, H. (2006). The influence of personality on HE students’ confidence in their academic ability. Personality and Individual
Differences ,41(8), 1409 /C01419.
Robinson, C. H., & Betz, N. E. (2004). Test /C0Retest reliability and concurrent validity of the Expanded Skills Confidence Inventory. Journal of
Career Assessment ,12(4), 407 /C0422.
Ronis, D. L., & Yates, J. F. (1987). Components of probability judgment accuracy: Individual consistency and effects of subject matter and
assessment method. Organizational Behavior and Human Decision Processes ,40(2), 193 /C0218.
Roebers, C. M., Krebs, S. S., & Roderer, T. (2014). Metacognitive monitoring and control in elementary school children: Their interrelations and
their role for test performance. Learning and Individual Differences ,29, 141/C0149.
Sander, P. (2009). Current developments in measuring academic behavioral confidence. Psychology Teaching Review ,15(1), 32/C044.
Sander, P., & Sanders, L. (2009). Measuring academic behavioral confidence: the ABC revisited. Studies in Higher Education ,34(1), 19/C035.
Sander, P., & Sanders, L. (2003). Measuring confidence in academic study: A summary report. Electronic Journal of Research in Educational
Psychology and Psychopedagogy ,1(1), 1/C017.
Sanders, L., & Sander, P. (2007). Academic behavioral confidence: A comparison of medical and psychology students. Electronic Journal of
Research in Educational Psychology ,5(3), 633 /C0650.
Sanders, P., & Sanders, L. (2006). Understanding academic confidence. Psychology Teaching Review ,12(1), 29/C042.
Sanders, L., Sander, P., & Mercer, J. (2009). Rogue males? Perceptions and performance of male psychology students. Psychology Teaching
Review ,15(1), 3/C017.
Schaefer, P. S., Williams, C. C., Goodie, A. S., & Campbell, W. K. (2004). Overconfidence and the Big Five. Journal of Research in Personality ,38
(5), 473 /C0480.
Schraw, G. (2006). Knowledge: Structures and processes . Mahwah, NJ: Erlbaum.
Schraw, G. (2009). Measuring metacognitive judgments. In Handbook of metacognition in education (pp. 415–429). New York, NY: Routledge/Taylor & Francis Group.
Schraw, G., & Moshman, D. (1995). Metacognitive theories. Educational Psychological Review ,7(4), 351 /C0371.
Schraw, G., & Nietfeld, J. (1998). A further test o f the general monitoring skill hypothesis. Journal of Educational Psychology ,90(2),
236/C0248.
Schraw, G., Dunkle, M. E., Bendixen, L. D., & Roedel, T. D. (1995). Does a general monitoring skill exist? Journal of Educational Psychology ,87
(3), 433 /C0444.
Schunk, D. H., & Pajares, F. (2002). The Development of Academic Self-Efficacy /C0Chapter 1 .Development of achievement motivation (pp. 15 /C031).
Elsevier.
Shrauger, J. S., & Schohn, M. (1995). Self-confidence in college students: Conceptualization, measurement, and behavioral implications.
Assessment ,2(3), 255 /C0278.
Slovic, P., Fischhoff, B., & Lichtenstein, S. (1977). Behavioral decision theory. Annual Review of Psychology ,28(1), 1/C039.
Soll, J. B. (1996). Determinants of overconfidence and miscalibration: The roles of random error and ecological structure. Organizational
Behavior & Human Decision Processes ,65(2), 117 /C0137.
Stankov, L. (2000). Complexity, metacognition, and fluid intelligence. Intelligence ,28(2), 121 /C0143.
Stankov, L. (1999). Mining on the ‘No Man’s Land’ between intelligence and personality. In P. L. Ackerman, P. C. Kyllonen, & R. D. Roberts
(Eds.), Learning and Individual Differences: Process, Trait, and Content Determinants (pp. 315 /C0338). Washington, DC: American Psychological
Association.
Stankov, L. (2000). Structural extension of a hierarchical view on human cognitive abilities. Learning and Individual Differences ,12(1), 35/C051.
Stankov, L. (2013). Noncognitive predictors of intelligence and academic achievement: An important role of confidence. Personality and
Individual Differences ,55, 727/C0732.
Stankov, L., & Crawford, J. (1996a). Confidence judgments in studies of individual differences. Personality and Individual Differences ,21(6),
971/C0986.
Stankov, L., & Crawford, J. (1996b). Confidence judgments in studies of individual differences support the ‘confidence/frequency effect ’.At once scientific
and philosophic: A festschrift in honour of JP Sutcliffe 215/C0239.
Stankov, L., & Crawford, J. (1997). Self-confidence and performance on tests of cognitive abilities. Intelligence ,25(2), 93/C0109.
Stankov, L., & Dolph, B. (2000). Metacognitive aspects of test-taking and intelligence. Psychologische Beitrage ,42, 213/C0227.
Stankov, L., & Kleitman, S. (2008). Processes on the borderline between cognitive abilities and personality: Confidence and its realism. In G. J.
Boyle, G. Matthews, & D. H. Saklofske (Eds.), The Handbook of Personality Theory and Testing (pp. 541 /C0555). Thousand Oaks, CA: Sage.
Stankov, L., & Lee, J. (2008). Confidence and cognitive test performance. Journal of Educational Psychology ,100(4), 961 /C0976.
Stankov, L., & Lee, J. (2014). Overconfidence across world regions. Journal of Cross-cultural Psychology ,45, 821/C0837.
Stankov, L., Lee, J., & Paek, I. (2009). Realism of confidence judgements. European Journal of Psychological Assessment ,25(2), 123 /C0130.
Stankov, L., Lee, J., Luo, W., & Hogan, D. J. (2012). Confidence: A better predictor of academic achievement than self-efficacy, self-concept and
anxiety?. Learning and Individual Differences ,22(4), 747 /C0758.
Stankov, L., Morony, S., & Lee, Y. P. (2013). Confidence: The best non-cognitive predictor of academic achievement?. Educational Psychology. ,
34(1), 1/C08.188 7. MEASURES OF THE TRAIT OF CONFIDENCE
Stankov, L., Pallier, G., Danthiir, V., & Morony, S. (2012). Perceptual underconfidence: A conceptual illusion? European Journal of Psychological
Assessment ,28(3), 190 /C0200.
Vealey, R. S. (1986). Conceptualization of sport-confidence and competitive orientation: Preliminary investigation and instrument develop-
ment. Journal of Sport Psychology ,8(3), 221 /C0246.
Want, J., & Kleitman, S. (2006). Imposter phenomenon and self-handicapping: Links with parenting styles and self-confidence. Personality and
Individual Differences ,40(5), 961 /C0971.
Yates, J. F. (1990). Judgment and decision making. Englewood Cliffs, NJ: Prentice-Hall.
CHAPTER
8
Measures of Affect Dimensions
Gregory J. Boyle1, Edward Helmes2, Gerald Matthews3 and Carroll E. Izard4
1University of Melbourne, Parkville, Victoria, Australia; 2James Cook University, Townsville, Queensland, Australia;
3University of Central Florida, Orlando, FL, USA; 4University of Delaware, Newark, DE, USA
One can capture different forms of affect depending on the instructions provided about timeframe. For example, ‘How you feel right now’ would measure momentary or fleeting emotional states, ‘How you have been feeling for the past week or past few weeks’ would measure longer-lasting mood states, and ‘How you feel in general’ would measure a disposition/trait construct. While transient emotional states are relatively brief episodes with clear onset and offset, mood states persist over a somewhat longer timeframe and tend to fluctuate within a narrower margin of intensity (Ekman, 1994). Mood states fall in between transitory emotional states and more enduring dispositions/traits (Fisher, 1998). In the English lexicon, anger, for example, is regarded as an emotional state, irritability/irascibility its longer-lasting mood equivalent, and hostility its enduring trait equivalent (Fernandez & Kerns, 2008). But these words are mere approximations of meaningful phenomena.
Moods appear relatively stable because they are relatively longer in duration and lower in intensity than their
emotion equivalents. How should the continuum ranging from phasic to tonic affectivity be described and what
terms should be used to describe affective phenomena? Does it make sense to distinguish these phenomena in
terms of the words used in the English lexicon? Perusal of the literature shows there is much confusion over the
terms ‘emotional states’, ‘mood states’ and ‘dispositional states’, such that these terms are often used interchange-
ably, suggesting greater clarity of definition is urgently needed. As just one example of this circularity,
Cox (2002, p. 178) asserted that a mood state refers to ‘a situation specific, somewhat transient, psychological response to an environmental stimulus’. Likewise, Stirling and Kerr (2006, p. 15) defined a mood state as ‘an emotional state in response to an environmental stimulus’. However, this definition does not acknowledge that moods are ‘tonic’ and emotions are ‘phasic’ (to use psychophysiological terminology).
Actually, any single, brief measurement of a transient emotional state also provides a static cross-sectional ‘snapshot’ of a longer-lasting mood or even a trait dimension. Asking respondents to rate ‘How you feel right now’ may tap into a momentary emotional episode or it may actually be a ‘snapshot’ of what they have been experiencing for some time. It appears from the literature on affect measurement that the use of differing affect terms is rather arbitrary when it comes to distinguishing between fleeting, transient, phasic emotional states, versus longer-lasting, tonic moods, versus motivational dynamic traits, versus relatively stable personality traits, versus highly stable, enduring personality traits. In most measures of emotions and mood states, only a few timeframes are specifically targeted (i.e., state vs. trait, e.g., MCI, STPI; or emotional state vs. mood state vs. trait, e.g., DES-IV). There is a distinction between immediate transitory/fleeting states (emotions) and lingering states (moods) (cf. Aganoff & Boyle, 1994). One measure which specifically provides three separate sets of instructions designed to tap into each of these forms of affect is the Differential Emotions Scale (e.g., Izard, 1991; Izard, Libero, Putnam, & Haynes, 1993).
However, it is an oversimplification to regard affective variables as categorical (e.g., the state–trait distinction), when in fact there is a continuum of affectivity ranging all the way from fleeting emotional states to relatively stable enduring traits. Indeed, the PANAS-X (Watson & Clark, 1999) provides instructions related to several different timeframes (e.g., ‘[at this] Moment’, ‘Today’, ‘Past Few Days’, ‘Past Week’, ‘Past Few Weeks’, ‘Past Month’, ‘Past Year’, etc.).
It is transitory emotional states, rather than longer-lasting moods, that are more likely to be related to particular events/stimuli (Fisher, 1998). The claim that mood states are responses to situational stimuli (Cox, 2002; Stirling & Kerr, 2006) overlooks the clinical observation that many individuals fail to attribute their moods to external events or situations (Fernandez & Kerns, 2008). As Izard (2001) pointed out, ‘trait’ emotions (mood states) incorporate dispositional aspects of emotions. According to Izard (p. 254):
‘Individuals dispositionally prone to experience shame and anger tend to experience these emotions at a higher level of intensity than individuals with a different disposition, and the different levels of emotional intensity have consequences for behavior (Tangney, Wagner, Barlow, Marschall, & Gramzow, 1996). Characteristically happy people tend to engage in more social interactions (Diener & Larsen, 1993) ... Frequency of experiencing particular emotions relates significantly to particular traits of personality (e.g., interest and joy positively relate to extraversion–sociability, anger and contempt negatively to agreeableness). A broad pattern of negative emotions virtually defines the trait of neuroticism.’
Most trait dimensions exhibit only relative stability over the lifespan (Cattell, Boyle, & Chant, 2002; Fraley & Roberts, 2005; Roberts, Walton, & Viechtbauer, 2006a,b; Specht, Egloff, & Schmukle, 2011; Watson & Walker, 1996). Moods appear more stable than emotional states because they are longer lasting in duration and lower in intensity than their emotion equivalents. So is the state–trait distinction too crude? The answer depends on the level of specificity desired (e.g., to use a factor analytic analogy, the preference for higher-order versus primary factors). The current ‘Big Five’ personality literature, for instance, focuses on a small number of broad second-order dimensions (cf. Boyle, 2008; Boyle et al., 1995), each of which can be broken down into more specific dimensions (e.g., Costa & McCrae’s facet scales or Cattell’s 16PF primary factors – Cattell & Kline, 1977). However, affects may range all the way from transient states to enduring dispositions (lasting just a few seconds, a few minutes, a few hours, a few days, a few weeks, a few months, a few years, or many years). For example, Spielberger’s State–Trait Anxiety Inventory (STAI), State–Trait Anxiety Inventory for Children (STAIC), State–Trait Curiosity Inventory (STCI), State–Trait Anger Scale (STAS), State–Trait Anger Expression Inventory (STAXI), State–Trait Depression Scale (STDS) and State–Trait Personality Inventory (STPI) attempt to measure state (emotional) versus trait (dispositional) aspects, but overlook mood states of intermediate duration that are more stable than transitory emotional states but less stable than enduring personality traits. Although a simple dichotomous state–trait distinction and the subsequent extension to a tripartite distinction (e.g., Izard, 1991) may have been reasonable developments at the time, there appear to be discernible affect phenomena ranging all the way from transitory/fleeting emotional episodes/states, through longer-lasting mood states and dynamic motivational traits, to relatively stable enduring personality dispositions/traits. In this chapter, we review 10 of the most important measures of affect dimensions, as follows:
MEASURES REVIEWED HERE
1. Melbourne Curiosity Inventory (Naylor, 1981/2011)
2. State–Trait Personality Inventory (Spielberger, Ritterband, Sydeman, Reheiser, & Unger, 1995)
3. Positive and Negative Affect Schedule – Expanded Form (Watson & Clark, 1999)
4. Differential Emotions Scale (Izard, 1991; Izard et al., 1993)
5. Profile of Mood States (Heuchert & McNair, 2012)
6. Multiple Affect Adjective Check List – Revised (Zuckerman & Lubin, 1985; Lubin & Zuckerman, 1999)
7. Multidimensional Mood-State Inventory (Boyle, 2012)
8. Activation-Deactivation Adjective Check List (Thayer, 1989)
9. UWIST Mood Adjective Checklist (Matthews et al., 1990)
10. Dundee Stress State Questionnaire (Matthews, Hillyard, & Campbell, 1999, 2002)
OVERVIEW OF THE MEASURES
Historically, Cattell and Scheier (1963) first distinguished between state and trait constructs. State anxiety is viewed as an emotional state, while trait anxiety is an ongoing tendency to react more frequently and with greater elevations in state anxiety. Subsequently, Spielberger constructed the State–Trait Anxiety Inventory (STAI) (e.g., see Spielberger, Gorsuch, Lushene, Vagg, & Jacobs, 1983) comprising an A-Trait scale of 20 items (with instructions to respond as to ‘How you generally feel’), and a corresponding A-State scale of 20 items (with instructions to respond as to ‘How you feel right now, that is, at this very moment’). The trait scale measures frequency, whereas the state scale measures intensity. Spielberger also constructed the State–Trait Personality Inventory (STPI), comprising state and trait measures of anxiety, anger, depression, and curiosity (see Spielberger, Reheiser, Owen, & Sydeman, 2004; Spielberger & Reheiser, 2009).
Construction of the Melbourne Curiosity Inventory (MCI; Naylor, 1981/2011) was based on Spielberger’s STAI model (e.g., Gaudry, Vagg, & Spielberger, 1975; Spielberger et al., 1983). The MCI (derived from earlier versions of the Melbourne State–Trait Curiosity Inventory; cf. Boyle, 1977, 1989; Devlin, 1976) consists of two separate subscales, each of 20 self-report items, intended to measure curiosity either as a transitory, situationally-sensitive emotional state or as an enduring personality disposition. These scales have served as useful measures of state and trait curiosity in many research studies conducted over recent decades.
The Positive and Negative Affect Schedule – Expanded Form (PANAS-X; Watson & Clark, 1994) measures both positive (PA) and negative affect (NA), as well as 11 primary affect dimensions (labeled: Fear, Sadness, Guilt, Hostility, Shyness, Fatigue, Surprise, Joviality, Self-assurance, Attentiveness, and Serenity). This instrument provides eight separate measurement timeframes, ranging from momentary emotional states, through intermediate mood states and dynamic traits, to enduring personality traits, rather than providing a dichotomous timeframe only (as with state–trait measures). The Differential Emotions Scale (DES-IV; Izard, 1991) aims to measure 12 separate fundamental emotions (labeled: Interest, Joy, Surprise, Sadness, Anger, Disgust, Contempt, Self-hostility, Fear, Shame, Shyness, and Guilt) purported to be universally discernible in facial expressions of infants (Izard et al., 1993). Instructions provided with the DES-IV allow measurement of affect dimensions as relatively stable dispositional traits, as fluctuating mood states, or as transitory emotional states.
The Profile of Mood States (POMS 2; Heuchert & McNair, 2012) is an adjective checklist intended to measure affects either over the past week (mood states) or right now (emotional states). Compared with the PANAS-X timeframe instructions, the utility of the POMS 2 instrument is somewhat restricted. Likewise, the Multiple Affect Adjective Check List – Revised (MAACL-R; Zuckerman & Lubin, 1990) is an adjective checklist that measures Anxiety, Depression, Hostility, and Sensation Seeking as state or trait dimensions.
Although not reviewed here, the Eight State Questionnaire (8SQ; Curran & Cattell, 1976), based on P- and dR-technique factor analyses (cf. Boyle, 1987a, 1988, 1989), provides measures of eight clinically important states labeled: Anxiety, Stress, Depression, Regression, Fatigue, Guilt, Extraversion, and Arousal. Instructions are to respond as to ‘How you feel at this moment ... how you feel right here and now’ on a 4-point scale. Although used extensively (e.g., Boyle, 1984a, 1985a, 1986a,b, 1987a,c,f, 1988a,b, 1989b,c, 1991b; Boyle & Cattell, 1984; Boyle, Stanley, & Start, 1985; Boyle & Katz, 1991), the 8SQ (currently out of print) was instrumental in the subsequent development of the MMSI (see below). Cronbach alphas for the subscales ranged from .47 to .89 (Boyle, 1983b); Kline (1986) recommended that alpha coefficients be kept below .70 to minimize item redundancy and provide greater breadth of measurement of constructs/factors (cf. Boyle, 1991a). High dependability coefficients (Mean = .96) suggest the eight subscales are reliable measures (cf. Cattell, 1973, pp. 353–355). Boyle (1984) reported stability coefficients (after 3 weeks) ranging from .38 to .76, as would be expected for situationally-sensitive measures.
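For readers unfamiliar with how such internal-consistency figures are obtained, the following is a minimal sketch (our own illustration, not taken from any of the manuals cited here) of Cronbach’s alpha computed from an item-response matrix; the response data are simulated and the numpy dependency is assumed.

import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a respondents x items matrix of scale ratings."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]                               # number of items
    item_variances = item_scores.var(axis=0, ddof=1)       # variance of each item
    total_variance = item_scores.sum(axis=1).var(ddof=1)   # variance of scale totals
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Simulated ratings: 100 respondents answering 8 items on a 4-point scale.
# Purely random responses give an alpha near zero; coherent scales such as
# the 8SQ subscales yield values in the .47-.89 range reported above.
rng = np.random.default_rng(0)
simulated = rng.integers(1, 5, size=(100, 8))
print(round(cronbach_alpha(simulated), 2))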
The Multidimensional Mood-State Inventory (MMSI; Boyle, 2012), which was derived from several factor analyses of the 8SQ, DES-IV, and POMS intercorrelations (e.g., Boyle, 1983c, 1985a, 1986a, 1987a,e,f, 1988a,b, 1989b, 1991b), includes five separate 15-item self-report subscales labeled: Arousal–Alertness, Anger–Hostility, Neuroticism, Extraversion, and Curiosity. Used with instructions as to how one feels ‘right now, at this very moment’, the MMSI measures transitory emotional states. However, depending on the instructions provided, as with the PANAS-X, the MMSI can measure affect dimensions ranging all the way from fleeting emotional states, through longer-lasting mood states, to relatively stable personality dispositions.
The Activation-Deactivation Adjective Check List (AD-ACL; Thayer, 1989) is a self-report adjective checklist providing unipolar measures of four affect dimensions (labeled: Energy, Tiredness, Tension, and Calmness), as well as bipolar dimensions of Energetic Arousal (energy vs. tiredness) and Tense Arousal (tension vs. calmness). Derived from the AD-ACL scales, the UWIST Mood Adjective Checklist (UMACL; Matthews et al., 1990) measures three bipolar dimensions of Energetic Arousal, Tense Arousal and Hedonic Tone, as well as a unipolar dimension of Anger/Frustration. Finally, the Dundee Stress State Questionnaire (DSSQ; Matthews et al., 1999, 2002) includes the three basic UMACL mood scales (see above), two motivational scales labeled Intrinsic Interest and Success Striving, and six cognitive scales labeled: Self-Focus, Self-Esteem, Concentration, Confidence and Control, Task-related Cognitive Interference, and Task-irrelevant Cognitive Interference.
All of the scales/measures of affect dimensions reviewed in this chapter are either multidimensional or comprise at the very least two (state versus trait) dimensions, thereby allowing a more comprehensive assessment of transitory emotional states, longer-lasting mood states, motivational dynamic traits, and relatively stable dispositional trait constructs. In line with Leary (1991, p. 165), all of the scales reviewed here ‘have demonstrated reliability and validity as measures ... However, they are by no means interchangeable, and researchers should exercise care to select appropriate instruments for their particular research purposes’ (cf. Boyle, 1987d). The authors of the present chapter concur completely with Leary’s sage advice.
Melbourne Curiosity Inventory (MCI)
(Naylor, 1981/2011).
Variable
Curiosity is an important construct that motivates approach behaviors in a multitude of real-life settings (see Boyle, 1983a, for a review of the state–trait curiosity model). As Boyle (1983a, p. 383) stated:
‘By constructing global C-State and C-Trait scales, Naylor and Gaudry not only attempted to avoid the particularities of previous measures (limited to measuring, say, epistemic or perceptual aspects of specific curiosity) but also aimed to simplify curiosity conceptualization in accord with the research suggestions arising from the studies of earlier theorists, such as Berlyne, Day, Beswick, and Leherissey.’
Several early studies into Spielberger’s STAI model (e.g., Gaudry & Poole, 1975; Gaudry et al., 1975) had encouraged development of similar scales in the curiosity domain (Boyle, p. 383). As with Spielberger’s construction of the State–Trait Curiosity Inventory (e.g., Spielberger, Peters, & Frain, 1981) in Florida, Naylor and Gaudry’s parallel studies in Melbourne resulted in construction of the MCI, which measures curiosity as both state and trait dimensions.
Description
The MCI C-State and C-Trait scales comprise 20 items each, but instructions differ. For the C-State scale, instructions are: ‘Read each statement and then circle the appropriate number to the right of the statement to indicate how you feel right now, that is, at this moment.’ Thus, the C-State scale taps into transient elevations in curiosity (emotional states), measured on a 4-point forced-choice intensity scale as follows: 1 (Not at All); 2 (Somewhat); 3 (Moderately So); 4 (Very Much So). For the C-Trait scale, instructions are: ‘Read each statement and then circle the appropriate number to the right of the statement to indicate how you generally feel.’ The C-Trait scale is scored on a 4-point frequency scale as follows: 1 (Almost Never); 2 (Sometimes); 3 (Often); 4 (Almost Always). Individuals high on C-Trait experience more frequent and more intense elevations in curiosity states. The MCI has been translated into German (Saup, 1992).
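As a concrete illustration of the scoring this description implies, the sketch below (our own hypothetical code, not a published scoring program) simply sums the 20 ratings of a form; because Naylor excluded reverse-worded items from the MCI (see the factor analytic evidence below), no items need to be reflected before summing.

def score_mci_form(ratings):
    """Total score for one MCI form (C-State or C-Trait): 20 items rated 1-4."""
    if len(ratings) != 20:
        raise ValueError("Each MCI form has 20 items")
    if any(r not in (1, 2, 3, 4) for r in ratings):
        raise ValueError("MCI ratings must be 1, 2, 3 or 4")
    return sum(ratings)   # possible totals range from 20 to 80

# Example: a respondent endorsing 'Moderately So' (3) on every C-State item
print(score_mci_form([3] * 20))   # 60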
Sample
Working with Eric Gaudry and Frank Naylor at the University of Melbourne, Boyle (1977, 1989) carried out item and scale factor analyses of early C-State and C-Trait scales using a sample of 300 high school students (aged 15–18 years). Naylor and Gaudry also used large samples of high school students in their construction, validation and progressive rectification of separate C-State and C-Trait scales, eventually incorporated into the MCI (see Naylor, 1981/2011, Table 2).
Reliability
Internal Consistency
Naylor (1981/2011) reported Cronbach alpha coefficients ranging from .88 to .92 for the MCI C-State scale, and from .84 to .93 for the MCI C-Trait scale. Boyle (1978) had previously reported alpha coefficients of .91 for the earlier C-State scale, and .92 for the C-Trait scale. Likewise, Renner (2006) reported an alpha coefficient of .92 for the MCI C-State scale.
Test–Retest
Stability coefficients for the MCI C-Trait scale over a 4–5 week interval ranged from .77 to .83, whereas the corresponding coefficient for the C-State scale was .59, showing less stability for the C-State scale, as expected. Likewise, for the earlier C-State and C-Trait scales, the test–retest coefficients across a brief 15–20 minute interval were .56 for the C-State scale, and .77 for the C-Trait scale (Boyle, 1977, 1989). In accord with state–trait theory, the C-State scale exhibited greater situational sensitivity than did the C-Trait scale.
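The stability coefficients quoted above are simply Pearson correlations between scores obtained from two administrations of the same scale; a minimal sketch (hypothetical totals, numpy assumed) is shown below.

import numpy as np

# C-Trait totals for the same (hypothetical) respondents on two occasions
occasion_1 = np.array([55, 62, 48, 70, 59, 64, 51, 67])
occasion_2 = np.array([57, 60, 50, 72, 58, 61, 49, 69])

stability = np.corrcoef(occasion_1, occasion_2)[0, 1]   # test-retest coefficient
print(round(stability, 2))
# Trait scales are expected to yield substantially higher values than state scales.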
Validity
Convergent/Concurrent
Naylor (1981/2011, p. 180) reported positive correlations between the MCI scales and Holland’s RIASEC occupational interest categories, as measured by the Strong–Campbell Interest Inventory (SCII). The MCI C-State and C-Trait scales correlated positively with the Investigative (.36 and .42), Artistic (.25 and .32), and Social (.35 and .22) RIASEC factors, respectively, with negligible correlations for the Realistic (.01 and .04), Enterprising (.09 and .12), and Conventional (.06 and −.08) factors. Naylor (p. 180) also reported that the MCI C-Trait scale correlated .26 with a measure of verbal ability. Working with earlier versions of the C-State and C-Trait scales, Boyle (1977, 1989) had reported that the C-State and C-Trait scales correlated .80 and .63, respectively, with the State Epistemic Curiosity Scale (SECS). Kashdan, Rose, and Fincham (2004) reported positive correlations of the MCI (Total score) with the Curiosity and Exploration Inventory (CEI) exploration factor (.71), and with the CEI absorption factor (.57). Likewise, the MCI (Total score) correlated .36 with the Novelty Experiencing Scale (Total score), .60 with the STPI (Total score), and .40 with the Workplace Adaptation Questionnaire (Reio, 1997). Also, Renner (2006) reported that the MCI C-Trait scale correlated .39 with the overall Social Curiosity Scale (SCS), .52 with the general SCS dimension, and .16 with the covert SCS dimension.
Divergent/Discriminant
Previously, Boyle (1977, 1989) had reported that the C-State and C-Trait scales correlated −.25 and −.36, respectively, with the STAI A-State scale. Naylor (1981/2011, p. 180) reported that the MCI C-State and C-Trait scales correlated weakly with the Realistic (.01 and .04), Enterprising (.09 and .12), and Conventional (.06 and −.08) RIASEC factors. Naylor (p. 180) also reported that the C-Trait scale correlated weakly with a measure of numerical ability (.07), while on two separate measurement occasions four weeks apart, the C-State scale correlated weakly with measures of verbal ability (.07 and .18) and numerical ability (−.13 and .01), respectively. Likewise, the MCI did not correlate with the Sensation Seeking Scale (.02) (Reio, 1997).
Construct/Factor Analytic
A principal components analysis of the C-State and C-Trait item intercorrelations with oblique (promax) simple-structure rotation (N = 300) showed that reverse-worded and non-reversed items were loaded by separate components, suggesting they measured discrete constructs (Boyle, 1989). Subsequently, Naylor (1981/2011) chose not to include reversed items in the MCI. In a principal components analysis of the item intercorrelations with varimax rotation, Naylor (1981) reported that the MCI C-State and C-Trait scales emerged as distinct dimensions.
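The component-analytic step referred to above can be illustrated with a small sketch (our own, using a hypothetical 4 x 4 item correlation matrix and omitting the promax/varimax rotation applied in the original studies): unrotated principal components are obtained directly from the eigendecomposition of the correlation matrix.

import numpy as np

# Hypothetical item correlation matrix (the MCI analyses used the 40 items)
R = np.array([[1.00, 0.55, 0.20, 0.15],
              [0.55, 1.00, 0.25, 0.10],
              [0.20, 0.25, 1.00, 0.60],
              [0.15, 0.10, 0.60, 1.00]])

eigenvalues, eigenvectors = np.linalg.eigh(R)          # returned in ascending order
order = np.argsort(eigenvalues)[::-1]                  # largest components first
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

n_components = 2                                       # e.g., C-State vs. C-Trait
loadings = eigenvectors[:, :n_components] * np.sqrt(eigenvalues[:n_components])
print(np.round(loadings, 2))                           # unrotated component loadings

In practice, an oblique (promax) or orthogonal (varimax) rotation would then be applied to these loadings before interpretation, typically with a dedicated factor-analysis package.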
Criterion/Predictive
Measures of state and trait curiosity appear to have significant predictive validity. Boyle (1979) reported that curiosity-stimulating instructions produced elevations in C-State, enhancing performance on academic learning tasks (recall of prose materials). Reio (1997) reported that 15% of the variance associated with socialization-related learning was accounted for by the C-State and C-Trait MCI measures, and the standardized beta coefficient predicting job performance from the MCI (Total score) was .23, suggesting curiosity impacts positively on job performance (see Reio, 1997, p. 82).
Location
Naylor, F.D. (1981/2011). A State–Trait Curiosity Inventory. Australian Psychologist, 16, 172–183. (Also published online: 2 February 2011. DOI: 10.1080/00050068108255893. Retrieved 20 June, 2014.)
Results and Comments
As with Spielberger’s various state–trait measures, both the MCI C-State and C-Trait scales exhibit high Cronbach alpha coefficients, suggesting they provide somewhat narrow measurement of the curiosity construct (cf. Boyle, 1991a). Naylor specifically avoided the problematic inclusion of reverse-worded items, shown repeatedly in many factor analytic studies to measure something other than curiosity (cf. Boyle, 1989). Notwithstanding the rather restrictive state–trait measurement timeframes, the MCI global C-State and C-Trait scales appear to provide satisfactory measures of state and trait curiosity, useful for a wide variety of empirical studies involving the measurement of curiosity.
MELBOURNE CURIOSITY INVENTORY – STATE FORM
Directions: A number of statements which people have used to describe themselves are given below. Read each statement and then circle the appropriate number to the right of the statement to indicate how you feel right now, that is, at this moment. There are no right or wrong answers. Do not spend too much time on any statement but give the answer which seems to describe how you feel right now.
1 = Not at All; 2 = Somewhat; 3 = Moderately So; 4 = Very Much So.
1.I want to know more 1 2 3 4
2.I feel curious about what is happening 1 2 3 4
3.I am feeling puzzled 1 2 3 4
4.I want things to make sense 1 2 3 4
5.I am intrigued by what is happening 1 2 3 4
6.I want to probe deeply into things 1 2 3 4
7.I am speculating about what is happening 1 2 3 4
8.My curiosity is aroused 1 2 3 4
9.I feel interested in things 1 2 3 4
10.I feel inquisitive 1 2 3 4
11.I feel like asking questions about what is happening 1 2 3 4
12.Things feel incomplete 1 2 3 4
13.I feel like seeking things out 1 2 3 4
14.I feel like searching for answers 1 2 3 4
15.I feel absorbed in what I am doing 1 2 3 4
16.I want to explore possibilities 1 2 3 4
17.My interest has been captured 1 2 3 4
18.I feel involved in what I am doing 1 2 3 4
19.I want more information 1 2 3 4
20.I want to enquire further 1 2 3 4
MELBOURNE CURIOSITY INVENTORY – TRAIT FORM
Directions: A number of statements which people have used to describe themselves are given below. Read each statement and then circle the appropriate number to the right of the statement to indicate how you generally feel. There are no right or wrong answers. Do not spend too much time on any statement but give the answer which seems to describe how you generally feel.
1 = Almost Never; 2 = Sometimes; 3 = Often; 4 = Almost Always.
1.I think learning ‘about things’ is interesting and exciting 1 2 3 4
2.I am curious about things 1 2 3 4
3.I enjoy taking things apart to ‘see what makes them tick’ 1 2 3 4
4.I feel involved in what I do 1 2 3 4
5.My spare time is filled with interesting activities 1 2 3 4
6.I like to try to solve problems that puzzle me 1 2 3 4
7.I want to probe deeply into things 1 2 3 4
8.I enjoy exploring new places 1 2 3 4
9.I feel active 1 2 3 4
10.New situations capture my attention 1 2 3 4
11.I feel inquisitive 1 2 3 4
12.I feel like asking questions about what is happening 1 2 3 4
13.The prospect of learning new things excites me 1 2 3 4
14.I feel like searching for answers 1 2 3 4
15.I feel absorbed in things I do 1 2 3 4
16.I like speculating about things 1 2 3 4
17.I like to experience new sensations 1 2 3 4
18.I feel interested in things 1 2 3 4
19.I like to enquire about things I don’t understand 1 2 3 4
20.I feel like seeking things out 1 2 3 4
Note: Copyright © Australian Psychological Society. Reproduced with permission.
State–Trait Personality Inventory (STPI)
(Spielberger et al., 1995; Spielberger & Reheiser, 2009 ).
Variable
Spielberger identified Anxiety, Depression, Anger and Curiosity as basic emotions that motivate a wide range of behaviors. The State–Trait Personality Inventory (STPI) measures these four constructs as both traits (dispositions) and states (transitory emotions) (e.g., see Spielberger et al., 1995; Spielberger & Reheiser, 2009).
Description
The 80-item STPI includes 10 items for each of eight state and trait scales (Spielberger et al., 1995). The STPI was derived from previous unidimensional state–trait scales constructed by Spielberger, including the State–Trait Anxiety Inventory (STAI; Spielberger et al., 1983) and the State–Trait Anger Expression Inventory (STAXI; Spielberger, 1999). For each state item, instructions are to respond as to one’s present feelings. For the corresponding trait items, instructions are to respond as to how one generally feels. As with each of the single-dimension scales (STCI, STAI, STAS, STDS), the STPI state items ask respondents to describe various feelings they are experiencing at this very moment on a 4-point response scale: 1 = Not At All; 2 = Somewhat; 3 = Moderately So; 4 = Very Much So. Most items use single words to define the target feeling; a few use phrases (e.g., ‘hopeful about future’). The items are mainly adjective ratings; they simply add ‘I feel’ or ‘I am’ as a stem, rather than presenting the adjective alone. Sample items require ratings of tension (anxiety), sadness (depression), annoyance (anger) and inquisitiveness (curiosity).
Sample
Numerous samples were used in Spielberger’s initial studies into state–trait anxiety (Spielberger et al., 1983), depression (Spielberger, Ritterband, Reheiser, & Brunner, 2003), anger (Spielberger, 1999) and curiosity (Spielberger et al., 1981). The Preliminary Manual for the STPI (which excluded the depression scales) reported data on samples of 199 college students, 198 navy recruits, and 876 working adults (cf. Spielberger et al., 1995).
Reliability
Internal Consistency
Spielberger and Reheiser (2009) reported Cronbach alpha coefficients for the state and trait anger scales ranging from .87 to .93, and for the state and trait depression scales of .81 or higher (Mdn = .90). Likewise, they reported alpha coefficients ranging from .86 to .94 (Mdn = .93) for the state anxiety scale, while for the trait anxiety scale the median alpha coefficient was .90.
Test–Retest
Spielberger and Reheiser (2009) reported stability coefficients ranging from .73 to .86 for the trait anxiety scale across intervals of 3 to 15 weeks, whereas stability coefficients for the state scales were low (e.g., Mdn = .33 for the state anxiety scale). As expected, two-week stabilities for the state anger scale were found to be .27 (males) and .21 (females), while for the trait anger scale they were .70 (males) and .77 (females) (Jacobs, Latham, & Brown, 1988). Thus, the STPI state scales appear to be sensitive to transitory fluctuations in emotional states, and the stability coefficients for the state scales are considerably lower than those observed for the trait scales.
Validity
Convergent/Concurrent
Spielberger and Reheiser (2009) reported convergence between the STPI scales and measures of corresponding constructs (see also Krohne, Schmukle, Spaderna, & Spielberger, 2002; Spielberger et al., 2003). For example, the trait depression scale correlated from .72 to .85 (Mdn = .78) with other depression measures, including the Beck Depression Inventory (BDI), the Zung Self-Rating Depression Scale (ZUNG), and the Center for Epidemiological Studies Depression Scale (CES-D) (for a psychometric review of these scales, see Boyle, 1985b). As expected, the corresponding correlations with the state depression scale were lower (Mdn = .66). Spielberger and Reheiser also reported that trait anxiety correlated .73 with Taylor’s Manifest Anxiety Scale, and .85 with Cattell and Scheier’s (1963) Anxiety Scale Questionnaire, respectively (cf. Rossi & Pourtois, 2012).
Divergent/Discriminant
Spielberger and Reheiser (2009) reviewed evidence showing that corresponding state and trait STPI scales tend to be moderately correlated, but distinct. In the Preliminary Manual, Spielberger (1979) reported that the state scales were not strongly correlated with social desirability, although anxiety and anger correlated negatively with social desirability (−.14 and −.33, respectively).
Construct/Factor Analytic
Spielberger and Reheiser (2009) reported that separate factor analyses of the state and trait scales included in the STPI supported the distinction between the various state and trait measures. However, no factor analysis of the entire STPI has been reported to date.
Criterion/Predictive
Evidence for criterion validity has been provided for the precursor scales to the STPI, and the STPI itself would most likely exhibit similar criterion validity. For example, the STAI scales correlate with impaired performance and attentional bias (Eysenck & Derakshan, 2011). Trait anger is associated with elevated blood pressure (Spielberger & Reheiser, 2009). Matthews, Panganiban, and Hudlicka (2011) showed that under neutral mood conditions (N = 60), STPI trait anxiety correlated .40 with viewing frequency of threat stimuli. Wrenn, Mostofsky, Tofler, Muller, and Mittleman (2013) conducted a prospective cohort study of 1,968 survivors of myocardial infarction using the STPI anxiety and anger scales, and found that anxiety was associated with a higher mortality risk over 10 years. In a study of 103 overweight adolescents, Cromley et al. (2012) tested whether STPI trait anxiety and trait anger were associated with lower body satisfaction (odds ratios for the STPI predictors were .76 and .90, respectively).
Location
Spielberger, C.D., Ritterband, L.M., Sydeman, S.J., Reheiser, E.C., & Unger, K.K. (1995). Assessment of emotional states and personality traits: Measuring psychological vital signs. In J.N. Butcher (Ed.), Clinical personality assessment: Practical approaches (pp. 42–58). New York: Oxford University Press.
Spielberger, C.D., & Reheiser, E.C. (2009). Assessment of emotions: Anxiety, anger, depression, and curiosity. Applied Psychology: Health and Well-being, 1, 271–302.
Results and Comments
The STPI is useful for measuring the four emotions of anxiety, depression, anger and curiosity. Trait and state measures may be employed as outcome variables in the evaluation of therapeutic interventions (Spielberger et al., 2004). The state scale is also useful in assessing the impact of experimental manipulations in mood research (Matthews et al., 2011; Rossi & Pourtois, 2012), in correlational studies of emotion and performance (Eysenck & Derakshan, 2011; Matthews et al., 2011), and in assessing affective response in various settings (Zeidner, 1998). In comparison with the more comprehensive mapping of affect dimensions (e.g., Izard et al., 1993), the STPI provides measures of only four affect dimensions.
Note: The STPI is available from: Mind Garden, Inc., 855 Oak Grove Avenue, Suite 215, Menlo Park, CA 94025, USA. www.mindgarden.com/products/staisad.htm (Retrieved January 5, 2014).
Positive and Negative Affect Schedule – Expanded Form (PANAS-X)
(Watson & Clark, 1999).
Variable
The original PANAS (Watson, Clark, & Tellegen, 1988) comprised 10 adjectives for each of the two broad domains of Positive Affect (PA) and Negative Affect (NA). The expanded version (PANAS-X) retains these 20 items and adds another 40 items to assess three additional PA scales (labeled: Joviality, Self-assurance, and Attentiveness) and four additional NA dimensions (Fear, Guilt, Sadness, and Hostility).
Description
The PANAS-X measures both Positive Affect (PA) and Negative Affect (NA), as well as 11 primary affects labeled: Fear, Sadness, Guilt, Hostility, Shyness, Fatigue, Surprise, Joviality, Self-Assurance, Attentiveness, and Serenity. The PANAS-X includes eight different temporal instructions: ‘Right Now’, ‘Today’, ‘Past Few Days’, ‘Past Week’, ‘Past Few Weeks’, ‘Past Month’, ‘Past Year’, and ‘In General’ (Watson & Clark, 1994). As a measure of transitory emotions, the instructions ask respondents to rate how they feel ‘right now (at the present moment)’, scored on a 5-point Likert-type intensity scale ranging from ‘Very slightly or not at all’, through ‘A little’, ‘Moderately’, and ‘Quite a bit’, to ‘Extremely’. Using the timeframe of ‘Past Few Weeks’, respondents are instructed to indicate to what extent they have felt this way ‘during the past few weeks’, thereby measuring longer-lasting mood states. With instructions to respond as to how they felt ‘during the past year’, the PANAS-X measures dispositional affect dimensions. Given this wide range of temporal instructions, the PANAS-X provides greater flexibility in the measurement of affects than do state–trait measures.
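The lookup table below is our own schematic reading of this description, not part of the published PANAS-X materials, and the boundaries between mood states and dispositional traits are approximate; it simply records, as a small data structure, which kind of affect construct each instruction set is typically taken to capture.

# Hypothetical lookup table; boundaries are approximate and illustrative only.
TIMEFRAME_TO_CONSTRUCT = {
    "Right Now":      "transitory emotional state",
    "Today":          "transitory emotional state",
    "Past Few Days":  "mood state",
    "Past Week":      "mood state",
    "Past Few Weeks": "mood state",
    "Past Month":     "longer-lasting mood state / dynamic trait",
    "Past Year":      "dispositional affect",
    "In General":     "enduring (trait) affect",
}

def construct_measured(instruction_set: str) -> str:
    """Return the kind of affect construct a given PANAS-X instruction set taps."""
    return TIMEFRAME_TO_CONSTRUCT[instruction_set]

print(construct_measured("Past Few Weeks"))   # mood state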
Sample
Although extensive data are available for all eight sets of temporal instructions, with samples as large as 3,622 undergraduates for the original PANAS (Watson et al., 1988), data for the PANAS-X are also provided for all eight instruction sets with multiple, diverse samples (see Watson & Clark, 1994, Table 3). For example, used as a state affect measure (with ‘right now’ instructions), samples comprised 2,213 Southern Methodist University undergraduates, 279 Australian National University undergraduates, 158 VA substance abusers, and 56 psychiatric inpatients. Used as a trait affect measure (with general trait instructions), samples comprised 3,622 SMU undergraduates, 202 SMU employees, 815 Detroit-area adults, 229 Australian adults, 117 psychiatric inpatients, and a mixed clinical sample of 107 patients.
Reliability
Internal Consistency
Median Cronbach alpha coefficients for each of the primary scales were reported by Watson and Clark (1994, p. 11) as follows: Fear (.87), Sadness (.87), Guilt (.88), Hostility (.85), Shyness (.83), Fatigue (.88), Surprise (.77), Joviality (.93), Self-Assurance (.83), Attentiveness (.78), and Serenity (.76). Alpha coefficients for the broad PA and NA scales ranged from .83 to .90, and from .84 to .91, respectively (see Watson & Clark, 1994, Table 4). Ready et al. (2011, p. 786) reported alpha coefficients for the specific PA and NA scales ranging from .70 to .93, and from .79 to .92, respectively. Likewise, alpha coefficients for the state and trait PA and NA subscales were reported as .91 and .87, and .72 and .86, respectively (Kashdan & Roberts, 2004).
Test–Retest
Used as a mood state measure (past week instructions), test–retest coefficients of .43 for Positive Affect and .41 for Negative Affect over a two-month retest interval (N = 308) were reported. Used as a trait affect measure (general trait instructions), stability coefficients (N = 502) increased to .70 and .71, respectively. For the 11 specific affect scales, when used with mood state (past week) instructions, test–retest coefficients were fairly low, ranging from .23 to .49, as would be expected for situationally-sensitive measures of transitory emotions. When used with general trait affect instructions, stability coefficients ranged from .56 to .70, showing that affects can also take the form of relatively stable trait dimensions (see Watson & Clark, 1994, Table 20). Over an extended retest interval of six years, stability coefficients were found to be .42 for Positive Affect and .43 for Negative Affect, showing that trait affects exhibit moderate stability over a period of many years (cf. Leue & Lange, 2011).
Validity
Convergent/Concurrent
Five of the PANAS-X scales measure the same affect dimensions as the Profile of Mood States (POMS; McNair et al., 1971). Convergent validity correlations range from .85 to .91 for the related scales, with other correlations ranging up to .74 between the POMS Depression–Dejection scale and the PANAS-X Fear scale. The highest correlation for the Fear scale was obtained with the POMS Tension–Anxiety scale (.85). Watson and Clark (1994, p. 16) also reported correlations of self- and peer-ratings ranging up to .52 for the Sadness scale and up to .44 for the Self-Assurance scale. Furthermore, Ready et al. (2011, p. 787) reported convergent correlations for the PA scales (Joviality, Self-assurance, and Attentiveness) ranging from .43 to .74, and convergent correlations for the NA scales (Fear, Sadness, Guilt, and Hostility) ranging from .57 to .72, respectively.
Divergent/Discriminant
Evidence of divergent validity includes, for example, the low correlations of Joviality and Self-Assurance with Sadness (−.02 and −.19, respectively), and also of Fear with Fatigue (−.02) and with Surprise (.03) (see Watson & Clark, 1994, Table 17). The PANAS-X manual reports correlations of self- and peer-ratings as low as .14 for the Surprise scale, and .15 for the Guilt scale (see Watson & Clark, Table 17). Watson and Clark (1994, p. 18) pointed out that ‘the PANAS-X scales showed better discriminant validity – that is, they were less highly intercorrelated than were their POMS counterparts.’ Thus, considering only positive affect dimensions, the intercorrelations between the POMS scales ranged from .47 to .69 (Mdn = .64), whereas the intercorrelations between the PANAS-X scales ranged from .27 to .61 (Mdn = .45) (see Watson & Clark, Table 15). Ready et al. (2011, p. 787) reported that the broad PA and NA scales exhibited discriminant validity correlations of −.42 in a sample of elderly adults, and −.30 in a separate sample of undergraduates (see below).
Construct/Factor Analytic
Watson and Clark (1994, Table 7) reported the results of principal axis factor analyses with varimax rotation of the PANAS-X item intercorrelations in 10 separate samples (ranging from N = 289 to N = 1,657) that supported the construct validity of the PA and NA dimensions. Using a large non-clinical sample (N = 1,003; males: N = 466; females: N = 537), evidence of construct validity of the broad PA and NA dimensions was provided independently by Crawford and Henry (2004). CFAs based on the PANAS item intercorrelations resulted in a best-fitting model (chi-square = 689.8, CFI = .94, SRMR = .05, and RMSEA = .06). Likewise, Ready et al. (2011) reported the results of exploratory and confirmatory factor analyses of the PANAS-X responses of 203 older adults (M = 73.5 years) and 349 undergraduates (M = 19.1 years). Using principal axis factoring with varimax rotation, separate EFAs of the facet scale intercorrelations supported the higher-order PA and NA structure across both age groups. EFAs of the item intercorrelations also provided support for the specific facet scales (Ready et al., pp. 788–789). While in the younger sample the four NA facets (labeled: Guilt, Fear, Hostility, Sadness) emerged clearly, in the older sample the Guilt facet attracted items from the Sadness facet, and items reflecting anxiety and loneliness comprised the fourth facet. The three PA facets (labeled: Joviality, Attentiveness, and Self-assurance) were replicated in both age samples. CFAs provided support for three of the NA facets (excluding Sadness) (chi-square = 945.71, RMSEA = .09, BIC = 23,199.78, DIC = 23,044.04), and for three of the PA facets (chi-square = 1,019.13, RMSEA = .09, BIC = 23,980.56, and DIC = 23,826.17).
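For readers who want to see where a fit statistic such as RMSEA comes from, one commonly used point-estimate formula requires only the model chi-square, its degrees of freedom, and the sample size; the sketch below is illustrative only (the degrees of freedom shown are hypothetical, since they are not reported in this chapter).

import math

def rmsea(chi_square, df, n):
    """One common point estimate of RMSEA from chi-square, df and sample size."""
    return math.sqrt(max(chi_square - df, 0.0) / (df * (n - 1)))

# Hypothetical degrees of freedom paired with values of the kind reported above
print(round(rmsea(chi_square=689.8, df=160, n=1003), 3))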
Criterion/Predictive
Petrie, Chapman, and Vines (2013) reported that, among a sample of 91 African-American females, the PA and NA scales of the PANAS-X predicted anxiety disorder (AUC values = .76 and .70, respectively; both p < .001) and social phobia (AUC values = .81 and .84, respectively; both p < .001). DSM-IV-TR diagnoses of anxiety disorder or social phobia were made using the Anxiety Disorders Interview Schedule (ADIS-IV) (see Petrie et al., pp. 138–139).
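An AUC of the kind reported by Petrie et al. is the area under the ROC curve obtained when a continuous scale score is used to predict a dichotomous diagnosis; the brief sketch below uses made-up scores and assumes the scikit-learn package is available.

from sklearn.metrics import roc_auc_score

diagnosis = [1, 0, 1, 0, 0, 1, 0, 1, 0, 0]            # 1 = disorder present (hypothetical)
na_scores = [27, 18, 22, 25, 15, 31, 20, 19, 24, 16]  # hypothetical NA scale totals
print(round(roc_auc_score(diagnosis, na_scores), 2))  # 0.79 for these made-up data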
Location
Watson, D., & Clark, L.A. (1999). The PANAS-X: Manual for the Positive and Negative Affect Schedule – Expanded Form. Iowa City, IA: University of Iowa.
Results and Comments
The original PANAS has been widely used to measure the broad dimensions of positive and negative affect (cf. Tellegen, Watson, & Clark, 1999). Both the PANAS and PANAS-X versions have almost unrivalled flexibility in terms of the timeframe of the adjective ratings, buttressed by considerable data for all instruction sets. Since the observed alpha coefficients were all high, the item composition of the PANAS-X scales might be enhanced somewhat by including a greater diversity of items, allowing broader measurement of the particular constructs being measured (cf. Boyle, 1991).
POSITIVE AND NEGATIVE AFFECT SCHEDULE – EXPANDED FORM
Sample PANAS-X Protocol Illustrating ‘Past Few Weeks’ Time Instructions
This scale consists of a number of words and phrases that describe different feelings and emotions. Read each item and then mark the appropriate answer in the space next to that word. Indicate to what extent you have felt this way during the past few weeks. Use the following scale to record your answers:
1 = very slightly or not at all; 2 = a little; 3 = moderately; 4 = quite a bit; 5 = extremely
______ cheerful ______ sad ______ active ______ angry at self
______ disgusted ______ calm ______ guilty ______ enthusiastic
______ attentive ______ afraid ______ joyful ______ downhearted
______ bashful ______ tired ______ nervous ______ sheepish
______ sluggish ______ amazed ______ lonely ______ distressed
______ daring ______ shaky ______ sleepy ______ blameworthy
______ surprised ______ happy ______ excited ______ determined
______ strong ______ timid ______ hostile ______ frightened
______ scornful ______ alone ______ proud ______ astonished
______ relaxed ______ alert ______ jittery ______ interested
______ irritable ______ upset ______ lively ______ loathing
______ delighted ______ angry ______ ashamed ______ confident
______ inspired ______ bold ______ at ease ______ energetic
______ fearless ______ blue ______ scared ______ concentrating
______ disgusted with self ______ shy ______ drowsy ______ dissatisfied with self
Notes: The PANAS-X includes eight different sets of temporal instructions intended to measure affects ranging all the way from transitory emotional states, through longer-lasting mood states, to relatively stable, enduring personality traits. The above example provides instructions relating to mood states that remain relatively stable over a period of some weeks.
Copyright © 1994, David Watson and Lee Anna Clark. Reproduced with permission.
The PANAS-X manual is available from the University of Iowa website located at: http://ir.uiowa.edu/psychology_pubs/11/ (Retrieved January 5, 2014).
Differential Emotions Scale (DES-IV)
(Izard, 1991; Izard et al., 1993).
Variable
Differential Emotions Theory has been well explicated (e.g., Izard, 1990, 1991, 1993, 2001, 2002, 2007, 2008; Izard et al., 1993, 1995, 2001; Izard, Quinn, & Most, 2008). The Differential Emotions Scale (DES-IV) provides a measure of fundamental emotions universally discernible in the facial expressions of infants and young children.
Description
The 36 items are grouped into 12 subscales labeled: Interest, Joy, Surprise, Sadness, Anger, Disgust, Contempt, Self-Hostility, Fear, Shame, Shyness, and Guilt (Izard et al., 1993, p. 851). Youngstrom and Green (2003, pp. 283–284) stated:
‘The DES-IV consists of 36 items divided into 12 scales. Eleven discrete emotion scales and 1 inner-directed hostility scale consist of three items, each rating the presence or absence of the target emotion on a 5-point scale ranging from rarely or never to very often. An aggregate of 9 discrete negative emotion scales of the DES-IV (anger, contempt, disgust, sadness, shyness, shame, guilt, fear, and self-directed hostility) comprise the index of negative emotions. Surprise, enjoyment, and [interest] ... consistently load on a general Positive Affect factor ... in both normal (e.g., Boyle, 1984; Izard et al., 1993, 2001; Youngstrom et al., 2001) and clinical (e.g., Carey, Finch, & Carey, 1991; Kashani, Suarez, Allan, & Reid, 1997) populations.’
Sample
The DES-IV was constructed through a process of ongoing progressive rectification (earlier versions included the DES-I, DES-II, and DES-III measures), using a wide diversity of samples in many different studies (e.g., one typical sample comprised 289 10- and 11-year-old public school children; another comprised 113 mothers who had recently given birth – see Izard et al., 1993, p. 851).
Reliability
Internal Consistency
Cronbach alpha coefficients for the 12 DES-IV subscales were reported by Izard et al. (1993, p. 851) as follows: Interest (.75), Joy (.83), Surprise (.65), Sadness (.85), Anger (.85), Disgust (.56), Contempt (.82), Fear (.83), Guilt (.73), Shame (.60), Shyness (.62), and Self-Hostility (.75).
Test–Retest
Izard et al. (1993, p. 854) reported test–retest coefficients for mothers from 2.5 to 6 months after childbirth ranging from .50 to .83 (Mdn = .70). Test–retest stability coefficients over a six-month interval were reported by Ricard-St-Aubin, Philippe, Beaulieu-Pelletier, and Lecours (2010, p. 47), based on a sample of 213 participants aged 18 to 72 years, as follows: Interest (.76), Joy (.78), Surprise (.61), Sadness (.75), Anger (.68), Disgust (.49), Contempt (.77), Fear (.86), Guilt (.79), Shame (.73), Shyness (.72), and Self-Hostility (.68).
Parallel Forms
The median correlation of the DES-IV-A (trait version) with the DES-IV-B (mood-state version) across intervals ranging from 2.5 months up to three years was found to be .64 (Izard et al., p. 854).
Validity
Convergent/Concurrent
All the DES-IV positive subscales correlate positively with Extraversion, and all the negative subscales correlate positively with Neuroticism, as measured via the Eysenck Personality Questionnaire (cf. Boyle, 1984b, 1985c, 1986b). Izard et al. (1993, p. 854) reported the following correlations with the EPQ-R: positive affects – Interest (.35), Joy (.36), and Surprise (.21); negative affects – Sadness (.44), Anger (.32), Disgust (.34), Contempt (.46), Fear (.40), Shame (.41), Shyness (.46), Guilt (.41), and Self-Hostility (.46). Several positive correlations ranging up to .49 between the DES-IV subscales and the Personality Research Form subscales were reported by Izard et al. (1993, Table 10).
Divergent/Discriminant
Shame correlated −.33 with Extraversion, while Interest and Joy correlated −.27 and −.32 with Neuroticism, respectively (Izard et al., 1993, p. 854). Several zero-order or negative correlations (ranging down to −.35) between the DES-IV subscales and the Personality Research Form subscales were reported by Izard et al. (1993, p. 856).
Construct/Factor Analytic
Several studies have contributed evidence for the construct validity of the DES-IV (e.g., Blumberg & Izard, 1985, 1986; Fridlund, Schwartz, & Fowler, 1984; Schwartz, 1982). Further evidence of construct validity has been provided by Akande (2002), Youngstrom and Green (2003), and Ricard-St-Aubin et al. (2010). For example, Izard et al. (1993) carried out a principal components analysis with orthogonal varimax rotation and found that all 12 subscales emerged as distinct dimensions, supporting the construct validity of the DES-IV instrument. At the higher-stratum level (cf. Boyle, 1986a, 1987a,e, 1989b; Boyle & Katz, 1991), a principal components analysis with orthogonal varimax rotation of the DES-IV scale intercorrelations suggested two broad components labeled Positive Emotionality and Negative Emotionality (Izard et al., 1993, p. 850). In addition, Kotsch, Gerbing, and Schwartz (1982) reported the results of a confirmatory factor analysis (CFA) of the DES-III, supporting the construct validity of the various subscales. A separate factor analysis of the DES-IV item intercorrelations using a sample of 289 10- to 11-year-old public school children (see Izard et al., 1993, p. 851) also supported the DES-IV subscale structure.
Criterion/Predictive
Izard et al. (1993, Tables 8 & 11) reported several predictive validity coefficients (standardized betas ranging from −.61 to .52) showing, for example, that the DES-IV scales were significant predictors of the Eysenck Personality Questionnaire (EPQ) scales labeled Extraversion (R = .59), Neuroticism (R = .63), and Psychoticism (R = .55). Likewise, Izard et al. reported several predictive validity coefficients (standardized betas ranging from −.87 to .69) showing that the DES-IV scales were significant predictors of the Personality Research Form (PRF) scales labeled Affiliation (R = .41), Aggression (R = .62), Defendance (R = .56), Dominance (R = .31), Endurance (R = .37), Understanding (R = .36), Nurturance (R = .37), Harm Avoidance (R = .55), and Play (R = .32).
Location
Izard, C.E., Libero, D.Z., Putnam, P., & Haynes, O.M. (1993). Stability of emotion experiences and their relations to traits of personality. Journal of Personality and Social Psychology, 64, 847–860.
Izard, C.E. (2009). Emotion theory and research: Highlights, unanswered questions, and emerging issues. Annual Review of Psychology, 60, 1–25.
Results and Comments
The DES-IV appears to be a relatively reliable and valid measure of 12 fundamental emotions universally discernible in facial expressions. Depending on the instructions provided to respondents, the instrument is flexible in allowing measurement of these affective dimensions as dispositional affects (persisting over long periods of time), less stable mood states (persisting over the past week), and transitory emotional states (fluctuating from moment to moment).
DIFFERENTIAL EMOTIONS SCALE
Trait Instructions (DES-IV-A)
The trait version of the DES-IV includes instructions asking respondents ‘In your daily life, how often do you ...’ experience a particular emotion. Responses are scored on a 5-point Likert-type frequency scale as follows:
1 = Rarely or Never; 2 = Hardly Ever; 3 = Sometimes; 4 = Often; 5 = Very Often.
1.Feel regret, sorry about something you did 12345
2.Feel sheepish, like you do not want to be seen 12345
3.Feel glad about something 12345
4.Feel like something stinks, puts a bad taste in your mouth 12345
5.Feel you can’t stand yourself 12345
6.Feel embarrassed when anybody sees you make a mistake 12345
7.Feel unhappy, blue, downhearted 12345
8.Feel surprised, like when something suddenly happens you had no idea would happen 12345
9.Feel like somebody is a low-life, not worth the time of day 12345
10.Feel shy, like you want to hide 12345
11.Feel like what you’re doing or watching is interesting 12345
12.Feel scared, uneasy, like something might harm you 12345
13.Feel mad at somebody 12345
15.Feel happy 12345
16.Feel like somebody is ‘good for nothing’ 12345
17.Feel so interested in what you’re doing that you’re caught up in it 12345
18.Feel amazed, like you can’t believe what’s happened, it was so unusual 12345
20.Feel like screaming at somebody or banging on something 12345
21.Feel sad and gloomy, almost like crying 12345
22.Feel like you did something wrong 12345
23.Feel bashful, embarrassed 12345
24.Feel disgusted, like something is sickening 12345
25.Feel joyful, like everything is going your way, everything is rosy 12345
26.Feel like people laugh at you 12345
27.Feel like things are so rotten you could make you sick 12345
28.Feel sick about yourself 12345
29.Feel like you are better than somebody 12345
30.Feel like you ought to be blamed for something 12345
31.Feel the way you do when something unexpected happens 12345
32.Feel alert, curious, kind of excited about something unusual 12345
33.Feel angry, irritated, annoyed with somebody 12345
34.Feel discouraged, like you can’t make it, nothing’s going right 12345
35.Feel afraid 12345
36.Feel like people always look at you when anything goes wrong 12345
MOOD STATE INSTRUCTIONS (DES-IV-B)
The mood-state version of the DES-IV has instructions asking respondents how often they have experienced a particular emotion ‘during the past week’ (thereby measuring longer-lasting mood states). As with the trait version, responses are scored on a 5-point Likert-type frequency scale as follows:
1 = Rarely or Never; 2 = Hardly Ever; 3 = Sometimes; 4 = Often; 5 = Very Often.
EMOTIONAL STATE INSTRUCTIONS (DES-IV-C)
As a measure of fleeting/transient emotional states, the DES-IV has instructions asking respondents, ‘How do you feel right now at this very moment?’, scored on a 5-point Likert-type intensity scale ranging from:
1 = Not at All; 2 = Slightly; 3 = Somewhat; 4 = Moderately So; 5 = Very Much.
Note: Permission to use the DES-IV can be obtained from the author Carroll E. Izard at the following email address: [email protected]
Reproduced with permission.
Profile of Mood States (POMS 2)
(Heuchert & McNair, 2012).
Variable
As stated by Heuchert and McNair (2012, p. 1), the Profile of Mood States (POMS 2) ‘allows for the rapid assessment of transient and fluctuating feelings, as well as relatively enduring affect states.’ It provides measures of seven clinically important mood state dimensions. There are two versions of the POMS 2 instrument: an adult form (POMS 2-A) and a youth form (POMS 2-Y).
Description
The original POMS (McNair, Lorr, & Droppleman, 1992; Lorr, McNair, & Heuchert, 2003), which measured six mood states labeled Tension–Anxiety, Depression–Dejection, Anger–Hostility, Vigor–Activity, Fatigue–Inertia, and Confusion–Bewilderment, has been used in about 4,000 published studies (see Bourgeois, LeUnes, & Meyers, 2010; Heuchert & McNair, 2012; McNair, Heuchert, & Shilony, 2003). The POMS 2-A and POMS 2-Y retain the six subscales of the original POMS instrument, but additionally include a scale for Friendliness. The POMS 2 is a 65-item adjective checklist with instructions to respond ‘How you have been feeling during the PAST WEEK, INCLUDING TODAY’ on a 5-point Likert-type response scale as follows: 0 (Not at All); 1 (A Little); 2 (Moderately); 3 (Quite a Bit); 4 (Extremely). The feelings rated include nervousness (Tension–Anxiety), unhappiness (Depression–Dejection), fury (Anger–Hostility), energy (Vigor–Activity), exhaustion (Fatigue–Inertia), and inability to concentrate (Confusion–Bewilderment). Used with these instructions, the POMS 2 measures relatively recent mood state elevations. If one changes the instructions to respond as to ‘how you feel RIGHT NOW’, then the instrument measures emotional states. Indeed, both the POMS and POMS 2 have been used extensively as measures of transitory emotional states (e.g., Beckers, Wicherts, & Schmidt, 2007; Boyle, 1987a,b, 1988). Heuchert and McNair (2012, p. 13) also pointed out that the POMS 2 is ‘adaptable to state and trait assessments of affect.’ Short forms of both the adult (POMS 2-A Short) and youth (POMS 2-Y Short) versions are also available.
Sample
The POMS and its short forms have been used with various populations of medical patients (Curran, Andrykowski, & Studts, 1995; Guadagnoli & Mor, 1989; Wyrwich & Yu, 2011; Baker, Denniston, Zabora, Polland, & Dudley, 2002; Walker, Sprague, Sleator, & Ullmann, 1988), children (Walker et al., 1988), adolescents (Terry, Lane, & Fogarty, 2003), university students (Barker-Collo, 2003; Reddon, Marceau, & Holden, 1985), working adults (Morfeld, Petersen, Kruger-Bodeker, von Mackensen, & Bullinger, 2007), athletes (Bell & Howe, 1988) and older adults (Gibson, 1997; Shin & Colling, 2000; Nyenhuis, Yamamoto, Luchetta, Terrien, & Parmentier, 1999). Construction of the POMS 2-A was based on a normative sample of 1,000 North American adults, with stratified random sampling to approximate the USA 2000 census. Norms for the POMS 2-Y were based on 100 adolescents at each age level, a total of 500 cases, also weighted to match the USA 2000 census (Heuchert & McNair, 2012).
Reliability
Internal Consistency
Cronbach alpha coefficients for the POMS 2-A ranged from .76 to .95 for the normative sample, and from .83 to .97 for the clinical sample. For the POMS 2-Y, alpha coefficients ranged from .76 to .95 for the normative sample, and from .78 to .96 for the clinical sample (Heuchert & McNair, 2012, p. 37). These findings are similar to those for the original POMS (e.g., Curran et al., 1995; Gibson, 1997; O’Halloran, Murphy, & Webster, 2004; Wyrwich & Yu, 2011).
Test–Retest
Test–retest reliability coefficients for the POMS 2-A ranged from .48 to .72 at one week and from .34 to .70 at one month, and for the POMS 2-Y from .45 to .75 at one week and from .02 to .59 at one month (Heuchert & McNair, 2012, p. 37).
Validity
Convergent/Concurrent
Positive correlations between corresponding scales of the POMS 2-A and the PANAS-X ranged from .57 to .84 (Mdn = .73) (Heuchert & McNair, 2012, p. 47). Specifically, Tension–Anxiety correlated with Fear (.57), Anger–Hostility correlated with Hostility (.84), Depression–Dejection correlated with Sadness (.70), Fatigue–Inertia correlated with Fatigue (.73), and Vigor–Activity correlated with Positive Affect (.79). However, correlation coefficients between the POMS 2-Y and corresponding measures have not been reported to date.
Divergent/Discriminant
Heuchert and McNair (2012, pp. 44–46) reported that both the POMS 2-A and POMS 2-Y discriminated effectively between normal individuals and clinical patients suffering primarily from either anxiety or depression. In addition, the Vigor–Activity and Friendliness scales exhibited negative correlations with the six clinical disorder scales, ranging from −.21 to −.47 for the POMS 2-A, and from −.07 to −.28 for the POMS 2-Y (Heuchert & McNair, 2012, pp. 43–44).
Construct/Factor Analytic
The factor structure of the original POMS instrument was confirmed by Boyle (1987b) in an Australian sample of 289 undergraduates, using an iterative PAF procedure, with the number of factors extracted based on Cattell's scree test (Cattell, 1978; Cattell & Vogelmann, 1977), and with oblique simple structure rotation (see Child, 2006, pp. 77–78). Terry et al. (2003) carried out a multi-sample CFA of the adolescent version (POMS-A; N = 2,549) supporting the subscale structure (best fitting model: chi-square = 3966.49, CFI = .91, TLI = .90, and RMSEA = .03). More recently, Heuchert and McNair (2012, p. 43) reported the results of a CFA (based on 1000 normals and 215 clinical patients for the POMS 2-A) which revealed an NFI = .92, NNFI = .91, CFI = .93, and RMSEA = .10. They also reported a separate CFA for the POMS 2-Y (based on 500 normal individuals and 133 clinical patients) which revealed an NFI = .92, NNFI = .92, CFI = .94, and RMSEA = .10, thereby providing empirical support for the structure of the POMS 2-A and POMS 2-Y measures.
Criterion/Predictive
Using a sample of 312 Grade 11 and 12 students, Newcombe and Boyle (1995) reported that the POMS was a
significant predictor of sports participants’ personality profiles and that, ‘univariate tests showed the participants
to be more extraverted and vigorous, and less anxious, neurotic, depressed and confused.’ (p. 277). Sobhanian,
Boyle, Bahr, and Fallo (2006) reported that mean scores on the POMS subscales (except for Vigor) were signifi-
cantly elevated when refugees were incarcerated in the Woomera Detention Centre, and subsequently declined
following their release into the Australian community.
Location
Heuchert, J.P. & McNair, D.M. (2012). Profile of Mood States, 2nd Edition: POMS 2 . North Tonawanda, NY:
Multi-Health Systems Inc.
Results and Comments
The POMS has been commercially available for over 40 years. It has been modified and adapted to different needs and translated into no fewer than 42 other languages. With the introduction of the POMS 2 (Heuchert & McNair, 2012), some changes were made to the core set of adjectives, and the norms were updated and expanded, for example, by including a normative sample of over-60-year-olds. Its brevity, even in the longest versions, and flexibility in administration suggest its popularity will continue. The POMS 2 manual (Heuchert & McNair, 2012)
does not provide a scoring key or instructions for hand scoring. Users should be aware that scoring of the POMS
2 must be carried out using the publisher’s online scoring service, even for paper-and-pencil administrations.
POMS 2 SAMPLE ITEMS
The following seven adjectives (one for each subscale) are typical of those in the POMS 2.
POMS 2 Scale Sample Adjective
Anger–Hostility Furious
Confusion–Bewilderment Muddled
Depression–Dejection Hopeless
Fatigue–Inertia Exhausted
Friendliness Friendly
Tension–Anxiety Uneasy
Vigor–Activity Energetic
Notes :
Copyright © 2012, Juvia P. Heuchert, Ph.D. and Douglas M. McNair, Ph.D., under exclusive license to Multi-Health Systems Inc. All rights reserved.
In the USA, P.O. Box 950, North Tonawanda, NY 14120-0950, 1-800-456-3003.
In Canada, 3770 Victoria Park Avenue, Toronto, ON M2H 3M6, 1-800-268-6011, 1-416-492-2627, Fax 1-416-492-3343.
Internationally, +1-416-492-2627. Fax, +1-416-492-3343 or (888) 540-4484.
Reproduced with permission.
Multiple Affect Adjective Check List /C0Revised (MAACL-R)
(Zuckerman & Lubin, 1985; Lubin & Zuckerman, 1999 ).
Variable
The MAACL-R assesses five affect dimensions labeled: Anxiety, Depression, Hostility, Positive Affect, and Sensation Seeking, as well as the two higher-order dimensions of Dysphoria (A + D + H) and PASS (PA + SS).
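The two higher-order composites are simple sums of the first-order scale scores. A minimal sketch, assuming the five scale scores have already been obtained through the standard MAACL-R scoring procedure; the function and variable names below are illustrative, not the publisher's scoring syntax.

```python
# Illustrative sketch of the MAACL-R higher-order composites described above.
# Assumes the five first-order scale scores are already computed; names are
# illustrative rather than official scoring syntax.

def maacl_r_composites(anxiety, depression, hostility, positive_affect, sensation_seeking):
    """Return the two higher-order MAACL-R composites."""
    dysphoria = anxiety + depression + hostility      # Dysphoria = A + D + H
    pass_score = positive_affect + sensation_seeking  # PASS = PA + SS
    return {"Dysphoria": dysphoria, "PASS": pass_score}

print(maacl_r_composites(8, 5, 4, 12, 9))
# {'Dysphoria': 17, 'PASS': 21}
```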
Description
The MAACL-R comprises 132 adjectival measures of affects. Both state and trait instructions are provided
using identical adjective lists. According to Lubin and Zuckerman (1999, p. 2) , ‘the State Form asks subjects to
describe how they feel ‘now-today,’ the Trait Form asks them to check adjectives describing how they ‘generally
feel.’ The adjective content is largely unchanged from the 1985 revision, but the scoring system has been revised
and a measure of response acquiescence included. A series of factor analytic studies identified the five affect dimensions (Zuckerman, Lubin, & Rinck, 1983). Lubin, Whitlock, Reddy, and Petren (2001) reported that a short-form of the MAACL-R showed reliability and validity comparable to those of the full MAACL-R form.
Sample
The original revision of the MAACL-R was based on a USA sample of 1543 participants using the Trait Form,
while 536 undergraduates were used to derive scoring keys for the State Form. The scoring system was replicated
using a sample of 746 adolescents. Norms are based on groups of adults, adolescents, university students, and
elderly males and females.
Reliability
Internal Consistency
Lubin and Zuckerman (1999) reported Cronbach alpha coefficients ranging from .69 to .95 across nine separate
samples for the State Form and from .69 to .95 across eight samples for the Trait Form.
Test–Retest
For the State Form, Lubin and Zuckerman (1999) reported test–retest coefficients ranging from .09 to .52 over an interval of 1–5 days (N = 78 normal adults), and ranging from .08 to .49 (N = 65 psychiatric inpatients). For the Trait Form, they reported stability coefficients ranging up to .92, across a time interval of 4–8 weeks. Maloni, Park, Anthony, and Musil (2005) reported mean test–retest coefficients for the Dysphoria scale across a 2-week retest interval (.57) and across a 4-week retest interval (.43).
Validity
Convergent/Concurrent
Convergent correlations with self-rating scales were reported for the State Form of the MAACL-R (Lubin & Zuckerman, 1999, p. 12) for 110 adolescents and 97 community college students, ranging from .11 to .67 for the three negative scales, and from .32 to .71 for the two positive scales. Positive correlations between the State MAACL-R and the state and trait scales of the Spielberger STPI ranged from .10 to .62, and with the PANAS scales from .53 to .73 (Lubin & Zuckerman, 1999, p. 13). Convergent correlations were also reported with the POMS, the Affect Balance Scale, the Toronto Alexithymia Scale, and the Affect Intensity Measure (Lubin & Zuckerman, 1999, pp. 13–14).
Divergent/Discriminant
The MAACL-R manual provides information on discriminant validity of the State Form with self-ratings ranging from −.08 to −.50 (Lubin & Zuckerman, 1999, p. 12). Negative correlations between the State Form of the MAACL-R and the STPI and PANAS scales ranged from −.11 to −.54 (p. 13). None of the MAACL-R state scales correlated significantly with the Marlowe–Crowne Social Desirability Scale; however, the Anxiety, Depression, and Hostility state and trait scales correlated negatively with the Edwards Social Desirability Scale (ranging from −.31 to −.52) (Lubin & Zuckerman, 1999, pp. 3–4). Discriminant correlations were also reported with the POMS, the Affect Balance Scale, the Toronto Alexithymia Scale, and the Affect Intensity Measure (Lubin & Zuckerman, 1999, pp. 13–14; cf. Zuckerman et al., 1986).
Construct/Factor Analytic
Zuckerman et al. (1983) reported factor analyses (principal axis plus varimax rotation) of the MAACL-R item
intercorrelations, producing a 5-factor structure. Subsequently, Hunsley (1990a,b) contrasted 2- and 5-dimensional
solutions using both principal components and principal axis methods (with orthogonal rotation) on a sample of
307 undergraduates, and concluded that a 2-dimensional structure (Positive and Negative Affect) provided a
better solution since the five MAACL-R scales exhibited significant intercorrelations despite use of orthogonal
rotation in their construction. Zuckerman (1990) commented that the factor analyses based on adult samples
reported in the MAACL-R manual used the state instructions whereas Hunsley factor analysed the trait version.
Criterion/Predictive
Lubin and Zuckerman (1999) investigated the validity of the State Form of the MAACL-R in predicting dropout from Air Force basic training (N = 200). Dropouts from training exhibited higher scores on Anxiety, Depression, Hostility, and Dysphoria, and lower scores on Positive Affect and PASS (p. 15). Van Whitlock and Lubin (1998) reported the validity of the MAACL-R scales in predicting which Driving While Intoxicated (DWI) offenders (N = 123) remained drug/alcohol free following treatment intervention as compared with those who were unsuccessful.
Location
Lubin, B., & Zuckerman, M. (1999). Manual for the MAACL-R: Multiple Affect Adjective Checklist-Revised . San
Diego, CA: Educational and Industrial Testing Service.
Zuckerman, M., & Lubin, B. (1985). Manual for the Revised Multiple Affect Adjective Check List . San Diego, CA:
Educational and Industrial Testing Service.
Results and Comments
The MAACL-R has a long history of research. The latest version has been updated from its venerable forebear.
The method used to control the influence of acquiescence response style provides a means of compensating for
the inevitable differences among individuals in their tendency to endorse few or many adjectives. In addition,
evidence was provided ( Lubin & Zuckerman, 1999 ) that the influence of social desirability is perhaps stronger for
the MAACL-R negative affect dimensions (Anxiety, Depression, Hostility) than for the positive affect ones
(Positive Affect and Sensation Seeking).
MAACL-R SAMPLE ITEMS
The following five adjectives (one for each subscale) are typical of those in the MAACL-R.
MAACL-R Scale Sample Adjective
Anxiety (A) Nervous
Depression (D) Lonely
Hostility (H) Angry
Positive Affect (PA) Good-natured
Sensation Seeking (SS) Adventurous
Note:
Copyright © EdITS Publishers
Reproduced with permission.
Multidimensional Mood State Inventory (MMSI)
(Boyle, 1992/2012).
Variable
The MMSI includes five separate self-report scales purported to measure Arousal–Alertness, Anger/Hostility,
Neuroticism, Extraversion, and Curiosity.
Description
Boyle (2009) constructed the 75-item MMSI which comprises five separate 15-item scales derived from several
higher-order factor analyses of the intercorrelations between various emotional/mood state scales such as the
Profile of Mood States (POMS) , the Differential Emotions Scale (DES-IV) , and the Eight State Questionnaire (8SQ) . The
MMSI has instructions to 'Please circle the appropriate response according to how you feel right now at this very moment', scored on a 4-point intensity scale: 1 (Not at All); 2 (A Little); 3 (Moderately So); 4 (Very Much So). When used with these instructions, it is a measure of transitory emotional states, rather than longer-lasting mood states. When used with instructions to 'Please circle the appropriate response according to how you have been feeling over the past week', scored on a 4-point frequency scale: 1 (Almost Never); 2 (Sometimes); 3 (Often); 4 (Almost Always), it is a measure of persisting mood states. A variety of timeframes could be tapped
by varying the instructions (as per the PANAS-X).
Sample
The original samples comprised University of Queensland undergraduates, as well as 111 Bond University undergraduate students ranging in age from 18 to 49 years (M = 23.25 years, SD = 6.73), with 58 (52%) females and 53 (48%) males.
Reliability
Internal Consistency
Cronbach alpha coefficients for the MMSI scales, based on separate samples (N = 63 and N = 111) of Bond University undergraduates, ranged as follows: Arousal/Alertness (.66 to .83; Mdn = .76), Hostility (.91 to .94; Mdn = .92), Neuroticism (.78 to .93; Mdn = .87), Extraversion (.81 to .87; Mdn = .83), and Curiosity (.68 to .81; Mdn = .78).
Test–Retest
Based on a sample of Bond University students (N = 12), dependability (immediate retest) coefficients were reported as follows: Arousal/Alertness (.94), Hostility (.96), Neuroticism (.99), Extraversion (.99), and Curiosity (.95), indicating that the MMSI is a reliable measure. The corresponding stability coefficients (30-minute retest) were as follows: Arousal/Alertness (.55), Hostility (.77), Neuroticism (.85), Extraversion (.96), and Curiosity (.92); while stability coefficients (one-week retest) were: Arousal/Alertness (.37), Hostility (.66), Neuroticism (.71), Extraversion (.93), and Curiosity (.89).
Validity
Convergent/Concurrent
In a sample of Bond University undergraduates (N = 111), positive correlations between the MMSI scales, PANAS Positive Affect (PA) and Negative Affect (NA), and Locus of Control (LOC) were observed as follows: Arousal/Alertness correlated with Curiosity (.39) and with PA (.50), while Hostility correlated positively with Neuroticism (.62) and with Extraversion (.34).
Divergent/Discriminant
In a sample of Bond University undergraduates (N = 111), negative correlations between the MMSI scales, PANAS Positive Affect (PA) and Negative Affect (NA), and Locus of Control (LOC) were observed as follows: Arousal/Alertness correlated with NA (−.25), with LOC (−.20), and with Curiosity (.37), while Hostility correlated with NA (−.56).
Construct/Factor Analytic
The MMSI scales were derived from several exploratory factor analytic studies of the subscale intercorrelations of the POMS, DES-IV, and 8SQ instruments combined (e.g., see Boyle, 1986a, 1987a,b,e,f, 1988a, 1989b, 1991b). All factor analyses employed optimal (e.g., iterative maximum-likelihood) factor extraction procedures with squared multiple correlations (SMCs) as initial communality estimates, factor number assessed via Cattell's scree test (Cattell, 1978; Cattell & Vogelmann, 1977), followed by oblique (direct oblimin) rotation in accord with Thurstone's five simple structure criteria (see Child, 2006, pp. 77–78). The five affect dimensions labeled Arousal–Alertness, Anger/Hostility, Neuroticism, Extraversion, and Curiosity emerged repeatedly as higher-order factors in several of the studies, supporting their construct validity.
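The extraction-and-rotation sequence described above can be approximated with open-source tools. The sketch below is illustrative only, not the original analyses; it assumes a pandas DataFrame of POMS/DES-IV/8SQ subscale scores and the third-party factor_analyzer package.

```python
# Illustrative sketch (not the original analyses): maximum-likelihood factor
# extraction with oblique (direct oblimin) rotation, with the number of factors
# guided by inspection of the eigenvalues (scree).  Assumes a pandas DataFrame
# `subscales` of subscale scores and the third-party `factor_analyzer` package.
import pandas as pd
from factor_analyzer import FactorAnalyzer

def higher_order_efa(subscales: pd.DataFrame, n_factors: int = 5):
    fa = FactorAnalyzer(n_factors=n_factors, method="ml", rotation="oblimin")
    fa.fit(subscales)
    eigenvalues, _ = fa.get_eigenvalues()   # inspect these for the scree "elbow"
    loadings = pd.DataFrame(fa.loadings_, index=subscales.columns)
    return eigenvalues, loadings
```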
Location
Boyle, G.J. (1992). Multidimensional Mood State Inventory (MMSI) . Department of Psychology, University of
Queensland, St. Lucia, Queensland, Australia. (Revised 2012).
Results and Comments
The MMSI provides a useful measure of five important emotional/mood states based on higher-order factor
analyses of pre-existing instruments such as the POMS, 8SQ, DES-IV, etc. The five scales can be administered separately or conjointly, depending on the user's requirements. There is no reason why the MMSI needs to be constrained to measuring only emotional states or mood states, and the provision of appropriate timeframe instructions would allow measurement of affects ranging all the way from transitory/momentary states through longer-lasting mood states, to enduring dispositions/trait dimensions.
MULTIDIMENSIONAL MOOD STATE INVENTORY
Arousal–Alertness
Please circle the appropriate response according to how you feel right now at this very moment .
1 = Not at All; 2 = A Little; 3 = Moderately So; 4 = Very Much So.
Do you feel:
1.You would react quickly to traffic light changes? 1 2 3 4
2.Aware of people’s mood, whether they are happy or irritable? 1 2 3 4
3.As if you can adapt quickly? 1 2 3 4
4.Aware of spelling errors? 1 2 3 4
5.You can recall the position of the furniture in your home? 1 2 3 4
6.You can remember what you ate for breakfast yesterday? 1 2 3 4
7.Like reading quickly? 1 2 3 4
8.Sensitive to smells? 1 2 3 4
9.As if you could react quickly to change? 1 2 3 4
10.Conscious of ordinary noises around you? 1 2 3 4
11.New concepts would be easy to understand? 1 2 3 4
12.You would notice if a friend had a haircut? 1 2 3 4
13.You can remember what the weather is like outside? 1 2 3 4
14.Like concentrating on a difficult task? 1 2 3 4
15.Wide awake? 1 2 3 4
MULTIDIMENSIONAL MOOD STATE INVENTORY
Anger/Hostility
Please circle the appropriate response according to how you feel right now at this very moment .
1 = Not at All; 2 = A Little; 3 = Moderately So; 4 = Very Much So.
Do you feel:
1.Disagreeable? 1 2 3 4
2.Like a spitting cat or a snarling dog? 1 2 3 4
3.You would push in a queue? 1 2 3 4
4.Like fantasizing about attacking people? 1 2 3 4
5.You want to argue with others? 1 2 3 4
6.Like hitting someone? 1 2 3 4
7.You would like to yell and scream? 1 2 3 4
8.Defensive? 1 2 3 4
9.Others are ‘prying’ into your affairs? 1 2 3 4
10.Irritated by people? 1 2 3 4
11.Like disagreeing with someone and saying so? 1 2 3 4
12.You want people to leave you alone? 1 2 3 4
13.Easily frustrated by other people? 1 2 3 4
14.Tense? 1 2 3 4
15.You would abuse someone who bumped into you? 1 2 3 4
MULTIDIMENSIONAL MOOD STATE INVENTORY
Neuroticism
Please circle the appropriate response according to how you feel right now at this very moment .
1 = Not at All; 2 = A Little; 3 = Moderately So; 4 = Very Much So.
Do you feel:
1.Tired for no apparent reason? 1 2 3 4
2.Life seems full of insurmountable obstacles? 1 2 3 4
3.That others are laughing at you? 1 2 3 4
4.Stressed for no apparent reason? 1 2 3 4
5.Under pressure? 1 2 3 4
6.A need to ‘prove’ yourself? 1 2 3 4
7.‘Life has left you behind’? 1 2 3 4
8.Like hiding yourself away? 1 2 3 4
9.Afraid of failure? 1 2 3 4
10.You are influenced by others’ criticisms? 1 2 3 4
11.Embarrassed? 1 2 3 4
12.Shy? 1 2 3 4
13.Others are leading more interesting lives than you? 1 2 3 4
14.That you give way to people easily? 1 2 3 4
15.People mean something other than what they say? 1 2 3 4
MULTIDIMENSIONAL MOOD STATE INVENTORY
Extraversion
Please circle the appropriate response according to how you feel right now at this very moment .
1 = Not at All; 2 = A Little; 3 = Moderately So; 4 = Very Much So.
Do you feel:
1.You want to be the centre of attention? 1 2 3 4
2.Like dressing to be noticed? 1 2 3 4
3.You would like to be a movie star? 1 2 3 4
4.You would want to speak to a friend who had apparently not noticed you? 1 2 3 4
5.You would like to be a prominent figure in a public parade? 1 2 3 4
6.Like being one of the first to wear a new fashion? 1 2 3 4
7.You’d like to be immortalized in a public sculpture or painting? 1 2 3 4
8.Like being on the front page of a national newspaper? 1 2 3 4
9.You would enjoy having a surprise party arranged for you? 1 2 3 4
10.Like trying out new advertised products? 1 2 3 4
11.Curious as to what you would look like with a different haircut? 1 2 3 4
12.Like you care about your personal appearance and grooming? 1 2 3 4
13.You would enjoy singing on the radio? 1 2 3 4
14.Like going to a party? 1 2 3 4
15.You would accept to give a speech in public? 1 2 3 4
MULTIDIMENSIONAL MOOD STATE INVENTORY
Curiosity
Please circle the appropriate response according to how you feel right now at this very moment .
1 = Not at All; 2 = A Little; 3 = Moderately So; 4 = Very Much So.
Do you feel:
1.Like asking questions? 1 2 3 4
2.Curious about new developments in science and arts? 1 2 3 4
3.Like improving your general knowledge? 1 2 3 4
4.Like investigating strange noises? 1 2 3 4
5.You want to experience new things? 1 2 3 4
6.Like reading newspapers? 1 2 3 4
7.You would enjoy learning new skills? 1 2 3 4
8.Like experimenting with ways to get to places? 1 2 3 4
9.Interested in current affairs? 1 2 3 4
10.Like imagining yourself as an investigative reporter? 1 2 3 4
11.Like doing quizzes, crosswords and puzzles? 1 2 3 4
12.You are learning things from life? 1 2 3 4
13.You would take advantage of opportunities for change? 1 2 3 4
14.Like making life challenging? 1 2 3 4
15.You want to concentrate on many things at once? 1 2 3 4
Notes: State instructions shown (instructions also available for longer-lasting mood states and for enduring trait dimensions; a variety of timeframes can be tapped by varying the instructions, as required).
Permission to use the MMSI can be obtained from the author Gregory J. Boyle at the following email address:
[email protected]
Reproduced with permission.
Activation–Deactivation Adjective Check List (AD-ACL)
(Thayer, 1986, 1989 ).
Variable
General arousal theories (cf. Pfaff, 2006) concern transitory arousal states such as energetic arousal (general activation), tense arousal (high activation), calmness (general deactivation), and tiredness (deactivation–sleep) (Thayer, 1989; Thayer, Takahashi, & Pauli, 1988).
Description
The AD-ACL is a multidimensional self-report adjective checklist that is purported to measure transitory arousal states labeled: Energy, Tiredness, Tension, and Calmness. Initial validation studies (Thayer, 1978) were based on a
checklist comprised of 22 activation-related adjectives, and 28 ‘filler’ adjectives related to mood, but not to activation.
The current ‘Short Form’ AD-ACL ( Thayer, 1989 ) contains 20 activation adjectives only. Using a 4-point response
scale, respondents are instructed to ‘ describe your feelings at this moment’ . These ratings assess immediate feelings of
activation and deactivation. As indicated on Thayer’s website (see below), ‘subscale adjectives are as follows:
Energy (active, energetic, vigorous, lively, full-of-pep); Tired (sleepy, tired, drowsy, wide-awake, wakeful); Tension
(jittery, intense, fearful, clutched-up, tense); Calmness (placid, calm, at-rest, still, quiet). Scoring for ‘wakeful’ and
‘wide-awake’ must be reversed for the Tiredness subscale. Tiredness and Calmness scores must be reversed
(but not wakeful and wide-awake in this case) before summing the ten scores.’ The AD-ACL can also be scored for
bipolar dimensions of Energetic Arousal (Energy vs. Tiredness) and Tense Arousal (Tension vs. Calmness).
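A minimal sketch of one possible implementation of the scoring rules quoted above (items scored 1–4, with 'wide-awake' and 'wakeful' reverse-keyed within Tiredness, and Tiredness/Calmness items reversed when forming the bipolar composites). The dictionaries and function name are illustrative, not the official scoring materials.

```python
# Minimal sketch of AD-ACL scoring under the rules quoted above.
# The item lists follow the subscale adjectives quoted from Thayer's website;
# names are illustrative, not official scoring syntax.

SUBSCALES = {
    "Energy":    ["active", "energetic", "vigorous", "lively", "full-of-pep"],
    "Tiredness": ["sleepy", "tired", "drowsy", "wide-awake", "wakeful"],
    "Tension":   ["jittery", "intense", "fearful", "clutched-up", "tense"],
    "Calmness":  ["placid", "calm", "at-rest", "still", "quiet"],
}
REVERSED_IN_TIREDNESS = {"wide-awake", "wakeful"}

def score_adacl(responses: dict) -> dict:
    """responses maps adjective -> rating on the 1-4 scale (no = 1 ... vv = 4)."""
    scores = {}
    for scale, items in SUBSCALES.items():
        total = 0
        for item in items:
            x = responses[item]
            if scale == "Tiredness" and item in REVERSED_IN_TIREDNESS:
                x = 5 - x                        # reverse-key within Tiredness
            total += x
        scores[scale] = total
    # Bipolar composites: reverse the Tiredness and Calmness item scores
    # (keeping 'wide-awake' and 'wakeful' unreversed, per the quoted rules).
    tired_rev = sum(
        responses[i] if i in REVERSED_IN_TIREDNESS else 5 - responses[i]
        for i in SUBSCALES["Tiredness"]
    )
    calm_rev = sum(5 - responses[i] for i in SUBSCALES["Calmness"])
    scores["Energetic Arousal"] = scores["Energy"] + tired_rev
    scores["Tense Arousal"] = scores["Tension"] + calm_rev
    return scores
```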
Sample
The sample used for the initial factor analysis of the AD-ACL (Thayer, 1967) was comprised of 211 male and
female students at the University of Rochester.
Reliability
Internal Consistency
Thayer (1978) estimated internal consistency in a student sample (N = 486) by finding the average single-item communality for each of the four activation dimensions represented in a factor analysis, and then applying the Spearman–Brown prophecy formula to estimate consistency within the full scale. He reported coefficients as follows: Energy = .92; Tension = .89; Calmness = .89; Tiredness = .90. Bartholomew and Miller (2002) reported Cronbach alpha coefficients ranging from .96 to .97 for Energy, .72 to .85 for Tension, .79 to .86 for Calmness, and .88 to .91 for Tiredness, respectively.
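The Spearman–Brown step of the procedure described above is a one-line formula; the sketch below illustrates that step only, using an arbitrary example value rather than Thayer's figures.

```python
# Spearman-Brown prophecy formula, as used in the procedure described above:
# the reliability of a k-item scale projected from an average single-item value
# (here, the average single-item communality).
def spearman_brown(r_single: float, k: int) -> float:
    return (k * r_single) / (1 + (k - 1) * r_single)

# e.g., an illustrative average single-item value of .70 stepped up to a 5-item subscale
print(round(spearman_brown(0.70, 5), 2))   # 0.92
```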
Test–Retest
The immediate test–retest (dependability) coefficients reported by Thayer for a sample of students who completed the Short Form AD-ACL were as follows: Energy (.89); Tension (.93); Calmness (.79); Tiredness (.89) (cf. Thayer, 1989). Clements, Hafer, and Vermillion (1976) reported a non-significant one-week test–retest correlation (.07) for an activation dimension derived through factor analysis of the AD-ACL.
Validity
Convergent/Concurrent
Validation studies ( Thayer, 1989, 1996 ) have used a variety of additional samples. However, there is little evi-
dence available on how the AD-ACL correlates with other adjectival measures of mood, and there is often over-
lap in the adjectives used in different checklist measures. A methodological issue cited by Thayer (1978) is that
associations between activation dimensions may vary with level of activation; for example, energy and tension
may be negatively correlated at high activation levels. Furthermore, there appear to be individual differences in
the intercorrelations of the different AD-ACL dimensions ( Rafaeli, Rogers, & Revelle 2007 ).
Divergent/Discriminant
The AD-ACL has been more commonly used in experimental studies of arousing agents than in individual dif-
ferences research, and so there is rather little divergent evidence available. One issue is the distinctiveness of AD-
ACL scores from personality, given that even matched state and trait measures should be only moderately corre-
lated (Zuckerman, 1992). Thayer et al. (1987) reported associations between the AD-ACL scales and the extraversion, neuroticism, and morningness–eveningness traits.
Construct/Factor Analytic
Factor analyses, applying oblique rotation, were conducted by Thayer (1978) in two university student samples (N = 486; N = 515). Two bipolar activation factors were extracted, labeled Energetic Arousal (General Activation vs. Deactivation–Sleep) and Tense Arousal (High Activation vs. General Deactivation). Thayer (1989) conceptualized Energetic Arousal and Tense Arousal as biocognitive systems associated with, respectively, vigorous motor activity and preparedness for threat. The AD-ACL is one of the most widely used measures of subjective arousal in studies of exercise; consistent with Thayer's (1989) theory, moderate exercise tends to elevate Energetic Arousal (Reed & Ones, 2006). The AD-ACL has also been used in studies of pharmacological arousing agents, circadian rhythms, nutrition, sleep loss, and other stress factors (Maridakis, O'Connor, & Tomporowski, 2009; Oginska et al., 2010), and also in studies that demonstrate the role of activation in human performance (Dickman, 2002).
Nemanick and Munz (1994) administered both the AD-ACL and the PANAS (Watson et al., 1988) to university students. They carried out a principal components analysis, with oblique rotation, and extracted two components, suggesting convergence across instruments. One dimension was defined by positive affect, energy, and low tiredness, the other by negative affect, tension, and low calmness. The two-dimensional structure may also be represented as a circumplex (Huelsman, Furr, & Nemanick, 2003). Huelsman, Nemanick, and Munz (1998) sampled adjectives from the AD-ACL, the PANAS, and other mood scales and found evidence for a four-factor solution.
A confirmatory factor analysis ( Gregg & Shepherd, 2009 ) sampled 20 mood descriptive adjectives in 309 British
respondents. Age range was 17 to 65 years with a mean of 29.0 years and 67% of the sample was female. Fit was
maximized for a four-factor model, with factors of positive energy, tiredness, negative arousal, and relaxation,
corresponding to the four-dimensional structure of the AD-ACL.
Criterion/Predictive
Thayer (1978) cited several studies suggesting individuals scoring high on Energetic Arousal perform better on
cognitive performance tasks requiring memory and attention. Dickman (2002) reported that wakefulness was
associated with greater accuracy on a reading comprehension task, whereas vigor showed a curvilinear relation-
ship with accuracy. The AD-ACL has also been used in research on circadian rhythms. Both Energetic Arousal
and Tense Arousal vary systematically across the course of the day, with peaks around midday ( Thayer, 1978 ).
Košćec and Radošević-Vidaček (2004) investigated intra-individual variation in arousal across a 26-hour period. Energetic Arousal correlated with temperature (.45), and Tense Arousal correlated with faster response latency (−.17), on a vigilance task.
Location
Thayer, R.E. (1986). Activation-Deactivation Adjective Check List (AD ACL): Current overview and structural analysis. Psychological Reports, 58, 607–614.
Thayer, R.E. (1989). The biopsychology of mood and arousal . New York: Oxford University Press.
Results and Comments
Although the AD-ACL is based on a factorial model, the selection of adjectives for each scale has been
debated. There is also some uncertainty as to whether two bipolar factors should be preferred over four unipolar
factors ( Dickman, 2002; Gregg & Shepherd, 2009 ). There are also numerous experimental studies demonstrating
the sensitivity of the AD-ACL to various arousing and de-arousing agents as well as to biological circadian
rhythms. The AD-ACL has also been shown to correlate with psychophysiological arousal indices. However, its
nomological network in relation to other arousal and arousability constructs has not been fully explored. While
Thayer’s (1989) theory proposes variation in the relationship between Energetic Arousal and Tense Arousal,
according to context, the psychometric implications of this variation have yet to be determined. The AD-ACL is
also limited in its focus on arousal states, excluding other dimensions of mood.
ACTIVATION–DEACTIVATION ADJECTIVE CHECK LIST (SHORT FORM)
Each of the words on the back describes feelings or mood. Please use the rating scale next to each word to
describe your feelings at this moment.
Examples
Work rapidly, but please mark all the words. Your first reaction is best. This should take only a minute or two.
relaxed vv v ? no If you circle the double check ( vv) it means that you definitely feel relaxed at
the moment .
relaxed vv v ? no If you circle the single check ( v) it means that you feel slightly relaxed at
the moment .
relaxed vv v ? no If you circled the question mark ( ?) it means that the word does not apply
or you cannot decide if you feel relaxed at the moment .
relaxed vv v ? no If you circled the ( no) it means that you are definitely not relaxed at the moment .
(Back page)
Work rapidly, but please mark all the words. Your first reaction is best. This should take only a minute or two.
active vv v ? no
drowsy vv v ? no
placid vv v ? no
fearful vv v ? no
sleepy vv v ? no
lively vv v ? no
jittery vv v ? no
still vv v ? no
energetic vv v ? no
wide-awake vv v ? no
intense vv v ? no
clutched-up vv v ? no
calm vv v ? no
quiet vv v ? no
tired vv v ? no
full-of-pep vv v ? no
vigorous vv v ? no
tense vv v ? no
at-rest vv v ? no
wakeful vv v ? no
Notes: Each item is responded to using the following 4-point scale:
vv = 'definitely feel'; v = 'feel slightly'; ? = 'cannot decide'; no = 'definitely do not feel'.
The AD ACL is scored by assigning 4, 3, 2, and 1, respectively, to the 'vv', 'v', '?', and 'no' scale points, and summing or averaging the five scores for each subscale.
The AD ACL Short Form is reproduced in Appendix A of Thayer (1989) and online at: www.csulb.edu/~thayer/thayer/adaclnew.htm (Retrieved January 5, 2014).
Copyright rRobert E. Thayer.
Reproduced with permission.
UWIST Mood Adjective Checklist (UMACL)
(Matthews, Jones, & Chamberlain, 1990a).
Variable
The UMACL is an adjective checklist that assesses mood, building on three-dimensional, bipolar factor structures for mood (Schimmack & Grob, 2000; Sjöberg, Svensson, & Persson, 1979). It measures three correlated, bipolar dimensions of Energetic Arousal, Tense Arousal, and Hedonic Tone. It also includes a supplementary, monopolar dimension of Anger-Frustration. The Energetic Arousal and Tense Arousal scales are modifications of the corresponding Thayer (1989) AD-ACL scales. Hedonic Tone refers to the overall pleasantness of mood.
Description
The UMACL comprises 29 adjectives: eight for each of the three principal scales, and five for Anger-Frustration. Instructions are: 'Please indicate how well each word describes how you feel at the moment', scored on a 4-point response scale, indicating that the UMACL is a measure of transitory emotional states.
Sample
The sample for the initial study of the UMACL (Matthews et al., 1990a) was made up of 230 Welsh students and 158 members of the general public taking a keyboard training course. The sample included 210 females and 178 males.
Reliability
Internal Consistency
Cronbach alpha coefficients for the three principal scales ranged from .86 to .88 (Matthews et al., 1990a; N = 388). A further study (Matthews et al., 2002) reported a similar range of alpha coefficients, from .82 to .88 (N = 788).
Test–Retest
In several subsamples of the Matthews et al. (2002) sample, participants completed the UMACL before and after performance on one of several cognitive tasks, typically of 10–15 min in duration. Correlations between pre-test and post-test scores ranged from .43 to .47 (N = 517). One subsample (N = 112) performed a working memory task on occasions separated by three weeks. The test–retest correlations for post-task mood ranged from .14 to .48. An occupational subsample (N = 86) performed a work simulation on two occasions approximately six months apart, whereby test–retest correlations ranged from .17 to .39, as expected for a situationally-sensitive mood-state measure (Matthews et al., 2002).
Validity
Convergent/Concurrent
Matthews et al. (1990a) reported that the UMACL Tense Arousal scale correlated positively with the 8SQ Anxiety scale (.52). Also, the correlation between Eysenck's (EPI) Neuroticism scale and the UMACL Tense Arousal scale was found to be .38 (Matthews et al., 1990a). Matthews et al. (1990a,b) also reported that the Energetic Arousal scale correlated significantly with tonic skin conductance level (.32), and the Tense Arousal scale correlated negatively (−.38) with cardiac inter-beat interval, consistent with Thayer's evidence that self-report arousal converges to some extent with autonomic arousal. Matthews et al. (1999) assessed the 'Big Five' using Goldberg's (1992) adjectival measure (N = 229). Energetic Arousal correlated with Conscientiousness (.20); Tense Arousal correlated with Neuroticism (.20); Hedonic Tone correlated with Agreeableness (.14).
Divergent/Discriminant
Matthews et al. (1990a) reported that the UMACL Energetic Arousal scale correlated negatively with the 8SQ Fatigue scale (r = −.81), and that the Hedonic Tone scale correlated negatively with the 8SQ Guilt scale (−.78). Matthews and Gilliland (1999) reported correlations between the UMACL and the Eysenck personality dimensions that establish divergence from these traits. Data were reported from two samples (N = 158; N = 762). Neuroticism correlated more highly than extraversion with each of the three UMACL scales. In the larger of the two samples, correlations between Neuroticism and mood were −.13 for Energetic Arousal and −.28 for Hedonic Tone. Matthews et al. (1990a) also showed that the scales were only weakly related to various demographic factors and to a social desirability measure.
Construct/Factor Analytic
An exploratory principal factor analysis with oblique (direct oblimin) rotation, was reported by Matthews
et al. (1990a) , using a sample of 388 participants in studies of human performance. Revelle and Rocklin’s (1979)
Very Simple Structure (VSS) procedure indicated three factors should be extracted. These factors corresponded to
the hypothesized dimensions of Energetic Arousal, Tense Arousal and Hedonic Tone, respectively.
Criterion/Predictive
Matthews et al. (1990a) demonstrated that the UMACL Hedonic Tone scale is more sensitive to monetary
reward than either of the arousal scales. The UMACL scales, especially Energetic Arousal, also predict objective
measures of attention in performance studies ( Matthews, Davies, & Lees 1990b ). The UMACL is sensitive to a
range of experimental manipulations of stress and has been used to assess state responses in studies of cardio-
vascular effort-regulation (de Burgo & Gendolla, 2009 ), dietary supplements (Brown et al., 2009), glucose regula-
tion in diabetes (Hermanns et al., 2007), and circadian rhythms ( Martin & Marrington, 2005 ). The UMACL
has also been used in field studies of stressors such as driver stress ( Matthews, 2002 ) and test anxiety
(Matthews et al., 1999 ).
Location
Matthews, G., Jones, D.M., & Chamberlain, A.G. (1990). Refining the measurement of mood: The UWIST Mood Adjective Checklist. British Journal of Psychology, 81, 17–42.
Results and Comments
The UMACL is an elaboration of Thayer’s (1989) AD-ACL and shares the strengths of that instrument in asses-
sing subjective arousal in a variety of experimental and field settings. The inclusion of a Hedonic Tone scale pro-
vides more comprehensive coverage of mood. The supplementary Anger-Frustration scale is not well
distinguished from low Hedonic Tone psychometrically ( Matthews et al., 1990a ), but it may be useful in studying
certain issues, such as driver aggression ( Matthews, 2002 ).
UWIST MOOD ADJECTIVE CHECKLIST
Instructions: This questionnaire is concerned with your current feelings. Please answer every question, even if
you find it difficult. Answer, as honestly as you can, what is true of you. Please do not choose a reply just because it
seems like the ‘right thing to say’. Your answers will be kept entirely confidential. Also, be sure to answer according
to how you feel AT THE MOMENT . Don’t just put down how you usually feel. You should try and work quite
quickly: there is no need to think very hard about the answers. The first answer you think of is usually the best.
Here is a list of words which describe people’s moods or feelings. Please indicate how well each word describes
how you feel AT THE MOMENT . For each word, circle the answer from 1 to 4 which best describes your mood.
Definitely Slightly Slightly not Definitely not
1. Happy 1 2 3 4
2. Dissatisfied 1 2 3 4
3. Energetic 1 2 3 4
4. Relaxed 1 2 3 4
5. Alert 1 2 3 4
6. Nervous 1 2 3 4
7. Passive 1 2 3 4
8. Cheerful 1 2 3 4
9. Tense 1 2 3 4
10. Jittery 1 2 3 4
11. Sluggish 1 2 3 4
12. Sorry 1 2 3 4
13. Composed 1 2 3 4
14. Depressed 1 2 3 4
15. Restful 1 2 3 4
16. Vigorous 1 2 3 4
17. Anxious 1 2 3 4
18. Satisfied 1 2 3 4
19. Unenterprising 1 2 3 4
20. Sad 1 2 3 4
21. Calm 1 2 3 4
22. Active 1 2 3 4
23. Contented 1 2 3 4
24. Tired 1 2 3 4
25. Impatient 1 2 3 4
26. Annoyed 1 2 3 4
27. Angry 1 2 3 4
28. Irritated 1 2 3 4
29. Grouchy 1 2 3 4
Notes :
Copyright © Gerald Matthews.
The UMACL is available from Gerald Matthews at the Institute of Simulation and Training, University of Central
Florida, 3100 Technology Parkway, Orlando, Florida, 32826, USA.
Reproduced with permission.
Dundee Stress State Questionnaire (DSSQ)
(Matthews et al., 2002 ).
Variable
The DSSQ aims to assess affective, motivational and cognitive aspects of the states experienced in perfor-
mance settings. It includes the three principal mood scales of the UMACL (see above), two motivational scales
(Intrinsic Interest and Success Striving), and six cognitive scales. These are Self-Focus, Self-Esteem, Concentration, Confidence–Control, Task-Related Cognitive Interference, and Task-Irrelevant Cognitive Interference. These last
two scales are shortened versions of the Cognitive Interference Questionnaire ( Sarason, Sarason, Keefe, Hayes, &
Shearin 1986 ). The DSSQ may also be scored for three second-order factors of task engagement, distress and worry.
A short version of the scale measuring only these three factors is also available ( Matthews & Zeidner, 2012 ).
Description
The DSSQ is made up of four sections. Section I is the UMACL, described above. Section 2 includes 15 motiva-
tion items. Section 3 has 30 items concerning the respondent’s ‘style of thought’. Section 4 has 16 items that ask
respondents to rate how frequently they experienced various thoughts about the task and their personal con-
cerns. Sections 2 and 3 use a 5-point Likert-type response scale (0–4), whereas Section 4 uses a 5-point Likert-type response scale (1–5) (see sample items below).
Sample
The initial sample used for scale development ( Matthews et al., 1999 ) comprised 616 undergraduates from the UK,
170 undergraduates from the USA, and 151 British customer service agents. There were 583 females and 354 males.
Reliability
Internal Consistency
Cronbach alpha coefficients for the three principal scales ranged from .76 to .89 in the Matthews et al. (1999, 2002) studies, calculated in the British participants (N = 767).
Test–Retest
Matthews et al. (2002) reported test–retest stabilities for the DSSQ primary scales for two time intervals. Across the 10–15 min required to perform one of several tasks, stabilities varied from .37 to .66 (N = 517). Across approximately six months, stabilities ranged from .00 to .46 in an occupational sample (N = 86).
Validity
Convergent/Concurrent
Matthews and Campbell (2009) administered both the Spielberger STPI and the DSSQ (N = 144). Task Engagement correlated most highly with STPI state Curiosity (.40), Distress with Anxiety (.62) and Depression (.53), and Worry with Anxiety (.45). Matthews, Szalma, Panganiban, Neubauer, and Warm (2013) reported correlations with the PANAS (N = 96). Task Engagement correlated at .47 with Positive Affect, Distress at .51 with Negative Affect, and Worry at .42 with Negative Affect. Thus, the DSSQ factors correlate appropriately with measures of state affect, but the moderate magnitudes of the correlations suggest the factors are distinct from these affective scales. In psychophysiological studies, the Engagement and Distress factors have also been found to be modestly correlated with measures of autonomic arousal, electroencephalographic response, and cerebral blood flow velocity (Fairclough & Venables, 2006; Matthews et al., 2013). Bivariate correlations are typically in the 0.2 to 0.4 range (Matthews et al., 2010).
Divergent/Discriminant
Matthews et al. (2002) found that the Eysenck extraversion and neuroticism dimensions were weakly correlated with post-task state in an occupational sample (N = 328). Matthews et al. (2013) summarized FFM data from four studies (total N = 933). Again, trait–state correlations suggested divergence, with no correlations in post-task data exceeding 0.4. Traits were somewhat more strongly associated with pre-task state, especially for associations between Neuroticism and Distress. Studies have investigated various other more narrowly defined traits, such as those linked to cognitive dysfunction and fatigue (Shaw et al., 2010) and to mood-regulation (Matthews & Fellner, 2012). All these studies suggest divergence of trait and state, although some meaningful associations have been found (Matthews et al., 2013).
Construct/Factor Analytic
A principal components analysis with oblique (direct oblimin) rotation was reported by Matthews et al.
(1999) . Some 767 participants completed the scale follo wing performance of one of several tasks. Some 517
of these participants also completed a pre-task versio n. Horn’s parallel analysis was used to determine the
number of factors extracted. This study identified 10 dimensions, with a single motivation dimension.
As u b s e q u e n ts t u d y( Matthews, Campbell & Falconer 2001 ) differentiated two motivation dimensions related
to intrinsic and achievement motivation. Matthews et al. (2002) conducted a second-order factor analysis,
using the Matthews et al. (1999) data, and the same factor analytic methodology and extracted three factors
labeled: Task Engagement, Distress, and Worry. Facto r structure was similar in pre- and post-task data, and
in an analysis of change scores. Everett’s (1983) factor-score method was used to show that factor solutions
were similar across different data sets. Task Engagement was defined primarily be Energetic Arousal,
Motivation and Concentration, Distress by Tense Arousal, low Hedonic Tone, and low Confidence /C0Control,
and Worry by the remaining cognitive scales. Matthews et al. (2002) suggested the DSSQ state factors might
be understood as relational constructs ( Lazarus, 1999 ) ,d e f i n i n gt h ep e r s o n ’ si m m e d i a t em o d eo fa d a p t a t i o nt o
task demands.
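Horn's parallel analysis, mentioned above, retains factors whose observed eigenvalues exceed those obtained from random data of the same dimensions. A minimal numpy sketch (illustrative; not the authors' original code), assuming a cases-by-items data array:

```python
# Minimal sketch of Horn's parallel analysis: retain factors whose observed
# correlation-matrix eigenvalues exceed the mean eigenvalues obtained from
# random data of the same shape.  Assumes `data` is a cases x items array.
import numpy as np

def parallel_analysis(data: np.ndarray, n_iter: int = 100, seed: int = 0) -> int:
    rng = np.random.default_rng(seed)
    n, p = data.shape
    observed = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    random_eigs = np.empty((n_iter, p))
    for i in range(n_iter):
        random = rng.standard_normal((n, p))
        random_eigs[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(random, rowvar=False)))[::-1]
    threshold = random_eigs.mean(axis=0)
    return int(np.sum(observed > threshold))   # number of factors to retain
```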
Criterion/Predictive
Two lines of evidence support the predictive validity of the DSSQ. First, the state variables correlate with the
appraisal and coping variables specified by Lazarus (1999) ; indeed, appraisal and coping explain a substantial
part of the variance in state change from pre- to post-task ( Matthews et al., 2013 ). Second, states correlate signifi-
cantly with objective performance measures. For example, Task Engagement is reliably associated with superior
vigilance and performance on other demanding attentional tasks ( Matthews et al., 2010; Matthews & Zeidner,
2012), whereas Distress is negatively associated with working memory ( Matthews & Campbell, 2010 ). The
Cognitive Interference scales of the DSSQ (components of Worry) have been used to investigate performance def-
icits associated with mind wandering ( Smallwood & Schooler, 2006 ).Fairclough and Venables (2006) found that a
battery of psychophysiological measures explained up to 53% of the variance in Task Engagement and up to 42%
in Distress.
Location
Matthews, G., Campbell, S.E., Falconer, S., et al. (2002). Fundamental dimensions of subjective state in performance settings: Task engagement, distress and worry. Emotion, 2, 315–340.
Results and Comments
The DSSQ is designed for use in performance environments. It has been shown to be appropriately sensitive
to a range of stress factors manipulated in experimental studies, including cognitive demands, evaluative and
environmental stressors, and the fatigue associated with prolonged work ( Matthews et al., 2013 ). It is of use in
understanding the dynamic interplay between the person and task demands, specifically in relation to task
demand and stressor effects on state, and the impact of state change on information-processing and performance.
The scale may also be used in applied contexts that involve cognitive challenge, including work performance
(Matthews et al., 2002 ), vehicle operation ( Neubauer, Matthews, Langheim, & Saxby 2012 ), and clinical psychol-
ogy ( Matthews et al., 1999 ).
DSSQ SAMPLE ITEMS
Section 1. Mood (see UMACL above)
Section 2. Motivation
Please answer some questions about your attitude to the task you have just done . Rate your agreement with the
following statements by circling one of the following answers:
Extremely = 4; Very much = 3; Somewhat = 2; A little bit = 1; Not at all = 0
1. The content of the task was interesting 0 1 2 3 4
2. The only reason to do the task is to get an external reward (e.g. payment) 0 1 2 3 4
Section 3. Thinking Style
Below are some statements which may describe your style of thought during task performance. Read each one
carefully and indicate how true each statement was of your thoughts WHILE PERFORMING THE TASK . To answer
circle one of the following answers:
Extremely = 4; Very much = 3; Somewhat = 2; A little bit = 1; Not at all = 0
1. I tried to figure myself out 0 1 2 3 4
2. I felt confident about my abilities 0 1 2 3 4
Section 4. Thinking Content
Below is a list of thoughts, some of which you might have had recently. Please indicate roughly how often you had
each thought during THE LAST TEN MINUTES (while performing the task), by circling a number from the list below.
1 = Never; 2 = Once; 3 = A few times; 4 = Often; 5 = Very often
1. I thought about how I should work more carefully 1 2 3 4 5
2. I thought about members of my family 1 2 3 4 5
Notes :
The DSSQ is available from Gerald Matthews at the Institute of Simulation and Training, University of Central
Florida, 3100 Technology Parkway, Orlando, Florida, 32826, USA. Contact Gerald Matthews at: [email protected]
Reproduced with permission.
FUTURE RESEARCH DIRECTIONS
In this chapter we have reviewed the psychometric properties of 10 important measures of affect dimensions.
Clearly, measurement timeframes can vary on a continuum, ranging all the way from brief situationally-sensitive
emotional states (that may change throughout the day), through longer-lasting moods (remaining somewhat
stable over a period of a week or even months), to motivational dynamic traits showing only relative stability, to
enduring dispositional traits (persisting over years, or even the lifespan). It is clear that adherence to discrete/dichotomous state–trait concepts provides an oversimplified approach to the measurement of affect dimensions. An 'immediate state' instruction is also restrictive because the respondent is asked to report on the current contents of short-term memory. With any other timeframe, the respondent is being asked to retrieve information from long-term memory.
Nevertheless, the state–trait distinction originally proposed by Cattell and Scheier (1963) has been incorporated into the construction of the MCI, STPI, and MAACL instruments reviewed in this chapter. Other measures provide additional instructional sets that allow measurement of affects across a wider range of time intervals (e.g., the DES-IV provides at least three separate measurement timeframes; the PANAS-X provides eight separate measurement timeframes; the MMSI allows for multiple timeframes). A similar approach could be adopted with other affect measures reviewed here, greatly enhancing their measurement utility. The PANAS-X provides the most comprehensive range of timeframe instructions to date, and this more inclusive approach is recommended for use with affect measures rather than focusing merely on a limited state–trait dichotomy.
Another limitation relates to the tendency to maximize 'Internal Consistency' of scales. As Boyle (1991a, p. 291) stated, 'The term 'Internal Consistency' ... is a misnomer, as a high estimate of internal item consistency/item
homogeneity may also suggest a high level of item redundancy, wherein essentially the same item is rephrased
in several different ways.’ Kline (1986) suggested that Cronbach alpha coefficients should fall within the 0.3 to 0.7
range. Below 0.3, there is too little commonality (Internal Consistency); above 0.7, there may be significant item
redundancy (where, for example, a particular item is effectively repeated by being rephrased in different ways),
resulting in a narrow breadth of measurement of the factor/construct.
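For reference, Cronbach's alpha for a k-item scale is k/(k − 1) × (1 − Σ item variances / variance of the total score). A minimal sketch, assuming an array with respondents in rows and items in columns:

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / var(total score)).
# Illustrative sketch; assumes `items` is a cases x items numpy array.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)
```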
Furthermore, reverse-worded items often load on a distinct factor, suggesting that they measure a rather different construct (Boyle, 1977, 1989a). For this reason, the common practice of including reverse-keyed items in rating and self-report scales is potentially problematic. Some scales/measures have been constructed so as to
deliberately avoid the inclusion of reverse-worded items (e.g., the MCI). Ignoring the empirical factor analytic
evidence, many more recently constructed scales/measures have nonetheless included reverse-worded items
with the apparent aim of reducing response sets.
In addition, many rating scales and self-report measures have relied on less than optimal exploratory factor
analytic methodology in their construction. While many such EFAs have been based on item intercorrelations, it
is important to point out that item responses are notoriously unreliable. For this reason, both Cattell and Comrey, for example, recommended using the intercorrelations of item parcels as the starting point for reliable factor analysis (Cattell, 1978; Comrey & Lee, 1992).
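A minimal sketch of the item-parceling approach recommended above: small sets of items are averaged into parcels, and the parcel intercorrelations, rather than the raw item intercorrelations, become the input to factoring. The parcel assignments here are arbitrary and purely illustrative:

```python
# Illustrative item-parceling sketch: average consecutive items into parcels
# and compute the parcel intercorrelation matrix as the input to factoring.
# Parcel assignments are arbitrary; assumes `items` is a cases x items array.
import numpy as np

def parcel_correlations(items: np.ndarray, items_per_parcel: int = 3) -> np.ndarray:
    n, p = items.shape
    n_parcels = p // items_per_parcel
    parcels = np.column_stack([
        items[:, i * items_per_parcel:(i + 1) * items_per_parcel].mean(axis=1)
        for i in range(n_parcels)
    ])
    return np.corrcoef(parcels, rowvar=False)
```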
Finally, theoretical understanding of measures of affect remains limited. Based on self-reports of conscious
states, affective dimensions are difficult to conceptualize within causal models of behavior. Difficult questions
remain about the inter-relationships of conscious feeling states, cognitive processes and neural activity (e.g., see
Izard, 2009 ). A major task for future research is to map affective experience onto psychological and neural pro-
cesses with greater precision than hitherto has been accomplished. To this end, use of reliable and valid multidi-
mensional measures of affects across a wide range of timeframes will be required.
References
Aganoff, J. A., & Boyle, G. J. (1994). Aerobic exercise, mood states and menstrual cycle symptoms. Journal of Psychosomatic Research, 38, 183–192.
Akande, D. W. (2002). A data-based analysis of the psychometric performance of the Differential Emotions Scale. Educational Studies, 28, 123–131.
Baker, F., Denniston, M., Zabora, J., Polland, A., & Dudley, W. N. (2002). A POMS short form for cancer patients: Psychometric and structural evaluation. Psycho-Oncology, 11, 273–281.
Barker-Collo, S. L. (2003). Culture and validity of the Symptom Checklist-90-Revised and Profile of Mood States in a New Zealand student sample. Cultural Diversity and Ethnic Minority Psychology, 9, 185–196.
Beckers, J. J., Wicherts, J. M., & Schmidt, H. G. (2007). Computer anxiety: 'Trait' or 'State'? Computers in Human Behavior, 23, 2851–2862.
Bell, G. J., & Howe, B. L. (1988). Mood state profiles and motivations of triathletes. Journal of Sport Behavior, 11, 66–77.
Blumberg, S. H., & Izard, C. E. (1985). Affective and cognitive characteristics of depression in 10- and 11-year-old children. Journal of Personality and Social Psychology, 49, 194–202.
Blumberg, S. H., & Izard, C. E. (1986). Discriminating patterns of emotions in 10- and 11-year-old children. Journal of Personality and Social Psychology, 51, 852–857.
Bourgeois, A., LeUnes, A., & Meyers, M. (2010). Full-scale and short-form of the Profile of Mood States: A factor analytic comparison. Journal of Sport Behavior, 33, 355–376.
Boyle, G. J. (1977). Delimitation of state and trait curiosity in relation to state anxiety and performance on a learning task. Master's thesis, University of Melbourne, Parkville, Victoria.
Boyle, G. J. (1983a). Critical review of state trait curiosity test development. Motivation and Emotion, 7, 377–397.
Boyle, G. J. (1983b). Effects on academic learning of manipulating emotional states and motivational dynamics. British Journal of Educational Psychology, 53, 347–357.
Boyle, G. J. (1983c). Higher-order factor structure of Cattell's MAT and 8SQ. Multivariate Experimental Clinical Research, 6, 119–127.
Boyle, G. J. (1984a). Effects of viewing a road trauma film on emotional and motivational factors. Accident Analysis and Prevention, 16, 383–386.
Boyle, G. J. (1984b). Reliability and validity of Izard's Differential Emotions Scale. Personality and Individual Differences, 5, 747–750.
Boyle, G. J. (1985a). A reanalysis of the higher order factor structure of the Motivation Analysis Test and the Eight State Questionnaire. Personality and Individual Differences, 6, 367–374.
Boyle, G. J. (1985b). Self-report measures of depression: Some psychometric considerations. British Journal of Clinical Psychology, 24, 45–59.
Boyle, G. J. (1985c). The paramenstruum and negative moods in normal young women. Personality and Individual Differences, 6, 649–652.
Boyle, G. J. (1986a). Analysis of typological factors across the Eight State Questionnaire and the Differential Emotions Scale. Psychological Reports, 59, 503–510.
Boyle, G. J. (1986b). Estimation of measurement redundancy across the Eight State Questionnaire and the Differential Emotions Scale. New Zealand Journal of Psychology, 15, 54–61.
Boyle, G. J. (1987a). A conjoint dR-factoring of the 8SQ/DES-IV multivariate mood-state scales. Australian Journal of Psychology, 39, 79–87.
Boyle, G. J. (1987b). A cross-validation of the factor structure of the Profile of Mood States: Were the factors correctly identified in the first instance? Psychological Reports, 60, 343–354.
Boyle, G. J. (1987c). Quantitative and qualitative intersections between the Eight State Questionnaire and the Profile of Mood States. Educational and Psychological Measurement, 47, 437–443.
Boyle, G. J. (1987d). Review of the (1985) 'Standards for educational and psychological testing: AERA, APA and NCME.' Australian Journal of Psychology, 39, 235–237.
Boyle, G. J. (1987e). Secondary mood-type factors in the Differential Emotions Scale (DES-IV). Multivariate Experimental Clinical Research, 8, 211–220.
Boyle, G. J. (1987f). Typological mood state factors measured in the Eight State Questionnaire. Personality and Individual Differences, 8, 137–140.
Boyle, G. J. (1988a). Central clinical states: An examination of the Profile of Mood States and the Eight State Questionnaire. Journal of
Psychopathology and Behavioral Assessment ,10, 205/C0215.
Boyle, G. J. (1988b). Elucidation of motivation structure by dynamic calculus. In J. R. Nesselroade, & R. B. Cattell (Eds.), Handbook of multivari-
ate experimental psychology (rev. 2nd ed, pp. 737 /C0787). New York: Plenum.
Boyle, G. J. (1989a). Breadth depth or state trait curiosity? A factor analysis of state trait curiosity and state anxiety scales. Personality and
Individual Differences ,10, 175/C0183.220 8. MEASURES OF AFFECT DIMENSIONS
II. EMOTIONAL DISPOSITIONS |
Boyle, G. J. (1989b). Factor structure of the Differential Emotions Scale and the Eight State Questionnaire revisited. Irish Journal of Psychology ,
10,5 6/C066.
Boyle, G. J. (1989c). Sex differences in reported mood states. Personality and Individual Differences ,10, 1179/C01183.
Boyle, G. J. (1991a). Does item homogeneity indicate internal consistency or item redundancy in psychometric scales? Personality and Individual
Differences ,12, 291/C0294.
Boyle, G. J. (1991b). Item analysis of the subscales in the Eight State Questionnaire (8SQ): Exploratory and confirmatory factor analyses.
Multivariate Experimental Clinical Research ,10,3 7/C065.
Boyle, G. J. (1992). Multidimensional Mood State Inventory (MMSI) . St. Lucia, Queensland: Department of Psychology, University of Queensland.
(Revised 2012).
Boyle, G. J., & Cattell, R. B. (1984). Proof of situational sensitivity of mood states and dynamic traits, ergs and sentiments to disturbing stimuli.
Personality and Individual Differences ,5, 541/C0548.
Boyle, G. J., & Katz, I. (1991). Multidimensional scaling of the Eight State Questionnaire and the Differential Emotions Scale. Personality and
Individual Differences ,12, 565/C0574.
Boyle, G. J., Stanley, G. V., & Start, K. B. (1985). Canonical/redundancy analyses of the Sixteen Personality Factor Questionnaire, the
Motivation Analysis Test, and the Eight State Questionnaire. Multivariate Experimental Clinical Research ,7, 113/C0122.
Burgo, J. de., & Gendolla, G. E. (2009). Are moods motivational states? A study on effort-related cardiovascular response. Emotion ,9, 892/C0897.
Carey, T. C., Finch, A. J., & Carey, M. P. (1991). Relation between differential emotions and depression in emotionally disturbed children and
adolescents. Journal of Consulting and Clinical Psychology ,59, 594/C0597.
Cattell, R. B. (1973). Personality and mood by questionnaire . San Francisco, CA: Jossey-Bass.
Cattell, R. B. (1978). The scientific use of factor analysis in behavioral and life sciences . New York: Plenum.
Cattell, R. B., Boyle, G. J., & Chant, D. (2002). The enriched behavioral prediction equation and its impact on structured learning and the
dynamic calculus. Psychological Review ,109, 202/C0205.
Cattell, R. B., & Kline, P. (1977). The scientific analysis of personality and motivation . New York: Academic.
Cattell, R. B., & Scheier, I. H. (1963). Handbook for the IPAT Anxiety Scale (2nd ed). Champaign, IL: Institute for Personality and Ability Testing.
Cattell, R. B., & Vogelmann, S. (1977). A comprehensive trial of the scree and K.G. criteria for determining the number of factors. Multivariate
Behavioral Research ,12, 289/C0325.
Child, D. (2006). The essentials of factor analysis (3rd ed). London, UK: Continuum, International Publishing Group.
Clements, P. R., Hafer, M. D., & Vermillion, M. E. (1976). Psychometric, diurnal, and electrophysiological correlates of activation. Journal of
Personality and Social Psychology ,33, 387/C0394.
Comrey, A. L., & Lee, H. B. (1992). A first course in factor analysis (2nd ed). Hillsdale, NJ: Erlbaum.
Cox, R. H. (2002). Sport psychology: Concepts and application (5th ed ). Columbia, MO: McGraw-Hill.
Crawford, J. R., & Henry, J. D. (2004). The Positive and Negative Affect Schedule (PANAS): Construct validity, measurement properties and
normative data in a large non-clinical sample. British Journal of Clinical Psychology ,43, 245/C0265.
Cromley, T., Knatz, S., Rockwell, R., Neumark-Sztainer, D., Story, M., & Boutelle, K. (2012). Relationships between body satisfaction and psy-
chological functioning and weight-related cognitions and behaviors in overweight adolescents. Journal of Adolescent Health ,50, 651/C0653.
Curran, J. P., & Cattell, R. B. (1976). Manual for the Eight State Questionnaire . Champaign, IL: Institute for Personality and Ability Testing.
Curran, S. L., Andrykowski, M. A., & Studts, J. L. (1995). Short form of the Profile of Mood States (POMS-SF): Psychometric information.
Psychological Assessment: A Journal of Consulting and Clinical Psychology ,7,8 0/C083.
Devlin, B. H. (1976). The convergent and discriminant validity of the Naylor and Gaudry C-Trait scale . Masters Thesis, University of Melbourne,
Parkville, Victoria.
Dickman, S. J. (2002). Dimensions of arousal: wakefulness and vigor. Human Factors ,44, 429/C0442.
Diener, E., & Larsen, R. J. (1993). The experience of emotional well-being. In M. Lewis, & J. M. Haviland (Eds.), Handbook of emotions
(pp. 405 /C0415). New York: Guilford.
Ekman, P. (1994). Moods, emotions, and traits. In P. Ekman, & R. Davidson (Eds.), The nature of emotion . Oxford: Oxford University Press.
Everett, J. E. (1983). Factor comparability as a means of determining the number of factors and their rotation. Multivariate Behavioral Research ,
18, 197/C0218.
Eysenck, M. W., & Derakshan, N. (2011). New perspectives in attentional control theory. Personality and Individual Differences ,50, 955/C0960.
Fairclough, S. H., & Venables, L. (2006). Prediction of subjective states from psychophysiology: A multivariate approach. Biological Psychology ,
71, 100/C0110.
Fernandez, E., & Kerns, R. D. (2008). Anxiety, depression, and anger: Core components of negative affect in medical populations. In G. J.
Boyle, G. Matthews, & D. H. Saklofske (Eds.), Handbook of personality theory and Assessment: Vol. 1 /C0Personality theories and models
(pp. 257 /C0272). Los Angeles, CA: Sage.
Fisher, C. D. (1998). Mood and emotions while working: Missing pieces of job satisfaction . Gold Coast, Queensland: Bond University.
Fraley, R. C., & Roberts, B. W. (2005). Patterns of continuity: A dynamic model for conceptualizing the stability of individual differences in
psychological constructs across the life course. Psychological Review ,112,6 0/C074.
Fridlund, A. J., Schwartz, G. E., & Fowler, S. C. (1984). Pattern recognition of self-reported emotional state from multiple-site facial EMG activ-
ity during affective imagery. Psychophysiology ,21, 622/C0637.
Gaudry, E., & Poole, C. (1975). A further validation of the state /C0trait distinction in anxiety research. Australian Journal of Psychology ,27,1 1 9/C0125.
Gaudry, E., Vagg, P., & Spielberger, C. D. (1975). Validation of the state /C0trait distinction in anxiety research. Multivariate Behavioral Research ,
10, 331/C0341.
Gibson, S. J. (1997). The measurement of mood states in older adults. Journal of Gerontology ,52, 167/C0174.
Goldberg, L. R. (1992). The development of markers for the Big-Five factor structure. Psychological Assessment ,4,2 6/C042.
Gregg, V. H., & Shepherd, A. J. (2009). Factor structure of scores on the state version of the Four Dimension Mood Scale. Educational and
Psychological Measurement ,69, 146/C0156.
Guadagnoli, E., & Mor, V. (1989). Measuring cancer patients’ affect: revision and psychometric properties of the Profile of Mood States
(POMS). Psychological Assessment ,1, 150/C0154.221 REFERENCES
II. EMOTIONAL DISPOSITIONS |
Heuchert, J. P., & McNair, D. M. (2012). Profile of Mood States (2nd ed.): POMS 2 . North Tonawanda, NY: Multi-Health Systems.
Huelsman, T. J., Furr, R., & Nemanick, R. C. (2003). Measurement of dispositional affect: Construct validity and convergence with a circum-
plex model of affect. Educational and Psychological Measurement ,63, 655/C0673.
Huelsman, T. J., Nemanick, R. C., & Munz, D. C. (1998). Scales to measure four dimensions of dispositional mood: Positive energy, tiredness,
negative activation, and relaxation. Educational and Psychological Measurement ,58, 804/C0819.
Hunsley, J. (1990a). Dimensionality of the Multiple Affect Adjective Check List /C0Revised: A comparison of factor analytic procedures. Journal
of Psychopathology and Behavioral Assessment ,12,8 1/C091.
Hunsley, J. (1990b). The factor structure of the Multiple Affect Adjective Check List /C0Revised (MAACL-R): some statistical considerations.
Journal of Psychopathology and Behavioral Assessment ,12,9 9/C0101.
Izard, C. E. (1990). Facial expressions and the regulation of emotions. Journal of Personality and Social Psychology ,58, 487/C0498.
Izard, C. E. (1991). The psychology of emotions . New York: Plenum.
Izard, C. E. (1993). Four systems for emotion activation: Cognitive and noncognitive processes. Psychological Review ,100,6 8/C090.
Izard, C. E. (2001). Emotional intelligence or adaptive emotions? Emotion ,1, 249/C0257.
Izard, C. E. (2002). Translating emotion theory and research into preventive interventions. Psychological Bulletin ,128, 796/C0824.
Izard, C. E. (2007). Levels of emotion and levels of consciousness. Behavioral and Brain Sciences ,30,9 6/C098.
Izard, C. E. (2009). Emotion theory and research: Highlights, unanswered questions, and emerging issues. Annual Review of Psychology ,60,
1/C025.
Izard, C. E., Fantauzzo, C. A., Castle, J. M., Haynes, O. M., Rayias, M. F., & Putnam, P. H. (1995). The ontogeny and significance of infants’
facial expressions in the first 9 months of life. Developmental Psychology ,31, 997/C01013.
Izard, C. E., Fine, S. E., Schultz, D., Mostow, A. J., Ackerman, B. P., & Youngstrom, E. A. (2001). Emotion knowledge as a predictor of social
behavior and academic competence in children at risk. Psychological Science ,12,1 8/C023.
Izard, C. E., Libero, D. Z., Putnam, P., & Haynes, O. M. (1993). Stability of emotion experiences and their relations to traits of personality.
Journal of Personality and Social Psychology ,64, 847/C0860.
Izard, C. E., Quinn, P. C., & Most, S. B. (2008). Many ways to awareness: A developmental perspective on cognitive access. Behavioral and Brain
Sciences ,30, 506/C0507.
Jacobs, G. A., Latham, L. E., & Brown, M. S. (1988). Test /C0Retest reliability of the State /C0Trait Personality Inventory and the Anger Expression
Scale. Anxiety Research ,1, 263/C0265.
Kashani, J. H., Suarez, L., Allan, W. D., & Reid, J. C. (1997). Hopelessness in inpatient youths: A closer look at behavior, emotional expression,
and social support. Journal of the American Academy of Child and Adolescent Psychiatry ,36, 1625/C01631.
Kashdan, T. B., & Roberts, J. E. (2004). Trait and state curiosity in the genesis of intimacy: Differentiation from related constructs. Journal of
Social and Clinical Psychology ,23, 792/C0816.
Kashdan, T. B., Rose, P., & Fincham, F. D. (2004). Curiosity and exploration: Facilitating positive subjective experiences and personal growth
opportunities. Journal of Personality Assessment ,82, 291/C0305.
Kline, P. (1986). A handbook of test construction: Introduction to psychometric design . New York: Methuen.
Kotsch, W. E., Gerbing, D. W., & Schwartz, L. E. (1982). The construct validity of the Differential Emotions scale as adapted for children and ado-
lescents. In C. E. lzard (Ed.), Measuring emotions in infants and young children (pp. 258 /C0278). Cambridge, UK: Cambridge University Press.
Koˇs´cec, A., & Rado¸ seviˇc-Vida ˇcek, B. (2004). Circadian components in energy and tension and their relation to physiological activation and per-
formance. Chronobiology International: Journal of Biological & Medical Rhythm Research ,21, 673/C0690.
Krohne, H., Schmukle, S. C., Spaderna, H., & Spielberger, C. D. (2002). The State /C0Trait Depression Scales: An international comparison.
Anxiety, Stress & Coping: An International Journal ,15, 105/C0122.
Lazarus, R. S. (1999). Stress and emotions: A new synthesis . New York: Springer.
Leue, A., & Lange, S. (2011). Reliability generalization: An examination of the Positive Affect and Negative Affect Schedule. Assessment ,18, 487/C0501.
Lorr, M., McNair, D. M., & Heuchert, J. P. (2003). Manual for the Profile of Mood States . Toronto: Multi-Health Systems Inc.
Lubin, B., Whitlock, R. V., Reddy, D., & Petren, S. (2001). A comparison of the short and long forms of the Multiple Affect Adjective Check
List/C0Revised (MAACL-R). Journal of Clinical Psychology ,57, 411/C0416.
Lubin, B., & Zuckerman, M. (1999). Manual for the MAACL-R: Multiple Affect Adjective Check List-Revised . San Diego, CA: Educational and
Industrial Testing Service.
Maloni, J. A., Park, S., Anthony, M. K., & Musil, C. M. (2005). Measurement of antepartum depressive symptoms during high-risk pregnancy.
Research in Nursing & Health ,28,1 6/C026.
Maridakis, V., O’Connor, P. J., & Tomporowski, P. D. (2009). Sensitivity to change in cognitive performance and mood measures of energy and
fatigue in response to morning caffeine alone or in combination with carbohydrate. International Journal of Neuroscience ,119, 1239/C01258.
Martin, P. Y., & Marrington, S. (2005). Morningness-eveningness orientation, optimal time-of-day and attitude change: Evidence for the sys-
tematic processing of a persuasive communication. Personality and Individual Differences ,39, 367/C0377.
Matthews, G. (2002). Towards a transactional ergonomics for driver stress and fatigue. Theoretical Issues in Ergonomics Science ,3, 195/C0211.
Matthews, G., Campbell, S., & Falconer, S. (2001). Assessment of motivational states in performance environments. Proceedings of the Human
Factors and Ergonomics Society ,45, 906/C0910.
Matthews, G., & Campbell, S. E. (2009). Sustained performance under overload: Personality and individual differences in stress and coping.
Theoretical Issues in Ergonomics Science ,10, 417/C0442.
Matthews, G., & Campbell, S. E. (2010). Dynamic relationships between stress states and working memory. Cognition and Emotion ,24, 357/C0373.
Matthews, G., Campbell, S. E., Falconer, S., Joyner, L., Huggins, J., Gilliland, K., et al. (2002). Fundamental dimensions of subjective state in
performance settings: Task engagement, distress and worry. Emotion ,2, 315/C0340.
Matthews, G., Davies, D. R., & Lees, J. L. (1990b). Arousal, extraversion, and individual differences in resource availability. Journal of
Personality and Social Psychology ,59, 150/C0168.
Matthews, G., & Fellner, A. N. (2012). The energetics of emotional intelligence. In M. W. Eysenck, M. Fajkowska, & T. Maruszewski (Eds.),
Warsaw lectures on personality, emotion, and cognition (Vol. 2, pp. 25 /C045). Clinton Corners, NY: Eliot Werner Publications.222 8. MEASURES OF AFFECT DIMENSIONS
II. EMOTIONAL DISPOSITIONS |
Matthews, G., & Gilliland, K. (1999). The personality theories of H.J. Eysenck and J.A. Gray: A comparative review. Personality and Individual
Differences ,26, 583/C0626.
Matthews, G., Hillyard, E. J., & Campbell, S. E. (1999). Metacognition and maladaptive coping as components of test anxiety. Clinical
Psychology and Psychotherapy ,6, 111/C0125.
Matthews, G., Jones, D. M., & Chamberlain, A. G. (1990a). Refining the measurement of mood: The UWIST Mood Adjective Checklist. British
Journal of Psychology ,81,1 7/C042.
Matthews, G., Joyner, L., Gilliland, K., Campbell, S. E., Huggins, J., & Falconer, S. (1999). Validation of a comprehensive stress state question-
naire: Towards a state ‘Big Three’? In I. Mervielde, I. J. Deary, F. De Fruyt, & F. Ostendorf (Eds.), Personality psychology in Europe (Vol. 7,
pp. 335 /C0350). Tilburg: Tilburg University Press.
Matthews, G., Panganiban, A. R., & Hudlicka, E. (2011). Anxiety and selective attention to threat in tactical decision-making. Personality and
Individual Differences ,50, 949/C0954.
Matthews, G., Szalma, J., Panganiban, A. R., Neubauer, C., & Warm, J. S. (2013). Profiling task stress with the Dundee Stress State
Questionnaire. In L. Cavalcanti, & S. Azevedo (Eds.), Psychology of stress: New research (pp. 49 /C090). Hauppage, NY: Nova Science.
Matthews, G., Warm, J. S., Reinerman, L. E., Langheim, L., Washburn, D. A., & Tripp, L. (2010). Task engagement, cerebral blood flow veloc-
ity, and diagnostic monitoring for sustained attention. Journal of Experimental Psychology: Applied ,16, 187/C0203.
Matthews, G., & Zeidner, M. (2012). Individual differences in attentional networks: Trait and state correlates of the ANT. Personality and
Individual Differences ,53, 574/C0579.
McNair, D. M., Heuchert, J. P., & Shilony, E. (2003). Profile of Mood States bibliography 1964-2002 . Toronto, Canada: Multi-Health Systems Inc.
McNair, D. M., Lorr, M., & Droppleman, L. F. (1992). Manual for the Profile of Mood States . San Diego, CA: Educational and Industrial Testing
Service.
Morfeld, M., Petersen, C., Kruger-Bodeker, A., von Mackensen, S., & Bullinger, M. (2007). The assessment of mood at workplace /C0psychomet-
ric analyses of the revised Profile of Mood States (POMS) Questionnaire. GMS Psycho-Social-Medicine ,4,1/C09.
Naylor, F. D. (1981). A State /C0Trait Curiosity Inventory. Australian Psychologist ,16, 172/C0183, (Article published online: 2 February 2011.
Doi:10.1080/00050068108255893).
Neubauer, C., Matthews, G., Langheim, L., & Saxby, D. (2012). Fatigue and voluntary utilization of automation in simulated driving. Human
Factors ,54, 734/C0746.
Nyenhuis, D. L., Yamamoto, C., Luchetta, T., Terrien, A., & Parmentier, A. (1999). Adult and geriatric normative data and validation of the
Profile of Mood States. Journal of Clinical Psychology ,55,7 9/C086.
Oginska, H., Fafrowicz, M., Golonka, K., Marek, T., Mojsa-Kaja, J., & Tucholska, K. (2010). Chronotype, sleep loss, and diurnal pattern of sali-
vary cortisol in a simulated daylong driving. Chronobiology International ,27, 959/C0974.
O’Halloran, P. D., Murphy, G. C., & Webster, K. E. (2004). Reliability of the bipolar form of the Profile of Mood States using an alternative test
protocol. Psychological Reports ,95, 459/C0663.
Petrie, J. M., Chapman, L. K., & Vines, L. M. (2013). Utility of the PANAS-X in predicting social phobia in African American females. Journal of
Black Psychology ,39, 131/C0155.
Pfaff, D. W. (2006). Brain arousal and information theory . Cambridge, MA: Harvard University Press.
Rafaeli, E., Rogers, G. M., & Revelle, W. (2007). Affective synchrony: Individual differences in mixed emotions. Personality and Social Psychology
Bulletin ,33, 915/C0932.
Ready, R. E., Vaidya, J. G., Watson, D., Latzman, R. D., Koffel, E. A., & Clark, L. A. (2011). Age-group differences in facets of positive and neg-
ative affect. Aging & Mental Health ,15, 784/C0795.
Reddon, J. R., Marceau, R., & Holden, R. R. (1985). A confirmatory evaluation of the Profile of Mood States: Convergent and discriminant item
validity. Journal of Psychopathology and Behavioral Assessment ,7, 243/C0259.
Reed, J., & Ones, D. S. (2006). The effect of acute aerobic exercise on positive activated affect: A meta-analysis. Psychology of Sport and Exercise ,
7, 477/C0514.
Reio, T. G. (1997). Effects of curiosity on socialization-related learning and job performance in adults .Doctoral dissertation . Falls Church, Virginia:
Virginia Polytechnic Institute and State University.
Revelle, W., & Rocklin, T. (1979). Very simple structure: An alternative procedure for estimating the optimal number of interpretable factors.
Multivariate Behavioral Research ,14, 403/C0414.
Ricard-St-Aubin, J. S., Philippe, F. L., Beaulieu-Pelletier, G., & Lecours, S. (2010). Validation francophone de l’E ´chelle des e ´motions diffe ´ren-
tielles IV (EED-IV) /C0French validation of the Differential Emotions Scale IV (DES-IV. Revue Europe ´enne de Psychologie Applique ´e,60,4 1/C053.
Roberts, B. W., Walton, K., & Viechtbauer, W. (2006a). Patterns of mean-Level change in personality traits across the life course: A meta-
analysis of longitudinal studies. Psychological Bulletin ,132,1/C025.
Roberts, B. W., Walton, K., & Viechtbauer, W. (2006b). Personality changes in adulthood: Reply to Costa & McCrae (2006). Psychological
Bulletin ,132,2 9/C032.
Rossi, V., & Pourtois, G. (2012). Transient state-dependent fluctuations in anxiety measured using STAI, POMS, PANAS or VAS: A compara-
tive review. Anxiety, Stress & Coping: An International Journal ,25, 603/C0645.
Sarason, I. G., Sarason, B. R., Keefe, D. E., Hayes, B. E., & Shearin, E. N. (1986). Cognitive interference: situational determinants and traitlike
characteristics. Journal of Personality and Social Psychology ,31, 215/C0226.
Saup, W. (1992). Neugier und Interesse im (fru ¨hen) Alter [Curiosity and interest in later adulthood]. Zeitschrift fu ¨r Gerontopsychologie und
Psychiatrie ,1,1/C010.
Schimmack, U., & Grob, A. (2000). Dimensional models of core affect: A quantitative comparison by means of structural equation modeling.
European Journal of Personality ,14, 325/C0345.
Schwartz, G. E. (1982). Psychophysiological patterning and emotion revisited: A systems perspective.. In C. E. lzard (Ed.), Measuring emotions
in infants and children (Vol. I, pp. 67 /C093). Cambridge, UK: Cambridge University Press.
Shaw, T. H., Matthews, G., Warm, J. S., Finomore, V., Silverman, L., & Costa, P. T., Jr. (2010). Individual differences in vigilance: Personality,
ability and states of stress. Journal of Research in Personality ,44, 297/C0308.223 REFERENCES
II. EMOTIONAL DISPOSITIONS |
Shin, Y., & Colling, K. B. (2000). Cultural verification and application of the Profile of Mood States (POMS) with Korean elders. Western Journal
of Nursing Research ,22,6 8/C083.
Sjo¨berg, L., Svensson, E., & Persson, L. (1979). The measurement of mood. Scandinavian Journal of Psychology ,20,1/C018.
Smallwood, J., & Schooler, J. W. (2006). The restless mind. Psychological Bulletin ,132, 946/C0958.
Sobhanian, F., Boyle, G. J., Bahr, M., & Fallo, T. (2006). Psychological status of former refugee detainees from the Woomera Detention Centre
now living in the Australian community. Psychiatry, Psychology and Law ,13, 151/C0159.
Specht, J., Egloff, B., & Schmukle, S. C. (2011). Stability and change of personality across the life course: The impact of age and major life
events on mean-level and rank-order stability of the Big Five. Journal of Personality and Social Psychology ,101, 862/C0882.
Spielberger, C. D. (1999). Professional Manual for the State /C0Trait Anger Expression Inventory-2 (STAXI-2) . Odessa, FL: Psychological Assessment
Resources.
Spielberger, C. D., Gorsuch, R. L., Lushene, R., Vagg, P. R., & Jacobs, G. A. (1983). Manual for the State /C0Trait Anxiety Inventory . Palo Alto, CA:
Consulting Psychologists Press.
Spielberger, C. D., Peters, R. A., & Frain, F. J. (1981). Curiosity and anxiety. In H. G. Voss, & H. Keller (Eds.), Curiosity research: Basic concepts
and results (pp. 197 /C0225). Weinheim, Germany: Beltz.
Spielberger, C. D., & Reheiser, E. C. (2009). Assessment of emotions: Anxiety, anger, depression, and curiosity. Applied Psychology: Health and
Well-being ,1, 271/C0302.
Spielberger, C. D., Reheiser, E. C., Owen, A. E., & Sydeman, S. J. (2004). Measuring the psychological vital signs of anxiety, anger, depression,
and curiosity in treatment planning and outcomes assessment. In M. E. Maruish (Ed.), The use of psychological testing for treatment planning
and outcomes assessment: Volume 3: Instruments for adults (3rd ed, pp. 421 /C0447). Mahwah, NJ: Erlbaum.
Spielberger, C. D., Ritterband, L. M., Reheiser, E. C., & Brunner, T. M. (2003). The nature and measurement of depression. International Journal
of Clinical and Health Psychology ,3, 209/C0234.
Spielberger, C. D., Ritterband, L. M., Sydeman, S. J., Reheiser, E. C., & Unger, K. K. (1995). Assessment of emotional states and personality
traits: measuring psychological vital signs. In J. N. Butcher (Ed.), Clinical personality assessment: Practical approaches (pp. 42 /C058). New York:
Oxford University Press.
Stirling, A. E., & Kerr, G. A. (2006). Perfectionism and mood states among recreational and elite athletes. Athletic Insight: The Online Journal of
Sport Psychology ,8,1 3/C027.
Tangney, J. P., Wagner, P. E., Barlow, D. H., Marschall, D. E., & Gramzow, R. (1996). Relation of shame and guilt to constructive versus
destructive responses to anger across the lifespan. Journal of Personality and Social Psychology ,70, 797/C0809.
Tellegen, A., Watson, D., & Clark, L. A. (1999). On the dimensional and hierarchical structure of affect. Psychological Science ,10, 297/C0303.
Terry, P. C., Lane, A. M., & Fogarty, G. J. (2003). Construct validity of the Profile of Mood States-Adolescents for use with adults. Psychology of
Sport and Exercise ,4, 125/C0139.
Thayer, R. E. (1978). Factor analytic and reliability studies on the Activation /C0Deactivation Adjective Check List. Psychological Reports ,42,
747/C0756.
Thayer, R. E. (1986). Activation-Deactivation Adjective Check List (AD ACL): Current overview and structural analysis. Psychological Reports ,
58, 607/C0614.
Thayer, R. E. (1989). The biopsychology of mood and arousal . New York: Oxford University Press.
Thayer, R. E. (1996). The origin of everyday moods . New York: Oxford University Press.
Thayer, R. E., Takahashi, P. J., & Pauli, J. A. (1988). Multidimensional arousal states, diurnal rhythms, cognitive and social processes, and
extraversion. Personality and Individual Differences ,9,1 5/C024.
Van Whitlock, R., & Lubin, B. (1998). Predicting outcomes of court-ordered treatment for DWI offenders via the MAACL-R. Journal of Offender
Rehabilitation ,28,2 9/C040.
Walker, M. K., Sprague, R. L., Sleator, E. K., & Ullmann, R. K. (1988). Effects of methylphenidate hydrochloride on the subjective reporting of
mood in children with attention deficit disorder. Issues in Mental Health Nursing ,9, 373/C0385.
Watson, D., & Clark, L. A. (1999). The PANAS-X: Manual for the Positive and Negative Affect Schedule /C0Expanded Form . Cedar Rapids, IA:
University of Iowa.
Watson, D., Clark, L. A., & Tellegen, A. (1988). Development and validation of brief measures of positive and negative affect: the PANAS.
Journal of Personality and Social Psychology ,54, 1063/C01070.
Watson, D., & Walker, L. M. (1996). The long-term stability and predictive validity of trait measures of affect. Journal of Personality and Social
Psychology ,70, 567/C0577.
Wrenn, K. C., Mostofsky, E., Tofler, G. H., Muller, J. E., & Mittleman, M. A. (2013). Anxiety, anger, and mortality risk among survivors of myo-
cardial infarction. American Journal of Medicine ,126, 1107/C01113.
Wyrwich, K. W., & Yu, H. (2011). Validation of POMS questionnaire in postmenopausal women. Quality of Life Research: An International
Journal of Quality of Life Aspects of Treatment, Care & Rehabilitation ,20, 1111/C01121.
Youngstrom, E. A., & Green, K. W. (2003). Reliability generalization of self-report of emotions when using the Differential Emotions Scale.
Educational and Psychological Measurement ,63, 279/C0295.
Zeidner (1998). Test anxiety: The state of the art . New York: Plenum.
Zuckerman, M. (1990). Broad or narrow affect scores for the Multiple Affect Adjective Check List? Comment on Hunsley’s ‘Dimensionality of
the Multiple Affect Adjective Check List /C0Revised’. Journal of Psychopathology and Behavioral Assessment ,12,9 3/C097.
Zuckerman, M., & Lubin, B. (1985). Manual for the Revised Multiple Affect Adjective Check List . San Diego, CA: Educational and Industrial
Testing Service.
Zuckerman, M., Lubin, B., & Rinck, C. M. (1983). Construction of the new scales for the Multiple Affect Adjective Check List. Journal of
Behavioral Assessment ,5, 119/C0129.
Zuckerman, M., Lubin, B., Rinck, C. M., Soliday, S. M., Albott, W. L., & Carlson, K. (1986). Discriminant validity of the Multiple Affect
Adjective Check List /C0Revised. Journal of Psychopathology and Behavioral Assessment ,8, 119/C0128.224 8. MEASURES OF AFFECT DIMENSIONS
II. EMOTIONAL DISPOSITIONS |
CHAPTER 9
Measures of Alexithymia
Bob Bermond¹, Paul Oosterveld² and Harrie C.M. Vorst¹
¹University of Amsterdam, Amsterdam, The Netherlands
²Leiden University, Leiden, The Netherlands
Based on 20 psychiatric interviews with psychosomatically disordered patients, Nemiah and Sifneos (1970a)
observed that they:
‘manifested either a total unawareness of feelings or an almost complete incapacity to put into words what they were experiencing.
The associations of the majority of the patients were characterized by a nearly total absence of fantasy or other material related to their
inner, private mental life of thoughts, attitudes and feelings, and a recounting, often in almost infinite detail, of circumstances and
events in their environment, including their own actions. Their thoughts, that is, were stimulus-bound rather than drive-directed.’
Later, Sifneos (1973) coined the term alexithymia for this complex of features. Ten years earlier, Marty and de
M'Uzan (1963) had described many of these features and introduced the term 'pensée opératoire'.
(Nowadays pensée opératoire refers to externally oriented thinking.) Although several slightly different descriptions
exist, most authors take those of Marty and de M'Uzan (1963) and Nemiah and Sifneos (1970a) as the definition of
alexithymia. Indeed, alexithymia has been conceptualized as one of several possible personal risk factors for a
variety of medical and psychiatric disorders ( Taylor, Bagby, & Parker 1997 ). The results, however, are not consis-
tent; others suggested that alexithymia is not specifically related to psychosomatic complaints ( Lesser, Ford, &
Friedmann 1979 ), while Lumley, Stettner, and Wehmer (1996) , and Kojima (2012) came to the conclusion that
there is little support for the hypothesis that alexithymia leads to organic disease. There is, however, general
agreement that alexithymic patients do poorly in psychotherapy focusing on insight and emotional awareness
(Lumley, Neely, & Burger 2007 ).
The ideas presented in the research literature as to what causes alexithymia vary from (1) neurological factors
(Bermond, Moormann, & Vorst 2006 ); to (2) disturbances during childhood (childhood stress/abuse, being raised in
an emotionally cold family, disturbances or deficiencies in the early family and social environment) or severe stress
during adulthood (Krystal, 1988); to (3) genetic influences ( Picardi et al., 2011 ). Thus, the general idea is that the
etiology of alexithymia involves multiple factors.
Most authors regard alexithymia as a relatively stable and dimensional personality trait ( Salminen, Saarijärvi,
Äärelä, & Tamminen 1994; Taylor et al., 1997; Taylor, Bagby, & Luminet 2000; De Gucht, 2003; Picardi, Toni, &
Caroppo 2005; Mikolajczak & Luminet, 2006; Parker, Keefer, Taylor, & Bagby 2008 ). Although alexithymia scores
change over time or due to treatment, the correlations between initial measurement and follow-up remain high
(Rufer et al., 2006; de Haan et al., 2012 ). This has led some authors to conceptualize alexithymia as a complex of
both state and trait elements ( Lumley et al., 2007 ), while others differentiate between primary and secondary alex-
ithymia, in which secondary alexithymia is seen as a reaction to severe stress or illness ( Bretagne, Pedinielli, &
Marliere 1992 ).
Alexithymia is a popular research field. PubMed presents more than 18,000 hits (as at November 2013), and
the construct has not only been correlated with classical psychosomatic complaints, but also with a wide array of
other phenomena. Just to give an impression: self-depreciation, introversion, persecutory ideation, impulse
expression, poor intentional control, guilt, fear, depression, rumination, non-cardiac chest pain, breast cancer, dia-
betes, chronic pain, eating disorders, substance dependence, pathological gambling, kidney failure, stroke, HIV
infection, fibromyalgia, panic disorder, erectile dysfunction, sperm counts, chronic itching, glucose level regula-
tion, number of words spoken at age one, preference for negatively valenced movies, transsexualism, autism/
Asperger’s disorder, vaginismus, circumcision, dysphoric mood, schizoid personality disorder, self-injury, lack of
enhanced gamma band power, circulating cytokine profile, dissociative proneness, and neuroticism. However,
the effect sizes are generally small or moderate. This large and loose nomological network is often seen as part of
the alexithymia success story. However, when a network becomes too large and loose, it suggests that there is
something wrong with the concept and/or its measurement.
Presently, there is agreement that the following traits are core elements of the alexithymia construct: marked
reductions in the capacities to fantasize, identify, verbalize, and analyze emotions (externally oriented thinking).
One would thus expect at least four subscales. However, there are alexithymia scales comprising 1, 2, 3, 4, 5, or
even 6 facets/subscales, indicating that alexithymia is, in its measurement, ill-defined.
When coining the term alexithymia, Sifneos (1973) clearly mentioned ‘marked constriction in experiencing
emotions’ in his description of alexithymia. However, flatness of affect was not included in his Beth-Israel
Hospital Psychosomatic Questionnaire (BIQ), and for good reason: the BIQ is an observation scale and emotional
feelings belong to the domain of first-person data that can neither be observed nor reduced to third-person data
(Chalmers, 1999 ). It is rarely mentioned that Apfel and Sifneos (1979) also published an alexithymia self-rating
questionnaire with open questions, containing ten items referring to emotional feelings. Flatness of affect should,
accordingly, be measured in self-rating scales. Not only have Nemiah and Sifneos always stressed the importance of
this element of alexithymia ( Nemiah, 1977, 1996; Nemiah & Sifneos, 1970a, 1970b; Nemiah, Freyberger, & Sifneos
1976; Sifneos, 1973, 1991, 2000 ), but alexithymia experts also agree that 'flatness of affect' is one of the most characteristic
features of alexithymia ( Haviland & Reise, 1996a ).
There is the question as to what the criterion validity of alexithymia measures should be ( Lumley et al., 2007 ).
The most direct study would be one in which attending therapists select a group of clearly alexithymic
individuals and a group of non-alexithymic individuals, independent of alexithymia scores; only measures making the
same classification could be regarded as having criterion validity. There is only one study coming close to this
(Taylor et al., 1988 ). In this study, cut-off scores for the TAS-26 were established by some kind of iterative
method. Thus, cut-off scores were not set a priori, but chosen post hoc to produce the optimal result.¹ Since the
alexithymia construct stems from the field of psychosomatics, correlations with psychosomatic complaints could
be the next best thing. However, there are reasons to assume that alexithymia does not induce disease and is not
specifically related to psychosomatic illness, but rather to unexplained bodily experiences often mistaken for
pathology ( Flannery, 1977 ). Furthermore, only some psychosomatic patients show clear alexithymic characteris-
tics ( Sifneos, 1973; Porcelli & Meyer, 2002 ). Finally, the alexithymia nomological network is too loose and too
large to be used as a standard for alexithymia scales.
Other problems are due to the fact that (a) alexithymia is conceived as a cluster of various traits; and (b) alex-
ithymia scales are both diagnostic and research instruments.
Another question is whether scales should be unidimensional or multidimensional. If one considers alexithy-
mia as a phenomenon defined by a number of highly correlated traits, one accepts many cross-loadings and,
thus, high correlations between subscales ( Haviland, Warren, & Riggs 2000 ). In such cases, the interpretation of
subscale scores is problematic. However, although some alexithymia facets are, on face validity, correlated (identifying
emotions and verbalizing emotions), others are less so or not at all (fantasizing and identifying emotions). Moreover, in
the case of moderate total scores, extreme subscale scores still indicate specific deficits in emotion regulation, and
there are indications that different facets, or combinations of alexithymia facets, relate differently to other
constructs and to different types of problems ( Moormann et al., 2008; Chen, Xu, Jing, & Chan 2011 ). Vorst and
Bermond (2001) therefore strived to make their subscales as independent from one another as possible.
However, the lower the correlations between subscales, the more problematic the interpretation of moderate sum
scores becomes. A feasible strategy lies between these extremes. For instance, Bagby, Parker, and Taylor
(1994a), in their TAS-20 construction, demanded that all items have a corrected item-total correlation of at least
.20 (with the total here defined as the remaining items on the other content domains/facets), while dismissing all items
with cross-loadings above .35. Hence, these authors strived for items that on the one hand selectively measured
only one facet of alexithymia, while on the other hand also contained an element that referred to
other facets.
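To make this kind of item-screening criterion concrete, the following minimal sketch in Python computes each item's correlation with the total of the items from the other facets and flags items below the .20 criterion; a follow-up factor analysis would then screen cross-loadings. The item names, facet assignments, and data are invented for illustration; this is not the authors' actual procedure.

```python
import numpy as np
import pandas as pd

# Hypothetical responses: rows = respondents, columns = items. Item names,
# facet assignments, and thresholds are assumptions made for illustration.
rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(1, 6, size=(300, 8)),
                     columns=[f"item{i}" for i in range(1, 9)])
facet = {f"item{i}": ("identify" if i <= 4 else "verbalize") for i in range(1, 9)}

def item_other_facet_correlation(data, item, facet_map):
    """Correlate an item with the summed score of items from the *other* facets,
    echoing the 'total = remaining items on other content domains' criterion."""
    other = [c for c in data.columns if facet_map[c] != facet_map[item]]
    return data[item].corr(data[other].sum(axis=1))

# Screening pass: keep items sharing at least some variance (r >= .20) with the
# other facets; a separate factor analysis would then drop items with
# cross-loadings above roughly .35 (both thresholds taken from the text above).
for item in items.columns:
    r = item_other_facet_correlation(items, item, facet)
    print(f"{item} ({facet[item]}): r = {r:.2f} -> {'keep' if r >= 0.20 else 'drop'}")
```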
¹Meganck, Inslegers, Vanheule, and Desmet (2011) used some kind of gold standard, since here an observation scale had to be filled out on the
basis of various alexithymia measurements, suggesting more or less 'true' alexithymia scores.
Some authors take items from other sources that were not intended to measure alexithymia, or create items
that refer to constructs assumed to be related to alexithymia (e.g., Taylor, Ryan, & Bagby 1985; Fukunishi,
Yoshida & Wogan 1998; Haviland et al., 2000 ). This introduces the threat that the measure lacks content validity,
in that the items measure alexithymia only indirectly. For clinical use, this can be acceptable if the measure is
regarded as a proxy measure. However, since such scales contain items that measure primarily other constructs,
correlations with these other constructs make such scales less fit for research.
Previous reviews ( Bagby, Taylor, Parker, & Ryan 1986; Bagby, Taylor, & Atkinson 1988; Parker, Taylor, Bagby,
& Thomas 1991; Bretagne et al., 1992; Linden, Wen, & Paulhus 1995; and Taylor et al., 2000 ) have been published for
the Beth Israel Hospital Psychosomatic Questionnaire (BIQ), the Schalling-Sifneos Personality Scale (SSPS), the Minnesota
Multiphasic Personality Inventory Alexithymia Scale (MMPI-A), the Thematic Apperception Test (TAT),
the Archetypal measure with nine elements (SAT 9), the Toronto Alexithymia Scale and its revision (TAS & TAS-R), the Analog
Alexithymia Scale (AAS), speech analysis, and the Rorschach Alexithymia Scale (RAS). With the exception of the BIQ,
the TAS-R, and the RAS, these measures have insufficient psychometric properties and will not be reviewed in the present
chapter.
MEASURES REVIEWED HERE
Although there are several emotion scales/measures that are sometimes used to estimate alexithymia, we will
limit ourselves to those explicitly intended to measure alexithymia. Measures of alexithymia fall into three cate-
gories: projection scales, observation scales, and self-rating scales. To these we have added the category ‘measure-
ment scales for children and adolescents’, because measurement of alexithymia in these age groups has specific
problems.
Only measures for which psychometric data are published are discussed. These are: the Rorschach
Alexithymia scale (RAS), the Beth Israel Hospital Psychosomatic Questionnaire (BIQ), the modified Beth Israel
Questionnaire (M-BIQ), California Q-set Alexithymia Prototype (CAQ-AP), Observation Alexithymia Scale
(OAS), the Toronto Structured Interview for Alexithymia (TSIA), Toronto Alexithymia Scales (TAS-26, TAS-R,
TAS-20), the Amsterdam Alexithymia Scale (AAS), the Bermond-Vorst Alexithymia Questionnaire (BVAQ), the
Psychological Treatment Inventory-Alexithymia Scale (PTI-AS), the Alexithymia Observation Scale for Children
(AOSC), the Toronto Alexithymia Scale for Children (TAS12), and the Emotional Awareness Questionnaire
(EAQ). The ‘Online Alexithymia Questionnaire’ (OAQ) will not be discussed, since we could not find any psy-
chometric data. Some scales (TAS-26, TAS-R, AAS, & BIQ) are clearly precursors of later measures (TAS-20,
BVAQ, & M-BIQ), and therefore are discussed briefly in describing the development of the final scale, in each
case. The scales/measures are reviewed in the order presented below.
Projection Scales
1. Rorschach Alexithymia Scale ( Porcelli & Mihura 2010 )
Observation Scales
2. Beth Israel Hospital Psychosomatic Questionnaire ( Sifneos, 1973 ; cf. Linden et al., 1995 )
3. Modified Beth Israel Questionnaire ( Taylor et al., 1997 )
4. California Q-set Alexithymia Prototype ( Haviland & Reise, 1996a )
5. Observation Alexithymia Scale ( Haviland et al., 2000 )
6. Toronto Structured Interview for Alexithymia ( Bagby, Taylor, Parker, & Dickens 2006 )
Self-rating Scales
7. Toronto Alexithymia Scale 20 ( Bagby et al., 1994a )
8. Bermond-Vorst Alexithymia Questionnaire ( Vorst & Bermond, 2001 )
9. Psychological Treatment Inventory-Alexithymia Scale ( Gori, Giannini, Palmieri, Salvini, & Schuldberg 2012 )
Measurement Scales for Children and Adolescents
10. Alexithymia Observation Scale for Children ( Fukunishi et al., 1998 )
11. Toronto Alexithymia Scale for Children ( Heaven, Ciarrochi, & Hurrell 2010 )
12. Emotional Awareness Questionnaire ( Rieffe et al., 2007; Rieffe, Oosterveld, Miers, Meerum Terwogt, & Ly, 2008 )
OVERVIEW OF THE MEASURES
The Rorschach Alexithymia Scale (RAS; Porcelli & Mihura, 2010 ) is the only alexithymia projection measure
discussed in this chapter, since all other alexithymia projection measures have insufficient psychometric proper-
ties. Although the psychometric data for the RAS are very limited, the existing data are promising. For those clini-
cians who use the Rorschach as a diagnostic instrument, the RAS can be used to estimate TAS-20 alexithymia
scores.
Observation scales: Observation scales are meant to be filled out by a rater on the basis of the knowledge he or she
already has, or has obtained in either a preceding interview or a preceding observation of behavior. This means
that the quality of observation scales is, among other things, dependent on the expertise and skills of the rater.
The 8-item, dichotomously scored observation measure, the Beth Israel Hospital Psychosomatic Questionnaire
(BIQ; Sifneos, 1973 ), is the oldest alexithymia measure, but its research utility was later largely surpassed by the
26-item Toronto Alexithymia Scale (TAS-26; Taylor et al., 1985 ).
The Modified Beth Israel Questionnaire (M-BIQ; Taylor et al., 1997; Bagby, Taylor, & Parker 1994b ) is the latest BIQ
improvement. The main differences with the original BIQ are four additional items and a 7-point Likert-type
response format instead of the original dichotomous scoring.
The California Q-set Alexithymia Prototype (CAQ-AP; Haviland & Reise, 1996a) is an alexithymia observation
scale based on the California Q-set statements. The measure is time consuming (about 45 minutes) and needs
expertise. In fact, the measure was meant to be used by professionals.
The Observation Alexithymia Scale (OAS; Haviland et al., 2000 ) is a brief observation scale, based on the
CAQ-AP, to be used by professionals as well as laymen. The main difference with the CAQ-AP is that the state-
ments are written in ordinary language, understandable by laymen.
The Toronto Structured Interview for Alexithymia (TSIA; Bagby et al., 2006 ) was, as its name indicates, meant
to structure the preceding interview as much as possible (including probes and prompts), thereby reducing the
importance of the interviewer's expertise and skills.
Self-rating scales: The quality of self-rating scales is fully dependent on the quality of the items. It is for this
reason that we describe the development history of each scale and the sources of the items included in
the scales.
The 20-item Toronto Alexithymia Scale (TAS-20; Bagby et al., 1994a) had two precursors: the above-mentioned
TAS-26 and the Toronto Alexithymia Scale Revised (TAS-R; Taylor, Bagby, & Parker 1992 ). The final scale covers
three subscales: Difficulty identifying feelings, Difficulty describing feelings, and Externally-oriented thinking.
Most of the relatively recent alexithymia research has been done with the TAS-20.
The 40-item Bermond-Vorst Alexithymia Questionnaire (BVAQ; Vorst & Bermond 2001 ) has also been used
extensively in research, although less than the TAS-20. The scale covers five subscales: Inability to differentiate
between emotions, Inability to verbalize emotions, Inability to analyze emotions, Inability to fantasize, and
Inability to experience emotions. Higher-order factor analyses indicated that these five subscales are part of two
higher dimensions: an alexithymia cognitive factor and an alexithymia affective factor. Scores on the scale can be
analyzed at the subscale level and at the level of the two dimensions mentioned. Since the two dimensions are
orthogonal, the 40-item sum-total score is without meaning.
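The distinction between subscale-level and dimension-level scoring can be sketched as follows. This is a minimal Python illustration with simulated data: the item-to-subscale key is invented, and the grouping of three cognitive and two affective subscales follows the common description of the BVAQ dimensions rather than the published scoring instructions.

```python
import numpy as np
import pandas as pd

# Hypothetical 40-item, 5-point responses; item numbering and subscale
# assignment are assumptions and do not reproduce the published BVAQ key.
SUBSCALES = ["differentiating", "verbalizing", "analyzing", "fantasizing", "experiencing"]
rng = np.random.default_rng(1)
responses = pd.DataFrame(rng.integers(1, 6, size=(200, 40)),
                         columns=[f"q{i}" for i in range(1, 41)])
assignment = {f"q{i}": SUBSCALES[(i - 1) // 8] for i in range(1, 41)}  # 8 items each

# Subscale scores as simple sums (the real key also reverse-scores some items).
subscale_scores = pd.DataFrame({
    name: responses[[q for q, s in assignment.items() if s == name]].sum(axis=1)
    for name in SUBSCALES
})

# Two higher-order composites; because the dimensions are (near-)orthogonal,
# a single 40-item total would mix unrelated sources of variance.
cognitive = subscale_scores[["differentiating", "verbalizing", "analyzing"]].sum(axis=1)
affective = subscale_scores[["fantasizing", "experiencing"]].sum(axis=1)
print(subscale_scores.describe().loc[["mean", "std"]])
print("cognitive-affective r:", round(float(np.corrcoef(cognitive, affective)[0, 1]), 2))
```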
The Psychological Treatment Inventory Alexithymia Scale (PTI-AS; Gori et al., 2012 ) is an extremely short scale
of only five items.
Measurement of alexithymia in adolescents and children: Although this category contains observation scales as well
as self-rating scales, we opted to group them together, since the measurement of alexithymia in children has
specific problems.
The Alexithymia Observation Scale for Children (AOSC; Fukunishi et al., 1998 ) is a 12-item observation scale
covering two subscales: Difficulty communicating to others and Difficulty relating to others, to be filled out on
the basis of observations of behavior.
The Toronto Alexithymia Scale for Children (TAS12; Heaven et al., 2010; Rieffe et al., 2010 ) is a 12-item self-rating
scale containing the 12 items of the TAS-20 that concern the subscales Difficulty identifying feelings and
Difficulty describing feelings. There are two versions: one contains the original TAS items; the other contains the
same items rewritten in children's language.
The Emotional Awareness Questionnaire (EAQ-30; Rieffe et al., 2008 ) is a 30-item self-rating scale covering six
subscales: Differentiating emotions, Verbal sharing of emotions, Analyzing emotions, Not hiding emotions, Bodily
awareness, and Attending to others' emotions. The first three subscales aim to measure the same domains as the
TAS-20; the other three aim to measure alexithymia-related features.
PROJECTION SCALES
Rorschach Alexithymia Scale (RAS)
(Porcelli & Mihura 2010 ).
Variable
The RAS was developed to provide a projection scale estimate of the TAS-20 alexithymia scores. The RAS con-
sists of a selection of Rorschach indices, meaning that individuals have to report what they see in all Rorschach
inkblots. These responses must then be scored with the aid of the extensive Rorschach manual ( Exner, 1993 ).
Description
Porcelli and Meyer (2002) stated that earlier Rorschach–alexithymia studies had various methodological short-
comings. They therefore re-studied the associations of various Rorschach indices with the TAS-20 in a group of
92 chronic inflammatory bowel disease patients. On the basis of face validity they selected 27 Rorschach indices, of which 24
turned out to be significantly related to alexithymia as measured by the TAS-20. More recently, the RAS ( Porcelli &
Mihura, 2010 ) was developed using scores of 127 psychiatric outpatients. For reasons of effect size, redundancy,
and simplicity, Porcelli and Mihura reduced the 24 indices down to six. One first has to present all Rorschach
inkblots, then score the person's responses according to the Rorschach manual, and then select the relevant
RAS indices, making administration time consuming, with the total time needed depending on the number of
responses to the various inkblots.
There is only one RAS publication (Porcelli, personal communication).
Sample
Porcelli and Meyer’s (2002) sample consisted of 92 chronic inflammatory bowel disease (IBD) patients, taking
5-aminosalicylate alone or in combination with steroid treatment, according to their IBD activity status. No
patient had undergone surgery. The sample was homogeneous for disease, geographical area, and treatment set-
ting (percentage of women and mean age not reported). Porcelli and Mihura's (2010) sample consisted of 127 psychiatric
outpatients (57.5% women) with a mean age of 30.4 years (SD = 9.9) and a mean education of 13.0 years
(SD = 3.7).
Reliability
Inter-Rater
Porcelli and Meyer (2002) reported intra-class correlations for the Rorschach indices between two raters scor-
ing the same 30 Rorschach protocols (M = .87; range .72 to 1.00).
Validity
Convergent/Concurrent
The sum of the RAS weighted scores (unstandardized regression coefficients) correlated positively with the TAS-
20 total (.78) and with the TAS-20 subscales DIF (.77), DDF (.63), and EOT (.69). The RAS cut-off score of 56 yielded sensitivity, specificity,
and overall correct classification of .88, .94, and .92, respectively, relative to TAS-20 classifications ( Porcelli & Mihura,
2010 ).
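To show how such sensitivity, specificity, and overall-classification figures are obtained, a minimal simulation sketch follows. The index weights, the index cut-off of 55, and the data are placeholders rather than the published RAS values; the TAS-20 cut-off of 61 is the conventionally cited threshold for alexithymia.

```python
import numpy as np

# Simulated data only: an index score (e.g., a weighted sum of several Rorschach
# variables) and TAS-20 totals for 127 hypothetical respondents.
rng = np.random.default_rng(2)
n = 127
latent = rng.normal(size=n)
index_score = 50 + 10 * latent + rng.normal(scale=4, size=n)
tas20_total = 50 + 12 * latent + rng.normal(scale=6, size=n)

pred = index_score >= 55        # classification by the index (invented cut-off)
truth = tas20_total >= 61       # classification by the TAS-20, used as the reference

tp = np.sum(pred & truth)
tn = np.sum(~pred & ~truth)
fp = np.sum(pred & ~truth)
fn = np.sum(~pred & truth)

sensitivity = tp / (tp + fn)    # proportion of TAS-20 'alexithymic' cases detected
specificity = tn / (tn + fp)    # proportion of non-cases correctly rejected
accuracy = (tp + tn) / n        # overall correct classification
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}, accuracy = {accuracy:.2f}")
```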
Location
Porcelli, P. & Mihura, J.L. (2010). Assessment of alexithymia with the Rorschach comprehensive system: The
Rorschach Alexithymia Scale (RAS). Journal of Personality Assessment , 92, 128 /C0136.
Results and Comments
The limited data available point to good concurrent validity with the TAS-20. However, since there are no other
psychometric data, more RAS studies are necessary. Furthermore, the Rorschach administration is time consum-
ing and requires extensive experience. Finally, RAS scores are just estimates of TAS-20 scores.
RAS Sample Items
Sample Rorschach indices: Form%, CDI, M (Rorschach Comprehensive System; Exner, 1993 ).
OBSERVATION SCALES
Modified Beth Israel Questionnaire (M-BIQ)
(Taylor et al., 1997 )
Variable
As a measure of alexithymia, the M-BIQ was developed in order to improve on the psychometric properties of
the Beth Israel Questionnaire (BIQ; see below).
Description
The Beth Israel Questionnaire (BIQ; Sifneos, 1973 ) was the first alexithymia scale published. It was developed
to provide an alexithymia observation scale, to be filled out, using a dichotomous response format, by the
attending therapist, based on knowledge he or she already has, or has obtained in a preceding interview. The BIQ
comprises 17 items, of which eight refer to alexithymia (six indicative and two contra-indicative). The core items
cover the following domains: verbalizing emotions, emotion expression, fantasy, and external thinking. All
reviewers ( Paulson, 1985; Sriram et al., 1988; Bretagne et al., 1992; Linden et al., 1995 ) agreed that, provided the
interview is structured and recorded, the BIQ has acceptable psychometric properties. Taylor
et al. (1997) have produced a Modified Beth Israel Questionnaire (M-BIQ). Adaptation was considered necessary
since studies that used separate interviews to rate the same patients reported low inter-rater reliabilities, indicat-
ing that the scores are influenced by the experience, bias, and style of the interviewer ( Taylor et al., 2000 ). The
M-BIQ items were first published in Taylor et al. (1997) , and psychometric data were added later by Bagby et al.
(1994b) . BIQ items were rewritten and four new items (referring to fantasy and dreaming) were added to the
original eight BIQ items. In addition, the rating scale was changed from a dichotomous to a 7-point Likert-type
format. The resulting 12-item questionnaire comprises six items pertaining to the ability to identify and verbally
communicate feelings (Affect Awareness – AA), and six items pertaining to imaginal activity and externally-
oriented thinking (Operatory Thinking – OT) ( Bagby et al., 1994b ).
All items have sufficient face validity.
Sample
The two Bagby et al. (1994b) samples comprised (1) 39 patients (14 males, 25 females; M = 36.62 years,
SD = 10.56) referred for assessment and possible treatment in a behavioral medicine outpatient clinic; and
(2) 85 undergraduate students (28 males, 55 females; M = 21.47 years, SD = 5.24), respectively.
Reliability
Internal Consistency
Cronbach alpha coefficients reported by Fukunishi, Nakagawa, Nakamura, Kikuchi, and Takubo (1997) ranged
from .70 to .85 (M = .79). Comparable alpha coefficients were reported by Haviland, Warren, Riggs, and
Nitch (2002) (.90), by Lumley, Gustavson, Partridge, and Labouvie-Vief (2005) (.83), and by Meganck
et al. (2011) (.85).
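For readers who wish to reproduce this kind of internal-consistency estimate from raw item scores, a standard Cronbach's alpha computation is sketched below; the item data are simulated placeholders, and only the formula itself is standard.

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(total))."""
    k = item_scores.shape[1]
    item_variances = item_scores.var(axis=0, ddof=1)
    total_variance = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Simulated 12-item scale with a common factor (placeholder data, not M-BIQ items).
rng = np.random.default_rng(3)
latent = rng.normal(size=(150, 1))
items = latent + rng.normal(scale=1.0, size=(150, 12))
print(f"alpha = {cronbach_alpha(items):.2f}")
```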
Test–Retest
Taylor et al. (2000) reported a test–retest reliability coefficient over a three-month interval of .71 for a college
student sample and .51 for a psychiatric outpatients’ sample.
Inter-Rater
Bagby et al. (1994b) reported significant inter-rater reliability coefficients among three clinicians who inter-
viewed 39 outpatients referred to a behavioral medicine clinic (kappa = .51).
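The kappa statistic corrects raw agreement for agreement expected by chance. A two-rater (Cohen's) version can be computed as in the sketch below, which uses invented ratings and should not be read as the original analysis; the published figure presumably reflects a multi-rater generalization of kappa.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Simulated dichotomous alexithymia judgments (1 = alexithymic) by two raters
# for the same 39 cases; the data are placeholders.
rng = np.random.default_rng(4)
rater_a = rng.integers(0, 2, size=39)
rater_b = np.where(rng.random(39) < 0.75, rater_a, 1 - rater_a)  # ~75% raw agreement

print(f"Cohen's kappa = {cohen_kappa_score(rater_a, rater_b):.2f}")
```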
Validity
Convergent/Concurrent
Haviland et al. (2002) reported positive correlations between the M-BIQ and the Observation Alexithymia
Scale (OAS; Haviland et al., 2000; see section below on OAS) of .69, between the M-BIQ subscale AA and the OAS of
.75, and between the M-BIQ subscale OT and the OAS of .48, while the correlations between M-BIQ and OAS subscales varied
between .16 and .71 (mean = .47). However, in the Lumley et al. (2005) study, the correlations were all ≤ .30 (range
.00 to .19). This striking difference could be explained by the fact that different raters completed the OAS and
M-BIQ in the Lumley study. Finally, Meganck et al. (2011) reported a positive correlation (.36) between the
M-BIQ and the OAS. Bagby et al. (1994b) , and Meganck et al. (2011) both reported positive correlations between
the M-BIQ and the Toronto Alexithymia Scale (TAS-20, see section below on TAS-20), of .53 and .48, respectively.
Bagby et al. also reported positive correlations with the TAS-20-subscales as follows: DIF (.36), DDF (.57), and
EOT (.30). Taylor et al. (2000) cited a Spanish study (Martı ´nez-Sa ´nchez et al., 1998) reporting significant correla-
tions between the M-BIQ and its subscales with the TAS-20 (.47 to .51). In addition, Lumley et al. (2005) reported
the following correlations: TAS-20/M-BIQ (.26), TAS-20/M-BIQ-subscale AA (.33), TAS-20/M-BIQ-subscale OT
(.12). Fukunishi et al. (1997) reported sensitivity and selectivity of the M-BIQ to the TAS-20 of 84.2% and 89.1% in
a psychiatric sample and 77.1% and 82.5% in a student sample. Also, Meganck et al. (2011) reported a positive
correlation of .76 between the M-BIQ and Toronto Structured Interview for Alexithymia (TSIA, see section below
on TSIA), and .59 with ‘alexithymia’ (see Footnote 1).
Divergent/Discriminant
Scores on the M-BIQ and its two subscales were found to be unrelated to general intelligence and vocabulary
scores, as measured by the Shipley Institute of Living Scale. However, the M-BIQ subscale AA correlated –.32
with the abstract thinking scores of the CCEI ( Taylor et al., 2000 ).
Construct/Factor Analytic
Fukunishi et al. (1997) performed a principal components analysis with varimax rotation using a small sample
of 149 psychiatric outpatients. They found that the M-BIQ mean component loading on subscale AA was .58, and
on OT was .55. Total explained variance was: M-BIQ (40.9%), AA (23.7%), and OT (17.2%). The authors reported
comparable results for a sample of 501 college students. The two M-BIQ subscales were found to correlate .41
(Fukunishi et al., 1997 ) and .63 ( Haviland et al., 2002 ), respectively.
Criterion/Predictive
Fukunishi et al. (1997) reported correlations of the M-BIQ with the MMPI-2 subscales (N = 149). Of the 29 cor-
relations, 17 were greater than .30 and in the expected direction. The authors further reported comparable results
for an undergraduate sample (N = 473). Lumley et al. (2005) reported negative correlations of the M-BIQ with the
Emotional Approach Coping Scale (–.29) and the Trait Meta-Mood Scale (–.32).
Location
Taylor, G.J., Bagby, R.M., & Parker, J.D.A. (1997). Disorders of affect regulation: Alexithymia in medical and psychiatric illness. Cambridge University Press.
Results and Comments
All items have sufficient face validity. There is evidence of acceptable reliability and of discriminant and convergent validity. However, the published subscale correlations (.41 and .63) suggest considerable measurement overlap.
M-BIQ Sample Items
The patient mostly described details concerning symptoms rather than feelings.
The patient expressed affect more in physical terms than in thoughts.
The patient had a rich affective vocabulary.
The patient indicated that he/she did not daydream very much.
California Q-set Alexithymia Prototype (CAQ-AP)
(Haviland & Reise, 1996a ).
Variable
The CAQ-AP was developed in order to provide an observational measure of alexithymia based on the California Q-set, to be used by professionals. The California Q-set consists of 100 statements, meant to describe persons in standardized language (Block, 1961).
Description
Haviland and Reise (1996a) described the prototypical features of the alexithymic individual. They asked 17
alexithymia experts to sort all 100 statements of the California Q-set into a forced 9-category, quasi-normal distri-
bution varying from ‘alexithymia most-uncharacteristic’ to ‘alexithymia most-characteristic’. Thirteen alexithymia
experts returned usable sorts. Individual item scores of these 13 experts were summed, ranked, and converted to
the original 9-point score distribution. The California Q-set Alexithymia Prototype (CAQ-AP) was formed from
the 13 statements with the highest sums (5 most-characteristic and 8 quite-characteristic), and the 13 statements
with the lowest sums (5 most-uncharacteristic and 8 quite-uncharacteristic). The CAQ-AP has no subscales. Since the California Q-set was constructed before the introduction of the alexithymia construct, many of the CAQ-AP items refer to constructs related to alexithymia, but not to alexithymia directly.
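The aggregation step described above (summing the 13 usable expert sorts, ranking the sums, and taking the 13 highest- and 13 lowest-ranked statements) can be sketched as follows. This is a minimal illustration with randomly generated placeholder sorts rather than the actual expert data; the variable names are ours, and the final conversion of the summed ranks back to the 9-point distribution is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder data: 13 usable expert sorts of the 100 California Q-set
# statements into the forced 9-category distribution
# (1 = most uncharacteristic of alexithymia ... 9 = most characteristic).
n_experts, n_items = 13, 100
sorts = rng.integers(1, 10, size=(n_experts, n_items))

# Sum the individual item scores across the experts and rank the sums.
item_sums = sorts.sum(axis=0)
ranked = np.argsort(item_sums)            # item indices, lowest sum first

prototype_uncharacteristic = ranked[:13]  # 13 lowest-ranked statements
prototype_characteristic = ranked[-13:]   # 13 highest-ranked statements

print(sorted(prototype_characteristic.tolist()))
print(sorted(prototype_uncharacteristic.tolist()))
```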
Sample
The Haviland and Reise (1996a) sample comprised 13 alexithymia experts. The sample in the Haviland (1998) study consisted of 155 undergraduate students (84 women, 71 men; M = 20 years, SD = 2), rating contemporary and historical leaders.
Reliability
Inter-Rater
Haviland and Reise (1996a) reported low to acceptable inter-rater reliability correlations for the various
Q-statements with a mean Q-correlation of .58 (ranging from .20 to .75). However, correlations between individ-
ual raters and the mean CAQ-AP scores (over all raters) were much higher (mean Q-correlation .77, range .54 to
.85).
Validity
Convergent/Concurrent
Haviland and Reise (1996a) reported CAQ-AP Q-correlations with two Lewinian constructs: one of which was positively related to alexithymia, 'Ego-control' (.45), and one assumed to be negatively related to alexithymia, 'Ego-resiliency' (−.70). These authors further reported correlations between the CAQ-AP and the Social Skills Inventory Emotional Expression scale (−.34), the NEO Personality Inventory Extraversion scale (.38), and the Beck Depression Inventory (.38; all other correlations were < .30). Finally, Haviland and Reise (1996a) reported a Q-correlation with the Overcontrolled prototype of .45.
Divergent/Discriminant
Haviland, Sonne, and Kowert (2004) compared the CAQ-AP with the California Q-set Psychopathy Prototype
(CAQ-PP). The correlation between the CAQ-AP scores and those on the CAQ-PP, in a group of 155 undergradu-
ates was .13.
Location
Haviland, M.G., & Reise, S.P. (1996). A California Q-set alexithymia prototype and its relationship to ego-control and ego-resiliency. Journal of Psychosomatic Research, 41(6), 597–608.
Results and Comments
Some convergent/concurrent validity correlations in the Haviland (1998) study could have been inflated by the fact that some Q-statements in the CAQ-AP relate more or less directly to extraversion, depression, and ego control (see sample items), suggesting that the CAQ-AP is less suitable for research purposes. The CAQ-AP is also a time-consuming procedure (45 to 60 minutes; Haviland et al., 2000).
CAQ-AP Sample Items
Is socially receptive of a wide range of interpersonal cues.
Feels a lack of personal meaning in life.
Has a brittle ego-defense system; has a small reserve of integration; would be disorganized and maladaptive when under stress or trauma.
Is emotionally bland; has flattened affect.
Observation Alexithymia Scale (OAS)
(Haviland et al., 2000 )
Variable
The OAS was developed in order to provide a relatively brief observational measure of alexithymia stated in ordinary language, so that it could be used by professionals as well as laypersons.
Description
Haviland et al. (2000) started by rewriting the 26 Q-statements of the CAQ-AP and added some new ones.
This was considered necessary in order to: (1) eliminate passive voice, double negatives, and ambiguity;
(2) maintain reading ease; (3) preserve balance between indicative and contra-indicative items; and (4) maintain
good conceptual coverage of the alexithymia construct. In a study among 203 students at a health science university, rating other people whom they knew well, 33 items out of the original 44-item set were selected for the OAS. Each item has to be rated on a 4-point response scale. The OAS comprises five subscales labeled: Distant,
Uninsightful, Somatizing, Humorless, and Rigid.
Although the procedure followed by the authors is straightforward, many of the OAS items (see the sample items section) seem far removed from the original description of the alexithymic individual as given by Nemiah and Sifneos (1970a) and Sifneos (1973). Nevertheless, given the method used, there are good reasons to assume that these OAS items are still related to alexithymia. Fifteen items are negatively keyed; however, these contra-indicative items are quite unequally spread over the subscales.
Sample
The Haviland et al. (2000) item selection study was based on an initial sample of 203 undergraduates (73% women; 27% men) who were asked to rate other people whom they knew very well: parents, spouses, girlfriends/boyfriends, adult children, or siblings. Of the targets, 42% were women; mean age was 32 years (range 17 to 90). The sample used for the EFA consisted of 467 undergraduates (61% female; 39% male), rating the same type of persons as mentioned above. Of the targets, 46% were women; mean age was 26 years (range 18 to 78). The sample used for the CFA consisted of 352 graduate students (79% women; 21% men), rating similar persons as mentioned above. Of the targets, 54% were women; mean age was 28 years (range 18 to 78). The sample in the Haviland, Warren, Riggs, and Gallacher (2001) study consisted of 20 clinical psychologists who each rated a patient (not suffering from dementia, bipolar disorder, or schizophrenia, nor diagnosed with an Axis II disorder) whom they knew very well.
Reliability
Internal Consistency
Haviland et al. (2000) reported Cronbach alpha coefficients of .88 and .89 for the OAS, while those for the subscales varied between .72 and .86 (comparable results were reported by Haviland et al., 2001, 2002; Yao, Yi, Zhu, & Haviland, 2005; Berthoz, Haviland, Riggs, Perdereau, & Bungener, 2005; Mueller, Alpers, & Reim, 2006; Dorard et al., 2008; Thorberg et al., 2010; Meganck, Vanheule, Desmet, & Inslegers, 2010, 2011; Foran, O'Leary, & Williams, 2012; and Coolidge, Estey, Segal, & Marle, 2013). However, Dorard et al. (2008), Meganck et al. (2010, 2011), and Coolidge et al. (2013) reported somewhat lower alpha coefficients for the subscale Rigid (.54, .52, .43, and .61).
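For readers who want to reproduce this kind of internal-consistency figure on their own data, a minimal Cronbach's alpha computation is sketched below. It uses the standard formula; the rating matrix is simulated, and the function name is ours.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Illustration with simulated ratings of 200 targets on a 10-item subscale
# (4-point response scale, as used by the OAS).
rng = np.random.default_rng(1)
fake_scores = rng.integers(1, 5, size=(200, 10))
print(round(cronbach_alpha(fake_scores), 2))
```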
Test/C0Retest
Haviland et al. (2000) reported a stability coefficient for the OAS of .87 over a 2–3 week interval, with test–retest correlations for the subscales ranging from .61 (Somatizing) to .87 (Humorless). Comparable results were reported by Haviland et al. (2001), Yao et al. (2005), and Meganck et al. (2010). Thorberg et al. (2010), using a test–retest interval of three months, reported somewhat lower stability coefficients, especially for the OAS (.65) and the subscale Uninsightful (.48).
Inter-Rater
Inter-rater reliabilities were reported by Yao et al. (2005), Mueller et al. (2006), Berthoz, Perdereau, Godart, Corcos, and Haviland (2007), Dorard et al. (2008), and Meganck et al. (2010). The coefficients for the OAS varied between .71 and .80, whereas those for the OAS subscales ranged from .32 to .89. The results for the subscales Uninsightful (mean .56, range .32 to .78) and Rigid (mean .56, range .35 to .64) were especially low.
Validity
Convergent/Concurrent
The correlations between the OAS and the M-BIQ have been presented above. In addition, Meganck et al. (2011) reported a positive correlation of .37 between the OAS and the TSIA, and a positive correlation of .59 with 'alexithymia' as measured by their scale (see Footnote 1). Berthoz et al. (2005, 2007), Lumley et al. (2005), Yao et al. (2005), Mueller et al. (2006), Dorard et al. (2008), and Foran et al. (2012) reported positive correlations with the TAS-20 varying between .25 and .41 (mean .34), between the OAS and TAS-20 subscales ranging from .03 to .40 (mean .23), and between the TAS-20 and OAS subscales ranging from .02 to .38 (mean .23). Thorberg et al. (2010) and Meganck et al. (2010, 2011) reported low correlations: OAS with TAS-20, .09, .23, and .28; OAS with TAS-20 subscales, range .03–.22, mean .12; TAS-20 with OAS subscales, range .02–.22, mean .12. The raters in the Thorberg et al. and Meganck et al. studies were either clinical psychologists at masters or doctoral level or attending therapists. Thus, these very disappointing results cannot be explained by assuming that the raters had serious shortcomings in their psychological skills or in their knowledge of their patients.
Berthoz et al. (2007) and Dorard et al. (2008) measured alexithymia using the BVAQ-B and reported OAS/BVAQ-B correlations of .31 and .46, those between the OAS and BVAQ-B subscales ranging from .11 to .37 (mean .24), and those between the BVAQ-B and OAS subscales ranging from .04 to .39 (mean .25). However, since the BVAQ provides two independent sum scores, the correlations should have been calculated with these two sum scores (see section BVAQ).
Construct/Factor Analytic
An EFA (principal axis factoring procedure) was carried out using a sample of 467 university students (rating either their family members, girlfriends/boyfriends, or other friends), which resulted in five factors labeled 'Distant' (10 items, 17%), 'Uninsightful' (8 items, 14.2%), 'Somatizing' (5 items, 9.7%), 'Humorless' (5 items, 13.6%), and 'Rigid' (5 items, 11.1%) (Haviland et al., 2000). However, 23 of the 33 items had cross loadings ≥ .30, indicating a less than optimal simple-structure solution. The authors also presented a CFA using a sample of 352 university students. The model tested comprised the five primary factors and a second-order factor (alexithymia), with within-dimension item parcels of 2 to 4 items in each parcel and 2 or 3 parcels per dimension. This model provided a good fit to the data (χ²/df = 1.33, CFI = .99). Similar results using item parcels have been reported (Haviland et al., 2001, 2002; Berthoz et al., 2005; Yao et al., 2005). However, Meganck et al. (2010) pointed out that unidimensionality of the scales is an important requirement for the use of parcels; if it is not met, item parceling can disguise misspecified models and erroneously indicate a good fit for bad models. Given the many cross loadings mentioned above, we may assume that this condition is not fulfilled. Also, many of the studies mentioned above allowed post hoc relaxations of the model to improve the fit. Because of these statistical shortcomings, Meganck et al. (2010) studied the factor structure of the Dutch OAS version in both a clinical sample (201 psychiatric inpatients, rated by their attending psychologist) and a nonclinical sample (264 persons, rated by university students). Three models were tested in both samples: (1) Haviland's model with five first-order factors (Distant, Uninsightful, Somatizing, Humorless, Rigid) loading on one second-order factor, using item parcels; (2) the same model but with all items loaded separately; (3) a first-order model with five correlated factors (see above), with all items loaded separately. An excellent fit was found for the first model in both samples (clinical sample: CFI = .99, SRMR = .05; nonclinical sample: CFI = .98, SRMR = .06). Both other models turned out to have a much lower fit to the data: Model 2 (clinical sample: CFI = .88, SRMR = .13; nonclinical sample: CFI = .89, SRMR = .14); Model 3 (clinical sample: CFI = .88, SRMR = .13; nonclinical sample: CFI = .90, SRMR = .11).
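The item-parceling strategy discussed above (averaging 2–4 within-dimension items into 2 or 3 parcels per dimension and fitting the CFA to the parcel scores rather than the raw items) can be illustrated as follows. The grouping of items into parcels here is arbitrary and hypothetical, not Haviland et al.'s actual assignment.

```python
import numpy as np

def build_parcels(item_scores, parcel_index):
    """Average within-dimension items into parcels (one column per parcel)."""
    item_scores = np.asarray(item_scores, dtype=float)
    return np.column_stack([item_scores[:, idx].mean(axis=1) for idx in parcel_index])

# Hypothetical 10-item 'Distant' dimension split into three parcels;
# the grouping below is invented for illustration only.
rng = np.random.default_rng(2)
distant_items = rng.integers(1, 5, size=(300, 10))
parcels = build_parcels(distant_items, [[0, 1, 2, 3], [4, 5, 6], [7, 8, 9]])
print(parcels.shape)  # (300, 3): parcel scores fed to the CFA instead of raw items
```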
Haviland et al. (2001, 2002), Berthoz et al. (2005, 2007), Yao et al. (2005), Lumley et al. (2005), and Dorard et al. (2008) reported correlations between the OAS subscales as follows: Distant/Uninsightful, range .16 to .54 (mean .31); Distant/Humorless, range .38 to .65 (mean .54); Distant/Rigid, range .28 to .46 (mean .37); Uninsightful/Somatizing, range .32 to .55 (mean .42); Uninsightful/Humorless, range .23 to .41 (mean .33); Uninsightful/Rigid, range .26 to .57 (mean .44); Somatizing/Rigid, range .20 to .46 (mean .34); and Humorless/Rigid, range .34 to .55 (mean .43).
Criterion/Predictive
Haviland et al. (2001) compared a clinical sample with a non-clinical sample and presented effect sizes (the group difference divided by the SD of the nonclinical group) for the differences between the two groups. These effect sizes were large (OAS 1.3; subscales varied between 0.8 [Somatizing] and 1.1 [Humorless]). Lumley et al. (2005) reported correlations between the OAS and OAS subscales with: (1) the Levels of Emotional Awareness Scale; (2) the Mayer–Salovey–Caruso Emotional Intelligence Test and its four subscales; and (3) the Trait Meta-Mood Scale and its three subscales. Of the 72 correlations calculated, only four exceeded .30.
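As we read the 'd/SD of nonclinical group' notation, the reported effect sizes are raw group differences scaled by the nonclinical group's standard deviation:

\[
d = \frac{\bar{X}_{\text{clinical}} - \bar{X}_{\text{nonclinical}}}{SD_{\text{nonclinical}}}
\]

so an effect of 1.3 means that the clinical group's mean OAS score lies 1.3 nonclinical standard deviations above the nonclinical mean.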
However, Foran et al. (2012) reported predictive correlations, using a sample of 109 married/cohabiting couples, between the OAS and the Symptom Checklist-90, the Beck Depression Inventory Revised, the Emotional Intelligence Scale, the couples' Emotional Awareness Scale (measuring emotional awareness of oneself and one's partner), and the Marital Satisfaction Inventory Revised (providing eight measurements for various relationship-relevant domains of satisfaction). Among males, 11 out of the 13 correlations were significant and in the expected direction (range .20 to .64, mean .45), whereas among females, 12 correlations were significant (range .24 to .54, mean .45). Dorard et al. (2008) calculated correlations between OAS scores and those on the BDI-13 and the State-Trait Anxiety Inventory (STAI). Only the STAI-trait correlated significantly with the OAS scores (.37). However, the authors also reported, although not announced in advance, predictive correlations between the OAS and rater BDI, rater STAI-trait, and rater STAI-state scores (.35, .27, and .31, respectively).
Mueller et al. (2006) studied a group of 45 psychosomatic inpatients with either high or low OAS scores and compared responses on an emotional Stroop task. High-scoring patients showed significantly less emotional bias for emotionally negative words, whereas such results were not found when using the TAS-20. Berthoz et al. (2007) used the TAS-20, and Dorard et al. (2008) used the TAS-20 and the BVAQ-B, to classify individuals into alexithymics vs. non-alexithymics. In both studies, non-alexithymics scored significantly lower on the OAS. Finally, Haviland et al. (2001) used receiver operating characteristic (ROC) analyses to determine OAS total and subscale score thresholds for differentiating the clinical from the non-clinical members of their group. Sensitivity and specificity of the OAS and subscales were acceptable (sensitivity: .73 for the OAS to .63 for Somatizing; specificity: .80 for the OAS to .60 for Rigid).
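A receiver operating characteristic analysis of the kind used by Haviland et al. (2001) to pick score thresholds can be sketched as follows. The group labels and OAS-like scores below are simulated, and the Youden-index rule for choosing the cutoff is one common convention, not necessarily the one the original authors used.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Simulated data: 1 = clinical group member, 0 = non-clinical member;
# 'scores' stand in for OAS total scores (all values are invented).
rng = np.random.default_rng(3)
group = np.concatenate([np.ones(60, dtype=int), np.zeros(60, dtype=int)])
scores = np.concatenate([rng.normal(60, 10, 60), rng.normal(45, 10, 60)])

fpr, tpr, thresholds = roc_curve(group, scores)

# Youden's J picks the threshold with the best balance of
# sensitivity (tpr) and specificity (1 - fpr).
j = tpr - fpr
best = int(np.argmax(j))
print(f"cutoff={thresholds[best]:.1f}  sensitivity={tpr[best]:.2f}  "
      f"specificity={1 - fpr[best]:.2f}")
```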
Location
Haviland, M.G., Warren, W.L., & Riggs, M.L. (2000). An observer scale to measure alexithymia. Psychosomatics, 41, 385–392.
Results and Comments
Most items refer to features related to alexithymia and thus do not measure alexithymia directly. Fifteen items are negatively keyed; however, these contra-indicative items are quite unequally spread over the subscales. The OAS factors explain 66% of the variance. However, the many cross loadings and the correlations between subscales are problematic. The OAS and its subscales have adequate reliabilities, whereas convergent/concurrent validity correlations with the M-BIQ and TSIA produced acceptable results only if the same person rated both measures. Furthermore, the correlations between the OAS and the TAS-20 and BVAQ point to insufficient concurrent validity. However, published studies indicate convergent validity.
OAS Sample Items
Is a warm person (Distant).
Falls apart when things are really tough (Uninsightful).
Worries much about his or her health (Somatizing).
Has a good sense of humor (Humorless).
Is too self-controlled (Rigid).
Toronto Structured Interview for Alexithymia (TSIA)
(Bagby et al., 2006 ).
Variable
The TSIA is a structured interview with an observation scale. The TSIA manual (Bagby et al., 2009; Grabe et al., 2014) provides guidelines for the administration (interview questions with probes and prompts) and scoring. Interviewers/raters should be familiar with the alexithymia construct and be trained in administering the TSIA.
Description
In developing the TSIA, Bagby et al. (2006) started with 60 interview questions based on the TAS-26, TAS-20, BIQ, and other measures related to alexithymia. After a pilot study 43 items remained, and after an item selection procedure much like that described for the TAS-20 (see section TAS-20), 24 items remained, six for each facet. These facets are, with the exception of Imaginal processes (IMP, not measured by the TAS-20), comparable to those of the TAS-20 and are labeled: Identifying emotional feelings (DIF), Describing emotional feelings (DDF), and Externally oriented thinking (EOT). These facets are part of two higher-order dimensions/subscales labeled 'Affect Awareness' (AA = DIF & DDF) and 'Operative Thinking' (OT = EOT & IMP). All items have face validity for their facet.
Sample
The two samples in the Bagby et al. (2006) study comprised (1) 136 normal adults from the general community (41 males; 95 females; M = 32.3 years, SD = 9.78) and (2) 97 psychiatric outpatients (20 men; 77 women; M = 32.9 years, SD = 11.9). Both samples were predominantly middle class and had at least a high school education. Seven different raters were used; six had Master's degrees in clinical/counseling psychology, and one was a research assistant with training and experience in diagnostic interviewing.
Reliability
Internal Consistency
Bagby et al. (2006) reported Cronbach alpha coefficients for the TSIA dimensions (AA & OT) and subscales, as found in a community sample and a psychiatric outpatient sample, ranging from .70 to .88, with only one exception (community sample IMP, .61). Grabe et al. (2009), Carretti et al. (2011), Meganck et al. (2011), and Inslegers et al. (2013) reported comparable findings.
Inter-Rater
Bagby et al. (2006) presented inter-rater reliabilities, calculated as intraclass correlations, for the TSIA, the higher-order factors AA and OT, and the subscales, for experts as well as non-experts. The expert results in the patient sample for the TSIA, AA, and OT were .90, .86, and .93, respectively, whereas those for the subscales ranged between .82 and .93. These coefficients were somewhat lower in the community sample (.73, .74, and .68; subscales .71 to .84). The non-expert results were: clinical sample .83, .85, and .68, subscales .82 to .86; community sample .73, .75, and .68, subscales .69 to .75. Grabe et al. (2009) and Carretti et al. (2011), in various patient groups, found intraclass correlations comparable to those in the patient group of Bagby et al. (2006), whereas Inslegers et al. (2013) reported somewhat lower coefficients.
Validity
Convergent/Concurrent
Concurrent validity was established by correlations with the TAS-20: patient sample TSIA/TAS-20 (.68), TSIA-AA/TAS-20 (.80), TSIA-OT/TAS-20 (.39), TSIA-DIF/TAS-20 (.77), TSIA-DDF/TAS-20 (.70), TSIA-EOT/TAS-20 (.55), and TSIA-IMP/TAS-20 (.11) (Bagby et al., 2006). Of the correlations between TSIA and TAS-20 subscales, we mention only those claimed to measure the same construct: patient sample DIF/DIF (.63), DDF/DDF (.63), EOT/EOT (.48). However, correlation coefficients in the community sample were lower: TSIA/TAS-20 (.36), TSIA-AA/TAS-20 (.42), TSIA-OT/TAS-20 (.20), TSIA-DIF/TAS-20 (.32), TSIA-DDF/TAS-20 (.42), TSIA-EOT/TAS-20 (.32), TSIA-IMP/TAS-20 (.01), DIF/DIF (.29), DDF/DDF (.37), EOT/EOT (.47). Grabe et al. (2009), using psychiatric inpatients, and Carretti et al. (2011), using a mixed group of healthy individuals and psychiatric or medical outpatients, also presented correlations with the TAS-20 comparable to those of Bagby et al.'s community sample (Grabe et al., ranging from .49 [TSIA] to .34 [IMP]; Carretti et al., ranging from .53 [AA] to .05 [IMP]). Meganck et al. (2011) reported a number of concurrent correlations found in a sample of psychiatric patients: TSIA/TAS-20 (.47), TSIA/OAS (.37), TSIA/'alexithymia' (see Footnote 1) (.45), TSIA/M-BIQ (.76). The high correlation between the TSIA and the M-BIQ was explained by the fact that the same rater completed both measures. Finally, Inslegers et al. (2013) reported a TSIA/TAS-20 correlation of .34.
With the exception of the figures from Bagby et al.'s (2006) patient group and the same-rater TSIA/M-BIQ correlation, all other correlations explain at most 28% of the variance, indicating that these alexithymia scales share some overlapping variance but mostly measure different domains. This is especially indicated by the low correlations between TSIA subscales and TAS-20 subscales that are purported to measure the same facets.
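The 28% figure follows from squaring the largest of the remaining correlations (.53, from Carretti et al.) to obtain the proportion of shared variance:

\[
r^{2} = (.53)^{2} \approx .28
\]

so even the strongest of these associations leaves roughly 72% of the variance unshared.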
Construct/Factor Analytic
The authors reported PCA analyses performed on the combined two samples mentioned above, extracting 2-, 3-, and 4-component solutions. The 4-component model (one component for each facet) explained 48.4% of the variance and was considered the best. A CFA was also performed with the combined two samples, in which eight models were tested. This provided very acceptable fit for two models: a 4-factor (one for each facet) non-hierarchical model, and a model with four factors (one for each facet) nested under two higher-order factors, 'Affect Awareness' (AA = DIF & DDF) and 'Operative Thinking' (OT = EOT & IMP). The authors opted for the latter (χ²/df = 1.5, GFI = .88, CFI = .91, NNFI = .89, RMSEA = .05). Comparable results have been described by Grabe et al. (2009), Carretti et al. (2011), and Inslegers et al. (2013).
Most correlations between subscales as presented by Bagby et al. (2006) were high: AA/OT (.55), DIF/DDF (.70), DIF/EOT (.52), DIF/IMP (.18), DDF/EOT (.66), DDF/IMP (.37), EOT/IMP (.47). Comparable figures were found by Grabe et al. (2009), Carretti et al. (2011), and Inslegers et al. (2013).
Criterion/Predictive
Meganck, Vanheule, Inslegers, and Desmet (2009) reported significant correlations with the Linguistic Inquiry and Word Count for the TSIA (frequency of references to others, β = .26) and for the TSIA subscale EOT (frequency of communication words, β = −.33; complexity of communication words, β = −.52; frequency of references to others, β = .38). The authors concluded that these results may indicate that alexithymic persons will talk more frequently about other people than about themselves. Inslegers et al. (2012) presented correlations with the subscales Complexity (−.42), Social Causality (−.31), and Complexity (−.18) of the Social Cognition and Object Relations Scale, and with the subscales Dominance (.09) and Affiliation (−.38) of the Inventory of Interpersonal Problems.
Location
Bagby, R.M., Taylor, G.J., Parker, J.D.A., & Dickens, S.E. (2006). The development of the Toronto Structured Interview for Alexithymia: Item selection, factor structure, reliability, and concurrent validity. Psychotherapy and Psychosomatics, 75, 25–39.
Results and Comments
The above evidence suggests a stable factor structure, acceptable to good reliabilities, including good inter-
rater reliabilities. However, in light of the high intercorrelations, interpretation of the dimension and some
subscale scores may be problematic. The concurrent validity seems insufficient. However, more research is
warranted.
TSIA Sample Items
Are you sometimes puzzled or confused about what emotion you are feeling? (DIF)
Is it usually easy for you to find words to describe your feelings to others? (DDF)
Do you tend to just let things happen rather than trying to understand why they turn out a certain
way? (EOT)
Is it rare for you to fantasize? (IMP) (Taylor, personal communication)
SELF-RATING SCALES
Toronto Alexithymia Scale (TAS-20)
(Bagby et al., 1994a ).
Variable
Since existing self-rating scales had insufficient psychometric properties, Taylor and coworkers developed the 26-item Toronto Alexithymia Scale (TAS). The scale was revised in 1992 (TAS-R) and again in 1994 (TAS-20).
Description
Taylor et al. (1985) published the 26-item Toronto Alexithymia Scale (here called TAS-26). The authors initially
described alexithymia more or less according to the description of Nemiah and Sifneos (1970a) :
'Alexithymic patients have difficulty identifying and describing their feelings; their cognitive style is concrete and reality-based (la pensée opératoire) and they have impoverished inner emotional and fantasy lives.' Nemiah and Sifneos (1970a)
Yet, instead of constructing subscales for these five traits, they redefined the alexithymia construct:
‘Recognizing that the development of any self-report scale must begin with a definition of the construct being measured. We first
reviewed the literature on alexithymia and then selected five content areas thought to reflect the substantive domain of the construct.
These were: (1) difficulty in describing feelings; (2) difficulty in distinguishing between feelings and bodily sensations; (3) lack of intro-
spection; (4) Social conformity; and (5) impoverished fantasy life and poor dream recall.’ Taylor et al. (1985)
As stated by the authors themselves, they thus introduced elements into their alexithymia construct that were not part of the original construct (Taylor et al., 1997, p. 58). Furthermore, of their item pool of 41 items, 16 were taken from either the SSPS, the Interoceptive Awareness subscale of the Eating Disorder Inventory, or the Need for Cognition Scale. This had consequences for their final scale. Factor analysis, item-total correlations, and estimates of internal consistency of the 41-item scale were conducted to determine the selection of items to be included in the final version. Twenty-six items were selected, loading on four factors that were described as: F1 'Ability to identify and describe feelings, and distinguish between feelings and bodily sensations'; F2 'Ability to communicate feelings to other people'; F3 'Daydream factor'; and F4 'Focusing on external events rather than inner experiences' (externally oriented thinking). Item-factor loadings varied between .31 and .69. Together the four factors explained 25.7% of the variance.
Taylor et al. (1992) published a revised version (TAS-R). The authors added 17 new items to the existing 26
items. The combined pool of 43 items was administered to a sample of 965 undergraduates, and on the basis of
item-factor loadings (≥ .30), sufficient internal reliabilities, and low correlations (< .20) with measures of social
desirability, 23 items were selected for the TAS-R. Factor analysis yielded two factors: Factor 1 comprised 14
items assessing both the ability to distinguish between feelings and bodily sensations associated with emotional
arousal and the ability to describe feelings to others (18.5% variance), whereas Factor 2 comprised nine items
assessing externally oriented thinking (6.6% variance).
Two years later, improvement of the TAS-R was considered necessary, because confirmatory factor analysis indicated that a three-factor model had a better fit, and because the TAS-R lacked a subscale assessing imaginal activity. Hence, Bagby et al. (1994a) developed the TAS-20 (see Footnote 2). Seventeen items were added to the 26 of the original TAS-26. For the 43-item version, correlations with the Marlowe–Crowne Social Desirability Inventory (SDI), item-facet correlations, and item-total correlations with the remaining items on the other content domains were examined. Items correlating ≥ .20 with the SDI, ≤ .20 with their facet, or ≤ .20 with the remaining items on other content domains were dismissed. The remaining items were subjected to principal factoring with varimax rotation. Items with loadings of ≥ .35 on one and only one factor were retained. The overall alpha of the imaginal processing scale turned out to be .69, and these items further dropped the corrected mean item-total correlation below .20. In fact, only three of the 12 imaginal processing items met the pre-established statistical requirement. Instead of creating new items, the authors decided, against their initial aim, to take all items referring to imaginal processing out of the scale (Bagby et al., 1994a). The final scale comprises 20 items and three subscales, named 'Difficulty in identifying feelings' (DIF, 7 items), 'Difficulty describing feelings' (DDF, 5 items), and 'Externally-oriented thinking' (EOT, 8 items). Each item is rated on a 5-point Likert-type response scale. Most items have face validity for their facets, but there are exceptions: three items refer to ill-understood bodily experiences, and one item refers to the cause of anger rather than to an inability to identify emotions. Furthermore, the EOT items seem to fall into two groups (see Results and Comments).
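The loading-based retention rule quoted above ('loadings of ≥ .35 on one and only one factor') is easy to express in code; the sketch below applies it to a toy rotated loading matrix that is invented purely for illustration.

```python
import numpy as np

def retain_by_loading(loadings, cutoff=0.35):
    """Keep items that load >= cutoff on one and only one factor.

    loadings: (n_items x n_factors) rotated factor-loading matrix.
    Returns the indices of the retained items.
    """
    strong = np.abs(np.asarray(loadings)) >= cutoff
    return np.flatnonzero(strong.sum(axis=1) == 1)

# Toy example with three invented items and two factors.
L = np.array([[0.60, 0.10],   # retained: loads on factor 1 only
              [0.40, 0.45],   # dropped: loads on both factors
              [0.20, 0.15]])  # dropped: loads on neither factor
print(retain_by_loading(L))   # -> [0]
```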
Sample
The samples in the Bagby et al. (1994a) study consisted of 965 undergraduates (159 males; 242 females; M = 21.1 years, SD = 4.5) and 218 diagnostically heterogeneous psychiatric outpatients (94 males; 124 females; M = 35.2 years, SD = 11.5). The two samples in the Bagby et al. (1994b) study comprised (1) 85 undergraduates (28 males; 55 females [two failed to indicate their gender]; M = 21.47 years, SD = 5.24) and (2) 83 undergraduates (22 males; 61 females; M = 21.47 years, SD = 5.24; and 22 males, M = 25.41 years, SD = 8.00).
Footnote 2: Since the TAS-20 literature is extensive, we have searched on 'most important' and 'last 5 years'.
Reliability
Internal Consistency
Vorst and Bermond (2001) presented acceptable fit indices for unidimensionality of the subscales: DDF (χ²/df = 3.67, GFI = .90, AGFI = .82, RMSEA = .07), DIF (χ²/df = 10.18, GFI = .94, AGFI = .89, RMSEA = .12), EOT (χ²/df = 3.62, GFI = .93, AGFI = .88, RMSEA = .09). Bagby et al. (1994a) reported Cronbach alpha coefficients for the TAS-20 as found in two samples: TAS-20 (.80 & .83), DIF (.79 & .81), DDF (.75 & .75), and EOT (.66 & .64). Slightly different but still comparable figures have been published (Loas et al., 2001; Simonsson-Sarnecki et al., 2000; Parker, Taylor, & Bagby, 2003; Müller, Bühner, & Ellgring, 2003; Cleland, Magura, Foote, Rosenblum, & Kosanke, 2005; Meganck, Vanheule, & Desmet, 2008; Culhane, Morera, Watson, & Millsap, 2009; Gignac, Palmer, & Stough, 2007; Leising, Grande, & Faber, 2009; Parker, Eastabrook, Keefer, & Wood, 2010). The alpha coefficient for the subscale EOT was low (≤ .60) in four of the above-mentioned studies.
Test/C0Retest
Bagby et al. (1994a) reported a three-week stability coefficient of .77. Säkkinen, Kaltiala-Heino, Ranta, Haataja, and Joukamaa (2007) reported a comparable result (.76), while higher reliabilities were reported by Besharat (2008) and Berthoz and Hill (2005): .88 and .80, respectively. Lower coefficients were reported by Richards, Fortune, Griffiths, and Main (2005) and Kojima, Frasure-Smith, and Lespérance (2001) (.69 and .47, respectively); however, these were obtained over longer test–retest intervals of 10 weeks and six months.
Validity
Convergent/Concurrent
The concurrent validities with most other scales have been described above in the sections concerned. In addition, Vorst and Bermond (2001) reported a correlation between the TAS-20 and the BVAQ (see section BVAQ) of .64, and comparable correlations have been reported by Müller et al. (2003), of .71, and Morera, Culhane, Watson, and Skewes (2005), of .68 and .59. Some authors (Zech, Luminet, Rimé, & Wagner, 1999; Berthoz & Hill, 2005; Deborde et al., 2007; Berthoz et al., 2007; Sauvage & Loas, 2006) have used the 20-item BVAQ-B version (see section BVAQ); although there is relatively more variance in these scores, the mean value does not deviate (range .31 to .77, mean .60). Since the BVAQ measures two orthogonal alexithymia dimensions, it provides two sum scores (BVAQ-COG and BVAQ-AFF); consequently, the correlations should have been calculated with these dimensions. Vorst and Bermond reported a correlation between the TAS-20 and BVAQ-COG of .80; Müller et al. (2003) reported a correlation of .71, Morera et al. (2005) .68 and .59, and Goerlich, Aleman, and Martens (2012) .85, whereas two studies working with the 20-item BVAQ-B (Berthoz & Hill, 2005; Sauvage, Berthoz, Deborde, Lecercle, & Loas, 2005) reported correlations of .67 (mean value of six correlations) and .52, respectively. Finally, Vorst and Bermond (2001) presented a correlation between the TAS-20 and BVAQ-AFF of −.04, and Sauvage et al. (2005) presented .12. From these figures it is clear that the TAS-20 and the BVAQ-COG measure the same domain, whereas the BVAQ-AFF and the TAS-20 cover different alexithymia domains.
Construct/Factor Analytic
An EFA with oblique rotation of the intercorrelations of the 20 items resulted in a 3-factor solution: F1 Difficulty in identifying feelings (DIF), F2 Difficulty describing feelings (DDF), and F3 Externally-oriented thinking (EOT). The factor loadings varied between .35 and .64 (mean .51): DIF (range .40 to .59, mean .55), DDF (.47 to .64, mean .56), EOT (.35 to .61, mean .44). There was only one cross loading exceeding .30. The three factors explained 31% of the variance: DIF 12.6%, DDF 9.6%, and EOT 8.8% (Bagby et al., 1994a). Comparable percentages of explained variance have been established by Haviland and Reise (1996b), whereas Fukunishi et al. (1997), Vorst and Bermond (2001), Kojima et al. (2001), Richards et al. (2005), and Mattila et al. (2010) found somewhat higher percentages. Confirmatory factor analysis (oblique, three factors) in two samples produced acceptable fit indices: student sample (N = 401, χ²/df = 3.01, GFI = .89, AGFI = .86, RMS = .069); psychiatric outpatient sample (N = 218, χ²/df = 2.14, GFI = .86, AGFI = .83, RMS = .070). More or less comparable fit results for the three-factor model were found by, for instance, Zech et al. (1999), Simonsson-Sarnecki et al. (2000), Cleland et al. (2005), and Säkkinen et al. (2007), and a better fit by Vorst and Bermond (2001), Parker, Bagby, Taylor, Endler, and Schmitz (1993, 2003), De Gucht, Fontaine, and Fischler (2004), Besharat (2008), Culhane et al. (2009), and Loas et al. (2001). Mattila et al.
(2010) reported lower fit for this model, and good fit for a model in which the contra-indicative items were allowed to cross load onto a residual factor. Kooiman, Spinhoven, and Trijsburg (2002) found a 2-factor structure in four samples and, reviewing the TAS-20 literature, stated that although most authors concluded that the 3-factor structure could be replicated, the fit was troublesome: in all studies half of the EOT items had very low factor loadings. Parker et al. (2003) found excellent fit for a 3-factor structure in a large Canadian sample (N = 1,933 normals; GFI = .98, AGFI = .98, CFI = .97, RMSR = .05, RMSEA = .06). Finally, Taylor, Bagby, and Parker (2003) presented fit indices for a 3-factor structure from 24 studies with translations of the TAS-20. The results of the large population study (N = 1,933) were reanalyzed by Gignac et al. (2007), resulting in a lower fit (CFI = .88), whereas the standards used in the 24 studies mentioned were mild: χ²/df < 5, GFI ≥ .85, AGFI ≥ .80, RMSR ≤ .10, RMSEA < .08, and TLI ≥ .80. Gignac et al. (2007) and Reise, Bonifay, and Haviland (2013), who analyzed the TAS-20, came to the conclusion that the TAS-20 general factor accounts for too much variance, leaving too little for the subscales (see Footnote 3). Meganck et al. (2008), testing various models, came to the conclusion that the original 3-factor model and a 4-factor model both had good fit. Müller et al. (2003) found good fit for several different models (2, 3, & 4 factors) in their clinical sample, but were unable to find any good factor structure in their non-clinical sample. Richards et al. (2005) presented principal component results (EOT and DDF items loading on the factor for DIF) and low fit in a confirmatory factor analysis.
Bagby et al. (1994a) reported for two samples the following correlations between subscales: DIF/DDF (.72 &
.65), DIF/EOT (.32 & .10), DDF/EOT (.50 & .36). Most findings are comparable ( Thorberg et al., 2010; Henry
et al., 2006; Meganck et al., 2008; Besharat, 2008; Culhane et al., 2009 ). Some studies reported overall lower esti-
mates ( Haviland & Reise, 1996b; Zech et al., 1999; Vorst & Bermond, 2001 ), and some higher ( Simonsson-Sarnecki
et al., 2000; Loas et al., 2001; Parker et al., 2003; Müller et al., 2003; Parker et al., 2010).
Criterion/Predictive
Bagby et al. (1994b) reported correlations indicating, among other things, that the TAS-20 and its three factors are all negatively related to both the Psychological Mindedness Scale (TAS-20 −.68, DIF −.44, DDF −.51, EOT −.54) and the Need for Cognition Scale (TAS-20 −.55, DIF −.40, DDF −.36, EOT −.44). The authors further reported correlations with the NEO Personality Inventory (NEO-PI). Of the correlations between the TAS-20 and the NEO-PI facets, Openness (−.49) and Neuroticism (.27) were significant. The TAS-20 has been very successful (as of November 2013, PubMed returns 389 articles in response to 'TAS-20' and 531 in response to 'Toronto alexithymia scale 20'), too many to be discussed here. Various studies indicate discriminant and convergent validity (for reviews, see Taylor et al., 1997, 2000; Taylor & Bagby, 2000; Lumley et al., 2007). The predictive validity of the TAS-20 was also demonstrated in a recent study by Bollinger and Howe (2011), indicating that the prevalence of alexithymia is higher among circumcised men as compared with genitally intact men.
Location
Bagby, R.M., Parker, J.D.A., & Taylor, G.J. (1994a). The twenty-item Toronto Alexithymia Scale-I. Item selection and cross-validation of the factor structure. Journal of Psychosomatic Research, 38, 23–32.
Results and Comments
The TAS-20 appears to have adequate reliability. Although the concurrent validities with the M-BIQ, TSIA, OAS, and BVAQ varied between low and moderate, those with the BVAQ-COG were high. Also, the sensitivity and specificity of the TAS-20 to the M-BIQ, and vice versa, reached satisfactory levels. As for the factor structure, most studies present three factors, but two- and four-factor structures have also been described; this, together with high correlations between subscales, compromises interpretation of subscale scores. Finally, publications with the TAS-20 indicate discriminant and convergent validity.
However, the TAS-20 does not measure all alexithymia facets; three items refer to ill-understood bodily experiences; the numbers of items per subscale are not fully balanced; and the scale is not balanced for indicative and contra-indicative items. In fact, only five items are negatively keyed, and four of these are part of the subscale EOT. These negatively keyed items tend to reduce the fit of factor models and internal consistency, and tend to form a scale by themselves (Gignac et al., 2007; Meganck et al., 2008; Mattila et al., 2010). Finally, on face validity, the EOT items fall into two groups: pragmatic thinking (PR) and importance of emotions (IM).
Footnote 3: Bagby, Taylor, Parker, Quilty, and Parker (2007) have reacted to the criticism of Gignac et al. (2007) by stating that the factor model on which their conclusion was based was not theoretically substantiated, but this is beside the point. Substantiated or not, Gignac's analysis indicates that there is too little variance left for the TAS-20 subscales for meaningful diagnostic differentiation.
Meganck et al. (2008) tested various models; a three-factor model in which DIF and DDF were taken as one factor, with two additional factors for PR and IM, was found to have the best fit.
TAS-20 Sample Items
I am often confused about what emotion I am feeling (DIF).
I have physical sensations that even doctors don’t understand (DIF).
It is difficult for me to find the right words for my feelings (DDF).
Being in touch with emotions is essential (EOT).
Bermond-Vorst Alexithymia Questionnaire (BVAQ)
(Vorst & Bermond, 2001 ).
Sample
The three samples in the Vorst and Bermond (2001) study consisted of university students: a Dutch sample of 375 undergraduates (66% females; M = 21.3 years, SD = 11.2), a French-speaking Belgian sample of 175 undergraduates (147 women; M = 20.6 years, SD = 4.0), and an English sample of 129 students (72% females; M = 21.4 years, SD = 4.7).
Variable
Since, according to the authors, existing alexithymia self-rating scales did not cover all alexithymia domains as described by Nemiah and Sifneos (1970a) and Sifneos (1973), Vorst and Bermond (2001) published a 40-item self-rating measure, the BVAQ.
Description
Bermond, Vorst, Vingerhoets, and Gerritsen (1999) presented the 20-item Amsterdam Alexithymia Scale, a BVAQ precursor. Orthogonal factor analysis with five fixed factors produced, in two samples, five comparable factors, each with four items (two indicative and two contra-indicative). The five factors were named: 'Inability to differentiate between or identify emotions' (IDEN), 'Inability to fantasize' (FAN), 'Inability to analyze emotions' (ANA), 'Inability to experience emotional feelings' (EMO), and 'Inability to verbalize emotions' (VERB). Revision of the scale was considered necessary, as some subscales had low internal consistencies and some items of the subscale ANA did not clearly refer to analyzing emotions. The revised scale was called the Bermond–Vorst Alexithymia Questionnaire (BVAQ; Vorst & Bermond, 2001).
The BVAQ covers the same domains as the Amsterdam Alexithymia Scale, but with subscales of eight items (four indicative and four contra-indicative). The BVAQ was constructed in such a way that the first 20 items are comparable to the last 20 items, resulting in two parallel scales (A & B forms), in order to facilitate independent pre- and post-treatment measurement.
The correlations between the A and B forms are: BVAQ (.81), VERB (.79), FAN (.76), IDEN (.62), EMO (.65), ANA (.65); comparable results were found for the Belgian and English samples (Vorst & Bermond, 2001) and in US Anglo and Hispanic samples (Morera et al., 2005). The Cronbach alphas of the 20-item scales are lower than those of the 40-item version: BVAQ-A, ranging over three samples (Dutch, Belgian, & English), .55 to .61, mean .58; BVAQ-B, .67 to .68, mean .67. All items have face validity for their facet, though four ANA subscale items refer to whether one believes that emotions should be analyzed and four refer to what one actually does. It is the experience of the authors that subjects who do not analyze their emotions, but who are in, or have recently been in, insight-giving therapy, tend to score relatively positively on the first four of these items compared with the last four. Scores on the scale can be analyzed at the subscale level as well as at the level of the two dimensions/subscales (BVAQ-COG & BVAQ-AFF; see below).
Reliability
Internal Consistency
Vorst and Bermond (2001) assessed the unidimensionality of the subscales by conducting a confirmatory factor analysis in the Dutch sample. With the exception of the subscale VERB (χ²/df = 9.33, GFI = .86, AGFI = .76, RMSEA = .15), most indices were acceptable: IDEN (χ²/df = 3.66, GFI = .95, AGFI = .91, RMSEA = .08), ANA (χ²/df = 3.65, GFI = .95, AGFI = .91, RMSEA = .08), FAN (χ²/df = 4.42, GFI = .94, AGFI = .90, RMSEA = .10), EMO (χ²/df = 5.39, GFI = .93, AGFI = .87, RMSEA = .11). Müller et al. (2003) reported acceptable fit to the data
for the subscales VERB, IDEN, and ANA, but insufficient fit for FAN and EMO. Cronbach alpha coefficients in the
Dutch sample were: BVAQ (.81), VERB (.87), FAN (.82), IDEN (.76), EMO (.75), ANA (.77), and comparable results
were found for the Belgian and English samples ( Vorst & Bermond, 2001 ), and by Morera et al. (2005) , and
Culhane, Morera, Watson, and Millsap (2011) . Somewhat lower values were reported by Hornsveld and Kraaimaat
(2012) , and by Bermond et al. (2007) for their Italian, Polish, and Russian samples. Finally, Bekker, Bachrach and
Croon (2007) reported alpha coefficients for BVAQ-COG (see below) (.84), FAN (.85), and EMO (.74).
Test/C0Retest
Berthoz and Hill (2005) reported stability coefficients (three-week interval) for the BVAQ-B in a group of 27 patients with autism spectrum disorder as follows: BVAQ-B (.81), VERB (.82), FAN (.66), IDEN (.63), EMO (.62), and ANA (.72). Hornsveld and Kraaimaat (2012) reported somewhat lower correlations with an interval of some weeks (respective values: .70, .63, .56, .39, .47, and .67). However, the results for the 35 controls in the Berthoz and Hill study were lower (respective values: .32, .35, .56, .67, .22, and .20).
Validity
Convergent/Concurrent
Correlations with the TAS-20 have been described in the section TAS-20.
Construct/Factor Analytic
Principal component analyses established, in a Dutch sample (N = 375), factor loadings between .30 and .82 (mean .62), with four cross loadings between .30 and .40; subscale loadings were: VERB, range .56 to .78, mean .68; FAN, .50 to .82, mean .68; IDEN, .47 to .66, mean .58; EMO, .30 to .69, mean .58; ANA, .30 to .67, mean .58. Comparable results were found in French-speaking Belgian (N = 175) and English (N = 129) samples. The five factors together explained 45–46% of the variance; Müller et al. (2003) report 55%. The 5-factor model was tested by multi-group CFA over the three samples. Indices were acceptable: BVAQ, χ²/df = 2.46, GFI = .80, AGFI = .78, RMSEA = .058; BVAQ-A, χ²/df = 2.80, GFI = .90, AGFI = .87, RMSEA = .065; BVAQ-B, χ²/df = 2.39, GFI = .91, AGFI = .89, RMSEA = .057 (Vorst & Bermond, 2001; see Footnote 4). Zech et al. (1999) and Müller et al. (2003) reported lower indices: BVAQ-French, χ²/df = 2.17, GFI = .80, AGFI = .77, RMSEA = .064, CFI = .76; BVAQ-English, χ²/df = 1.84, GFI = .81, AGFI = .79, RMSEA = .059, CFI = .83; and BVAQ-German, χ²/df = 2.43, SRMR = .081, RMSEA = .062 (90% CI .058 to .066). The fit was considered sufficient for the English BVAQ and the German BVAQ, but insufficient for the French BVAQ. However, the fit turned out to be better for the French BVAQ-B version in the sample of Zech et al.: BVAQ-French, χ²/df = 1.66, GFI = .93, AGFI = .90, RMSEA = .046, CFI = .91. Culhane et al. (2011) tested, in a U.S. Anglo and a U.S. Hispanic sample, 4-factor and 5-factor models (a weak factorial invariance model, a strong factorial invariance model, a strong factorial invariance model with group differences on latent means, and a strict factorial invariance model) and found good fit for all models; the differences in fit indices among the models were minimal (χ²/df = 2.40 to 2.76, RMSEA = .057 to .064, 90% CI .055 to .066, SRMR = .084 to .085, CFI = .92 to .93, NNFI = .91 to .93). However, Hornsveld and Kraaimaat (2012) found insufficient fit for the 5-factor structure in two samples (N = 139 and N = 160); patient sample: χ²/df = 2.07, GFI = .62, CFI = .51, RMSEA = .09; student sample: χ²/df = 2.14, GFI = .63, CFI = .40, RMSEA = .09.
Higher-order factors: PCAs of subscale intercorrelations produced two factors in the Dutch and Belgian samples, named Cognitive (covering IDEN, VERB, & ANA) and Affective (EMO & FAN); although the English sample also produced comparable higher-order factors, IDEN loaded on the Affective factor (Vorst & Bermond, 2001). Likewise, Bekker, Bachrach, and Croon (2007) found, in a group of 202 undergraduates, confirmation for the cognitive factor but not for the affective factor. Bermond et al. (2007) tested the model of two higher-order factors in analyses over seven samples (Dutch, N = 375; Italian, N = 791; Belgian, N = 175; Polish, N = 427; Australian, N = 216; Russian, N = 139; and English, N = 175). Four models were tested: (I) a two-factor model with correlated factors, an affective factor (EMO & FAN) and a cognitive factor (IDEN & VERB), with ANA loading on both factors; (II) like I, but with two independent factors; (III) two correlated factors (IDEN, VERB, & ANA) and (EMO & FAN); (IV) a one-factor model (IDEN, VERB, ANA, EMO, & FAN). Models I and II showed acceptable fit: I (χ²/df = 3.60, SRMR = .039, NFI = .95, CFI = .96, CAIC = 807); II (χ²/df = 4.15, SRMR = .066, NFI = .93, CFI = .94, CAIC = 784). The authors preferred the more parsimonious model II, and concluded that there are two orthogonal higher-order factors involved in alexithymia, which were called 'Affective' (AFF) and 'Cognitive' (COG).
Footnote 4: To some, the fit for the BVAQ-40 may appear marginal; it should, however, be noted that the analysis covered five subscales in three different language samples, with equality restrictions on factor loadings, error terms, intercepts, correlations between factors, and factor means. More restrictive models generally result in lower fit.
Bagby et al. (2009) tested seven models; although the model preferred by Bermond et al. (2007) was not included, two of their models came close: a 5-factor hierarchical model in which IDEN, VERB, and ANA are nested under one higher-order factor, and EMO and FAN under another, correlated higher-order factor; and a 5-factor hierarchical model in which IDEN and VERB are nested under one higher-order factor, and ANA, EMO, and FAN under another, correlated higher-order factor. The data indicated good fit for both models in three out of five samples and in the pooled sample, whereas there was, according to the norms used by Bagby et al. (1994a) and Taylor et al. (2003), acceptable fit in all samples (see Footnote 5). These results provide confirmation for the original model. Moreover, the two dimensions in emotional experience have also been theoretically substantiated (Bermond, 2008).
Footnote 5: It should be noted that in Bagby et al.'s (2009) study a less demanding model produced better fit, but that is the logical consequence of being less demanding.
Types of alexithymia: Since there are two orthogonal higher-order factors, or alexithymia dimensions, incorporated into BVAQ scores, interpretation of the BVAQ 40-item sum score is senseless. The BVAQ provides two sum scores, BVAQ-COG and BVAQ-AFF, making four extreme types identifiable, three of which have reduced capacities in one or both alexithymia dimensions. Alexithymia types were predicted on the basis of the neuropsychology of emotions (Bermond, 1997) and called type I (alexithymia as described by Nemiah and Sifneos; severe reductions in both AFF and COG), type II alexithymia (severe reductions in COG together with at least normal AFF capabilities), and type III, the opposite of type II (severe reductions in AFF together with at least normal COG capacities). Finally, the type scoring favorably on both BVAQ dimensions was called lexithym (Bermond et al., 2007). Since the two higher-order factors interact with one another in their relations with other constructs, the four extreme groups have very different personality traits, as is indicated by Moormann et al. (2008). The idea of various types of alexithymia has been challenged (Bagby et al., 2009; Mattila et al., 2010). In short, the arguments come down to the fact that alexithymia and the two alexithymia dimensions are dimensional latent traits, not categorical, and for this reason the four extreme groups should not be called types; it is thus a debate about language, not a scientific debate, nor a debate about alexithymia (see Footnote 6).
Footnote 6: The arguments boil down to the idea that alexithymia is a dimensional latent trait (Mattila et al., 2010). The BVAQ authors agree with the idea of dimensionality, proposing two more or less normally distributed higher-order factors. Combinations of selections out of these dimensions could be referred to as 'types' (Bagby et al., 2009). However, playing with the word type, Bagby et al., self-contradictorily, change to a categorical instead of a dimensional view of alexithymia: 'To infer variants or types, one must use subjects as the variable and identify if these subjects sort into meaningful "clusters"' (Bagby et al., ibid). One either adheres to a dimensional or a categorical view of a construct. They further state that if there are two subtypes (in the meaning of dimensions) of alexithymia, this should result in two types of clusters (categories) in a cluster analysis, where certain types of clusters should correspond with distinctions on the first dimension and the other types of cluster with the second. This makes no sense. If one were to distinguish people based on wealth and height, one cannot expect to find clusters that correspond with either poor or wealthy and tall or short people. As the cluster analysis performed by Bagby et al. is groundless, the results are meaningless.
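Deriving the four extreme groups from the two BVAQ dimension scores can be sketched as below. The cutoff value and the scoring direction (higher score = stronger reduction) are assumptions for illustration only; as noted above, the BVAQ authors treat both dimensions as continuous, so these 'types' are extreme groups rather than true categories.

```python
def bvaq_type(cog, aff, cut=80):
    """Assign one respondent to an extreme group from the two BVAQ sum scores.

    Assumes both dimensions are keyed so that higher scores indicate stronger
    reductions; 'cut' is a purely illustrative threshold, not a published norm.
    """
    high_cog, high_aff = cog >= cut, aff >= cut
    if high_cog and high_aff:
        return "Type I"    # severe reductions on both AFF and COG
    if high_cog:
        return "Type II"   # cognitive reductions, normal affective capacities
    if high_aff:
        return "Type III"  # affective reductions, normal cognitive capacities
    return "Lexithym"      # favorable scores on both dimensions

print(bvaq_type(cog=95, aff=40))  # -> "Type II" (invented numbers)
```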
In the Dutch sample, the correlations between subscales for the 40-item version varied between .02 (EMO/IDEN) and .43 (VERB/ANA), mean .20. Comparable, but somewhat lower, correlations were found in the Belgian (mean .14) and English (mean .20) samples (Vorst & Bermond, 2001). Morera et al. (2005) reported comparable figures, Müller et al. (2003) higher figures. Correlations between subscales have also been reported for the BVAQ-B form (Zech et al., 1999; Deborde, Berthoz, Perdereau, Corcos, & Jeammet, 2004, 2007; Berthoz et al., 2007; Sauvage & Loas, 2006). Although there is more variance in these figures, the mean scores are fully comparable (ranging from .39, VERB/ANA, to .09, IDEN/EMO). The figures for VERB/ANA and VERB/IDEN are relatively high in all studies mentioned; however, the mean correlations still explain less than 17% of the variance. The correlations between the two higher-order factors in the Bermond et al. (2007) study ranged over the seven samples between .06 and .27, mean .15; Deborde et al. (2004) reported .06.
Criterion/Predictive
Although the number of BVAQ publications is smaller than that for the TAS-20, BVAQ scores have also been related to an extensive array of other constructs. Many of these studies related the other construct either to specific BVAQ subscales, to the BVAQ alexithymia dimensions COG and AFF, or to one of the alexithymia types. For instance, BVAQ-AFF is positively related to schizoid personality disorder, COG is positively related to avoidant personality traits, whereas COG and AFF are both negatively related to schizotypal personality disorder (De Rick & Vanheule, 2007). Dissociative proneness correlates negatively with the subscale IDEN and positively with FAN (Elzinga, Bermond, & van Dyck, 2002). Autism as well as childhood sexual abuse are associated
with the COG dimension and type II alexithymia, not with AFF (Berthoz & Hill, 2005; Bermond, Moormann, Albach, & Dijke, 2008). Schizophrenia and somatization are related to alexithymia type II (Van 't Wout, Aleman, Bermond, & Kahn, 2007; Bailey et al., 2007). Neuroticism and depression are correlated positively with Identifying and Verbalizing and with type II alexithymia, but negatively with Fantasizing, and not with type I alexithymia (Morera et al., 2005; Bermond, 2010). The galvanic skin response (GSR) amplitude in response to emotional stimuli is related to AFF, not to COG (Bermond, Bierman, Cladder, Moormann, & Vorst, 2010). Finally, Moormann et al. (2008) published profiles of the three alexithymia types that were based on the alexithymia literature and on personality-scale data from 143 psychology students. These authors described the type I person as a loner or Einzelgänger, the type II person as neurotic, and the type III person as narcissistic. Although the publication of Moormann et al. (2008) should be regarded as provisional and in need of replication (Bermond et al., in preparation), it, together with the other BVAQ-related publications, indicates discriminant and convergent validity for the BVAQ subscales and the BVAQ COG and AFF dimensions, as well as the usefulness of the three alexithymia types.
Location
Vorst, H., & Bermond, B. (2001). Validity and reliability of the Bermond–Vorst Alexithymia Questionnaire. Personality and Individual Differences, 30, 413–434.
Results and Comments
The internal consistencies of the subscales and of the dimensions COG and AFF vary between sufficient and good, with a possible exception for the subscale Emotionalizing, for which some studies report insufficient alphas. Results regarding test–retest reliability are few and vary between insufficient and acceptable. Results for the five-factor structure vary from insufficient to good, but most publications indicate at least acceptable fit. Results regarding the higher-order structure with two dimensions point to very acceptable fit. Concurrent validity with the TAS-20 produced acceptable results for the three cognitive subscales and good validity for the dimension COG. However, concurrent validity results for the dimension AFF indicate that BVAQ-AFF measures domains of alexithymia other than those covered by the TAS-20. Publications using the BVAQ point to discriminant and convergent validity for the subscales as well as for BVAQ-COG and BVAQ-AFF, and to the usefulness of the various alexithymia types. BVAQ scores can be analyzed at the subscale level as well as at the level of the two dimensions (BVAQ-COG and BVAQ-AFF). Since the two dimensions are orthogonal to one another, the 40-item total score is without meaning.
BVAQ Sample Items
I find it difficult to express my feelings verbally (VERB).
When I am upset, I know whether I am afraid or sad or angry (IDEN).
I hardly ever consider my feelings (ANA).
Before I fall asleep, I imagine all kinds of events, encounters and conversations (FAN).
When I see somebody crying uncontrollably, I remain unmoved (EMO).
Psychological Treatment Inventory–Alexithymia Scale (PTI-AS)
(Gori et al., 2012).
Variable
The aim of the authors was to develop an extremely short alexithymia scale.
Sample
The samples in the Gori et al. (2012) study consisted of (1) 743 non-clinical participants (48.4% females; M = 33.7 years, SD = 1.8), and (2) 35 patients with diagnoses of eating disorders (82.2% females; M = 26.33 years, SD = 9.27).
Description
The PTI-AS is a five-item self-rating scale with a 5-point Likert-type response scale, aimed at assessing symptoms of alexithymia, denoted by difficulty in identifying feelings, describing feelings, and analyzing feelings, and by an impoverished inner emotional life. According to Gori et al. (2012), the PTI-AS adequately covers the five alexithymia domains mentioned above. However, on face validity, three items refer to Verbalizing emotions, one to Identifying/Analyzing emotions, and one to Fear of emotions.
Reliability
Internal Consistency
The authors reported a Cronbach alpha coefficient of .88 (Gori et al., 2012).
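For readers who want to check this type of internal-consistency estimate on their own item-level data, the following minimal Python sketch (not code from Gori et al., 2012; the simulated responses and item names are purely hypothetical) implements the standard Cronbach's alpha formula:

```python
# Minimal sketch: Cronbach's alpha from a respondents x items matrix.
# The data below are simulated placeholders, not PTI-AS data.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    sum_item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - sum_item_variances / total_variance)

# Hypothetical example: 200 respondents answering 5 items on a 1-5 scale.
rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(1, 6, size=(200, 5)),
                     columns=[f"item_{i}" for i in range(1, 6)])
print(round(cronbach_alpha(items), 2))  # random data yields alpha near zero; real data needed
```

With real PTI-AS responses in place of the simulated matrix, the function returns the coefficient that would be compared with the .88 reported above.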
Validity
Convergent/Concurrent
The PTI-AS correlated positively with the TAS-20 and BVAQ-COG, but not with the BVAQ-AFF, as follows: PTI-AS/TAS-20 (.74), PTI-AS/TAS-DIF (.70), PTI-AS/TAS-DDF (.55), PTI-AS/TAS-EOT (.32), PTI-AS/BVAQ (.40), PTI-AS/BVAQ-COG (.63), PTI-AS/BVAQ-AFF (-.12), PTI-AS/BVAQ-ANA (.29), PTI-AS/BVAQ-VERB (.65), PTI-AS/BVAQ-IDEN (.44), PTI-AS/BVAQ-FAN (-.07), and PTI-AS/BVAQ-EMO (-.12).
Construct/Factor Analytic
An exploratory factor analysis (N = 378) resulted in a one-factor structure that explained 71.1% of the total variance. Factor loadings were in the high range: .70–.85. A subsequent confirmatory factor analysis (one-factor model) provided a good fit to the data (CFI = .98, TLI = .95, RMSEA = .08, SRMR = .04; χ² = 20.30).
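As an illustration of how such a one-factor exploratory solution can be inspected (this is not the authors' analysis script; it assumes the third-party Python package factor_analyzer and a hypothetical DataFrame `responses` holding the five item scores), a minimal sketch:

```python
# Minimal sketch: one-factor EFA of a five-item scale (hypothetical DataFrame `responses`).
import pandas as pd
from factor_analyzer import FactorAnalyzer

def one_factor_summary(responses: pd.DataFrame):
    fa = FactorAnalyzer(n_factors=1, rotation=None)  # single factor, so no rotation
    fa.fit(responses)
    loadings = pd.Series(fa.loadings_[:, 0], index=responses.columns)
    # get_factor_variance() returns sums of squared loadings, the proportion of
    # variance, and the cumulative proportion of variance per factor.
    _, proportion, _ = fa.get_factor_variance()
    return loadings.round(2), float(proportion[0])

# loadings, prop = one_factor_summary(responses)
# print(loadings)                           # compare with the reported range .70-.85
# print(f"variance explained: {prop:.1%}")  # compare with the reported 71.1%
```

A confirmatory one-factor model with fit indices such as CFI, TLI, RMSEA, and SRMR would additionally require a structural-equation-modeling package, which is beyond this sketch.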
Criterion/Predictive
The PTI-AS exhibited predictive correlations (> .3) with the Drive for Thinness, Bulimia, Body Dissatisfaction, Personal Alienation, Interpersonal Insecurity, Interpersonal Alienation, Interoceptive Deficits, Emotional Dysregulation, and Asceticism subscales of the Eating Disorder Inventory.7
Also, Gori et al. (2012) compared a clinical group (N = 35) with a non-clinical group (N = 35) and demonstrated that the clinical group scored significantly higher on the PTI-AS, with a large effect size (d = mean difference/SD = .92).
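Assuming the reported effect size is a standardized mean difference of the Cohen's d type (the "mean difference/SD" notation suggests this), it is defined as:

\[ d = \frac{M_{\text{clinical}} - M_{\text{non-clinical}}}{SD_{\text{pooled}}} = .92, \]

and by the widely used benchmarks for standardized mean differences, values of about .8 or above count as large, consistent with the authors' description.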
Location
Gori et al. (2012). Assessment of alexithymia: Psychometric properties of the Psychological Treatment Inventory Alexithymia Scale (PTI-AS). Psychology, 3, 231–236.
Results and Comments
The PTI-AS does not cover all the domains mentioned by the authors. This could partly explain the high internal consistency, the good factor structure, and the high item-total correlations (range .70 to .85), which would have been unlikely if the five items covered different domains. However, this does not devalue the high correlations with the TAS-20 and the BVAQ-COG. Furthermore, the results point to convergent validity. That said, the Gori et al. (2012) study is the only PTI-AS publication (as of November 2013), and although the measure is very promising, more research is warranted.
PTI-AS Sample Items
Sometimes I have difficulty finding the words to describe my feelings.
I am scared of my own emotions (Gori, personal communication).
Measurement of Alexithymia in Adolescents and Children
In this context Parker et al. (2010) is important. These authors studied the TAS-20 in four groups of adolescents (aged 19 to 21 years, N = 267; aged 17 to 18 years, N = 288; aged 15 to 16 years, N = 297; aged 13 to 14 years, N = 149), and their results indicated that CFA fit indices, the amount of explained variance, standardized estimates of factor loadings, and Cronbach alpha coefficients decreased with decreasing age (i.e., were lower in the younger groups). Reading comprehension could not fully explain these decreases, and the authors assumed that developmental phenomena, reflecting still-developing levels of emotional awareness and of the ability to express and regulate emotions, are a factor in these results. We agree with the suggestion of Parker et al. (2010), especially since the prefrontal cortex, which fulfills important functions in emotional experience and in the control of emotional behavior, is not fully matured before the end of adolescence (Bermond et al., 2006).
7 In the Gori et al. (2012) article these correlations are erroneously indicated with a < instead of a > (Gori, personal communication).
Alexithymia Observation Scale for Children (AOSC)
(Fukunishi et al., 1998).
Variable
With the AOSC, Fukunishi et al. (1998) aimed to develop an alexithymia observation scale for children, to be completed on the basis of behavioral observations.
Sample
The sample in the Fukunishi et al. (1998) study consisted of 286 elementary school children (151 boys and 135 girls; M = 9.0 years, SD = 1.4 years) who were in Grade 1 (N = 32), Grade 2 (N = 31), Grade 3 (N = 77), Grade 4 (N = 74), Grade 5 (N = 39), and Grade 6 (N = 33). Twelve schoolteachers served as raters, observing the children's behavior on the playground and during school activities over a period of at least six months.
Description
The authors started with 23 items that covered three facets: 'Difficulty in communicating feelings to others', 'Poor fantasy life', and 'Externally oriented thinking'. However, the authors stated that 'poor fantasy life' and 'externally oriented thinking' are not observable traits; they therefore opted for items referring to 'Difficulty relating to others'. Because of low item-total correlations and low factor loadings, they reduced the number of items to 12. Thus the final scale covers two facets: 'Difficulty in communicating feelings to others' and 'Difficulty relating to others'. All items have face validity for their facets as described by the authors.
Reliability
Internal Consistency
Fukunishi et al. (1998) reported a Cronbach alpha coefficient of .84.
Test–Retest
Stability correlations over a two-month test–retest interval were .71 for the AOSC total score, .72 for 'Difficulty in communicating feelings to others', and .74 for 'Difficulty relating to others' (Fukunishi et al., 1998).
Validity
Construct/Factor Analytic
A principal components analysis (N = 286) resulted in two components, labeled 'Difficulties describing feelings' (7 items) and 'Difficulty relating to others' (5 items). The correlation between the two subscales was .14 (Fukunishi et al., 1998).
Criterion/Predictive
Fukunishi et al. (1998) reported that AOSC scores correlated positively with the Yatabe–Guilford Personality Test subscales Depression (.26) and Lack of Cooperativeness (.27), and negatively with the subscales Ascendance (-.42) and Social Extraversion (-.39).
The authors (Fukunishi, Tsuruta, Hirabayashi, & Asukai, 2001) further demonstrated that total and subscale scores were significantly higher for children with refractory hematological diseases (N = 33, M = 8.1 years, SD = 4.3) compared with a control group (N = 286, matched for gender and age). Furthermore, in the hematological diseases sample, scores on the observer-rated Posttraumatic Stress Response Checklist correlated .40 with the AOSC subscale 'Difficulty in communicating feelings to others', but not with the subscale 'Difficulty relating to others'.
Location
Fukunishi, I., Yoshida, H., & Wogan, J. (1998). Development of the Alexithymia Scale for Children: A preliminary study. Psychological Reports, 82, 43–49.
Results and Comments
To our knowledge there are no other currently available psychometric publications relating to the AOSC measure. However, the available results point to a good factor structure, sufficient internal consistency, and test–retest reliability, as well as some evidence of convergent validity. The items that loaded on Factor 2 ('Difficulty relating to others'), however, are not directly related to alexithymia. These items are more closely related to other constructs, including extraversion, and the AOSC is therefore less suitable for research into the relations between alexithymia and such personality factors.
Toronto Alexithymia Scale for Children (TAS-12)
(Heaven et al., 2010; Rieffe et al., 2010).
Variable
Some authors have tried to adapt the TAS-20 in such a way that it could be used with adolescents
and children.
Sample
The sample in the Heaven et al. (2010) study comprised 944 children from three different schools (modal age = 13 years; 324 males, 332 females; 140 children did not indicate their gender). Only 84.3% (N = 796) of these children returned properly completed measures. The sample used by Rieffe et al. (2010) consisted of 579 Iranian elementary and middle school children (281 boys, 298 girls; M = 12.2 years; age range 10 to 15).
Description
The decreases in the above-mentioned Parker et al. (2010) study were especially dramatic for the EOT subscale, as had been demonstrated before by Rieffe et al. (2006) and Säkkinen et al. (2007). Thus, some authors dropped the EOT items from the TAS-20 for use with children, resulting in the TAS-12. This scale covers two of the original TAS-20 factors/subscales: 'Difficulty identifying feelings' (DIF, 7 items) and 'Difficulty describing feelings' (DDF, 5 items). Furthermore, Rieffe et al. (2010) rephrased the items into children's language and changed the original 5-point Likert-type response scale into a simpler 3-point scale. Thus, there are two versions of the TAS-12, which will be discussed together.
Reliability
Internal Consistency
Heaven et al. (2010) reported a Cronbach alpha coefficient of .87 (for the non-rephrased version) in the sample of 796 school children (modal age = 13 years) described above.
Validity
Construct/Factor Analytic
Heaven et al. (2010) reported that a principal axis factor analysis with oblimin rotation (N = 796), undertaken on the intercorrelations of the 12 non-rephrased items, produced a one-factor solution, with loadings varying between .43 and .74. Likewise, Rieffe et al. (2010), using the rephrased items, reported that a principal components analysis carried out on the item intercorrelations (N = 579) also resulted in a unidimensional solution, with factor loadings exceeding .35 for all items but one (loading = .28).
Criterion/Predictive
Heaven et al. (2010) reported that scores on the non-rephrased TAS-12 correlated (≥ .30) with the fear, hostility, sadness, and self-esteem subscales of the Positive and Negative Affect Schedule–Expanded Form in a group of 796 children (modal age = 13 years). Likewise, Rieffe et al. (2010) reported that, in a group of 12-year-olds (N = 579, age range 10–15 years), scores on the rephrased TAS-12 correlated positively with the Mood Questionnaire subscales somatic complaints (.36), worry/rumination (.54), anger (.31), sadness (.43), and fear (.26). Furthermore, children (M = 13 years) diagnosed with cancer (N = 343) scored higher on the rephrased 12-item scale than controls (N = 246), and the difference was directly related to the severity of their illness (Mishra, Maudgal, Theunissen, & Rieffe, 2012).
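The group contrasts reported here and in the preceding scale descriptions are simple two-group comparisons of scale scores. A minimal Python sketch of such a comparison (with simulated placeholder scores, not the Mishra et al., 2012 data) might look as follows:

```python
# Minimal sketch: comparing TAS-12 totals of a clinical and a control group with
# Welch's t-test and a pooled-SD standardized mean difference (see the d formula above).
# All numbers below are simulated placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
clinical = rng.normal(loc=26, scale=5, size=343)  # hypothetical TAS-12 totals
controls = rng.normal(loc=23, scale=5, size=246)

t_stat, p_value = stats.ttest_ind(clinical, controls, equal_var=False)  # Welch's t-test

n1, n2 = len(clinical), len(controls)
pooled_sd = np.sqrt(((n1 - 1) * clinical.var(ddof=1) +
                     (n2 - 1) * controls.var(ddof=1)) / (n1 + n2 - 2))
d = (clinical.mean() - controls.mean()) / pooled_sd
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, d = {d:.2f}")
```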
Location
Heaven, P.C.L., Ciarrochi, J., & Hurrell, K. (2010). The distinctiveness and utility of a brief measure of alexithymia for adolescents. Personality and Individual Differences, 49, 222–227.