Cognition and Emotion
ISSN: 0269-9931 (Print) 1464-0600 (Online)

To cite this article: Antarika Sen, Derek Isaacowitz & Annett Schirmer (2017): Age differences in vocal emotion perception: on the role of speaker age and listener sex, Cognition and Emotion.

Published online: 24 Oct 2017.
Downloaded by: [University of Florida], 26 October 2017, at 08:19
Age differences in vocal emotion perception: on the role of speaker age and listener sex

Antarika Sen (a), Derek Isaacowitz (b) and Annett Schirmer (c,d,e)

(a) Neurobiology and Aging Programme, National University of Singapore, Singapore; (b) Department of Psychology, Northeastern University, Boston, USA; (c) Department of Psychology, The Chinese University of Hong Kong, Hong Kong; (d) The Mind and Brain Institute, The Chinese University of Hong Kong, Hong Kong; (e) Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
Older adults have greater difficulty than younger adults perceiving vocal emotions. To
better characterise this effect, we explored its relation to age differences in sensory,
cognitive and emotional functioning. Additionally, we examined the role of speaker
age and listener sex. Participants (N = 163) aged 19–34 years and 60–85 years
categorised neutral sentences spoken by ten younger and ten older speakers with a
happy, neutral, sad, or angry voice. Acoustic analyses indicated that expressions
from younger and older speakers denoted the intended emotion with similar
accuracy. As expected, younger participants outperformed older participants and
this effect was statistically mediated by an age-related decline in both optimism
and working memory. Additionally, age differences in emotion perception were larger for younger than for older speakers; that is, the perception advantage for younger over older speakers was greater in younger than in older participants. Last, a female perception benefit was less pervasive in the older
than the younger group. Together, these findings suggest that the role of age for
emotion perception is multi-faceted. It is linked to emotional and cognitive change,
to processing biases that benefit young and own-age expressions, and to the
different aptitudes of women and men.
Article history: Received 3 May 2017; Revised 28 September 2017; Accepted 6 October 2017

Keywords: Prosody; affective; ageing; acoustic; motivation; working memory; age bias
Vocal emotion perception and ageing
It has long been established that healthy ageing is
associated with a decline in mental functioning (e.g.
Colsher & Wallace, 1991; Riddle, 2007). Among other
processes, emotional perception has been implicated
(Ruffman, Henry, Livingstone, & Phillips, 2008). For
example, older adults make more mistakes than
younger adults when interpreting the emotion of nonverbal signals such as vocal expressions. Here, we
sought to explore and to better describe this phenomenon. Specifically, we aimed to replicate previous
reports of sensory, cognitive, and emotional ageing
statistically mediating age differences in vocal
emotion perception. More importantly, we explored
whether and how these differences are characterised
by speaker age effects and whether they vary
between women and men.
The last few decades have produced growing interest in
the emotional processes of older adults (Schirmer,
2014). Many studies have compared individuals aged 60 and above with middle-aged and/or younger adults, showing age-related performance decrements (for a review see Isaacowitz et al., 2007;
Ruffman et al., 2008). In the context of voice perception, participants were presented with nonverbal
exclamations such as “Ah” (Fecteau, Armony, Joanette,
& Belin, 2005; Ryan, Murray, & Ruffman, 2009) or verbal
material that was semantically neutral (Mitchell, Kingston, & Barbosa Bouças, 2011; Ruffman, Halberstadt, &
Murray, 2009), non-sensical (Demenescu, Mathiak, &
Mathiak, 2014; Ruffman, Halberstadt, et al., 2009), or
low-pass filtered, removing segmental information
and leaving basic aspects of prosody intact (Mitchell
© 2017 Informa UK Limited, trading as Taylor & Francis Group
et al., 2011). A typical experimental task entailed indicating whether the vocal tone of two successive
stimuli was identical (Orbelo, Grim, Talbott, & Ross,
2005) or which of several expression options best
characterised the tone (Demenescu et al., 2014).
Together, available results suggest that the recognition of sadness, anger, fear, and happiness is difficult for older adults (Demenescu et al., 2014;
Lambrecht, Kreifelts, & Wildgruber, 2012; Lima, Alves,
Scott, & Castro, 2014; Paulmann, Pell, & Kotz, 2008;
Ryan et al., 2009; but see Hunter, Phillips, & MacPherson, 2010; Ruffman, Sullivan, & Dittrich, 2009), whereas
the recognition of surprise seems relatively preserved
(Hunter et al., 2010; Paulmann et al., 2008; Ruffman,
Halberstadt, et al., 2009; Ryan et al., 2009).
Attempts have been made to uncover the mechanisms by which ageing affects performance in emotion
recognition tasks. The dominant approach has been a
correlation or mediation analysis linking age differences in control measures with those in emotion judgments. Results suggest that hearing loss (Lambrecht
et al., 2012; Mitchell & Kingston, 2014) and cognitive
decline (Krendl & Ambady, 2010; Lambrecht et al.,
2012; Orbelo et al., 2005; Pichora-Fuller, 2003) could
be relevant. Additionally, current theories of emotional
ageing hold that a reduction in feeling emotions
(Cacioppo, Berntson, Bechara, Tranel, & Hawkley,
2011) and changes in the importance of certain
emotions (Carstensen, Isaacowitz, & Charles, 1999; Isaacowitz, Wadlinger, Goren, & Wilson, 2006; Levenson,
Carstensen, Friesen, & Ekman, 1991; Tsai, Levenson, &
Carstensen, 2000) affect how older adults engage
with emotional expressions and how they respond to
them. Nevertheless, evidence for these different mechanisms is equivocal and the mechanisms themselves
are still debated (Ruffman et al., 2008).
Own-age vs other-age emotion perception
Apart from uncertainty as to the roles of sensory, cognitive, and emotional ageing, little is known about
how situational factors affect vocal emotion recognition in young and old. Of particular interest here is
the expresser’s age. This interest derives from face perception research pointing to processing differences
between signals from young as compared to old individuals and own-age as compared to other-age individuals (but see Ebner & Johnson, 2009). Compared
to signals from older individuals, signals from
younger individuals are perceived more accurately
by both younger and older decoders (Riediger,
Voelkle, Ebner, & Lindenberger, 2011). Additionally,
compared to other-age faces, own-age faces
produce fewer recognition errors (Malatesta, Izard,
Culver, & Nicolich, 1987) and reduce associated age
differences (although not entirely consistently, Riediger et al., 2011). Related to this, own-age faces
attract more gazing (Ebner, He, & Johnson, 2011; He,
Ebner, & Johnson, 2011) and activate emotion areas
in the brain more readily (Ebner et al., 2013).
These effects have been attributed to a range of
causes (Fölster, Hess, Hühnel, & Werheid, 2015) of
which three will be relevant here. First, it has been
argued that age affects the accuracy with which
emotions are encoded. Structural changes of the
face may impact emotion signalling making it harder
to identify the expressions of older as compared to
younger individuals (Malatesta et al., 1987). Second,
the culture associated with a particular peer group
may engender generational differences in emotion
signalling. Moreover, the familiarity individuals have
with a given group may then influence their ability
to recognise emotions. Younger individuals may be
more familiar with younger as compared to older
age groups, whereas older individuals, due to their
life experience, may be equally familiar with both
(Wiese, Schweinberger, & Hansen, 2008). Last, motivation may be relevant in that individuals may be
more interested in interacting with young (Gordon &
Arvey, 2004) and own-age individuals (for a review
see Rhodes & Anastasi, 2012) as compared with old
and other-age individuals because they expect these
interactions to be more relevant or rewarding.
To date, these possibilities have been rarely tested.
Moreover, available evidence has come from face perception only (e.g. Fölster et al., 2015; Rhodes & Anastasi, 2012). Insights are lacking from other
modalities, like the voice, which may differ from the
face in terms of age-related effects on emotion
expression and perception. In terms of expression,
vocal ageing produces a loss in collagenic and
elastic fibers, connective tissue degeneration (e.g.
articular cartilage), and muscle atrophy (Colton,
Casper, Leonard, Thibeault, & Kelly, 2011; Gracco &
Kahane, 1989; Kahane, 1987). Thus, it impairs the vibratory properties of the vocal folds (e.g. Ohno & Hirano,
2014) making older voices sound less stable than
younger voices. Moreover, compared to younger
voices, older voices have a higher harmonic-to-noise
ratio and show a pitch decrease or increase for
women and men, respectively, thus marking the
speaker’s sex less clearly (Decoster & Debruyne,
1997; Dehqan, Scherer, Dashti, Ansari-Moghaddam, &
Fanaie, 2012). Analogously, facial ageing produces
wrinkles and tissue sagging that may impact expressive clarity. Yet, the specific effects may differ from those of vocal ageing both qualitatively and quantitatively. In terms of perception, age effects have been
documented for auditory and visual systems. Auditory
deficits, referred to as presbycusis, emerge gradually,
most likely through the accumulative impact of
noise, toxins (e.g. smoking), and genetic mechanisms
on the hair cells in the inner ear (Moser, Predoehl, &
Starr, 2013; Yamasoba et al., 2013). As a consequence,
older adults have a poorer frequency resolution and
an increased hearing threshold especially at higher
frequencies. Visual deficits result from macular
degeneration, increasing lens opacity, and myopia, among others (Andersen, 2012). Compared to age-related hearing loss, however, these deficits are typically more readily noticed and corrected (Oyler, 2012).
Given these modality differences, it seems critical
to explore the role of encoder age for nonverbal channels other than the face. Moreover, new insights from
the voice could help identify and differentiate
modality specific and unspecific mechanisms.
The female emotion perception benefit in
young and old
Apart from the expresser’s age, the receiver’s sex may
be relevant for age effects in emotion perception.
Support for this idea comes from a range of implicit
and explicit paradigms showing greater emotion sensitivity in women than in men.
Implicit paradigms have been used with both
behavioural and neuronal measures. Behavioural
responses have been recorded in priming tasks
where participants classify a target that is preceded
by a task-irrelevant prime. In such settings, women
are more readily influenced by the affective prime-target relationship (Donges, Kersting, & Suslow, 2012;
Schirmer, Seow, & Penney, 2013). Sex differences in
neuronal responses have been documented fairly consistently using event-related potentials (ERPs). Among
other findings, women show greater ERP differences
than men between emotional and neutral vocalisations that are task irrelevant (Hung & Cheng, 2014;
Schirmer, Striano, & Friederici, 2005) or for which
verbal but not vocal content has to be remembered
(Schirmer, Chen, Ching, Tan, & Hong, 2013).
Past explicit paradigms involved tasks akin to those
used to study ageing, and women were found to be
more accurate than men (Bonebright, Thompson, &
Leger, 1996; Hall, 1978; Mill, Allik, Realo, & Valk,
2009). Notably, however, this difference appears inconsistently across published work (e.g. Lima et al., 2014;
Paulmann et al., 2008) leading some to speculate that
it may be concealed by ceiling effects. In line with this,
a study comparing emotion perception on subtle and
highly-expressive displays found that women outperformed men for the former but not the latter (Hoffmann, Kessler, Eppel, Rukavina, & Traue, 2010).
To date, few studies explored the developmental
course of sex differences in emotion perception and
results are mixed. Some studies collapsing over
young and old cohorts reported a female advantage
in emotion perception accuracy (Lambrecht, Kreifelts,
& Wildgruber, 2014; Mill et al., 2009; Williams et al.,
2009). However, of the few studies that investigated
the interaction of age and sex, two found a main
effect of age only (Lima et al., 2014; Paulmann et al.,
2008), two reported main effects of both age and sex
(Ruffman, Murray, Halberstadt, & Taumoepeau, 2010;
Sullivan, Campbell, Hutton, & Ruffman, 2017), and
two found a sex effect that increased with age. In
other words, the age-related decline in task accuracy
was smaller in women than in men suggesting that
women were relatively better protected (Campbell,
Ruffman, Murray, & Glue, 2014; Demenescu et al., 2014).
An improved understanding of the developmental
trajectory of sex differences would be relevant in two
ways. First, it would help specify the roles of genetic
(for a review see Hines, 2010) and environmental
factors (e.g. Koenig & Eagly, 2005) in shaping male
and female emotion sensitivity. Although their
respective contributions are now well recognised,
the mechanisms and significance of nature and
nurture for behavioural outcomes are still incompletely understood. Considering age would help with
this by, for example, elucidating the relation
between developmental changes in the levels of sex
hormones and emotion recognition abilities. Second,
insights into the intersection of age and sex would
be instructive as concerns the mechanisms underpinning the effect of ageing on emotion recognition.
Specifically, factors that promote and protect against
performance decline may be gleaned from sex-specific rates of decline (Campbell et al., 2014).
The present study
In sum, past research revealed age-related impairments in vocal emotion perception. Yet, the role of
perceptual, cognitive, and emotional processes is still
debated. Moreover, how age effects are shaped by
biases in the perception of young and own-age
stimuli as well as listener sex remains unclear. To
address these gaps, we recorded emotional
expressions from a set of younger and older individuals and subjected recordings to an acoustic analysis
aimed at elucidating potential age effects on
emotion expression. Previous work examining the
relation between voice acoustics and emotions has
shown that different parameters like pitch, intensity
or duration take on different values for different
emotions, thus enabling emotion identification
(Banse & Scherer, 1996). Subsequently, we used the
recordings in an explicit emotion perception task
with naive, age-matched men and women whose
hearing, cognitive, and emotional state were carefully
assessed. Specifically, we applied an extensive test
battery including pure tone thresholds, basic
measures of intelligence, working memory, and a
range of emotion measures targeted at positive and
negative affect, anxiety, depression, optimism, and
emotion regulation.
In line with the evidence reviewed above, we
expected emotion perception to be less accurate
for older as compared to younger listeners and for
this difference to be statistically related to sensory,
cognitive, and/or emotional age effects. More importantly, we anticipated an effect of speaker age on listener accuracy. If age and own-age biases were
additive, younger listeners should do better when listening to young as compared to old speakers
because those speakers are both young and of the
same age. Older listeners on the other hand might
perform comparably when listening to young and
old speakers. Moreover, performance differences
between younger and older listeners should be
more pronounced for young as compared to old
speakers. Acoustic analysis of vocal expressions
should reveal whether age differences in expression
accuracy and/or expressive style account for these
effects. Specifically, reduced and qualitatively different emotion differentiation as a function of age
would speak for the former and the latter, respectively. Last, we hypothesised that female listeners would outperform male listeners, especially for more difficult
expressions. If this sex difference were to buffer
ageing effects in women as suggested previously
(Campbell et al., 2014; Demenescu et al., 2014), it
should be greater in older relative to younger participants.

Participants

This study recruited 169 participants. Of those, four
had to be excluded because they dropped out
from the study with incomplete data. Data from an
additional two participants were discarded because their accuracy fell more than three standard deviations below the mean. The remaining participants comprised 40 older men and 43 older women with a mean age of 68.6 years (SD 6.2, range 60–85) as well as 40 younger men and 40 younger women with a mean age of 23.1 years (SD 3.6, range 19–34). All participants were English-speaking Singaporeans. Older participants were
recruited through online advertisements and by
reaching out to various elder-care centres, senior
volunteer groups, and senior activity centres.
Younger participants were recruited from the
National University of Singapore and the community
via online university portals and online advertisements, respectively. The final sample is comparable
in terms of age, sex, and socio-economic background to previous studies (Ruffman et al., 2008).
However, two novel aspects are its large size and East-Asian origin. Participants provided written
informed consent and received financial compensation of S$15 per hour.
Stimulus material
Twenty Chinese Singaporean speakers who were lay actors and had taken part in non-professional theatre performances contributed to stimulus construction. Half the speakers (6 women, 4 men) had a
mean age of 66.4 years (8.6 SD, range 50–81) and produced age-matched stimuli for our older participants.
The other speakers (6 women, 4 men) had a mean age
of 21.9 years (1.6 SD, range 20–24 years) and produced
age-matched stimuli for our younger participants.
Because the number of speakers was fairly small (N
= 20) and the ratio of men and women unbalanced,
we considered it best to ignore speaker sex and to
focus on speaker age effects instead.
The sentence material for this study comprised six semantically neutral subject-verb-object sentences
(e.g. “The man walked to the office”). Although one
may argue that an emotional voice would render
these sentences emotionally incongruous, we considered this issue negligible as much of everyday
language is neutral and emotions are often added
nonverbally. Moreover, our approach has been
established previously (Mitchell et al., 2011; Ruffman,
Halberstadt, et al., 2009) and circumvents other, more serious issues arising from alternative approaches, such as filtering speech or using pseudowords.
Speakers were invited individually. After a vocal
warm-up, they were given the list of sentences and
were informed about the emotion categories they
should portray. For each category, they were given a
short scenario (e.g. you have received very good
news) and asked to act out the associated emotion
while producing the sentences on the list. Thus,
each speaker portrayed each of the given sentences
with angry, happy, neutral, and sad vocal expressions
resulting in 480 sentences (120 per emotion, 240 per
speaker age). Recordings were made in a sound-proof chamber and digitised at a 16-bit/44.1 kHz sampling rate. The amplitude of the recorded sentences was normalised to a common root-mean-square value using Adobe Audition 2.0. The quality of the
recordings and their emotional expressiveness were
verified subjectively by the authors of this study.
Additionally, as detailed in the results section, stimulus
quality was assessed objectively via acoustic analysis.
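The amplitude normalisation step can be sketched in a few lines (a minimal illustration of RMS normalisation; Adobe Audition's actual processing, and the reference level used in the study, are not reported and are assumed here):

```python
def rms_normalise(samples, target_rms=0.1):
    """Scale a waveform so its root-mean-square amplitude equals target_rms.

    `samples` is a sequence of floats; `target_rms` is a hypothetical
    reference level -- the study does not report the value used.
    """
    rms = (sum(s * s for s in samples) / len(samples)) ** 0.5
    scale = target_rms / rms
    return [s * scale for s in samples]
```

Applying the same target to every sentence equates average loudness across speakers and emotions, so that intensity differences cannot drive recognition.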
Praat (Boersma & Weenink, 2013) was used to
extract stimulus duration, harmonics-to-noise ratio
(HNR; the periodicity of the signal), fundamental frequency (F0) mean (the lowest frequency band of an
utterance perceived as pitch or speech melody), F0
standard deviation (F0 SD), F0 range (the pitch difference between the lowest and the highest value in an
utterance), formant 1 frequency mean (F1; the next
highest frequency band in the utterance following
F0), F1 bandwidth (F1B), formant 2 frequency mean
(F2; the next highest frequency band in the utterance
following F1), jitter (cycle-to-cycle variation in F0;
measures short-term variability/perturbations in
pitch), and shimmer (cycle-to-cycle variation in amplitude; measures short-term variability/perturbations in
voice intensity).
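To make the perturbation measures concrete, simplified versions of the F0 statistics and of Praat's "local" jitter and shimmer can be computed from cycle-level measurements (an illustrative sketch; Praat's own algorithms differ in detail):

```python
def f0_stats(periods_s):
    """F0 mean, SD, and range (Hz) from successive glottal cycle durations (s)."""
    f0 = [1.0 / p for p in periods_s]
    mean = sum(f0) / len(f0)
    sd = (sum((f - mean) ** 2 for f in f0) / (len(f0) - 1)) ** 0.5
    return mean, sd, max(f0) - min(f0)

def jitter_local(periods_s):
    """Local jitter: mean absolute difference between consecutive cycle
    periods, relative to the mean period (short-term pitch perturbation)."""
    diffs = [abs(a - b) for a, b in zip(periods_s, periods_s[1:])]
    return (sum(diffs) / len(diffs)) / (sum(periods_s) / len(periods_s))

def shimmer_local(amps):
    """Local shimmer: the same computation over cycle peak amplitudes
    (short-term intensity perturbation)."""
    diffs = [abs(a - b) for a, b in zip(amps, amps[1:])]
    return (sum(diffs) / len(diffs)) / (sum(amps) / len(amps))
```

A perfectly periodic voice yields zero jitter and shimmer; ageing vocal folds, with their degraded vibratory properties, raise both.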
Procedure

Participants completed a screening and an experimental session. The sessions were separated by a
minimum of 15 min and a maximum of 30 days constrained largely by the availability of older adults
and the centres through which they were recruited.
The average delay between sessions did not differ significantly between older (mean 4.3, SD 6.5 days) and
younger (mean 2.9, SD 4.9 days) participants as
demonstrated with a Welch t-test (t(152) = 1.49, p > .1).
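A Welch t-test, unlike Student's t-test, does not assume equal variances in the two groups; a minimal sketch of the statistic and its Welch-Satterthwaite degrees of freedom (with illustrative numbers, not the study's data):

```python
import math

def welch_t(x, y):
    """Welch's t-test: t statistic and degrees of freedom for two
    samples with possibly unequal variances (and unequal sizes)."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)  # sample variances
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    se2 = vx / nx + vy / ny                        # squared standard error
    t = (mx - my) / math.sqrt(se2)
    # Welch-Satterthwaite approximation of the degrees of freedom
    df = se2 ** 2 / ((vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1))
    return t, df
```

In practice the same result is obtained with `scipy.stats.ttest_ind(x, y, equal_var=False)`.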
Screening session
The screening session comprised the following tests
which were selected based on prior research in this
area (e.g. Stanley & Isaacowitz, 2015). First, participants
completed a set of questionnaires and paper-pencil
tests including a demographics questionnaire, the
State-Trait Anxiety Inventory (STAI1 and STAI2; Spielberger, Gorsuch, Lushene, Vagg, & Jacobs, 1983), the
Center for Epidemiologic Studies Depression Scale
(CES-D; Radloff, 1977), the Emotion Regulation Questionnaire (ERQ; Gross & John, 2003), the Positive and
Negative Affect Schedule (PANAS; Watson, Clark, &
Tellegan, 1988), the Revised Life Orientation Test
(LOT-R; Scheier, Carver, & Bridges, 1994), the Digit
Symbol Substitution Test of the Wechsler Adult Intelligence Scale (DSST), the Digit Span Tests (Digit Forward,
DF; Digit Backward, DB) of the Wechsler Adult Intelligence Scale, and the Mini–Mental State Examination
(MMSE; Folstein, Folstein, & McHugh, 1975).
Subsequently, we assessed participants’ hearing. To
this end, pure tones were presented via headphones
into one ear and participants indicated whether they
had heard a tone with their left or right ear by
raising their left or right arm. Tones were played at
1000, 2000, 4000, and 500 Hz, in that order for all participants. For the first 83 participants (20 older men, 23
older women, 20 younger men, 20 younger women),
each frequency was first presented at 25 dB, whereas
for later participants, each frequency was first presented at 0 dB. Measurements were chosen in accordance with the audiometric ISO values as mentioned by
the World Health Organization (WHO, http://www.who.
int/pbd/deafness/hearing_impairment_grades/en/) and
included frequencies and intensities crucial for understanding speech (French & Steinberg, 1947).
If participants heard a tone presented to them, the
current dB level was recorded as the threshold for that
particular frequency and ear. Upon failing to hear that
tone, intensity was increased in steps of 5 dB until loud
enough for the participant to hear. A single pure-tone
average was derived by calculating mean hearing
thresholds across both ears, and across all frequencies.
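The threshold search described above amounts to a simple ascending staircase; a sketch (the `hears` callable stands in for the listener's response and is hypothetical):

```python
def find_threshold(hears, start_db, step_db=5, max_db=100):
    """Ascending staircase: raise intensity in 5 dB steps until the
    listener reports the tone; return that level as the threshold.
    `hears(level)` simulates the listener's yes/no response."""
    level = start_db
    while not hears(level):
        level += step_db
        if level > max_db:
            return None  # no response within the tested range
    return level

def pure_tone_average(thresholds):
    """Single pure-tone average: mean threshold across both ears
    and all tested frequencies."""
    return sum(thresholds) / len(thresholds)
```

For example, a listener whose true threshold is 30 dB and who starts at 25 dB fails once and then responds at 30 dB, which is recorded as the threshold.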
Screening measures of study participants were subjected to a series of Welch t-tests comparing old and
young (Table 1). With the exception of the use of suppression in emotion regulation, both groups differed
on all measures.
Experimental session
During the experimental session, participants completed an explicit emotion perception task. Each trial
Table 1. Listener demographic and screening measures, mean (SD) by age group. Rows: education years; hearing threshold (dB) in the 1st recruitment drive (e.g. 25 (0)); hearing threshold (dB) in the 2nd recruitment drive (e.g. 37.32 (9.0)); State-Trait Anxiety Inventory I and II; ERQ Reappraisal; ERQ Suppression (e.g. 16.55 (4.4)); Digit Symbol Substitution Test; Digit Span Forward; Digit Span Backward; and professional background (count data: full-time employed, part-time employed). Most cell values did not survive extraction and are omitted.
Our participants had a broad professional background including untrained positions such as drivers and highly trained positions such as
teachers, air traffic controllers, and managers. Both young and old individuals were recruited from institutions catering to low and
high-income families.
began with a fixation cross. After 200 ms, a sentence
played over headphones while the fixation cross
remained on the screen. The fixation cross disappeared at sentence offset and was followed by five
choices including “1-angry”, “2-happy”, “3-neutral”,
“4-sad” and “5-other”. Participants made their choice
by pressing the appropriate number on a keyboard.
If participants selected “other”, they were prompted
to type in the emotion that they felt described the
speaker’s state. Participants submitted their choice
using the enter key after which they were asked to
rate emotion intensity and arousal on 5-point scales
ranging from 1 (very weak) to 5 (very strong). Then
the screen turned blank and the next trial began
after 1000 ms.
Prior to completing the experimental task, participants were instructed about the general procedure
and performed eight practice trials that were not
part of the stimulus set of the experiment proper.
The actual experiment comprised 480 trials. Spoken
sentences were presented only once and in a randomised order.
Results

Emotional expression
Mean values for each acoustic parameter, speaker
group, and emotion condition are presented in Supplementary Figure S1. Parameter values were subjected to separate two-way ANOVAs with Emotion
(angry, happy, sad, neutral) as a repeated measures
factor and Speaker Age (old, young) as a between
items factor. Emotion main effects were pursued
Table 2. Summary of results from the ANOVA analyses for emotion expression. For each acoustic parameter (duration, HNR, F0 mean, F0 SD, F0 range, F1 mean, F1 bandwidth, F2 mean, jitter, shimmer), the table reports the Emotion main-effect statistic (e.g. F(3, 54) = 41.2, p < .001, η2 = .53; non-significant effects as p > .1) and the post-hoc ordering of conditions (e.g. sad > angry = happy = neutral). The alignment of parameters, statistics, and post-hoc comparisons did not survive extraction.
with six paired t-tests, the results of which were corrected for multiple comparisons using the Benjamini-Hochberg procedure. With the exception of F0
range, F1 bandwidth, and F2 mean, expressions differed by Emotion (Table 2). Importantly, none of the
parameters produced significant main or interaction
effects involving Speaker Age (ps > .16).
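The Benjamini-Hochberg step-up procedure controls the false discovery rate across the six paired comparisons; a generic sketch (illustrative p-values, not the study's):

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: return a parallel list of
    booleans marking which p-values are significant at FDR level alpha."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    # Find the largest rank k with p_(k) <= (k / m) * alpha ...
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            k = rank
    # ... and reject the hypotheses with the k smallest p-values.
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k:
            reject[i] = True
    return reject
```

For example, with six paired tests yielding p-values [0.001, 0.04, 0.03, 0.2, 0.5, 0.8], only the first comparison survives correction at alpha = .05.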
We conducted two discriminant analyses, one for
each speaker age group, to further probe potential
speaker age effects and to ascertain reliable acoustic
discrimination between the emotion conditions.
Acoustic parameters served as the independent variables and emotion served as the dependent variable.
The analysis yielded three discriminant functions for
both older and younger speakers. Figure 1 illustrates
the first and second discriminant function of each
group. The first function accounted for 76.7% variance in older speakers and 77.9% variance in
younger speakers. Correlations between the acoustic
parameters and this function established F0 mean
and HNR as the two best discriminators in older
speakers (rs = .63, .44), and duration and HNR as the
two best discriminators in younger speakers (rs
= .84, .30). The second function accounted for 19.5%
and 18.7% variance in older and younger speakers,
respectively. It correlated most with F0 mean and
duration in both older (rs = .72, .53) and younger
speakers (rs = .71, .45). The third function accounted
for 3.8% and 3.4% variance in older and younger
speakers, respectively. It correlated most with F0 SD
and F0 range in both older (rs = .63, .42) and
younger speakers (rs = .45, .48).
The older speaker model accurately categorised
70.4% of the stimuli (angry, 63.3%; happy, 61.7%;
neutral, 81.7%; sad, 75%), while the younger speaker
model accurately categorised 78.3% of the stimuli
(angry, 80%; happy, 48%; neutral, 88.3%; sad, 96.7%).
A chi-square analysis indicated that results did not
differ as a combination of speaker age and emotion
(χ 2 = 5.02, df = 3, p = .170). Thus, we conclude that
acoustic parameters enabled classification of
emotional expressions well above chance (> 25%) for
all age groups and expressions.
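The discriminant analysis can be approximated with scikit-learn (an assumption: the authors do not name their software); a sketch on synthetic acoustic features with hypothetical, illustrative class means:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(7)
emotions = ["angry", "happy", "neutral", "sad"]

# Hypothetical class means over five acoustic features
# (duration s, HNR dB, F0 mean Hz, F0 SD Hz, jitter) -- illustrative only.
means = np.array([
    [1.8, 12.0, 260.0, 45.0, 0.012],  # angry
    [1.7, 15.0, 280.0, 55.0, 0.010],  # happy
    [1.6, 18.0, 210.0, 25.0, 0.008],  # neutral
    [2.4, 14.0, 230.0, 30.0, 0.011],  # sad
])
scales = np.array([0.1, 1.0, 10.0, 5.0, 0.001])  # within-class spread

X = np.vstack([m + rng.normal(0.0, scales, size=(30, 5)) for m in means])
y = np.repeat(emotions, 30)

lda = LinearDiscriminantAnalysis().fit(X, y)
# Four emotion classes yield three discriminant functions, as in the paper;
# explained_variance_ratio_ gives each function's share of the variance.
n_functions = len(lda.explained_variance_ratio_)
accuracy = lda.score(X, y)  # should sit well above the 25% chance level
```

Correlating each acoustic feature with the discriminant scores, as the authors did, then identifies which parameters carry each function.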
Emotion perception
Emotion perception performance is illustrated in
Figures 2 and 3. Our interest focused on the accuracy
with which participants perceived emotions. Arousal
and intensity ratings were collected for exploratory
purposes only and are documented in the Supplementary Materials. Incidentally, their patterns
resemble those for accuracy. Response times were
unsuitable for statistical analysis because participants
Figure 1. Discriminant analysis results for older (left) and younger (right) speakers, respectively. Emotions (angry, happy, sad, neutral) were predicted on the basis of the acoustic parameters described in the results section. Each vocal expression is plotted according to its discrimination
scores for discriminant functions 1 and 2. The different emotion categories are represented by different geometrical symbols and their centroids
are marked as 1 for the angry, 2 for the happy, 3 for the neutral, and 4 for the sad condition. Of interest is the difference between centroids, how
closely individual expressions cluster around their condition centroid, and how much they overlap with the expressions of another condition.
Larger centroid distances, tighter expression clusters, and less overlap between expressions from different conditions indicate better discrimination.
first chose or typed an emotion before pressing the
enter key.
Emotion categorisation responses were converted
into unbiased hit rates (Hu; Wagner, 1993) and
arcsine transformed before being subjected to an
ANOVA with Emotion (angry, happy, neutral, sad)
and Speaker Age (young, old) as repeated measures
factors and Listener Age (young, old) and Listener Sex
(female, male) as between subjects factors.
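Wagner's unbiased hit rate corrects raw accuracy for response bias by weighing hits against both how often an emotion was presented and how often its label was chosen; a minimal sketch of Hu and the arcsine transform (with a toy confusion matrix, not the study's data):

```python
import math

def unbiased_hit_rate(confusions, emotion):
    """Wagner's (1993) unbiased hit rate for one target emotion:
    squared count of correct responses divided by the product of the
    row total (stimuli presented) and the column total (times the
    label was used). `confusions` maps (stimulus, response) -> count."""
    emotions = sorted({e for pair in confusions for e in pair})
    hits = confusions.get((emotion, emotion), 0)
    row = sum(confusions.get((emotion, r), 0) for r in emotions)
    col = sum(confusions.get((s, emotion), 0) for s in emotions)
    return 0.0 if row == 0 or col == 0 else hits ** 2 / (row * col)

def arcsine_transform(hu):
    """Variance-stabilising transform applied to Hu before the ANOVA."""
    return math.asin(math.sqrt(hu))
```

For instance, 8 of 10 sad sentences labelled "sad" while "sad" was chosen 10 times overall gives Hu = 64 / (10 × 10) = .64, lower than the raw hit rate of .80 because some "sad" responses were false alarms.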
The analysis revealed a main effect of Emotion (F(3, 477) = 248.99, p < .001, η² = .31), indicating that sad voices had the highest accuracy followed by angry, neutral, and happy voices (all pairs significant, p < .001). Additionally, there were main effects of Listener Age (F(1, 159) = 49.44, p < .001, η² = .16) and Speaker Age (F(1, 159) = 733.84, p < .001, η² = .16) as well as interactions involving Emotion and Listener Age (F(3, 477) = 12.68, p < .001, η² = .02), Emotion and Speaker Age (F(3, 477) = 49.79, p < .001, η² = .02), Listener Age and Speaker Age (F(1, 159) = 106.99, p < .001, η² = .03), Emotion, Listener Age and Listener Sex (F(3, 477) = 2.74, p < .05, η² = .00), and Emotion, Listener Age and Speaker Age (F(3, 477) = 35.07, p < .001, η² = .01). All other effects were non-significant (p > .1).
We pursued the interaction of Emotion, Listener Age, and Listener Sex for each level of Emotion. The interaction of Listener Age and Listener Sex was significant for neutral voices (F(1, 159) = 4.33, p = .04, η² = .03), indicating that younger (F(1, 78) = 6.19, p = .01, η² = .07) but not older women (p > .25) outperformed their male peers. For happy voices, only the main effect of Listener Sex reached significance, indicating that both younger and older women outperformed younger and older men, respectively (F(1, 159) = 6.17, p = .01, η² = .04). Main and interaction effects of Sex were non-significant for sad and angry voices (ps > .25).
The Emotion, Listener Age, and Speaker Age interaction was explored by analysing each level of Emotion. This revealed a significant interaction of Listener Age and Speaker Age in the angry (F(1, 161) = 5.94, p = .02, η² = .00), happy (F(1, 161) = 188.23, p < .001, η² = .13), neutral (F(1, 161) = 57.69, p < .001, η² = .03), and sad (F(1, 161) = 7.01, p = .01, η² = .01) conditions. Based on our interest in listener age effects, we pursued these two-way interactions for each speaker age. This revealed that younger listeners outperformed older listeners in the case of younger speakers for angry (F(1, 161) = 21.17, p < .001, η² = .12), happy (F(1, 161) = 161.08, p < .001, η² = .50), neutral (F(1, 161) = 56.04, p < .001, η² = .26), and sad (F(1, 161) = 12.65, p < .001, η² = .07) voices. Similarly, in the case of older speakers, younger listeners were significantly better than older listeners for angry (F(1, 161) = 18.82, p < .001, η² = .10), happy (F(1, 161) = 16.87, p < .001, η² = .09), neutral (F(1, 161) = 24.42, p < .001, η² = .13), and sad (F(1, 161) = 4.38, p = .04, η² = .03) voices. Importantly, however, differences between younger and older listeners were larger for younger than older speakers. Due to our interest in speaker age effects, we explored each two-way interaction a second time by listener age. This revealed that in the case of older listeners, younger speakers were better recognised than older speakers for angry (F(1, 82) = 92.23, p < .001, η² = .06), neutral (F(1, 82) = 29.91, p < .001, η² = .02) and sad (F(1, 82) = 187.82, p < .001, η² = .19) voices. However, performances were comparable for both speaker age groups in the case of happy voices (p > .25). In the case of younger listeners, younger speakers were better recognised than older speakers for angry (F(1, 79) = 116.84, p < .001, η² = .16), happy (F(1, 79) = 347.39, p < .001, η² = .43), neutral (F(1, 79) = 287.66, p < .001, η² = .29), and sad (F(1, 79) = 290.66, p < .001, η² = .42) voices. Notably, differences between younger and older speakers were larger for younger than older listeners.
Statistical mediation of age differences in
emotion perception
We conducted a multiple mediation analysis to assess
whether age-related differences in screening
measures statistically explain age-related differences
in emotion perception. The model estimated total,
direct, and indirect (individual as well as combined)
effects of multiple simultaneous mediators. Method and terminology are described in detail by Preacher and Hayes (2008) and have been applied to the study of ageing and emotion previously (Lambrecht et al., 2012, 2014; Lima et al., 2014). In short, the estimated total effect
refers to the statistical relationship between a
primary dependent and independent variable, in our
case emotion recognition and age. The direct effect
refers to that same relationship when the effect of
potential mediators is controlled. The indirect effect
refers to the relationship between the dependent and independent variables that is explained by one or more
mediators. Again, in our case, an indirect effect may
be that working memory statistically explains age
differences in emotion recognition.
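The total/direct/indirect decomposition can be illustrated with simulated data. The sketch below assumes numpy and a single mediator; the variable names and the data-generating model are hypothetical, and the actual analysis entered six mediators simultaneously.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-ins for the study variables: age (predictor),
# working memory (mediator), and mean unbiased hit rate Hu (outcome).
n = 163
age = rng.uniform(19, 85, n)
wm = 60 - 0.3 * age + rng.normal(0, 5, n)            # mediator declines with age
hu = 0.5 + 0.004 * wm - 0.001 * age + rng.normal(0, 0.05, n)

def ols(y, *xs):
    """Ordinary least-squares coefficients, intercept first."""
    X = np.column_stack([np.ones(len(y)), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0]

total = ols(hu, age)[1]              # age -> outcome, ignoring the mediator
a = ols(wm, age)[1]                  # age -> mediator
direct, b = ols(hu, age, wm)[1:]     # age -> outcome with the mediator controlled
indirect = a * b                     # with one mediator: total = direct + indirect

# Percentile-bootstrap CI for the indirect effect (5,000 resamples,
# as in the study); it is deemed significant if the CI excludes zero.
boot = []
for _ in range(5000):
    i = rng.integers(0, n, n)
    boot.append(ols(wm[i], age[i])[1] * ols(hu[i], age[i], wm[i])[2])
ci = np.percentile(boot, [2.5, 97.5])
```

For a single mediator estimated by OLS, total = direct + indirect holds exactly; with multiple mediators, the combined indirect effect is the sum of the individual a·b products.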
Figure 2. Emotion perception plots illustrating the interaction of listener age and listener sex for each emotion. Error bars represent the between-subject standard error.
Age was defined as the independent variable and
Hu averaged across all variables besides listener age
was defined as the dependent variable. Mediators
were selected as follows. We first identified screening
measures that correlated with age and, among those,
identified measures with conceptual overlap (e.g.
STAI-1 and STAI-2). In case of such overlap, we chose
the measure showing the greater correlation with
age. Last, if mediators were strongly inter-correlated
(r ≥ .6), the one with the smaller age effect was
dropped. Together, these steps helped reduce problems arising from collinearity (Preacher & Hayes,
2008). The mediators that entered the model included trait scores of the State-Trait Anxiety Inventory (STAI-2), reappraisal scores of the Emotion Regulation Questionnaire (ERQ-R), Positive Affect (PANAS-P), the Revised Life Orientation Test (LOT-R), the Digit Symbol Substitution Test (DSST), and the Digit Forward Span Test (DF).
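The selection procedure can be sketched as follows. The scores and their generating model are our own stand-ins, the |r| > .2 age-screening cut-off is an illustrative assumption (the study screened for correlation with age without stating a threshold), and the conceptual-overlap step (e.g. STAI-1 vs STAI-2) is omitted; only the r ≥ .6 collinearity rule is taken from the text.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical screening scores; names echo the study's measures.
n = 163
age = rng.uniform(19, 85, n)
measures = {
    "STAI-2": 30 + 0.05 * age + rng.normal(0, 6, n),
    "LOT-R": 25 - 0.08 * age + rng.normal(0, 4, n),
    "DSST": 70 - 0.45 * age + rng.normal(0, 8, n),
    "DF": 9 - 0.02 * age + rng.normal(0, 2, n),
}

def r(x, y):
    """Pearson correlation coefficient."""
    return np.corrcoef(x, y)[0, 1]

# Step 1: keep measures that correlate with age (|r| > .2 is our threshold).
candidates = {k: v for k, v in measures.items() if abs(r(age, v)) > 0.2}

# Step 2: walk from the strongest to the weakest age correlate and drop any
# measure correlating at r >= .6 with an already-kept one.
ranked = sorted(candidates, key=lambda k: -abs(r(age, candidates[k])))
kept = []
for name in ranked:
    if all(abs(r(candidates[name], candidates[other])) < 0.6 for other in kept):
        kept.append(name)
```

Ordering by strength of the age correlation ensures that, within a collinear pair, the measure with the smaller age effect is the one dropped, as described above.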
The mediation analysis revealed that the total effect coefficient ± SE (−.0034 ± .0005) of the regression of emotion perception performance on age was significant (t(161) = −7.38, p < .001) and that the direct effect coefficient ± SE (−.0016 ± .0009), when controlling for all mediators, was only marginally significant (t(155) = −1.77, p = .08). Thus, there
was an indirect effect, which was estimated via bootstrapping (5,000 resamples; Hayes, 2009). Results in
the form of 95% confidence intervals of the estimated
effects revealed a significant effect across all
mediators, 95% CI [−.0039, −.0003], and for DSST
[−.0036, −.0004] and LOT-R scores [−.0009, −.0001],
individually. Lower scores were associated with an
increased emotion perception deficit for older relative
Figure 3. Emotion perception plots illustrating the interaction of speaker age and listener age for each emotion. Error bars represent the within-subject standard error.
to younger adults. None of the other individual
mediators crossed the significance threshold (STAI-2
[−.0003, .0007], ERQ-R [−.0001, .0003], PANAS-P
[−.0001, .0006], DF [−.0007, .0001]). We conducted a
similar analysis with Sex as a co-variate to explore a
possible role of this factor in the relationship
between screening measures and emotion perception
performance. Results were unchanged, suggesting
that effects of sex were negligible.
Discussion

Although it is well established that older adults experience increasing difficulties with emotion perception, past research has described the phenomenon incompletely. Specifically, previous studies paid little attention to possible speaker age and listener sex effects.
Additionally, there has been conflicting evidence on
the role of sensory, emotional, and cognitive
changes. The present study addressed this situation.
Replicating previous work, we found that older
adults perform worse than younger adults when categorising vocal expressions and that cognitive and
emotional age effects are statistically related to this.
Extending previous work, we demonstrate a role of
the vocaliser’s age and the receiver’s sex in modulating performance differences between young and
old. The following paragraphs deal with these findings
in more detail.
Own-age versus other-age emotion perception
In line with our predictions, we found that listener age
effects were more pronounced for younger as
compared with older speakers. Moreover, emotion
perception was better for younger than for older
speakers and this effect was more prominent in
younger as compared to older listeners. These
results agree with the simple additive model of age and own-age biases proposed in the introduction. They differ only in that both younger and older listeners performed better for younger as compared to older speakers, suggesting that the age bias affects listener performance more strongly than the own-age bias.
The present findings contradict one previous vocal
perception study (Dupuis & Pichora-Fuller, 2015) that
found no effects of speaker age in two separate experiments. However, this former study used only two
female speakers and compared only 28 or 56
younger adults with 28 older adults and was likely
less sensitive than the present protocol. On the
other hand, the findings obtained here map onto
visual work showing that expressions from young individuals produce overall better emotion perception
(Riediger et al., 2011) and that own-age faces
dampen emotion perception deficits in older adults
(Riediger et al., 2011; but see Ebner & Johnson,
2009). Hence, they extend these visual phenomena
into the auditory modality. Additionally, they speak
to the mechanisms underpinning speaker age effects
proposed previously.
The mechanisms that were of interest here concerned age differences in (1) expression accuracy, (2)
expression style and the familiarity with that style, as
well as (3) the motivation to engage with young and
old individuals. The present results offer no support
for the first mechanism. A discriminant analysis on
acoustic parameters categorised young and old
expressions with comparable accuracy and an
ANOVA conducted for each acoustic parameter
failed to identify speaker age effects. However,
because their absence may be due to the relatively
small speaker sample or the fact that our acoustic
analysis, although detailed, was far from exhaustive,
more work is needed to determine whether indeed
the ability to encode emotions vocally is preserved
in older age.
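The discriminant analysis referenced here (and plotted in Figure 1) can be sketched with simulated acoustics. Everything below is a stand-in: three made-up acoustic features, Fisher's multi-class discriminant functions obtained from the eigenvectors of Sw⁻¹Sb, and nearest-centroid classification in the discriminant space.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical acoustic measurements (three features per expression)
# for four emotion categories; not the study's actual parameters.
means = np.array([[0, 0, 0], [4, 0, 0], [0, 4, 0], [0, 0, 4]], dtype=float)
X = np.vstack([rng.normal(loc=m, scale=1.0, size=(40, 3)) for m in means])
y = np.repeat(np.arange(4), 40)          # 0=angry, 1=happy, 2=neutral, 3=sad

# Within-class (Sw) and between-class (Sb) scatter matrices.
grand_mean = X.mean(axis=0)
Sw = np.zeros((3, 3))
Sb = np.zeros((3, 3))
for k in range(4):
    Xk = X[y == k]
    mk = Xk.mean(axis=0)
    Sw += (Xk - mk).T @ (Xk - mk)
    d = (mk - grand_mean)[:, None]
    Sb += len(Xk) * (d @ d.T)

# Discriminant functions = leading eigenvectors of Sw^-1 Sb.
eigvals, eigvecs = np.linalg.eig(np.linalg.inv(Sw) @ Sb)
order = np.argsort(eigvals.real)[::-1]
W = eigvecs[:, order[:2]].real           # first two functions, as in Figure 1

scores = X @ W                           # coordinates of each expression
centroids = np.vstack([scores[y == k].mean(axis=0) for k in range(4)])

# Classify each expression by its nearest condition centroid.
dists = ((scores[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
accuracy = (dists.argmin(axis=1) == y).mean()
```

Tight clusters around well-separated centroids yield high nearest-centroid accuracy; comparable accuracies for younger and older speakers' expressions were the basis for the expression-accuracy conclusion above.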
The second mechanism received partial support
from both the expression and perception results.
The discriminant analysis, but not the ANOVA on
voice acoustics, showed that the different vocal parameters differed in the way they contributed to the
emotional expressions of younger and older speakers.
Whereas younger speakers relied most on expression
duration, older speakers relied most on pitch when
emotionally modulating speech. This was complemented by an interaction of speaker age and listener
age in emotion recognition. As mentioned above,
the age gap in listener performance was smaller in
the context of older as compared to younger speakers
in line with the notion that vocal expressions change
with age. Familiarity with these expressions (Wiese
et al., 2008) may then differentiate recognition processes in an age-dependent fashion and buffer the
performance of older adults when listening to peers.
The present results also support the third mechanism, which is based on motivation. This mechanism
does not presuppose age differences in expression
accuracy or style, but merely assumes that age-related changes in general voice acoustics allow listeners to infer speaker age and that this in turn influences
their motivation to engage with the speaker. Young
and own-age speakers are preferred over old and
other-age speakers (Gordon & Arvey, 2004). The interaction of speaker age and listener age evident in the
perceptual data is in line with this. However, as this
interaction agrees also with the familiarity mechanism
mentioned above, future research is needed to clearly
differentiate the two. This may be achieved, for
example, by establishing expressive style differences
more clearly using a larger sample of speakers or by
manipulating motivational processes through age
priming (e.g. Dijksterhuis, Aarts, Bargh, & van Knippenberg, 2000; Kelley, Tang, & Schmeichel, 2014; Packer &
Chasteen, 2006).
The female emotion perception benefit in
young and old
Past research has shown greater emotion sensitivity in
women than in men at both young and old ages.
However, whereas reported sex differences are relatively robust when explored with implicit paradigms
(e.g. Schirmer, Chen, et al., 2013), they are more
fickle when explored with explicit paradigms,
tending to show only when expressions are subtle
(Hoffmann et al., 2010). In line with previous studies,
we found that women performed more accurately
than men. Moreover, as reported by others (Lambrecht et al., 2014), this effect was present for happy
and neutral expressions only, which were overall less
well-perceived than sad and angry expressions.
Although the superior performance of women is
well established in the literature, its developmental
course has been rarely investigated and results are
equivocal. Some studies find no sex effects (Lima et al.,
2014; Paulmann et al., 2008), whereas others find sex
effects independently of age (Ruffman et al., 2010; Sullivan et al., 2017) or show that sex effects increase as
individuals get older (Campbell et al., 2014; Demenescu et al., 2014). The present results add to this controversy by suggesting that sex effects decline with
age. Whereas for happy expressions, women outperformed men irrespective of age, for neutral
expressions the effect was confined to younger
women. These differences may be due to methodological choices (e.g. faces vs voices). Additionally,
sample size and sampling biases may make it difficult
to estimate true sex effects in cross-sectional designs.
Moreover, this issue may be especially relevant for
studies with small group sizes of less than 20 individuals (Campbell et al., 2014; Demenescu et al., 2014;
Lima et al., 2014; Paulmann et al., 2008).
Although pending replication, the present study –
with 40–43 individuals per group – disagrees with
the idea that women are buffered against an age-related performance decrement. On the contrary, it
suggests that emotion recognition deficits become
an equal concern for the two sexes. This could be
due to sex-specific environmental influences declining
with age. For example, stereotype threat concerning
the nonverbal abilities of men and women (Koenig &
Eagly, 2005) may be greater for young than old individuals because of differences in life experience. Additionally, the biological factors underpinning sex
differences may change with age. Declining amounts
of circulating sex hormones may reduce the female
benefit in emotion recognition. This possibility
accords with existing work linking sex hormones with
sex differences in nonverbal tasks (Ebner, Kamin,
Diaz, Cohen, & MacDonald, 2015; Schirmer et al., 2008).
Possible explanations of age effects in emotion perception
Although speaker age and listener sex modulated
accuracy differences between the two age groups
examined here, they did not explain these differences
in full. Moreover, the presence of a robust listener age
main effect pointed to more fundamental processes
compromising the task performance of older participants. Several suggestions have been made as to
what these processes might be and the present
results contribute to this debate. A first, obvious candidate is an emerging hearing deficit termed presbycusis. Some findings suggest it impairs auditory emotion
perception (Lambrecht et al., 2012; Mitchell & Kingston, 2014). However, other studies show it to be irrelevant (Dupuis & Pichora-Fuller, 2015; Orbelo et al.,
2005) or to explain age differences in emotion perception incompletely (Lambrecht et al., 2012). The present
results align with this latter perspective. Although
older participants had poorer hearing than younger
participants, this difference was statistically unrelated
to the difference in emotion perception.
A second possible causal mechanism concerns age-related changes in the experience of emotions. At
present, the literature is divided as to whether all
emotions or only positive emotions change. In
support of the former position, research has shown
that age weakens subjective and bodily responses to
both negative (Curtis, Aunger, & Rabie, 2004) and positive events (Levenson et al., 1991; Tsai et al., 2000).
Additionally, there is evidence that the amygdala, a
brain structure thought to represent stimulus relevance and to be instrumental in both positive and
negative affect (Sander, 2012), shows an age-related
decline (Allen, Bruss, Brown, & Damasio, 2005;
Cacioppo et al., 2011; Malykhin, Bouchard, Camicioli,
& Coupland, 2008; Pressman et al., 2016), although the relevance of this decline for emotion processing is still
debated (Mather, 2012). Taking the latter position,
socio-emotional selectivity theory (SST) holds that
ageing reduces an individual’s temporal horizon,
making short-term outcomes more relevant than
long-term outcomes (Carstensen, 2006; Carstensen
et al., 1999). Thus, older adults see less value in tolerating immediate distress for a delayed reward and,
when given the choice, opt for short-term, hedonic
pay-offs. In line with this, there is some evidence
that their amygdala reactivity to negative but not positive events is reduced relative to younger adults
(Mather et al., 2004).
The present study documents emotion changes for
a number of measures including positive affect
(PANAS-P), negative affect (PANAS-N), optimism
(LOT-R) and the use of reappraisal in emotion regulation (ERQ Reappraisal). Compared to younger participants, older participants had a more positive and a less
negative affective state, had a less optimistic outlook
on their future, and used reappraisal more frequently.
Although these effects partially concur with SST,
together they failed to explain age differences in
emotion perception. The mediation analysis indicated
that age differences in optimism were positively
related to age differences in emotion perception and
that lower optimism partially explained poorer
emotion perception in older relative to younger adults.
Moreover, other emotion measures were irrelevant.
Last, it has been argued that declining cognitive
processes impair emotion perception. In particular,
the abilities to attend to multiple information units
and to manipulate these units in working memory
(typically conceived of as fluid intelligence; Kyllonen
& Christal, 1990) are necessary when listening to
sounds and deciding which of several response
options is appropriate. As these abilities peak in the
early twenties and then slowly wane (Filley &
Cullum, 1994; Gazzaley, Sheridan, Cooney, & D’Esposito, 2007; Wechsler, 1955), one can expect them to
contribute to age differences in explicit tests of
emotion perception. Accordingly, such a contribution
has been demonstrated (Krendl & Ambady, 2010; Lambrecht et al., 2012; Lima et al., 2014; Orbelo et al., 2005;
Pichora-Fuller, 2003) and was replicated here. Specifically, we observed that working memory (DSST)
declined with age and that this decline statistically
mediated the relationship between age and emotion perception.
Outlook and conclusions
Before concluding, we wish to discuss a few questions
arising from the present work. One question concerns
an apparent difference between emotion effects in
the visual and the auditory domain. Whereas happy
expressions elicit the highest levels of performance
in face perception studies, they elicited the lowest
levels of performance here. Moreover, whereas the
happy face advantage shows consistently (Kirita &
Endo, 1995; Lipp, Craig, & Dat, 2015), results for
happy voices are mixed with some converging (e.g.
Dupuis & Pichora-Fuller, 2015; Paulmann et al., 2008)
and others diverging from the present results (Demenescu et al., 2014). A modality difference in the recognition of happy expressions might be due to
differences in the frequency with which people
encounter them in the face and in the voice.
Perhaps smiling is more prevalent and more readily
posed than vocal happiness. Alternatively, the expressive features of facial happiness may be quite salient
and readily discerned from those of other emotions.
By contrast, the expressive features of vocal happiness
may be less discriminating. These and other possibilities are theoretically interesting and should be
explored in future studies.
A second question arising from this work is
whether the statistical relation between age effects
on emotion recognition and both working memory
and optimism could be replicated with a longitudinal
design. Although popular in the literature, the cross-sectional approach is confounded by economic progress and life-style changes that may produce generations with different emotion sensitivity (see
Lindenberger, von Oertzen, Ghisletta, & Hertzog,
2011 for a discussion of problems with the cross-sectional approach). Future research should tackle this
possibility and explore emotion perception with longitudinal data. Moreover, it should aim for a larger age
range and the testing of more and other expressions
including less common social signals like those of confidence or trust (Jiang & Pell, 2015; Lima et al., 2014).
Although not without shortcomings, the present study has many methodological features that make it a valuable addition to existing work on ageing and
emotion. Among other features, this includes a
large sample size, extensive participant screening,
as well as the use of age-matched stimuli that were
subjected to extensive acoustic analyses. Its results
are hence informative and contribute to our understanding of the relationship between age and vocal
emotion perception. First, the results suggest a role of both optimism and working memory in
explaining performance differences between young
and old. Second, they show that age deficits are
greater for expressions from younger as compared
to older speakers in line with an additive bias
towards young and own-age individuals. Last, our
findings indicate that the female benefit in emotion
perception persists in older adults. Yet, this benefit
becomes smaller, making emotion perception deficits
a relevant concern for both older men and women.
Disclosure statement
No potential conflict of interest was reported by the authors.
References

Allen, J. S., Bruss, J., Brown, C. K., & Damasio, H. (2005). Normal
neuroanatomical variation due to age: The major lobes and
a parcellation of the temporal region. Neurobiology of Aging,
26, 1245–1260.
Andersen, G. J. (2012). Aging and vision: Changes in function and
performance from optics to perception. Wiley Interdisciplinary
Reviews: Cognitive Science, 3, 403–410.
Banse, R., & Scherer, K. R. (1996). Acoustic profiles in vocal
emotion expression. Journal of Personality and Social
Psychology, 70, 614–636.
Boersma, P., & Weenink, D. (2013). Praat: Doing phonetics by
computer [Computer program]. Version 5.3.51. Retrieved
Bonebright, T. L., Thompson, J. L., & Leger, D. W. (1996). Gender
stereotypes in the expression and perception of vocal affect.
Sex Roles, 34, 429–445.
Cacioppo, J. T., Berntson, G. G., Bechara, A., Tranel, D., & Hawkley,
L. C. (2011). Could an aging brain contribute to subjective
well-being? The value added by a social neuroscience perspective. In A. Todorov, S. Fiske, & D. Prentice (Eds.), Social
neuroscience: Toward understanding the underpinnings of the
social mind (pp. 249–277). New York: Oxford University Press.
Campbell, A., Ruffman, T., Murray, J. E., & Glue, P. (2014). Oxytocin
improves emotion recognition for older males. Neurobiology
of Aging, 35, 2246–2248.
Carstensen, L. L., Isaacowitz, D. M., & Charles, S. T. (1999). Taking
time seriously. A theory of socioemotional selectivity.
American Psychologist, 54, 165–181.
Carstensen, L. L. (2006). The influence of a sense of time on
human development. Science, 312, 1913–1915.
Colsher, P. L., & Wallace, R. B. (1991). Longitudinal application
of cognitive function measures in a defined population of
community-dwelling elders. Annals of Epidemiology, 1,
Colton, R. H., Casper, J. K., Leonard, R., Thibeault, S., & Kelly, R.
(2011). Understanding voice problems: A physiological perspective for diagnosis and treatment (4th ed.). Philadelphia: LWW.
Curtis, V., Aunger, R., & Rabie, T. (2004). Evidence that disgust
evolved to protect from risk of disease. Proceedings of the
Royal Society B: Biological Sciences, 271, S131–S133.
Decoster, W., & Debruyne, F. (1997). The ageing voice: Changes in
fundamental frequency, waveform stability and spectrum.
Acta Oto-Rhino-Laryngologica Belgica, 51, 105–112.
Dehqan, A., Scherer, R. C., Dashti, G., Ansari-Moghaddam, A., &
Fanaie, S. (2012). The effects of aging on acoustic parameters
of voice. Folia Phoniatrica et Logopaedica: Official Organization
of the International Association of Logopedics and Phoniatrics
(IALP), 64, 265–270.
Demenescu, L. R., Mathiak, K. A., & Mathiak, K. (2014). Age- and
gender-related variations of emotion recognition in pseudowords and faces. Experimental Aging Research, 40, 187–207.
Dijksterhuis, A., Aarts, H., Bargh, J. A., & van Knippenberg, A.
(2000). On the relation between associative strength and
automatic behavior. Journal of Experimental Social
Psychology, 36, 531–544.
Donges, U.-S., Kersting, A., & Suslow, T. (2012). Women’s greater
ability to perceive happy facial emotion automatically: Gender
differences in affective priming. PloS One, 7, e41745.
Dupuis, K., & Pichora-Fuller, M. K. (2015). Aging affects identification of vocal emotions in semantically neutral sentences.
Journal of Speech, Language, and Hearing Research: JSLHR,
58, 1061–1076. doi:10.1044/2015_JSLHR-H-14-0256
Ebner, N. C., He, Y., & Johnson, M. K. (2011). Age and emotion
affect how we look at a face: Visual scan patterns differ for
own-age versus other-age emotional faces. Cognition &
Emotion, 25, 983–997.
Ebner, N. C., & Johnson, M. K. (2009). Young and older emotional
faces: Are there age group differences in expression identification and memory? Emotion, 9, 329–339.
Ebner, N. C., Johnson, M. R., Rieckmann, A., Durbin, K. A., Johnson,
M. K., & Fischer, H. (2013). Processing own-age vs. other-age
faces: Neuro-behavioral correlates and effects of emotion.
NeuroImage, 78, 363–371.
Ebner, N. C., Kamin, H., Diaz, V., Cohen, R. A., & MacDonald, K.
(2015). Hormones as “difference makers” in cognitive and
socioemotional aging processes. Frontiers in Psychology, 5,
505. doi:10.3389/fpsyg.2014.01595
Fecteau, S., Armony, J. L., Joanette, Y., & Belin, P. (2005).
Judgment of emotional nonlinguistic vocalizations: Age-related differences. Applied Neuropsychology, 12, 40–48.
Filley, C. M., & Cullum, C. M. (1994). Attention and vigilance
functions in normal aging. Applied Neuropsychology, 1, 29–
Folstein, M. F., Folstein, S. E., & McHugh, P. R. (1975). “Mini-mental
state”. A practical method for grading the cognitive state of
patients for the clinician. Journal of Psychiatric Research, 12
(3), 189–198.
Fölster, M., Hess, U., Hühnel, I., & Werheid, K. (2015). Age-related
response bias in the decoding of sad facial expressions.
Behavioral Sciences, 5, 443–460.
French, N. R., & Steinberg, J. C. (1947). Factors governing the intelligibility of speech sounds. The Journal of the Acoustical Society
of America, 19, 90–119.
Gazzaley, A., Sheridan, M. A., Cooney, J. W., & D’Esposito, M.
(2007). Age-related deficits in component processes of
working memory. Neuropsychology, 21, 532–539.
Gordon, R. A., & Arvey, R. D. (2004). Age bias in laboratory and
field settings: A meta-analytic investigation. Journal of
Applied Social Psychology, 34, 468–492.
Gracco, C., & Kahane, J. C. (1989). Age-related changes in the vestibular folds of the human larynx: A histomorphometric study.
Journal of Voice, 3, 204–212.
Gross, J. J., & John, O. P. (2003). Individual differences in two
emotion regulation processes: Implications for affect, relationships, and well-being. Journal of Personality and Social
Psychology, 85, 348–362.
Hall, J. A. (1978). Gender effects in decoding nonverbal cues.
Psychological Bulletin, 85, 845–857.
Hayes, A. F. (2009). Beyond Baron and Kenny: Statistical
mediation analysis in the new millennium. Communication
Monographs, 76, 408–420.
He, Y., Ebner, N. C., & Johnson, M. K. (2011). What predicts the
own-age bias in face recognition memory? Social Cognition,
29, 97–109.
Hines, M. (2010). Sex-related variation in human behavior and the
brain. Trends in Cognitive Sciences, 14, 448–456.
Hoffmann, H., Kessler, H., Eppel, T., Rukavina, S., & Traue, H. C.
(2010). Expression intensity, gender and facial emotion recognition: Women recognize only subtle facial emotions better
than men. Acta Psychologica, 135, 278–283.
Hung, A.-Y., & Cheng, Y. (2014). Sex differences in preattentive
perception of emotional voices and acoustic attributes.
Neuroreport, 25, 464–469.
Hunter, E. M., Phillips, L. H., & MacPherson, S. E. (2010). Effects of
age on cross-modal emotion perception. Psychology and
Aging, 25, 779–787.
Isaacowitz, D. M., Löckenhoff, C. E., Lane, R. D., Wright, R.,
Sechrest, L., Riedel, R., & Costa, P. T. (2007). Age differences
in recognition of emotion in lexical stimuli and facial
expressions. Psychology and Aging, 22, 147–159.
Isaacowitz, D. M., Wadlinger, H. A., Goren, D., & Wilson, H. R.
(2006). Is there an age-related positivity effect in visual
attention? A comparison of two methodologies. Emotion, 6,
Jiang, X., & Pell, M. D. (2015). On how the brain decodes vocal
cues about speaker confidence. Cortex, 66, 9–34.
Kahane, J. C. (1987). Connective tissue changes in the larynx and
their effects on voice. Journal of Voice, 1, 27–30.
Kelley, N. J., Tang, D., & Schmeichel, B. J. (2014). Mortality salience
biases attention to positive versus negative images among
individuals higher in trait self-control. Cognition and
Emotion, 28, 550–559.
Kirita, T., & Endo, M. (1995). Happy face advantage in recognizing
facial expressions. Acta Psychologica, 89, 149–163.
Koenig, A. M., & Eagly, A. H. (2005). Stereotype threat in men on a
test of social sensitivity. Sex Roles, 52, 489–496.
Krendl, A. C., & Ambady, N. (2010). Older adults’ decoding of
emotions: Role of dynamic versus static cues and age-related cognitive decline. Psychology and Aging, 25, 788–793.
Kyllonen, P. C., & Christal, R. E. (1990). Reasoning ability is (little
more than) working-memory capacity?! Intelligence, 14, 389–
Lambrecht, L., Kreifelts, B., & Wildgruber, D. (2012). Age-related
decrease in recognition of emotional facial and prosodic
expressions. Emotion, 12, 529–539. doi:10.1037/a0026827
Lambrecht, L., Kreifelts, B., & Wildgruber, D. (2014). Gender differences in emotion recognition: Impact of sensory modality and
emotional category. Cognition and Emotion, 28, 452–469.
Levenson, R. W., Carstensen, L. L., Friesen, W. V., & Ekman, P.
(1991). Emotion, physiology, and expression in old age.
Psychology and Aging, 6, 28–35.
Lima, C. F., Alves, T., Scott, S. K., & Castro, S. L. (2014). In the ear of
the beholder: How age shapes emotion processing in nonverbal vocalizations. Emotion, 14, 145–160.
Lindenberger, U., von Oertzen, T., Ghisletta, P., & Hertzog, C.
(2011). Cross-sectional age variance extraction: What’s
change got to do with it? Psychology and Aging, 26, 34–47.
Lipp, O. V., Craig, B. M., & Dat, M. C. (2015). A happy face advantage with male caucasian faces: It depends on the company
you keep. Social Psychological and Personality Science, 6,
Malatesta, C. Z., Izard, C. E., Culver, C., & Nicolich, M. (1987).
Emotion communication skills in young, middle-aged, and
older women. Psychology and Aging, 2, 193–203.
Malykhin, N. V., Bouchard, T. P., Camicioli, R., & Coupland, N. J.
(2008). Aging hippocampus and amygdala. Neuroreport, 19,
Mather, M., Canli, T., English, T., Whitfield, S., Wais, P., Ochsner, K.,
… Carstensen, L. L. (2004). Amygdala responses to emotionally valenced stimuli in older and younger adults.
Psychological Science, 15, 259–263.
Mather, M. (2012). The emotion paradox in the aging brain.
Annals of the New York Academy of Sciences, 1251, 33–49.
Mill, A., Allik, J., Realo, A., & Valk, R. (2009). Age-related differences
in emotion recognition ability: A cross-sectional study.
Emotion, 9, 619–630.
Mitchell, R. L. C., Kingston, R. A., & Barbosa Bouças, S. L. (2011). The
specificity of age-related decline in interpretation of emotion
cues from prosody. Psychology and Aging, 26, 406–414.
Mitchell, R. L. C., & Kingston, R. A. (2014). Age-related decline in
emotional prosody discrimination. Experimental Psychology,
61, 215–223.
Moser, T., Predoehl, F., & Starr, A. (2013). Review of hair cell
synapse defects in sensorineural hearing impairment.
Otology & Neurotology, 34, 995–1004.
Ohno, T., & Hirano, S. (2014). Treatment of aging vocal folds:
Novel approaches. Current Opinion in Otolaryngology & Head
and Neck Surgery, 22, 472–476.
Orbelo, D. M., Grim, M. A., Talbott, R. E., & Ross, E. D. (2005).
Impaired comprehension of affective prosody in elderly subjects is not predicted by age-related hearing loss or age-related cognitive decline. Journal of Geriatric Psychiatry and
Neurology, 18, 25–32.
Oyler, A. L. (2012). Untreated hearing loss in adults—a growing
national epidemic. Retrieved from
Packer, D. J., & Chasteen, A. L. (2006). Looking to the future: How
possible aged selves influence prejudice toward older adults.
Social Cognition, 24, 218–247.
Paulmann, S., Pell, M. D., & Kotz, S. A. (2008). How aging affects
the recognition of emotional speech. Brain and Language,
104, 262–269.
Pichora-Fuller, M. K. (2003). Cognitive aging and auditory information processing. International Journal of Audiology, 42
(Suppl. 2), 2S26–2S32.
Preacher, K. J., & Hayes, A. F. (2008). Asymptotic and resampling
strategies for assessing and comparing indirect effects in multiple mediator models. Behavior Research Methods, 40, 879–891.
Pressman, P. S., Noniyeva, Y., Bott, N., Dutt, S., Sturm, V., Miller, B.
L., & Kramer, J. H. (2016). Comparing volume loss in neuroanatomical regions of emotion versus regions of cognition in
healthy aging. PloS One, 11, e0158187.
Radloff, L. S. (1977). The CES-D Scale: A self-report depression
scale for research in the general population. Applied
Psychological Measurement, 1, 385–401.
Rhodes, M. G., & Anastasi, J. S. (2012). The own-age bias in face
recognition: A meta-analytic and theoretical review.
Psychological Bulletin, 138, 146–174.
Riddle, D. R. (Ed.). (2007). Brain aging: Models, methods, and mechanisms. Boca Raton, FL: CRC Press. Retrieved from http://www.
Riediger, M., Voelkle, M. C., Ebner, N. C., & Lindenberger, U. (2011).
Beyond “happy, angry, or sad?”: Age-of-poser and age-of-rater
effects on multi-dimensional emotion perception. Cognition
and Emotion, 25, 968–982.
Ruffman, T., Halberstadt, J., & Murray, J. (2009). Recognition of
facial, auditory, and bodily emotions in older adults. The
Journals of Gerontology Series B: Psychological Sciences and
Social Sciences, 64B, 696–703.
Ruffman, T., Henry, J. D., Livingstone, V., & Phillips, L. H. (2008). A
meta-analytic review of emotion recognition and aging:
Implications for neuropsychological models of aging.
Neuroscience & Biobehavioral Reviews, 32, 863–881.
Ruffman, T., Murray, J., Halberstadt, J., & Taumoepeau, M. (2010).
Verbosity and emotion recognition in older adults. Psychology
and Aging, 25, 492–497.
Ruffman, T., Sullivan, S., & Dittrich, W. (2009). Older adults’ recognition of bodily and auditory expressions of emotion.
Psychology and Aging, 24, 614–622.
Ryan, M., Murray, J., & Ruffman, T. (2009). Aging and the perception of emotion: Processing vocal expressions alone and with
faces. Experimental Aging Research, 36, 1–22.
Sander, D. (2012). The role of the amygdala in the appraising
brain. Behavioral and Brain Sciences, 35, 161.
Scheier, M. F., Carver, C. S., & Bridges, M. W. (1994).
Distinguishing optimism from neuroticism (and trait
anxiety, self-mastery, and self-esteem): A re-evaluation of the
Life Orientation Test. Journal of Personality and Social
Psychology, 67, 1063–1078.
Schirmer, A. (2014). Emotion (1st ed.). Thousand Oaks, CA: SAGE
Publications, Inc.
Schirmer, A., Chen, C.-B., Ching, A., Tan, L., & Hong, R. Y. (2013).
Vocal emotions influence verbal memory: Neural correlates
and interindividual differences. Cognitive, Affective, &
Behavioral Neuroscience, 13, 80–93.
Schirmer, A., Escoffier, N., Li, Q. Y., Li, H., Strafford-Wilson, J., & Li,
W.-I. (2008). What grabs his attention but not hers? Estrogen
correlates with neurophysiological measures of vocal
change detection. Psychoneuroendocrinology, 33, 718–727.
Schirmer, A., Seow, C. S., & Penney, T. B. (2013). Humans process
dog and human facial affect in similar ways. PLoS ONE, 8,
Schirmer, A., Striano, T., & Friederici, A. D. (2005). Sex differences
in the preattentive processing of vocal emotional expressions.
Neuroreport, 16, 635–639.
Spielberger, C. D., Gorsuch, R. L., Lushene, R., Vagg, P. R., & Jacobs,
G. A. (1983). Manual for the State-Trait Anxiety Inventory. Palo
Alto, CA: Consulting Psychologists Press.
Stanley, J. T., & Isaacowitz, D. M. (2015). Caring more and knowing
more reduces age-related differences in emotion perception.
Psychology and Aging, 30, 383–395.
Sullivan, S., Campbell, A., Hutton, S. B., & Ruffman, T. (2017).
What’s good for the goose is not good for the gander: Age
and gender differences in scanning emotion faces. The
Journals of Gerontology: Series B, 72, 441–447.
Tsai, J. L., Levenson, R. W., & Carstensen, L. L. (2000). Autonomic,
subjective, and expressive responses to emotional films in
older and younger Chinese Americans and European
Americans. Psychology and Aging, 15, 684–693.
Wagner, H. L. (1993). On measuring performance in category
judgment studies of nonverbal behavior. Journal of
Nonverbal Behavior, 17, 3–28.
Watson, D., Clark, L. A., & Tellegen, A. (1988). Development and
validation of brief measures of positive and negative affect:
The PANAS scales. Journal of Personality and Social
Psychology, 54(6), 1063–1070.
Wechsler, D. (1955). Manual for the Wechsler adult intelligence
scale (Vol. VI). Oxford: Psychological Corp.
Wiese, H., Schweinberger, S. R., & Hansen, K. (2008). The age of
the beholder: ERP evidence of an own-age bias in face
memory. Neuropsychologia, 46, 2973–2985.
Williams, L. M., Mathersul, D., Palmer, D. M., Gur, R. C., Gur, R. E., &
Gordon, E. (2009). Explicit identification and implicit recognition of facial emotions: I. Age effects in males and females
across 10 decades. Journal of Clinical and Experimental
Neuropsychology, 31, 257–277.
Yamasoba, T., Lin, F. R., Someya, S., Kashio, A., Sakamoto, T., &
Kondo, K. (2013). Current concepts in age-related hearing
loss: Epidemiology and mechanistic pathways. Hearing
Research, 303, 30–38.