miun.se Publications
Bänziger, Tanja
Publications (10 of 11)
Hovey, D., Henningsson, S., Cortes, D. S., Bänziger, T., Zettergren, A., Melke, J., . . . Westberg, L. (2018). Emotion recognition associated with polymorphism in oxytocinergic pathway gene ARNT2. Social Cognitive & Affective Neuroscience, 13(2), 173-181
2018 (English). In: Social Cognitive & Affective Neuroscience, ISSN 1749-5016, E-ISSN 1749-5024, Vol. 13, no 2, p. 173-181. Article in journal (Refereed). Published.
Abstract [en]

The ability to correctly understand the emotional expression of another person is essential for social relationships and appears to be a partly inherited trait. The neuropeptides oxytocin and vasopressin have been shown to influence this ability as well as face processing in humans. Here, recognition of the emotional content of faces and voices, separately and combined, was investigated in 492 subjects, genotyped for 25 single nucleotide polymorphisms (SNPs) in eight genes encoding proteins important for oxytocin and vasopressin neurotransmission. The SNP rs4778599 in the gene encoding aryl hydrocarbon receptor nuclear translocator 2 (ARNT2), a transcription factor that participates in the development of hypothalamic oxytocin and vasopressin neurons, showed an association with emotion recognition of audio-visual stimuli in women (n = 309) that survived correction for multiple testing. This study provides evidence for an association that further extends previous findings on oxytocin and vasopressin involvement in emotion recognition.

Keywords
ARNT2, Emotion recognition, Oxytocin, Social cognition, Vasopressin
National Category
Psychology
Identifiers
urn:nbn:se:miun:diva-33274 (URN), 10.1093/scan/nsx141 (DOI), 000427017200004 (), 29194499 (PubMedID), 2-s2.0-85042627662 (Scopus ID)
Available from: 2018-03-14. Created: 2018-03-14. Last updated: 2018-05-07. Bibliographically approved.
Juslin, P. N., Laukka, P. & Bänziger, T. (2018). The Mirror to Our Soul?: Comparisons of Spontaneous and Posed Vocal Expression of Emotion. Journal of Nonverbal Behavior, 42(1), 1-40
2018 (English). In: Journal of Nonverbal Behavior, ISSN 0191-5886, E-ISSN 1573-3653, Vol. 42, no 1, p. 1-40. Article in journal (Refereed). Published.
Abstract [en]

It has been the subject of much debate in the study of vocal expression of emotions whether posed expressions (e.g., actor portrayals) are different from spontaneous expressions. In the present investigation, we assembled a new database consisting of 1877 voice clips from 23 datasets, and used it to systematically compare spontaneous and posed expressions across 3 experiments. Results showed that (a) spontaneous expressions were generally rated as more genuinely emotional than were posed expressions, even when controlling for differences in emotion intensity, (b) there were differences between the two stimulus types with regard to their acoustic characteristics, and (c) spontaneous expressions with a high emotion intensity conveyed discrete emotions to listeners to a similar degree as has previously been found for posed expressions, supporting a dose–response relationship between intensity of expression and discreteness in perceived emotions. Our conclusion is that there are reliable differences between spontaneous and posed expressions, though not necessarily in the ways commonly assumed. Implications for emotion theories and the use of emotion portrayals in studies of vocal expression are discussed.

National Category
Psychology
Identifiers
urn:nbn:se:miun:diva-33350 (URN), 10.1007/s10919-017-0268-x (DOI)
Available from: 2018-03-26. Created: 2018-03-26. Last updated: 2018-07-19. Bibliographically approved.
Flykt, A., Bänziger, T. & Lindeberg, S. (2017). Intensity of vocal responses to spider and snake pictures in fearful individuals. Australian Journal of Psychology, 69(3), 184-191
2017 (English). In: Australian Journal of Psychology, ISSN 0004-9530, E-ISSN 1742-9536, Vol. 69, no 3, p. 184-191. Article in journal (Refereed). Published.
Abstract [en]

Objective

Strong bodily responses have repeatedly been shown in participants fearful of spiders and snakes when they see pictures of the feared animal. In this study, we investigated whether these fear responses affect voice intensity, whether they require awareness of the pictorial stimuli, and whether the responses run their course once initiated.

Method

Animal-fearful participants responded to arrowhead-shaped probes superimposed on animal pictures (snake, spider, or rabbit), presented either backwardly masked or with no masking. Their task was to say ‘up’ or ‘down’ as quickly as possible depending on the orientation of the arrowhead. Arrowhead probes were presented at two different stimulus onset asynchronies (SOAs), 261 or 561 ms after picture onset. In addition to vocal responses, the electrocardiogram (ECG) and skin conductance (SC) were recorded.

Results

No fear-specific effects emerged to masked stimuli, thereby providing no support for the notion that fear responses can be triggered by stimuli presented outside awareness. For the unmasked pictures, voice intensity was stronger and SC response amplitude was larger to probes superimposed on the feared animal than other animals, at both SOAs. Heart rate changes were greater during exposure to feared animals when probed at 561 ms, but not at 261 ms, which indicates that a fear response can change its course after initiation.

Conclusion

Exposure to pictures of the feared animal increased voice intensity. No support was found for responses without awareness. Observed effects on heart rate may be due to a change in parasympathetic activation during the fear response.

Keywords
ECG, fear, skin conductance, snake, spider, voice intensity
National Category
Psychology
Identifiers
urn:nbn:se:miun:diva-29717 (URN), 10.1111/ajpy.12137 (DOI), 000409559500005 (), 2-s2.0-84979774590 (Scopus ID)
Available from: 2016-12-21. Created: 2016-12-21. Last updated: 2018-02-27. Bibliographically approved.
Holding, B. C., Laukka, P., Fischer, H., Bänziger, T., Axelsson, J. & Sundelin, T. (2017). Multimodal Emotion Recognition Is Resilient to Insufficient Sleep: Results From Cross-Sectional and Experimental Studies. Sleep, 40(11), Article ID UNSP zsx145.
2017 (English). In: Sleep, ISSN 0161-8105, E-ISSN 1550-9109, Vol. 40, no 11, article id UNSP zsx145. Article in journal (Refereed). Published.
Abstract [en]

Objectives: Insufficient sleep has been associated with impaired recognition of facial emotions. However, previous studies have found inconsistent results, potentially stemming from the type of static picture task used. We therefore examined whether insufficient sleep was associated with decreased emotion recognition ability in two separate studies using a dynamic multimodal task.

Methods: Study 1 used a cross-sectional design consisting of 291 participants with questionnaire measures assessing sleep duration and self-reported sleep quality for the previous night. Study 2 used an experimental design involving 181 participants where individuals were quasi-randomized into either a sleep-deprivation (N = 90) or a sleep-control (N = 91) condition. All participants from both studies were tested on the same forced-choice multimodal test of emotion recognition to assess the accuracy of emotion categorization.

Results: Sleep duration, self-reported sleep quality (study 1), and sleep deprivation (study 2) did not predict overall emotion recognition accuracy or speed. Similarly, the responses to each of the twelve emotions tested showed no evidence of impaired recognition ability, apart from one positive association suggesting that greater self-reported sleep quality could predict more accurate recognition of disgust (study 1).

Conclusions: The studies presented here involve considerably larger samples than previous studies and the results support the null hypotheses. Therefore, we suggest that the ability to accurately categorize the emotions of others is not associated with short-term sleep duration or sleep quality and is resilient to acute periods of insufficient sleep.

Keywords
Sleep deprivation, emotion, emotion recognition, perception, social
National Category
Psychology
Identifiers
urn:nbn:se:miun:diva-32567 (URN), 10.1093/sleep/zsx145 (DOI), 000417043000005 (), 2-s2.0-85044532634 (Scopus ID)
Available from: 2017-12-21. Created: 2017-12-21. Last updated: 2018-04-16. Bibliographically approved.
Bänziger, T. (2016). Accuracy of judging emotions. In: Hall, Judith A.; Schmid Mast, Marianne; West, Tessa V. (Eds.), The Social Psychology of Perceiving Others Accurately (pp. 23-51). Cambridge University Press
2016 (English). In: The Social Psychology of Perceiving Others Accurately / [ed] Hall, Judith A.; Schmid Mast, Marianne; West, Tessa V., Cambridge University Press, 2016, p. 23-51. Chapter in book (Other academic).
Place, publisher, year, edition, pages
Cambridge University Press, 2016
National Category
Psychology
Identifiers
urn:nbn:se:miun:diva-33351 (URN), 10.1017/CBO9781316181959.002 (DOI), 9781316181959 (ISBN)
Available from: 2018-03-26. Created: 2018-03-26. Last updated: 2018-03-26. Bibliographically approved.
Bhatara, A., Laukka, P., Boll-Avetisyan, N., Granjon, L., Elfenbein, H. A. & Bänziger, T. (2016). Second Language Ability and Emotional Prosody Perception. PLoS ONE, 11(6), Article ID e0156855.
2016 (English). In: PLoS ONE, ISSN 1932-6203, E-ISSN 1932-6203, Vol. 11, no 6, article id e0156855. Article in journal (Refereed). Published.
Abstract [en]

The present study examines the effect of language experience on vocal emotion perception in a second language. Native speakers of French with varying levels of self-reported English ability were asked to identify emotions from vocal expressions produced by American actors in a forced-choice task, and to rate their pleasantness, power, alertness and intensity on continuous scales. Stimuli included emotionally expressive English speech (emotional prosody) and non-linguistic vocalizations (affect bursts), and a baseline condition with Swiss-French pseudo-speech. Results revealed effects of English ability on the recognition of emotions in English speech but not in non-linguistic vocalizations. Specifically, higher English ability was associated with less accurate identification of positive emotions, but not with the interpretation of negative emotions. Moreover, higher English ability was associated with lower ratings of pleasantness and power, again only for emotional prosody. This suggests that second language skills may sometimes interfere with emotion recognition from speech prosody, particularly for positive emotions.

National Category
Psychology
Identifiers
urn:nbn:se:miun:diva-28474 (URN), 10.1371/journal.pone.0156855 (DOI), 000377218700066 (), 27253326 (PubMedID), 2-s2.0-84973664669 (Scopus ID)
Available from: 2016-07-21. Created: 2016-07-21. Last updated: 2017-11-28. Bibliographically approved.
Bänziger, T., Hosoya, G. & Scherer, K. R. (2015). Path Models of Vocal Emotion Communication. PLoS ONE, 10(9), Article ID e0136675.
2015 (English). In: PLoS ONE, ISSN 1932-6203, E-ISSN 1932-6203, Vol. 10, no 9, article id e0136675. Article in journal (Refereed). Published.
Abstract [en]

We propose to use a comprehensive path model of vocal emotion communication, encompassing encoding, transmission, and decoding processes, to empirically model data sets on emotion expression and recognition. The utility of the approach is demonstrated for two data sets from two different cultures and languages, based on corpora of vocal emotion enactment by professional actors and emotion inference by naive listeners. Lens model equations, hierarchical regression, and multivariate path analysis are used to compare the relative contributions of objectively measured acoustic cues in the enacted expressions and subjective voice cues as perceived by listeners to the variance in emotion inference from vocal expressions for four emotion families (fear, anger, happiness, and sadness). While the results confirm the central role of arousal in vocal emotion communication, the utility of applying an extended path modeling framework is demonstrated by the identification of unique combinations of distal cues and proximal percepts carrying information about specific emotion families, independent of arousal. The statistical models generated show that more sophisticated acoustic parameters need to be developed to explain the distal underpinnings of subjective voice quality percepts that account for much of the variance in emotion inference, in particular voice instability and roughness. The general approach advocated here, as well as the specific results, open up new research strategies for work in psychology (specifically emotion and social perception research) and engineering and computer science (specifically research and development in the domain of affective computing, particularly on automatic emotion detection and synthetic emotion expression in avatars).

National Category
Psychology
Identifiers
urn:nbn:se:miun:diva-26467 (URN), 10.1371/journal.pone.0136675 (DOI), 000360437700055 (), 26325076 (PubMedID), 2-s2.0-84943338822 (Scopus ID)
Available from: 2015-12-15. Created: 2015-12-15. Last updated: 2017-12-01. Bibliographically approved.
Bänziger, T., Patel, S. & Scherer, K. R. (2014). The Role of Perceived Voice and Speech Characteristics in Vocal Emotion Communication. Journal of Nonverbal Behavior, 38(1), 31-52
2014 (English). In: Journal of Nonverbal Behavior, ISSN 0191-5886, E-ISSN 1573-3653, Vol. 38, no 1, p. 31-52. Article in journal (Refereed). Published.
Abstract [en]

Aiming at a more comprehensive assessment of nonverbal vocal emotion communication, this article presents the development and validation of a new rating instrument for the assessment of perceived voice and speech features. In two studies, using two different sets of emotion portrayals by German and French actors, ratings of perceived voice and speech characteristics (loudness, pitch, intonation, sharpness, articulation, roughness, instability, and speech rate) were obtained from non-expert (untrained) listeners. In addition, standard acoustic parameters were extracted from the voice samples. Overall, highly similar patterns of results were found in both studies. Rater agreement (reliability) reached highly satisfactory levels for most features. Multiple discriminant analysis results reveal that both perceived vocal features and acoustic parameters allow a high degree of differentiation of the actor-portrayed emotions. Positive emotions can be classified with a higher hit rate on the basis of perceived vocal features, confirming suggestions in the literature that it is difficult to find acoustic valence indicators. The results show that the suggested scales (Geneva Voice Perception Scales) can be reliably measured and make a substantial contribution to a more comprehensive assessment of the process of emotion inferences from vocal expression.

National Category
Psychology
Identifiers
urn:nbn:se:miun:diva-27229 (URN), 10.1007/s10919-013-0165-x (DOI)
Available from: 2016-03-14. Created: 2016-03-14. Last updated: 2017-11-30. Bibliographically approved.
Bänziger, T., Mortillaro, M. & Scherer, K. R. (2012). Introducing the Geneva Multimodal Expression corpus for experimental research on emotion perception. Emotion, 12(5), 1161-1179
2012 (English). In: Emotion, ISSN 1528-3542, E-ISSN 1931-1516, Vol. 12, no 5, p. 1161-1179. Article in journal (Refereed). Published.
Abstract [en]

Research on the perception of emotional expressions in faces and voices is exploding in psychology, the neurosciences, and affective computing. This article provides an overview of some of the major emotion expression (EE) corpora currently available for empirical research and introduces a new, dynamic, multimodal corpus of emotion expressions, the Geneva Multimodal Emotion Portrayals Core Set (GEMEP-CS). The design features of the corpus are outlined and justified, and detailed validation data for the core set selection are presented and discussed. Finally, an associated database with microcoded facial, vocal, and body action elements, as well as observer ratings, is introduced.

National Category
Psychology
Identifiers
urn:nbn:se:miun:diva-27230 (URN), 10.1037/a0025827 (DOI)
Available from: 2016-03-14. Created: 2016-03-14. Last updated: 2018-11-06. Bibliographically approved.
Mehu, M., Mortillaro, M., Bänziger, T. & Scherer, K. R. (2012). Reliable facial muscles activation enhances recognizability and credibility of emotional expression. Emotion, 12(4), 701-715
2012 (English). In: Emotion, ISSN 1528-3542, E-ISSN 1931-1516, Vol. 12, no 4, p. 701-715. Article in journal (Refereed). Published.
Abstract [en]

We tested Ekman's (2003) suggestion that movements of a small number of reliable facial muscles are particularly trustworthy cues to experienced emotion because they tend to be difficult to produce voluntarily. On the basis of theoretical predictions, we identified two subsets of facial action units (AUs): reliable AUs and versatile AUs. A survey on the controllability of facial AUs confirmed that reliable AUs indeed seem more difficult to control than versatile AUs, although the distinction between the two sets of AUs should be understood as a difference in degree of controllability rather than a discrete categorization. Professional actors enacted a series of emotional states using method acting techniques, and their facial expressions were rated by independent judges. The effect of the two subsets of AUs (reliable AUs and versatile AUs) on identification of the emotion conveyed, its perceived authenticity, and perceived intensity was investigated. Activation of the reliable AUs had a stronger effect than that of versatile AUs on the identification, perceived authenticity, and perceived intensity of the emotion expressed. We found little evidence, however, for specific links between individual AUs and particular emotion categories. We conclude that reliable AUs may indeed convey trustworthy information about emotional processes but that most of these AUs are likely to be shared by several emotions rather than providing information about specific emotions. This study also suggests that the issue of reliable facial muscles may generalize beyond the Duchenne smile.

National Category
Psychology
Identifiers
urn:nbn:se:miun:diva-27231 (URN), 10.1037/a0026717 (DOI)
Available from: 2016-03-14. Created: 2016-03-14. Last updated: 2017-11-30. Bibliographically approved.