miun.se Publications
1 - 11 of 11
  • 1.
    Bhatara, Anjali
    et al.
    CNRS, UMR 8242, Lab Psychol Percept, Paris, France.
    Laukka, Petri
    Stockholm Univ, Dept Psychol, S-10691 Stockholm, Sweden.
    Boll-Avetisyan, Natalie
    Univ Potsdam, Dept Linguist, Potsdam, Germany.
    Granjon, Lionel
    CNRS, UMR 8242, Lab Psychol Percept, Paris, France.
    Elfenbein, Hillary Anger
    Washington Univ, John M Olin Sch Business, St Louis, MO 63130 USA.
    Bänziger, Tanja
    Mid Sweden University, Faculty of Human Sciences, Department of Psychology.
    Second Language Ability and Emotional Prosody Perception. 2016. In: PLoS ONE, ISSN 1932-6203, E-ISSN 1932-6203, Vol. 11, no 6, article id e0156855. Article in journal (Refereed)
    Abstract [en]

    The present study examines the effect of language experience on vocal emotion perception in a second language. Native speakers of French with varying levels of self-reported English ability were asked to identify emotions from vocal expressions produced by American actors in a forced-choice task, and to rate their pleasantness, power, alertness and intensity on continuous scales. Stimuli included emotionally expressive English speech (emotional prosody) and non-linguistic vocalizations (affect bursts), and a baseline condition with Swiss-French pseudo-speech. Results revealed effects of English ability on the recognition of emotions in English speech but not in non-linguistic vocalizations. Specifically, higher English ability was associated with less accurate identification of positive emotions, but not with the interpretation of negative emotions. Moreover, higher English ability was associated with lower ratings of pleasantness and power, again only for emotional prosody. This suggests that second language skills may sometimes interfere with emotion recognition from speech prosody, particularly for positive emotions.
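
A note on scoring: the study above reports emotion recognition from a forced-choice task. As a minimal, generic illustration of how such tasks are scored, the Python sketch below computes per-emotion hit rates from a confusion matrix; the emotion labels and counts are invented for illustration and are not the study's data or analysis code.

    import numpy as np

    emotions = ["joy", "anger", "fear", "sadness"]        # hypothetical response options
    # Rows: stimulus emotion; columns: listener response (invented counts).
    confusion = np.array([[30,  5,  3,  2],
                          [ 4, 28,  6,  2],
                          [ 3,  7, 25,  5],
                          [ 2,  3,  6, 29]])

    hit_rates = confusion.diagonal() / confusion.sum(axis=1)  # correct / presented
    for emo, hr in zip(emotions, hit_rates):
        print(f"{emo}: hit rate {hr:.2f}")

Raw hit rates like these are often supplemented with bias-corrected scores when listeners use the response options unevenly.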

  • 2.
    Bänziger, Tanja
    Mid Sweden University, Faculty of Human Sciences, Department of Psychology.
    Accuracy of judging emotions. 2016. In: The Social Psychology of Perceiving Others Accurately / [ed] Hall, Judith A.; Schmid Mast, Marianne; West, Tessa V., Cambridge University Press, 2016, p. 23-51. Chapter in book (Other academic)
  • 3.
    Bänziger, Tanja
    et al.
    Mid Sweden University, Faculty of Human Sciences, Department of Psychology.
    Hosoya, Georg
    Free Univ Berlin, Dept Educ Sci & Psychol, Berlin, Germany.
    Scherer, Klaus R.
    Univ Geneva, Swiss Ctr Affect Sci, Geneva, Switzerland.
    Path Models of Vocal Emotion Communication. 2015. In: PLoS ONE, ISSN 1932-6203, E-ISSN 1932-6203, Vol. 10, no 9, article id e0136675. Article in journal (Refereed)
    Abstract [en]

    We propose to use a comprehensive path model of vocal emotion communication, encompassing encoding, transmission, and decoding processes, to empirically model data sets on emotion expression and recognition. The utility of the approach is demonstrated for two data sets from two different cultures and languages, based on corpora of vocal emotion enactment by professional actors and emotion inference by naive listeners. Lens model equations, hierarchical regression, and multivariate path analysis are used to compare the relative contributions of objectively measured acoustic cues in the enacted expressions and subjective voice cues as perceived by listeners to the variance in emotion inference from vocal expressions for four emotion families (fear, anger, happiness, and sadness). While the results confirm the central role of arousal in vocal emotion communication, the utility of applying an extended path modeling framework is demonstrated by the identification of unique combinations of distal cues and proximal percepts carrying information about specific emotion families, independent of arousal. The statistical models generated show that more sophisticated acoustic parameters need to be developed to explain the distal underpinnings of subjective voice quality percepts that account for much of the variance in emotion inference, in particular voice instability and roughness. The general approach advocated here, as well as the specific results, open up new research strategies for work in psychology (specifically emotion and social perception research) and engineering and computer science (specifically research and development in the domain of affective computing, particularly on automatic emotion detection and synthetic emotion expression in avatars).
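
The path-model approach above extends the Brunswikian lens model. For orientation, here is a sketch of the classical lens model indices (ecological validity, cue utilization, matching, achievement) computed on simulated data; the cue names, coefficients, and sample size are assumptions for illustration, and the paper's extended multivariate path models go well beyond this classical form.

    import numpy as np
    from numpy.linalg import lstsq

    rng = np.random.default_rng(0)
    n = 200
    cues = rng.normal(size=(n, 3))  # e.g. acoustic cues: F0 mean, intensity, speech rate
    criterion = cues @ np.array([0.6, 0.3, 0.1]) + rng.normal(scale=0.5, size=n)  # encoded emotion
    judgment = cues @ np.array([0.5, 0.4, 0.0]) + rng.normal(scale=0.7, size=n)   # listener inference

    def linear_fit(X, y):
        """Return predictions and multiple correlation R for y ~ X (with intercept)."""
        X1 = np.column_stack([np.ones(len(X)), X])
        beta, *_ = lstsq(X1, y, rcond=None)
        yhat = X1 @ beta
        return yhat, np.corrcoef(yhat, y)[0, 1]

    yhat_e, R_e = linear_fit(cues, criterion)     # ecological validity of the cues
    yhat_s, R_s = linear_fit(cues, judgment)      # cue utilization by the listener
    G = np.corrcoef(yhat_e, yhat_s)[0, 1]         # matching of the two linear models
    r_a = np.corrcoef(criterion, judgment)[0, 1]  # achievement (communication accuracy)
    # Tucker's lens model equation: r_a = G*R_e*R_s + C*sqrt(1-R_e**2)*sqrt(1-R_s**2),
    # where C is the correlation of the two models' residuals.
    print(f"R_e={R_e:.2f}  R_s={R_s:.2f}  G={G:.2f}  r_a={r_a:.2f}")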

  • 4.
    Bänziger, Tanja
    et al.
    University of Geneva, Switzerland; Uppsala University.
    Mortillaro, Marcello
    Scherer, Klaus R.
    Introducing the Geneva Multimodal Expression corpus for experimental research on emotion perception. 2012. In: Emotion, ISSN 1528-3542, E-ISSN 1931-1516, Vol. 12, no 5, p. 1161-1179. Article in journal (Refereed)
    Abstract [en]

    Research on the perception of emotional expressions in faces and voices is exploding in psychology, the neurosciences, and affective computing. This article provides an overview of some of the major emotion expression (EE) corpora currently available for empirical research and introduces a new, dynamic, multimodal corpus of emotion expressions, the Geneva Multimodal Emotion Portrayals Core Set (GEMEP-CS). The design features of the corpus are outlined and justified, and detailed validation data for the core set selection are presented and discussed. Finally, an associated database with microcoded facial, vocal, and body action elements, as well as observer ratings, is introduced.

  • 5.
    Bänziger, Tanja
    et al.
    University of Geneva, Switzerland.
    Patel, Sona
    University of Geneva, Switzerland.
    Scherer, Klaus R.
    University of Geneva, Switzerland.
    The Role of Perceived Voice and Speech Characteristics in Vocal Emotion Communication. 2014. In: Journal of nonverbal behavior, ISSN 0191-5886, E-ISSN 1573-3653, Vol. 38, no 1, p. 31-52. Article in journal (Refereed)
    Abstract [en]

    Aiming at a more comprehensive assessment of nonverbal vocal emotion communication, this article presents the development and validation of a new rating instrument for the assessment of perceived voice and speech features. In two studies, using two different sets of emotion portrayals by German and French actors, ratings of perceived voice and speech characteristics (loudness, pitch, intonation, sharpness, articulation, roughness, instability, and speech rate) were obtained from non-expert (untrained) listeners. In addition, standard acoustic parameters were extracted from the voice samples. Overall, highly similar patterns of results were found in both studies. Rater agreement (reliability) reached highly satisfactory levels for most features. Multiple discriminant analysis results reveal that both perceived vocal features and acoustic parameters allow a high degree of differentiation of the actor-portrayed emotions. Positive emotions can be classified with a higher hit rate on the basis of perceived vocal features, confirming suggestions in the literature that it is difficult to find acoustic valence indicators. The results show that the suggested scales (Geneva Voice Perception Scales) can be reliably measured and make a substantial contribution to a more comprehensive assessment of the process of emotion inferences from vocal expression.
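
The multiple discriminant analysis reported above can be approximated with standard tools. Below is a minimal sketch using scikit-learn's LinearDiscriminantAnalysis on simulated listener ratings for the eight perceived features named in the abstract; the emotion set, effect sizes, and data are invented, so the numbers it prints say nothing about the actual study.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    emotions = np.repeat(["joy", "anger", "fear", "sadness"], 50)  # portrayed emotion labels
    # Eight perceived features: loudness, pitch, intonation, sharpness,
    # articulation, roughness, instability, speech rate (simulated ratings).
    X = rng.normal(size=(200, 8)) + 0.8 * (emotions == "anger")[:, None]

    lda = LinearDiscriminantAnalysis()
    hit_rates = cross_val_score(lda, X, emotions, cv=5)  # cross-validated classification accuracy
    print(f"mean hit rate: {hit_rates.mean():.2f}")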

  • 6.
    Bänziger, Tanja
    et al.
    University of Geneva, Switzerland.
    Scherer, Klaus R.
    University of Geneva, Switzerland.
    Hall, Judith A.
    Northeastern University, USA.
    Rosenthal, Robert
    University of California, USA.
    Introducing the MiniPONS: A Short Multichannel Version of the Profile of Nonverbal Sensitivity (PONS). 2011. In: Journal of nonverbal behavior, ISSN 0191-5886, E-ISSN 1573-3653, Vol. 35, no 3, p. 189-204. Article in journal (Refereed)
    Abstract [en]

    Despite extensive research activity on the recognition of emotional expression, there are only a few validated tests of individual differences in this competence (generally considered part of nonverbal sensitivity and emotional intelligence). This paper reports the development of a short, multichannel version (MiniPONS) of the established Profile of Nonverbal Sensitivity (PONS) test. The full test has been extensively validated in many different cultures, showing substantial correlations with a large range of outcome variables. The short multichannel version (64 items) described here correlates very highly with the full version and shows reasonable construct validity through significant correlations with other tests of emotion recognition ability. Based on these results, the role of nonverbal sensitivity as part of a latent trait of emotional competence is discussed, and the MiniPONS is suggested as a convenient method for rapid screening of this central socioemotional competence.

  • 7.
    Flykt, Anders
    et al.
    Mid Sweden University, Faculty of Human Sciences, Department of Psychology.
    Bänziger, Tanja
    Mid Sweden University, Faculty of Human Sciences, Department of Psychology.
    Lindeberg, Sofie
    Curtin University, Perth, Australia.
    Intensity of vocal responses to spider and snake pictures in fearful individuals. 2017. In: Australian journal of psychology, ISSN 0004-9530, E-ISSN 1742-9536, Vol. 69, no 3, p. 184-191. Article in journal (Refereed)
    Abstract [en]

    Objective

    Strong bodily responses have repeatedly been shown in participants fearful of spiders and snakes when they see pictures of the feared animal. In this study, we investigate if these fear responses affect voice intensity, require awareness of the pictorial stimuli, and whether the responses run their course once initiated.

    Method

    Animal-fearful participants responded to arrowhead-shaped probes superimposed on animal pictures (snake, spider, or rabbit), presented either backwardly masked or with no masking. Their task was to say ‘up’ or ‘down’ as quickly as possible depending on the orientation of the arrowhead. Arrowhead probes were presented at two different stimulus onset asynchronies (SOA), 261 or 561 ms after picture onset. In addition to vocal responses, electrocardiogram and skin conductance (SC) were recorded.

    Results

    No fear-specific effects emerged to masked stimuli, providing no support for the notion that fear responses can be triggered by stimuli presented outside awareness. For the unmasked pictures, voice intensity was stronger and SC response amplitude was larger to probes superimposed on the feared animal than on other animals, at both SOAs. Heart rate changes were greater during exposure to feared animals when probed at 561 ms, but not at 261 ms, which indicates that a fear response can change its course after initiation.

    Conclusion

    Exposure to pictures of the feared animal increased voice intensity. No support was found for responses without awareness. Observed effects on heart rate may be due to a change in parasympathetic activation during the fear response.

  • 8.
    Holding, Benjamin C.
    et al.
    Karolinska Inst, Dept Clin Neurosci, Stockholm.
    Laukka, Petri
    Stockholm Univ, Dept Psychol, Stockholm.
    Fischer, Håkan
    Stockholm Univ, Dept Psychol, Stockholm.
    Bänziger, Tanja
    Mid Sweden University, Faculty of Human Sciences, Department of Psychology.
    Axelsson, John
    Karolinska Inst, Dept Clin Neurosci, Stockholm; Stockholm Univ, Stress Res Inst, Stockholm.
    Sundelin, Tina
    Karolinska Inst, Dept Clin Neurosci, Stockholm; Stockholm Univ, Dept Psychol, Stockholm.
    Multimodal Emotion Recognition Is Resilient to Insufficient Sleep: Results From Cross-Sectional and Experimental Studies. 2017. In: Sleep, ISSN 0161-8105, E-ISSN 1550-9109, Vol. 40, no 11, article id UNSP zsx145. Article in journal (Refereed)
    Abstract [en]

    Objectives: Insufficient sleep has been associated with impaired recognition of facial emotions. However, previous studies have found inconsistent results, potentially stemming from the type of static picture task used. We therefore examined whether insufficient sleep was associated with decreased emotion recognition ability in two separate studies using a dynamic multimodal task. Methods: Study 1 used a cross-sectional design consisting of 291 participants with questionnaire measures assessing sleep duration and self-reported sleep quality for the previous night. Study 2 used an experimental design involving 181 participants where individuals were quasi-randomized into either a sleep-deprivation (N = 90) or a sleep-control (N = 91) condition. All participants from both studies were tested on the same forced-choice multimodal test of emotion recognition to assess the accuracy of emotion categorization. Results: Sleep duration, self-reported sleep quality (study 1), and sleep deprivation (study 2) did not predict overall emotion recognition accuracy or speed. Similarly, the responses to each of the twelve emotions tested showed no evidence of impaired recognition ability, apart from one positive association suggesting that greater self-reported sleep quality could predict more accurate recognition of disgust (study 1). Conclusions: The studies presented here involve considerably larger samples than previous studies and the results support the null hypotheses. Therefore, we suggest that the ability to accurately categorize the emotions of others is not associated with short-term sleep duration or sleep quality and is resilient to acute periods of insufficient sleep.

  • 9.
    Hovey, Daniel
    et al.
    University of Gothenburg, Gothenburg.
    Henningsson, Susanne
    University of Gothenburg, Gothenburg.
    Cortes, Diana S.
    Stockholm University, Stockholm.
    Bänziger, Tanja
    Mid Sweden University, Faculty of Human Sciences, Department of Psychology.
    Zettergren, Anna
    University of Gothenburg, Gothenburg.
    Melke, Jonas
    University of Gothenburg, Gothenburg.
    Fischer, Håkan
    Stockholm University, Stockholm.
    Laukka, Petri
    Stockholm University, Stockholm.
    Westberg, Lars
    University of Gothenburg, Gothenburg.
    Emotion recognition associated with polymorphism in oxytocinergic pathway gene ARNT2. 2018. In: Social Cognitive & Affective Neuroscience, ISSN 1749-5016, E-ISSN 1749-5024, Vol. 13, no 2, p. 173-181. Article in journal (Refereed)
    Abstract [en]

    The ability to correctly understand the emotional expression of another person is essential for social relationships and appears to be a partly inherited trait. The neuropeptides oxytocin and vasopressin have been shown to influence this ability as well as face processing in humans. Here, recognition of the emotional content of faces and voices, separately and combined, was investigated in 492 subjects, genotyped for 25 single nucleotide polymorphisms (SNPs) in eight genes encoding proteins important for oxytocin and vasopressin neurotransmission. The SNP rs4778599 in the gene encoding aryl hydrocarbon receptor nuclear translocator 2 (ARNT2), a transcription factor that participates in the development of hypothalamic oxytocin and vasopressin neurons, showed an association with emotion recognition of audio-visual stimuli in women (n=309) that survived correction for multiple testing. This study provides evidence for an association that further extends previous findings of oxytocin and vasopressin involvement in emotion recognition.
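
On the phrase "survived correction for multiple testing" above: with 25 SNPs tested, per-SNP p-values must be adjusted for the number of tests. Here is a generic sketch using statsmodels (Bonferroni chosen as an example; the abstract does not state which correction the authors applied, and all p-values below are placeholders).

    import numpy as np
    from statsmodels.stats.multitest import multipletests

    rng = np.random.default_rng(2)
    pvals = rng.uniform(size=25)  # placeholder p-values, one per SNP
    pvals[0] = 0.0004             # stand-in for a strong association such as rs4778599
    reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")
    print(reject[0], p_adj[0])    # survives correction iff p * 25 < 0.05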

  • 10.
    Juslin, Patrik N.
    et al.
    Laukka, Petri
    Bänziger, Tanja
    Mid Sweden University, Faculty of Human Sciences, Department of Psychology. Uppsala University.
    The Mirror to Our Soul?: Comparisons of Spontaneous and Posed Vocal Expression of Emotion. 2018. In: Journal of nonverbal behavior, ISSN 0191-5886, E-ISSN 1573-3653, Vol. 42, no 1, p. 1-40. Article in journal (Refereed)
    Abstract [en]

    It has been the subject of much debate in the study of vocal expression of emotions whether posed expressions (e.g., actor portrayals) are different from spontaneous expressions. In the present investigation, we assembled a new database consisting of 1877 voice clips from 23 datasets, and used it to systematically compare spontaneous and posed expressions across 3 experiments. Results showed that (a) spontaneous expressions were generally rated as more genuinely emotional than were posed expressions, even when controlling for differences in emotion intensity, (b) there were differences between the two stimulus types with regard to their acoustic characteristics, and (c) spontaneous expressions with a high emotion intensity conveyed discrete emotions to listeners to a similar degree as has previously been found for posed expressions, supporting a dose–response relationship between intensity of expression and discreteness in perceived emotions. Our conclusion is that there are reliable differences between spontaneous and posed expressions, though not necessarily in the ways commonly assumed. Implications for emotion theories and the use of emotion portrayals in studies of vocal expression are discussed.

  • 11.
    Mehu, Marc
    et al.
    University of Geneva, Switzerland.
    Mortillaro, Marcello
    University of Geneva, Switzerland.
    Bänziger, Tanja
    University of Geneva, Switzerland.
    Scherer, Klaus R.
    University of Geneva, Switzerland.
    Reliable facial muscles activation enhances recognizability and credibility of emotional expression. 2012. In: Emotion, ISSN 1528-3542, E-ISSN 1931-1516, Vol. 12, no 4, p. 701-715. Article in journal (Refereed)
    Abstract [en]

    We tested Ekman's (2003) suggestion that movements of a small number of reliable facial muscles are particularly trustworthy cues to experienced emotion because they tend to be difficult to produce voluntarily. On the basis of theoretical predictions, we identified two subsets of facial action units (AUs): reliable AUs and versatile AUs. A survey on the controllability of facial AUs confirmed that reliable AUs indeed seem more difficult to control than versatile AUs, although the distinction between the two sets of AUs should be understood as a difference in degree of controllability rather than a discrete categorization. Professional actors enacted a series of emotional states using method acting techniques, and their facial expressions were rated by independent judges. The effect of the two subsets of AUs (reliable AUs and versatile AUs) on identification of the emotion conveyed, its perceived authenticity, and perceived intensity was investigated. Activation of the reliable AUs had a stronger effect than that of versatile AUs on the identification, perceived authenticity, and perceived intensity of the emotion expressed. We found little evidence, however, for specific links between individual AUs and particular emotion categories. We conclude that reliable AUs may indeed convey trustworthy information about emotional processes but that most of these AUs are likely to be shared by several emotions rather than providing information about specific emotions. This study also suggests that the issue of reliable facial muscles may generalize beyond the Duchenne smile.
