Mid Sweden University

Bänziger, Tanja
Publications (10 of 16)
Döllinger, L., Högman, L. B., Laukka, P., Bänziger, T., Makower, I., Fischer, H. & Hau, S. (2023). Trainee psychotherapists’ emotion recognition accuracy improves after training: emotion recognition training as a tool for psychotherapy education. Frontiers in Psychology, 14, Article ID 1188634.
2023 (English). In: Frontiers in Psychology, E-ISSN 1664-1078, Vol. 14, article id 1188634. Article in journal (Refereed). Published.
Abstract [en]

Introduction: Psychotherapists’ emotional and empathic competencies have a positive influence on psychotherapy outcome and alliance. However, it is doubtful whether psychotherapy education in itself leads to improvements in trainee psychotherapists’ emotion recognition accuracy (ERA), which is an essential part of these competencies. Methods: In a randomized, controlled, double-blind study (N = 68), we trained trainee psychotherapists (57% psychodynamic therapy and 43% cognitive behavioral therapy) to detect non-verbal emotional expressions in others using standardized computerized training programs (one for multimodal emotion recognition accuracy and one for micro expression recognition accuracy), and compared their results to those of an active control group one week after the training (n = 60) and at the one-year follow-up (n = 55). The participants trained once weekly during a three-week period. As outcome measures, we used a multimodal emotion recognition accuracy task, a micro expression recognition accuracy task, and an emotion recognition accuracy task for combined verbal and non-verbal emotional expressions in medical settings. Results: The results of mixed multilevel analyses suggest that the multimodal emotion recognition accuracy training led to significantly steeper increases than the other two conditions from pretest to the posttest one week after the last training session. When comparing the pretest to follow-up differences in slopes, the superiority of the multimodal training group was still detectable in the unimodal audio modality and the unimodal video modality (in comparison to the control training group), but not when considering the multimodal audio-video modality or the total score of the multimodal emotion recognition accuracy measure. The micro expression training group showed a significantly steeper change trajectory from pretest to posttest compared to the control training group, but not compared to the multimodal training group.
However, this effect had vanished by the one-year follow-up. There were no differences in change trajectories for the outcome measure of emotion recognition accuracy in medical settings. Discussion: We conclude that trainee psychotherapists’ emotion recognition accuracy can be effectively trained, especially multimodal emotion recognition accuracy, and suggest that the changes in unimodal emotion recognition accuracy (audio-only and video-only) are long-lasting. Implications of these findings for psychotherapy education are discussed.

Place, publisher, year, edition, pages
Frontiers Media SA, 2023
Keywords
emotion in psychotherapy, emotion recognition accuracy, micro expression recognition, multimodal emotion recognition, psychotherapy education, trainee psychotherapists, training emotion recognition
National Category
Psychology
Identifiers
urn:nbn:se:miun:diva-49090 (URN), 10.3389/fpsyg.2023.1188634 (DOI), 001041808800001 (ISI), 2-s2.0-85166672073 (Scopus ID)
Available from: 2023-08-17. Created: 2023-08-17. Last updated: 2023-08-18. Bibliographically approved.
Cortes, D. S., Tornberg, C., Bänziger, T., Elfenbein, H. A., Fischer, H. & Laukka, P. (2021). Effects of aging on emotion recognition from dynamic multimodal expressions and vocalizations. Scientific Reports, 11(1), Article ID 2647.
2021 (English). In: Scientific Reports, E-ISSN 2045-2322, Vol. 11, no 1, article id 2647. Article in journal (Refereed). Published.
Abstract [en]

Age-related differences in emotion recognition have predominantly been investigated using static pictures of facial expressions, and positive emotions beyond happiness have rarely been included. The current study instead used dynamic facial and vocal stimuli, and included a wider than usual range of positive emotions. In Task 1, younger and older adults were tested for their abilities to recognize 12 emotions from brief video recordings presented in visual, auditory, and multimodal blocks. Task 2 assessed recognition of 18 emotions conveyed by non-linguistic vocalizations (e.g., laughter, sobs, and sighs). Results from both tasks showed that younger adults had significantly higher overall recognition rates than older adults. In Task 1, significant group differences (younger > older) were only observed for the auditory block (across all emotions), and for expressions of anger, irritation, and relief (across all presentation blocks). In Task 2, significant group differences were observed for 6 out of 9 positive, and 8 out of 9 negative emotions. Overall, results indicate that recognition of both positive and negative emotions show age-related differences. This suggests that the age-related positivity effect in emotion recognition may become less evident when dynamic emotional stimuli are used and happiness is not the only positive emotion under study. 

National Category
Psychology
Identifiers
urn:nbn:se:miun:diva-41121 (URN), 10.1038/s41598-021-82135-1 (DOI), 000616803100030 (ISI), 33514829 (PubMedID), 2-s2.0-85099953253 (Scopus ID)
Available from: 2021-02-10. Created: 2021-02-10. Last updated: 2022-09-15.
Flykt, A., Hörlin, T., Linder, F., Wennstig, A.-K., Sayeler, G., Hess, U. & Bänziger, T. (2021). Exploring Emotion Recognition and the Understanding of Others’ Unspoken Thoughts and Feelings when Narrating Self-Experienced Emotional Events. Journal of nonverbal behavior, 45, 67-81
2021 (English). In: Journal of nonverbal behavior, ISSN 0191-5886, E-ISSN 1573-3653, Vol. 45, p. 67-81. Article in journal (Refereed). Published.
Abstract [en]

Emotion decoding competence can be addressed in different ways. In this study, students of clinical psychology, nursing, or social work narrated a 2.5–3 min story about a self-experienced emotional event and also listened to another student’s story. Participants were video recorded during the session. They then annotated their own recordings with respect to their own thoughts and feelings, and rated the recordings of other participants with respect to those participants’ thoughts and feelings (the empathic accuracy, EA, task). Participants further completed two emotion recognition accuracy (ERA) tests that differed in complexity. The results showed that even though significant correlations were found between the emotion recognition tests, the tests did not positively predict empathic accuracy scores. These results raise questions regarding the extent to which ERA tests tap the competencies that underlie EA. Different possibilities for investigating the consequences of method choices are discussed.

Keywords
Emotion recognition, Empathic accuracy, Narratives, Self-experienced emotional events
National Category
Psychology
Identifiers
urn:nbn:se:miun:diva-41118 (URN), 10.1007/s10919-020-00340-4 (DOI), 000610913300001 (ISI), 2-s2.0-85099868477 (Scopus ID)
Available from: 2021-02-10. Created: 2021-02-10. Last updated: 2021-02-23.
Laukka, P., Bänziger, T., Israelsson, A., Cortes, D. S., Tornberg, C., Scherer, K. R. & Fischer, H. (2021). Investigating individual differences in emotion recognition ability using the ERAM test. Acta Psychologica, 220, Article ID 103422.
2021 (English). In: Acta Psychologica, ISSN 0001-6918, E-ISSN 1873-6297, Vol. 220, article id 103422. Article in journal (Refereed). Published.
Abstract [en]

Individuals vary in emotion recognition ability (ERA), but the causes and correlates of this variability are not well understood. Previous studies have largely focused on unimodal facial or vocal expressions and a small number of emotion categories, which may not reflect how emotions are expressed in everyday interactions. We investigated individual differences in ERA using a brief test containing dynamic multimodal (facial and vocal) expressions of 5 positive and 7 negative emotions (the ERAM test). Study 1 (N = 593) showed that ERA was positively correlated with emotional understanding, empathy, and openness, and negatively correlated with alexithymia. Women also had higher ERA than men. Study 2 was conducted online and replicated the recognition rates from Study 1 (which was conducted in the lab) in a different sample (N = 106). Study 2 also showed that participants with higher ERA were more accurate in their meta-cognitive judgments about their own accuracy. Recognition rates for visual, auditory, and audio-visual expressions were substantially correlated in both studies. The results provide further clues about the underlying structure of ERA and its links to broader affective processes. The ERAM test can be used for both lab and online research, and is freely available for academic research.

Keywords
Emotion recognition test, Emotion understanding, Empathy, Meta-cognitive judgments, Multimodal expressions, Personality, Sex differences
National Category
Psychology
Identifiers
urn:nbn:se:miun:diva-43355 (URN), 10.1016/j.actpsy.2021.103422 (DOI), 000706372300017 (ISI), 2-s2.0-85115971216 (Scopus ID)
Available from: 2021-10-12. Created: 2021-10-12. Last updated: 2021-10-28.
Döllinger, L., Laukka, P., Högman, L. B., Bänziger, T., Makower, I., Fischer, H. & Hau, S. (2021). Training Emotion Recognition Accuracy: Results for Multimodal Expressions and Facial Micro Expressions. Frontiers in Psychology, 12, Article ID 708867.
2021 (English). In: Frontiers in Psychology, E-ISSN 1664-1078, Vol. 12, article id 708867. Article in journal (Refereed). Published.
Abstract [en]

Nonverbal emotion recognition accuracy (ERA) is a central feature of successful communication and interaction, and is of importance for many professions. We developed and evaluated two ERA training programs: one focusing on dynamic multimodal expressions (audio, video, audio-video) and one focusing on facial micro expressions. Sixty-seven subjects were randomized to one of two experimental groups (multimodal, micro expression) or an active control group (emotional working memory task). Participants trained once weekly with a brief computerized training program for three consecutive weeks. Pre-post outcome measures consisted of a multimodal ERA task, a micro expression recognition task, and a task about patients' emotional cues. The post measurement took place approximately a week after the last training session. Non-parametric mixed analyses of variance using the Aligned Rank Transform were used to evaluate the effectiveness of the training programs. Results showed that the multimodal training was significantly more effective in improving multimodal ERA than the micro expression training or the control training, and the micro expression training was significantly more effective in improving micro expression ERA than the other two training conditions. Both pre-post effects can be interpreted as large. No group differences were found for the outcome measure about recognizing patients' emotion cues. There were no transfer effects of the training programs: participants only improved significantly on the specific facet of ERA that they had trained on. Further, low baseline ERA was associated with larger ERA improvements. Results are discussed with regard to methodological and conceptual aspects, and practical implications and future directions are explored.

Place, publisher, year, edition, pages
Frontiers Media SA, 2021
Keywords
emotion recognition, emotion recognition training, multimodal emotion recognition, micro expression recognition, nonverbal communication
National Category
Psychology
Identifiers
urn:nbn:se:miun:diva-42980 (URN), 10.3389/fpsyg.2021.708867 (DOI), 000691407500001 (ISI), 34475841 (PubMedID), 2-s2.0-85114149552 (Scopus ID)
Available from: 2021-09-09. Created: 2021-09-09. Last updated: 2022-02-10.
Hovey, D., Henningsson, S., Cortes, D. S., Bänziger, T., Zettergren, A., Melke, J., . . . Westberg, L. (2018). Emotion recognition associated with polymorphism in oxytocinergic pathway gene ARNT2. Social Cognitive & Affective Neuroscience, 13(2), 173-181
2018 (English). In: Social Cognitive & Affective Neuroscience, ISSN 1749-5016, E-ISSN 1749-5024, Vol. 13, no 2, p. 173-181. Article in journal (Refereed). Published.
Abstract [en]

The ability to correctly understand the emotional expression of another person is essential for social relationships and appears to be a partly inherited trait. The neuropeptides oxytocin and vasopressin have been shown to influence this ability as well as face processing in humans. Here, recognition of the emotional content of faces and voices, separately and combined, was investigated in 492 subjects, genotyped for 25 single nucleotide polymorphisms (SNPs) in eight genes encoding proteins important for oxytocin and vasopressin neurotransmission. The SNP rs4778599 in the gene encoding aryl hydrocarbon receptor nuclear translocator 2 (ARNT2), a transcription factor that participates in the development of hypothalamic oxytocin and vasopressin neurons, showed an association with emotion recognition of audio-visual stimuli in women (n = 309) that survived correction for multiple testing. This study provides evidence for an association that further extends previous findings on the involvement of oxytocin and vasopressin in emotion recognition.

Keywords
ARNT2, Emotion recognition, Oxytocin, Social cognition, Vasopressin
National Category
Psychology
Identifiers
urn:nbn:se:miun:diva-33274 (URN), 10.1093/scan/nsx141 (DOI), 000427017200004 (ISI), 29194499 (PubMedID), 2-s2.0-85042627662 (Scopus ID)
Available from: 2018-03-14. Created: 2018-03-14. Last updated: 2018-05-07. Bibliographically approved.
Juslin, P. N., Laukka, P. & Bänziger, T. (2018). The Mirror to Our Soul?: Comparisons of Spontaneous and Posed Vocal Expression of Emotion. Journal of nonverbal behavior, 42(1), 1-40
2018 (English). In: Journal of nonverbal behavior, ISSN 0191-5886, E-ISSN 1573-3653, Vol. 42, no 1, p. 1-40. Article in journal (Refereed). Published.
Abstract [en]

It has been the subject of much debate in the study of vocal expression of emotions whether posed expressions (e.g., actor portrayals) are different from spontaneous expressions. In the present investigation, we assembled a new database consisting of 1877 voice clips from 23 datasets, and used it to systematically compare spontaneous and posed expressions across 3 experiments. Results showed that (a) spontaneous expressions were generally rated as more genuinely emotional than were posed expressions, even when controlling for differences in emotion intensity, (b) there were differences between the two stimulus types with regard to their acoustic characteristics, and (c) spontaneous expressions with a high emotion intensity conveyed discrete emotions to listeners to a similar degree as has previously been found for posed expressions, supporting a dose–response relationship between intensity of expression and discreteness in perceived emotions. Our conclusion is that there are reliable differences between spontaneous and posed expressions, though not necessarily in the ways commonly assumed. Implications for emotion theories and the use of emotion portrayals in studies of vocal expression are discussed.

National Category
Psychology
Identifiers
urn:nbn:se:miun:diva-33350 (URN), 10.1007/s10919-017-0268-x (DOI)
Available from: 2018-03-26. Created: 2018-03-26. Last updated: 2018-07-19. Bibliographically approved.
Flykt, A., Bänziger, T. & Lindeberg, S. (2017). Intensity of vocal responses to spider and snake pictures in fearful individuals. Australian journal of psychology, 69(3), 184-191
2017 (English). In: Australian journal of psychology, ISSN 0004-9530, E-ISSN 1742-9536, Vol. 69, no 3, p. 184-191. Article in journal (Refereed). Published.
Abstract [en]

Objective

Strong bodily responses have repeatedly been shown in participants fearful of spiders and snakes when they see pictures of the feared animal. In this study, we investigated whether these fear responses affect voice intensity, whether they require awareness of the pictorial stimuli, and whether the responses run their course once initiated.

Method

Animal-fearful participants responded to arrowhead-shaped probes superimposed on animal pictures (snake, spider, or rabbit), presented either backwardly masked or unmasked. Their task was to say ‘up’ or ‘down’ as quickly as possible depending on the orientation of the arrowhead. Arrowhead probes were presented at two different stimulus onset asynchronies (SOAs), 261 or 561 ms after picture onset. In addition to vocal responses, electrocardiogram (ECG) and skin conductance (SC) were recorded.

Results

No fear-specific effects emerged to masked stimuli, thereby providing no support for the notion that fear responses can be triggered by stimuli presented outside awareness. For the unmasked pictures, voice intensity was stronger and SC response amplitude was larger to probes superimposed on the feared animal than other animals, at both SOAs. Heart rate changes were greater during exposure to feared animals when probed at 561 ms, but not at 261 ms, which indicates that a fear response can change its course after initiation.

Conclusion

Exposure to pictures of the feared animal increased voice intensity. No support was found for responses without awareness. The observed effects on heart rate may be due to a change in parasympathetic activation during the fear response.

Keywords
ECG, fear, skin conductance, snake, spider, voice intensity
National Category
Psychology
Identifiers
urn:nbn:se:miun:diva-29717 (URN), 10.1111/ajpy.12137 (DOI), 000409559500005 (ISI), 2-s2.0-84979774590 (Scopus ID)
Available from: 2016-12-21. Created: 2016-12-21. Last updated: 2018-02-27. Bibliographically approved.
Holding, B. C., Laukka, P., Fischer, H., Bänziger, T., Axelsson, J. & Sundelin, T. (2017). Multimodal Emotion Recognition Is Resilient to Insufficient Sleep: Results From Cross-Sectional and Experimental Studies. Sleep, 40(11), Article ID UNSP zsx145.
2017 (English). In: Sleep, ISSN 0161-8105, E-ISSN 1550-9109, Vol. 40, no 11, article id UNSP zsx145. Article in journal (Refereed). Published.
Abstract [en]

Objectives: Insufficient sleep has been associated with impaired recognition of facial emotions. However, previous studies have found inconsistent results, potentially stemming from the type of static picture task used. We therefore examined whether insufficient sleep was associated with decreased emotion recognition ability in two separate studies using a dynamic multimodal task. Methods: Study 1 used a cross-sectional design consisting of 291 participants with questionnaire measures assessing sleep duration and self-reported sleep quality for the previous night. Study 2 used an experimental design involving 181 participants where individuals were quasi-randomized into either a sleep-deprivation (N = 90) or a sleep-control (N = 91) condition. All participants from both studies were tested on the same forced-choice multimodal test of emotion recognition to assess the accuracy of emotion categorization. Results: Sleep duration, self-reported sleep quality (study 1), and sleep deprivation (study 2) did not predict overall emotion recognition accuracy or speed. Similarly, the responses to each of the twelve emotions tested showed no evidence of impaired recognition ability, apart from one positive association suggesting that greater self-reported sleep quality could predict more accurate recognition of disgust (study 1). Conclusions: The studies presented here involve considerably larger samples than previous studies and the results support the null hypotheses. Therefore, we suggest that the ability to accurately categorize the emotions of others is not associated with short-term sleep duration or sleep quality and is resilient to acute periods of insufficient sleep.

Keywords
Sleep deprivation, emotion, emotion recognition, perception, social
National Category
Psychology
Identifiers
urn:nbn:se:miun:diva-32567 (URN), 10.1093/sleep/zsx145 (DOI), 000417043000005 (ISI), 2-s2.0-85044532634 (Scopus ID)
Available from: 2017-12-21. Created: 2017-12-21. Last updated: 2020-07-09. Bibliographically approved.
Bänziger, T. (2016). Accuracy of judging emotions. In: Hall, Judith A.; Schmid Mast, Marianne; West, Tessa V. (Eds.), The Social Psychology of Perceiving Others Accurately (pp. 23-51). Cambridge University Press.
2016 (English). In: The Social Psychology of Perceiving Others Accurately / [ed] Hall, Judith A.; Schmid Mast, Marianne; West, Tessa V., Cambridge University Press, 2016, p. 23-51. Chapter in book (Other academic).
Place, publisher, year, edition, pages
Cambridge University Press, 2016
National Category
Psychology
Identifiers
urn:nbn:se:miun:diva-33351 (URN), 10.1017/CBO9781316181959.002 (DOI), 9781316181959 (ISBN)
Available from: 2018-03-26. Created: 2018-03-26. Last updated: 2018-03-26. Bibliographically approved.