
Emotions in [a]: a perceptual and acoustic study.
Logoped Phoniatr Vocol. 2006; 31(1):43-8.

Abstract

The aim of this investigation is to study how well voice quality conveys emotional content that can be discriminated by human listeners and the computer. The speech data were produced by nine professional actors (four women, five men). The speakers simulated the following basic emotions in a unit consisting of a vowel extracted from running Finnish speech: neutral, sadness, joy, anger, and tenderness. The automatic discrimination was clearly more successful than human emotion recognition. Human listeners thus apparently need speech samples longer than vowel-length units for reliable emotion discrimination, whereas the machine utilizes quantitative parameters effectively even for short speech samples.
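
For readers who want a concrete sense of what vowel-level automatic emotion discrimination from quantitative acoustic parameters can look like, here is a minimal sketch in Python. It is purely illustrative: the feature set (F0, intensity, and MFCC statistics), the SVM classifier, and the file handling are assumptions of this sketch, not the parameters or method reported in the study.

import numpy as np
import librosa
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.metrics import classification_report

# The five emotion categories simulated in the study.
EMOTIONS = ["neutral", "sadness", "joy", "anger", "tenderness"]

def vowel_features(path):
    """Extract a small vector of quantitative parameters from one vowel clip."""
    y, sr = librosa.load(path, sr=None)
    # Fundamental frequency via pYIN; unvoiced frames come back as NaN.
    f0, _, _ = librosa.pyin(y, fmin=60, fmax=500, sr=sr)
    rms = librosa.feature.rms(y=y)[0]
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return np.hstack([
        np.nanmean(f0), np.nanstd(f0),   # pitch level and variability
        rms.mean(), rms.std(),           # intensity level and variability
        mfcc.mean(axis=1),               # coarse spectral shape
    ])

def train_and_evaluate(wav_paths, labels):
    """wav_paths: vowel-length clips; labels: matching names from EMOTIONS."""
    X = np.vstack([vowel_features(p) for p in wav_paths])
    y = np.array(labels)
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=0)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X_tr, y_tr)
    print(classification_report(y_te, clf.predict(X_te)))

Given a list of per-vowel WAV files and their emotion labels, train_and_evaluate would report per-emotion precision and recall on a held-out split; nothing here should be read as a reproduction of the published results.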

Authors

Toivanen, Juhani (MediaTeam, University of Oulu and Academy of Finland; juhani.toivanen@ee.oulu.fi); Waaramaa, Teija; Alku, Paavo; Laukkanen, Anne-Maria; Seppänen, Tapio; Väyrynen, Eero; Airas, Matti

Pub Type(s)

Journal Article
Research Support, Non-U.S. Gov't

Language

eng

PubMed ID

16517522

Citation

Toivanen, Juhani, et al. "Emotions in [a]: a Perceptual and Acoustic Study." Logopedics, Phoniatrics, Vocology, vol. 31, no. 1, 2006, pp. 43-8.
Toivanen J, Waaramaa T, Alku P, et al. Emotions in [a]: a perceptual and acoustic study. Logoped Phoniatr Vocol. 2006;31(1):43-8.
Toivanen, J., Waaramaa, T., Alku, P., Laukkanen, A. M., Seppänen, T., Väyrynen, E., & Airas, M. (2006). Emotions in [a]: a perceptual and acoustic study. Logopedics, Phoniatrics, Vocology, 31(1), 43-8.
Toivanen J, et al. Emotions in [a]: a Perceptual and Acoustic Study. Logoped Phoniatr Vocol. 2006;31(1):43-8. PubMed PMID: 16517522.
* Article titles in AMA citation format should be in sentence-case
TY - JOUR
T1 - Emotions in [a]: a perceptual and acoustic study.
AU - Toivanen,Juhani
AU - Waaramaa,Teija
AU - Alku,Paavo
AU - Laukkanen,Anne-Maria
AU - Seppänen,Tapio
AU - Väyrynen,Eero
AU - Airas,Matti
PY - 2006/3/7/pubmed
PY - 2006/7/25/medline
PY - 2006/3/7/entrez
SP - 43
EP - 8
JF - Logopedics, phoniatrics, vocology
JO - Logoped Phoniatr Vocol
VL - 31
IS - 1
N2 - The aim of this investigation is to study how well voice quality conveys emotional content that can be discriminated by human listeners and the computer. The speech data were produced by nine professional actors (four women, five men). The speakers simulated the following basic emotions in a unit consisting of a vowel extracted from running Finnish speech: neutral, sadness, joy, anger, and tenderness. The automatic discrimination was clearly more successful than human emotion recognition. Human listeners thus apparently need speech samples longer than vowel-length units for reliable emotion discrimination, whereas the machine utilizes quantitative parameters effectively even for short speech samples.
SN - 1401-5439
UR - https://www.unboundmedicine.com/medline/citation/16517522/Emotions_in_[a]:_a_perceptual_and_acoustic_study_
L2 - https://www.tandfonline.com/doi/full/10.1080/14015430500293926
DB - PRIME
DP - Unbound Medicine
ER -