
Affective Latent Representation of Acoustic and Lexical Features for Emotion Recognition

dc.contributor.author: Kim, Eesung
dc.contributor.author: Song, Hyungchan
dc.contributor.author: Shin, Jong Won
dc.date.accessioned: 2024-07-24T01:09:30Z
dc.date.available: 2024-07-24T01:09:30Z
dc.date.created: 2024-07-22
dc.date.issued: 2020-05
dc.identifier.citation: SENSORS, Vol.20 No.9
dc.identifier.issn: 1424-8220
dc.identifier.uri: https://hdl.handle.net/10371/204853
dc.description.abstract: In this paper, we propose a novel emotion recognition method based on the underlying emotional characteristics extracted from a conditional adversarial auto-encoder (CAAE), in which both acoustic and lexical features are used as inputs. The acoustic features are generated by calculating statistical functionals of low-level descriptors and by a deep neural network (DNN). These acoustic features are concatenated with three types of lexical features extracted from the text: a sparse representation, a distributed representation, and affective lexicon-based dimensions. Two-dimensional latent representations similar to vectors in the valence-arousal space are obtained by the CAAE and can be directly mapped into the emotional classes without the need for a sophisticated classifier. In contrast to a previous attempt that used a CAAE with only acoustic features, the proposed approach enhances emotion recognition performance because the combined acoustic and lexical features provide sufficient discriminative power. Experimental results on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) corpus showed that our method outperformed the previously reported best results on the same corpus, achieving 76.72% unweighted average recall.
dc.language: English
dc.publisher: MDPI
dc.title: Affective Latent Representation of Acoustic and Lexical Features for Emotion Recognition
dc.type: Article
dc.identifier.doi: 10.3390/s20092614
dc.citation.journaltitle: SENSORS
dc.identifier.wosid: 000537106200178
dc.identifier.scopusid: 2-s2.0-85084405467
dc.citation.number: 9
dc.citation.volume: 20
dc.description.isOpenAccess: Y
dc.contributor.affiliatedAuthor: Shin, Jong Won
dc.type.docType: Article
dc.description.journalClass: 1
dc.subject.keywordPlus: SPEECH
dc.subject.keywordPlus: ROBUST
dc.subject.keywordAuthor: emotion recognition
dc.subject.keywordAuthor: conditional adversarial autoencoder
dc.subject.keywordAuthor: latent representation
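
The abstract describes a conditional adversarial autoencoder whose two-dimensional latent space behaves like valence-arousal coordinates and whose latent codes can be mapped to emotion classes directly. The sketch below illustrates that general architecture in PyTorch; it is not the authors' implementation. The feature dimensions, network sizes, class-conditional Gaussian prior, prior means, and loss weighting are all illustrative assumptions.

```python
# Minimal CAAE sketch over concatenated acoustic + lexical features.
# All dimensions, prior means, and weights are placeholders, not values from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

ACOUSTIC_DIM = 384   # e.g., statistical functionals of low-level descriptors (placeholder)
LEXICAL_DIM = 300    # e.g., a distributed text representation (placeholder)
NUM_CLASSES = 4      # e.g., angry / happy / neutral / sad
LATENT_DIM = 2       # two-dimensional latent, interpreted as valence/arousal-like axes

class Encoder(nn.Module):
    """Maps concatenated acoustic + lexical features to a 2-D latent code."""
    def __init__(self, in_dim=ACOUSTIC_DIM + LEXICAL_DIM):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, LATENT_DIM))
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs the input features from the latent code and the class label."""
    def __init__(self, out_dim=ACOUSTIC_DIM + LEXICAL_DIM):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LATENT_DIM + NUM_CLASSES, 256), nn.ReLU(),
                                 nn.Linear(256, out_dim))
    def forward(self, z, y_onehot):
        return self.net(torch.cat([z, y_onehot], dim=-1))

class Discriminator(nn.Module):
    """Distinguishes latent codes drawn from the prior from encoded ones."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LATENT_DIM + NUM_CLASSES, 64), nn.ReLU(),
                                 nn.Linear(64, 1))
    def forward(self, z, y_onehot):
        return self.net(torch.cat([z, y_onehot], dim=-1))

# Hypothetical class-conditional prior: one 2-D Gaussian per emotion, with means
# placed at plausible valence/arousal coordinates (illustrative only).
PRIOR_MEANS = torch.tensor([[-1.0,  1.0],   # angry:   negative valence, high arousal
                            [ 1.0,  1.0],   # happy:   positive valence, high arousal
                            [ 0.0,  0.0],   # neutral
                            [-1.0, -1.0]])  # sad:     negative valence, low arousal

def sample_prior(y):
    """Draw latent samples from the class-conditional Gaussian prior."""
    return PRIOR_MEANS[y] + 0.5 * torch.randn(y.size(0), LATENT_DIM)

def training_step(x, y, enc, dec, dis, opt_ae, opt_dis):
    """One step: reconstruction, discriminator update, adversarial latent regularization."""
    y_onehot = F.one_hot(y, NUM_CLASSES).float()

    # 1) Reconstruction through encoder and class-conditioned decoder.
    z = enc(x)
    x_hat = dec(z, y_onehot)
    recon_loss = F.mse_loss(x_hat, x)

    # 2) Discriminator: prior samples are "real", encoded latents are "fake".
    d_real = dis(sample_prior(y), y_onehot)
    d_fake = dis(z.detach(), y_onehot)
    dis_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) +
                F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    opt_dis.zero_grad(); dis_loss.backward(); opt_dis.step()

    # 3) Push encoded latents toward the prior; only encoder/decoder are updated here.
    g_loss = F.binary_cross_entropy_with_logits(dis(z, y_onehot), torch.ones_like(d_fake))
    ae_loss = recon_loss + 0.1 * g_loss   # 0.1 is an arbitrary weighting
    opt_ae.zero_grad(); ae_loss.backward(); opt_ae.step()
    return recon_loss.item(), dis_loss.item(), g_loss.item()
```

In this sketch, opt_ae would be an optimizer over the encoder and decoder parameters and opt_dis one over the discriminator, e.g. torch.optim.Adam. At test time an utterance could be assigned to the emotion whose prior mean lies nearest to its encoded 2-D latent code, which mirrors the abstract's claim that the latent representation can be mapped to classes without a sophisticated classifier.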
Files in This Item:
There are no files associated with this item.

Related Researcher

  • College of Engineering
  • Department of Electrical and Computer Engineering
Research Area: DC-DC and AC-DC power conversion, converter modeling, high-density high-frequency power conversion


Items in S-Space are protected by copyright, with all rights reserved, unless otherwise indicated.
