Publications

Detailed Information

MULTIMODAL SPEECH EMOTION RECOGNITION USING AUDIO AND TEXT

DC Field | Value | Language
dc.contributor.author | Yoon, Seunghyun | -
dc.contributor.author | Byun, Seokhyun | -
dc.contributor.author | Jung, Kyomin | -
dc.date.accessioned | 2022-10-26T07:21:55Z | -
dc.date.available | 2022-10-26T07:21:55Z | -
dc.date.created | 2022-10-21 | -
dc.date.issued | 2018-12 | -
dc.identifier.citation | 2018 IEEE WORKSHOP ON SPOKEN LANGUAGE TECHNOLOGY (SLT 2018), pp.112-118 | -
dc.identifier.issn | 2639-5479 | -
dc.identifier.uri | https://hdl.handle.net/10371/186820 | -
dc.description.abstract | Speech emotion recognition is a challenging task, and extensive reliance has been placed on models that use audio features in building well-performing classifiers. In this paper, we propose a novel deep dual recurrent encoder model that utilizes text data and audio signals simultaneously to obtain a better understanding of speech data. As emotional dialogue is composed of sound and spoken content, our model encodes the information from audio and text sequences using dual recurrent neural networks (RNNs) and then combines the information from these sources to predict the emotion class. This architecture analyzes speech data from the signal level to the language level, and it thus utilizes the information within the data more comprehensively than models that focus on audio features. Extensive experiments are conducted to investigate the efficacy and properties of the proposed model. Our proposed model outperforms previous state-of-the-art methods in assigning data to one of four emotion categories (i.e., angry, happy, sad and neutral) when the model is applied to the IEMOCAP dataset, as reflected by accuracies ranging from 68.8% to 71.8%. | -
dc.language | English | -
dc.publisher | IEEE | -
dc.title | MULTIMODAL SPEECH EMOTION RECOGNITION USING AUDIO AND TEXT | -
dc.type | Article | -
dc.identifier.doi | 10.1109/SLT.2018.8639583 | -
dc.citation.journaltitle | 2018 IEEE WORKSHOP ON SPOKEN LANGUAGE TECHNOLOGY (SLT 2018) | -
dc.identifier.wosid | 000463141800017 | -
dc.identifier.scopusid | 2-s2.0-85063097574 | -
dc.citation.endpage | 118 | -
dc.citation.startpage | 112 | -
dc.description.isOpenAccess | N | -
dc.contributor.affiliatedAuthor | Jung, Kyomin | -
dc.type.docType | Proceedings Paper | -
dc.description.journalClass | 1 | -
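
The abstract in the record above describes a dual recurrent encoder: one RNN encodes the audio feature sequence, another encodes the transcript tokens, and the two encodings are combined to predict one of four emotion classes. The code below is a minimal, hypothetical sketch of that general idea in PyTorch; the feature dimension, vocabulary size, hidden sizes, and layer choices are illustrative assumptions, not the configuration reported in the paper.

```python
# Minimal sketch of a dual recurrent encoder for speech emotion recognition,
# assuming frame-level acoustic features for the audio branch and token ids
# for the text branch. All sizes below are hypothetical.
import torch
import torch.nn as nn


class DualRecurrentEncoder(nn.Module):
    def __init__(self, audio_dim=39, vocab_size=10000, embed_dim=128,
                 hidden_dim=128, num_classes=4):
        super().__init__()
        # Audio branch: GRU over frame-level acoustic features (e.g., MFCCs).
        self.audio_rnn = nn.GRU(audio_dim, hidden_dim, batch_first=True)
        # Text branch: embedding + GRU over the transcript tokens.
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.text_rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        # Classifier over the concatenated final hidden states of both branches.
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, audio_feats, token_ids):
        # audio_feats: (batch, audio_len, audio_dim); token_ids: (batch, text_len)
        _, audio_h = self.audio_rnn(audio_feats)           # (1, batch, hidden_dim)
        _, text_h = self.text_rnn(self.embedding(token_ids))
        fused = torch.cat([audio_h[-1], text_h[-1]], dim=-1)
        return self.classifier(fused)                      # logits over 4 emotions


if __name__ == "__main__":
    model = DualRecurrentEncoder()
    audio = torch.randn(2, 100, 39)                # two utterances, 100 frames each
    tokens = torch.randint(0, 10000, (2, 20))      # two transcripts, 20 tokens each
    print(model(audio, tokens).shape)              # torch.Size([2, 4])
```

Simple concatenation is used here only to show how the signal-level and language-level encodings can be fused before classification; any richer fusion the paper may use (e.g., attention between the two branches) is outside this sketch.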
