Publications

Detailed Information

Expressive Text-to-Speech using Style Tag

Cited 3 times in Web of Science; cited 5 times in Scopus
Authors

Kim, Minchan; Cheon, Sung Jun; Choi, Byoung Jin; Kim, Jong Jin; Kim, Nam Soo

Issue Date
2021-08
Publisher
ISCA-INT SPEECH COMMUNICATION ASSOC
Citation
INTERSPEECH 2021, pp.4663-4667
Abstract
As recent text-to-speech (TTS) systems have rapidly improved in speech quality and generation speed, many researchers now focus on a more challenging issue: expressive TTS. To control speaking styles, existing expressive TTS models use a categorical style index or reference speech as style input. In this work, we propose StyleTagging-TTS (ST-TTS), a novel expressive TTS model that utilizes a style tag written in natural language. Using a style-tagged TTS dataset and a pre-trained language model, we modeled the relationship between the linguistic embedding space and the speaking-style domain, which enables our model to work even with style tags unseen during training. Because the style tag is written in natural language, it can control speaking style in a more intuitive, interpretable, and scalable way than a style index or reference speech. In addition, in terms of model architecture, we propose an efficient non-autoregressive (NAR) TTS architecture with single-stage training. Experimental results show that ST-TTS outperforms the existing expressive TTS model, Tacotron2-GST, in both speech quality and expressiveness.
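The key idea in the abstract, that a pre-trained language model lets unseen style tags generalize because similar wordings land near each other in the shared linguistic embedding space, can be illustrated with a toy sketch. Everything below is illustrative: the word vectors, tag names, and nearest-neighbor lookup are stand-ins, not the paper's actual encoder or learned style mapping.

```python
# Toy sketch of style-tag generalization via a shared embedding space.
# All vectors and names are illustrative assumptions, not ST-TTS itself.
import math

# Hypothetical "pretrained LM": fixed word vectors; a tag embedding is
# the average of its word vectors (a stand-in for a sentence encoder).
WORD_VECS = {
    "calm":     [0.9, 0.1, 0.0],
    "soft":     [0.8, 0.2, 0.1],
    "angry":    [0.0, 0.9, 0.2],
    "shouting": [0.1, 0.8, 0.3],
    "voice":    [0.3, 0.3, 0.3],
}

def embed_tag(tag: str) -> list:
    """Embed a free-form style tag as the mean of its word vectors."""
    vecs = [WORD_VECS[w] for w in tag.split() if w in WORD_VECS]
    n = max(len(vecs), 1)
    return [sum(v[i] for v in vecs) / n for i in range(3)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a)) or 1.0
    nb = math.sqrt(sum(x * x for x in b)) or 1.0
    return dot / (na * nb)

# Style embeddings for tags seen during (pretend) training.
SEEN_STYLES = {
    "calm voice": embed_tag("calm voice"),
    "angry voice": embed_tag("angry voice"),
}

def nearest_seen_style(tag: str) -> str:
    """An unseen tag still lands near a seen style, because the shared
    linguistic space groups similar wordings together."""
    e = embed_tag(tag)
    return max(SEEN_STYLES, key=lambda k: cosine(e, SEEN_STYLES[k]))

# "soft voice" and "shouting" were never training tags, yet each maps
# to the intuitively matching seen style.
print(nearest_seen_style("soft voice"))  # -> calm voice
print(nearest_seen_style("shouting"))    # -> angry voice
```

In the actual model, the lookup would be replaced by a learned projection from the language-model embedding into the TTS style space, so the style embedding conditions synthesis directly rather than snapping to a nearest seen tag.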
ISSN
2308-457X
URI
https://hdl.handle.net/10371/186865
DOI
https://doi.org/10.21437/Interspeech.2021-465
Files in This Item:
There are no files associated with this item.