Trudy SPIIRAN
Trudy SPIIRAN, 2018, Issue 60, Pages 216–240
DOI: https://doi.org/10.15622/sp.60.8
(Mi trspy1028)
 

This article is cited in 3 scientific papers.

Artificial Intelligence, Knowledge and Data Engineering

Style-code method for multi-style parametric text-to-speech synthesis

S. B. Suzić (a), T. V. Delić (a), S. J. Ostrogonac (b), S. V. Đurić (a), D. J. Pekar (a,b)

(a) University of Novi Sad
(b) AlfaNum – Speech Technologies
Abstract: Modern text-to-speech systems generally achieve good intelligibility. One of their main drawbacks, however, is a lack of expressiveness compared to natural human speech: it is unpleasant when an automated system conveys positive and negative messages in exactly the same way. The introduction of parametric methods in speech synthesis made it possible to easily change speaker characteristics and speaking styles. This paper presents a simple method for incorporating styles into synthesized speech by using style codes.
The proposed method requires only a few minutes of target-style speech and a moderate amount of neutral speech. It is successfully applied to both hidden-Markov-model-based and deep-neural-network-based synthesis, with the style code given as an additional input to the model. Listening tests confirmed that deep-neural-network synthesis achieves better style expressiveness than hidden-Markov-model synthesis. It is also shown that the quality of speech synthesized by deep neural networks in a given style is comparable to that of speech synthesized in the neutral style, even though the neutral-speech database is about 10 times larger. DNN-based TTS with style codes is further investigated by comparing the quality of speech produced by single-style and multi-style modeling systems. Objective and subjective measures confirmed that there is no significant difference between the two approaches.
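The core idea of the style-code method, as the abstract describes it, is to feed a one-hot style vector to the acoustic model alongside its usual inputs. A minimal illustrative sketch of that input augmentation follows; the style inventory, function names, and feature dimensions here are assumptions for illustration, not the authors' actual setup:

```python
# Sketch: append a one-hot style code to per-frame linguistic features,
# so a single multi-style acoustic model can be conditioned on style.
# The style list and feature values below are purely illustrative.

STYLES = ["neutral", "happy", "sad"]  # assumed style inventory

def style_code(style):
    """Return the one-hot encoding of a speaking style."""
    vec = [0.0] * len(STYLES)
    vec[STYLES.index(style)] = 1.0
    return vec

def augment_features(linguistic_features, style):
    """Concatenate each frame's linguistic features with the style code."""
    code = style_code(style)
    return [frame + code for frame in linguistic_features]

# Usage: two toy frames of linguistic features, conditioned on "happy"
frames = [[0.2, 0.5], [0.4, 0.1]]
augmented = augment_features(frames, "happy")
# each frame now ends with the one-hot style code [0.0, 1.0, 0.0]
```

Because the style code is just extra input dimensions, the same network weights are shared across styles, which is what lets a few minutes of target-style data piggyback on a much larger neutral-speech database.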
Keywords: text-to-speech synthesis, expressive speech synthesis, deep neural networks, speech style, style code, one-hot vector.
Funding agency: Ministry of Education, Science and Technological Development of the Republic of Serbia
Grant number: TR32035
This research is supported by the Ministry of Education, Science and Technological Development of the Republic of Serbia (grant TR32035).
Received: 30.07.2018
Document Type: Article
UDC: 006.72
Language: English
Citation: S. B. Suzić, T. V. Delić, S. J. Ostrogonac, S. V. Ðurić, D. J. Pekar, “Style-code method for multi-style parametric text-to-speech synthesis”, Tr. SPIIRAN, 60 (2018), 216–240
Citation in format AMSBIB
\Bibitem{SuzDelOst18}
\by S.~B.~Suzi{\'c}, T.~V.~Deli{\'c}, S.~J.~Ostrogonac, S.~V.~{\DJ}uri{\'c}, D.~J.~Pekar
\paper Style-code method for multi-style parametric text-to-speech synthesis
\jour Tr. SPIIRAN
\yr 2018
\vol 60
\pages 216--240
\mathnet{http://mi.mathnet.ru/trspy1028}
\crossref{https://doi.org/10.15622/sp.60.8}
\elib{https://elibrary.ru/item.asp?id=36266201}
Linking options:
  • https://www.mathnet.ru/eng/trspy1028
  • https://www.mathnet.ru/eng/trspy/v60/p216