Trudy SPIIRAN

Trudy SPIIRAN, 2019, Volume 18, Issue 1, Pages 30–56
DOI: https://doi.org/10.15622/sp.18.1.30-56
(Mi trspy1038)
 

This article is cited in 16 scientific papers.

Artificial Intelligence, Knowledge and Data Engineering

Modeling short-term and long-term dependencies of the speech signal for paralinguistic emotion classification

O. V. Verkholyak^a, H. Kaya^b, A. A. Karpov^a

a St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences (SPIIRAS)
b Namık Kemal University
Abstract: Recently, Speech Emotion Recognition (SER) has become an important research topic in affective computing. It is a difficult problem, and some of the greatest challenges lie in feature selection and representation. A good feature representation should reflect the global trends as well as the temporal structure of the signal, since emotions naturally evolve in time; this has become possible with the advent of Recurrent Neural Networks (RNN), which are actively used today for various sequence modeling tasks. This paper proposes a hybrid approach to feature representation that combines traditionally engineered statistical features with a Long Short-Term Memory (LSTM) sequence representation, in order to take advantage of both short-term and long-term acoustic characteristics of the signal, thereby capturing not only the general trends but also the temporal structure of the signal. The proposed method is evaluated on three publicly available acted emotional speech corpora in three different languages: RUSLANA (Russian speech), BUEMODB (Turkish speech) and EMODB (German speech). Compared to the traditional approach, our experiments show an absolute improvement of 2.3% and 2.8% on two of the three databases, and comparable performance on the third. Therefore, given enough training data, the proposed method proves effective in modelling the emotional content of speech utterances.
Keywords: speech emotion recognition, computational paralinguistics, affective computing, feature representation, context modelling, artificial neural networks, long short-term memory.
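The hybrid feature representation described in the abstract can be sketched as follows. This is a minimal illustrative NumPy example, not the paper's implementation: the feature dimensions, the single-layer LSTM, and the randomly initialized weights are all assumptions for demonstration; the actual system uses trained networks and standard acoustic feature sets. It shows the core idea of concatenating utterance-level statistical functionals (global trends) with the final hidden state of an LSTM pass over the frame sequence (temporal structure).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def statistical_functionals(frames):
    # Utterance-level statistics over frame-level features: (T, D) -> (4*D,)
    return np.concatenate([frames.mean(0), frames.std(0),
                           frames.min(0), frames.max(0)])

def lstm_last_hidden(frames, W, U, b, hidden):
    # One forward pass of a single-layer LSTM; returns the last hidden state.
    # frames: (T, D); W: (4H, D); U: (4H, H); b: (4H,)
    H = hidden
    h, c = np.zeros(H), np.zeros(H)
    for x in frames:
        z = W @ x + U @ h + b
        i, f = sigmoid(z[:H]), sigmoid(z[H:2*H])      # input / forget gates
        g, o = np.tanh(z[2*H:3*H]), sigmoid(z[3*H:])  # cell candidate / output gate
        c = f * c + i * g
        h = o * np.tanh(c)
    return h

# Hypothetical utterance: 120 frames of 13-dimensional features (e.g. MFCC-like)
rng = np.random.default_rng(0)
T, D, H = 120, 13, 16
frames = rng.standard_normal((T, D))
W = 0.1 * rng.standard_normal((4 * H, D))
U = 0.1 * rng.standard_normal((4 * H, H))
b = np.zeros(4 * H)

# Hybrid representation: statistical functionals + LSTM sequence summary
hybrid = np.concatenate([statistical_functionals(frames),
                         lstm_last_hidden(frames, W, U, b, H)])
print(hybrid.shape)  # (68,) = 4*13 functionals + 16 LSTM hidden units
```

The resulting fixed-length vector could then be fed to any conventional classifier, which is what makes such a hybrid representation convenient: it captures both short-term statistics and long-term dynamics without requiring variable-length inputs downstream.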
Funding agency: Russian Science Foundation; grant number: 18-11-00145
This research is supported by RSF (project no. 18-11-00145).
Received: 24.08.2018
Document Type: Article
UDC: 004.89
Language: English
Citation: O. V. Verkholyak, H. Kaya, A. A. Karpov, “Modeling short-term and long-term dependencies of the speech signal for paralinguistic emotion classification”, Tr. SPIIRAN, 18:1 (2019), 30–56
Citation in format AMSBIB
\Bibitem{VerKayKar19}
\by O.~V.~Verkholyak, H.~Kaya, A.~A.~Karpov
\paper Modeling short-term and long-term dependencies of the speech signal for paralinguistic emotion classification
\jour Tr. SPIIRAN
\yr 2019
\vol 18
\issue 1
\pages 30--56
\mathnet{http://mi.mathnet.ru/trspy1038}
\crossref{https://doi.org/10.15622/sp.18.1.30-56}
\elib{https://elibrary.ru/item.asp?id=37286131}
Linking options:
  • https://www.mathnet.ru/eng/trspy1038
  • https://www.mathnet.ru/eng/trspy/v18/i1/p30