This article is cited in 16 scientific papers.
Artificial Intelligence, Knowledge and Data Engineering
Modeling short-term and long-term dependencies of the speech signal for paralinguistic emotion classification
O. V. Verkholyak^a, H. Kaya^b, A. A. Karpov^a
^a St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences (SPIIRAS)
^b Namık Kemal University
Abstract:
Recently, Speech Emotion Recognition (SER) has become an important research topic in affective computing. It is a difficult problem, and some of its greatest challenges lie in feature selection and representation. A good feature representation should reflect global trends as well as the temporal structure of the signal, since emotions naturally evolve in time; this has become possible with the advent of Recurrent Neural Networks (RNN), which are actively used today for various sequence modeling tasks. This paper proposes a hybrid approach to feature representation that combines traditionally engineered statistical features with a Long Short-Term Memory (LSTM) sequence representation in order to take advantage of both the short-term and long-term acoustic characteristics of the signal, thereby capturing not only its general trends but also its temporal structure. The proposed method is evaluated on three publicly available acted emotional speech corpora in three different languages, namely RUSLANA (Russian speech), BUEMODB (Turkish speech) and EMODB (German speech). Compared to the traditional approach, our experiments show an absolute improvement of 2.3% and 2.8% on two of the three databases, and comparable performance on the third. Therefore, provided enough training data, the proposed method proves effective in modeling the emotional content of speech utterances.
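The sketch below illustrates one possible reading of the hybrid idea described in the abstract, not the authors' actual implementation: an LSTM summarizes a sequence of frame-level acoustic features (temporal structure), its final hidden state is concatenated with utterance-level statistical functionals (global trends), and a small classifier predicts the emotion class. The framework (PyTorch), fusion by concatenation, and all feature dimensions and layer sizes are assumptions made for illustration.

```python
# A minimal, hypothetical sketch of a hybrid short-term/long-term feature
# representation for SER. Dimensions and layer sizes are illustrative only.
import torch
import torch.nn as nn

class HybridSER(nn.Module):
    def __init__(self, frame_dim=39, stat_dim=384, hidden_dim=128, n_emotions=7):
        super().__init__()
        # LSTM over the sequence of frame-level features (e.g. MFCCs)
        self.lstm = nn.LSTM(frame_dim, hidden_dim, batch_first=True)
        # Classifier over the concatenation of the LSTM sequence summary
        # and the utterance-level statistical functionals
        self.classifier = nn.Sequential(
            nn.Linear(hidden_dim + stat_dim, 128),
            nn.ReLU(),
            nn.Linear(128, n_emotions),
        )

    def forward(self, frames, stats):
        # frames: (batch, time, frame_dim); stats: (batch, stat_dim)
        _, (h_n, _) = self.lstm(frames)      # h_n: (1, batch, hidden_dim)
        seq_repr = h_n[-1]                   # last hidden state as long-term summary
        fused = torch.cat([seq_repr, stats], dim=1)
        return self.classifier(fused)        # emotion class logits

# Example forward pass with random data
model = HybridSER()
frames = torch.randn(8, 300, 39)   # 8 utterances, 300 frames of 39-dim features
stats = torch.randn(8, 384)        # 384 utterance-level functionals per utterance
logits = model(frames, stats)      # shape: (8, 7)
```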
Keywords:
speech emotion recognition, computational paralinguistics, affective computing, feature representation, context modelling, artificial neural networks, long short-term memory.
Received: 24.08.2018
Citation:
O. V. Verkholyak, H. Kaya, A. A. Karpov, “Modeling short-term and long-term dependencies of the speech signal for paralinguistic emotion classification”, Tr. SPIIRAN, 18:1 (2019), 30–56
Linking options:
https://www.mathnet.ru/eng/trspy1038
https://www.mathnet.ru/eng/trspy/v18/i1/p30
Statistics & downloads: Abstract page: 197, Full-text PDF: 73