Novel dual-channel long short-term memory compressed capsule networks for emotion recognition
Date
2022
Publisher
Pergamon-Elsevier Science Ltd
Access Rights
info:eu-repo/semantics/openAccess
Abstract
Recent research on speech emotion recognition (SER) has made considerable advances through the use of MFCC spectrogram features and neural network approaches such as convolutional neural networks (CNNs). A fundamental limitation of CNNs is that spatial information in spectrograms is not preserved. Capsule networks (CapsNet) have gained recognition as alternatives to CNNs owing to their larger capacity for hierarchical representation. However, a hidden issue of CapsNet is that the compression methods employed in CNNs cannot be directly applied to it. To address these issues, this research introduces a novel text-independent and speaker-independent SER architecture, in which a dual-channel long short-term memory compressed-CapsNet (DC-LSTM COMP-CapsNet) algorithm is proposed based on the structural features of CapsNet. The proposed classifier ensures the energy efficiency of the model and an adequate compression method for speech emotion recognition, neither of which is delivered by the original CapsNet structure. Moreover, the grid search (GS) approach is used to attain optimal solutions. Results show improved performance and a reduction in training and testing running time. The speech datasets used to evaluate the algorithm are: the Arabic Emirati-accented corpus, the English speech under simulated and actual stress (SUSAS) corpus, the English Ryerson audio-visual database of emotional speech and song (RAVDESS) corpus, and the crowd-sourced emotional multimodal actors dataset (CREMA-D). This work reveals that the optimum feature extraction method, compared to other known methods, is MFCCs delta-delta. Using the four datasets and the MFCCs delta-delta features, DC-LSTM COMP-CapsNet surpasses all the state-of-the-art systems, classical classifiers, CNN, and the original CapsNet.
Using the Arabic Emirati-accented corpus, the results demonstrate that the proposed work yields an average emotion recognition accuracy of 89.3%, compared to 84.7%, 82.2%, 69.8%, 69.2%, 53.8%, 42.6%, and 31.9% based on CapsNet, CNN, support vector machine (SVM), multi-layer perceptron (MLP), k-nearest neighbor (KNN), radial basis function (RBF), and naive Bayes (NB), respectively.
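The abstract identifies MFCCs delta-delta as the best-performing feature set. As background, delta-delta features are conventionally obtained by applying a regression-based delta operator twice to the MFCC matrix. The sketch below illustrates that standard computation with NumPy; it is a minimal illustration of the common formula, not the authors' exact pipeline, and the `mfcc` matrix here is a random placeholder rather than real speech features.

```python
import numpy as np

def delta(feat, N=2):
    """Regression-based delta of a (n_coeffs, n_frames) feature matrix,
    using the standard formula:
        d_t = sum_{n=1..N} n * (c_{t+n} - c_{t-n}) / (2 * sum_{n=1..N} n^2)
    Edge frames are replicated so the output keeps the input shape."""
    denom = 2 * sum(n * n for n in range(1, N + 1))
    padded = np.pad(feat, ((0, 0), (N, N)), mode="edge")  # repeat edge frames
    out = np.zeros_like(feat, dtype=float)
    for n in range(1, N + 1):
        # shifted views: frames t+n and t-n for every original frame t
        out += n * (padded[:, N + n : padded.shape[1] - N + n]
                    - padded[:, N - n : padded.shape[1] - N - n])
    return out / denom

# Delta-delta = the delta operator applied twice.
mfcc = np.random.randn(13, 100)  # placeholder: 13 coefficients x 100 frames
delta_delta = delta(delta(mfcc))
print(delta_delta.shape)  # (13, 100)
```

In practice the base MFCC matrix would come from a feature-extraction library rather than random data; the delta-delta matrix is then stacked with the static MFCCs (and deltas) to form the classifier input.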
Description
The authors of this work would like to express their gratitude to the University of Sharjah for its assistance through the two competitive research projects entitled Emirati-Accented Speaker and Emotion Recognition Based on Deep Neural Network, No. 19020403139, and Investigation and Analysis of Emirati-Accented Corpus in Neutral and Abnormal Talking Environments for Engineering Applications using Shallow and Deep Classifiers, No. 20020403159.
Keywords
Capsule Networks, Convolutional Neural Network, Deep Neural Network, Dual-Channel, Emotion Recognition, LSTM
Source
Expert Systems with Applications
WoS Q Value
Q1
Scopus Q Value
Q1
Volume
188
Citation
Shahin, I., Hindawi, N., Nassif, A. B., Alhudhaif, A., & Polat, K. (2022). Novel dual-channel long short-term memory compressed capsule networks for emotion recognition. Expert Systems with Applications, 188, 116080.