Emotional Inference from Speech Signals Informed by Multiple Stream DNNs Based Non-Local Attention Mechanism

Authors

  • Manh-Hung Ha, Vietnam National University, Hanoi
  • Duc-Chinh Nguyen, Vietnam National University
  • Long Quang Chan, Vietnam National University, Hanoi
  • Oscal T.C. Chen, Vietnam National University, Hanoi

DOI:

https://doi.org/10.4108/eetinis.v11i4.4734

Keywords:

Convolutional Neural Network, LSTM, Attention Mechanism, Emotion, Classification

Abstract

It is difficult to determine whether a person is depressed because the symptoms of depression are often not apparent; the voice, however, can carry audible signs of it. Understanding human emotions in spoken language plays a crucial role in intelligent and sophisticated applications. This study proposes a deep learning architecture that recognizes a speaker's emotions from audio signals, which can help identify patients who are depressed or prone to depression so that treatment and prevention can begin as early as possible. Specifically, Mel-frequency cepstral coefficients (MFCC) and the Short-Time Fourier Transform (STFT) are adopted to extract features from the audio signal. These features feed the multiple streams of the proposed DNN model, a CNN-LSTM built on an attention mechanism. Leveraging a pretrained model, the proposed architecture achieves an accuracy of 93.2% on the EmoDB dataset. Further optimization remains a potential avenue for future work, and it is hoped that this research will contribute to applications in medical treatment and personal well-being.
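
The paper's implementation is not reproduced here, but the pipeline the abstract describes can be sketched. Below is a minimal illustration, assuming librosa for feature extraction and PyTorch for the model: two streams (MFCC and STFT magnitude) each pass through a 2-D CNN, an LSTM, and a simplified non-local self-attention block before fusion. Every name, layer size, and hyperparameter (n_mfcc=40, n_fft=512, hidden=128, 7 EmoDB emotion classes) is an illustrative assumption, not the authors' configuration.

```python
import librosa
import numpy as np
import torch
import torch.nn as nn

def extract_features(wav_path, sr=16000, n_mfcc=40, n_fft=512, hop=256):
    """Extract the two time-frequency views consumed by the two streams."""
    y, _ = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc,
                                n_fft=n_fft, hop_length=hop)       # (n_mfcc, T)
    stft = np.abs(librosa.stft(y, n_fft=n_fft, hop_length=hop))    # (257, T)
    return mfcc, stft

class NonLocalAttention(nn.Module):
    """Simplified non-local (self-attention) block over the time axis."""
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)

    def forward(self, x):                       # x: (batch, time, dim)
        att = torch.softmax(self.q(x) @ self.k(x).transpose(1, 2)
                            / x.size(-1) ** 0.5, dim=-1)
        return x + att @ self.v(x)              # residual, as in non-local blocks

class StreamCNNLSTM(nn.Module):
    """One stream: 2-D CNN over a time-frequency map, LSTM, then attention."""
    def __init__(self, n_bins, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d((2, 1)),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d((2, 1)))
        self.lstm = nn.LSTM(64 * (n_bins // 4), hidden, batch_first=True)
        self.att = NonLocalAttention(hidden)

    def forward(self, x):                       # x: (batch, 1, n_bins, time)
        f = self.cnn(x)                         # (batch, 64, n_bins//4, time)
        f = f.permute(0, 3, 1, 2).flatten(2)    # (batch, time, 64 * n_bins//4)
        h, _ = self.lstm(f)                     # (batch, time, hidden)
        return self.att(h).mean(dim=1)          # pool attended frames over time

class MultiStreamSER(nn.Module):
    """Fuse the MFCC and STFT streams and classify into 7 EmoDB emotions."""
    def __init__(self, n_mfcc=40, n_freq=257, n_classes=7, hidden=128):
        super().__init__()
        self.mfcc_stream = StreamCNNLSTM(n_mfcc, hidden)
        self.stft_stream = StreamCNNLSTM(n_freq, hidden)
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, mfcc, stft):              # each: (batch, 1, bins, time)
        z = torch.cat([self.mfcc_stream(mfcc), self.stft_stream(stft)], dim=1)
        return self.classifier(z)               # emotion logits
```

In use, `extract_features` would be applied per utterance, the two arrays padded and batched into tensors of shape (batch, 1, bins, time), and `MultiStreamSER` trained with cross-entropy over the seven EmoDB labels; the reported 93.2% additionally relies on a pretrained model not sketched here.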

Published

02-08-2024

How to Cite

Ha, M.-H., Nguyen, D.-C., Chan, L. Q., & Chen, O. T.-C. (2024). Emotional Inference from Speech Signals Informed by Multiple Stream DNNs Based Non-Local Attention Mechanism. EAI Endorsed Transactions on Industrial Networks and Intelligent Systems, 11(4). https://doi.org/10.4108/eetinis.v11i4.4734