Effective Facial Expression Recognition System Using Machine Learning

Authors

  • Dheeraj Hebri Srinivas Institute of Technology
  • Ramesh Nuthakki Atria Institute of Technology
  • Ashok Kumar Digal Rama Devi Women's University
  • K G S Venkatesan MEGHA Institute of Engineering Technology for Women
  • Sonam Chawla O. P. Jindal Global University
  • C Raghavendra Reddy Mohan Babu University

DOI:

https://doi.org/10.4108/eetiot.5362

Keywords:

Facial Expression Recognition, Machine Learning, K-Nearest Neighbor, Long Short Term Memory

Abstract

Facial expression recognition (FER) has been studied extensively in computer vision and machine learning, and in recent years deep learning techniques have made remarkable progress on FER tasks. In this work, we propose a novel FER method that combines the k-nearest neighbours (KNN) and long short-term memory (LSTM) algorithms for more efficient and accurate facial expression recognition. The proposed system comprises two primary stages: feature extraction and classification. In the feature extraction stage, we extract features from facial images using the Local Binary Patterns (LBP) algorithm, a simple yet powerful technique that captures texture information from the image. In the classification stage, we use the KNN and LSTM algorithms for facial expression recognition. KNN is a simple and effective classifier that finds the k training samples nearest to a test sample and assigns it to the class most frequent among those neighbours. However, KNN is limited in its handling of temporal information. To address this limitation, we use LSTM, a subclass of recurrent neural networks (RNNs) capable of capturing temporal relationships in sequential data. The LSTM network takes as input the LBP features of a sequence of facial images and processes them through a series of LSTM cells to predict the final expression label. We evaluate the proposed system on two publicly available datasets: CK+ and Oulu-CASIA. The experimental results show that the proposed system achieves state-of-the-art performance on both datasets.
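As an illustrative sketch of the two stages just described (not the authors' exact configuration), the LBP feature-extraction step and the KNN classifier can be prototyped with scikit-image and scikit-learn; the gradient and noise images below are synthetic stand-ins for face crops:

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.neighbors import KNeighborsClassifier

def lbp_histogram(image, P=8, R=1):
    """Per-image texture descriptor: histogram of uniform LBP codes."""
    img = np.clip(image * 255, 0, 255).astype(np.uint8)
    lbp = local_binary_pattern(img, P, R, method="uniform")
    n_bins = P + 2  # P+1 uniform patterns plus one "non-uniform" bin
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

rng = np.random.default_rng(0)
ramp = np.tile(np.linspace(0, 1, 48), (48, 1))
# Synthetic stand-ins for two expression classes: smooth vs noisy texture
class0 = [ramp + rng.normal(0, 0.005, (48, 48)) for _ in range(10)]
class1 = [rng.uniform(0, 1, (48, 48)) for _ in range(10)]

X = np.array([lbp_histogram(im) for im in class0 + class1])
y = np.array([0] * 10 + [1] * 10)

# KNN assigns a test sample the majority class among its k nearest neighbours
knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
test = lbp_histogram(ramp + rng.normal(0, 0.005, (48, 48)))
print(knn.predict([test]))
```

Because LBP compares each pixel only against its neighbours, the descriptor is largely invariant to global illumination changes, which is one reason it remains a common baseline for facial texture analysis.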
The proposed system quantitatively outperforms other state-of-the-art methods, including those based on deep learning, in terms of F1-score and precision. In conclusion, the proposed FER system combining the KNN and LSTM algorithms achieves high accuracy and F1-score in recognising facial expressions from sequences of images. The system can be applied in many contexts, including human-computer interaction, emotion detection, and behaviour analysis.
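The temporal stage described in the abstract can be sketched minimally in PyTorch; the module name, layer sizes, frame count, and seven-class output below are illustrative assumptions, not the authors' architecture:

```python
import torch
import torch.nn as nn

class LBPSequenceClassifier(nn.Module):
    """Hypothetical sketch: an LSTM over per-frame LBP feature vectors."""
    def __init__(self, feat_dim=10, hidden_dim=32, n_classes=7):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, n_classes)

    def forward(self, x):            # x: (batch, frames, feat_dim)
        _, (h_n, _) = self.lstm(x)   # h_n: final hidden state, (1, batch, hidden_dim)
        return self.head(h_n[-1])    # logits over expression classes

model = LBPSequenceClassifier()
seq = torch.randn(4, 16, 10)         # 4 clips of 16 frames, 10-dim LBP features each
logits = model(seq)
print(logits.shape)  # torch.Size([4, 7])
```

Using only the final hidden state as the clip-level summary is the simplest design choice; the gating in each LSTM cell is what lets the model retain expression dynamics across frames, which a frame-wise KNN cannot do on its own.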




Published

11-03-2024

How to Cite

[1]
D. Hebri, R. Nuthakki, A. K. Digal, K. G. S. Venkatesan, S. Chawla, and C. Raghavendra Reddy, “Effective Facial Expression Recognition System Using Machine Learning”, EAI Endorsed Trans IoT, vol. 10, Mar. 2024.