Algorithms used for facial emotion recognition: a systematic review of the literature



Introduction
People convey emotions through facial expressions by contracting and relaxing their facial muscles (1). In other words, facial expression faithfully conveys people's mood, feelings, and state of mind. Joy, sadness, fear, attraction, rejection, and all feelings are transmitted through the expression on our face. On the other hand, for facial emotion recognition, artificial intelligence (AI) has been successfully applied in various cases, combined with computer vision (2). Similarly, we have a machine that imitates cognitive functions associated with the human mind, such as learning and problem-solving (3). Additionally, Machine Learning (ML) and Deep Learning (DL), as parts of artificial intelligence (AI), aim to reduce the amount of human engineering involved. One of the most common approaches to this is artificial neural networks, which are mathematical systems based on how neurons work in the human brain (3). Facial Emotion Recognition (FER), through the field of Human-Computer Interaction (HCI), is applied in areas such as autopilot systems, education, medical treatment, psychological treatment, surveillance, and psychological analysis in computer vision (2). For example, in the learning process, negative emotions interact with cognitive teaching processes, which can result in low academic performance when the emotion is not synchronized with the activity (4). We currently live in a constantly changing society, and with the desire to develop technology without limits, artificial intelligence is positioned as one of the areas with the greatest impact. Furthermore, it is important to select appropriate tools that meet the intended requirements, such as the Python programming language (5). Technology allows for the identification of the emotional recognition profile in pediatric subjects with Attention Deficit Hyperactivity Disorder (ADHD) using a database of images of children (6). The new adaptation method for cross-domain
emotion classification uses Convolutional Neural Networks (CNN) to enable emotion classification (7). The CoSTGA model (8) leverages the benefits of merging representations of semantic trend features and multi-head attention. Another method allows designing and identifying facial expressions characterized by the mouth, facial pose, and visual speech using a supervised learning method based on Machine Learning (ML), specifically a Support Vector Machine (SVM) based on labels in the upper and lower face regions (9). The Efficient-SwishNet model is lightweight, efficient, and effective in developing a facial emotion recognition system that handles variations in lighting, face angle, gender, race, and background scenarios, and uses a dataset of people from diverse geographic regions (10). There is an educational theory framework supported by communication processes that allows for the improvement of teaching strategies by implementing facial recognition patterns to capture emotions through artificial vision devices or cameras in the classroom, combined with biometric techniques, where the goal is to analyze and collect information that allows for adjustments in teaching and learning processes (11). In this context, future research is expected to improve the quality of emotion detection by improving the quality of the dataset and the model used (12). Machine learning algorithms have played a vital role in facial emotion recognition systems based on images; these algorithms are considered the main part of such systems. However, it is necessary to investigate other machine learning algorithms for facial emotion recognition (13), because deficiencies in identification limit the discrimination of extracted structural features. The goal is to obtain more discriminative structural information from expression images adaptively, without relying on landmarks and without the need for prior knowledge to design a fixed facial structure (14). There are specific limitations regarding datasets (iMiGUE and
GroupWalk), so it is proposed to address specific tasks to optimize the data extracted from emotions during interviews, as well as techniques related to general perception, both positive and negative. Regarding the GroupWalk dataset, there are limitations related to sample quality and image resolution: variation in camera placement produces motion blur and distortions in the videos, and when these artifacts are written directly into the images, the accuracy of the models can be affected (15). We still lack datasets for other specific tasks in the literature. For example, for in-the-wild scenarios, a security camera viewpoint would be ideal, as it would allow the reuse of this footage for emotion recognition. Technology has developed algorithms that allow for facial emotion recognition; however, it is necessary to investigate other machine learning algorithms for facial emotion recognition, because there are identification deficiencies that limit the discrimination of extracted structural features. Therefore, the purpose of this article was to analyze the most commonly used algorithms for facial emotion recognition through a systematic literature review, following the PRISMA method.

Emotions and Their Characteristics:
Darwin (1899) (16) relates emotions to muscular activity and describes how different emotions trigger the action of various muscles, such as the raising of eyebrows and the downturned corners of the mouth in individuals experiencing pain or anxiety. However, it is concluded that these muscular responses were limited to a single muscle. Emotions are a psychologically vital process, underlying the motivation of human behavior (17). Emotions serve as our internal compass, being psychophysiological reactions to internal or external events significant for the organism, and fulfill three main functions: i) Adaptive: the primary system for evolution and adaptation to environmental conditions available to humans, preparing the organism for action: fight, flight, caring for offspring; ii) Communicative: they serve to communicate one's own mood to peers and to predict and influence their behavior; iii) Motivational: they facilitate motivated behaviors to achieve a goal (17).

Expressions:
Muscular movements of expression are partly related to imaginary objects and partly to imaginary sensory impressions. In this proposition lies the key to understanding all expressive muscular movements (16). Therefore, expressions and gestures can be parameterized to generate a valuation scale and allow for the adjustment of a valence factor, with the purpose of more accurately detecting the meaning of a facial expression, be it neutral, pleasant or unpleasant, sad, happy, or angry, to name a few; this can be sensed by an expression recognizer implemented through artificial intelligence patterns for subsequent analysis (11). Subsequently, these facial expressions become a key mechanism for conveying and understanding emotions, an inevitable part of human-computer interaction, and a key technology in the field of artificial intelligence (14). Furthermore, ELsayed et al. (2023) argue that there are various categories of emotions, which can be classified as anger, happiness, fear, surprise, contempt, disgust, and sadness (18).

Gestures:
Darwin (1899) (16) mentions that infants exhibit many emotions with extraordinary strength, while later in life some of our expressions cease to flow from this pure source. Thus, the author proposes three Principles that explain most of the expressions and gestures involuntarily used by humans and lower animals under the influence of various emotions and sensations. In conclusion, specific movements of features and gestures are truly expressive of certain states of mind (16).

Types of Emotions and Expressions:
The model proposed by Ekman defined the six basic human emotions as happiness, anger, disgust, fear, sadness, and surprise. In the dimensional model, emotion is assessed through continuous numerical scales to determine valence and arousal (2). In contrast, Goleman (2021) identified seven matching expressions: happiness, sadness, anger, fear, surprise, contempt, and disgust. This implies that there is a relationship between expressions and emotions according to the arguments of both authors (19).

Next, we mention the classifications of expression types according to Goleman (2021): -Happiness: described as a much more relaxed expression in which the individual is smiling: the corners of the mouth are lifted, exposing the teeth, or the mouth may be closed, and a wrinkle can be identified running from the nose to the outer lip, lifting the cheeks. In a genuine smile, the lower eyelid is usually wrinkled or twitching (19).
-Sadness: identified by the inner corners of the eyebrows being raised upward and inward, creating wrinkles; the lips are pursed, and the jaw is usually tense and pulled upward. Of all the expressions, this is the most difficult to fake (19).
-Anger: expressed by lowered eyebrows that hood the eyes and are usually drawn close together, creating wrinkles between them; the eyes gaze intensely at the object of the anger, with tight lips. It may include a squared opening of the mouth with the lower jaw thrust forward (19).
-Fear: identified by eyebrows that are raised and drawn inward, usually straight rather than curved or arched. The eyes are wide open, although the white is visible only above the iris, not all the way around. The mouth is usually open, but with tension around the lips, tightening them (19).
-Surprise: characterized by raised and rounded eyebrows; the arches will be curved.

Artificial Intelligence (AI)
Despite the growing interest in AI in academia, industry, and public institutions, there is no standard definition of what AI entails. Some approaches have described AI in relation to human intelligence or intelligence in general (21). Artificial intelligence has been successfully applied in several fields, one of which is computer vision (2). Artificial intelligence (AI) is a branch of computer science in which a machine mimics the cognitive functions associated with the human mind (3).

Taxonomy of Artificial Intelligence:
Machine Learning (ML) is an application of AI based on the idea that we should let machines learn by themselves: the more data available, the more machines can analyze and "learn" from it. Going deeper into ML, Deep Learning (DL) is about reducing the human engineering aspect (3). Deep Learning (DL) focuses on making the machine classify information with the same method as the human brain; it is a learning technique or model applying neural networks that include layers of increasing complexity (22) (23) (2). Gokani (2017) (3) presents artificial intelligence along with the hierarchical relationship it has with machine learning; Samoili et al. (2020) (21), on the other hand, propose a taxonomy of domains and subdomains relating perception to computer vision.

Neural Networks:
Neural networks, also known as Artificial Neural Networks (ANN) or Simulated Neural Networks (SNN), are a subset of machine learning and form the backbone of Deep Learning algorithms. Their name and structure are inspired by the human brain and mimic the way biological neurons signal each other. Artificial Neural Networks (ANNs) are made up of layers of nodes, containing an input layer, one or more hidden layers, and an output layer. Each node, or artificial neuron, is connected to others and has an associated weight and threshold. If the output of an individual node is above the specified threshold value, that node is activated and sends data to the next layer in the network; otherwise, no data is passed to the next layer. Neural networks rely on training data to learn and improve their accuracy over time. However, once these learning algorithms are fine-tuned, they are powerful computing and artificial intelligence tools, allowing us to classify and cluster data at high speed. Voice recognition or image recognition tasks can take minutes, versus hours of manual identification by human experts. One of the best-known neural networks is Google's search algorithm.
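As an illustrative sketch of the threshold behavior just described (the function name and values are ours, not taken from any cited system), a single node can be modeled as a weighted sum plus bias that only forwards its output when it exceeds the threshold:

```python
def neuron_output(inputs, weights, bias, threshold=0.0):
    """Weighted sum of the inputs plus a bias; the node 'fires' (passes
    its value to the next layer) only when the activation exceeds the
    threshold, as described above."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return activation if activation > threshold else None  # None: nothing forwarded

# A node with two inputs: 0.5*0.4 + 0.8*0.6 + 0.1 = 0.78, above the
# default threshold of 0, so the value is forwarded.
print(neuron_output([0.5, 0.8], [0.4, 0.6], 0.1))
```

A full network simply chains many such nodes layer by layer; training then consists of adjusting the weights and biases.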

Types of neural networks
The perceptron is the oldest neural network, created by Frank Rosenblatt in 1958. Forward-propagating neural networks, or Multilayer Perceptrons (MLP), are the networks we have mainly focused on in this paper. They consist of an input layer, one or more hidden layers, and an output layer. Although these neural networks are also known as MLPs, it is important to note that they are actually made up of sigmoid neurons, not perceptrons, since most real-world problems are nonlinear. Convolutional Neural Networks (CNNs) are similar to forward-propagation networks, but are typically used for image recognition, pattern recognition, and/or computer vision. These networks leverage the principles of linear algebra, particularly matrix multiplication, to identify patterns within an image. Recurrent Neural Networks (RNN) are identified by their feedback loops. These learning algorithms are primarily used with time-series data to make predictions about future outcomes, for example, stock market predictions or sales forecasts.

Neural Networks Versus Deep Learning
Deep Learning and neural networks tend to be used interchangeably in conversation, which can be confusing. It is worth noting that "Deep" in Deep Learning only refers to the depth of layers in a neural network. A neural network that consists of more than three layers (including the input and output layers) can be considered a Deep Learning algorithm; a neural network that has only two or three layers is a basic neural network (24).

Convolutional Neural Networks (CNN)
A CNN extracts image information through the input layer and then uses convolution kernels to perform convolution operations on the image information, adding bias values to form a local feature map (14). Neural networks are a subset of machine learning and the core of deep learning algorithms. They are composed of layers of nodes, which include an input layer, one or more hidden layers, and an output layer. Each node is connected to others and has an associated weight and threshold. If the output of an individual node is above the specified threshold value, that node becomes active and sends data to the next layer in the network; otherwise, no data is passed to the next layer (25). Convolutional neural networks are distinguished from other neural networks by their superior performance with image, speech, or audio signal inputs. They consist of three main types of layers: -Convolutional layer -Pooling layer -Fully connected layer. The convolutional layer is the first layer of a convolutional network. While convolutional layers may be followed by other convolutional or pooling layers, the final layer is the fully connected layer. With each layer, the CNN increases in complexity, identifying larger and larger parts of the image. The first layers focus on simple features, such as colors and edges. As the image data progresses through the layers, the CNN begins to recognize larger elements or shapes until it finally identifies the expected object (25).

Convolutional layer
The convolutional layer is the main building block of a CNN, and it is where most of the computation is performed. It requires a few components: input data, a filter, and a feature map. Suppose the input is a color image composed of a 3D array of pixels. The feature detector is a two-dimensional (2D) array of weights representing a portion of the image. Although its size can vary, the filter is usually a 3x3 matrix; this also determines the size of the receptive field (25). A Convolutional Neural Network (CNN) has one or more convolutional layers, together with pooling and fully connected layers, and is used in image recognition. CNNs can be applied to 2D and 3D data arrays, and use image processing after collecting data in different formats, i.e., natural, false-color, or grayscale (18).
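The sliding-window computation described above can be sketched in pure Python (the toy image and kernel values are illustrative; deep-learning libraries implement the same operation, strictly speaking a cross-correlation, far more efficiently):

```python
def convolve2d(image, kernel):
    """Valid-mode 2D convolution: slide the kernel over the image and take
    the element-wise product sum at each position, producing a feature map."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            s = sum(image[i + a][j + b] * kernel[a][b]
                    for a in range(kh) for b in range(kw))
            row.append(s)
        out.append(row)
    return out

image = [[1, 1, 1, 0],   # toy 4x4 grayscale "image"
         [1, 1, 0, 0],
         [1, 0, 0, 0],
         [0, 0, 0, 0]]
kernel = [[1, 0, -1],    # 3x3 vertical-edge filter
          [1, 0, -1],
          [1, 0, -1]]
print(convolve2d(image, kernel))  # [[2, 2], [2, 1]]
```

Note how the 3x3 filter reduces the 4x4 input to a 2x2 feature map, exactly the shrinkage discussed above.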

Pooling layer
Pooling, also known as subsampling, allows for dimensionality reduction by reducing the number of input parameters. Similar to the convolutional layer, the pooling operation sweeps the entire input with a filter, but the difference is that this filter has no weights. Instead, the kernel applies an aggregation function to the values within the receptive field and fills the output matrix accordingly. There are two main types of pooling: Max pooling: as the filter traverses the input, it selects the pixel with the highest value to send to the output matrix. This approach is used more often than average pooling. Average pooling: as the filter moves through the input, it calculates the average value within the receptive field to send to the output matrix. Although a lot of information is lost in the pooling layer, pooling has a number of benefits for the CNN: it helps reduce complexity, improves efficiency, and limits the risk of overfitting.
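Both pooling variants can be illustrated with a short pure-Python sketch (function name and toy data are ours):

```python
def pool2d(feature_map, size=2, mode="max"):
    """Non-overlapping pooling: each size x size window is reduced to a
    single value (its max or its mean), shrinking the feature map."""
    h, w = len(feature_map), len(feature_map[0])
    out = []
    for i in range(0, h - size + 1, size):
        row = []
        for j in range(0, w - size + 1, size):
            window = [feature_map[i + a][j + b]
                      for a in range(size) for b in range(size)]
            row.append(max(window) if mode == "max" else sum(window) / len(window))
        out.append(row)
    return out

fm = [[1, 3, 2, 4],
      [5, 6, 7, 8],
      [3, 2, 1, 0],
      [1, 2, 3, 4]]
print(pool2d(fm, mode="max"))      # [[6, 8], [3, 4]]
print(pool2d(fm, mode="average"))  # [[3.75, 5.25], [2.0, 2.0]]
```

The 4x4 feature map is halved in each dimension, which is the parameter reduction referred to above.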

Fully connected layer
The name fully connected layer accurately describes the layer itself. As mentioned above, the pixel values of the input image are not directly connected to the output layer in partially connected layers. In the fully connected layer, however, each node of the output layer is directly connected to a node of the previous layer (25).

Descriptors
Descriptors are specific values used in digital images in order to characterize the objects present. They serve, for example, to discard unwanted objects or stains, or to differentiate the shapes of objects (26).

Histograms of Orientation Gradients (HOG)
In images of real scenes, it is difficult to use basic descriptors to detect objects or even people. Therefore, more advanced descriptors, such as HOG (26), are used. The HOG technique is based on the accumulation of gradient directions across the pixels of the image for a certain region called a "cell". The subsequent histogram construction provides a series of feature vectors to be used as input for the classification process (13). In the model, the image is represented by means of the HOGs (27).
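The per-cell accumulation step behind HOG can be sketched as follows. This is a simplified, unnormalized version (names are ours): real HOG implementations add block normalization and interpolation between neighboring bins.

```python
import math

def cell_orientation_histogram(cell, n_bins=9):
    """Accumulate gradient magnitudes into unsigned orientation bins
    (0-180 degrees) for one cell -- the core accumulation step of HOG."""
    h, w = len(cell), len(cell[0])
    hist = [0.0] * n_bins
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            gx = cell[i][j + 1] - cell[i][j - 1]   # horizontal gradient
            gy = cell[i + 1][j] - cell[i - 1][j]   # vertical gradient
            magnitude = math.hypot(gx, gy)
            angle = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[int(angle / 180.0 * n_bins) % n_bins] += magnitude
    return hist

cell = [[0, 0, 10, 10],   # toy cell containing one vertical edge
        [0, 0, 10, 10],
        [0, 0, 10, 10],
        [0, 0, 10, 10]]
hist = cell_orientation_histogram(cell)
print(hist)  # all gradient mass falls into the 0-degree bin
```

Concatenating such histograms over all cells yields the feature vector fed to the classifier, as the text describes.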

Haar Cascade
Haar descriptors are among the most efficient descriptors in use, due to their low computational cost. They are used by the famous Viola-Jones face detection algorithm (26).

Transfer Learning Driven Facial Emotion Recognition for Advanced Driver Assistance Systems (TLDFER-ADAS)
The TLDFER-ADAS technique initially performs contrast enhancement procedures to improve image quality (29).

Local Binary Patterns (LBP)
LBP creates a binary code at each pixel of the neighborhood according to the central pixel value. The feature selection algorithm used is derived from LBP: for all available images, LBP extracts features, and a CNN classifies the images into groups according to facial expression (18). It compares the intensity of each pixel with that of its neighbors to obtain the Local Binary Patterns (LBP) (27).
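The per-pixel comparison underlying LBP can be sketched as follows (the neighbor ordering used here, clockwise from the top-left, is one common convention; implementations vary):

```python
def lbp_code(image, i, j):
    """LBP of pixel (i, j): compare the 8 neighbours (clockwise from the
    top-left) against the centre; a neighbour >= centre contributes a 1 bit."""
    center = image[i][j]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (di, dj) in enumerate(offsets):
        if image[i + di][j + dj] >= center:
            code |= 1 << (7 - bit)
    return code

img = [[9, 1, 1],   # toy 3x3 neighbourhood around the centre pixel 5
       [1, 5, 1],
       [1, 1, 9]]
print(lbp_code(img, 1, 1))  # 136: only the two 9s meet the threshold
```

A histogram of these codes over an image region is the LBP feature vector that the CNN then classifies.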

Classifiers:
There are multiple types of classifiers; the most important ones, with the lowest computational cost, are cited and explained here (26). Types of classifiers: -SVM -K-Neighbors -Neural Networks -AdaBoost

Support Vector Machines (SVM)
The Support Vector Machines algorithm is a linear classifier. It uses maximum-margin hyperplanes, which are more robust and give fewer classification errors. This system is efficient in the case of nonlinear data and resistant to overfitting (26).

K-Neighbors
This algorithm is based on the distance between the input sample and the different samples already classified into classes. The class of the sample is determined by the number of neighbors considered: it corresponds to the most dominant class among the K nearest neighbors (26).
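The K-Neighbors rule just described can be sketched in a few lines of pure Python (the toy feature vectors and emotion labels are invented for illustration):

```python
import math
from collections import Counter

def knn_predict(samples, labels, query, k=3):
    """Classify `query` by majority vote among its k nearest labelled
    samples, using Euclidean distance, as described above."""
    nearest = sorted(range(len(samples)),
                     key=lambda idx: math.dist(samples[idx], query))
    votes = Counter(labels[idx] for idx in nearest[:k])
    return votes.most_common(1)[0][0]

# Two toy clusters of 2D "feature vectors"
samples = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
labels  = ["happy", "happy", "happy", "sad", "sad", "sad"]
print(knn_predict(samples, labels, (2, 2)))  # happy
print(knn_predict(samples, labels, (7, 8)))  # sad
```

In a real FER pipeline the feature vectors would come from a descriptor such as HOG or LBP rather than raw coordinates.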

Neural Networks
There are different types of neural networks. In this case, the pattern recognition type has been investigated, since this is the type of classifier that will be used to classify the different emotions. Artificial neural networks have an established number of neurons, which can vary according to the user's choice, allowing the model to better fit what is desired. However, a greater number of neurons does not mean a better result. In our case, we will later explain the number of neurons we used to perform the different tests, and we will see that a higher number of neurons does not mean a better result (26).

AdaBoost
Boosting classification is used to increase the performance of decision trees. This technique is used because it improves on difficult decisions: as it goes through the different learning stages, it discards errors and then focuses on them, achieving a better hit rate (26). The AdaBoost classifier operates under the cascade classifier architecture. The cascade is organized in stages. For each stage, a set of positive samples and negative samples is prepared, from which a number of weak classifiers are selected that together form a strong classifier. The weak classifiers are chosen from within the representation feature space (Haar, LBP, or HOG) (27).

Softmax
Softmax is commonly used in image classification tasks to measure, via cross-entropy, the difference between the predicted probability distribution and the probability distribution of the actual labels (28). The Fast Learning Network (FLN) is a novel dual-parallel forward artificial neural network. The FLN algorithm is based on least-squares methods (13).
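The softmax and cross-entropy computations mentioned above can be sketched as follows, a minimal version of what deep-learning frameworks implement (function names are ours):

```python
import math

def softmax(logits):
    """Convert raw class scores into a probability distribution; subtracting
    the maximum is the usual numerical-stability trick."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(probs, true_index):
    """Negative log-probability the model assigns to the true class."""
    return -math.log(probs[true_index])

# Three raw scores, e.g. for the classes happy / sad / neutral
probs = softmax([2.0, 1.0, 0.1])
print([round(p, 3) for p in probs])   # probabilities summing to 1
print(round(cross_entropy(probs, 0), 3))
```

A lower cross-entropy means the predicted distribution is closer to the one-hot distribution of the actual label, which is exactly the difference the text says softmax plus cross-entropy measures.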

Viola-Jones Algorithm
Viola-Jones is an algorithm that has been defined as one of the most efficient face detectors currently available, thanks to its low computational cost and high hit rate. The algorithm was created in 2001 by Paul Viola and Michael Jones, originally to detect objects competitively in real time. It is used to detect people, although it can also be trained to detect other types of objects (26) (22).

MobileNetV3 Model
MobileNetV3 is a lightweight network proposed by the Google team. Unlike other networks, MobileNetV3 adopts depth-separable convolution instead of traditional convolution, resulting in a significant decrease in parameters and a substantial reduction in computational costs (28).
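Returning to the mechanics behind the low computational cost of Viola-Jones mentioned above: its Haar-like features are evaluated in constant time via an integral image (summed-area table). A minimal sketch (function names and toy pixel values are ours, not from the cited works):

```python
def integral_image(img):
    """Summed-area table: entry (i, j) holds the sum of all pixels above
    and to the left, so any rectangle sum needs only 4 lookups."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for i in range(h):
        for j in range(w):
            ii[i + 1][j + 1] = img[i][j] + ii[i][j + 1] + ii[i + 1][j] - ii[i][j]
    return ii

def rect_sum(ii, top, left, height, width):
    """Sum of a pixel rectangle in O(1) using the integral image."""
    return (ii[top + height][left + width] - ii[top][left + width]
            - ii[top + height][left] + ii[top][left])

def haar_two_rect(ii, top, left, height, width):
    """Two-rectangle Haar-like feature: left half minus right half, the
    kind of contrast feature the Viola-Jones cascade thresholds."""
    half = width // 2
    return (rect_sum(ii, top, left, height, half)
            - rect_sum(ii, top, left + half, height, half))

img = [[10, 10, 0, 0],   # toy 2x4 patch: bright left half, dark right half
       [10, 10, 0, 0]]
ii = integral_image(img)
print(haar_two_rect(ii, 0, 0, 2, 4))  # 40: strong left/right contrast
```

Because every feature costs a handful of additions regardless of its size, thousands of such features can be evaluated per window in real time, which is the efficiency the text attributes to Viola-Jones.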

Databases
After an extensive study of the database to be used, the one that had the most subjects and provided the best conditions was chosen (26).

Facial Emotion Recognition (FER)
Traditional FER has first-order feature descriptors based on geometry and appearance, as well as higher-order feature descriptors, such as the covariance matrix (14).

RAF-DB
It is provided by the Laboratory of Pattern Recognition and Intelligent Systems (PRIS Lab) of Beijing University of Posts and Telecommunications.The database consists of more than 300,000 facial images extracted from the Internet, which are classified into seven categories: surprise, fear, disgust, anger, sadness, happiness and neutrality (2).

Methods
For the search of information in scientific papers, following the systematic review of the literature according to the PRISMA method, the most relevant papers were selected to identify the issues related to facial emotion recognition: the degree of accuracy, which datasets are the most used, what kinds of emotions exist, both positive and negative, and what indicators could be added for new emotions. These issues are part of the research questions, as shown in Table 3. The information gathering process was carried out through different databases, such as Scopus, Web of Science (WOS), and the Association for Computing Machinery (ACM), with other papers found through manual search. The keywords that allowed us to access the required information were: "Emotions", "Facial Emotion", "Computer Vision", "deep learning", "machine learning", "detection", "recognition", "identification", "classification", using the Boolean operators AND, OR, and NOT in the search process. The search was applied to the "title-abs-key" field in Scopus, the "All Fields" field in Web of Science (WOS), and the "All" field in the Association for Computing Machinery (ACM). The search equation is shown below: (("Emotions" OR "Facial Emotion") AND ("Computer Vision" OR "deep learning" OR "machine learning") AND ("detection" OR "recognition" OR "identification" OR "classification") AND NOT ("Sentiment analysis" OR "musical" OR "Speech" OR "pet" OR "dog")). Inclusion criteria were: i) open-access and full-text papers, ii) papers published in indexed journals in the period from January 2022 to June 2023. The exclusion criteria were papers that: i) contained gray literature, ii) had only the abstract accessible, iii) were duplicated, iv) were not in line with the research objective.

Results and Discussion
In the analysis of the results of the review papers, a total of 38 papers selected between 2022 and 2023 were obtained, focused on the topic of facial emotion recognition. Haider et al. (35)

Taxonomy:
Gokani (2017) (3) mentions the concepts of artificial intelligence along with the hierarchical relationship it has with machine learning, while in (21) a taxonomy of domains and subdomains relating perception to computer vision is proposed.

Most frequently used algorithms for facial emotion recognition
It is determined that the algorithms most frequently used by the authors are SVM and SoftMax, with a share of 17.65% each; these are used for classification in the implementation of the proposed systems. The descriptors most frequently used by the researchers are Histograms of Orientation Gradients (HOG), as shown in Table 8 (13). Method, model, and dataset achieving the best degree of precision in facial emotion recognition.

Identification of the types of emotions being classified
It is described that there are seven types of emotion classifications.

Proposal of an indicator that relates facial coloration and facial emotion recognition.
In one of his contributions to physiology, documented by Sir Charles Bell in 1806 in the first edition of his work and later in the third edition of "Anatomy and Philosophy of Expression", a seemingly small but crucial detail is revealed. Bell notes that the muscles around the eyes contract involuntarily during intense respiratory efforts, thus playing a role in protecting these delicate organs from blood pressure. However, Mr. Bain's more profound approach stands out in two of his works, where he considers expression as an intrinsic component of feeling (16,(40)(41)(42)(43). The physiological signaling approach is highlighted in the study by Lozano et al. (2020). Their Artificial Neural Network identifies the dominant dimension of the emotional circumplex model and ensures consistency among the various emotions detected by the different modules of the system. For example, in the case of a user's partial emotions, such as fear (facial detection), aroused (behavioral detection), and nervous (physiological signals), a coherent connection is established in the analysis (44)(45)(46)(47). In this study, we seek to test the additional hypothesis that facial color is capable of communicating emotion effectively, even without the influence of facial movements. The idea put forward is that a face can convey emotional information to observers by altering blood flow or blood composition in the network of blood vessels closest to the skin surface, as illustrated in Figure 3 (1). It is also recorded that, in relation to facial blushing in Australians, four informants maintain that those with dark skin tones, similar to Africans, rarely show blushing. A fifth informant, however, suggests that intense blushing is only observed in individuals with very dirty skin. Three observers confirm that blushing does occur. Mr. S. Wilson adds that this occurs in situations of strong emotion and when the skin is not excessively darkened by prolonged exposure and lack of cleansing. Mr. Liang also reports that embarrassment often causes blushing, which sometimes extends to the neck. He adds that embarrassment also manifests itself in eye movements from side to side. Since Mr. Liang was a teacher in a native school, his observations possibly focused more on children, who tend to blush more than adults. In the choice of algorithms, SVM and SoftMax have emerged as the predominant choices, playing a crucial role in achieving optimal levels of accuracy in model training. These algorithms, with their robustness and ability to deal with complex data, have proven to be fundamental pillars in the field of facial emotion recognition.
It is noteworthy that emotions are effectively analyzed in surveillance, smart homes, computer games, tracking of depressive patients, and psychoanalysis (48). It is important to track emotional micro-expressions, capturing peculiarities such as face movements in space-time from videos (49). The idea is highlighted that the use of facial recognition can be focused on three areas: robotized machines, marketing, and safe citizenship (53)(54)(55). In the marketing of services, technology applied to the recognition of emotions and facial features significantly helps physicians, both in the diagnosis of the various types of autism and in treatment to optimize the quality of life of children and young people living with this disability (56-57).
There are models, methods, and sets of databases that allow for maximum accuracy in this task.The synergy between [VGG19 + our network] and the CK+ database has proven to be exceptionally successful, achieving a staggering 99.20% accuracy.This achievement highlights the crucial importance of finding the optimal configuration for optimal performance in facial emotion interpretation.
There is an interesting trend in the literature: a more accentuated focus on negative emotions, such as anger, sadness, fear, and disgust, compared to positive emotions, such as joy and surprise. However, it is essential to mention that the valence of the emotion surprise still persists as an area of debate and discussion in the scientific community (31).
Facial emotion recognition provides us with an in-depth and comprehensive overview of the techniques, methods, and emerging trends in this discipline. With the convergence of psychology and technology, this research not only expands our knowledge of how machines can capture and understand human emotions, but also opens new doors to explore how this understanding can be applied in various fields and applications, such as human-computer interaction, mental health, and artificial intelligence. Given that there are many realities, it is advisable to proceed with caution and decide in moderation (47). The path traced by these studies promises to further unravel the mysteries of facial expressions and their connections with human emotions, generating a profound and lasting impact on the field of science and technology.

Proposed techniques and algorithms used for facial emotion recognition

Techniques and Taxonomy
The results are shown in Table 4, where 36.84% of the papers use Deep Learning for the development of the techniques and complementary models they apply in their research projects.

Figure 1: Statistical graph of the most frequent algorithm.

Figure 2: Statistical plot of the datasets related to emotions.

Figure 3: Emotions are the execution of a series of calculations by the nervous system.

Figure 4: Proposal of a new model to measure the level of embarrassment.

Figure 5: Proposal of a new model to measure the level of shame.

Figure 6: Proposal of a new model to see the relationship between basic emotions and positive or negative emotions according to the information.

Table 4: Results of the Artificial Intelligence Techniques.

Table 5 shows that the majority using the Deep Learning technique also use the methods, models, and datasets called MobileNetV3 (30) and ResNet-18 (23).

Table 5: Results of the Artificial Intelligence Techniques.

Table 6: Artificial Intelligence Taxonomy Result.

Table 7: Frequency of the algorithms most used in facial emotion recognition.

Table 8: Frequency of the descriptors most frequently used in facial emotion recognition.

Table 9: Methods and models with accuracies for facial emotion detection.

Table 10: Datasets for facial emotion recognition. The most frequently used dataset is CK+, representing 16.67% of uses by the authors. Results of the papers focused on the datasets, with their frequency of use for facial emotions.

Table 11: Description of the characteristics of the emotions datasets.

Classification and characteristics of the types of facial emotions
It is described that there are two types of basic emotions that share the same characteristics with facial expressions.

Table 12: Results of the papers that determined the number of emotions found.

Table 13: Results of the papers focused on the types of emotion classifications.

Table 14: Results of the papers focused on positive and negative emotions.