Performance Comparison of Convolutional and Multiclass Neural Network for Learning Style Detection from Facial Images
DOI:
https://doi.org/10.4108/eai.20-10-2021.171549
Keywords:
Learning Style, Artificial Neural Network, Facial Images, VARK Learning-Style Model, Deep Learning
Abstract
Improving the accuracy of learning style detection models is a primary concern in the area of automatic learning style detection, and it can be achieved either through attribute/feature selection or through the choice of classification algorithm. However, the role of facial expression in improving accuracy has not been fully explored in this research domain. Meanwhile, deep learning has emerged as a new approach for solving complex problems using Deep Neural Networks (DNNs); these deep architectures decompose a problem into multiple processing layers, enabling successive mappings that approximate complex functions. In this paper, we investigate and compare the performance of a Convolutional Neural Network (CNN) and a MultiClass Neural Network (MCNN) for classifying learners into the VARK learning-style dimensions (i.e., Visual, Aural, Reading, Kinaesthetic, plus a Neutral class) based on facial images. The performance of the two networks was evaluated and compared using the mean squared error (MSE) during training and the accuracy metric during testing. The results show that the MCNN offers better and more robust classification of VARK learning styles from facial images. Finally, this paper demonstrates the potential of a new method for automatic classification of VARK learning styles based on Facial Expressions (FEs). Based on the experimental results of the models, this approach can help both researchers and users of adaptive e-learning systems to uncover the potential of FEs as identifiers of learning styles for recommendation and personalization of learning environments.
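To make the comparison concrete, below is a minimal sketch in Python with tf.keras of the two model families the abstract names. This is not the authors' implementation: the input size, layer widths, and optimizer are illustrative assumptions; only the five-class output (VARK plus Neutral), the MSE training loss, and the accuracy test metric are taken from the abstract.

import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 5            # Visual, Aural, Reading, Kinaesthetic, Neutral
INPUT_SHAPE = (48, 48, 1)  # assumed grayscale face-crop size

def build_cnn():
    # Convolutional classifier: learns spatial features from raw pixels.
    return models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=INPUT_SHAPE),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

def build_mcnn():
    # Multiclass fully connected network: operates on flattened pixels.
    return models.Sequential([
        layers.Flatten(input_shape=INPUT_SHAPE),
        layers.Dense(256, activation="relu"),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

for name, model in [("CNN", build_cnn()), ("MCNN", build_mcnn())]:
    # MSE as the training loss and accuracy as the evaluation metric,
    # mirroring the paper's setup; MSE here expects one-hot labels.
    model.compile(optimizer="adam",
                  loss="mean_squared_error",
                  metrics=["accuracy"])
    # model.fit(x_train, tf.keras.utils.to_categorical(y_train, NUM_CLASSES),
    #           validation_split=0.2, epochs=30)

The key structural difference the sketch highlights is that the CNN exploits the 2D layout of the face image through shared convolutional filters, while the MCNN treats the image as a flat feature vector and relies entirely on dense layers.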
License
Copyright (c) 2022 EAI Endorsed Transactions on Scalable Information Systems
This work is licensed under a Creative Commons Attribution 3.0 Unported License.
This is an open access article distributed under the terms of the CC BY-NC-SA 4.0, which permits copying, redistributing, remixing, transforming, and building upon the material in any medium, so long as the original work is properly cited.