Facial mask-wearing prediction and adaptive gender classification using convolutional neural networks

Authors

Mohamed Oulad-Kaddour, H. Haddadou, Daniel Palacios-Alonso, Cristina Conde, Enrique Cabello

DOI:

https://doi.org/10.4108/eetinis.v11i2.4318

Keywords:

Gender classification, face biometrics, facial occlusions, mask-wearing, convolutional neural networks, explainable artificial intelligence

Abstract

The world has lived through an exceptional period caused by the coronavirus pandemic. To limit the propagation of Covid-19, governments required people to wear facial masks in public. In facial data analysis, mask-wearing creates a predominant occlusion that hides the important oral region and makes human face recognition and categorisation more challenging. Adapting existing solutions to take the masked context into account is therefore indispensable for researchers. In this paper, we propose an approach for mask-wearing prediction and adaptive facial human-gender classification. The proposed approach is based on convolutional neural networks (CNNs). Both mask-wearing and gender information are crucial for various possible applications. Experimentation shows that mask-wearing is very well detectable with CNNs, which justifies its use as a preprocessing step. It also shows that retraining with masked faces is indispensable to maintain gender classification performance. In addition, experimentation shows that, in a context of controlled face pose and acceptable image quality, the gender attribute remains well detectable. Finally, we show empirically that the proposed adaptive approach improves overall gender prediction performance in a mixed context of masked and unmasked faces.
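
To make the idea concrete, the following is a minimal illustrative sketch in Keras of such an adaptive two-stage pipeline: a small CNN first predicts mask-wearing, and its output routes each face crop to a gender classifier trained on masked or on unmasked faces. The architecture, layer sizes, threshold, and function names are assumptions for illustration only, not the implementation evaluated in the paper.

    # Illustrative sketch (not the authors' exact architecture): mask-wearing
    # prediction as a preprocessing step that routes each face to a gender
    # classifier trained on the corresponding (masked / unmasked) data.
    import numpy as np
    from tensorflow.keras import layers, models

    def build_binary_cnn(input_shape=(128, 128, 3)):
        """Small CNN for a binary decision (mask / no-mask, or gender)."""
        return models.Sequential([
            layers.Input(shape=input_shape),
            layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
            layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
            layers.Conv2D(128, 3, activation="relu"),
            layers.GlobalAveragePooling2D(),
            layers.Dense(64, activation="relu"),
            layers.Dense(1, activation="sigmoid"),
        ])

    # Three models: mask detector, gender classifier trained on unmasked
    # faces, and gender classifier retrained on masked faces.
    mask_detector = build_binary_cnn()
    gender_unmasked = build_binary_cnn()
    gender_masked = build_binary_cnn()

    def predict_gender_adaptive(face_batch, threshold=0.5):
        """Route each face to the gender model matching its predicted mask state."""
        mask_prob = mask_detector.predict(face_batch, verbose=0).ravel()
        out = np.empty_like(mask_prob)
        masked = mask_prob >= threshold
        if masked.any():
            out[masked] = gender_masked.predict(face_batch[masked], verbose=0).ravel()
        if (~masked).any():
            out[~masked] = gender_unmasked.predict(face_batch[~masked], verbose=0).ravel()
        return out  # per-face probability of one gender class

Keeping two separate gender models, rather than a single one, mirrors the paper's finding that retraining with masked faces is needed to maintain gender classification performance.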

Author Biographies

Mohamed Oulad-Kaddour, École Nationale Supérieure d'Informatique

Laboratoire de la Communication dans les Systèmes Informatiques, École Nationale Supérieure d'Informatique, Oued-Smar, Algiers, Algeria. Mohamed Oulad-Kaddour received the engineering and magister degrees from the École Nationale Supérieure d'Informatique (ESI), Algiers, in 2011 and 2015, respectively, where he is currently pursuing the Ph.D. degree. He is an Assistant Professor with the École Nationale Supérieure des Travaux Publics (ENSTP), Algiers. He is preparing his Ph.D. in collaboration with the Face Recognition and Artificial Vision (FRAV) Research Group, Universidad Rey Juan Carlos, Madrid. His research interests include image classification, biometric categorization, machine learning, and image processing.

Daniel Palacios-Alonso, King Juan Carlos University

Escuela Técnica Superior de Ingeniería Informática, Universidad Rey Juan Carlos, Campus de Móstoles, Madrid, Spain. Daniel Palacios-Alonso was born in Madrid, Spain. He received the B.S. and M.S. degrees in computer science and the Ph.D. degree in advanced computation from Universidad Politécnica de Madrid (UPM), in 2009 and 2017, respectively. He was a Team Leader at a technological consulting firm for five years. Since 2013, he has been a member of the Neuromorphic Speech Processing Laboratory, Center for Biomedical Technology. He is currently an Associate Professor with Universidad Rey Juan Carlos (URJC). He is also the Head of the Bioinspired Systems and Applications Group (SA-BIO). His research interests include stress and emotional states, neurodegenerative diseases (such as Parkinson's, ALS, and Alzheimer's), artificial vision, pattern recognition, and biomedical signal processing. He was a recipient of several best paper awards, including ICPRS 2016, BIOSIGNALS 2019, and JID 2020, and the Doctoral Consortium Award from the Spanish Association of Artificial Intelligence, in 2013. He is a reviewer of national and international journal articles.

Cristina Conde, King Juan Carlos University

Escuela Técnica Superior de Ingeniería Informática, Universidad Rey Juan Carlos, Campus de Móstoles, Madrid, Spain. Cristina Conde Vilda received the B.S. degree in physics (electronics) from the Complutense University of Madrid, in 1999, and the Ph.D. degree from Universidad Rey Juan Carlos, Madrid, in 2006. She worked in the private sector for several years. In 2001, she joined Universidad Rey Juan Carlos as an Assistant Professor. For seven years, she was the Vice Dean of Studies of the Computer Science School. She is currently a Full Professor. She has coordinated several national and European projects. Her research interests include image and video analysis, pattern recognition, and machine learning in both classical and biologically inspired computation.

Enrique Cabello, King Juan Carlos University

Escuela Técnica Superior de Ingeniería Informática, Universidad Rey Juan Carlos, Campus de Móstoles, Madrid, Spain. Enrique Cabello (Member, IEEE) received the B.S. degree in physics (electronics) from the University of Salamanca and the Ph.D. degree from the Polytechnic University of Madrid. In 1990, he joined the Computer Science Department, University of Salamanca. He joined Universidad Rey Juan Carlos in 1998, where he has been the Head of the Face Recognition and Artificial Vision Group since 2001. He is currently a Full Professor. His research interests include image and video analysis, pattern recognition, and machine learning using classic and bioinspired approaches.

References

Bhattacharya, S., Maddikunta, P.K.R., Pham, Q., Gadekallu, T.R., Krishnan, S., Chowdhary, C.L., Alazab, M. and Piran, M. (2021) Deep learning and medical image processing for coronavirus (COVID-19) pandemic: A survey. Sustainable Cities and Society 65, doi: 10.1016/j.scs.2020.102589. DOI: https://doi.org/10.1016/j.scs.2020.102589

Wang, L., Lin, Z.Q. and Wong, A. (2020) COVID-Net: a tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images. Sci Rep 10, doi: 10.1038/s41598-020-76550-z. DOI: https://doi.org/10.1038/s41598-020-76550-z

Ng, C.B., Tay, Y.H. and Goi, B.M. (2015) A review of facial gender recognition. Pattern Anal Applic 18: 739–755, doi: 10.1007/s10044-015-0499-6. DOI: https://doi.org/10.1007/s10044-015-0499-6

Benenson, R. (2014). Occlusion Detection. In: Ikeuchi, K. (eds) Computer Vision. Springer, doi: 10.1007/978-0-387-31439-6_135. DOI: https://doi.org/10.1007/978-0-387-31439-6_135

Das, A., Ansari, W. and Basak, R. (2020) Covid-19 Face Mask Detection Using TensorFlow, Keras and OpenCV. IEEE 17th India Council International Conference (INDICON): 1-5, doi: 10.1109/INDICON49873.2020.9342585. DOI: https://doi.org/10.1109/INDICON49873.2020.9342585

Deng, J., Guo, J., Zhou, Y., Yu, J., Kotsia, I. and Zafeiriou, S. (2020) RetinaFace: Single-stage Dense Face Localisation in the Wild. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR): 5202-5211, doi: 10.1109/CVPR42600.2020.00525. DOI: https://doi.org/10.1109/CVPR42600.2020.00525

Zhang, L., Verma, B., Tjondronegoro, D. and Chandran, V. (2018) Facial Expression Analysis under Partial Occlusion: A Survey. ACM Comput. Surv. 51 (2), doi: 10.1145/3158369. DOI: https://doi.org/10.1145/3158369

Ghazi, M. and Ekenel, K. (2016) A comprehensive analysis of deep learning based representation for face recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW): 102-109, doi: 10.1109/CVPRW.2016.20. DOI: https://doi.org/10.1109/CVPRW.2016.20

Trigueros, D., Meng, L. and Hartnett, M. (2018) Enhancing convolutional neural networks for face recognition with occlusion maps and batch triplet loss. Image and Vision Computing 79: 99–108, doi: 10.1016/j.imavis.2018.09.011. DOI: https://doi.org/10.1016/j.imavis.2018.09.011

Rai, P. and Khanna, P. (2014) A gender classification system robust to occlusion using Gabor features based (2D)2PCA, J. Vis. Commun. Image R. 25: 1118–1129, doi: 10.1016/j.jvcir.2014.03.009 DOI: https://doi.org/10.1016/j.jvcir.2014.03.009

Wu, G., Tao, J. and Xu, X. (2019) Occluded Face Recognition Based on the Deep Learning. The 31st Chinese Control and Decision Conference, Nanchang, China: 793-797, doi: 10.1109/CCDC.2019.8832330. DOI: https://doi.org/10.1109/CCDC.2019.8832330

Lin, L.E. and Lin C.H. (2021) Data augmentation with occluded facial features for age and gender estimation. IET Biometrics, doi: 10.1049/bme2.12030 DOI: https://doi.org/10.1049/bme2.12030

Hsu, C.Y., Lin, L.E. and Lin, C.H. (2021) Age and gender recognition with random occluded data augmentation on facial images. Multimed Tools Appl 80: 11631–11653, doi: 10.1007/s11042-020-10141-y. DOI: https://doi.org/10.1007/s11042-020-10141-y

Juefei-Xu, F., Verma, E., Goel, P., Cherodian, A., and Savvides, M. (2016) DeepGender: Occlusion and Low Resolution Robust Facial Gender Classification via Progressively Trained Convolutional Neural Networks with Attention. IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW): 136-145, doi:10.1109/CVPRW.2016.24. DOI: https://doi.org/10.1109/CVPRW.2016.24

Li, Y., Zeng, J., Shan, S. and Chen, X. (2019) Occlusion Aware Facial Expression Recognition Using CNN With Attention Mechanism. IEEE Transactions on Image Processing 28: 2439-2450, doi:10.1109/TIP.2018.2886767. DOI: https://doi.org/10.1109/TIP.2018.2886767

Afifi, M. and Abdelhamed, A. (2019) AFIF4: Deep gender classification based on AdaBoost-based fusion of isolated facial features and foggy faces. J. Vis. Commun. Image R. 62: 77-86, doi: 10.1016/j.jvcir.2019.05.001. DOI: https://doi.org/10.1016/j.jvcir.2019.05.001

Learned-Miller, E., Huang, G.B., RoyChowdhury, A., Li, H. and Hua, G. (2016) Labeled Faces in the Wild: A Survey. In Advances in Face Detection and Facial Image Analysis, Springer: 189-248, doi: 10.1007/978-3-319-25958-1_8. DOI: https://doi.org/10.1007/978-3-319-25958-1_8

Rouhsedaghat, M., Wang, Y., Ge, X., Hu, S., You, S. and Kuo, C.J. (2021) FaceHop: A light-weight low-resolution face gender classification method. In Proc. Int. Workshops Challenges, Springer: 169–183, doi: 10.1007/978-3-030-68793-9_12. DOI: https://doi.org/10.1007/978-3-030-68793-9_12

Cabani, A., Hammoudi, K., Benhabiles, H. and Melkemi, M. (2021) MaskedFace-Net – A dataset of correctly/incorrectly masked face images in the context of COVID-19. Smart Health 19, doi: 10.1016/j.smhl.2020.100144. DOI: https://doi.org/10.1016/j.smhl.2020.100144

Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D. and Batra, D. (2017) Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization. IEEE International Conference on Computer Vision (ICCV): 618-626, doi: 10.1109/ICCV.2017.74. DOI: https://doi.org/10.1109/ICCV.2017.74

Jia, S., Lansdall-Welfare, T. and Cristianini, N. (2016) Gender Classification by Deep Learning on Millions of Weakly Labelled Images. IEEE 16th International Conference on Data Mining Workshops (ICDMW): 462-467, doi: 10.1109/ICDMW.2016.0072. DOI: https://doi.org/10.1109/ICDMW.2016.0072

Song, L., Gong, D., Li, Z., Liu, C. and Liu, W. (2019) Occlusion Robust Face Recognition Based on Mask Learning With Pairwise Differential Siamese Network. IEEE/CVF International Conference on Computer Vision (ICCV): 773-782, doi: 10.1109/ICCV.2019.00086. DOI: https://doi.org/10.1109/ICCV.2019.00086

Zeng, D., Veldhuis, R., and Spreeuwers, L. (2021) A survey of face recognition techniques under occlusion. IET Biometrics, doi: 10.1049/bme2.12029. DOI: https://doi.org/10.1049/bme2.12029

Karras, T., Laine, S. and Aila, T. (2019) A Style-Based Generator Architecture for Generative Adversarial Networks. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR): 4396-4405, doi:10.1109/CVPR.2019.00453. DOI: https://doi.org/10.1109/CVPR.2019.00453

Levi, G. and Hassner, T. (2015) Age and gender classification using convolutional neural networks. IEEE Conference on Computer Vision and Pattern Recognition Workshops: 34-42, doi: 10.1109/CVPRW.2015.7301352. DOI: https://doi.org/10.1109/CVPRW.2015.7301352

Annalakshmi, M., Roomi, S.M.M. and Naveedh, A.S. (2019) A hybrid technique for gender classification with SLBP and HOG features. Cluster Comput 22 (Suppl 1): 11–20, doi: 10.1007/s10586-017-1585-x. DOI: https://doi.org/10.1007/s10586-017-1585-x

[Online] FEI, Centro Universitario da FEI, FEI Face Database. Available online: fei.edu.br/cet/facedatabase

Alzubaidi, L., Zhang, J., Humaidi, A.J. et al. (2021) Review of deep learning: concepts, CNN architectures, challenges, applications, future directions. J Big Data 8, doi: 10.1186/s40537-021-00444-8. DOI: https://doi.org/10.1186/s40537-021-00444-8

Zhang, N., Paluri, M., Ranzato, M.A., Darrell, T. and Bourdev, L. (2014) PANDA: Pose aligned networks for deep attribute modeling. IEEE Conference on Computer Vision and Pattern Recognition: 1637-1644, doi: 10.1109/CVPR.2014.212. DOI: https://doi.org/10.1109/CVPR.2014.212

Lee, B., Gilani, S.Z., Hassan, G.M. and Mian, A. (2019) Facial Gender Classification - Analysis using Convolutional Neural Networks. Digital Image Computing: Techniques and Applications (DICTA): 1-8, doi: 10.1109/DICTA47822.2019.8946109. DOI: https://doi.org/10.1109/DICTA47822.2019.8946109

Ranjan, R., Patel, V.M. and Chellappa, R. (2019) HyperFace: A deep multi-task learning framework for face detection, landmark localization, pose estimation, and gender recognition. IEEE Trans. Pattern Anal. Mach. Intell. 41(1): 121-135, doi: 10.1109/TPAMI.2017.2781233. DOI: https://doi.org/10.1109/TPAMI.2017.2781233

Howard, A.G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., and Adam, H. (2017) MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. ArXiv preprint, arXiv:1704.04861.

Tan, M. and Le, Q.V. (2019) EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. ArXiv preprint, arXiv:1905.11946.

Gurnani, A., Gajjar, V., Mavani, V. and Khandhediya, Y. (2018) VEGAC: Visual Saliency-based Age, Gender, and Facial Expression Classification Using Convolutional Neural Networks. ArXiv preprint, arXiv:1803.05719.

Dong, X., Shen, J., Yu, D., Wang, W., Liu, J. and Huang, H. (2017) Occlusion-Aware Real-Time Object Tracking. IEEE Transactions on Multimedia 19 (4): 763-771, doi: 10.1109/TMM.2016.2631884. DOI: https://doi.org/10.1109/TMM.2016.2631884

Girshick, R. (2015) Fast R-CNN, Proceedings of the IEEE International Conference on Computer Vision (ICCV): 1440-1448, doi: 10.1109/ICCV.2015.169. DOI: https://doi.org/10.1109/ICCV.2015.169

Redmon, J., Divvala, S., Girshick, R., and Farhadi, A. (2016) You Only Look Once: Unified, Real-Time Object Detection. IEEE Conference on Computer Vision and Pattern Recognition (CVPR): 779-788, doi: 10.1109/CVPR.2016.91. DOI: https://doi.org/10.1109/CVPR.2016.91

Law, H. and Deng, J. (2020) CornerNet: Detecting Objects as Paired Keypoints. Int J Comput Vis 128: 642–656, doi: 10.1007/s11263-019-01204-1 DOI: https://doi.org/10.1007/s11263-019-01204-1

Wang, Z., Wang, G., Huang, B., Xiong, Z., Hong, Q., Wu, H., Yi, P., Jiang, K., Wang, N., Pei, Y., Chen, H., Miao, Y., Huang, Z., Liang, J. (2020) Masked face recognition dataset and application. ArXiv preprint, arXiv:2003.09093.

Montero, D., Nieto, M., Leskovsky, P. and Aginako, N. (2021) Boosting Masked Face Recognition with Multi-Task ArcFace. International Conference on Signal-Image Technology & Internet-Based Systems (SITIS): 184-189, doi: 10.1109/SITIS57111.2022.00042. DOI: https://doi.org/10.1109/SITIS57111.2022.00042

Vu, H.N., Nguyen, M.H. and Pham, C. (2022) Masked face recognition with convolutional neural networks and local binary patterns. Appl Intell 52: 5497–5512, doi: 10.1007/s10489-021-02728-1. DOI: https://doi.org/10.1007/s10489-021-02728-1

Oulad-Kaddour, M., Haddadou, H., Conde, C., Palacios-Alonso, D., Benatchba, K. and Cabello, E. (2023) Deep Learning-Based Gender Classification by Training With Fake Data. IEEE Access 11: 120766-120779, doi: 10.1109/ACCESS.2023.3328210. DOI: https://doi.org/10.1109/ACCESS.2023.3328210

Oulad-Kaddour, M., Haddadou, H., Conde, C., Palacios-Alonso, D. and Cabello, E. (2023) Real-world human gender classification from oral region using convolutional neural network. ADCAIJ, 11(3): 249–261, doi: 10.14201/adcaij.27797. DOI: https://doi.org/10.14201/adcaij.27797

Cheng, Ch. (2022) Real-Time Mask Detection Based on SSD-MobileNetV2. ArXiv preprint, arXiv:2208.13333. DOI: https://doi.org/10.1109/AUTEEE56487.2022.9994442

Zhang, H., Tang, J., Wu, P., Li, H., Zeng, N. (2023) A novel attention-based enhancement framework for face mask detection in complicated scenarios. Signal Processing: Image Communication 116, doi: 10.1016/j.image.2023.116985. DOI: https://doi.org/10.1016/j.image.2023.116985

Karkkainen, K. and Joo, J. (2021) FairFace: Face attribute dataset for balanced race, gender, and age for bias measurement and mitigation. In Proc. IEEE Winter Conf. Appl. Comput. Vis. (WACV): 1547–1557, doi: 10.1109/WACV48630.2021.00159. DOI: https://doi.org/10.1109/WACV48630.2021.00159

Published

13-03-2024

How to Cite

Oulad-Kaddour, M., Haddadou, H., Palacios-Alonso, D., Conde, C., & Cabello, E. (2024). Facial mask-wearing prediction and adaptive gender classification using convolutional neural networks. EAI Endorsed Transactions on Industrial Networks and Intelligent Systems, 11(2), e3. https://doi.org/10.4108/eetinis.v11i2.4318