A Resource-Efficient Deep Learning: Fast Hand Gestures on Microcontrollers

Authors

Thi Kim Tran Mach, Khai Nguyen Van, Minhhuy Le
DOI:

https://doi.org/10.4108/eetinis.v11i3.5616

Keywords:

Human-Computer Interaction, Microcontroller, TinyML, Intelligent system, Hand gesture recognition, Embedded, Communications

Abstract

Hand gesture recognition using a camera provides an intuitive and promising means of human-computer interaction, allowing operators to execute commands and control machines with simple gestures. Research on hand gesture recognition-based control systems has garnered significant attention, yet the deployment of microcontrollers in this domain remains relatively limited. In this study, we propose a novel approach to micro-hand gesture recognition built on micro-bottleneck Residual and micro-bottleneck Conv blocks. Our proposed model, comprising only 42K parameters, is optimized for size to facilitate seamless operation on resource-constrained hardware. Benchmarking conducted on STM32 microcontrollers demonstrates remarkable efficiency, with the model achieving an average prediction time of just 269ms, a 7× speedup over the state-of-the-art model. Notably, despite its compact size and enhanced speed, our model maintains competitive performance, achieving an accuracy of 99.6% on the ASL dataset and 92% on the OUHANDS dataset. These findings underscore the potential for deploying advanced control methods on compact, cost-effective devices, presenting promising avenues for future research and industrial applications.
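The abstract describes a model built from bottleneck-style residual blocks, a common pattern for keeping parameter counts low on microcontroller-class hardware. As a rough illustration only (the paper defines the actual "micro-bottleneck" blocks; the channel sizes, kernel shape, and activation placement below are assumptions), a bottleneck residual block can be sketched in NumPy as a 1×1 channel squeeze, a cheap 3×3 depthwise convolution, a 1×1 channel expansion, and a skip connection:

```python
# Hypothetical sketch of a bottleneck residual block (NOT the paper's exact
# micro-bottleneck design): 1x1 reduce -> 3x3 depthwise -> 1x1 expand -> skip.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def pointwise(x, w):
    # x: (H, W, Cin), w: (Cin, Cout) -> (H, W, Cout); a 1x1 convolution
    return x @ w

def depthwise3x3(x, k):
    # x: (H, W, C), k: (3, 3, C); 'same' padding, stride 1, one filter per channel
    h, w, _ = x.shape
    xp = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            out += xp[i:i + h, j:j + w, :] * k[i, j, :]
    return out

def bottleneck_residual(x, w_reduce, k_dw, w_expand):
    y = relu(pointwise(x, w_reduce))   # squeeze channels to cut parameters
    y = relu(depthwise3x3(y, k_dw))    # cheap spatial mixing
    y = pointwise(y, w_expand)         # restore the original channel count
    return x + y                       # residual skip connection

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8, 16))
w_r = rng.standard_normal((16, 4)) * 0.1   # 16 -> 4 channel bottleneck (assumed ratio)
k = rng.standard_normal((3, 3, 4)) * 0.1
w_e = rng.standard_normal((4, 16)) * 0.1
y = bottleneck_residual(x, w_r, k, w_e)
print(y.shape)  # output shape matches input, so blocks can be stacked
```

The squeeze-expand structure is what keeps such models in the tens-of-kilobytes parameter range suitable for STM32-class devices: the depthwise stage uses one small filter per channel instead of a full dense convolution.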

Downloads

Published

03-07-2024

How to Cite

Tran Mach, T. K., Nguyen Van, K., & Le, M. (2024). A Resource-Efficient Deep Learning: Fast Hand Gestures on Microcontrollers. EAI Endorsed Transactions on Industrial Networks and Intelligent Systems, 11(3). https://doi.org/10.4108/eetinis.v11i3.5616