A Resource-Efficient Deep Learning: Fast Hand Gestures on Microcontrollers
DOI:
https://doi.org/10.4108/eetinis.v11i3.5616

Keywords:
Human-Computer Interaction, Microcontroller, TinyML, Intelligent system, Hand gesture recognition, Embedded, Communications

Abstract
Hand gesture recognition using a camera provides an intuitive and promising means of human-computer interaction, allowing operators to execute commands and control machines with simple gestures. Research on hand gesture recognition-based control systems has garnered significant attention, yet the deployment of microcontrollers in this domain remains relatively limited. In this study, we propose a novel approach to micro-hand gesture recognition built on micro-bottleneck Residual and micro-bottleneck Conv blocks. Our proposed model comprises only 42K parameters and is optimized for size to facilitate seamless operation on resource-constrained hardware. Benchmarking on STM32 microcontrollers demonstrates remarkable efficiency: the model achieves an average prediction time of just 269ms, a 7× speedup over the state-of-the-art model. Notably, despite its compact size and enhanced speed, our model maintains competitive performance, achieving an accuracy of 99.6% on the ASL dataset and 92% on the OUHANDS dataset. These findings underscore the potential for deploying advanced control methods on compact, cost-effective devices, presenting promising avenues for future research and industrial applications.
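To illustrate why bottleneck-style blocks keep the parameter budget small on microcontrollers, here is a minimal sketch of a parameter count for one hypothetical bottleneck unit (1×1 pointwise reduce, k×k depthwise convolution, 1×1 pointwise expand). The channel widths and block structure below are illustrative assumptions, not the paper's actual micro-bottleneck design:

```python
def bottleneck_params(c_in: int, c_mid: int, c_out: int, k: int = 3) -> int:
    """Parameter count of a hypothetical bottleneck block:
    1x1 reduce -> kxk depthwise -> 1x1 expand, each with a bias term."""
    reduce_1x1 = c_in * c_mid + c_mid        # pointwise reduction
    depthwise = k * k * c_mid + c_mid        # one kxk filter per channel
    expand_1x1 = c_mid * c_out + c_out       # pointwise expansion
    return reduce_1x1 + depthwise + expand_1x1

# Example: a 32 -> 16 -> 32 channel block with 3x3 depthwise conv
print(bottleneck_params(32, 16, 32))  # → 1232
```

At roughly 1.2K parameters per such block, a stack of a few blocks plus a classifier head stays well within a 42K-parameter budget, which is the kind of arithmetic that makes deployment on STM32-class hardware feasible.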
License
Copyright (c) 2024 EAI Endorsed Transactions on Industrial Networks and Intelligent Systems
This work is licensed under a Creative Commons Attribution 3.0 Unported License.
This is an open-access article distributed under the terms of the Creative Commons Attribution CC BY 3.0 license, which permits unlimited use, distribution, and reproduction in any medium so long as the original work is properly cited.