A Mobile Lens: Voice-Assisted Smartphone Solutions for the Sightless to Assist Indoor Object Identification

Authors

  • Talal Saleem, Asia Pacific University of Technology & Innovation
  • V. Sivakumar, Asia Pacific University of Technology & Innovation

DOI:

https://doi.org/10.4108/eetiot.6450

Keywords:

Artificial intelligence, Deep learning, Indoor object identification, Mobile applications for the blind, Visual impairment

Abstract

Every aspect of life is organized around sight. Visually impaired individuals often suffer accidents while walking, colliding with people or walls. To navigate and perform daily tasks, they typically rely on white canes, trained guide dogs, or sighted volunteers. Guide dogs, however, are expensive and unaffordable for many, particularly since 90% of fully blind individuals live in low-income countries. Vision is crucial for attending school, reading, walking, and working; without it, people struggle with independent mobility and quality of life. While numerous applications are developed for the general public, there is a significant gap in on-device intelligent mobile assistance for visually challenged people. Our custom mobile deep learning model achieves an object classification accuracy of 99.63%. This study explores voice-assisted smartphone solutions as a cost-effective and efficient approach to enhancing the independent mobility, navigation, and overall quality of life of visually impaired or blind individuals.
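The pipeline the abstract describes, classifying objects on-device and then announcing them by voice, can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: `classify_frame`, `speak`, and the confidence threshold are all hypothetical stand-ins. In a real app, inference would run an exported mobile model (e.g. via TensorFlow Lite) on camera frames, and `speak` would call the platform's text-to-speech engine.

```python
# Hypothetical sketch of a voice-assisted indoor object identification loop:
# classify a camera frame on-device, then speak the result to the user.
from dataclasses import dataclass


@dataclass
class Prediction:
    label: str
    confidence: float


def classify_frame(frame) -> Prediction:
    # Stand-in for on-device inference on a camera frame; a real app
    # would run the exported mobile model here.
    return Prediction(label="chair", confidence=0.96)


def speak(text: str) -> str:
    # Stand-in for the platform text-to-speech engine; here it just
    # returns the phrase that would be spoken.
    return f"Speaking: {text}"


def announce(pred: Prediction, threshold: float = 0.8) -> str:
    # Only announce confident detections, so low-quality predictions
    # do not mislead the user.
    if pred.confidence >= threshold:
        return speak(f"{pred.label} ahead")
    return speak("No object recognized")
```

Gating announcements on a confidence threshold matters in this setting: for a blind user, a confidently wrong spoken label is worse than silence.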


References

G. Qiao, H. Song, B. Prideaux, and S. (Sam) Huang, “The ‘unseen’ tourism: Travel experience of people with visual impairment,” Ann. Tour. Res., vol. 99, p. 103542, 2023. DOI: https://doi.org/10.1016/j.annals.2023.103542

R. Bourne et al., “Trends in prevalence of blindness and distance and near vision impairment over 30 years: an analysis for the Global Burden of Disease Study,” Lancet Glob. Health, Dec. 2020.

M. Biswas et al., “Prototype Development of an Assistive Smart-Stick for the Visually Challenged Persons,” Proc. 2nd Int. Conf. Innov. Pract. Technol. Manag. ICIPTM 2022, vol. 2, pp. 477–482, 2022. DOI: https://doi.org/10.1109/ICIPTM54933.2022.9754183

Y. Tange, T. Konishi, and H. Katayama, “Development of vertical obstacle detection system for visually impaired individuals,” ACM Int. Conf. Proceeding Ser., no. 1, 2019. DOI: https://doi.org/10.1145/3325291.3325372

L. He, R. Wang, and X. Xu, “PneuFetch: Supporting Blind and Visually Impaired People to Fetch Nearby Objects via Light Haptic Cues,” Ext. Abstr. 2020 CHI Conf. Hum. Factors Comput. Syst., pp. 1–9, 2020. DOI: https://doi.org/10.1145/3334480.3383095

Z. Zou, K. Chen, Z. Shi, Y. Guo, and J. Ye, “Object Detection in 20 Years: A Survey,” Proc. IEEE, vol. 111, no. 3, pp. 257–276, 2023. DOI: https://doi.org/10.1109/JPROC.2023.3238524

Z. Huang et al., “Making accurate object detection at the edge: review and new approach,” Artif. Intell. Rev., vol. 55, no. 3, pp. 2245–2274, 2022. DOI: https://doi.org/10.1007/s10462-021-10059-3

B. Chen et al., “MnasFPN: Learning latency-aware pyramid architecture for object detection on mobile devices,” Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., pp. 13604–13613, 2020. DOI: https://doi.org/10.1109/CVPR42600.2020.01362

M. Shimakawa, K. Matsushita, I. Taguchi, C. Okuma, and K. Kiyota, “Smartphone apps of obstacle detection for visually impaired and its evaluation,” ACM Int. Conf. Proceeding Ser., pp. 143–148, 2019. DOI: https://doi.org/10.1145/3325291.3325381

S. Yu, H. Lee, and J. Kim, “Street crossing aid using light-weight CNNs for the visually impaired,” Proc. IEEE/CVF Int. Conf. Comput. Vis. Work., 2019. DOI: https://doi.org/10.1109/ICCVW.2019.00317

S. A. Jakhete, P. Bagmar, A. Dorle, A. Rajurkar, and P. Pimplikar, “Object Recognition App for Visually Impaired,” 2019 IEEE Pune Sect. Int. Conf. PuneCon 2019, pp. 18–21, 2019. DOI: https://doi.org/10.1109/PuneCon46936.2019.9105670

S. Vaidya, N. Shah, N. Shah, and R. Shankarmani, “Real-Time Object Detection for Visually Challenged People,” Proc. Int. Conf. Intell. Comput. Control Syst. ICICCS 2020, pp. 311–316, 2020. DOI: https://doi.org/10.1109/ICICCS48265.2020.9121085

D. Ramalingam, S. Tiwari, and H. Seth, “Vision connect: A smartphone based object detection for visually impaired people,” Int. Conf. Comput. Vis. Bio-Inspired Comput., Springer, Cham, 2020. DOI: https://doi.org/10.1007/978-3-030-37218-7_92

H. Nguyen, M. Nguyen, Q. Nguyen, S. Yang, and H. Le, “Web-based object detection and sound feedback system for visually impaired people,” 2020 Int. Conf. Multimed. Anal. Pattern Recognition, MAPR 2020, 2020. DOI: https://doi.org/10.1109/MAPR49794.2020.9237770

Published

28-06-2024

How to Cite

[1]
T. Saleem and V. Sivakumar, “A Mobile Lens: Voice-Assisted Smartphone Solutions for the Sightless to Assist Indoor Object Identification”, EAI Endorsed Trans IoT, vol. 10, Jun. 2024.