Pedestrian Perception Tracking in Complex Environment of Unmanned Vehicles Based on Deep Neural Networks

Authors

  • Ruru Liu Shanghai Maritime University
  • Feng Hong Chizhou University
  • Zuo Sun Anhui Research Center of Semiconductor Industry Generic Technology

DOI:

https://doi.org/10.4108/ew.5793

Keywords:

YOLOv4, Driverless Vehicles, Complex Scene Perception

Abstract

INTRODUCTION: In recent years, machine learning and deep learning have emerged as pivotal technologies with transformative potential across various industries. The automobile industry stands out as a significant arena for applying these technologies, particularly in the development of smart cars with unmanned driving systems. This article examines the detection technology that autonomous vehicles use to perceive road conditions, a critical aspect of driverless car technology.

OBJECTIVES: The primary aim of this research is to explore the intricacies of road condition detection for autonomous vehicles. Emphasizing the importance of this key component in the development of driverless cars, we provide insights into cutting-edge algorithms that enhance the capabilities of these vehicles, ultimately contributing to their widespread adoption.

METHODS: To address the challenge of road condition detection, we introduce the TidyYOLOv4 algorithm. Compared with YOLOv4, TidyYOLOv4 offers advantages in pedestrian recognition within urban traffic environments, and its real-time performance makes it well suited to detecting pedestrians on the road under dynamic conditions.
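
To make the detection step concrete, the following is a minimal sketch of running a YOLOv4-style one-stage detector on traffic video and keeping only pedestrian ("person") detections, using OpenCV's DNN module. This is not the paper's implementation: the TidyYOLOv4 model is not publicly specified here, so the configuration and weights file names, the input video name, and the thresholds below are placeholders, and a stock YOLOv4 config/weights pair could be substituted for experimentation.

import cv2

CONF_THRESHOLD = 0.5   # minimum detection confidence (assumed value)
NMS_THRESHOLD = 0.4    # non-maximum suppression overlap threshold (assumed value)
PERSON_CLASS_ID = 0    # index of the "person" class in the COCO label set

# Placeholder file names; substitute an available YOLOv4-family config/weights pair.
net = cv2.dnn.readNetFromDarknet("tidyyolov4.cfg", "tidyyolov4.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

cap = cv2.VideoCapture("urban_traffic.mp4")   # placeholder input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    class_ids, scores, boxes = model.detect(
        frame, confThreshold=CONF_THRESHOLD, nmsThreshold=NMS_THRESHOLD)
    for class_id, score, box in zip(class_ids, scores, boxes):
        if int(class_id) != PERSON_CLASS_ID:
            continue                          # keep pedestrian detections only
        x, y, w, h = map(int, box)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, "person %.2f" % float(score), (x, y - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    cv2.imshow("pedestrian detection", frame)
    if cv2.waitKey(1) == 27:                  # press Esc to stop
        break
cap.release()
cv2.destroyAllWindows()

The one-stage design is what gives the per-frame latency budget needed for real-time use; the confidence and NMS thresholds trade off missed pedestrians against false alarms and would be tuned per deployment.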

RESULTS: The application of the TidyYOLOv4 algorithm in autonomous vehicles has yielded promising results, especially in enhancing pedestrian recognition in urban traffic settings. The algorithm's real-time functionality proves crucial in ensuring the timely detection of pedestrians on the road, thereby improving the overall safety and efficiency of autonomous vehicles.
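
Real-time per-frame detections of this kind are what a downstream pedestrian tracker consumes to maintain identities across frames, as the article's title indicates. Purely as a generic illustration of that step, and not the authors' tracking method, the sketch below links boxes between consecutive frames by greedy IoU matching; the class name and threshold are assumptions.

import itertools

def iou(a, b):
    """Intersection-over-union of two boxes given as (x, y, w, h)."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

class GreedyIoUTracker:
    """Assigns persistent IDs to per-frame pedestrian boxes by greedy IoU matching."""
    def __init__(self, iou_threshold=0.3):
        self.iou_threshold = iou_threshold
        self.tracks = {}                  # track id -> last known box
        self._next_id = itertools.count()

    def update(self, boxes):
        assigned = {}
        unmatched = list(self.tracks.items())
        for box in boxes:
            # pick the existing track whose last box overlaps this detection most
            best = max(unmatched, key=lambda t: iou(t[1], box), default=None)
            if best is not None and iou(best[1], box) >= self.iou_threshold:
                track_id = best[0]
                unmatched.remove(best)
            else:
                track_id = next(self._next_id)   # start a new track
            assigned[track_id] = box
        self.tracks = assigned            # tracks unmatched this frame are dropped
        return assigned

# Example: feed the detector's per-frame pedestrian boxes into the tracker.
tracker = GreedyIoUTracker()
ids_to_boxes = tracker.update([(100, 50, 40, 90), (300, 60, 35, 85)])

Production trackers typically add motion prediction and appearance features on top of such association, but the IoU step above captures the basic idea of turning detections into tracks.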

CONCLUSION: The detection of road conditions is a critical aspect of autonomous vehicle technology, with direct implications for safety and efficiency. The TidyYOLOv4 algorithm emerges as a noteworthy advancement, outperforming its predecessor YOLOv4 in pedestrian recognition within urban traffic environments. As companies continue to invest in driverless technology, leveraging such advanced algorithms becomes imperative for the successful deployment of autonomous vehicles in real-world scenarios.


Author Biographies

Ruru Liu, Shanghai Maritime University

Intelligent Perception and Computing Research Center, Chizhou University

Feng Hong, Chizhou University

Anhui Research Center of Semiconductor Industry Generic Technology

References

Jin Q, Cui H, Sun C, et al. Domain adaptation based self-correction model for COVID-19 infection segmentation in CT images[J]. Expert Systems with Applications, 2021, 176. DOI: https://doi.org/10.1016/j.eswa.2021.114848

Li W, Raj A N J, Tjahjadi T, et al. Digital hair removal by deep learning for skin lesion segmentation[J]. Pattern Recognition, 2021, 117. DOI: https://doi.org/10.1016/j.patcog.2021.107994

Niehues S M, Adams L C, Gaudin R A, et al. Deep-Learning-Based Diagnosis of Bedside Chest X-ray in Intensive Care and Emergency Medicine[J]. Investigative Radiology, 2021, 56(8): 525-534. DOI: https://doi.org/10.1097/RLI.0000000000000771

Owais M, Yoon H S, Mahmood T, et al. Light-weighted ensemble network with multilevel activation visualization for robust diagnosis of COVID19 pneumonia from large-scale chest radiographic database[J]. Applied Soft Computing, 2021, 108. DOI: https://doi.org/10.1016/j.asoc.2021.107490

Onan A, Tocoglu M A. A Term Weighted Neural Language Model and Stacked Bidirectional LSTM Based Framework for Sarcasm Identification[J]. IEEE Access, 2021, 9: 7701-7722. DOI: https://doi.org/10.1109/ACCESS.2021.3049734

Roh Y, Heo G, Whang S E. A Survey on Data Collection for Machine Learning: A Big Data-AI Integration Perspective[J]. IEEE Transactions on Knowledge and Data Engineering, 2021, 33(4): 1328-1347. DOI: https://doi.org/10.1109/TKDE.2019.2946162

Wen S, Wei H, Yang Y, et al. Memristive LSTM Network for Sentiment Analysis[J]. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2021, 51(3): 1794-1804.

Yang Z-L, Zhang S-Y, Hu Y-T, et al. VAE-Stega: Linguistic Steganography Based on Variational Auto-Encoder[J]. IEEE Transactions on Information Forensics and Security, 2021, 16: 880-895. DOI: https://doi.org/10.1109/TIFS.2020.3023279

Burnett K, Qian J, Du X, et al. Zeus: A system description of the two-time winner of the collegiate SAE AutoDrive Competition[J]. Journal of Field Robotics, 2021, 38(1): 139-166. DOI: https://doi.org/10.1002/rob.21958

Burnett K, Samavi S, Waslander S L, et al. aUToTrack: A Lightweight Object Detection and Tracking System for the SAE AutoDrive Challenge[J]. arXiv, 2019. DOI: https://doi.org/10.1109/CRV.2019.00036

Samak T V, Samak C V, Ming X. AutoDRIVE Simulator: A Simulator for Scaled Autonomous Vehicle Research and Education[J]. arXiv, 2021. DOI: https://doi.org/10.1145/3483845.3483846

Wen J, Chen B, Tang W, et al. Harsh-Environmental-Resistant Triboelectric Nanogenerator and Its Applications in Autodrive Safety Warning[J]. Advanced Energy Materials, 2018, 8(29). DOI: https://doi.org/10.1002/aenm.201801898

Wang N. Research on pedestrian detection algorithm and its security in unmanned driving[D]. Nanjing University, 2020.

Dai J, Li Y, He K, et al. R-FCN: Object Detection via Region-based Fully Convolutional Networks[J]. arXiv:1605.06409, 2016.

Girshick R, Donahue J, Darrell T, et al. Rich feature hierarchies for accurate object detection and semantic segmentation[C]. 2014 IEEE Conference on Computer Vision and Pattern Recognition, 2014: 580-587. DOI: https://doi.org/10.1109/CVPR.2014.81

Girshick R. Fast R-CNN[J]. arXiv:1504.08083, 2015. DOI: https://doi.org/10.1109/ICCV.2015.169

He K, Zhang X, Ren S, et al. Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition[J]. arXiv:1406.4729, 2014. DOI: https://doi.org/10.1007/978-3-319-10578-9_23

Redmon J, Divvala S, Girshick R, et al. You Only Look Once: Unified, Real-Time Object Detection[J]. arXiv:1506.02640, 2015. DOI: https://doi.org/10.1109/CVPR.2016.91

Redmon J, Farhadi A. YOLO9000: Better, Faster, Stronger[J]. arXiv:1612.08242, 2016. DOI: https://doi.org/10.1109/CVPR.2017.690

Redmon J, Farhadi A. YOLOv3: An Incremental Improvement[J]. arXiv:1804.02767, 2018.

Bochkovskiy A, Wang C-Y, Liao H-Y M. YOLOv4: Optimal Speed and Accuracy of Object Detection[J]. arXiv:2004.10934, 2020.

Published

15-04-2024

How to Cite

Liu R, Hong F, Sun Z. Pedestrian Perception Tracking in Complex Environment of Unmanned Vehicles Based on Deep Neural Networks. EAI Endorsed Trans Energy Web [Internet]. 2024 Apr. 15 [cited 2024 Nov. 22];11. Available from: https://publications.eai.eu/index.php/ew/article/view/5793