Using Deep Neural Networks to Classify Symbolic Road Markings for Autonomous Vehicles
DOI: https://doi.org/10.4108/eetinis.v9i31.985

Keywords: convolutional neural networks, symbolic road markings, autonomous cars, intelligent systems, system design, embedded systems

Abstract
To make autonomous cars as safe as possible for all road users, it is essential to interpret as many sources of trustworthy information as possible. Substantial research has addressed the interpretation of objects such as traffic lights and pedestrians; however, less attention has been paid to symbolic road markings (SRMs). SRMs carry essential information that autonomous vehicles must interpret, so this case study presents a comprehensive model for classifying painted symbolic road markings using a region of interest (ROI) detector and a deep convolutional neural network (DCNN). In the first stage, the ROI detector crops and segments the road lane using Hough lines, eliminating nonessential features of the image; in the second stage, features are extracted from the cropped region and a CNN is trained to classify the SRM. The two-stage model has been trained and tested on an extensive public dataset. The model is robust, achieving up to 92.96 percent accuracy at 26.07 and 40.1 frames per second (FPS) on ROI-scaled and raw images, respectively.
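The two-stage pipeline lends itself to a compact illustration. The sketch below (Python, using OpenCV and Keras) is a minimal, hypothetical rendering of the idea described in the abstract: the function names, the Canny/Hough thresholds, the slope filter, the 64x64 input size, and the CNN architecture are all assumptions made for illustration, not the configuration reported in the paper.

```python
import cv2
import numpy as np
from tensorflow.keras import layers, models

def extract_lane_roi(frame_bgr):
    """Stage 1 (hypothetical): crop the road-lane region via Hough lines.

    All thresholds here are illustrative, not the paper's values.
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 50,
                            minLineLength=40, maxLineGap=20)
    if lines is None:
        return None  # no candidate lane boundaries in this frame
    # Keep clearly sloped segments (lane boundaries rather than the horizon).
    pts = []
    for x1, y1, x2, y2 in lines[:, 0]:
        if x1 != x2 and abs((y2 - y1) / (x2 - x1)) > 0.3:
            pts.extend([(x1, y1), (x2, y2)])
    if not pts:
        return None
    # Mask the convex hull of the retained segments and crop its bounding
    # box, discarding nonessential parts of the image before classification.
    hull = cv2.convexHull(np.array(pts, dtype=np.int32))
    mask = np.zeros_like(gray)
    cv2.fillConvexPoly(mask, hull, 255)
    lane = cv2.bitwise_and(gray, gray, mask=mask)
    x, y, w, h = cv2.boundingRect(hull)
    return cv2.resize(lane[y:y + h, x:x + w], (64, 64))

def build_srm_classifier(num_classes):
    """Stage 2 (hypothetical): a small DCNN over 64x64 grayscale ROI crops."""
    return models.Sequential([
        layers.Input(shape=(64, 64, 1)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
```

In a setup like this, extract_lane_roi would run per frame and its output would be batched into the classifier; the FPS gap the abstract reports between ROI-scaled and raw images then reflects the extra Hough and masking work in the ROI path.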
License
Copyright (c) 2022 EAI Endorsed Transactions on Industrial Networks and Intelligent Systems
This work is licensed under a Creative Commons Attribution 3.0 Unported License (CC BY 3.0). This is an open-access article distributed under the terms of that license, which permits unlimited use, distribution, and reproduction in any medium, provided the original work is properly cited.