Automated Damage Inspection in Vehicle Headlights Using U-Net and Resnet50

  • Kevila Cezario de Morais IFES
  • Karin Satie Komati IFES
  • Kelly Assis de Souza Gazolli IFES

Abstract

Image analysis of vehicle damage is a procedure performed by insurance companies to determine whether a policy covers the repair. In the case of headlight damage, the company receives a picture of the vehicle and a specialist analyzes the damage. This article proposes a system for detecting and classifying damage in vehicle headlight images in order to automate this inspection. The method first applies the U-Net architecture to detect the headlight in the image and then the Resnet50 architecture to classify the damage. The U-Net dataset consists of 2,000 vehicle images and 2,000 corresponding headlight masks. The Resnet50 dataset consists of 2,000 images divided into four classes: broken, blurred, infiltrated, or undamaged. On the test set, the method achieved an IoU of 70% in detection and an accuracy of 76% in classification.
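The two reported metrics — IoU for the segmentation stage and accuracy for the classification stage — can be computed as in the minimal NumPy sketch below. The function names and toy masks are illustrative assumptions, not the authors' code.

```python
import numpy as np

def iou(pred_mask, true_mask):
    """Intersection over Union between two binary masks."""
    pred = np.asarray(pred_mask).astype(bool)
    true = np.asarray(true_mask).astype(bool)
    inter = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    return inter / union if union > 0 else 1.0

def accuracy(pred_labels, true_labels):
    """Fraction of correctly classified samples."""
    pred = np.asarray(pred_labels)
    true = np.asarray(true_labels)
    return (pred == true).mean()

# Toy example: two 2x2 squares of 'on' pixels overlapping in 1 pixel.
p = np.zeros((4, 4), dtype=np.uint8); p[0:2, 0:2] = 1
t = np.zeros((4, 4), dtype=np.uint8); t[1:3, 1:3] = 1
print(iou(p, t))                      # intersection 1 px, union 7 px -> 1/7
print(accuracy([0, 1, 2, 3], [0, 1, 2, 0]))  # 3 of 4 correct -> 0.75
```

In practice the IoU would be averaged over the test masks produced by the U-Net, and the accuracy computed over the four-class Resnet50 predictions.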

Keywords: insurance, headlight, detection, classification, Resnet50, U-Net

References

BRASIL, “Lei nº 9.503, de 23 de setembro de 1997 (compilado): institui o Código de Trânsito Brasileiro,” 2023.

S. Indolia, A. K. Goswami, S. P. Mishra, and P. Asopa, “Conceptual understanding of convolutional neural network – a deep learning approach,” Procedia Computer Science, vol. 132, pp. 679–688, 2018.

K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” in International Conference on Learning Representations (ICLR), 2015.

D. Dais, I. Bal, E. Smyrou, and V. Sarhosis, “Automatic crack classification and segmentation on masonry surfaces using convolutional neural networks and transfer learning,” Automation in Construction, vol. 125, 2021.

S. Singh and S. Prasad, “Techniques and challenges of face recognition: A critical review,” Procedia Computer Science, vol. 143, pp. 536–543, 2018.

A. Pathak, M. Pandey, and S. Rautaray, “Application of deep learning for object detection,” Procedia Computer Science, vol. 132, pp. 1706–1717, 2018.

O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18, pp. 234–241, Springer, 2015.

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), (Las Vegas, NV, USA), pp. 770–778, 2016.

J. Chai, H. Zeng, A. Li, and E. Ngai, “Deep learning in computer vision: A critical review of emerging techniques and application scenarios,” Machine Learning with Applications, vol. 6, 2021.

A. Karn, “Artificial intelligence in computer vision,” International Journal of Engineering Applied Sciences and Technology, vol. 6, pp. 249–254, 2021.

J. Wu, B. Peng, Z. Huang, and J. Xie, “Research on computer vision-based object detection and classification,” in Computer and Computing Technologies in Agriculture VI (D. Li and Y. Chen, eds.), (Berlin, Heidelberg), pp. 183–188, Springer Berlin Heidelberg, 2013.

L. Lu, “Improved yolov8 detection algorithm in security inspection image,” 2023.

A. Naumann, F. Hertlein, L. Dörr, S. Thoma, and K. Furmans, “Literature review: Computer vision applications in transportation logistics and warehousing,” 2023.

X. Xie, F. Shi, J. Niu, and X. Tang, “Breast ultrasound image classification and segmentation using convolutional neural networks,” in Advances in Multimedia Information Processing–PCM 2018: 19th Pacific-Rim Conference on Multimedia, Hefei, China, September 21-22, 2018, Proceedings, Part III 19, pp. 200–211, Springer, 2018.

V. Kumar, H. Arora, J. Sisodia, et al., “Resnet-based approach for detection and classification of plant leaf diseases,” in 2020 international conference on electronics and sustainable communication systems (ICESC), pp. 495–502, IEEE, 2020.

E. C. Tetila, B. B. Machado, G. Astolfi, N. A. de Souza Belete, W. P. Amorim, A. R. Roel, and H. Pistori, “Detection and classification of soybean pests using deep learning with uav images,” Computers and Electronics in Agriculture, vol. 179, p. 105836, 2020.

K. Chen, G. Reichard, X. Xu, and A. Akanmu, “Automated crack segmentation in close-range building façade inspection images using deep learning techniques,” Journal of Building Engineering, vol. 43, p. 102913, 2021.

T. U. Ahmed, M. S. Hossain, M. J. Alam, and K. Andersson, “An integrated cnn-rnn framework to assess road crack,” in 2019 22nd International Conference on Computer and Information Technology (ICCIT), pp. 1–6, IEEE, 2019.

R. C. Staudemeyer and E. R. Morris, “Understanding lstm – a tutorial into long short-term memory recurrent neural networks,” 2019.

Z. Qu, J. Mei, L. Liu, and D.-Y. Zhou, “Crack detection of concrete pavement with cross-entropy loss function and improved vgg16 network model,” IEEE Access, vol. 8, pp. 54564–54573, 2020.

Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.

D. Jurado-Rodríguez, J. M. Jurado, L. Pádua, A. Neto, R. Munoz-Salinas, and J. J. Sousa, “Semantic segmentation of 3d car parts using uav-based images,” Computers & Graphics, vol. 107, pp. 93–103, 2022.

C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the inception architecture for computer vision,” 2015.

H. Jung, M.-K. Choi, J. Jung, J.-H. Lee, S. Kwon, and W. Young Jung, “Resnet-based vehicle classification and localization in traffic surveillance systems,” in Proceedings of the IEEE conference on computer vision and pattern recognition workshops (CVPRW), (Honolulu, HI, USA), pp. 61–67, 2017.

R. Watkins, N. Pears, and S. Manandhar, “Vehicle classification using resnets, localisation and spatially-weighted pooling,” arXiv preprint arXiv:1810.10329, 2018.

J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” 2015.

R. Galina, T. Melo, and K. Komati, “Pavement crack segmentation using a u-net based neural network,” in Anais do XVII Workshop de Visão Computacional, (Porto Alegre, RS, Brasil), pp. 76–81, SBC, 2021.

E. Coltri, G. Costa, K. Silva, P. Martim, and L. Bergamasco, “Automatic segmentation and roi detection in cardiac mri of cardiomyopathy using q-sigmoid as preprocessing step,” in Anais do XVII Workshop de Visão Computacional, (Porto Alegre, RS, Brasil), pp. 143–147, SBC, 2021.

B. C. Russell, A. Torralba, K. P. Murphy, and W. T. Freeman, “Labelme: a database and web-based tool for image annotation,” International journal of computer vision, vol. 77, pp. 157–173, 2008.

S. Jadon, “A survey of loss functions for semantic segmentation,” in 2020 IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology (CIBCB), IEEE, 2020.

M. A. Rahman and Y. Wang, “Optimizing intersection-over-union in deep neural networks for image segmentation,” in Advances in Visual Computing (ISVC 2016), vol. 10072 of Lecture Notes in Computer Science, pp. 234–244, Springer, 2016.
Published
13/11/2023
MORAIS, Kevila Cezario de; KOMATI, Karin Satie; GAZOLLI, Kelly Assis de Souza. Automated Damage Inspection in Vehicle Headlights Using U-Net and Resnet50. In: WORKSHOP DE VISÃO COMPUTACIONAL (WVC), 18. , 2023, São Bernardo do Campo/SP. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2023 . p. 18-23. DOI: https://doi.org/10.5753/wvc.2023.27526.
