Quantifying the impact of image degradation on Deep Learning models in face recognition systems

  • Leandro Dias Carneiro Instituto de Criminalística da Polícia Civil do Distrito Federal
  • Flavio de Barros Vidal Universidade de Brasília


Computer vision, and facial recognition in particular, has advanced significantly in recent years. However, it is essential to understand how these systems perform under real-world conditions, specifically when confronted with degraded images. This paper presents a comprehensive analysis of the impact of image degradation on facial recognition systems based on deep neural networks. The study evaluates three facial detection algorithms and eight facial recognition algorithms, with experiments conducted on four diverse datasets. A total of 14 types of image degradation, encompassing pure and mixed variations, were applied at six different intensity levels, and three distinct types of image pairs were generated to cover a range of scenarios. The primary objective of this research is to improve the understanding and assessment of facial recognition system outcomes, thereby strengthening the overall analysis of these systems. On average, the models experienced a minimum performance impact of 17% and a maximum of 43% across the datasets used in the experiments.
Keywords: face recognition, image face quality, face degradation
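The protocol summarized above, applying each degradation at several intensity levels before running verification, can be sketched as follows. This is an illustrative assumption-laden sketch, not the paper's exact procedure: the two degradation functions, the six level values, and the synthetic "face crop" are all placeholders chosen for demonstration.

```python
import numpy as np

def add_gaussian_noise(img, sigma):
    """Additive Gaussian noise; sigma is on the 0-255 intensity scale."""
    noisy = img.astype(np.float64) + np.random.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def reduce_resolution(img, factor):
    """Downsample by an integer factor, then upsample back (blocky low-res)."""
    small = img[::factor, ::factor]
    up = np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)
    return up[: img.shape[0], : img.shape[1]]

# Hypothetical six intensity levels per degradation type (placeholder values;
# the paper uses 14 degradation types, only two are sketched here).
levels = {
    "gaussian_noise": [5, 10, 20, 40, 60, 80],
    "low_resolution": [2, 3, 4, 6, 8, 12],
}

# Stand-in for an aligned face crop from one of the evaluation datasets.
rng = np.random.default_rng(0)
face = rng.integers(0, 256, size=(96, 96), dtype=np.uint8)

# Build every (degradation, level) variant, as the experiments do per image.
degraded = {
    (name, lvl): (
        add_gaussian_noise(face, lvl)
        if name == "gaussian_noise"
        else reduce_resolution(face, lvl)
    )
    for name, lvls in levels.items()
    for lvl in lvls
}
print(len(degraded))  # 2 degradation types x 6 levels = 12 variants
```

Each degraded variant would then be paired with a reference image and scored by the detection and recognition models under test, so that the accuracy drop can be measured per degradation type and intensity.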


CARNEIRO, Leandro Dias; VIDAL, Flavio de Barros. Quantifying the impact of image degradation on Deep Learning models in face recognition systems. In: ENCONTRO NACIONAL DE INTELIGÊNCIA ARTIFICIAL E COMPUTACIONAL (ENIAC), 20., 2023, Belo Horizonte/MG. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2023. p. 212-226. ISSN 2763-9061. DOI: https://doi.org/10.5753/eniac.2023.233907.