Beyond Performance: A Study of the Reliability of Deepfake Detectors
Abstract
Deepfakes are synthetic media generated by artificial intelligence, with positive applications in education and creativity, but also with severe negative impacts such as fraud, disinformation, and privacy violations. Despite advances in detection techniques, there is still a shortage of comprehensive evaluation methods that consider aspects beyond classification performance. This work proposes a reliability evaluation framework based on four pillars: transferability, robustness, interpretability, and computational efficiency. The analysis of five state-of-the-art methods revealed significant progress, but also critical limitations.
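To make the four pillars concrete, the sketch below outlines one possible shape for such a reliability-evaluation harness. It is an illustrative assumption only: the paper's actual protocol, metrics, and implementation are not reproduced in this record, and every name here (ReliabilityReport, evaluate_reliability, and the use of cross-dataset accuracy, perturbed-input accuracy, an externally supplied interpretability score, and per-sample latency as pillar proxies) is hypothetical.

# Illustrative sketch only (not the paper's implementation). All names below are
# hypothetical; they simply mirror the four pillars named in the abstract:
# transferability, robustness, interpretability, and computational efficiency.
from dataclasses import dataclass
from typing import Callable, Iterable, Tuple
import time

Sample = Tuple[object, int]          # (image, label) pair; 0 = real, 1 = fake
Detector = Callable[[object], int]   # maps an image to a predicted label


@dataclass
class ReliabilityReport:
    transferability: float   # accuracy on data from generators unseen in training
    robustness: float        # accuracy on perturbed inputs (e.g., compression, noise)
    interpretability: float  # score supplied by a separate explanation analysis
    efficiency_ms: float     # mean inference latency per sample, in milliseconds


def accuracy(detector: Detector, data: Iterable[Sample]) -> float:
    data = list(data)
    correct = sum(detector(x) == y for x, y in data)
    return correct / max(len(data), 1)


def evaluate_reliability(detector: Detector,
                         cross_dataset: Iterable[Sample],
                         perturbed: Iterable[Sample],
                         interpretability_score: float) -> ReliabilityReport:
    perturbed = list(perturbed)
    start = time.perf_counter()
    robustness = accuracy(detector, perturbed)
    elapsed_ms = 1000.0 * (time.perf_counter() - start) / max(len(perturbed), 1)
    return ReliabilityReport(
        transferability=accuracy(detector, cross_dataset),
        robustness=robustness,
        interpretability=interpretability_score,
        efficiency_ms=elapsed_ms,
    )

In practice, each pillar would be backed by the concrete tests defined in the paper (e.g., cross-generator datasets for transferability, compression or adversarial perturbations for robustness, Grad-CAM-style inspection for interpretability), which this sketch does not attempt to specify.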
Published
September 1, 2025
How to Cite
LOPES, Lucas; LAROCA, Rayson; GRÉGIO, André. Além do Desempenho: Um Estudo da Confiabilidade de Detectores de Deepfakes. In: SIMPÓSIO BRASILEIRO DE CIBERSEGURANÇA (SBSEG), 25., 2025, Foz do Iguaçu/PR. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2025. p. 66-82. DOI: https://doi.org/10.5753/sbseg.2025.11431.
