Application of Open-Source Tool for Validating Integrity of Digital Content to Combat Malicious Deepfakes
Abstract
This paper addresses the growing problem of malicious deepfakes and proposes the use of open-source tools, such as C2PA, to verify the authenticity of digital content. It describes a proposed REST API that simplifies adding and validating integrity information. The goal is to provide a transparent solution for signing and validating content to combat disinformation, with the potential for integration into decentralized social networks such as the AT Protocol, for which a proof of concept will be built to reduce the spread of false information.
References
Amazon Web Services (2025). Amazon Web Services (AWS) Documentation. Accessed: 3 Aug. 2025.
C2PA Consortium (2024). C2PA Technical Specification. [link]. Accessed: 15 Mar. 2025.
de Jesus, T. O. B. (2025a). C2PA API. [link]. Accessed: 30 Jul. 2025.
de Jesus, T. O. B. (2025b). Statusphere C2PA. [link]. Accessed: 1 Aug. 2025.
Farooq, M. U., Javed, A., Malik, K. M., and Raza, M. A. (2025). A lightweight and interpretable deepfakes detection framework.
Fielding, R. T. (2000). Architectural Styles and the Design of Network-based Software Architectures. PhD thesis, University of California, Irvine. Accessed: 26 Jan. 2025.
Heiding, F., Schneier, B., Vishwanath, A., Bernstein, J., and Park, P. S. (2023). Devising and detecting phishing: Large language models vs. smaller human models.
Hwang, Y., Ryu, J. Y., and Jeong, S.-H. (2021). Effects of disinformation using deepfake: The protective effect of media literacy education. Cyberpsychology, Behavior, and Social Networking, 24(3):188–193.
Kleppmann, M., Frazee, P., Gold, J., Graber, J., Holmgren, D., Ivy, D., Johnson, J., Newbold, B., and Volpert, J. (2024). Bluesky and the at protocol: Usable decentralized social media. In Proceedings of the ACM Conext-2024 Workshop on the Decentralization of the Internet (DIN ’24), pages 1–9. ACM.
Lynch, C. A. (1994). The integrity of digital information: Mechanics and definitional issues. Journal of the American Society for Information Science, 45(10):737–744.
Mezaris, V. (2018). InVID Verification Project. [link]. Accessed: 15 Mar. 2025.
Rashid, M. M., Lee, S.-H., and Kwon, K.-R. (2021). Blockchain technology for combating deepfake and protect video/image integrity. Journal of Korea Multimedia Society, 24:1044–1058.
Sablayrolles, A., Douze, M., Schmid, C., and Jégou, H. (2020). Radioactive data: tracing through training.
Yelavich, B. M. (1985). Customer information control system—evolving system facility. IBM Systems Journal, 24(3.4):264–278.
Yu, N., Skripniuk, V., Abdelnabi, S., and Fritz, M. (2021). Artificial fingerprinting for generative models: Rooting deepfake attribution in training data. In ICCV.
Zao-Sanders, M. (2024). How people are really using GenAI. Harvard Business Review.
Published
2025-09-01
How to Cite
JESUS, Thiago Oliveira Bispo de; SANTI, Juliana de; PIGATTO, Daniel Fernando. Application of Open-Source Tool for Validating Integrity of Digital Content to Combat Malicious Deepfakes. In: BRAZILIAN SYMPOSIUM ON CYBERSECURITY (SBSEG), 25., 2025, Foz do Iguaçu/PR. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2025. p. 971-978. DOI: https://doi.org/10.5753/sbseg.2025.9799.
