Evaluating Differential Privacy Applied to Federated Learning Through Model Inversion Attacks
Abstract
This work proposes a practical methodology for evaluating the effectiveness of Differential Privacy (DP) mechanisms in Federated Learning (FL) against model inversion attacks. We adopt an adversarial scenario based on the red-team/blue-team (RT/BT) paradigm from cybersecurity, in which the blue team deploys the DP protection and the red team launches attacks to recover a specific client's data from the global model. Two types of attack were implemented, one gradient-based and one naive, each applied under different intensities of Gaussian noise. The experiments show that even low noise levels are sufficient to significantly mitigate inversion attacks, as evidenced both by quantitative metrics (SSIM, PSNR, and MSE) and by visual inspection of the reconstructed images. Conversely, the accuracy of the global model remains stable up to moderate noise levels, with limited impact on performance. These results indicate that a good balance between privacy and model utility is achievable in practical scenarios.
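To make the experimental setup concrete, below is a minimal sketch of both sides of the RT/BT scenario in PyTorch: the blue team clips each client update and adds calibrated Gaussian noise (the Gaussian mechanism), while the red team runs a gradient-based inversion that optimizes a dummy input until its gradients match the observed ones. Function names, hyperparameters (clip_norm, sigma, steps), and the soft-label trick are illustrative assumptions, not the authors' exact implementation.

```python
# Hedged sketch of the experiment's two sides; names and defaults are
# illustrative assumptions, not the paper's implementation.
import torch
import torch.nn.functional as F

def privatize_update(update, clip_norm=1.0, sigma=0.1):
    """Blue team: clip a client update to L2 norm clip_norm, then add
    Gaussian noise with standard deviation sigma * clip_norm."""
    flat = torch.cat([p.flatten() for p in update])
    scale = min(1.0, clip_norm / (flat.norm().item() + 1e-12))
    return [p * scale + torch.randn_like(p) * sigma * clip_norm for p in update]

def invert_gradients(model, target_grads, input_shape, n_classes=10,
                     steps=300, lr=0.1):
    """Red team (gradient-based attack): optimize a dummy input and soft
    label so the gradients they induce match the observed (noisy) ones."""
    x = torch.randn(1, *input_shape, requires_grad=True)
    y = torch.randn(1, n_classes, requires_grad=True)
    opt = torch.optim.Adam([x, y], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(model(x), y.softmax(dim=-1))
        grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        # Minimize the squared distance between induced and observed gradients.
        sum(((g - t) ** 2).sum() for g, t in zip(grads, target_grads)).backward()
        opt.step()
    return x.detach()

def mse_psnr(recon, original, max_val=1.0):
    """Two of the paper's reconstruction metrics (SSIM omitted for brevity)."""
    mse = torch.mean((recon - original) ** 2)
    return mse.item(), (10 * torch.log10(max_val ** 2 / mse)).item()
```

Under these assumptions, sweeping sigma and plotting the reconstruction metrics against the global model's test accuracy reproduces the privacy-utility trade-off the abstract describes: attack quality degrades quickly with noise while accuracy degrades slowly.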
Published
01/09/2025
How to Cite
MANHÃES, Breno V.; FERREIRA, João Pedro M. F.; THOMAZ, Guilherme A.; CAMPISTA, Miguel Elias M. Avaliação da Privacidade Diferencial Aplicada ao Aprendizado Federado Através de Ataques de Inversão de Modelo. In: SIMPÓSIO BRASILEIRO DE CIBERSEGURANÇA (SBSEG), 25., 2025, Foz do Iguaçu/PR. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2025. p. 211-225. DOI: https://doi.org/10.5753/sbseg.2025.11428.
