Federated Learning under Attack: Improving Gradient Inversion for Batch of Images
Abstract
Federated Learning (FL) has emerged as a machine learning approach able to preserve the privacy of users' data. In FL, clients train machine learning models on their local datasets and a central server aggregates the learned parameters coming from the clients, producing a global machine learning model without sharing users' data. However, the state of the art shows several approaches that enable attacks on FL systems. For instance, gradient inversion (or gradient leakage) attacks can recover, with high precision, the local dataset used during the FL training phase. This paper presents an approach, called Deep Leakage from Gradients with Feedback Blending (DLG-FB), which improves gradient inversion attacks by exploiting the spatial correlation that typically exists in batches of images. The performed evaluation shows improvements of 19.18% in attack success rate and 48.82% in the number of iterations per attacked image.
References
Geiping, J., Bauermeister, H., Dröge, H., and Moeller, M. (2020). Inverting gradients - how easy is it to break privacy in federated learning? Advances in neural information processing systems, 33:16937–16947.
He, X., Peng, C., Tan, W., and Tan, Y.-a. (2023). Fast and accurate deep leakage from gradients based on wasserstein distance. Int. J. Intell. Syst., 2023.
Huang, P., Li, D., and Yan, Z. (2023). Wireless federated learning with asynchronous and quantized updates. IEEE Communications Letters, 27(9):2393–2397.
Mothukuri, V., Parizi, R. M., Pouriyeh, S., Huang, Y., Dehghantanha, A., and Srivastava, G. (2021). A survey on security and privacy of federated learning. Future Generation Computer Systems, 115:619–640.
Phong, L. T., Aono, Y., Hayashi, T., Wang, L., and Moriai, S. (2017). Privacy-preserving deep learning: Revisited and enhanced. In Applications and Techniques in Information Security: 8th International Conference, ATIS 2017, Auckland, New Zealand, July 6–7, 2017, Proceedings, pages 100–110. Springer.
Sannai, A. (2018). Reconstruction of training samples from loss functions. arXiv preprint arXiv:1805.07337.
Zhao, B., Mopuri, K. R., and Bilen, H. (2020). iDLG: Improved deep leakage from gradients. arXiv preprint arXiv:2001.02610.
Zhu, L., Liu, Z., and Han, S. (2019). Deep leakage from gradients. Advances in neural information processing systems, 32.
Published
16/09/2024
How to Cite
LEITE, Luiz; SANTO, Yuri; DALMAZO, Bruno L.; RIKER, André. Federated Learning under Attack: Improving Gradient Inversion for Batch of Images. In: SIMPÓSIO BRASILEIRO DE SEGURANÇA DA INFORMAÇÃO E DE SISTEMAS COMPUTACIONAIS (SBSEG), 24., 2024, São José dos Campos/SP. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2024. p. 794-800. DOI: https://doi.org/10.5753/sbseg.2024.241680.