A Triad of Defenses to Mitigate Poisoning Attacks in Federated Learning

  • Blenda Oliveira Mazetto (UEL)
  • Bruno Bogaz Zarpelão (UEL)

Abstract

Federated learning (FL) enables the training of machine learning models on decentralized data, potentially improving data privacy. However, FL's distributed architecture is vulnerable to poisoning attacks. In this paper, we propose an FL method that mitigates these attacks through a triad of defense strategies: organizing clients into groups, checking the local performance of the global models during training, and using a voting scheme during the inference phase. The proposed approach first divides the clients into randomly sampled groups, with each group producing its own global model. Each client then receives all the global models and selects the one with the best predictive performance on its local data to continue training. The selected global models are updated by the clients and submitted again to the central server, which aggregates them. During the inference phase, each client classifies its inputs through majority voting among the global models. Our experiments on the HAR and MNIST datasets show that our method can effectively mitigate poisoning attacks without compromising the global models' predictive performance.
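The sketch below (Python with NumPy) illustrates how the three defenses could fit together. It is a minimal illustration under assumed simplifications, not the authors' implementation: models are plain weight vectors, and names such as fedavg, assign_groups, local_accuracy, and predict are hypothetical helpers.

import numpy as np

rng = np.random.default_rng(42)

def fedavg(updates):
    # Server-side aggregation: average the clients' weight vectors (FedAvg).
    return np.mean(updates, axis=0)

def assign_groups(client_ids, num_groups):
    # Defense 1: randomly partition clients into groups; each group
    # trains its own global model.
    return np.array_split(rng.permutation(client_ids), num_groups)

def select_best_model(global_models, local_accuracy):
    # Defense 2: a client scores every group's global model on its own
    # local data and keeps the best one for the next training round.
    scores = [local_accuracy(m) for m in global_models]
    return global_models[int(np.argmax(scores))]

def majority_vote(global_models, predict, x):
    # Defense 3: at inference time, a client classifies an input by
    # majority vote among all the global models.
    votes = [predict(m, x) for m in global_models]
    labels, counts = np.unique(votes, return_counts=True)
    return labels[np.argmax(counts)]

# Toy round: 10 clients split into 3 groups, dummy 5-dimensional models.
clients = np.arange(10)
groups = assign_groups(clients, num_groups=3)
global_models = [fedavg(rng.normal(size=(len(g), 5))) for g in groups]
best = select_best_model(global_models, local_accuracy=lambda m: float(m.sum()))
label = majority_vote(global_models, predict=lambda m, x: int(m @ x > 0),
                      x=rng.normal(size=5))

The intuition behind the triad is redundancy: a model poisoned within one group tends to score poorly on honest clients' local data during selection and to be outvoted by the other groups' models at inference time.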

Published
16/09/2024
MAZETTO, Blenda Oliveira; ZARPELÃO, Bruno Bogaz. A Triad of Defenses to Mitigate Poisoning Attacks in Federated Learning. In: SIMPÓSIO BRASILEIRO DE SEGURANÇA DA INFORMAÇÃO E DE SISTEMAS COMPUTACIONAIS (SBSEG), 24., 2024, São José dos Campos/SP. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2024. p. 1-15. DOI: https://doi.org/10.5753/sbseg.2024.241712.
