
Bayes and Laplace Versus the World: A New Label Attack Approach in Federated Environments Based on Bayesian Neural Networks

  • Conference paper
Intelligent Systems (BRACIS 2023)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14195)

Included in the following conference series: Brazilian Conference on Intelligent Systems (BRACIS)

Abstract

Federated Learning (FL) is a decentralized machine learning approach designed to keep training data on personal devices, preserving data privacy. However, the distributed nature of FL environments makes defending against malicious attacks challenging. This work proposes a new label-poisoning attack for federated environments based on Bayesian neural networks. The hypothesis is that a label-poisoning attack model trained with the marginal likelihood loss yields a less complex poisoned model, making the attack harder to detect. We present experimental results demonstrating the effectiveness of the proposed approach at generating poisoned models in federated environments. Additionally, we analyze the performance of various defense mechanisms against different attacks, evaluating accuracy, precision, recall, and F1-score. The results show that the proposed attack is harder to counter with existing defenses against label poisoning in FL, degrading accuracy by 18.48% relative to the setting without malicious clients.
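The core idea the abstract describes — a malicious client that flips labels locally but trains against (an approximation of) the marginal likelihood rather than the plain cross-entropy, so the poisoned model stays low-complexity and harder for defenses to flag — can be illustrated with a short sketch. The code below is not the authors' implementation (the full paper is behind the access wall above); it is a minimal, hypothetical PyTorch illustration. The function names (`flip_labels`, `neg_log_marginal_likelihood`), the `prior_prec` parameter, and the diagonal empirical-Fisher Laplace approximation of the Occam penalty are all assumptions made for the example.

```python
# Hypothetical sketch, NOT the paper's code: a label-flipping FL client whose
# local objective approximates a negative log marginal likelihood via a
# diagonal Laplace approximation (NLL + Gaussian log-prior + Occam penalty).
import torch
import torch.nn as nn
import torch.nn.functional as F

def flip_labels(y: torch.Tensor, num_classes: int = 10) -> torch.Tensor:
    # Shift every label to the next class -- one common label-poisoning scheme.
    return (y + 1) % num_classes

def neg_log_marginal_likelihood(model: nn.Module, x, y, prior_prec: float = 1.0):
    params = [p for p in model.parameters() if p.requires_grad]
    # Data fit: NLL of the (poisoned) labels under the current weights.
    nll = F.cross_entropy(model(x), y, reduction="sum")
    # Gaussian log-prior term (isotropic, precision prior_prec).
    log_prior = 0.5 * prior_prec * sum((p ** 2).sum() for p in params)
    # Occam penalty 0.5 * log det(H + prior_prec * I), with the Hessian H
    # crudely replaced by a diagonal empirical Fisher (squared gradients);
    # create_graph=True makes the penalty itself differentiable in backward().
    grads = torch.autograd.grad(nll, params, create_graph=True)
    occam = 0.5 * sum(torch.log(g ** 2 + prior_prec).sum() for g in grads)
    return nll + log_prior + occam

# Toy local round on random data standing in for one client's shard.
model = nn.Sequential(nn.Linear(20, 64), nn.Tanh(), nn.Linear(64, 10))
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
x = torch.randn(32, 20)
y = torch.randint(0, 10, (32,))
loss = neg_log_marginal_likelihood(model, x, flip_labels(y))
opt.zero_grad()
loss.backward()
opt.step()
```

Minimizing this objective instead of the raw cross-entropy penalizes sharp, high-curvature fits to the flipped labels, which matches the abstract's intuition that the marginal likelihood yields a less complex (and therefore less conspicuous) poisoned update; the diagonal-Fisher shortcut here is only a stand-in for the scalable Laplace estimators used in practice.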



Author information

Corresponding author

Correspondence to Pedro H. Barros.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Barros, P.H., Murai, F., Ramos, H.S. (2023). Bayes and Laplace Versus the World: A New Label Attack Approach in Federated Environments Based on Bayesian Neural Networks. In: Naldi, M.C., Bianchi, R.A.C. (eds) Intelligent Systems. BRACIS 2023. Lecture Notes in Computer Science, vol. 14195. Springer, Cham. https://doi.org/10.1007/978-3-031-45368-7_29

  • DOI: https://doi.org/10.1007/978-3-031-45368-7_29

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-45367-0

  • Online ISBN: 978-3-031-45368-7

  • eBook Packages: Computer Science, Computer Science (R0)
