Less is More: Evaluating the Impact of Model Compression on Federated Learning Efficiency
Abstract
The growing number of devices connected to the Internet has generated a large volume of data, increasingly driving the use of Artificial Intelligence (AI). However, centralized learning approaches raise concerns regarding user privacy. Federated Learning (FL) emerges as a distributed alternative, as it avoids sharing users’ raw data and complies with privacy regulations. Nevertheless, frequent communication between devices and the server in FL leads to high bandwidth and energy consumption. In networks with limited resources, solutions to mitigate this problem are crucial. In this context, we investigate different model compression techniques for reducing FL traffic, aiming to balance model quality and communication cost. The results indicate that compression techniques effectively reduce the volume of transmitted data without degrading model performance, even in scenarios involving imbalanced data.
Keywords:
Federated Learning, Model Compression, Efficient Communication
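The abstract refers to compressing model updates to cut the traffic exchanged between clients and the server. The paper's own implementation is not reproduced here; the sketch below only illustrates one common compression approach in this setting, top-k sparsification of a client update before upload. The function names and the 10% keep ratio are illustrative assumptions, not details reported by the authors.

```python
import numpy as np

def sparsify_update(update: np.ndarray, keep_ratio: float = 0.1):
    """Top-k sparsification: keep only the largest-magnitude entries.

    Returns the indices, values, and original shape to transmit; the
    server reconstructs a sparse update from them. The keep_ratio is
    an illustrative choice, not a value from the paper.
    """
    flat = update.ravel()
    k = max(1, int(keep_ratio * flat.size))
    # Indices of the k largest-magnitude entries of the update.
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx], update.shape

def reconstruct_update(idx, values, shape):
    """Server-side reconstruction of the sparse client update."""
    flat = np.zeros(int(np.prod(shape)), dtype=values.dtype)
    flat[idx] = values
    return flat.reshape(shape)

# Example: a client compresses its local update before upload.
rng = np.random.default_rng(0)
local_update = rng.normal(size=(256, 64)).astype(np.float32)
idx, vals, shape = sparsify_update(local_update, keep_ratio=0.1)
print(f"transmitted values: {vals.size} of {local_update.size}")
restored = reconstruct_update(idx, vals, shape)
```

The communication saving comes from sending only the retained indices and values instead of the full update; how aggressively one can compress without degrading model quality, especially under imbalanced data, is the trade-off the paper evaluates.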
Published
2025-05-19
How to Cite
LIBARDI, Guilherme M. A.; KIMURA, Bruno Y. L.; DA COSTA, Joahannes B. D. Less is More: Evaluating the Impact of Model Compression on Federated Learning Efficiency. In: WORKSHOP ON SCIENTIFIC INITIATION AND GRADUATION - BRAZILIAN SYMPOSIUM ON COMPUTER NETWORKS AND DISTRIBUTED SYSTEMS (SBRC), 43., 2025, Natal/RN. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2025. p. 290-297. ISSN 2177-9384. DOI: https://doi.org/10.5753/sbrc_estendido.2025.8001.
