An Efficient Method for Model Compression for Federated Learning Scenarios

Abstract

Traditional Machine Learning (ML) relies on centralized systems, requiring large amounts of sensitive data to be gathered in one place. In contrast, edge processing minimizes data transmission by performing computations locally, improving efficiency. In this context, Federated Learning (FL) is a decentralized machine learning approach that allows clients to train models collaboratively while keeping their data private. However, FL faces challenges such as high communication costs, caused by the large number of model updates exchanged between clients and the central server. In this paper, we introduce an efficient model compression method for FL scenarios called HUFE-FL, which first applies quantization to transform model parameters into discrete values, reducing their size, and then applies Huffman encoding, which further compresses the model by encoding frequent symbols with fewer bits. Our main results show that HUFE-FL achieves up to a 77.48% reduction in data transmission compared to traditional FL methods, with minimal loss in accuracy.
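
The abstract describes the HUFE-FL pipeline only at a high level: quantize the model parameters into discrete symbols, then Huffman-encode the symbol stream so that frequent symbols cost fewer bits. As a rough illustration of that two-stage idea (a sketch under our own assumptions, not the authors' implementation), the Python snippet below uniformly quantizes a toy client update into 256 levels and builds a Huffman codebook over the resulting symbols; the function names, level count, and random update are all hypothetical.

import heapq
import itertools
from collections import Counter

import numpy as np


def quantize(weights, num_levels=256):
    # Uniform quantization: map each float weight onto one of num_levels integer bins.
    w_min, w_max = float(weights.min()), float(weights.max())
    step = (w_max - w_min) / (num_levels - 1)
    symbols = np.round((weights - w_min) / step).astype(np.int32)
    return symbols, w_min, step  # w_min and step let the server dequantize


def huffman_codebook(symbols):
    # Build a prefix code assigning shorter bit strings to more frequent symbols.
    freq = Counter(symbols.tolist())
    tie = itertools.count()  # breaks frequency ties so dicts are never compared
    heap = [(f, next(tie), {s: ""}) for s, f in freq.items()]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate case: a single distinct symbol still needs one bit
        return {s: "0" for s in freq}
    while len(heap) > 1:
        f0, _, c0 = heapq.heappop(heap)  # two least frequent subtrees
        f1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c0.items()}
        merged.update({s: "1" + c for s, c in c1.items()})
        heapq.heappush(heap, (f0 + f1, next(tie), merged))
    return heap[0][2]


# Toy client update: quantize, then Huffman-encode before transmission.
update = np.random.randn(10_000).astype(np.float32)
symbols, w_min, step = quantize(update)
codebook = huffman_codebook(symbols)
compressed_bits = sum(len(codebook[s]) for s in symbols.tolist())
print(f"raw: {update.nbytes} bytes, compressed: ~{compressed_bits // 8} bytes")

On the server side, the transmitted w_min and step suffice to invert the quantization; the achievable saving depends on how concentrated the quantized symbol distribution is, which is precisely the redundancy Huffman coding exploits.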

Keywords: Federated Learning, Compression, Transmission

Published
19/05/2025
MORAIS, Renan; VEIGA, Rafael; BASTOS, Lucas; ROSÁRIO, Denis; CERQUEIRA, Eduardo. An Efficient Method for Model Compression for Federated Learning Scenarios. In: SIMPÓSIO BRASILEIRO DE REDES DE COMPUTADORES E SISTEMAS DISTRIBUÍDOS (SBRC), 43., 2025, Natal/RN. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2025. p. 686-699. ISSN 2177-9384. DOI: https://doi.org/10.5753/sbrc.2025.6341.
