FedSNIP: A Single-Step Model Pruning Method for Efficient Communication in Federated Learning
Abstract
In Federated Learning (FL), a collaborative yet decentralized approach to machine learning, communication efficiency is a critical concern, particularly under constraints of limited bandwidth and resources. This paper introduces an application of the SNIP (Single-shot Network Pruning based on Connection Sensitivity) technique in this context. Leveraging SNIP, the proposed method prunes neural networks effectively, setting many weights to zero and yielding sparser weight representations. This reduction in weight density substantially decreases the number of parameters that must be communicated to the server, thus reducing communication overhead. Our experiments on the MNIST dataset show that this approach not only lowers data transmission between clients and the server but also sustains model accuracy competitive with conventional FL models. Network pruning via SNIP thus emerges as an effective strategy for improving the efficiency of FL, especially advantageous in settings with restricted communication capabilities.
References
Beutel, D. J., Topal, T., Mathur, A., Qiu, X., Fernandez-Marques, J., Gao, Y., Sani, L., Li, K. H., Parcollet, T., de Gusmão, P. P. B., et al. (2020). Flower: A friendly federated learning research framework. arXiv preprint arXiv:2007.14390.
Chang, M.-K., Chan, Y.-W., and Wu, T.-E. (2023). Communication-Efficient Federated Learning with Model Pruning. In Hung, J. C., Yen, N. Y., and Chang, J.-W., editors, Frontier Computing, volume 1031, pages 67–76. Springer Nature Singapore, Singapore. Series Title: Lecture Notes in Electrical Engineering.
de Souza, A. M., Maciel, F., da Costa, J. B., Bittencourt, L. F., Cerqueira, E., Loureiro, A. A., and Villas, L. A. (2024). Adaptive client selection with personalization for communication efficient federated learning. Ad Hoc Networks, 157:103462.
Gutierrez, D. M. J., Anagnostopoulos, A., Chatzigiannakis, I., and Vitaletti, A. (2023). FedArtML: Federated Learning for Artificial Intelligence and Machine Learning library. [link].
He, Y. and Xiao, L. (2023). Structured pruning for deep convolutional neural networks: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 1–20.
Isik, B., Pase, F., Gunduz, D., Koyejo, S., Weissman, T., and Zorzi, M. (2023a). Communication-Efficient Federated Learning through Importance Sampling. arXiv:2306.12625 [cs, stat].
Isik, B., Pase, F., Gunduz, D., Weissman, T., and Zorzi, M. (2023b). Sparse Random Networks for Communication-Efficient Federated Learning. arXiv:2209.15328 [cs, stat].
Jiang, Y., Wang, S., Valls, V., Ko, B. J., Lee, W.-H., Leung, K. K., and Tassiulas, L. (2022a). Model Pruning Enables Efficient Federated Learning on Edge Devices. arXiv:1909.12326 [cs, stat].
Jiang, Z., Xu, Y., Xu, H., Wang, Z., Qiao, C., and Zhao, Y. (2022b). FedMP: Federated Learning through Adaptive Model Pruning in Heterogeneous Edge Computing. In 2022 IEEE 38th International Conference on Data Engineering (ICDE), pages 767–779, Kuala Lumpur, Malaysia. IEEE.
Jordao, A. and Pedrini, H. (2021). On the effect of pruning on adversarial robustness.
Kairouz, P., McMahan, H. B., et al. (2021). Advances and Open Problems in Federated Learning. arXiv preprint arXiv:1912.04977.
LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324.
Lee, N., Ajanthan, T., and Torr, P. (2019). Snip: Single-shot network pruning based on connection sensitivity. In International Conference on Learning Representations.
Li, A., Sun, J., Wang, B., Duan, L., Li, S., Chen, Y., and Li, H. (2020). LotteryFL: Personalized and Communication-Efficient Federated Learning with Lottery Ticket Hypothesis on Non-IID Datasets. arXiv:2008.03371 [cs, stat].
Li, Z., Chen, T., Li, L., Li, B., and Wang, Z. (2022). Can pruning improve certified robustness of neural networks?
Liang, T., Glossner, J., Wang, L., Shi, S., and Zhang, X. (2021). Pruning and quantization for deep neural network acceleration: A survey.
Luping, W., Wei, W., and Bo, L. (2019). Cmfl: Mitigating communication overhead for federated learning. In 2019 IEEE 39th international conference on distributed computing systems (ICDCS), pages 954–964. IEEE.
McMahan, H. B., Moore, E., Ramage, D., and y Arcas, B. A. (2016). Federated learning of deep networks using model averaging. CoRR, abs/1602.05629.
Renda, A., Frankle, J., and Carbin, M. (2020). Comparing rewinding and fine-tuning in neural network pruning.
Shahid, O., Pouriyeh, S., Parizi, R. M., Sheng, Q. Z., Srivastava, G., and Zhao, L. (2021). Communication Efficiency in Federated Learning: Achievements and Challenges. arXiv:2107.10996 [cs].
Soltani, B., Zhou, Y., Haghighi, V., and Lui, J. C. S. (2023). A survey of federated evaluation in federated learning.
Souza, A., Bittencourt, L., Cerqueira, E., Loureiro, A., and Villas, L. (2023). Dispositivos, eu escolho vocês: Seleção de clientes adaptativa para comunicação eficiente em aprendizado federado [Devices, I choose you: Adaptive client selection for communication-efficient federated learning]. In Anais do XLI Simpósio Brasileiro de Redes de Computadores e Sistemas Distribuídos, pages 1–14, Porto Alegre, RS, Brasil. SBC.
Vallapuram, A. K., Zhou, P., Kwon, Y. D., Lee, L. H., Xu, H., and Hui, P. (2022). HideNseek: Federated Lottery Ticket via Server-side Pruning and Sign Supermask. arXiv:2206.04385 [cs].
Wen, J., Zhang, Z., Lan, Y., Cui, Z., Cai, J., and Zhang, W. (2023). A survey on federated learning: challenges and applications. International Journal of Machine Learning and Cybernetics, 14(2):513–535.
Xia, Q., Ye, W., Tao, Z., Wu, J., and Li, Q. (2021). A survey of federated learning for edge computing: Research problems and solutions. High-Confidence Computing, 1(1):100008.
Published
2024-05-20
How to Cite
BUSTINCIO, Rómulo; SOUZA, Allan M. de; COSTA, Joahannes B. D. da; GONZALEZ, Luis F. G.; BITTENCOURT, Luiz F.
FedSNIP: A Single-Step Model Pruning Method for Efficient Communication in Federated Learning. In: BRAZILIAN SYMPOSIUM ON COMPUTER NETWORKS AND DISTRIBUTED SYSTEMS (SBRC), 42., 2024, Niterói/RJ.
Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2024. p. 980-993.
ISSN 2177-9384.
DOI: https://doi.org/10.5753/sbrc.2024.1520.
