FedSNIP: A Single-Shot Model Pruning Method for Communication-Efficient Federated Learning
Abstract
In Federated Learning (FL), a collaborative yet decentralized approach to machine learning, communication efficiency is a critical concern, especially under bandwidth and resource constraints. This paper introduces a novel application of SNIP (Single-shot Network Pruning based on Connection Sensitivity) in this context. By leveraging SNIP, the proposed method effectively prunes neural networks, setting numerous weights to zero and yielding sparser weight representations. This significant reduction in weight density substantially decreases the number of parameters that must be communicated to the server, thereby reducing communication overhead. Experiments on the MNIST dataset show that this approach not only reduces data transmission between clients and the server but also maintains competitive model accuracy, comparable to that of conventional FL models. Network pruning via SNIP thus emerges as an effective strategy for improving FL efficiency, particularly advantageous in environments with constrained communication capabilities.
References
Beutel, D. J., Topal, T., Mathur, A., Qiu, X., Fernandez-Marques, J., Gao, Y., Sani, L., Li, K. H., Parcollet, T., de Gusmão, P. P. B., et al. (2020). Flower: A friendly federated learning research framework. arXiv preprint arXiv:2007.14390.
Chang, M.-K., Chan, Y.-W., and Wu, T.-E. (2023). Communication-Efficient Federated Learning with Model Pruning. In Hung, J. C., Yen, N. Y., and Chang, J.-W., editors, Frontier Computing, volume 1031, pages 67–76. Springer Nature Singapore, Singapore. Series Title: Lecture Notes in Electrical Engineering.
de Souza, A. M., Maciel, F., da Costa, J. B., Bittencourt, L. F., Cerqueira, E., Loureiro, A. A., and Villas, L. A. (2024). Adaptive client selection with personalization for communication efficient federated learning. Ad Hoc Networks, 157:103462.
Gutierrez, D. M. J., Anagnostopoulos, A., Chatzigiannakis, I., and Vitaletti, A. (2023). FedArtML: Federated Learning for Artificial Intelligence and Machine Learning library. [link].
He, Y. and Xiao, L. (2023). Structured pruning for deep convolutional neural networks: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, page 1–20.
Isik, B., Pase, F., Gunduz, D., Koyejo, S., Weissman, T., and Zorzi, M. (2023a). Communication-Efficient Federated Learning through Importance Sampling. arXiv:2306.12625 [cs, stat].
Isik, B., Pase, F., Gunduz, D., Weissman, T., and Zorzi, M. (2023b). Sparse Random Networks for Communication-Efficient Federated Learning. arXiv:2209.15328 [cs, stat].
Jiang, Y., Wang, S., Valls, V., Ko, B. J., Lee, W.-H., Leung, K. K., and Tassiulas, L. (2022a). Model Pruning Enables Efficient Federated Learning on Edge Devices. arXiv:1909.12326 [cs, stat].
Jiang, Z., Xu, Y., Xu, H., Wang, Z., Qiao, C., and Zhao, Y. (2022b). FedMP: Federated Learning through Adaptive Model Pruning in Heterogeneous Edge Computing. In 2022 IEEE 38th International Conference on Data Engineering (ICDE), pages 767–779, Kuala Lumpur, Malaysia. IEEE.
Jordao, A. and Pedrini, H. (2021). On the effect of pruning on adversarial robustness.
Kairouz, P., McMahan, H. B., et al. (2021). Advances and Open Problems in Federated Learning. arXiv preprint.
LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324.
Lee, N., Ajanthan, T., and Torr, P. (2019). Snip: Single-shot network pruning based on connection sensitivity. In International Conference on Learning Representations.
Li, A., Sun, J., Wang, B., Duan, L., Li, S., Chen, Y., and Li, H. (2020). LotteryFL: Personalized and Communication-Efficient Federated Learning with Lottery Ticket Hypothesis on Non-IID Datasets. arXiv:2008.03371 [cs, stat].
Li, Z., Chen, T., Li, L., Li, B., and Wang, Z. (2022). Can pruning improve certified robustness of neural networks?
Liang, T., Glossner, J., Wang, L., Shi, S., and Zhang, X. (2021). Pruning and quantization for deep neural network acceleration: A survey.
Luping, W., Wei, W., and Bo, L. (2019). Cmfl: Mitigating communication overhead for federated learning. In 2019 IEEE 39th international conference on distributed computing systems (ICDCS), pages 954–964. IEEE.
McMahan, H. B., Moore, E., Ramage, D., and y Arcas, B. A. (2016). Federated learning of deep networks using model averaging. CoRR, abs/1602.05629.
Renda, A., Frankle, J., and Carbin, M. (2020). Comparing rewinding and fine-tuning in neural network pruning.
Shahid, O., Pouriyeh, S., Parizi, R. M., Sheng, Q. Z., Srivastava, G., and Zhao, L. (2021). Communication Efficiency in Federated Learning: Achievements and Challenges. arXiv:2107.10996 [cs].
Soltani, B., Zhou, Y., Haghighi, V., and Lui, J. C. S. (2023). A survey of federated evaluation in federated learning.
Souza, A., Bittencourt, L., Cerqueira, E., Loureiro, A., and Villas, L. (2023). Dispositivos, eu escolho vocês: Seleção de clientes adaptativa para comunicação eficiente em aprendizado federado. In Anais do XLI Simpósio Brasileiro de Redes de Computadores e Sistemas Distribuídos, pages 1–14, Porto Alegre, RS, Brasil. SBC.
Vallapuram, A. K., Zhou, P., Kwon, Y. D., Lee, L. H., Xu, H., and Hui, P. (2022). HideNseek: Federated Lottery Ticket via Server-side Pruning and Sign Supermask. arXiv:2206.04385 [cs].
Wen, J., Zhang, Z., Lan, Y., Cui, Z., Cai, J., and Zhang, W. (2023). A survey on federated learning: challenges and applications. International Journal of Machine Learning and Cybernetics, 14(2):513–535.
Xia, Q., Ye, W., Tao, Z., Wu, J., and Li, Q. (2021). A survey of federated learning for edge computing: Research problems and solutions. High-Confidence Computing, 1(1):100008.
Published
20/05/2024
How to Cite
BUSTINCIO, Rómulo; SOUZA, Allan M. de; COSTA, Joahannes B. D. da; GONZALEZ, Luis F. G.; BITTENCOURT, Luiz F. FedSNIP: Método baseado em Poda de Modelo de Etapa Única para Comunicação Eficiente em Aprendizado Federado. In: SIMPÓSIO BRASILEIRO DE REDES DE COMPUTADORES E SISTEMAS DISTRIBUÍDOS (SBRC), 42., 2024, Niterói/RJ. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2024. p. 980-993. ISSN 2177-9384. DOI: https://doi.org/10.5753/sbrc.2024.1520.