Performance Evaluation of Federated Learning Applications in Shared Access Networks
Abstract
Federated learning (FL) enables distributed clients to train machine learning (ML) models without sharing their local data with a central server (CS). By exchanging only the clients' local model parameters, FL addresses security and privacy challenges of traditional ML training and reduces the exposure of sensitive data. However, FL introduces a new class of network applications, characterized by frequent, large model-parameter exchanges and significant network and computational resource usage, which pose challenges when bandwidth and processing resources are limited. Several factors directly determine the generated traffic load, such as model size, the number of participating clients, and the hyperparameter configuration. Although these parameters are configured primarily to maximize model accuracy and convergence, balancing model quality against the available network resources is essential. This study analyzes the impact of relevant FL application factors on model and network performance when clients share the same access network. To this end, a custom network simulator was developed, including a methodology for generating FL traffic and obtaining application-level performance metrics from the LEAF framework, a benchmark for learning in federated settings. Simulation results show that FL traffic latency increases as the number of clients grows or the batch size decreases.

References
Dubey, A., Jauhri, A., Pandey, A., Kadian, A., Al-Dahle, A., et al. (2024). The Llama 3 herd of models. arXiv preprint arXiv:2407.21783.
Amdahl, G. M. (1967). Validity of the single processor approach to achieving large scale computing capabilities. In Proceedings of the April 18-20, 1967, Spring Joint Computer Conference, AFIPS ’67 (Spring), page 483–485, New York, NY, USA. Association for Computing Machinery.
Caldas, S., Duddu, S. M. K., Wu, P., Li, T., Konečnỳ, J., McMahan, H. B., Smith, V., and Talwalkar, A. (2018). LEAF: A benchmark for federated settings. arXiv preprint arXiv:1812.01097.
Ciceri, O. J., Astudillo, C. A., Zhu, Z., and da Fonseca, N. L. (2022). Federated learning over next-generation ethernet passive optical networks. IEEE Network, 37(1):70–76.
Cohen, G., Afshar, S., Tapson, J., and Van Schaik, A. (2017). EMNIST: Extending MNIST to handwritten letters. In 2017 International Joint Conference on Neural Networks (IJCNN), pages 2921–2926. IEEE.
Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Docker, Inc. (2025). Docker. [link]. Accessed Mar. 27, 2025.
Eriş, M. C., Kantarci, B., and Oktug, S. (2021). Unveiling the wireless network limitations in federated learning. In 2021 IEEE International Symposium on Dynamic Spectrum Access Networks (DySPAN), pages 262–267. IEEE.
He, C., Li, S., So, J., Zeng, X., Zhang, M., Wang, H., Wang, X., Vepakomma, P., Singh, A., Qiu, H., et al. (2020). FedML: A research library and benchmark for federated machine learning. arXiv preprint arXiv:2007.13518.
Jin, W., Yao, Y., Han, S., Gu, J., Joe-Wong, C., Ravi, S., Avestimehr, S., and He, C. (2023). FedML-HE: An efficient homomorphic-encryption-based privacy-preserving federated learning system. arXiv preprint arXiv:2303.10837.
Kairouz, P., McMahan, H. B., Avent, B., Bellet, A., Bennis, M., Bhagoji, A. N., Bonawitz, K., Charles, Z., Cormode, G., Cummings, R., et al. (2021). Advances and open problems in federated learning. Foundations and trends® in machine learning, 14(1–2):1–210.
Kaur, K., Singh, J., and Ghumman, N. S. (2014). Mininet as software defined networking testing platform. In International Conference on Communication, Computing & Systems (ICCCS), pages 139–142.
Li, J., Chen, L., and Chen, J. (2021). Scalable federated learning over passive optical networks. In 2021 Optical Fiber Communications Conference and Exhibition (OFC), pages 1–3.
Li, J., Shen, X., Chen, L., and Chen, J. (2020). Bandwidth slicing to boost federated learning over passive optical networks. IEEE Communications Letters, 24(7):1492–1495.
McMahan, B., Moore, E., Ramage, D., Hampson, S., and y Arcas, B. A. (2017). Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics, pages 1273–1282. PMLR.
Ollama, Inc. (2025). Ollama. [link]. Accessed Mar. 27, 2025.
Paolini, E., Pinto, A., Valcarenghi, L., Andriolli, N., Maggiani, L., and Esposito, F. (2024). Efficient distributed learning over lossy wireless networks. In 2024 20th International Conference on Network and Service Management (CNSM), pages 1–7. IEEE.
European Parliament (2016). Regulation (EU) 2016/679 of the European Parliament and of the Council. Official Journal of the European Union.
Riley, G. F. and Henderson, T. R. (2010). The ns-3 network simulator. In Modeling and tools for network simulation, pages 15–34. Springer.
Rodio, A., Neglia, G., Busacca, F., Mangione, S., Palazzo, S., Restuccia, F., and Tinnirello, I. (2023). Federated learning with packet losses. In 2023 26th International Symposium on Wireless Personal Multimedia Communications (WPMC), pages 1–6. IEEE.
Shakespeare, W. (2014). The complete works of William Shakespeare. Race Point Publishing.
Tedeschini, B. C., Savazzi, S., and Nicoli, M. (2023). A traffic model based approach to parameter server design in federated learning processes. IEEE Communications Letters, 27(7):1774–1778.
Varga, A. (2010). OMNeT++. In Modeling and Tools for Network Simulation, pages 35–59. Springer.
Varno, F. (2022). FedSim: A generic federated learning simulator.
Published
2025-07-20
How to Cite
CUNHA, Diogo M.; GUERRA, Marco A.; CICERI, Oscar J.; FONSECA, Nelson L. S. da; ASTUDILLO, Carlos A. Performance Evaluation of Federated Learning Applications in Shared Access Networks. In: WORKSHOP ON PERFORMANCE OF COMPUTER AND COMMUNICATION SYSTEMS (WPERFORMANCE), 24., 2025, Maceió/AL. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2025. p. 121-132. ISSN 2595-6167. DOI: https://doi.org/10.5753/wperformance.2025.9221.
