LoRA-SL: Low-Rank Adaptation for Continual Split Learning
Abstract
In scenarios where drones carry out multiple missions and diverse tasks, it is common to use multiple servers to train models for different tasks. However, training a model directly on a sequence of tasks can lead to catastrophic forgetting, degrading accuracy on earlier tasks and requiring retraining. Preserving data privacy is also essential in these scenarios. To address this, we propose LoRA-SL, a Split Learning strategy that uses Low-Rank Adaptation (LoRA) to fine-tune client models rather than retraining them, while preserving privacy. Simulations on common classification datasets show that the proposed strategy allows clients to retain previously acquired knowledge while maintaining accuracy and reducing the number of training sessions.
Published
25/05/2026
How to Cite
OLIVEIRA, Mateus C.; SILVA, Heitor H. da; SANTOS, Camilo H. M. dos; SENNA, Carlos; SOUZA, Allan M. de; BITTENCOURT, Luiz F. LoRA-SL: Low-Rank Adaptation for Continual Split Learning. In: SIMPÓSIO BRASILEIRO DE REDES DE COMPUTADORES E SISTEMAS DISTRIBUÍDOS (SBRC), 44., 2026, Praia do Forte/BA. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2026. p. 940-953. ISSN 2177-9384. DOI: https://doi.org/10.5753/sbrc.2026.19228.
