Detection and Mitigation of Label-Flipping Attacks on Compressed and Private Models in Federated Learning
Abstract
This paper proposes a technique for detecting malicious clients that perform label-flipping attacks during the training of Federated Learning (FL) models. The goal is to identify malicious clients that manipulate data labels, even when local models are compressed and kept private through differential privacy. The weight vector of the last layer of the neural network is used to detect anomalous behavior by these clients while preserving data privacy. The solution was evaluated on the MNIST and Fashion-MNIST datasets and in the MininetFed emulator. The results show that the proposal was effective in detecting and neutralizing attacks, even in scenarios where up to 40% of the clients in the network were malicious.
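As a rough illustration of the idea described in the abstract, the sketch below assumes the server has collected each client's flattened last-layer weight vector and flags clients whose vectors are outliers under a modified z-score. The function name flag_suspicious_clients, the Euclidean distance to the element-wise median, and the 3.5 cutoff are illustrative assumptions, not the paper's exact method.

import numpy as np

def flag_suspicious_clients(last_layer_weights, threshold=3.5):
    """Flag clients whose last-layer weight vectors look anomalous.

    last_layer_weights: dict mapping client id -> flattened last-layer
    weight vector (hypothetical input format; all vectors of equal length).
    Returns the client ids whose modified z-score of the distance to the
    element-wise median update exceeds `threshold` (3.5 is the usual
    cutoff suggested for the modified z-score).
    """
    ids = list(last_layer_weights)
    W = np.stack([np.asarray(last_layer_weights[c]) for c in ids])  # (n_clients, n_weights)
    median_update = np.median(W, axis=0)                # robust "typical" last layer
    dists = np.linalg.norm(W - median_update, axis=1)   # each client's deviation from it

    # Modified z-score: 0.6745 * (x - median) / MAD, with a floor to avoid 0/0.
    mad = max(np.median(np.abs(dists - np.median(dists))), 1e-12)
    mod_z = 0.6745 * (dists - np.median(dists)) / mad
    return [cid for cid, z in zip(ids, mod_z) if z > threshold]

In such a setup, the check could run on the server after each round's updates are collected, and flagged clients would simply be excluded from aggregation.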
Keywords:
Federated Learning, Compression, Attack Detection, Security
Published
May 19, 2025
How to Cite
C. BATISTA, João Pedro; SCHMITZ BASTOS, Johann J.; DOS REIS FONTES, Ramon; CERQUEIRA, Eduardo; F. S. MOTA, Vinícius; S. VILLAÇA, Rodolfo. Detecção e Mitigação de Ataques de Inversão de Rótulos em Modelos Compactados e Privados no Aprendizado Federado. In: SIMPÓSIO BRASILEIRO DE REDES DE COMPUTADORES E SISTEMAS DISTRIBUÍDOS (SBRC), 43., 2025, Natal/RN. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2025. p. 322-335. ISSN 2177-9384. DOI: https://doi.org/10.5753/sbrc.2025.5915.