From Centralized to Federated Learning: What Happens to Model Explainability?
Abstract
This paper investigates the impact of federated learning (FL) on the explainability of machine learning models, focusing on explanations based on SHAP values. It proposes a comparative methodology between centrally and federally trained models, considering scenarios with both uniform and non-uniform data distribution across clients. The approach evaluates the similarity of explanations through metrics such as cosine distance, ranking similarity, and sign consistency, and is validated on the EHMS dataset for attack detection in healthcare systems. The results show that federated models produce explanations distinct from centralized ones and that data heterogeneity significantly affects explanation consistency. In uniform scenarios, local and global explanations are consistent, whereas in non-uniform scenarios they diverge.
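The explanation-similarity metrics named in the abstract (cosine distance, ranking similarity, and sign consistency) can be sketched as pairwise comparisons between two SHAP attribution vectors. This is an illustrative sketch, not the paper's implementation: the attribution values below are hypothetical, and ranking similarity is computed here as a Spearman correlation over importance ranks, one plausible choice among several.

```python
import numpy as np

def cosine_distance(a, b):
    # 1 minus the cosine similarity of two attribution vectors
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def rank_similarity(a, b):
    # Spearman correlation of feature-importance rankings (by |SHAP value|);
    # double argsort turns "order by importance" into per-feature ranks
    ra = np.argsort(np.argsort(-np.abs(a)))
    rb = np.argsort(np.argsort(-np.abs(b)))
    return np.corrcoef(ra, rb)[0, 1]

def sign_consistency(a, b):
    # fraction of features whose attributions agree in sign
    return np.mean(np.sign(a) == np.sign(b))

# Hypothetical mean SHAP attributions from a centralized and a federated model
central = np.array([0.40, -0.15, 0.05, 0.30, -0.10])
federated = np.array([0.35, -0.20, 0.10, 0.25, 0.05])

print(cosine_distance(central, federated))
print(rank_similarity(central, federated))
print(sign_consistency(central, federated))  # 4 of 5 signs agree -> 0.8
```

Under the paper's framing, values near 0 for cosine distance and near 1 for ranking similarity and sign consistency would indicate that the federated model explains its predictions much like the centralized one.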
Published
25/05/2026
How to Cite

TRINDADE, Daniel Ribeiro; ZAMBON, Eduardo; VILLAÇA, Rodolfo da Silva; DIAS, Diego Roberto Colombo; COMARELA, Giovanni. Do Aprendizado Centralizado ao Federado: O Que Acontece com a Explicabilidade dos Modelos?. In: SIMPÓSIO BRASILEIRO DE REDES DE COMPUTADORES E SISTEMAS DISTRIBUÍDOS (SBRC), 44., 2026, Praia do Forte/BA. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2026. p. 659-672. ISSN 2177-9384. DOI: https://doi.org/10.5753/sbrc.2026.19824.
