Interpretability of Intrusion Detection Models: An Information Visualization Approach
Abstract
This article explores information visualization to enhance the interpretability of intrusion detection models, focusing on Machine Learning and Explainable Artificial Intelligence (XAI). Given the complexity of cyberattacks and the "black-box" nature of many models, this work applies techniques such as SHAP, combined with explanatory visualizations, to make the decisions of models such as Random Forest more understandable. Using the CICIDS2017 dataset, the study preprocesses the data, trains the model, interprets its decisions with SHAP, and generates explanatory visualizations. The goal is to increase confidence in, and adoption of, intrusion detection systems by making them more transparent and auditable for security analysts. Results show that the Random Forest model achieved an accuracy of 99.9%, indicating a high capability to distinguish between benign and malicious network traffic. More importantly, SHAP visualizations, including feature importance, summary, and dependence plots, provided valuable insights into model behavior.
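To make the described pipeline concrete, the sketch below outlines the steps in Python with scikit-learn and the shap library. It is a minimal illustration under stated assumptions, not the paper's exact implementation: the CSV file name, label column, train/test split, and hyperparameters are assumptions, and the study's actual preprocessing may differ.

# Minimal sketch of the described pipeline (illustrative; file name, label
# column, and hyperparameters are assumptions, not the paper's settings).
# Requires pandas, numpy, scikit-learn, and shap.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Preprocessing: strip whitespace from CICIDS2017 column names, drop rows
# with missing or infinite values, and binarize the label.
df = pd.read_csv("cicids2017.csv")  # assumed local CSV export of CICIDS2017
df.columns = df.columns.str.strip()
df = df.replace([np.inf, -np.inf], np.nan).dropna()
X = df.drop(columns=["Label"]).select_dtypes(include=[np.number])
y = (df["Label"] != "BENIGN").astype(int)  # 0 = benign, 1 = attack

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

# Train the Random Forest classifier and report test accuracy.
model = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=42)
model.fit(X_train, y_train)
print(f"Test accuracy: {model.score(X_test, y_test):.4f}")

# Interpret the model with SHAP. Depending on the shap version, shap_values
# returns a per-class list or a 3-D array; select the attack class either way.
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X_test)
sv_attack = sv[1] if isinstance(sv, list) else sv[:, :, 1]

# Explanatory visualizations: importance (bar), summary (beeswarm), and a
# dependence plot for the most influential feature.
shap.summary_plot(sv_attack, X_test, plot_type="bar")
shap.summary_plot(sv_attack, X_test)
top_feature = X_test.columns[np.abs(sv_attack).mean(axis=0).argmax()]
shap.dependence_plot(top_feature, sv_attack, X_test)

TreeExplainer is the natural choice here because it computes exact SHAP values for tree ensembles efficiently; the three plots correspond to the importance, summary, and dependence visualizations named in the abstract.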
