An Explainable Artificial Intelligence (XAI)-Based Architecture for Intrusion Detection Systems in Smart Grids

  • Camilla Borchhardt Quincozes (UNIPAMPA)
  • Henrique C. Oliveira (UFU)
  • Silvio E. Quincozes (UNIPAMPA)
  • Rodrigo S. Miani (UFU)
  • Vagner E. Quincozes (UFF)

Abstract

This paper proposes an architecture for an Explainable Intrusion Detection System (X-IDS) for electrical substations, aiming to enhance the transparency and reliability of traditional IDSs. The architecture integrates eXplainable Artificial Intelligence (XAI) techniques with new feature extraction methods, using temporal enrichment and robust preprocessing to improve the detection and interpretation of attacks. The results demonstrate that the proposed X-IDS reduces bias toward certain attacks, improves the interpretation of complex attacks, and facilitates the analysis of corrections and new implementations, offering a more robust and transparent solution for the security of electrical substations. Among the evaluated classifiers, Random Forest achieved the best performance: 98.79% accuracy and precision, and 98.68% recall.
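
To make the pipeline described in the abstract concrete, the sketch below shows one plausible reading of it: temporally enriched features feeding a Random Forest, with SHAP attributions exposing which features drove each detection. This is a minimal illustration, not the authors' implementation; the data is synthetic and the field names (sq_num, frame_len) are hypothetical stand-ins for IEC 61850 message attributes, since the ERENO dataset is not reproduced here.

```python
# Minimal sketch of an XAI-based IDS pipeline: temporal enrichment,
# Random Forest classification, and SHAP explanations. All data and
# feature names below are illustrative placeholders.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Synthetic "network events": a timestamp plus two raw attributes that
# loosely stand in for IEC 61850 message fields (hypothetical names).
events = pd.DataFrame({
    "timestamp": np.sort(rng.uniform(0, 600, n)),
    "sq_num": rng.integers(0, 100, n),
    "frame_len": rng.normal(120, 15, n),
})
label = rng.integers(0, 2, n)  # 0 = normal, 1 = attack (random placeholder)

# Temporal enrichment: derive time-aware features such as inter-arrival
# deltas and a rolling average, one plausible interpretation of the
# paper's "temporal enrichment" step.
events["inter_arrival"] = events["timestamp"].diff().fillna(0.0)
events["rate_10"] = events["inter_arrival"].rolling(10, min_periods=1).mean()

X = events[["sq_num", "frame_len", "inter_arrival", "rate_10"]]
X_train, X_test, y_train, y_test = train_test_split(
    X, label, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"Test accuracy: {clf.score(X_test, y_test):.3f}")

# SHAP (TreeExplainer) attributes each prediction to the input features,
# which is what makes the detector's decisions inspectable.
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X_test)

# Global view: mean |SHAP| per feature for the "attack" class.
# (Older shap versions return a list per class; newer ones a 3-D array.)
sv = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]
for name, imp in zip(X.columns, np.abs(sv).mean(axis=0)):
    print(f"{name}: {imp:.4f}")
```

TreeExplainer computes exact SHAP values efficiently for tree ensembles, which is why SHAP pairs naturally with a Random Forest detector like the one evaluated in the paper.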

Published
2024-09-16
QUINCOZES, Camilla Borchhardt; OLIVEIRA, Henrique C.; QUINCOZES, Silvio E.; MIANI, Rodrigo S.; QUINCOZES, Vagner E. An Explainable Artificial Intelligence (XAI)-Based Architecture for Intrusion Detection Systems in Smart Grids. In: BRAZILIAN SYMPOSIUM ON CYBERSECURITY (SBSEG), 24., 2024, São José dos Campos/SP. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2024. p. 662-677. DOI: https://doi.org/10.5753/sbseg.2024.241370.
