Minimal but Lethal: A XAI-Driven Approach for Feature-Level Adversarial Attacks on Healthcare 5.1
Abstract
In Healthcare 5.0, the expanded attack surface increases the vulnerability of Intrusion Detection Systems (IDS) to sophisticated threats. Among them, adversarial attacks modify features to evade the detection of malicious samples. XAI-driven methods enable the manipulation of fewer features (sometimes just one) while maximizing impact. To date, no XAI-driven adversarial strategy has been applied to cyber-biomedical features in Healthcare 5.0. In this work, we address this gap by employing an XAI-driven approach to maximize IDS degradation through feature-level adversarial attacks. Our results reveal that perturbing a single feature can drastically reduce the F1-Score from 99% to 0% in data-alteration scenarios and from 81% to 12% in spoofing attacks.
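The attack described in the abstract can be illustrated with a minimal sketch, assuming a typical setup: a tree-ensemble IDS is trained, features are ranked by an explainability signal, and only the single top-ranked feature of malicious samples is perturbed toward benign-looking values. This is not the authors' code; it substitutes scikit-learn's impurity-based `feature_importances_` as a stand-in for SHAP values, and synthetic data for the cyber-biomedical dataset.

```python
# Hypothetical sketch of an explainability-guided single-feature attack on an IDS.
# Assumptions (not from the paper): RandomForest as the IDS, feature_importances_
# as a proxy for SHAP, synthetic data in place of cyber-biomedical features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for network/biomedical traffic: label 1 = malicious.
X, y = make_classification(n_samples=2000, n_features=10, n_informative=3,
                           n_redundant=0, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
ids = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)

baseline = f1_score(yte, ids.predict(Xte))

# Rank features by the model's importance signal and pick the single top one.
top = int(np.argmax(ids.feature_importances_))

# Perturb ONLY that feature, and only on malicious samples, pushing it
# toward the benign-class mean so the samples evade detection.
X_adv = Xte.copy()
mal = yte == 1
X_adv[mal, top] = Xtr[ytr == 0, top].mean()

attacked = f1_score(yte, ids.predict(X_adv))
print(f"F1 before: {baseline:.2f}, after single-feature attack: {attacked:.2f}")
```

The key design point is that the explainability signal concentrates the perturbation budget: instead of perturbing many features (as in gradient-based attacks such as FGSM), a single high-importance feature is altered, which is harder to detect and easier to realize in constrained domains.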
Published
2025-09-01
How to Cite
SIQUEIRA, Lucas P.; LUI, Pedro H.; KAZIENKO, Juliano F.; QUINCOZES, Silvio E.; QUINCOZES, Vagner E.; WELFER, Daniel. Minimal but Lethal: A XAI-Driven Approach for Feature-Level Adversarial Attacks on Healthcare 5.1. In: BRAZILIAN SYMPOSIUM ON CYBERSECURITY (SBSEG), 25., 2025, Foz do Iguaçu/PR. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2025. p. 609-625. DOI: https://doi.org/10.5753/sbseg.2025.11450.
