Investigating the Impact of Adversarial Samples on Intrusion Detection in a Cyber-Physical System

  • Gabriel H. N. Espindola da Silva UEL
  • Rodrigo Sanches Miani UFU
  • Bruno Bogaz Zarpelão UEL

Abstract

In this paper, we investigate the impact that adversarial samples have on supervised machine learning algorithms used to detect attacks against a cyber-physical system. The study considers a scenario in which an attacker gains access to data from the target system that can be used to train an adversarial model. The attacker's goal is to craft malicious samples, using adversarial machine learning, that deceive the models deployed for intrusion detection. Through the FGSM (Fast Gradient Sign Method) and JSMA (Jacobian-based Saliency Map Attack) attacks, we observed that prior knowledge of the target algorithm's architecture can lead to more severe attacks, and that the target algorithms under test are impacted differently as the volume of data stolen by the attacker varies. Finally, FGSM produced attacks with higher average severity than JSMA, but JSMA has the advantage of being less invasive and, possibly, harder to detect.
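The FGSM perturbation discussed in the abstract can be sketched as follows. This is an illustrative toy implementation, not the paper's actual experimental setup: it uses a simple logistic-regression surrogate model (weights `w`, bias `b` are made-up values) and perturbs a sample in the direction of the sign of the loss gradient with respect to the input, as defined by Goodfellow et al. (2014).

```python
import numpy as np

def fgsm(x, y, w, b, epsilon):
    """FGSM: x_adv = x + epsilon * sign(dJ/dx), where J is the
    cross-entropy loss of a logistic-regression surrogate model.
    For this model, dJ/dx = (sigmoid(w.x + b) - y) * w."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # predicted probability of class 1
    grad_x = (p - y) * w                    # loss gradient w.r.t. the input
    return x + epsilon * np.sign(grad_x)    # perturbed (adversarial) sample

# Hypothetical benign sample with true label y=0, pushed toward class 1
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, 0.1, 0.3])
x_adv = fgsm(x, y=0, w=w, b=0.0, epsilon=0.1)
# Each feature moves by exactly +/- epsilon: [0.3, 0.0, 0.4]
```

Note that epsilon bounds the per-feature perturbation, which is why FGSM tends to modify every feature (more invasive), whereas JSMA selects only the most salient features to perturb, as the abstract's final remark reflects.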

References

Aiken, J. and Scott-Hayward, S. (2019). Investigating adversarial attacks against network intrusion detection systems in SDNs. In 2019 IEEE Conference on Network Function Virtualization and Software Defined Networks (NFV-SDN), pages 1-7.

Alhajjar, E., Maxwell, P., and Bastian, N. D. (2020). Adversarial machine learning in network intrusion detection systems. CoRR, abs/2004.11898.

Anthi, E., Williams, L., Rhode, M., Burnap, P., and Wedgbury, A. (2021). Adversarial attacks on machine learning cybersecurity defences in industrial control systems. Journal of Information Security and Applications, 58:102717.

Apruzzese, G., Colajanni, M., Ferretti, L., and Marchetti, M. (2019). Addressing adversarial attacks against security systems based on machine learning. In 2019 11th International Conference on Cyber Conflict (CyCon), volume 900, pages 1-18.

Ayub, M. A., Johnson, W. A., Talbert, D. A., and Siraj, A. (2020). Model evasion attack on intrusion detection systems using adversarial machine learning. In 2020 54th Annual Conference on Information Sciences and Systems (CISS), pages 1-6.

Beaver, J. M., Borges-Hink, R. C., and Buckner, M. A. (2013). An evaluation of machine learning methods to detect malicious SCADA communications. In 2013 12th International Conference on Machine Learning and Applications, volume 2, pages 54-59.

Goodfellow, I. J., Shlens, J., and Szegedy, C. (2014). Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.

Ibitoye, O., Shafiq, O., and Matrawy, A. (2019). Analyzing adversarial attacks against deep learning for intrusion detection in IoT networks. In 2019 IEEE Global Communications Conference (GLOBECOM), pages 1-6.

Kim, S., Park, K.-J., and Lu, C. (2022). A survey on network security for cyber-physical systems: From threats to resilient design. IEEE Communications Surveys & Tutorials, 24(3):1534-1573.

Ning, X. and Jiang, J. (2022). Design, analysis and implementation of a security assessment/enhancement platform for cyber-physical systems. IEEE Transactions on Industrial Informatics, 18(2):1154-1164.

Papernot, N., Faghri, F., Carlini, N., Goodfellow, I., Feinman, R., Kurakin, A., Xie, C., Sharma, Y., Brown, T., Roy, A., Matyasko, A., Behzadan, V., Hambardzumyan, K., Zhang, Z., Juang, Y.-L., Li, Z., Sheatsley, R., Garg, A., Uesato, J., Gierke, W., Dong, Y., Berthelot, D., Hendricks, P., Rauber, J., and Long, R. (2018). Technical report on the cleverhans v2.1.0 adversarial examples library. arXiv preprint arXiv:1610.00768.

Papernot, N., McDaniel, P., and Goodfellow, I. (2016a). Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. arXiv preprint arXiv:1605.07277.

Papernot, N., McDaniel, P., Jha, S., Fredrikson, M., Celik, Z. B., and Swami, A. (2016b). The limitations of deep learning in adversarial settings. In 2016 IEEE European Symposium on Security and Privacy (EuroS&P), pages 372-387.

Pawlicki, M., Choraś, M., and Kozik, R. (2020). Defending network intrusion detection systems against adversarial evasion attacks. Future Generation Computer Systems, 110:148-154.

Resende, P. A. A. and Drummond, A. C. (2018). A survey of random forest based methods for intrusion detection systems. ACM Comput. Surv., 51(3).

Tabassi, E., Burns, K. J., Hadjimichael, M., Molina-Markham, A. D., and Sexton, J. T. (2019). A taxonomy and terminology of adversarial machine learning. NIST IR, pages 1-29.

Wang, Z. (2018). Deep learning-based intrusion detection with adversaries. IEEE Access, 6:38367-38384.

Yang, K., Liu, J., Zhang, C., and Fang, Y. (2018). Adversarial examples against the deep learning based network intrusion detection systems. In MILCOM 2018 - 2018 IEEE Military Communications Conference (MILCOM), pages 559-564.

Zarpelão, B. B., Barbon Junior, S., Acarali, D., and Rajarajan, M. (2020). How Machine Learning Can Support Cyberattack Detection in Smart Grids, pages 225-258. Springer International Publishing, Cham.

Zhang, J., Pan, L., Han, Q.-L., Chen, C., Wen, S., and Xiang, Y. (2022). Deep learning based attack detection for cyber-physical system cybersecurity: A survey. IEEE/CAA Journal of Automatica Sinica, 9(3):377-391.

Published
22/05/2023
SILVA, Gabriel H. N. Espindola da; MIANI, Rodrigo Sanches; ZARPELÃO, Bruno Bogaz. Investigando o Impacto de Amostras Adversárias na Detecção de Intrusões em um Sistema Ciberfísico. In: SIMPÓSIO BRASILEIRO DE REDES DE COMPUTADORES E SISTEMAS DISTRIBUÍDOS (SBRC), 41. , 2023, Brasília/DF. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2023 . p. 281-294. ISSN 2177-9384. DOI: https://doi.org/10.5753/sbrc.2023.488.