Impact of evasion attacks and effectiveness of adversarial training-based defense on malware detectors

  • Gabriel H. N. Espindola da Silva (UEL)
  • Gilberto Fernandes Junior (UEL)
  • Bruno Bogaz Zarpelão (UEL)

Abstract


Machine learning (ML) algorithms can aid in detecting malicious software (malware) by identifying its behavior patterns. However, ML models are vulnerable to adversarial machine learning (AML) attacks, which can lead to malware being misclassified as benign. This study examines the impact of the dFGSM (Deterministic Fast Gradient Sign Method), rFGSM (Randomized Fast Gradient Sign Method), BGA (Bit Gradient Ascent), BCA (Bit Coordinate Ascent), and Grosse attacks on malware detectors and evaluates the effectiveness of adversarial training as a defense. The results show that the high-intensity attacks (dFGSM, rFGSM, and BGA) significantly reduce detector accuracy, even when adversarial training is applied.
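
For illustration only, the sketch below shows how a single-step, dFGSM-style perturbation of binary malware feature vectors and one adversarial training step could look in PyTorch. It assumes a detector that maps a {0,1} feature vector to a malicious-class probability and a float label tensor; the function names (dfgsm_like_attack, adversarial_training_step), the eps step size, and the loss choices are illustrative assumptions rather than the authors' exact experimental setup, and the attacks evaluated in the paper are iterative rather than single-step.

    import torch
    import torch.nn.functional as F

    def dfgsm_like_attack(model, x, y, eps=0.5):
        # Sketch of a single-step, dFGSM-style evasion attack on binary
        # malware feature vectors: step in the continuous domain along the
        # sign of the loss gradient, round deterministically back to {0, 1},
        # and only allow adding features (0 -> 1) so the perturbed sample
        # keeps its original malicious functionality.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.binary_cross_entropy(model(x_adv), y)
        loss.backward()
        perturbed = x_adv + eps * x_adv.grad.sign()
        rounded = (perturbed >= 0.5).float()       # deterministic rounding
        return torch.maximum(rounded, x).detach()  # never remove original features

    def adversarial_training_step(model, optimizer, x, y, eps=0.5):
        # One adversarial training step (sketch): craft adversarial versions
        # of the batch and fit the detector on clean and adversarial samples
        # together.
        x_adv = dfgsm_like_attack(model, x, y, eps)
        optimizer.zero_grad()
        loss = F.binary_cross_entropy(model(x), y) \
             + F.binary_cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
        return loss.item()

As described by Al-Dujaili et al. (2018), rFGSM differs from dFGSM mainly in using randomized instead of deterministic rounding, while BGA and BCA flip bits directly based on the loss gradient.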

References

Al-Dujaili, A., Huang, A., Hemberg, E., and O’Reilly, U.-M. (2018). Adversarial deep learning for robust detection of binary encoded malware. In 2018 IEEE Security and Privacy Workshops (SPW), pages 76–82. IEEE.

Arp, D., Spreitzenbarth, M., Hubner, M., Gascon, H., Rieck, K., and Siemens, C. (2014). Drebin: Effective and explainable detection of android malware in your pocket. In NDSS, volume 14, pages 23–26.

Aslan, Ö. A. and Samet, R. (2020). A comprehensive review on malware detection approaches. IEEE Access, 8:6249–6271.

Grosse, K., Papernot, N., Manoharan, P., Backes, M., and McDaniel, P. (2016). Adversarial perturbations against deep neural networks for malware classification. arXiv preprint arXiv:1606.04435.

Grosse, K., Papernot, N., Manoharan, P., Backes, M., and McDaniel, P. (2017). Adversarial examples for malware detection. In European Symposium on Research in Computer Security, pages 62–79. Springer.

Kolosnjaji, B., Demontis, A., Biggio, B., Maiorca, D., Giacinto, G., Eckert, C., and Roli, F. (2018). Adversarial malware binaries: Evading deep learning for malware detection in executables. In 2018 26th European Signal Processing Conference (EUSIPCO), pages 533–537. IEEE.

Papernot, N., McDaniel, P., Goodfellow, I., Jha, S., Celik, Z. B., and Swami, A. (2017). Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, pages 506–519.

Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M., and Duchesnay, E. (2011). Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830.

Pierazzi, F., Pendlebury, F., Cortellazzi, J., and Cavallaro, L. (2020). Intriguing properties of adversarial ml attacks in the problem space. In 2020 IEEE Symposium on Security and Privacy (SP), pages 1332–1349. IEEE.

Rosenberg, I., Shabtai, A., Rokach, L., and Elovici, Y. (2018). Generic black-box end-to-end attack against state of the art API call based malware classifiers. In International Symposium on Research in Attacks, Intrusions, and Defenses, pages 490–510. Springer.

Shaukat, K., Luo, S., and Varadharajan, V. (2022). A novel method for improving the robustness of deep learning-based malware detectors against adversarial attacks. Engineering Applications of Artificial Intelligence, 116:105461.
Published: 2024-09-16

SILVA, Gabriel H. N. Espindola da; FERNANDES JUNIOR, Gilberto; ZARPELÃO, Bruno Bogaz. Impact of evasion attacks and effectiveness of adversarial training-based defense on malware detectors. In: BRAZILIAN SYMPOSIUM ON CYBERSECURITY (SBSEG), 24., 2024, São José dos Campos/SP. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2024. p. 829-835. DOI: https://doi.org/10.5753/sbseg.2024.240800.
