Evaluating Robustness and Detection of Adversarial Attacks in EEG-Based Brain-Computer Interfaces

  • Beatriz C. da Costa (FURG)
  • André Riker (UFPA)
  • Roger Immich (UFRN)
  • Bruno L. Dalmazo (FURG)

Abstract


Research Context: Brain-computer interfaces (BCIs) capture brain signals through techniques such as electroencephalography (EEG) and process them for a variety of applications, most notably the control of devices for people with motor limitations. Despite these benefits, BCIs raise security concerns, including adversarial and cybersecurity attacks. With brain-computer interface devices now reaching the market, analyzing their security has become necessary.

Scientific and/or Practical Problem: Machine learning classifiers used in BCIs are vulnerable to adversarial attacks, which can compromise accuracy, safety, and user privacy. The lack of systematic evaluation of these vulnerabilities is a gap in current research.

Proposed Solution and/or Analysis: This work emulates and analyzes adversarial attacks on the classifiers of BCI devices. Our experiments used the Foolbox tool to evaluate several adversarial techniques, namely DeepFool, FGSM, PGD, and Carlini-Wagner, and our evaluation quantifies the negative effects of these attacks on data classification.

Related IS Theory: Technology acceptance model; information processing theory.

Research Method: Experiments were conducted on the BCI Competition 2008 Graz dataset, with attacks emulated during inference. Detection mechanisms based on Random Forest, SVM, and KNN were trained and evaluated to assess the feasibility of automatic defense.

Summary of Results: Classifier accuracy decreased sharply under attack, with attack success rates ranging from 75.2% to 100%. Detection models achieved 83% accuracy with Random Forest and SVM against FGSM attacks, but only 5% with KNN against DeepFool, highlighting the challenge of detecting subtle perturbations.

Contributions and Impact to IS area: The work demonstrates the vulnerabilities of BCI classifiers, proposes an evaluation pipeline for adversarial robustness, and underscores the importance of integrating security assessment into BCI development. The results have direct implications for information systems that handle sensitive biomedical data.
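
To make the evaluation pipeline concrete, the sketch below drives the four attacks through Foolbox's native API (Rauber et al., 2020) against a PyTorch classifier, mirroring the inference-time emulation described above. It is a minimal sketch, not the paper's implementation: the tiny stand-in network, the signal bounds, the single epsilon budget, and the random EEG-shaped batch are all illustrative assumptions standing in for the actual model and the Graz data.

    import torch
    import torch.nn as nn
    import foolbox as fb

    # Stand-in classifier: 22 EEG channels x 256 time samples -> 4 classes.
    # It only makes the sketch self-contained; the experiments use a real
    # EEG classifier trained on the Graz dataset.
    model = nn.Sequential(
        nn.Conv1d(22, 16, kernel_size=5), nn.ReLU(),
        nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, 4),
    ).eval()
    fmodel = fb.PyTorchModel(model, bounds=(-1.0, 1.0))  # assumed signal range

    x = torch.rand(64, 22, 256) * 2 - 1   # placeholder EEG trials in [-1, 1]
    y = torch.randint(0, 4, (64,))        # placeholder motor-imagery labels

    attacks = {
        "FGSM": fb.attacks.FGSM(),
        "PGD": fb.attacks.LinfPGD(),
        "DeepFool": fb.attacks.LinfDeepFoolAttack(),
        # Step count reduced for speed; the library default is far larger.
        "Carlini-Wagner": fb.attacks.L2CarliniWagnerAttack(steps=100),
    }
    adversarial = {}
    for name, attack in attacks.items():
        # A single epsilon is used for brevity; per-attack budgets differ
        # in practice. `success` flags the trials the attack fooled.
        _, adv, success = attack(fmodel, x, y, epsilons=0.05)
        adversarial[name] = adv
        print(f"{name}: success rate = {success.float().mean().item():.2%}")

The detection step can be sketched in the same spirit: label clean trials 0 and perturbed trials 1, then train the three detectors on them. Flattening the raw trials into feature vectors is an assumption made to keep the example short; the paper's own feature extraction is not reproduced here.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.svm import SVC
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import train_test_split

    # Build a balanced clean-vs-adversarial dataset from the FGSM output.
    clean = x.detach().cpu().numpy().reshape(len(x), -1)
    pert = adversarial["FGSM"].detach().cpu().numpy().reshape(len(x), -1)
    X = np.vstack([clean, pert])
    labels = np.array([0] * len(clean) + [1] * len(pert))

    X_tr, X_te, y_tr, y_te = train_test_split(
        X, labels, test_size=0.3, stratify=labels, random_state=0)
    for detector in (RandomForestClassifier(), SVC(), KNeighborsClassifier()):
        detector.fit(X_tr, y_tr)
        print(type(detector).__name__, "accuracy:", detector.score(X_te, y_te))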

References

Aissa, N. E. H. S. B., Kerrache, C. A., Korichi, A., Lakas, A., and Belkacem, A. N. (2024). Enhancing EEG signal classifier robustness against adversarial attacks using a generative adversarial network approach. IEEE Internet of Things Magazine, 7(3):44–49.

Aissa, N. E. H. S. B., Lakas, A., Korichi, A., Kerrache, C. A., and Belkacem, A. N. (2023). Robust detection of adversarial attacks for EEG-based motor imagery classification using hierarchical deep learning. In 2023 15th International Conference on Innovations in Information Technology (IIT), pages 156–161.

Barreno, M., Nelson, B., Sears, R., Joseph, A. D., and Tygar, J. D. (2006). Can machine learning be secure? In Proceedings of the 2006 ACM Symposium on Information, Computer and Communications Security, ASIACCS ’06, page 16–25, New York, NY, USA. Association for Computing Machinery.

Bernal, S. L., Celdrán, A. H., Pérez, G. M., Barros, M. T., and Balasubramaniam, S. (2021). Security in brain-computer interfaces: State-of-the-art, opportunities, and future challenges. ACM Comput. Surv., 54(1).

Brunner, C., Leeb, R., and Müller-Putz, G. (2024). BCI Competition 2008 – Graz data set A.

Carlini, N. and Wagner, D. (2017). Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pages 39–57.

Chen, X., Meng, L., Xu, Y., and Wu, D. (2024). Adversarial artifact detection in EEG-based brain–computer interfaces. Journal of Neural Engineering, 21(5):056043.

Dalmazo, B. L., Vilela, J. P., and Curado, M. (2017). Performance analysis of network traffic predictors in the cloud. Journal of Network and Systems Management, 25:290–320.

Dalmazo, B. L., Vilela, J. P., and Curado, M. (2018). Triple-similarity mechanism for alarm management in the cloud. Computers & Security, 78:33–42.

Feng, B., Wang, Y., and Ding, Y. (2021). SAGA: Sparse adversarial attack on EEG-based brain computer interface. In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 975–979.

Goodfellow, I. J., Shlens, J., and Szegedy, C. (2015). Explaining and harnessing adversarial examples.

Hossen, M. I., Tu, Y., and Hei, X. (2023). A first look at the security of EEG-based systems and intelligent algorithms under physical signal injections. In Proceedings of the 2023 Secure and Trustworthy Deep Learning Systems Workshop, SecTL ’23, New York, NY, USA. Association for Computing Machinery.

Jiang, X., Zhang, X., and Wu, D. (2019). Active learning for black-box adversarial attacks in EEG-based brain-computer interfaces. In 2019 IEEE Symposium Series on Computational Intelligence (SSCI), pages 361–368.

Jung, J., Moon, H., Yu, G., and Hwang, H. (2023). Generative perturbation network for universal adversarial attacks on brain-computer interfaces. IEEE Journal of Biomedical and Health Informatics, 27(11):5622–5633.

Kanhere, S. and Naveed, A. (2005). A novel tuneable low-intensity adversarial attack. In The IEEE Conference on Local Computer Networks 30th Anniversary (LCN’05), 8 pp.

Lawhern, V. J., Solon, A. J., Waytowich, N. R., Gordon, S. M., Hung, C. P., and Lance, B. J. (2018). EEGNet: A compact convolutional neural network for EEG-based brain–computer interfaces. Journal of Neural Engineering, 15(5):056013.

Leite, L., Santo, Y., Dalmazo, B., and Riker, A. (2024). Federated learning under attack: Improving gradient inversion for batch of images. In Anais do XXIV Simpósio Brasileiro de Segurança da Informação e de Sistemas Computacionais, pages 794–800, Porto Alegre, RS, Brasil. SBC.

Li, Y., Yu, X., Yu, S., and Chen, B. (2022). Adversarial training for the adversarial robustness of EEG-based brain-computer interfaces. In 2022 IEEE 32nd International Workshop on Machine Learning for Signal Processing (MLSP), pages 1–6.

Liu, Q., Li, P., Zhao, W., Cai, W., Yu, S., and Leung, V. C. M. (2018). A survey on security threats and defensive techniques of machine learning: A data driven view. IEEE Access, 6:12103–12117.

Madry, A., Makelov, A., Schmidt, L., Tsipras, D., and Vladu, A. (2019). Towards deep learning models resistant to adversarial attacks.

Moosavi-Dezfooli, S.-M., Fawzi, A., and Frossard, P. (2016). DeepFool: A simple and accurate method to fool deep neural networks. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2574–2582.

Neurable (2024). Neurable. Company website.

Rauber, J., Zimmermann, R., Bethge, M., and Brendel, W. (2020). Foolbox Native: Fast adversarial attacks to benchmark the robustness of machine learning models in PyTorch, TensorFlow, and JAX. Journal of Open Source Software, 5(53):2607.

Shi, Y., Sagduyu, Y., and Grushin, A. (2017). How to steal a machine learning classifier with deep learning. In 2017 IEEE International Symposium on Technologies for Homeland Security (HST), pages 1–5.

Silva, G., Junior, G. F., and Zarpelão, B. (2024). Impacto de ataques de evasão e eficácia da defesa baseada em treinamento adversário em detectores de malware. In Anais do XXIV Simpósio Brasileiro de Segurança da Informação e de Sistemas Computacionais, pages 829–835, Porto Alegre, RS, Brasil. SBC.

Upadhayay, B. and Behzadan, V. (2023). Adversarial stimuli: Attacking brain-computer interfaces via perturbed sensory events. In 2023 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pages 3061–3066.

Vassilev, A., Oprea, A., Fordyce, A., and Anderson, H. (2024). Adversarial machine learning: A taxonomy and terminology of attacks and mitigations. NIST Artificial Intelligence (AI) Report NIST AI 100-2e2023, National Institute of Standards and Technology, Gaithersburg, MD.

Wang, F., Liu, W., and Chawla, S. (2014). On sparse feature attacks in adversarial learning. In 2014 IEEE International Conference on Data Mining, pages 1013–1018.

Yu, H., Chan, P. P. K., Ng, W. W. Y., and Yeung, D. S. (2010). Apply randomization in KNN to make the adversary harder to attack the classifier. In 2010 International Conference on Machine Learning and Cybernetics, volume 1, pages 179–183.
Published
2026-05-25
COSTA, Beatriz C. da; RIKER, André; IMMICH, Roger; DALMAZO, Bruno L. Evaluating Robustness and Detection of Adversarial Attacks in EEG-Based Brain-Computer Interfaces. In: BRAZILIAN SYMPOSIUM ON INFORMATION SYSTEMS (SBSI), 22., 2026, Vitória/ES. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2026. p. 291-308. DOI: https://doi.org/10.5753/sbsi.2026.248345.
