Applying Differential Privacy against Membership Inference Attacks in the Internet of Things
Abstract
Deep learning models have been used in intrusion detection systems to detect anomalies and classify attacks in Internet of Things environments, where devices face constant threats. However, deep learning models can themselves be targeted by attacks such as membership inference, exposing sensitive data used during training. In this context, this work implemented a feedforward dense neural network model for attack classification on the IoT-23 dataset. To reduce the risk of data exposure, the DP-SGD (Differentially Private Stochastic Gradient Descent) algorithm was applied to the model. In addition, the performance of a rule-based membership inference attack was used to assess the model's privacy level, measured by accuracy, precision, and recall. Experiments were also conducted with different privacy levels. The results show that the attack's effectiveness decreases in proportion to the model's effectiveness. Using DP-SGD reduced the attack's accuracy by 5% and its recall by 11%, without affecting its precision.
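The core of DP-SGD, as the abstract describes, is to clip each example's gradient and add calibrated Gaussian noise before the parameter update (Abadi et al., 2016). The sketch below illustrates a single such step for plain logistic regression in NumPy; the paper's actual model is a dense feedforward network trained with TensorFlow Privacy, so the function name, hyperparameter values, and the logistic-regression setting here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_mult=1.1, rng=None):
    """One illustrative DP-SGD step for logistic regression.

    Per-example gradients are clipped to L2 norm `clip_norm`, averaged,
    and Gaussian noise with std noise_mult * clip_norm / batch_size is
    added before the update (following Abadi et al., 2016).
    """
    rng = np.random.default_rng(0) if rng is None else rng
    n = X.shape[0]
    preds = 1.0 / (1.0 + np.exp(-X @ w))           # sigmoid predictions
    per_example_grads = (preds - y)[:, None] * X    # shape (n, d)
    # Clip each example's gradient to the L2 bound clip_norm.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale
    # Average over the batch and add calibrated Gaussian noise.
    noisy_grad = clipped.mean(axis=0) + rng.normal(
        0.0, noise_mult * clip_norm / n, size=w.shape)
    return w - lr * noisy_grad
```

The privacy level reported in the paper's experiments corresponds to varying `noise_mult` (and hence the privacy budget); larger noise multipliers yield stronger privacy at the cost of model utility.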
Keywords:
Differential privacy, Deep learning, Internet of Things, Membership inference attack, Attack classification
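The abstract evaluates privacy via the accuracy, precision, and recall of a rule-based membership inference attack. The exact rule is not stated on this page, so the sketch below assumes a common confidence-threshold rule (flag a record as a training member when the model is highly confident on it); `rule_based_mia` and the threshold value are hypothetical.

```python
import numpy as np

def rule_based_mia(confidences, threshold=0.9):
    """Flag a record as a training-set member when the model's
    confidence on it exceeds `threshold` (an assumed rule; the
    paper's actual rule may differ)."""
    return confidences > threshold

def attack_metrics(pred_member, is_member):
    """Accuracy, precision, and recall of the membership attack."""
    tp = np.sum(pred_member & is_member)
    fp = np.sum(pred_member & ~is_member)
    fn = np.sum(~pred_member & is_member)
    tn = np.sum(~pred_member & ~is_member)
    accuracy = (tp + tn) / len(is_member)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return accuracy, precision, recall
```

Under this framing, the paper's result (attack accuracy down 5%, recall down 11%, precision unchanged after DP-SGD) means the noised model yields fewer confidently memorized records for the rule to flag.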
References
Abadi, M., Chu, A., Goodfellow, I., McMahan, H. B., Mironov, I., Talwar, K., and Zhang, L. (2016). Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pages 308–318.
Trusted-AI (2024). Adversarial Robustness Toolbox repository. [link].
Amari, S.-i. (1993). Backpropagation and stochastic gradient descent method. Neurocomputing, 5(4–5):185–196.
Bezerra, E. (2016). Introdução à aprendizagem profunda. Artigo – 31º Simpósio Brasileiro de Banco de Dados – SBBD2016 – Salvador.
Boulemtafes, A., Derhab, A., and Challal, Y. (2020). A review of privacy-preserving techniques for deep learning. Neurocomputing, 384:21–45.
Cherubin, G., Kopf, B., Paverd, A., Tople, S., Wutschitz, L., and Zanella-Béguelin, S. (2024). Closed-form bounds for DP-SGD against record-level inference attacks. In 33rd USENIX Security Symposium (USENIX Security 24), pages 4819–4836.
Chua, L., Ghazi, B., Kamath, P., Kumar, R., Manurangsi, P., Sinha, A., and Zhang, C. (2024). How private is DP-SGD? arXiv preprint arXiv:2403.17673.
Dwork, C. (2006). Differential privacy. In International Colloquium on Automata, Languages, and Programming, pages 1–12. Springer.
Haddadi, F., Khanchi, S., Shetabi, M., and Derhami, V. (2010). Intrusion detection and attack classification using feed-forward neural network. In 2010 Second International Conference on Computer and Network Technology, pages 262–266. IEEE.
Hoque, N., Bhattacharyya, D. K., and Kalita, J. K. (2015). Botnet in DDoS attacks: Trends and challenges. IEEE Communications Surveys & Tutorials, 17(4):2242–2270.
Hu, H., Salcic, Z., Sun, L., Dobbie, G., Yu, P. S., and Zhang, X. (2022). Membership inference attacks on machine learning: A survey. ACM Computing Surveys (CSUR), 54(11s):1–37.
Javaid, A., Niyaz, Q., Sun, W., and Alam, M. (2016). A deep learning approach for network intrusion detection system. In Proceedings of the 9th EAI International Conference on Bio-inspired Information and Communications Technologies, pages 21–26.
Machooka, D., Yuan, X., Roy, K., and Chen, G. (2024). Comparison of LSTM and MLP trained under differential privacy for intrusion detection. In 2024 International Conference on Artificial Intelligence, Big Data, Computing and Data Communication Systems (icABCD), pages 1–10. IEEE.
Nissim, K., and Wood, A. (2018). Is privacy privacy? Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2128):20170358.
Oliveira, J. A. d. (2024). F-NIDS: Sistema de detecção de intrusão baseado em aprendizado federado.
Pinheiro, A. J., de Araujo-Filho, P. F., Bezerra, J. d. M., and Campelo, D. R. (2020). Adaptive packet padding approach for smart home networks: A tradeoff between privacy and performance. IEEE Internet of Things Journal, 8(5):3930–3938.
Pouyanfar, S., Sadiq, S., Yan, Y., Tian, H., Tao, Y., Reyes, M. P., Shyu, M.-L., Chen, S.-C., and Iyengar, S. S. (2018). A survey on deep learning: Algorithms, techniques, and applications. ACM Computing Surveys (CSUR), 51(5):1–36.
Pustozerova, A., Baumbach, J., and Mayer, R. (2023). Differentially private federated learning: Privacy and utility analysis of output perturbation and DP-SGD. In 2023 IEEE International Conference on Big Data (BigData), pages 5549–5558. IEEE.
Qi, X., Wang, T., and Liu, J. (2017). Comparison of support vector machine and softmax classifiers in computer vision. In 2017 Second International Conference on Mechanical, Control and Computer Engineering (ICMCCE), pages 151–155. IEEE.
Reis, C. H. (2021). Otimização de hiperparâmetros em redes neurais profundas. Minas Gerais.
Rimer, M., and Martinez, T. (2004). SoftProp: Softmax neural network backpropagation learning. In 2004 IEEE International Joint Conference on Neural Networks (IEEE Cat. No. 04CH37541), volume 2, pages 979–983. IEEE.
Garcia, S., Parmisano, A., and Erquiaga, M. J. (2020). IoT-23: A labeled dataset with malicious and benign IoT network traffic (Version 1.0.0). Zenodo. DOI: 10.5281/zenodo.4743746.
Siachos, I., Kaltakis, K., Papachristopoulou, K., Giannoulakis, I., and Kafetzakis, E. (2023). Comparison of machine learning algorithms trained under differential privacy for intrusion detection systems. In 2023 IEEE International Conference on Cyber Security and Resilience (CSR), pages 654–658. IEEE.
Tang, Q., Shpilevskiy, F., and Lecuyer, M. (2024). DP-AdamBC: Your DP-Adam is actually DP-SGD (unless you apply bias correction). In Proceedings of the AAAI Conference on Artificial Intelligence, 38:15276–15283.
TensorFlow Privacy. (2024). TensorFlow Privacy - DPKerasAdamOptimizer.
Vidal, I. d. C. (2020). Protecting: Garantindo a privacidade de dados gerados em casas inteligentes localmente na borda da rede.
Zhang, Z., Yan, C., and Malin, B. A. (2022). Membership inference attacks against synthetic health data. Journal of Biomedical Informatics, 125:103977.
Published
19/05/2025
How to Cite
SILVA, Davi Bezerra Yada da; DOS SANTOS, Aldri Luiz; BEZERRA, Jeandro de M. Aplicando Privacidade Diferencial contra Ataques de Associação em Internet das Coisas. In: SIMPÓSIO BRASILEIRO DE REDES DE COMPUTADORES E SISTEMAS DISTRIBUÍDOS (SBRC), 43., 2025, Natal/RN. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2025. p. 980-993. ISSN 2177-9384. DOI: https://doi.org/10.5753/sbrc.2025.6445.