Applying Differential Privacy Against Membership Inference Attacks in the Internet of Things
Abstract
Deep learning models have been employed in intrusion detection systems to identify anomalies and classify attacks. These models are particularly useful in Internet of Things (IoT) environments, where devices face constant threats. However, deep learning models are vulnerable to attacks such as membership inference, which can expose sensitive training data. In this context, a feedforward dense neural network was implemented for attack classification on the IoT-23 dataset. To mitigate the risk of data exposure, the DP-SGD (Differentially Private Stochastic Gradient Descent) algorithm was applied to the model. Additionally, the performance of a rule-based membership inference attack, measured by accuracy, precision, and recall, was used to evaluate the model's privacy level. Experiments with varying privacy levels revealed that the attack's effectiveness decreases in proportion to the model's performance, reflecting the privacy-utility trade-off. The use of DP-SGD reduced the attack's accuracy by 5% and its recall by 11%, without affecting its precision.
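As a rough illustration of the training setup described above, the following is a minimal sketch of DP-SGD applied to a feedforward dense classifier using TensorFlow Privacy's DPKerasAdamOptimizer (cited in the references below). The layer widths, privacy hyperparameters (clipping norm and noise multiplier), and the feature and class dimensions of the preprocessed IoT-23 data are illustrative assumptions, not the configuration reported in the paper.

# Minimal DP-SGD sketch with TensorFlow Privacy; hyperparameters and data
# dimensions are assumed placeholders, not the paper's configuration.
import tensorflow as tf
from tensorflow_privacy.privacy.optimizers.dp_optimizer_keras import (
    DPKerasAdamOptimizer,
)

num_features, num_classes = 20, 5  # assumed dimensions of preprocessed IoT-23 data
batch_size = 256                   # num_microbatches must evenly divide it

# Feedforward dense (fully connected) classifier for attack classification.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(num_features,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])

# DP-SGD ingredients: per-example gradient clipping (l2_norm_clip) and
# calibrated Gaussian noise (noise_multiplier); more noise, stronger privacy.
optimizer = DPKerasAdamOptimizer(
    l2_norm_clip=1.0,
    noise_multiplier=1.1,
    num_microbatches=batch_size,
    learning_rate=1e-3,
)

# Per-example losses (reduction=NONE) are required so gradients can be
# clipped per microbatch before noise is added.
loss = tf.keras.losses.CategoricalCrossentropy(
    reduction=tf.keras.losses.Reduction.NONE
)

model.compile(optimizer=optimizer, loss=loss, metrics=["accuracy"])
# model.fit(x_train, y_train, epochs=10, batch_size=batch_size)
# (x_train, y_train: the preprocessed IoT-23 training split, not defined here)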
Keywords:
Differential privacy, Deep learning, Internet of Things, Membership inference attack, Attack classification
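Complementing the sketch above, the rule-based membership inference attack used to gauge privacy leakage can be reproduced with the Adversarial Robustness Toolbox (ART), also cited below: its MembershipInferenceBlackBoxRuleBased attack predicts that a sample is a training member whenever the target model classifies it correctly. The snippet continues from the previous sketch and uses random stand-in arrays in place of the preprocessed IoT-23 member and non-member splits.

# Rule-based membership inference evaluation with ART; continues from the
# DP-SGD sketch above (reuses model, num_features, num_classes). The data
# arrays are random stand-ins for the real member/non-member splits.
import numpy as np
from art.estimators.classification import TensorFlowV2Classifier
from art.attacks.inference.membership_inference import (
    MembershipInferenceBlackBoxRuleBased,
)
from sklearn.metrics import accuracy_score, precision_score, recall_score

rng = np.random.default_rng(0)
x_member = rng.normal(size=(1000, num_features)).astype("float32")
y_member = np.eye(num_classes)[rng.integers(num_classes, size=1000)]
x_nonmember = rng.normal(size=(1000, num_features)).astype("float32")
y_nonmember = np.eye(num_classes)[rng.integers(num_classes, size=1000)]

# Wrap the (DP-SGD-trained) Keras model so ART can query it.
target = TensorFlowV2Classifier(
    model=model, nb_classes=num_classes, input_shape=(num_features,)
)
attack = MembershipInferenceBlackBoxRuleBased(target)

# infer() returns 1 for samples predicted to be training members, 0 otherwise.
pred_member = attack.infer(x_member, y_member)
pred_nonmember = attack.infer(x_nonmember, y_nonmember)

y_true = np.concatenate([np.ones(len(pred_member), dtype=int),
                         np.zeros(len(pred_nonmember), dtype=int)])
y_pred = np.concatenate([pred_member, pred_nonmember]).astype(int).ravel()

# The paper's privacy metrics: the attack's accuracy, precision, and recall.
print("attack accuracy :", accuracy_score(y_true, y_pred))
print("attack precision:", precision_score(y_true, y_pred))
print("attack recall   :", recall_score(y_true, y_pred))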
References
Abadi, M., Chu, A., Goodfellow, I., McMahan, H. B., Mironov, I., Talwar, K., and Zhang, L. (2016). Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pages 308–318.
Trusted AI (2024). Adversarial Robustness Toolbox repository. [link].
Amari, S.-i. (1993). Backpropagation and stochastic gradient descent method. Neurocomputing, 5(4–5):185–196.
Bezerra, E. (2016). Introdução à aprendizagem profunda. Artigo – 31º Simpósio Brasileiro de Banco de Dados – SBBD2016 – Salvador.
Boulemtafes, A., Derhab, A., and Challal, Y. (2020). A review of privacy-preserving techniques for deep learning. Neurocomputing, 384:21–45.
Cherubin, G., Kopf, B., Paverd, A., Tople, S., Wutschitz, L., and Zanella-Béguelin, S. (2024). Closed-form bounds for DP-SGD against record-level inference attacks. In 33rd USENIX Security Symposium (USENIX Security 24), pages 4819–4836.
Chua, L., Ghazi, B., Kamath, P., Kumar, R., Manurangsi, P., Sinha, A., and Zhang, C. (2024). How private is DP-SGD? arXiv preprint arXiv:2403.17673.
Dwork, C. (2006). Differential privacy. In International Colloquium on Automata, Languages, and Programming, pages 1–12. Springer.
Haddadi, F., Khanchi, S., Shetabi, M., and Derhami, V. (2010). Intrusion detection and attack classification using feed-forward neural network. In 2010 Second International Conference on Computer and Network Technology, pages 262–266. IEEE.
Hoque, N., Bhattacharyya, D. K., and Kalita, J. K. (2015). Botnet in DDoS attacks: Trends and challenges. IEEE Communications Surveys & Tutorials, 17(4):2242–2270.
Hu, H., Salcic, Z., Sun, L., Dobbie, G., Yu, P. S., and Zhang, X. (2022). Membership inference attacks on machine learning: A survey. ACM Computing Surveys (CSUR), 54(11s):1–37.
Javaid, A., Niyaz, Q., Sun, W., and Alam, M. (2016). A deep learning approach for network intrusion detection system. In Proceedings of the 9th EAI International Conference on Bio-inspired Information and Communications Technologies, pages 21–26.
Machooka, D., Yuan, X., Roy, K., and Chen, G. (2024). Comparison of LSTM and MLP trained under differential privacy for intrusion detection. In 2024 International Conference on Artificial Intelligence, Big Data, Computing and Data Communication Systems (icABCD), pages 1–10. IEEE.
Nissim, K., and Wood, A. (2018). Is privacy privacy? Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2128):20170358.
Oliveira, J. A. d. (2024). F-NIDS: Sistema de detecção de intrusão baseado em aprendizado federado.
Pinheiro, A. J., de Araujo-Filho, P. F., Bezerra, J. d. M., and Campelo, D. R. (2020). Adaptive packet padding approach for smart home networks: A tradeoff between privacy and performance. IEEE Internet of Things Journal, 8(5):3930–3938.
Pouyanfar, S., Sadiq, S., Yan, Y., Tian, H., Tao, Y., Reyes, M. P., Shyu, M.-L., Chen, S.-C., and Iyengar, S. S. (2018). A survey on deep learning: Algorithms, techniques, and applications. ACM Computing Surveys (CSUR), 51(5):1–36.
Pustozerova, A., Baumbach, J., and Mayer, R. (2023). Differentially private federated learning: Privacy and utility analysis of output perturbation and DP-SGD. In 2023 IEEE International Conference on Big Data (BigData), pages 5549–5558. IEEE.
Qi, X., Wang, T., and Liu, J. (2017). Comparison of support vector machine and softmax classifiers in computer vision. In 2017 Second International Conference on Mechanical, Control and Computer Engineering (ICMCCE), pages 151–155. IEEE.
Reis, C. H. (2021). Otimização de hiperparâmetros em redes neurais profundas. Minas Gerais.
Rimer, M., and Martinez, T. (2004). SoftProp: Softmax neural network backpropagation learning. In 2004 IEEE International Joint Conference on Neural Networks (IEEE Cat. No. 04CH37541), volume 2, pages 979–983. IEEE.
Garcia, S., Parmisano, A., and Erquiaga, M. J. (2020). IoT-23: A labeled dataset with malicious and benign IoT network traffic (Version 1.0.0). DOI: 10.5281/zenodo.4743746.
Siachos, I., Kaltakis, K., Papachristopoulou, K., Giannoulakis, I., and Kafetzakis, E. (2023). Comparison of machine learning algorithms trained under differential privacy for intrusion detection systems. In 2023 IEEE International Conference on Cyber Security and Resilience (CSR), pages 654–658. IEEE.
Tang, Q., Shpilevskiy, F., and Lecuyer, M. (2024). DP-AdamBC: Your DP-Adam is actually DP-SGD (unless you apply bias correction). In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 15276–15283.
TensorFlow Privacy (2024). TensorFlow Privacy - DPKerasAdamOptimizer.
Vidal, I. d. C. (2020). Protecting: Garantindo a privacidade de dados gerados em casas inteligentes localmente na borda da rede.
Zhang, Z., Yan, C., and Malin, B. A. (2022). Membership inference attacks against synthetic health data. Journal of Biomedical Informatics, 125:103977.
Published
2025-05-19
How to Cite
SILVA, Davi Bezerra Yada da; DOS SANTOS, Aldri Luiz; BEZERRA, Jeandro de M. Applying Differential Privacy Against Membership Inference Attacks in the Internet of Things. In: BRAZILIAN SYMPOSIUM ON COMPUTER NETWORKS AND DISTRIBUTED SYSTEMS (SBRC), 43., 2025, Natal/RN. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2025. p. 980-993. ISSN 2177-9384. DOI: https://doi.org/10.5753/sbrc.2025.6445.
