Metadata privacy and obliviousness in distributed learning via a bulletin board: a proof-of-concept

  • Andreis G. M. Purim UNICAMP
  • Witor M. A. Oliveira UNICAMP
  • Allan M. de Souza UNICAMP

Abstract


While many works study payload privacy in distributed learning, metadata and communication patterns between peers can still reveal sensitive information. We treat metadata privacy as a distinct problem and first introduce a risk and threat model for metadata leakage in distributed learning systems. Building on this analysis, we present a bulletin-board-based communication architecture in which peers exchange activations, gradients, and model updates under configurable privacy and efficiency settings. We implement a proof of concept of the proposed design, available on GitHub, and evaluate it on small-scale distributed learning workloads.
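The bulletin-board exchange described above can be illustrated with a minimal sketch. This is not the authors' implementation: the `BulletinBoard` class, its `post`/`fetch_all` methods, and the random-tag scheme are illustrative assumptions. The sketch shows the core obliviousness idea: peers post opaque payloads under unlinkable random tags, and every reader downloads the whole board, so the board operator cannot tell which entry a given reader wanted.

```python
import secrets

class BulletinBoard:
    """Hypothetical append-only board: peers post opaque payloads under
    random tags, and readers fetch the entire board. Because fetches are
    identical regardless of interest, read patterns leak no metadata."""

    def __init__(self):
        self._entries = []

    def post(self, payload: bytes) -> str:
        # Random tag carries no sender identity; the payload itself is
        # assumed to be encrypted or otherwise opaque to the board.
        tag = secrets.token_hex(8)
        self._entries.append((tag, payload))
        return tag

    def fetch_all(self):
        # Every reader retrieves everything, hiding which entry matters.
        return list(self._entries)

board = BulletinBoard()
tag = board.post(b"model-update-round-1")
entries = dict(board.fetch_all())
assert entries[tag] == b"model-update-round-1"
```

Trading bandwidth for privacy in this way (download-everything) is the simplest point in the design space; the configurable settings mentioned in the abstract would relax it for efficiency.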

Published: 2026-05-25

PURIM, Andreis G. M.; OLIVEIRA, Witor M. A.; SOUZA, Allan M. de. Metadata privacy and obliviousness in distributed learning via a bulletin board: a proof-of-concept. In: BRAZILIAN SYMPOSIUM ON COMPUTER NETWORKS AND DISTRIBUTED SYSTEMS (SBRC), 44., 2026, Praia do Forte/BA. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2026. p. 982-995. ISSN 2177-9384. DOI: https://doi.org/10.5753/sbrc.2026.19928.