Adaptive Client-Dropping in Federated Learning: Preserving Data Integrity in Medical Domains
Abstract
In this work, we address the challenge of training machine learning models on sensitive clinical data while ensuring data privacy and robustness against data corruption. Our primary contribution is an approach that integrates Conformal Prediction (CP) techniques into Federated Learning (FL) to enhance the detection and exclusion of corrupted data contributors. By implementing a client-dropping strategy based on an adaptive threshold informed by the interval width metric, we dynamically identify and exclude unreliable clients. This approach, tested using the MedMNIST dataset with a ResNet50 architecture, effectively isolates and discards corrupted inputs, maintaining the integrity and performance of the learning model. Our findings demonstrate that this strategy prevents the potential 10% decrease in accuracy that can occur without such measures, confirming the efficacy of our CP-enhanced FL methodology in ensuring robust and private data handling in sensitive domains like healthcare.
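The client-dropping rule described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the dictionary input, and the specific adaptive rule (mean plus a multiple of the standard deviation of per-client interval widths) are all assumptions made for the example; the paper's exact thresholding criterion may differ.

```python
import numpy as np

def drop_unreliable_clients(interval_widths, k=1.5):
    """Keep clients whose mean conformal-prediction interval width is
    below an adaptive threshold (mean + k * std across all clients).

    interval_widths: dict mapping client id -> mean CP interval width
    measured on that client's predictions (hypothetical input format).
    Returns the set of client ids retained for federated aggregation.
    """
    widths = np.array(list(interval_widths.values()))
    threshold = widths.mean() + k * widths.std()
    return {cid for cid, w in interval_widths.items() if w <= threshold}

# Toy usage: client "c3" holds corrupted data, so its conformal
# intervals are much wider than its peers' and it gets excluded.
widths = {"c1": 0.12, "c2": 0.15, "c3": 0.90, "c4": 0.14}
kept = drop_unreliable_clients(widths)  # {"c1", "c2", "c4"}
```

Because the threshold is computed relative to the current round's distribution of interval widths, it adapts as clients join, leave, or degrade, rather than relying on a fixed cutoff.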
Published
17/11/2024
How to Cite
NEGRÃO, Arthur; SILVA, Guilherme; PEDROSA, Rodrigo; LUZ, Eduardo; SILVA, Pedro. Adaptive Client-Dropping in Federated Learning: Preserving Data Integrity in Medical Domains. In: BRAZILIAN CONFERENCE ON INTELLIGENT SYSTEMS (BRACIS), 13., 2024, Belém/PA. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2024. p. 111-126. ISSN 2643-6264.