Federated Model Synchronization for Diagnostic Redefinition through Novel Selective Parameter Unlearning
Abstract
Federated learning (FL) enables multiple medical institutions to collaboratively train machine learning models without sharing sensitive patient data, thereby preserving privacy. However, as medical guidelines and disease classifications evolve, deployed models can become outdated and require updates to remain clinically relevant. We propose a novel approach that efficiently updates federated models by selectively removing outdated knowledge without full retraining. Our method uses gradient-based Shapley value approximations to identify and modify the model parameters most strongly associated with obsolete diagnostic categories, enabling precise unlearning of outdated information while preserving performance on current diagnoses. We validate the method on the PathMNIST and COVID-19 Radiography datasets, showing that it can eliminate specific diagnostic classes with minimal accuracy loss on the remaining conditions. Our method requires only a single communication round among clients and offers finer control than prior techniques by targeting individual parameters rather than whole channels, making it especially well suited to keeping federated medical models aligned with evolving medical knowledge.
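The selective unlearning idea in the abstract can be sketched in miniature. The snippet below is an illustrative assumption, not the paper's implementation: it uses a plain linear softmax classifier, scores each parameter with a first-order (gradient-times-weight) importance approximation computed on samples from the obsolete class, and zeroes the top-k highest-scoring parameters. All function names are hypothetical.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def grad_cross_entropy(W, X, y, n_classes):
    """Gradient of mean cross-entropy loss w.r.t. linear weights W (d x C)."""
    probs = softmax(X @ W)                 # (n, C) predicted class probabilities
    onehot = np.eye(n_classes)[y]          # (n, C) one-hot targets
    return X.T @ (probs - onehot) / len(y)

def first_order_importance(W, grad):
    # First-order Taylor score |g * w|: a cheap stand-in for the
    # gradient-based Shapley value approximation described in the abstract.
    return np.abs(grad * W)

def selective_unlearn(W, X_obsolete, y_obsolete, n_classes, k):
    """Zero the k parameters most responsible for the obsolete class."""
    g = grad_cross_entropy(W, X_obsolete, y_obsolete, n_classes)
    scores = first_order_importance(W, g).ravel()
    top_k = np.argpartition(scores, -k)[-k:]   # indices of k highest scores
    W_new = W.copy().ravel()
    W_new[top_k] = 0.0                         # targeted, per-parameter edit
    return W_new.reshape(W.shape)
```

Because only parameter indices and updated values change, one averaging round suffices to propagate the edit to all clients, which is the single-communication-round property claimed above.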