AFL-PRF: Adaptive Federated Learning for Low-Quality Data: Enhancing Performance, Robustness, and Fairness
Abstract
Federated learning (FL) enables collaborative model training across distributed clients while preserving privacy, yet its decentralized nature makes it vulnerable to poisoned updates and to performance degradation under highly skewed data. Prior studies typically treat accuracy, robustness, and fairness separately, leaving open the challenge of a unified solution. We propose AFL-PRF, an adaptive federated learning framework that simultaneously enhances accuracy, robustness, and fairness in adversarial and heterogeneous environments. AFL-PRF integrates three key techniques. First, an exponential adaptive weighting mechanism dynamically scales client updates, suppressing poisoned or unreliable contributions while retaining meaningful signals from benign but low-quality clients. Second, a client prioritization strategy guided by a novel Weight Update Divergence (WUD) score promotes reliable updates and their benign neighbors, preventing malicious gradients from dominating aggregation. Third, sensitivity profiling identifies fully connected (FC) layers as highly vulnerable due to their large weight variance, motivating a selective clipping strategy that filters extreme updates in these layers while preserving normal learning dynamics. Extensive experiments on benchmark datasets demonstrate that AFL-PRF consistently outperforms state-of-the-art baselines, achieving over 30% improvement in robustness and over 20% in fairness, while maintaining superior predictive accuracy. By unifying adaptive weighting, client prioritization, and targeted clipping, AFL-PRF establishes a new benchmark for federated learning under poisoned and highly non-IID conditions.
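The first and third mechanisms above can be illustrated with a minimal sketch. The exact functional forms are not specified in the abstract, so the exponential coefficient `beta` and the MAD-based clipping bound below are illustrative assumptions, not the paper's definitions:

```python
import numpy as np

def exponential_adaptive_weights(divergences, beta=2.0):
    """Exponential adaptive weighting (assumed form w_i ~ exp(-beta * d_i)):
    clients with large update divergence receive exponentially smaller
    aggregation weights; weights are normalized to sum to one."""
    d = np.asarray(divergences, dtype=float)
    raw = np.exp(-beta * d)
    return raw / raw.sum()

def clip_fc_update(update, threshold=3.0):
    """Selective clipping for an FC-layer update. Here the clipping bound is
    a robust (median-absolute-deviation) scale estimate -- an illustrative
    choice, since a plain std would be inflated by the outliers themselves."""
    u = np.asarray(update, dtype=float)
    mad = np.median(np.abs(u - np.median(u)))
    bound = threshold * 1.4826 * mad  # 1.4826 * MAD ~ sigma for Gaussian data
    return np.clip(u, -bound, bound)

def aggregate_fc(updates, divergences, beta=2.0):
    """Weighted average of clipped FC-layer updates from all clients."""
    w = exponential_adaptive_weights(divergences, beta)
    return sum(wi * clip_fc_update(u) for wi, u in zip(w, updates))
```

For example, with divergences `[0.1, 0.1, 5.0]`, the third (likely poisoned) client's weight is driven close to zero, while the two benign clients still contribute; clipping additionally caps any extreme entries that survive weighting.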