Gai, K., Wang, D., Yu, J., Zhu, L. and Meng, W. (2025) FedAMM: Federated Learning Against Majority Malicious Clients Using Robust Aggregation. IEEE Transactions on Information Forensics and Security, 20, pp. 9950-9964. ISSN 1556-6013
Abstract
As a collaborative framework designed to safeguard privacy, Federated Learning (FL) seeks to protect participants' data throughout the training process. However, the framework remains exposed to poisoning attacks, since client-side model updates are produced without supervision. Most existing defenses address scenarios in which fewer than half of the clients are malicious, leaving a significant open challenge: defending against attacks when more than half of the participants are malicious. In this paper, we propose an FL scheme, named FedAMM, that resists backdoor attacks across various data distributions and malicious-client ratios. We develop a novel backdoor defense mechanism that filters out malicious models, reducing the resulting degradation of model performance. The proposed scheme addresses the difficulty of distance measurement in high-dimensional spaces by applying Principal Component Analysis (PCA) to improve clustering effectiveness. We borrow the idea of critical-parameter analysis to enhance discriminative ability in non-IID data scenarios, assessing whether a model is benign or malicious by comparing the similarity of critical parameters across models. Finally, our scheme employs hierarchical noise perturbation to improve the backdoor mitigation rate, effectively eliminating the backdoor while limiting the adverse effect of noise on task accuracy. Through evaluations on multiple datasets, we demonstrate that the proposed scheme achieves superior backdoor defense across diverse client data distributions and different ratios of malicious participants. With 80% malicious clients, FedAMM achieves low backdoor attack success rates of 1.14%, 0.28%, and 5.53% on MNIST, FMNIST, and CIFAR-10, respectively, demonstrating enhanced robustness of FL against backdoor attacks.
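The abstract outlines a three-stage filtering idea: compress high-dimensional client updates with PCA, cluster the compressed representations, and judge clusters by the similarity of their critical parameters rather than by cluster size alone (which fails under a malicious majority). The following is a minimal sketch of that pipeline, not FedAMM's implementation: the function name filter_client_updates, the choice of KMeans with two clusters, and the use of the global model's largest-magnitude coordinates as "critical" parameters are all illustrative assumptions.

```python
# Illustrative sketch of PCA-based clustering plus a critical-parameter
# similarity check for filtering client updates. All names and specific
# choices here are assumptions for illustration, not taken from FedAMM.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans


def filter_client_updates(updates, global_model, n_components=5,
                          critical_frac=0.01):
    """updates: (n_clients, n_params) array of flattened model updates.
    global_model: (n_params,) flattened current global model.
    Returns the indices of updates kept for aggregation."""
    # 1. Reduce dimensionality so distance-based clustering is meaningful
    #    in the otherwise high-dimensional parameter space.
    n_components = min(n_components, *updates.shape)
    reduced = PCA(n_components=n_components).fit_transform(updates)

    # 2. Split clients into two candidate groups.
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(reduced)

    # 3. Pick "critical" coordinates (assumption: the largest-magnitude
    #    entries of the global model stand in for the paper's criterion).
    k = max(1, int(critical_frac * global_model.size))
    crit = np.argsort(np.abs(global_model))[-k:]

    # 4. Score each cluster by the mean pairwise cosine similarity of its
    #    members' critical parameters, and keep the more coherent cluster
    #    instead of simply trusting the majority cluster.
    def coherence(group):
        if len(group) < 2:
            return -1.0
        v = updates[group][:, crit]
        v = v / (np.linalg.norm(v, axis=1, keepdims=True) + 1e-12)
        sims = v @ v.T
        return sims[np.triu_indices(len(group), k=1)].mean()

    groups = [np.where(labels == c)[0] for c in (0, 1)]
    return max(groups, key=coherence)
```

Scoring clusters by the internal consistency of critical parameters, rather than by size, is what would let such a filter survive a malicious majority, matching the abstract's motivation. Coherence is only a stand-in here, though: a coordinated malicious majority can also be internally consistent, so FedAMM's actual critical-parameter comparison is presumably more involved than this sketch.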