Robust Federated Learning Method against Data and Model Poisoning Attacks with Heterogeneous Data Distribution

Alharbi, Ebtisaam and Soriano Marcolino, Leandro and Gouglidis, Antonios and Ni, Qiang (2023) Robust Federated Learning Method against Data and Model Poisoning Attacks with Heterogeneous Data Distribution. In: 26th European Conference on Artificial Intelligence (ECAI 2023). IOS Press, Poland. (In Press)

Full text: 1129Alharbi.pdf - Accepted Version (4MB)

Abstract

Federated Learning (FL) is essential for building global models across distributed environments. However, it is significantly vulnerable to data and model poisoning attacks that can critically compromise the accuracy and reliability of the global model. These vulnerabilities become more pronounced in heterogeneous environments, where clients’ data distributions vary broadly, creating a challenging setting for maintaining model integrity. Furthermore, malicious attacks can exploit this heterogeneity, manipulating the learning process to degrade the model or even induce it to learn incorrect patterns. In response to these challenges, we introduce RFCL, a novel Robust Federated aggregation method that leverages CLustering and cosine similarity to select similar cluster models, effectively defending against data and model poisoning attacks even amidst high data heterogeneity. Our experiments assess RFCL’s performance across varying numbers of attackers and degrees of non-IID data. The findings reveal that RFCL outperforms existing robust aggregation methods and demonstrates the capability to defend against multiple attack types.
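To make the aggregation idea in the abstract concrete, the sketch below shows cosine-similarity-based filtering of client updates before averaging. This is an illustrative assumption, not the authors' exact RFCL algorithm: the clustering step is omitted, and the choice of a coordinate-wise median as the reference direction and a median-similarity cutoff are hypothetical design choices.

```python
# Hedged sketch of cosine-similarity filtering for robust FL aggregation.
# NOT the RFCL algorithm from the paper; an assumed, simplified variant.
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two flattened update vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def robust_aggregate(updates):
    """Average only the client updates most aligned (by cosine
    similarity) with the coordinate-wise median update."""
    updates = np.asarray(updates, dtype=float)
    reference = np.median(updates, axis=0)   # robust reference direction
    sims = np.array([cosine_sim(u, reference) for u in updates])
    keep = sims >= np.median(sims)           # drop the least-aligned half
    return updates[keep].mean(axis=0)

# Toy example: three honest updates pointing one way, one poisoned update
honest = [np.array([1.0, 1.0]), np.array([0.9, 1.1]), np.array([1.1, 0.9])]
poisoned = [np.array([-10.0, -10.0])]
agg = robust_aggregate(honest + poisoned)
```

Here the poisoned update has cosine similarity -1 with the median direction, so it falls below the cutoff and the aggregate stays close to the honest consensus.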

Item Type:
Contribution in Book/Report/Proceedings
Uncontrolled Keywords:
Research Output Funding/no_not_funded
ID Code:
204514
Deposited By:
Deposited On:
19 Sep 2023 13:55
Refereed?:
Yes
Published?:
In Press
Last Modified:
28 Apr 2024 23:17