Strengthening Federated Learning: Addressing Model Poisoning Attack and Defense Methods

Jyoti Yadav, Swati Jadhav, Vilas Kharat, A. D. Shaligram

Abstract

Federated Learning (FL) is a machine learning technique that enables multiple devices to collaboratively train a shared global model without compromising privacy. In FL, data remains on each device, and a model is trained locally on that data; the local model updates are then aggregated to update a global model that represents all devices. Since only model updates, not raw data, are shared with the server, privacy is preserved. FL offers numerous benefits, including privacy preservation, lower communication costs, and better scalability, and it can be applied in areas such as natural language processing, computer vision, and personalized recommendation. However, FL also poses challenges, particularly model poisoning attacks. Model poisoning exploits FL's decentralized, privacy-preserving design: because the central server has no direct access to participants' data, it cannot easily detect malicious participants who send manipulated updates that degrade or maliciously steer the global model. This vulnerability is especially concerning as FL is increasingly adopted in sensitive fields such as medicine and finance, so understanding the various model poisoning attacks and their impact on the global model's performance is critical. By analyzing a range of model poisoning attacks and their countermeasures, this paper highlights the need for a new robust aggregation method that can withstand a wider variety of attacks.
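The aggregation step described above, and its vulnerability to poisoned updates, can be illustrated with a minimal sketch. This is not the paper's proposed method; it is a hypothetical example contrasting plain weighted averaging (FedAvg-style) with a simple robust aggregator (coordinate-wise median), using toy two-parameter updates:

```python
import numpy as np

def fed_avg(client_updates, client_sizes):
    """Weighted average of client updates (FedAvg-style aggregation).

    Weights are proportional to each client's local dataset size.
    """
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()
    stacked = np.stack(client_updates)  # shape: (n_clients, n_params)
    return (weights[:, None] * stacked).sum(axis=0)

def coordinate_median(client_updates):
    """Coordinate-wise median: a simple robust aggregator that limits
    the influence of a small number of poisoned updates."""
    return np.median(np.stack(client_updates), axis=0)

# Toy round: three honest clients plus one poisoned (manipulated) update.
honest = [np.array([1.0, 2.0]), np.array([1.1, 1.9]), np.array([0.9, 2.1])]
poisoned = np.array([100.0, -100.0])
updates = honest + [poisoned]
sizes = [10, 10, 10, 10]

avg = fed_avg(updates, sizes)      # dragged far from the honest consensus
med = coordinate_median(updates)   # stays near the honest updates
```

With these toy numbers, the plain average lands near (25.75, -23.5), while the median stays near (1.05, 1.95), showing why robust aggregation is a natural countermeasure to model poisoning.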
