cFL: Data distribution edge clustering algorithm based on deep federated learning


Xide Wang, Qingyan Fang

Abstract

Federated learning (FL) allows end devices to train local models on their respective local datasets and collaborate with a server to train a global predictive model, achieving the goals of machine learning while protecting the privacy and sensitive data of the end devices. However, simultaneous access to the server by a large number of end devices can increase transmission latency, and some local models may behave maliciously, converging in the opposite direction to the global model; both effects add communication cost to the federated learning process. Existing research has focused mainly on reducing the number of communication rounds or on cleaning dirty local data. To reduce the overall number of local updates, this study proposes an edge-based model cleaning and device clustering strategy. By computing the cosine similarity between local update parameters and global model parameters in a multi-dimensional space, the approach assesses whether a local update is necessary, preventing pointless communication. In addition, end devices are clustered according to where they connect to the network, and each cluster reaches the cloud through a mobile edge node, lowering the latency caused by highly concurrent server access. As an example, convolutional neural networks and softmax regression are applied to MNIST handwritten digit recognition, confirming that the proposed approach improves communication efficiency. Experimental results show that, compared with the conventional FL approach, the edge-based model cleaning and device clustering technique reduces the number of local updates by 60% and speeds up the model's convergence by 10.3%.
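The update-filtering step described above can be illustrated with a minimal sketch. The Python code below is a hypothetical illustration, not the authors' implementation: it flattens a local update and the current global model into vectors, computes their cosine similarity with NumPy, and skips the upload when the similarity falls below a threshold. The function names and the threshold value are assumptions made for the example.

```python
import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two flattened parameter vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def should_upload(local_params, global_params, threshold=0.0):
    """Decide whether a local update is worth sending to the server.

    local_params / global_params: lists of per-layer weight arrays.
    An update pointing roughly in the same direction as the global
    model (similarity >= threshold) is kept; one pointing the
    opposite way is treated as malicious or useless and dropped
    before it consumes communication bandwidth. The threshold of
    0.0 is an assumed placeholder, not a value from the paper.
    """
    local_vec = np.concatenate([p.ravel() for p in local_params])
    global_vec = np.concatenate([p.ravel() for p in global_params])
    return cosine_similarity(local_vec, global_vec) >= threshold

# Example: a well-aligned update is uploaded, an opposing one is not.
g = [np.ones((3, 3)), np.ones(4)]
aligned = [0.9 * w for w in g]   # same direction as the global model
opposed = [-w for w in g]        # converges opposite to the global model
print(should_upload(aligned, g))  # True
print(should_upload(opposed, g))  # False
```

In a deployment, this check would run on the mobile edge node serving each device cluster, so filtered updates never reach the cloud server at all.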
