Enhancing Federated Learning Evaluation: Exploring Instance-Level Insights with SQUARES in Image Classification Models

Mayank Jindal, Madan Mohan Tito Ayyalasomayajula, Dedeepya Sai Gondi, Harish Mashetty

Abstract

Federated Learning (FL) is a distributed approach to Machine Learning (ML) in which models are trained at the edge, with data remaining localized on individual devices, thereby preserving user privacy. This stands in contrast to the conventional centralized paradigm, in which users transmit their data to a central server for processing. The FL setting is also more heterogeneous, with data distributions varying across clients, a departure from the carefully engineered and analyzed datasets typical of centralized training. Evaluation of FL models typically relies on statistical measures such as accuracy, recall, precision, log loss, and the confusion matrix, alongside visualization methods. While these techniques summarize a model's overall performance and data utilization, they may lack the granularity needed to compare models with disparate characteristics. The SQUARES technique, in contrast, offers a more nuanced, instance-level evaluation of model performance, enabling the examination of data biases, the detection of outliers, and the inspection of model behavior on individual samples during training. This study therefore presents the development and evaluation of an FL image classification model across various scenarios using the SQUARES prototype. By addressing these more intricate scenarios, we aim to augment traditional visualizations and metrics, uncovering insights and nuances that may elude the standard evaluation methods prevalent in ML benchmarks.
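
The sketch below is a minimal illustration, not the authors' implementation or the SQUARES API: it uses simulated per-client labels and softmax outputs (the client names, data sizes, and scores are assumptions) to contrast the aggregate metrics named in the abstract with the kind of per-instance prediction records that instance-level tools such as SQUARES are designed to visualize.

```python
# Illustrative sketch only: simulated federated image-classifier outputs,
# compared at the aggregate level and at the instance level.
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             log_loss, confusion_matrix)

rng = np.random.default_rng(0)
num_classes = 5
# Hypothetical clients, each holding 200 locally labeled samples.
clients = {f"client_{i}": rng.integers(0, num_classes, size=200) for i in range(3)}

records = []                # one row per evaluated instance
all_true, all_proba = [], []

for client_id, y_true in clients.items():
    # Stand-in for per-client inference: softmax scores for each local sample.
    logits = rng.normal(size=(len(y_true), num_classes))
    logits[np.arange(len(y_true)), y_true] += 2.0   # bias toward correct class
    proba = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    y_pred = proba.argmax(axis=1)

    all_true.append(y_true)
    all_proba.append(proba)

    # Instance-level view: keep every sample's prediction and confidence so
    # data biases, outliers, and per-client behavior remain inspectable.
    for idx, (t, p) in enumerate(zip(y_true, y_pred)):
        records.append({"client": client_id, "sample": idx, "true": int(t),
                        "pred": int(p), "confidence": float(proba[idx, p])})

y_true = np.concatenate(all_true)
proba = np.vstack(all_proba)
y_pred = proba.argmax(axis=1)

# Aggregate view: the summary statistics cited in the abstract.
print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred, average="macro", zero_division=0))
print("recall   :", recall_score(y_true, y_pred, average="macro", zero_division=0))
print("log loss :", log_loss(y_true, proba, labels=list(range(num_classes))))
print("confusion matrix:\n", confusion_matrix(y_true, y_pred))

# The `records` list is what an instance-level tool would consume, e.g. the
# lowest-confidence misclassifications across clients.
worst = sorted((r for r in records if r["true"] != r["pred"]),
               key=lambda r: r["confidence"])[:5]
print("hardest misclassified instances:", worst)
```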
