Optimizing Image Recognition Efficiency using Sparse Representation Learning and Transfer Learning for Resource-Constrained Environments

Main Article Content

S. Vijayprasath, Arun Aram, S. Vijayalakshmi, C. Gnanaprakasam, P. Gopalsamy, M. A. Mukunthan

Abstract

Image recognition in resource-constrained environments is crucial for mobile devices, IoT, and embedded systems: performing inference on-device reduces the need for internet connectivity and enhances privacy and security. Real-time processing and decision-making are critical for applications such as facial recognition, object detection, and augmented reality, and an efficient on-device approach remains usable in remote areas or situations with limited internet access. Our study proposes a novel network architecture with four modules, emphasizing sparse representation learning and transfer learning to improve image recognition efficiency in resource-constrained environments. The Domain-Adaptive Feature Extractor facilitates effective sparse representation learning by projecting data from diverse domains into a shared space. The Transferable Affine Decoder captures affine relationships between domains to facilitate knowledge transfer, while the Cross-Domain Correspondence Network enforces pixel-level correspondence to extract shared intrinsic representations. The Efficient Classifier Network improves classification accuracy using efficient CNNs. The baseline model achieved an accuracy of 0.89. Improved Model 1, leveraging transfer learning, attained 0.92 accuracy, while Improved Model 2, adding the Cross-Domain Correspondence Network, reached 0.91. The Final Model, combining all of these components, achieved the highest accuracy of 0.94. This holistic approach optimizes resource usage and enables real-time processing, empowering a diverse array of applications in resource-limited environments.
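The abstract's central technique, sparse representation learning, can be illustrated with a minimal sketch. The paper does not publish its algorithm, so the example below uses a standard stand-in: ISTA (iterative shrinkage-thresholding) to compute a sparse code of a signal over a fixed dictionary, the building block that a module like the Domain-Adaptive Feature Extractor would apply to features projected into a shared space. All names and parameter values here are illustrative, not taken from the paper.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of the L1 norm: shrinks values toward zero
    # and sets small ones exactly to zero (the source of sparsity).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_code(D, y, lam=0.05, n_iter=200):
    """ISTA: approximately minimize ||y - D a||^2 + lam * ||a||_1 over a."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2  # 1/L with L the Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - y)            # gradient of the reconstruction term
        a = soft_threshold(a - step * grad, step * lam)
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((16, 32))
D /= np.linalg.norm(D, axis=0)              # unit-norm dictionary atoms
a_true = np.zeros(32)
a_true[[3, 17]] = [1.5, -2.0]               # a genuinely sparse ground truth
y = D @ a_true                              # observed signal
a_hat = sparse_code(D, y)                   # recovered code: sparse, low reconstruction error
```

Because the soft-threshold step zeroes out small coefficients at every iteration, the recovered code `a_hat` is itself sparse, which is what makes such representations cheap to store and transfer on resource-constrained devices.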

Article Details

Section
Articles