DCGA2N: Densely Connected Generative Attentional Adversarial Network for Indian Sign Language Recognition

Prachi Pramod Waghmare, Ashwini Mangesh Deshpande

Abstract

Every individual communicates in their own way, and sign language, commonly used by people with speech and hearing impairments, is an important part of this communication spectrum. When a signer interacts with someone unfamiliar with sign language, however, conveying thoughts becomes challenging, and a sign language recognition system can serve as a valuable aid. In this work, we propose a novel Densely Connected Generative Attentional Adversarial Network (DCGA2N) model for Indian Sign Language (ISL) recognition. Using the MediaPipe Holistic pipeline, key features such as hand gestures, facial expressions, and body poses are extracted and stored in Comma-Separated Values (CSV) format. These features are then fed into the densely connected generative attentional adversarial network, comprising a generator and a discriminator, for sign language interpretation. We evaluate the proposed approach with various metrics on three ISL datasets, achieving accuracies of 97.61% on ISL dataset-1 and 97.91% on ISL dataset-2. We further compare the proposed technique against existing methods and demonstrate its superior performance. This research showcases the potential of the proposed method to advance sign language recognition and enhance communication accessibility for people with speech and hearing impairments.
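The feature-extraction step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the landmark counts match MediaPipe Holistic's published layout (33 pose landmarks with visibility, 468 face landmarks, 21 landmarks per hand), but the exact feature ordering, any normalisation, and the CSV schema used in the paper are assumptions. Missing detections (e.g. a hand out of frame) are zero-filled so every frame yields a fixed-length row.

```python
import numpy as np

# Landmark counts used by MediaPipe Holistic (pose landmarks carry a
# visibility value; face and hand landmarks are x, y, z only).
POSE_LANDMARKS, FACE_LANDMARKS, HAND_LANDMARKS = 33, 468, 21

def flatten(landmarks, count, dims):
    """Flatten a landmark list to a 1-D array, zero-filling when the
    detector returned nothing for this frame."""
    if landmarks is None:
        return np.zeros(count * dims)
    return np.array([v for lm in landmarks for v in lm[:dims]])

def frame_to_row(pose=None, face=None, left_hand=None, right_hand=None):
    """Concatenate pose, face, and hand features into one feature row
    (hypothetical ordering; 33*4 + 468*3 + 2*21*3 = 1662 values)."""
    return np.concatenate([
        flatten(pose, POSE_LANDMARKS, 4),        # x, y, z, visibility
        flatten(face, FACE_LANDMARKS, 3),
        flatten(left_hand, HAND_LANDMARKS, 3),
        flatten(right_hand, HAND_LANDMARKS, 3),
    ])

# One row per video frame can then be written to CSV, e.g.:
#   np.savetxt("features.csv", np.stack(rows), delimiter=",")
```

A fixed-length row per frame is what makes the CSV representation convenient: every frame maps to the same columns regardless of which landmarks were actually detected.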
