Translating Gestures: Utilizing CNN to Enhance ASL Communication and Understanding

G. B. Sambare, Shailesh B. Galande, Tejas Ippar, Purnesh Joshi, Yash Maddiwar, Abdul Waasi S Mulla

Abstract

This research introduces a system for translating American Sign Language (ASL) using Convolutional Neural Networks (CNNs). The proposed system can identify manual gestures performed by ASL users and convert them into written text, facilitating seamless communication between the deaf and hearing populations. The CNN model was trained and validated on a dataset of 78,300 images of ASL Alphabet gestures. To improve the model's performance, the data pre-processing pipeline included several stages, such as converting the images to grayscale and normalising the pixel values. The CNN architecture consisted of a succession of convolutional and pooling layers followed by fully connected layers for classification. After 15 epochs of training, the model attained a validation accuracy of 99.85%. These findings demonstrate the viability of employing CNNs to build accurate and efficient ASL translation systems that can serve as a communication aid for people with auditory impairments.
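The abstract describes the pipeline only at a high level, so the following is a minimal Keras sketch of such an architecture: grayscale input with normalised pixel values, stacked convolutional and pooling layers, and fully connected layers for classification. The input resolution (64x64), layer counts, filter sizes, and the 29-class output are illustrative assumptions, not details taken from the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Assumed: ASL Alphabet datasets typically cover A-Z plus
# "space", "delete", and "nothing"; the paper does not state this.
NUM_CLASSES = 29

model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),   # grayscale input, per the abstract
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(128, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),             # fully connected layers
    layers.Dense(NUM_CLASSES, activation="softmax"),  # one score per gesture class
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Pixel values are normalised to [0, 1] before training, as described;
# 15 epochs matches the training schedule reported in the abstract.
# model.fit(x_train / 255.0, y_train, epochs=15,
#           validation_data=(x_val / 255.0, y_val))
```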
