Kurdish Sign Language Recognition Using Pre-Trained Deep Learning Models

Ali A. Alsaud, Raghad Z. Yousif, Marwan M. Aziz, Shahab W. Kareem, Amer J. Maho

Abstract

In the rich tapestry of human communication, sign language is one of the basic threads, giving voice to hundreds of deaf and hard-of-hearing individuals in the region. Yet the technology for recognizing and translating sign language has lagged far behind what these communities need. This study therefore compares the performance of three top-performing deep learning models in recognizing signs from a Kurdish Sign Language database. The models are evaluated rigorously on a varied set of signs. All three models perform well, and MobileNetV2 stands out as a strong candidate, striking an impressive balance between high accuracy, low space complexity, and acceptable time complexity. We conclude by outlining promising directions for future research, including integrating our models into hardware devices and extending the study to a wider variety of sign languages. Like any good journey, this one raises as many questions as it answers, leaving us inspired by the many possibilities still to be explored to enhance communication for all.
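As a rough illustration of the transfer-learning setup the abstract describes, a pre-trained MobileNetV2 backbone can be given a small classification head in Keras. This is a minimal sketch under stated assumptions: the class count, input size, and head layout are hypothetical, not taken from the paper.

```python
# Sketch: fine-tuning a MobileNetV2 backbone for sign classification.
# NUM_CLASSES and the head layout are illustrative assumptions only.
import tensorflow as tf

NUM_CLASSES = 30  # hypothetical number of Kurdish Sign Language classes

# In practice weights="imagenet" would load the pre-trained backbone;
# weights=None is used here so the sketch runs without a download.
backbone = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3),
    include_top=False,
    weights=None,
    pooling="avg",
)
backbone.trainable = False  # freeze the pre-trained feature extractor

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
```

MobileNetV2's depthwise-separable convolutions keep its parameter count small, which is consistent with the accuracy/space/time trade-off the abstract attributes to it.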
