Deep Learning Based Classification of Single-Hand South Indian Sign Language Gestures

Main Article Content

Rajesh Yakkundimath, Ramesh Badiger, Naveen Malvade

Abstract

Gesture recognition is a branch of computer science and language technology that uses mathematical algorithms to analyze human gestures. Human arm and hand movements play a pivotal role in non-verbal communication. This research introduces multi-stream deep transfer learning models for identifying single-hand signs from three South Indian languages, namely Kannada, Tamil, and Telugu, with the aim of supporting individuals with speech disorders or disabilities. Three deep transfer learning models, Inception-V3, VGG-16, and ResNet-50, were modified and fine-tuned to improve classification performance. The dataset comprises 35,000 images of single-hand gestures. Among the models, Inception-V3 achieved the best results, with a test accuracy of 91.45% and a validation accuracy of 93.45% when classifying single-hand gesture images across thirty-five categories. The significance of this study lies in its potential to underpin an automated system that supports and improves the functional capabilities of individuals with speech disorders or disabilities.

Article Details

Section
Articles
Author Biography

Rajesh Yakkundimath, Ramesh Badiger, Naveen Malvade

1Rajesh Yakkundimath

2Ramesh Badiger

3Naveen Malvade


1,*Department of Computer Science and Engineering, K. L. E. Institute of Technology, Visvesvaraya Technological University, Belagavi 590018, Karnataka, India

2Department of Computer Science and Engineering, Tontadarya College of Engineering, Visvesvaraya Technological University, Gadag 582101, Karnataka, India

3Department of Information Science and Engineering, S.K.S.V.M. A. College of Engineering and Technology, Lakshmeshwar 582116, Karnataka, India

*Corresponding Author: Rajesh Yakkundimath

*Department of Computer Science and Engineering, K. L. E. Institute of Technology, Visvesvaraya Technological University, Belagavi 590018, Karnataka, India

