Cross-Language Transfer Learning using Visual Information for Automatic Sign Gesture Recognition

Automatic sign gesture recognition (GR) plays a critical role in facilitating communication between hearing-impaired individuals and the rest of society. However, recognizing sign gestures accurately and efficiently remains challenging due to the diversity of sign languages (SLs) and the limited availability of labeled data for each of them. This paper proposes a new approach to improving the accuracy of automatic sign GR through cross-language transfer learning with visual information. Two large-scale multimodal SL corpora serve as the base SLs for this study: the Ankara University Turkish Sign Language Dataset (AUTSL) and the Thesaurus of Russian Sign Language (TheRusLan). Experimental studies yielded an accuracy of 93.33% on 18 different gestures, including the Russian target SL gestures, exceeding the previous state-of-the-art accuracy by 2.19% and demonstrating the effectiveness of the proposed approach. The study highlights the potential of the approach to enhance the accuracy and robustness of machine SL translation, improve the naturalness of human-computer interaction, and facilitate the social adaptation of people with hearing impairments. Promising directions for future research include applying the approach to other SLs and investigating the impact of individual and cultural differences on GR.
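The page does not include implementation details, so the following is only a minimal PyTorch sketch of the cross-language transfer idea described in the abstract: pretrain a visual encoder on the source SL corpus (AUTSL), then reuse that encoder with a new classification head fine-tuned on the 18 target-language gestures. The class names, dimensions, and stand-in CNN are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: pretrain a visual encoder on the source SL (AUTSL),
# then transfer it to the target SL by swapping the classification head.

class SignClassifier(nn.Module):
    def __init__(self, encoder: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.encoder = encoder                      # frame-level feature extractor
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, frames):                      # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1))  # per-frame features
        feats = feats.view(b, t, -1).mean(dim=1)    # simple temporal pooling
        return self.head(feats)

# 1) Pretrain on the source language (AUTSL contains 226 gesture classes).
encoder = nn.Sequential(nn.Conv2d(3, 32, 3, 2), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten())  # stand-in CNN
source_model = SignClassifier(encoder, feat_dim=32, num_classes=226)
# ... train source_model on AUTSL ...

# 2) Transfer: reuse the encoder, attach an 18-class head, fine-tune on the
#    target-language gestures.
target_model = SignClassifier(source_model.encoder, feat_dim=32, num_classes=18)
for p in target_model.encoder.parameters():
    p.requires_grad = False            # optionally freeze the shared encoder
# ... fine-tune target_model on the 18 target SL gestures ...
```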


Datasets

AUTSL, TheRusLan
Results from the Paper


Task:         Sign Language Recognition
Dataset:      AUTSL
Model:        FE+LSTM
Metric:       Rank-1 Recognition Rate
Value:        0.9338
Global Rank:  #6
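The benchmark entry names the model FE+LSTM, i.e., a frame-level feature extractor followed by an LSTM over the frame sequence. Below is a hedged sketch of what such a pipeline typically looks like; the feature extractor, hidden size, and use of the final hidden state are assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class FELSTM(nn.Module):
    """Illustrative FE+LSTM: per-frame feature extractor + LSTM classifier."""
    def __init__(self, feat_dim=512, hidden=256, num_classes=18):
        super().__init__()
        # Stand-in feature extractor; the paper's choice may differ.
        self.fe = nn.Sequential(nn.Conv2d(3, 64, 7, 2), nn.ReLU(),
                                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                nn.Linear(64, feat_dim))
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, frames):          # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        f = self.fe(frames.flatten(0, 1)).view(b, t, -1)
        _, (h_n, _) = self.lstm(f)      # h_n: (1, batch, hidden)
        return self.fc(h_n[-1])         # classify from final hidden state

# Rank-1 recognition rate is top-1 accuracy over gesture classes.
model = FELSTM()
logits = model(torch.randn(2, 16, 3, 112, 112))   # dummy 16-frame clip batch
print(logits.argmax(dim=1))                        # predicted class per clip
```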
