Sign Language Translator with Speech Recognition Integration: Bridging the Communication Gap

Authors

  • Prof. (Dr.) Kaushika D Patel, Professor, Department of Electronics Engineering, Birla Vishvakarma Mahavidyalaya Engineering College (An Autonomous Institution), Vallabh Vidyanagar 388 120, Affiliated to Gujarat Technological University, Gujarat, India
  • Kasak Vaghasiya, UG Student, Department of Electronics Engineering, Birla Vishvakarma Mahavidyalaya Engineering College (An Autonomous Institution), Vallabh Vidyanagar 388 120, Affiliated to Gujarat Technological University, Gujarat, India
  • Radhika Savaliya, UG Student, Department of Electronics Engineering, Birla Vishvakarma Mahavidyalaya Engineering College (An Autonomous Institution), Vallabh Vidyanagar 388 120, Affiliated to Gujarat Technological University, Gujarat, India

DOI:

https://doi.org/10.32628/IJSRSET25122201

Abstract

This paper presents the development of a real-time sign language translator that integrates speech recognition technology to bridge communication gaps between sign language users and non-signers. The system translates spoken or typed words into corresponding American Sign Language (ASL) and Indian Sign Language (ISL) gestures using a predefined database of sign representations. The paper discusses the system’s architecture, implementation challenges, results, and potential applications in education, public spaces, and assistive technology. Additionally, the research explores the integration of AI-driven enhancements to improve translation accuracy, accommodate a broader vocabulary, and incorporate non-manual sign components such as facial expressions and body language. Future work will focus on refining gesture recognition, improving speech processing, and ensuring accessibility across diverse linguistic and cultural contexts.
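
The pipeline outlined in the abstract has three stages: recognize spoken words (or accept typed input), look each word up in a predefined database of ASL/ISL sign representations, and display the matching gestures. The snippet below is a minimal, hypothetical illustration of that flow, not the authors' implementation: it assumes a browser front end, uses the Web Speech API for the speech-to-text stage, and models the sign database as a plain word-to-clip-URL map; all identifiers, file paths, and the locale are illustrative.

```typescript
// Minimal sketch of the speech-to-sign flow described above (illustrative only).
// Assumptions not taken from the paper: a browser front end, the Web Speech API
// for recognition, and a word -> gesture-clip-URL map standing in for the
// "predefined database of sign representations".

type SignDatabase = Record<string, string>; // word -> URL of an ASL/ISL gesture clip

const signDb: SignDatabase = {
  hello: "/signs/isl/hello.mp4", // hypothetical paths
  thank: "/signs/isl/thank.mp4",
  you: "/signs/isl/you.mp4",
};

// Map a transcript (spoken or typed input) to a sequence of gesture clips.
function transcriptToSigns(transcript: string, db: SignDatabase): string[] {
  return transcript
    .toLowerCase()
    .replace(/[^a-z\s]/g, "") // strip punctuation and digits
    .split(/\s+/)
    .filter((word) => word.length > 0)
    .map((word) => db[word])
    .filter((clip): clip is string => clip !== undefined); // skip out-of-vocabulary words
}

// Wire the Web Speech API (where the browser supports it) into the same lookup.
const SpeechRecognitionImpl =
  (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;

if (SpeechRecognitionImpl) {
  const recognition = new SpeechRecognitionImpl();
  recognition.lang = "en-IN"; // illustrative locale
  recognition.interimResults = false;

  recognition.onresult = (event: any) => {
    const transcript: string = event.results[0][0].transcript;
    const clips = transcriptToSigns(transcript, signDb);
    console.log("Recognized:", transcript, "->", clips);
    // A real front end would queue these clips for playback here.
  };

  recognition.start();
}
```

A flat word-for-word lookup of this kind cannot by itself capture sign-language grammar or the non-manual components (facial expressions, body language) noted in the abstract; those would require the AI-driven enhancements the paper identifies as future work.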

References

Ong, A. S., Ranganath, S., & Zhou, Z. (2005). Hand gesture recognition systems: a survey. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 35(3), 309-332.

Mitra, S., & Acharya, T. (2007). Gesture recognition: a survey. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 37(3), 311-324.

Web Speech API: https://developer.mozilla.org/en-US/docs/Web/API/Web_Speech_API

(Add more specific research papers on speech recognition and sign language translation that are relevant to your work here.)

Kaur, M., & Sharma, R. (2020). Speech to sign language translation using machine learning techniques: A review. In International Journal of Advanced Trends in Computer Science and Engineering, 9(2), 1907–1913.

Hussain, A., & Muhammad, G. (2018). Speech-based assistive technology for sign language translation using deep learning. IEEE Access, 6, 40658–40668.

Dong, C., Leu, M. C., & Yin, Z. (2015). American Sign Language alphabet recognition using Microsoft Kinect. In 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 44–52.

Pigou, L., Dieleman, S., Kindermans, P. J., & Schrauwen, B. (2015). Sign language recognition using convolutional neural networks. In European Conference on Computer Vision (pp. 572–578). Springer.

Published

26-04-2025

Issue

Vol. 12, No. 2 (2025)

Section

Research Articles

How to Cite

[1] Prof. (Dr.) Kaushika D Patel, Kasak Vaghasiya, and Radhika Savaliya, “Sign Language Translator with Speech Recognition Integration: Bridging the Communication Gap”, Int J Sci Res Sci Eng Technol, vol. 12, no. 2, pp. 741–749, Apr. 2025, doi: 10.32628/IJSRSET25122201.