Detecting Sign Language for the Deaf and Mute Using Neural Networks

Authors

  • Basel A. Dabwan AlBaydha University, AlBaydha, Yemen
  • Mukti E. Jadhav
  • Prachi Janrao Thakur Engineering College, Kandivali, Mumbai, India
  • Avinash B. Kadam

DOI:

https://doi.org/10.25215/1103.391

Keywords:

Sign Language, Neural Network, Deaf and Mute, Machine Learning

Abstract

Living without communication is extremely challenging for humans. People employ different methods to express ideas and exchange them between sender and receiver. Speaking and gesturing are the most common means of communication. Speech refers to audible communication perceived through hearing, whereas gestures involve body movements such as the hands and facial expressions. Sign language is a gestural language that is conveyed and understood through visual perception. While most people can choose whether to use gestures in their communication, deaf individuals rely on sign language as their primary means of communication. Deaf and mute individuals need communication to engage with others, acquire knowledge, and participate in the activities around them. Sign language serves as the bridge that closes the divide between them and the rest of society. We have developed models that detect sign language and convert it into ordinary text, allowing hearing people to understand what individuals with these disabilities want to say. After training these models on a dataset using a neural network, we achieved excellent results.
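
The abstract describes training a neural network to recognize sign-language gestures and convert the predictions into text. The paper does not state its architecture or dataset here, so the sketch below is only a minimal illustration of that idea: a small convolutional classifier over gesture images, with the image size, class count (one assumed class per letter), and placeholder training data all being illustrative assumptions rather than the authors' exact model.

```python
# Illustrative sketch only: the abstract does not publish the exact
# architecture or dataset, so shapes, layer sizes, and class count
# below are assumptions made for demonstration.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 26          # assumed: one class per sign-language letter
IMG_SHAPE = (64, 64, 1)   # assumed: small grayscale images of hand gestures

# A small convolutional network that maps a gesture image to a letter class.
model = models.Sequential([
    layers.Input(shape=IMG_SHAPE),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Random placeholder data stands in for a real labeled gesture dataset.
x_train = np.random.rand(32, *IMG_SHAPE).astype("float32")
y_train = np.random.randint(0, NUM_CLASSES, size=(32,))
model.fit(x_train, y_train, epochs=1, batch_size=8)

# Predicted class indices are mapped back to letters to form readable text.
letters = [chr(ord("A") + i) for i in range(NUM_CLASSES)]
pred = model.predict(x_train[:1]).argmax(axis=-1)[0]
print("Predicted letter:", letters[pred])
```

In practice, predicted letters from consecutive frames would be concatenated into words, which is how gesture recognition becomes "normal text" for a hearing reader.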


Published

2022-11-05

How to Cite

Basel A. Dabwan, Mukti E. Jadhav, Prachi Janrao, & Avinash B. Kadam. (2022). Detecting Sign Language for the Deaf and Mute Using Neural Networks. International Journal of Indian Psychology, 11(3). https://doi.org/10.25215/1103.391