SignConnect
Abstract
Millions of people with speech and hearing
impairments communicate in sign language
every day. For hearing-impaired people, gesture
recognition is as natural a mode of communication
as speech is for most others. In this project, we
address the problem of translating sign language
to text and propose an improved solution based on
deep learning techniques. We aim to build a system
that hearing-impaired people can use in their
everyday lives to promote communication and
collaboration between hearing-impaired people and
people who are not trained in sign language. We
therefore propose a system that recognizes sign
language gestures captured by a webcam and
predicts the correct sign. The system uses deep
learning techniques: convolutional neural networks
with max pooling and ReLU activation functions.
We aim to create software that is affordable and
accessible to users without compromising
recognition accuracy.
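The building blocks named above (convolution, ReLU activation, and max pooling) can be illustrated with a minimal NumPy sketch. This is not the project's actual pipeline: the 28x28 frame size, the edge-detection kernel, and the pooling window are illustrative placeholders.

```python
import numpy as np

def relu(x):
    # ReLU activation: zero out negative values
    return np.maximum(x, 0)

def conv2d(image, kernel):
    # "Valid" 2-D convolution (cross-correlation) of an image with a kernel
    h, w = kernel.shape
    H, W = image.shape
    out = np.zeros((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + h, j:j + w] * kernel)
    return out

def max_pool(x, size=2):
    # Non-overlapping max pooling with stride equal to the window size
    H, W = x.shape
    H2, W2 = H // size, W // size
    x = x[:H2 * size, :W2 * size].reshape(H2, size, W2, size)
    return x.max(axis=(1, 3))

# Hypothetical 28x28 grayscale gesture frame and a 3x3 vertical-edge kernel
frame = np.random.rand(28, 28)
kernel = np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]], dtype=float)

# One conv -> ReLU -> pool stage: 28x28 -> 26x26 -> 13x13 feature map
features = max_pool(relu(conv2d(frame, kernel)))
print(features.shape)  # (13, 13)
```

In the full system, several such stages would be stacked and trained (e.g. with a framework such as TensorFlow or PyTorch), with a final classification layer predicting the sign label.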