Affective Sign-to-Voice Translator

An emotionally aware, dynamic sign-to-voice translator

Objective


A collaboration with the MIT Media Lab to develop a dynamic sign-language-to-voice translator for Deaf and Hard of Hearing (DHH) individuals that incorporates emotional awareness.

Phase 1: Sign Language Recognition


  • Building a computer vision model that recognizes sign language while tracking emotion through the following cues (see the sketch after this list):
    • Hand gestures.
    • Facial expressions.
    • Body language.
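
A minimal sketch of the landmark-extraction step, assuming OpenCV and the open-source MediaPipe Holistic model; the function name, video path, and confidence thresholds are illustrative and not the project's actual pipeline:

```python
import cv2
import mediapipe as mp

mp_holistic = mp.solutions.holistic

def extract_landmarks(video_path: str):
    """Yield per-frame hand, face, and pose landmarks from a signing video."""
    cap = cv2.VideoCapture(video_path)
    with mp_holistic.Holistic(
        min_detection_confidence=0.5,  # illustrative threshold
        min_tracking_confidence=0.5,   # illustrative threshold
    ) as holistic:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV decodes frames as BGR.
            results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            yield {
                "left_hand": results.left_hand_landmarks,    # handshape
                "right_hand": results.right_hand_landmarks,  # handshape
                "face": results.face_landmarks,              # facial affect
                "pose": results.pose_landmarks,              # body language
            }
    cap.release()
```

These per-frame landmarks would then feed a downstream sequence model for joint sign and emotion recognition.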

Phase 2: Voice Cloning System


  • Translating recognized signs to speech using a voice-cloning system designed for speech-impaired DHH individuals (see the sketch after this list).
  • Enabling parents, relatives, and friends to hear their loved one’s "voice" for the first time.
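
A minimal sketch of zero-shot voice cloning, assuming the open-source Coqui TTS library and its XTTS v2 model; the input text, reference recording, and file paths are hypothetical, and the project's actual cloning system is not specified in this document:

```python
from TTS.api import TTS

# Load a pretrained multilingual voice-cloning model (Coqui XTTS v2).
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# Speak the translated sign content in a voice conditioned on a short
# reference recording (hypothetical file paths).
tts.tts_to_file(
    text="Hello, it's so good to see you.",  # output of the sign translator
    speaker_wav="reference_voice.wav",       # short sample of the target voice
    language="en",
    file_path="cloned_output.wav",
)
```

In practice the reference voice might come from a relative or a donor speaker, since a speech-impaired user may have no recording of their own voice; that design choice is outside the scope of this sketch.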

Publications


Planning multiple publications at top-tier Artificial Intelligence/Computer Vision and Human-Computer Interaction venues.