Overview
The world's first visual conversational AI assistant, designed for and with people who are blind or visually impaired.
Role
- Developed firmware and software for an Android wearable.
- Integrated advanced speech and image processing algorithms.
- Achieved 3x faster response times and higher accuracy than leading products.
- Created a multimodal agentic AI architecture for seamless interaction with visual language models.
- Currently building a visual memory layer to enhance proactive memory recall.
Key Findings
- Currently leading a study with 5 visually impaired participants, evaluating trust, satisfaction, and system accuracy.
Recognition
- Selected for Prototypes for Humanity 2024.
- Winner of the Meta Llama Innovation Impact Award 2024.
- First-author paper in preparation for IMWUT's February 2025 cycle.