Yasith Samaradivakara
Freestyle Technologist & HCI Researcher
I’m a graduate student in the Fluid Interfaces group at the MIT Media Lab, advised by Prof. Pattie Maes. Through my work, I build first-person, always-on wearables and multimodal AI systems and study how they augment human perception and cognition, helping people understand themselves and the world. I aim to inspire technologies that elevate our inner capacities and unlock humanity’s potential while preserving our humanity.
Before MIT, I was a Researcher and Engineer at the Augmented Human Lab (National University of Singapore) with Prof. Suranga Nanayakkara, where I designed intelligent wearable companions as sensory substitutes for people with different abilities.
SeEar
A low-cost AR device that provides localized, real-time captions to enhance the educational experience of Deaf and Hard-of-Hearing (DHH) students.
AiSee
AI-powered companion that helps people with visual impairments independently access visual information.
Mirai
Wearable AI system with an integrated camera, real-time speech processing, and personalized voice cloning that provides proactive, contextual nudges for positive behavior change.
Hatthini
Conversational AI companion that helps users regulate and manage their emotions effectively.
Kavy
Conversational AI agent designed to foster spoken-language skills and build self-confidence.
ICOAE
Wristband and mobile app for health monitoring and emergency handling, designed for elderly individuals.
SpeechAssist
Mobile app designed to support advanced stammering treatments such as Delayed Auditory Feedback (DAF).
CHI Conference on Human Factors in Computing Systems, 2024
CHI Conference on Human Factors in Computing Systems, 2025
Findings of the Association for Computational Linguistics: EMNLP, 2023
Proceedings of the Augmented Humans International Conference, 2024
Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology, 2024
International Journal of Human-Computer Interaction, 2024