Soundability Lab

Transforming the Human Experience of Sound

About us

We are an advanced Human-Computer Interaction research lab in the Computer Science and Engineering Department at the University of Michigan. Our mission is to redefine hearing as a programmable, editable, and hyperpersonal experience—not a fixed sense, but a customizable interface between humans and their world—much like how web design lets us program, rearrange, and personalize visual elements.

We design human-centered, agentic AI that empowers people to shape how they hear, perceive, and interact with sound. Our research spans accessibility, healthcare, and entertainment domains, with current projects including editable digital media soundscapes, relational audio tools for cross-neurotype communication, and adaptive hearing systems for clinical environments.

Accessibility is a core driver of our work. We view sound personalization not just as a convenience, but as a powerful way to make sound more inclusive and equitable. This includes developing systems for real-time audio captioning, enabling users to edit or customize captions, and creating tools that translate sound into formats tailored to individual sensory and cognitive needs.
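As a toy illustration of the kind of pipeline such tools build on (not our actual implementation), the Python sketch below captures short windows of microphone audio, runs a classifier, and renders the result as a text caption that a user interface could personalize. The classify_window function is a hypothetical, self-contained stand-in for a trained sound-event model.

    # Illustrative sketch only: a minimal sound-awareness loop that records
    # short audio windows, classifies them, and prints a caption. A real
    # system would replace classify_window with a trained model.
    import numpy as np
    import sounddevice as sd  # pip install sounddevice

    SAMPLE_RATE = 16_000   # Hz
    WINDOW_SECONDS = 1.0   # length of each analysis window

    def classify_window(audio: np.ndarray) -> tuple[str, float]:
        """Hypothetical classifier returning (label, confidence).

        This stub just thresholds loudness so the example runs
        without any pretrained model."""
        rms = float(np.sqrt(np.mean(audio ** 2)))
        if rms > 0.02:
            return "loud sound", min(rms * 10.0, 1.0)
        return "quiet", 1.0 - rms

    def caption(label: str, confidence: float) -> str:
        # Caption wording and verbosity could be personalized per user.
        return f"[{label}] ({confidence:.0%} confident)"

    def main(num_windows: int = 10) -> None:
        frames = int(SAMPLE_RATE * WINDOW_SECONDS)
        for _ in range(num_windows):
            audio = sd.rec(frames, samplerate=SAMPLE_RATE,
                           channels=1, dtype="float32")
            sd.wait()  # block until this window finishes recording
            label, conf = classify_window(audio[:, 0])
            print(caption(label, conf))

    if __name__ == "__main__":
        main()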

We focus on accessibility because it is the entry point to future interfaces. Communities that rely on captioning, sensory augmentation, or neurodivergent-friendly tools are often the earliest adopters of new technologies. By designing for these users, we uncover what's next for everyone.

Our work is already making real-world impact. The SoundWatch app for sound awareness on smartwatches has over 4,000 downloads. Our indoor navigation system for visually impaired users has been used more than 100,000 times in museums across India. Our clinical communication tools are being deployed at Michigan Medicine. A feature we pioneered for people with paralysis is now built into every iPhone, and our work has directly influenced real-time captioning tools at Google.

These outcomes are made possible by our lab’s deeply collaborative, community-centered research. Our team includes HCI and AI researchers working with healthcare professionals, neuroscientists, engineers, psychologists, Deaf and disability studies scholars, musicians, designers, and sound artists. We partner closely with community members, end users, and organizations to tackle complex challenges and turn ideas into meaningful, trusted technologies.

Looking forward, we imagine a future of auditory superintelligence—systems that don’t just support hearing, but expand it. These tools will filter, interpret, and reshape soundscapes in real time, adapting to our needs and context. From cognitive hearing aids to sound-based memory support and emotion-aware audio companions, we aim to make hearing a fully customizable experience.

Explore our projects below—and get in touch if you’re interested in working with us.

Recent News

Jul 7: Our work on sound personalization for digital media accessibility has been accepted to ICMI 2025!
Jul 3: Three papers on improving DHH accessibility have been accepted to ASSETS 2025: SoundNarratives, CapTune, and CARTGPT! See you in Denver!
Jun 7: Our work on designing an adaptive music system for exercise has been accepted to ISMIR 2025!
Apr 15: We have three new incoming PhD students in Fall 2025. Welcome Sid, Lindy, and Veronica!
Mar 16: Our work on improving communication in operating rooms has been accepted to the journal ORL-Head and Neck Nursing!
Feb 22: Our initial work on using runtime generative tools to improve accessibility of virtual 3D scenes has been accepted to CHI 2025 LBW!
Feb 19: Our initial work on enhancing communication in high-noise operating rooms has been accepted as a poster at Collaborating Across Borders IX!
Jan 16: Our SoundWeaver system, which weaves multiple sources of sound information to present them accessibly to DHH users, has been accepted to CHI 2025!
Jan 16: Our proposal on dizziness diagnosis was approved for funding from William Demant Foundation! See news article.
Oct 30: Our CARTGPT work received the best poster award at ASSETS!
Oct 11: Soundability lab students are presenting 7 papers, demos, and posters at the upcoming UIST and ASSETS 2024 conferences!
Sep 30: We were awarded the Google Academic Research Award for Leo and Jeremy's project!

Our Team

Dhruv "DJ" Jain
Assistant Professor, Computer Science & Engineering (Lab head)

Jeremy Huang
PhD Student, Computer Science & Engineering

Xinyun Cao
PhD Student, Computer Science & Engineering

Lindy Le
PhD Student, Computer Science & Engineering

Sidharth
PhD Student, Computer Science & Engineering

Veronica Pimenova
PhD Student, School of Information (Co-advised by Venkatesh Potluri)

Liang-Yuan Wu
Pre-PhD Researcher, Computer Science & Engineering

Sarah Hughes
Medical Student, Michigan Medicine

Michael M. McKee
Professor of Family Medicine, Michigan Medicine (Collaborator)

Devin McCaslin
Professor and Chief of Audiology, Michigan Medicine (Collaborator)

Alumni

Alexander Wang
Visiting Researcher, CSE (now at CMU)

Hriday Chhabria
Undergraduate Student, CSE (now at UCSD)

Hanlong Liu
Undergraduate Student, CSE (now at Georgia Tech)

Yuni Park
Undergraduate Research Assistant, CSE (now at Orchid)

Andy Jin
Undergraduate Student, CSE (now at USC)

Wren "Reyna" Wood
Undergraduate Student, CSE (now at Clemson)

Emily Tsai
Masters Student, School of Information (now at Google)

Mansanjam Kaur
Masters Student, School of Information

Yifan Zhu
Masters Student, CSE

Andrew Dailey
Undergraduate Student, CSE

Ruei-Che Chang
PhD Student, Computer Science & Engineering

Anhong Guo
Assistant Professor, Computer Science & Engineering (Collaborator)

Publications

We publish our research in premier human-computer interaction and accessibility venues, including CHI, UIST, and ASSETS. Nine of our articles have been honored with awards.

CARTGPT
(Best poster award)
ASSETS 2024 (Poster): PAPER

SoundWatch Field Study
Real-World Feasibility of Sound Recognition
(Best paper honorable mention)
ASSETS 2023: PAPER | CODE

Classes Taught by DJ

EECS 495: Accessible Computing

This upper-level undergraduate class serves as an introduction to accessibility and uses a curriculum designed by Professor Dhruv Jain. Students learn essential concepts related to accessibility, disability theory, and user-centric design, and contribute to a studio-style team project in collaboration with clients with disabilities and relevant stakeholders we recruit. This intense 14-week class requires working in teams to lead a full-scale, end-to-end accessibility project from conceptualization to design, implementation, and evaluation. The goal is to reach a level of proficiency comparable to that of a well-launched employee team in the computing industry. Projects often culminate in real-world deployments and app releases.

Read more →

EECS 598: Advanced Accessibility

This graduate-level class focuses on advanced topics in accessibility, including disability theory, user research, and their impact on technology. It includes guest lectures by esteemed researchers and practitioners in the field of accessibility.

Read more →

Lab Openings

Prospective PhD students: Our lab has openings for up to three PhD students (beginning Fall 2025) in two areas: (1) data science and AI for acoustics and/or hearing health, and (2) AR/VR interaction design for sound accessibility. Please see our research focus. If you believe you are the right fit, apply to the UMich CSE PhD program and email Prof. DJ at profdj [at] umich [dot] edu with: (1) a brief description of yourself and your skill set, supported by relevant prior experience, (2) some examples of projects you'd like to pursue in your PhD, and (3) your CV. We look forward to hearing from you!

Undergraduates/Masters students: Please complete this online intake form and we will get back to you when we have openings.

Potential postdocs: We are recruiting a postdoc in the area of HCI/accessibility with a starting date of your choice this year (2025). If interested, please email Prof. DJ with your research interests, a paragraph summarizing your dissertation, and your CV, and also apply to the official posting here.