Research team develops voice-activated AI accessible to people who stutter

January 30, 2023

In the past few years, voice-activated artificial intelligence, or AI, has become ingrained in our daily lives. It's not just applications like Siri, Alexa and speech-to-text software. Even calling to set up a doctor's appointment or interviewing for a job can involve voice-activated AI.

While this makes life easier for many, for people with speech differences such as stuttering, these technological advancements can be disadvantageous.

That issue prompted a team of MSU researchers to create a project to make AI more accessible.

Dr. Nihar Mahapatra from the department of electrical and computer engineering first got the idea for the project when he heard about stuttering research on the radio. This inspired him to consider how he could use his expertise to help people who stutter. He searched online and found anecdotes from people who shared their frustrations with voice-activated AI that either could not recognize their speech or didn't give them enough time to finish speaking.

Last year, when Mahapatra heard about a National Science Foundation program that funds research to enhance opportunities for people with disabilities, he saw a perfect fit. He formed a team of researchers that received a $750,000 grant from the NSF.

The team includes Dr. J. Scott Yaruss from communicative sciences and disorders, Dr. Ann Marie Ryan from organizational psychology, Dr. Hope Gerlach from Western Michigan University and communicative sciences and disorders doctoral candidate Caryn Herring, who is also a board chairperson at The National Association of Young People Who Stutter. 

The project is currently in the first of two phases, which focuses on gathering the information necessary to understand voice-activated AI, how it's used and how it affects people who stutter.

Herring is a person who stutters. She has had many issues with the software, mainly when talking to AI over the phone. When speaking with a person, Herring said, she can explain that she has a stutter. But when talking to AI, the software doesn't understand that she may need more time.

“I've been in situations where I can get through the first three or five (questions), and then whatever the prompt is that I'm supposed to say, I can't do it in a fluent way,” Herring said. “And then just being disconnected or sent back to step one to then start the whole process again.” 

Though this is inconvenient and frustrating, the consequences of this issue can become more serious when the AI is administering “conversations” like job interviews. Since the AI cannot accurately transcribe their speech, people who stutter often face hiring discrimination. Herring has heard about these experiences from other people who stutter.

“Unfortunately, because you don't get to talk to anyone or advocate for yourself, it makes it really hard to sort of explain what stuttering is and what stuttering is not,” Herring said. “Being able to self-disclose and explain that the amount of fluency I have won't impact my ability to do this job, there's just no chance to do that.”

Herring’s main role in the project is to bring in the perspective of the stuttering community. She said that this is crucial to ensuring the technology they create will actually be used. 

“My role is sort of like a bridge to ensure that members of the stuttering community can learn what we are doing,” Herring said. “But I would say even more importantly, for the rest of the team to sort of learn from us to understand what the main issues are and how they can be fixed.”

The team also needs to hear from the people who create the AI and the employers who use it, so it will hold a conference with all stakeholders to gather their perspectives. That input will help the researchers create a solution that works for everyone. Yaruss said it will also help the team develop a set of guidelines others can use to check whether their systems are fair, equitable and accessible for people who stutter.

Ryan is also involved in communicating with HR managers and people who create and sell AI software to teach them about the software’s issues and how to use AI appropriately. 

The team will also test current voice-activated AI technologies to see when, and for what types of speech, they work. Those results will help shape guidelines for the team's own software.

Mahapatra said the issue boils down to how the AI is trained. Often, the AI is trained to recognize what is considered mainstream speech, creating a bias that disadvantages people with speech differences. The team plans to change the way the AI is trained, which they hope will improve life for people who stutter.

“The benefit of this program is that it is actually taking solutions that researchers develop to the hands of users who can benefit from it,” Mahapatra said. “So that's the other aspect of it that basically excited us all, as a team. We wanted to make some positive impact.” 

Though this project focuses on accessibility for people who stutter, the team said AI still has a long way to go before it understands everyone. Herring hopes this project can help pave the way for others.

“It's an endless opportunity to add stuff,” Herring said. “Stuttering is what we're focused on right now, but speech differences are present across the board. From individuals who have had traumatic brain injuries or a stroke to articulation disorders, cancer patients to more just everyday diversity in speech, English as a second language, so many different dialects and accents, and AI has a long way to sort of catch up. AI is not so great at understanding any of those yet.”
