Alexa, Siri, Cortana, Bixby… are they excluding the vulnerable?
Image Source: Diane Hall


Alexa! Siri! Voice Search And The Vulnerable

Cortana, Siri, Alexa, Bixby, Google Assistant…most people will know who or ‘what’ these are: technological personal servants that await our every command.

Communication, in previous generations, was limited to face-to-face interaction, telephone calls (at least during the last hundred years or so), and the humble letter. In 2018, thanks to the internet, there are hundreds of ways to communicate with someone. We’re effectively connected to the entire world.

Experts claim that the scope of our communication won’t be limited to ‘Teddy from Australia’ or ‘Xi from China’…we’ll be having conversations with our machines as well, as we instruct them to connect us with Tom, Dick or Harry. These non-sentient beings are forever listening, waiting for the moment we tell them to do something. This is known as ‘voice search’.

Though Alexa et al sound like digital servants, they essentially represent a spoken shortcut for Google searches. The difference, however, is the way we communicate with them. When typing into search engines, we tend to stick to keywords and leave out unnecessary connecting words, knowing Google appreciates specifics…when talking to Alexa, however, we speak to her like she’s one of the family - using full sentences, phrases, and detailed requests.

Google and Amazon have addressed this shift in how we verbalise our requests for information. So that Alexa and her friends can answer us appropriately, the tech wizards behind voice search have built their systems to learn and improve their conversational skills. Google has even fed 3,000 romance novels into its artificial intelligence system to improve its speech and comprehension.

The rising popularity of smart speakers and in-device personal assistants means that 30% of all Google users now find what they need to know via voice search, a figure predicted to rise to 50% by 2020. Two in five users already say it’s essential in their lives. And whereas, with most new technologies, it’s typically 18 to 25-year-olds who are the early adopters, with voice search the biggest concentration of users is middle-aged. Currently, the majority of voice commands come from drivers in their cars, and 22% of questions are along the lines of ‘where is X, Y, Z/what’s around me?’.

However, it’s not a solution for everyone. Think of those with strong accents. Will Siri be able to cope with a thick Scottish tongue? Eventually, maybe, if enough audio files are fed into the program bearing the same regional dialect.

But what about the portion of the population with speech and communication issues, such as aphasia - people who struggle to be understood even when another human being is in front of them? In that scenario, the other party can at least use other cues to work out what’s being said: lip-reading, body language, gestures and pointing, facial expressions and emotions.

Voice search isn’t practical for, or useful to, people with aphasia, and they risk being excluded from tomorrow’s world.

*Can’t they just continue typing their questions into Google? What’s the big deal?*

Strokes are the main cause of aphasia, which is diagnosed when the brain’s language centre becomes damaged or starved of oxygen. Typing into a search box could also prove difficult for a patient whose stroke affected their mobility.

No one likes to feel excluded, and life is already isolating when you struggle to be understood and have no voice. When the four walls around you constitute your whole life, your world doesn’t open up - it recedes.

Maybe technology will move on further and find a way to overcome such hurdles. Text-to-speech software, for example, has given millions of blind people across the world access to information on the internet; voice search will be a good thing for them, at least.

What’s encouraging is the news that an entrepreneur is working on an app, Voiceitt, that ‘normalises’ the voices of people with speech impairments – something that could eventually allow many more people to use voice search technology. The app is still in development; the man behind it, Danny Weissberg, began working on the idea after his grandmother suffered a stroke that affected her speech. Weissberg’s app could potentially help people with other conditions, such as cerebral palsy or Parkinson’s.

It’s been said before that the world doesn’t wait for those who can’t keep up. If technology exists to make our lives better, let’s have more solutions like Voiceitt, so that everyone can have equal access to the fruits of Silicon Valley.

**Yorkshire-based charity Speak With I.T. supports people with aphasia on a one-to-one basis in their homes, using specialist computer therapy to assist the recovery of their speech.**

This was posted in Bdaily's Members' News section by Diane Hall.
