Talk of artificial intelligence often leads to speculation about how machines may displace workers. Microsoft CEO Satya Nadella thinks we should talk more about how AI algorithms can expand the workforce now—by helping people with disabilities.
“There are a billion people in the world who don’t fully participate in our economies or societies,” Nadella said at the WIRED25 Summit in San Francisco. “Technology can allow them to fully participate.”
Nadella, a WIRED25 Icon, nominated Jenny Lay-Flurrie, Microsoft’s chief accessibility officer, as someone who will shape the next 25 years of technology.
Lay-Flurrie was born hearing-impaired and is now profoundly deaf. She described a Microsoft research project that created a plugin for PowerPoint that can automatically add closed captions during a presentation by transcribing a speaker’s words. People in the audience can choose to see those captions in their language of choice, thanks to Microsoft’s automated translation technology.
“Artificial intelligence is going to just open up so many doors to us all,” said Lay-Flurrie. She was accompanied by a sign language interpreter who helps her understand what people around her are saying, so that Lay-Flurrie can respond with her own voice. She said automatic captioning is one example of how AI technology could help bring more people into the workforce. Another is software that can translate sign language to help hearing and non-hearing people communicate more naturally, she said. The unemployment rate of people with disabilities is roughly twice that of the rest of the population, Lay-Flurrie said.
But if hearing-impaired people are going to rely on speech recognition and translation algorithms, those technologies had better be trustworthy. Any bias or limitations in their ability to perceive the world could distort a person’s own perception of reality and how others perceive them.
Nadella said that making AI systems trustworthy was a “foundational challenge” of the technology. Because algorithms that transcribe speech and parse language are trained on giant stores of past data, they can too easily learn to reproduce or even accentuate historical societal biases, such as stereotyping women. Microsoft researchers are trying to build tools that can be used to spot bias in data before it is used to train AI systems, and to correct biases when they are discovered in AI systems, he said.