Whitey Ford
03-24-2020, 12:13 PM
There Is a Racial Divide in Speech-Recognition Systems, Researchers Say
Technology from Amazon, Apple, Google, IBM and Microsoft misidentified 35 percent of words from people who were black. White people fared much better.
https://i.imgur.com/CEIvm.jpg
Speech recognition systems from five of the world’s biggest tech companies — Amazon, Apple, Google, IBM and Microsoft — make far fewer errors with users who are white than with users who are black, according to a study published Monday in the journal Proceedings of the National Academy of Sciences.
The systems misidentified words about 19 percent of the time with white people. With black people, mistakes jumped to 35 percent. About 2 percent of audio snippets from white people were considered unreadable by these systems, according to the study, which was conducted by researchers at Stanford University. That rose to 20 percent with black people.
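For context, figures like "misidentified words about 19 percent of the time" are most naturally read as word error rates, the standard accuracy metric in speech recognition: the number of substituted, inserted, and deleted words divided by the number of words actually spoken. The article does not spell out the formula, so the short Python sketch below is only a generic illustration of how such a rate is usually computed from a reference transcript and a system's output; the function name and example sentences are made up here, not taken from the study.

    def word_error_rate(reference: str, hypothesis: str) -> float:
        """Word-level Levenshtein distance (substitutions + insertions + deletions)
        divided by the number of words in the reference transcript."""
        ref = reference.split()
        hyp = hypothesis.split()
        # d[i][j] = edit distance between the first i reference words
        # and the first j hypothesis words
        d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            d[i][0] = i
        for j in range(len(hyp) + 1):
            d[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,         # deletion
                              d[i][j - 1] + 1,         # insertion
                              d[i - 1][j - 1] + cost)  # substitution or match
        return d[len(ref)][len(hyp)] / len(ref)

    # One dropped word against a five-word reference gives 1/5 = 0.2 (20 percent)
    print(word_error_rate("please call the doctor now", "please call doctor now"))

By that measure, the study's numbers mean roughly one word in five was wrong for white speakers and more than one in three for black speakers.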
The Stanford study indicated that leading speech recognition systems could be flawed because companies are training the technology on data that is not as diverse as it could be, learning their task mostly from white people and from relatively few black people.
“Here are probably the five biggest companies doing speech recognition, and they are all making the same kind of mistake,” said John Rickford, one of the Stanford researchers behind the study, who specializes in African-American speech. “The assumption is that all ethnic groups are well represented by these companies. But they are not.”
http://archive.is/A0njM#selection-1069.0-1069.338