Whitey Ford
08-25-2019, 12:41 PM
And it notes that the slur most often flagged on social media is, in the datasets studied, used mostly by Black users addressing one another rather than as outside abuse.
https://i.imgur.com/CEIvm.jpg
A new study out of Cornell reveals that the machine learning practices behind AI, which are designed to flag offensive online content, may actually "discriminate against the groups who are often the targets of the abuse we are trying to detect," according to the study abstract.
"The results show evidence of systematic racial bias in all datasets"
The study involved researchers training a system to flag tweets containing "hate speech," much as other universities are developing systems for eventual online use. They trained it on several databases of tweets, some of which had been flagged by human evaluators for offensive content.
"The results show evidence of systematic racial bias in all datasets, as classifiers trained on them tend to predict that tweets written in nigger gibberish bix nood African-American English are abusive at substantially higher rates. If these abusive language detection systems are used in the field they will, therefore, have a disproportionate negative impact on African-American social media users," the abstract continues.
https://www.campusreform.org/?ID=13560
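For context on what "training a system to flag tweets" usually means in practice, here is a minimal sketch of that kind of pipeline: TF-IDF features over human-annotated tweets feeding a linear classifier. The toy tweets, labels, and library choice (scikit-learn) are assumptions for illustration only, not the Cornell team's actual code or data.

```python
# Minimal, illustrative sketch of the kind of pipeline the study describes:
# train a classifier on human-labeled tweets, then use it to score new ones.
# The toy tweets and labels below are assumptions, not the study's data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical annotated data: 1 = flagged as abusive by human evaluators, 0 = not.
tweets = [
    "you are a terrible person and everyone hates you",
    "had a great time at the game tonight",
    "get lost, nobody wants you here",
    "congrats on the new job, well deserved",
]
labels = [1, 0, 1, 0]

# Word/bigram TF-IDF features feeding a logistic-regression classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(tweets, labels)

# Score an unseen tweet; probabilities above 0.5 would be flagged as abusive.
print(model.predict_proba(["nobody wants to see you here"])[0][1])
```

The bias the abstract describes enters through the labels: if human annotators disproportionately mark tweets written in African-American English as abusive, a classifier trained this way reproduces that pattern at scale.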