
Aug 23 2019

Antiracism Artificial Intelligence Accused of Racism

Antiracism is fundamentally racist. No matter how obvious this is, you are never supposed to admit it. This poses a problem for the designers of artificial intelligence intended to sniff out racists. Machines only understand what you specifically tell them. They don’t speak the phony language of lies used to prop up political correctness.

From Campus Reform:

A new Cornell University study reveals that some artificial intelligence systems created by universities to identify “prejudice” and “hate speech” online might be racially biased themselves and that their implementation could backfire, leading to the over-policing of minority voices online.

The systems are racially biased against favored minorities because they are not racially biased against whites. This lack of racial bias causes them to notice that blacks are more racist than whites are — which is a racist observation.

A new study out of Cornell reveals that the machine learning practices behind AI, which are designed to flag offensive online content, may actually “discriminate against the groups who are often the targets of the abuse we are trying to detect,” according to the study abstract. …

“The results show evidence of systematic racial bias in all datasets, as classifiers trained on them tend to predict that tweets written in African-American English are abusive at substantially higher rates. If these abusive language detection systems are used in the field they will, therefore, have a disproportionate negative impact on African-American social media users,” the abstract continues.
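The effect the abstract describes can be illustrated with a minimal sketch (not the study's actual code): compare the rate at which a classifier flags posts as "abusive" across dialect groups. The predictions, group labels, and the "AAE"/"SAE" names below are all hypothetical stand-ins for the study's data.

```python
def flag_rate(predictions, groups, target_group):
    """Fraction of posts from `target_group` that the classifier flagged (1 = abusive)."""
    flagged = [p for p, g in zip(predictions, groups) if g == target_group]
    return sum(flagged) / len(flagged)

# Hypothetical model output: 1 = flagged abusive, 0 = not flagged
predictions = [1, 0, 1, 1, 0, 0, 1, 0]
# Hypothetical dialect label inferred for each post
groups = ["AAE", "AAE", "AAE", "AAE", "SAE", "SAE", "SAE", "SAE"]

aae_rate = flag_rate(predictions, groups, "AAE")
sae_rate = flag_rate(predictions, groups, "SAE")

# A persistent gap between these two rates, across datasets, is what the
# study's abstract calls "systematic racial bias" in the trained classifiers.
print(f"AAE flag rate: {aae_rate:.2f}, SAE flag rate: {sae_rate:.2f}")
```

With these toy numbers the gap is 0.75 versus 0.25; the study's claim is that real classifiers trained on the examined datasets show a gap in the same direction.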

Anything that has a disproportionate impact on sacred blacks is racist — including acknowledging that blacks tend to be racist, which is why they tend to side with their own even in outrageous cases like the O.J. verdict, and why they vote as a homogeneous bloc.

So the researchers attribute their findings to human error in the programming. They denounce the presumably liberal programmers for “internal biases”; the unthinkable alternative would be to admit that the ideology they share with these programmers is a crock of crap.

Several universities have been working on systems that will automatically identify politically incorrect thoughts, as well as “fake news” — i.e., information that has not been approved by authorities. For example, social engineers at the University of California, Santa Barbara are working on a system that will determine whether information shared between users is to be designated as “genuine” (politically correct) or “misleading” (politically incorrect). The idea is to integrate their software “into browsers on the client side” so as to target “content that causes hate, aversion, and prejudice.”

They had better tweak their code to allow hate, aversion, and prejudice from persons of politically preferred pigmentation, or they will find themselves denounced for being racist.

On a tip from R F. Hat tip: Brass Pills.
