Researchers at the University of Rochester have developed an artificial intelligence system that can identify coded hate speech online. In early 2016, Google unveiled Jigsaw, a tech incubator, with the intention of "substantially reducing" online hate and harassment.

But the plan backfired when trolls responded with the "Operation Google" campaign, which replaces racial slurs with the names of technology brands and products. "Google," for example, refers to Black people, while fellow search engines "Yahoo" and "Bing" allude to Mexicans and Asians, and Jewish people are called "Skypes."

The idea was to force Google to censor its own websites by making these common words synonymous with bigotry. Now, researchers at the University of Rochester in New York are fighting back with an AI of their own, which uses key topics and hashtags to identify hateful tweets even when the slurs are coded.
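To illustrate the general idea of detecting coded slurs through surrounding context, here is a minimal, hypothetical sketch. The word lists and the co-occurrence rule below are illustrative assumptions for this example only, not the Rochester team's actual features or model:

```python
# Hypothetical sketch: flag a tweet only when a known code word
# co-occurs with campaign hashtags or hostile context terms.
# These lists are invented for illustration, not real training data.
CODE_WORDS = {"google", "googles", "yahoo", "bing", "skype", "skypes"}
CONTEXT_MARKERS = {"#operationgoogle", "deport", "ban", "gas"}

def is_coded_hate(tweet: str) -> bool:
    # Normalize: lowercase tokens with surrounding punctuation stripped.
    tokens = {t.strip(".,!?").lower() for t in tweet.split()}
    has_code_word = bool(tokens & CODE_WORDS)
    has_context = bool(tokens & CONTEXT_MARKERS)
    # A lone brand mention is benign; require hostile context too.
    return has_code_word and has_context

print(is_coded_hate("I searched for it on Google today"))      # False
print(is_coded_hate("these skypes are ruining #OperationGoogle"))  # True
```

A real system would learn such associations statistically from topics and hashtags rather than from hand-written lists, which is what makes it robust as the code words change.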