Protecting free speech and freedom of expression is a fundamental principle of open, democratic societies. Nevertheless, governments also recognize the need to set boundaries for speech that is not protected. Speech that can lead to harm, including advocacy of genocide or of violence against groups based on race, ethnicity, nationality, religion, sex, political affiliation, or sexual orientation, is deemed "hate speech" in most democracies.
While legal experts, political leaders, and pundits continue to debate the definition of protected speech, those entrusted with understanding, identifying, and counteracting illegal actions connected to hate speech need practical solutions now. With billions of new data points generated every day in open sources, including social media platforms, the analysts, investigators, and decision-makers who must investigate and prevent violent and illegal actions precipitated by hate speech need the most advanced analytic solutions – including Machine Learning and Artificial Intelligence (AI) – to work effectively.
With billions of users globally, social media platforms process massive amounts of data every minute. AI-based solutions can identify, track, and counter hate speech at this scale. Keyword lists and search parameters flag candidate content in a continuous process that combs through millions of posts and messages, surfacing epithets and terms used by individuals, networks, and organizations promoting hate speech.
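As a simplified illustration, the keyword-flagging stage described above can be sketched as a watch-list match over incoming posts. The terms and function names below are placeholders for this sketch, not any vendor's actual lexicon or API:

```python
import re

# Hypothetical watch list; real deployments use curated, regularly
# updated lexicons rather than this illustrative placeholder.
WATCH_TERMS = ["exampleslur", "examplethreat"]

# One case-insensitive pattern with word boundaries, so that only
# whole-word matches are flagged.
PATTERN = re.compile(
    r"\b(" + "|".join(map(re.escape, WATCH_TERMS)) + r")\b",
    re.IGNORECASE,
)

def flag_post(text: str) -> list[str]:
    """Return the watch-list terms found in a post, if any."""
    return [m.group(0).lower() for m in PATTERN.finditer(text)]
```

In a production pipeline this matching step runs continuously against the post stream; its output is a candidate set, not a final determination.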
AI-based technology does more than flag every occurrence of a word in the data set as hate speech. Exceptions must be made for historical context, alternative word uses, cultural applications, and self-referential terms (members of a group referring to themselves with terminology that would be offensive if used by someone outside the group). Advanced AI solutions can be trained to recognize the placement and context of specific words and phrases, flagging only the cases in which they are used with harmful intent. This is not a flawless process, and Subject Matter Experts should be involved in making final calls and in confirming the absence (or mitigation) of biases.
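To make the context-sensitivity idea concrete, here is a deliberately tiny, rule-based sketch. The cue lists are invented for illustration; real systems rely on trained language models that weigh context statistically, not on hand-written word sets:

```python
# Illustrative cue lists only; production systems learn contextual
# signals from data rather than using fixed vocabularies like these.
HARM_CUES = {"kill", "attack", "deserve"}
MITIGATING_CUES = {"history", "quoted", "reclaimed"}

def harmful_in_context(text: str, term: str) -> bool:
    """Toy context check: a watch-list term counts as harmful only when
    harmful-intent cues appear in the same post and no mitigating
    (historical/quotational/self-referential) cues do."""
    words = set(text.lower().split())
    if term.lower() not in words:
        return False
    return bool(words & HARM_CUES) and not (words & MITIGATING_CUES)
```

Even this toy version shows why expert review remains necessary: a single mitigating word can suppress a true positive, and a coincidental cue can create a false one.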
In addition to the factors cited above that may lead to false positives, analysts face the challenge of social media users purposefully using abbreviations, acronyms, euphemisms, and code words to evade detection. Sophisticated AI applications can actively learn and adapt to emerging trends in hidden hate speech, catching on quickly and adjusting parameters to identify even cleverly disguised written attacks. These advantages make AI technology the most useful tool available for combatting the influence of hate-driven communications.
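One small piece of the evasion problem can be sketched directly: normalizing common character substitutions ("leetspeak") before matching, so disguised terms hit the same watch list as plain ones. The substitution table below is a static, illustrative assumption; adaptive systems continuously update such mappings as new code words emerge:

```python
# Hypothetical substitution table; evolving code words require
# mappings that are continuously learned from new data, not a
# fixed dictionary like this one.
LEET_MAP = str.maketrans({
    "0": "o", "1": "i", "3": "e", "4": "a",
    "5": "s", "7": "t", "@": "a", "$": "s",
})

def normalize(text: str) -> str:
    """Map common character substitutions back to plain letters so
    that disguised terms match the same watch list as plain ones."""
    return text.lower().translate(LEET_MAP)
```

Running candidate posts through a normalizer like this before the keyword stage catches simple disguises; more elaborate euphemisms and in-group code words still require adaptive, model-driven detection and human review.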
Given the continuous flood of high-volume, high-velocity data in the realm of hate speech, deploying the best and most relevant AI technology is critical to freeing overworked analysts and investigators from rote, manual tasks, allowing them time for higher-order analysis and investigative successes.