TechFinancials
Opinion

Why AI is Struggling to Detect Hate Speech

By Contributor · Published 2019-09-01 · Updated 2021-02-10 · 6 Mins Read
Facebook. Alexey Boldin / Shutterstock.com
by Ben Dickson
As online trolling and hate speech become more problematic, companies like Facebook and Twitter are under increasing pressure to identify and block hateful speech on their networks. And as with many other problems involving massive amounts of online content, these companies have turned to artificial intelligence for solutions.

All major social media networks use AI algorithms to moderate online content. But while AI shows promise in detecting some types of content, it is hard-pressed when it comes to spotting hate speech.

A recent study by scientists at the University of Washington, Carnegie Mellon University, and the Allen Institute for Artificial Intelligence has found that leading AI systems for detecting hate speech are deeply biased against African Americans. This includes Google Perspective, an AI tool for moderating online conversations.

The study and the unending struggles of tech companies to automate hate speech detection highlight the limits of current AI technologies in understanding the context of human language.

Understanding language context is hard

Robot sitting on a stack of books

Advances in deep learning have helped automate complicated tasks such as image classification and object detection. Artificial neural networks, the key innovation behind deep learning algorithms, learn to perform tasks by reviewing examples. The general belief is that the more quality data you provide a neural network, the better it performs. This is true, to some extent.


At their core, neural networks are statistical machines, albeit very complicated ones. This might not pose a problem for image classification, which is largely dependent on the visual features of objects. For instance, a neural network that is trained on millions of labeled images creates a mathematical representation of the common pixel patterns between different objects and can detect them with remarkable accuracy.

But when it comes to natural language processing and generation (NLP/NLG), machine learning might not be enough. There are still plenty of things statistical representations can do: there are several cases of AI models translating text with impressive precision or generating coherent text. But while those feats are remarkable, they barely scratch the surface of human language. These AI models perform their tasks by calculating the probability that words appear in a certain sequence, based on the examples they've viewed during training.
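That "probability of words in a certain sequence" idea can be shown with a toy bigram model. This is only an illustrative sketch on a made-up three-sentence corpus, nothing like the large neural models the article discusses, but the underlying statistical intuition is the same: the model scores a sentence by how often each word followed the previous one in training.

```python
from collections import defaultdict

# Toy training corpus (placeholder sentences, not real data).
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]

# Count how often each word follows another ("<s>" marks sentence start).
counts = defaultdict(lambda: defaultdict(int))
for sentence in corpus:
    words = ["<s>"] + sentence.split()
    for prev, cur in zip(words, words[1:]):
        counts[prev][cur] += 1

def sequence_probability(sentence):
    """Probability of a word sequence under the bigram counts (no smoothing)."""
    words = ["<s>"] + sentence.split()
    prob = 1.0
    for prev, cur in zip(words, words[1:]):
        total = sum(counts[prev].values())
        if total == 0 or counts[prev][cur] == 0:
            return 0.0  # an unseen transition gets zero probability
        prob *= counts[prev][cur] / total
    return prob

print(sequence_probability("the cat sat on the mat"))  # familiar phrasing: > 0
print(sequence_probability("mat the on sat cat the"))  # scrambled order: 0.0
```

The model happily scores word order it has seen before and rejects order it hasn't, but it has no notion of what any sentence means — which is exactly the limitation the article describes.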

Hate-speech detection models draw their training from data sets that include sample sentences and their corresponding toxicity scores. In their study, the authors used publicly available AI models that had been trained on millions of annotated tweets and other social media posts.
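A minimal sketch of what "sentences plus toxicity scores" training looks like: the snippet below learns a per-word toxicity score by averaging the labels of the sentences each word appears in. The sentences and scores are invented placeholders, and real systems use neural networks rather than word averages, but it shows how toxicity gets reduced to word statistics.

```python
from collections import defaultdict

# Hypothetical labeled examples: (sentence, annotator toxicity score in [0, 1]).
# Real data sets contain millions of annotated posts; these are placeholders.
training_data = [
    ("you are wonderful", 0.0),
    ("you are terrible", 0.9),
    ("what a terrible idea", 0.8),
    ("what a wonderful idea", 0.1),
]

# Learn a per-word score: the average toxicity of sentences containing the word.
totals, seen = defaultdict(float), defaultdict(int)
for sentence, score in training_data:
    for word in set(sentence.split()):
        totals[word] += score
        seen[word] += 1
word_score = {w: totals[w] / seen[w] for w in totals}

def toxicity(sentence):
    """Score a sentence as the mean learned score of its known words."""
    scores = [word_score[w] for w in sentence.split() if w in word_score]
    return sum(scores) / len(scores) if scores else 0.0

print(toxicity("terrible idea"))   # scores higher than...
print(toxicity("wonderful idea")) # ...this one
```

Note what's missing: the model scores words, not speakers or situations — the same sentence gets the same score no matter who says it, which is precisely the gap discussed next.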

But statistics do not represent context. When we interpret a sentence, we don't consider only the sequence of words and how it compares to other sentences we've heard before. We also take into account other factors, such as the characteristics of the person who is speaking. The same sentence can sound offensive coming from one person and perfectly fine coming from another.

In their study, the researchers from Carnegie Mellon, AI2 and the University of Washington show examples of sentences that would sound hateful and racist if said by a white person but acceptable if said by a black person.

Depending on who is saying a sentence, it may sound toxic or not (source: University of Washington)

The authors suggest that the people who annotate the data should know the demographics and characteristics of the posts' authors. This will help them improve the quality of the data sets and train AI models that are much more accurate.
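The metadata-annotation idea amounts to storing a richer training record: not just text and a toxicity label, but whatever author context is available. The field names below are illustrative assumptions, not the schema used in the study.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AnnotatedPost:
    """One hypothetical training example with optional author context."""
    text: str
    toxicity: float                                 # annotator score in [0, 1]
    author_dialect: Optional[str] = None            # e.g. an inferred dialect
    author_self_description: Optional[str] = None   # e.g. from the user's bio
    platform: str = "twitter"

# The same text can be annotated differently once context is attached.
post = AnnotatedPost(
    text="example post text",
    toxicity=0.2,
    author_dialect="African-American English",
)
print(post.author_dialect)
```

The difficulty, as the next section argues, is that most of these optional fields stay empty in practice: the metadata simply isn't there to be annotated.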

It’s hard to agree on what is hate speech

Annotating the data set with relevant meta-data sounds like a good idea, and the results of the experiments show that it reduces bias in the hate-speech-detection AI algorithms. But there are two problems that would make this solution incomplete.

First, annotating training data with relevant information is an enormous task. In many cases, that information is not available. For instance, tweets don't contain information about the race, nationality or religion of the author, unless the user explicitly states that information in their bio. Some of it can be inferred by looking at the user's timeline and other content they have posted online. But finding and annotating that kind of information is much more difficult than labeling cats and dogs in photos.

But even adding author information would not be enough to automate hate-speech detection. Hate speech is deeply tied to culture, and culture varies across regions. What is considered hateful or acceptable can vary not only across countries, but also across different cities in the same country. The relevance of things such as race, gender and religion can also shift as you move between geographical areas. And culture changes over time: what is considered the norm today might be considered offensive tomorrow.

Hate speech is also very subjective. People of similar backgrounds, races and religions often argue over whether something is hateful or not.

It’s hard to see how you could develop an AI training data set that could take into account all those factors and make sense of all these different complicated dialects we’ve developed over thousands of years.

When it comes to vision, hearing and physical reflexes, our brain and nervous system are inferior to those of many wild animals. But language is the most complicated function of our brains.

All animals have some way to communicate with one another. Some of the more advanced species even have rudimentary words to represent basic things such as food and danger. But our ability to think in complicated ways and communicate knowledge, opinions and feelings gives us the edge over all other living beings. Neuroscientists still haven't worked out the exact mechanisms by which the human brain forms and interprets language.

Many companies think they can outsource their NLP tasks to outside contractors, hoping that human labor will train their AI and eventually produce a fully automated system.

But it’s difficult to imagine anything short of a large-scale human brain being able to make sense of all the different nuances of the diverse languages of the people who inhabit this planet. For the moment, our AI algorithms will be able to find common patterns and help filter down the huge amounts of content we create, but we can’t remove humans from the loop when it comes to detecting hate speech.

  • Ben Dickson is a software engineer and the founder of TechTalks. He writes about technology, business and politics.
  • This article was originally published on Tech Talks. Read the original article here.
