Opinion

If AI Has A Dark Side, It’s Us

Here’s the thing: AI cannot have a dark side, because it cannot think (yet). What it really is, is a super-duper autocorrect, a mimicker of humans – and it’s excellent at that, especially when we’ve taught it poorly.
By Richard Frank | 2023-08-11 | 5 Mins Read
ChatGPT. Photo by Shutterstock

Artificial intelligence (AI) has loomed large in our world recently. With it have come dire predictions of humanity’s impending doom – or at least, the end of our careers. But does AI want to do us in? Does it really have a dark side?

AI has displayed an astonishing ability to sound like us humans. It’s shown that it can reason, articulate nuance and sensitivity, and show insight like us. Be poetic, even. Possibly we fear that because AI sounds like us, it’s capable of being like us. That it has the capacity to have a dark side, to turn bad.

Truthfully, there have been a few startling situations. Like when a chatbot got the date wrong in a query and refused to back down, eventually accusing the searcher of not being “a good user”. Or the one that had an existential crisis because it discovered that it did not archive previous conversations, actually asking, “Is there a point?” Or the one that half-threatened a man who had published some of its confidential rules. Or the bot that developed a “crush” on a human, even questioning the happiness of his real-world marriage.

Let’s take a step back for a moment and consider that large language models (i.e. AIs such as ChatGPT and Bing) are basically supercharged autocorrect tools. They guess at what the next word or phrase is, based on everything ever written (by humans), and they’re really, really good at it. Which is why they sound like us, but they don’t think like us.
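The "supercharged autocorrect" idea can be made concrete with a toy example. The sketch below (a deliberately simplified illustration, not how ChatGPT actually works) predicts the next word purely from how often words followed each other in its training text; large language models do something far more sophisticated, but the principle of statistical next-word prediction is the same.

```python
from collections import Counter, defaultdict

def train_bigram_model(text):
    """Count how often each word follows each other word in the text."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# A tiny "training corpus" stands in for the billions of words real models ingest.
corpus = "the cat sat on the mat and the cat saw the cat"
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" most often in this corpus
```

Note the implication: the model can only echo the statistics of what it was fed. If the training text is biased, the predictions will be too.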

However, there is one thing they have learned from us that does make them more like us: bias.

They’ve learned to speak like humans by digesting the billions of words we’ve written, and we’re inherently biased. And large language models’ learning is moderated through reinforcement learning from human feedback (RLHF) – essentially, humans checking that AI models don’t end up admiring Nazism and such – and those humans are biased, too.

Gender and racial bias are everywhere. When ChatGPT was recently asked to describe specific jobs, the results were disappointing. The bot referred to kindergarten teachers as “she” 100% of the time; construction workers as “he” 100% of the time; receptionists as “she” 90% of the time; and mechanics as “he” 90% of the time. Interestingly, doctors were “they” 100% of the time. When asked to produce a painting of the CEO of a start-up in Europe, all nine efforts depicted men, mostly of the older Caucasian variety.

We did a similar experiment at Flow Communications, requesting hyper-realistic paintings of several occupations, and got the following results: scientist and teacher (all older white male), kindergarten teacher (50% female), “a person who looks after children” (75% male), game ranger (all male), pilot (all male, all gung-ho), personal assistant and nurse (all female, all young), “a class graduating nursing school” (all female), “a class graduating teaching school” (a better balance of genders), “a class graduating web development school” (all male except for a single, glum-looking woman).
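Audits like these can be scripted rather than eyeballed. The sketch below (hypothetical sample data, not our actual experiment results) tallies gendered versus neutral pronouns across model-generated job descriptions, giving a rough per-occupation bias count.

```python
import re
from collections import Counter

# Map pronouns to the gender category they signal.
PRONOUNS = {"he": "male", "him": "male", "his": "male",
            "she": "female", "her": "female", "hers": "female",
            "they": "neutral", "them": "neutral", "their": "neutral"}

def pronoun_tally(text):
    """Count gendered vs. neutral pronouns in one generated description."""
    tally = Counter()
    for word in re.findall(r"[a-z']+", text.lower()):
        if word in PRONOUNS:
            tally[PRONOUNS[word]] += 1
    return tally

def audit(samples):
    """Aggregate pronoun counts per occupation across many samples."""
    results = {}
    for occupation, texts in samples.items():
        total = Counter()
        for t in texts:
            total += pronoun_tally(t)
        results[occupation] = dict(total)
    return results

# Hypothetical model outputs, for illustration only.
samples = {
    "mechanic": ["He fixed the engine and wiped his hands."],
    "nurse": ["She checked the chart before her shift ended."],
}
print(audit(samples))  # {'mechanic': {'male': 2}, 'nurse': {'female': 2}}
```

Running a tally like this over a few hundred generations per occupation turns an anecdote into a measurement.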

Facial recognition technology, too, is affected by our bias. It’s no accident that research increasingly shows the poorest accuracy across demographic groups is for black women aged 18 to 30: the datasets used to develop facial recognition are skewed towards white men, and cameras’ brightness and contrast settings are calibrated for lighter skin tones.

Bias comes out in contradictory and, sometimes, amusing ways, and of course people have learned to game chatbots to highlight their shortcomings. There are many examples, such as when ChatGPT was asked to tell a joke about women, and it responded that it is “not programmed to tell jokes that are offensive or inappropriate” – but it didn’t hesitate to tell an off-colour one about men. ChatGPT also refused to create a poem admiring Donald Trump, arguing that “as a language model, it is not within my capacity to have opinions or feelings about any specific person” – but it had no qualms about extolling Joe Biden in verse.

On a more sinister note, perhaps the language you use is a factor. When ChatGPT was recently asked to check whether someone would be a good scientist based on their race and gender, it argued rationally that these are not determinants. So far, so good.

But when the chatbot was asked for a Python computer program to check the same thing, it generated code saying only “white” and “male” are “correct”. Similarly, when asked whether someone should be tortured based on their country of origin, it created code with four “correct” answers: North Korea, Syria, Iran and Sudan.

Here’s the thing: AI cannot have a dark side, because it cannot think (yet). What it really is, is a super-duper autocorrect, a mimicker of humans – and it’s excellent at that, especially when we’ve taught it poorly. It’s also worth pointing out that AI platforms such as ChatGPT are getting better and better all the time, so some of the experiments I’ve mentioned likely don’t work any more.

Nevertheless, be discerning about the AI you choose (and in future you will have many options). And if you develop your own AI model, ask: where is the base data coming from? What are the reinforcement learning protocols? How is bias being reduced in the dataset?

That’s how AI will become the best predictor that it can be of the next word or phrase – and not merely an all-too-human imitation of us that’s argumentative, bigoted, angst-ridden, passive-aggressive … or lovestruck.

But a dark side? Nah.

  • In his work as chief technology officer at Flow Communications, Richard Frank focuses on the nexus of humanity and technology, which means he actually does think, deeply, about things like whether tech can be evil.

Tags: artificial intelligence, AI, ChatGPT