Opinion

Why Humans Can’t Trust AI: You Don’t Know How It Works, What It’s Going To Do Or Whether It’ll Serve Your Interests

AI is alien – an intelligent system into which people have little insight. Humans are largely predictable to other humans because we share the same human experience, but this doesn’t extend to artificial intelligence, even though humans created it.
By The Conversation | 2023-09-15 | 5 Mins Read
[Image] Do you trust AI systems, like this driverless taxi, to behave the way you expect them to? (AP Photo/Terry Chea)

By Mark Bailey, National Intelligence University

There are alien minds among us. Not the little green men of science fiction, but the alien minds that power the facial recognition in your smartphone, determine your creditworthiness and write poetry and computer code. These alien minds are artificial intelligence systems, the ghost in the machine that you encounter daily.

But AI systems have a significant limitation: Many of their inner workings are impenetrable, making them fundamentally unexplainable and unpredictable. Furthermore, constructing AI systems that behave in ways that people expect is a significant challenge.

If you fundamentally don’t understand something as unpredictable as AI, how can you trust it?

Why AI is unpredictable

Trust is grounded in predictability. It depends on your ability to anticipate the behavior of others. If you trust someone and they don’t do what you expect, then your perception of their trustworthiness diminishes.

[Diagram] In neural networks, the strength of the connections between "neurons" changes as data passes from the input layer through hidden layers to the output layer, enabling the network to "learn" patterns. (Wiso via Wikimedia Commons)

Many AI systems are built on deep learning neural networks, which in some ways emulate the human brain. These networks contain interconnected “neurons” with variables or “parameters” that affect the strength of connections between the neurons. As a naïve network is presented with training data, it “learns” how to classify the data by adjusting these parameters. In this way, the AI system learns to classify data it hasn’t seen before. It doesn’t memorize what each data point is, but instead predicts what a data point might be.

Many of the most powerful AI systems contain trillions of parameters. Because of this, the reasons AI systems make the decisions that they do are often opaque. This is the AI explainability problem – the impenetrable black box of AI decision-making.
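The learning process described above can be illustrated with a toy sketch: a single logistic "neuron" whose two connection weights are adjusted by gradient descent on training data. This is an illustrative example, not any production system; even here, the final numbers in `w` and `b` do not explain the decision in human terms, and real systems multiply this opacity across billions or trillions of parameters.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy training data: the logical AND of two inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)

rng = np.random.default_rng(0)
w = rng.normal(size=2)   # connection strengths ("parameters")
b = 0.0
lr = 0.5                 # learning rate

for _ in range(5000):
    pred = sigmoid(X @ w + b)
    grad = pred - y                   # gradient of the cross-entropy loss
    w -= lr * (X.T @ grad) / len(y)   # "learning" = adjusting parameters
    b -= lr * grad.mean()

# The trained neuron classifies the inputs correctly, but w and b are
# just numbers -- they carry no human-readable rationale.
print(np.round(sigmoid(X @ w + b)))
```

Scale this from two parameters to trillions and the explainability problem becomes clear: there is no practical way to read a decision's rationale out of the weights.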

Consider a variation of the “Trolley Problem.” Imagine that you are a passenger in a self-driving vehicle, controlled by an AI. A small child runs into the road, and the AI must now decide: run over the child or swerve and crash, potentially injuring its passengers. This choice would be difficult for a human to make, but a human has the benefit of being able to explain their decision. Their rationalization – shaped by ethical norms, the perceptions of others and expected behavior – supports trust.

In contrast, an AI can’t rationalize its decision-making. You can’t look under the hood of the self-driving vehicle at its trillions of parameters to explain why it made the decision that it did. AI fails the predictive requirement for trust.

AI behavior and human expectations

Trust relies not only on predictability, but also on normative or ethical motivations. You typically expect people to act not only as you assume they will, but also as they should. Human values are influenced by common experience, and moral reasoning is a dynamic process, shaped by ethical standards and others’ perceptions.

Unlike humans, AI doesn’t adjust its behavior based on how it is perceived by others or by adhering to ethical norms. AI’s internal representation of the world is largely static, set by its training data. Its decision-making process is grounded in an unchanging model of the world, unfazed by the dynamic, nuanced social interactions constantly influencing human behavior. Researchers are working on programming AI to include ethics, but that’s proving challenging.

The self-driving car scenario illustrates this issue. How can you ensure that the car’s AI makes decisions that align with human expectations? For example, the car could decide that hitting the child is the optimal course of action, something most human drivers would instinctively avoid. This issue is the AI alignment problem, and it’s another source of uncertainty that erects barriers to trust.

[Video] AI expert Stuart Russell explains the AI alignment problem.

Critical systems and trusting AI

One way to reduce uncertainty and boost trust is to ensure people are in on the decisions AI systems make. This is the approach taken by the U.S. Department of Defense, which requires that for all AI decision-making, a human must be either in the loop or on the loop. In the loop means the AI system makes a recommendation but a human is required to initiate an action. On the loop means that while an AI system can initiate an action on its own, a human monitor can interrupt or alter it.
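The control-flow difference between the two oversight modes can be sketched in a few lines. The function names below are hypothetical stand-ins, not any real DoD system: in the loop, the AI only recommends and a human must initiate; on the loop, the AI initiates and a human monitor may interrupt.

```python
def ai_recommend(situation: str) -> str:
    # Stand-in for an AI model's decision (hypothetical).
    return "swerve" if situation == "child in road" else "continue"

def in_the_loop(situation, human_approves):
    """AI recommends; a human must initiate the action."""
    action = ai_recommend(situation)
    return action if human_approves(action) else "no action"

def on_the_loop(situation, human_veto):
    """AI initiates on its own; a human monitor can interrupt."""
    action = ai_recommend(situation)
    return "aborted" if human_veto(action) else action

print(in_the_loop("child in road", human_approves=lambda a: True))
print(on_the_loop("child in road", human_veto=lambda a: False))
```

The sketch makes the trade-off visible: in-the-loop control inserts a human before every action, which is safer but slower; on-the-loop control acts at machine speed and relies on a human catching mistakes in time.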

While keeping humans involved is a great first step, I am not convinced that this will be sustainable long term. As companies and governments continue to adopt AI, the future will likely include nested AI systems, where rapid decision-making limits the opportunities for people to intervene. It is important to resolve the explainability and alignment issues before the critical point is reached where human intervention becomes impossible. At that point, there will be no option other than to trust AI.

Avoiding that threshold is especially important because AI is increasingly being integrated into critical systems, which include things such as electric grids, the internet and military systems. In critical systems, trust is paramount, and undesirable behavior could have deadly consequences. As AI integration becomes more complex, it becomes even more important to resolve issues that limit trustworthiness.

Can people ever trust AI?

AI is alien – an intelligent system into which people have little insight. Humans are largely predictable to other humans because we share the same human experience, but this doesn’t extend to artificial intelligence, even though humans created it.

If trustworthiness has inherently predictable and normative elements, AI fundamentally lacks the qualities that would make it worthy of trust. More research in this area will hopefully shed light on this issue, ensuring that AI systems of the future are worthy of our trust.

Mark Bailey, Faculty Member and Chair, Cyber Intelligence and Data Science, National Intelligence University

This article is republished from The Conversation under a Creative Commons license. Read the original article.
