Why Humans Can’t Trust AI: You Don’t Know How It Works, What It’s Going To Do Or Whether It’ll Serve Your Interests

The Conversation | 2023-09-15

Do you trust AI systems, like this driverless taxi, to behave the way you expect them to? AP Photo/Terry Chea

By Mark Bailey, National Intelligence University

There are alien minds among us. Not the little green men of science fiction, but the alien minds that power the facial recognition in your smartphone, determine your creditworthiness and write poetry and computer code. These alien minds are artificial intelligence systems, the ghost in the machine that you encounter daily.

But AI systems have a significant limitation: Many of their inner workings are impenetrable, making them fundamentally unexplainable and unpredictable. Furthermore, constructing AI systems that behave in ways that people expect is a significant challenge.

If you fundamentally don’t understand something as unpredictable as AI, how can you trust it?

Why AI is unpredictable

Trust is grounded in predictability. It depends on your ability to anticipate the behavior of others. If you trust someone and they don’t do what you expect, then your perception of their trustworthiness diminishes.

In neural networks, the strength of the connections between ‘neurons’ changes as data passes from the input layer through hidden layers to the output layer, enabling the network to ‘learn’ patterns. Wiso via Wikimedia Commons

Many AI systems are built on deep learning neural networks, which in some ways emulate the human brain. These networks contain interconnected “neurons” with variables or “parameters” that affect the strength of connections between the neurons. As a naïve network is presented with training data, it “learns” how to classify the data by adjusting these parameters. In this way, the AI system learns to classify data it hasn’t seen before. It doesn’t memorize what each data point is, but instead predicts what a data point might be.
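
To make that concrete, here is a minimal sketch, in Python with NumPy, of the process just described: a tiny network whose parameters are adjusted as training data passes through it. Every detail (the XOR task, the layer sizes, the learning rate) is chosen for illustration, not drawn from the article or from any production system.

```python
# A toy neural network "learning" XOR by adjusting its parameters,
# the weight matrices W1 and W2. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Training data: XOR, with a constant third column acting as a bias input.
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Parameters: connection strengths between layers, randomly initialized.
W1 = rng.normal(size=(3, 4))  # input layer -> 4 hidden "neurons"
W2 = rng.normal(size=(5, 1))  # hidden layer (plus a bias unit) -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20_000):
    # Forward pass: data flows from the input layer through the hidden layer.
    h = sigmoid(X @ W1)
    hb = np.hstack([h, np.ones((len(X), 1))])  # append a bias unit
    out = sigmoid(hb @ W2)

    # Backward pass: nudge every parameter to reduce the prediction error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2[:-1].T) * h * (1 - h)
    W2 -= 0.5 * hb.T @ d_out
    W1 -= 0.5 * X.T @ d_h

print(out.round(2))  # approaches [[0], [1], [1], [0]]: the network has "learned" XOR
```

Even at this toy scale, the trained values in W1 and W2 are just arrays of numbers; inspecting them says nothing about why the network answers as it does, which is the opacity the next paragraph scales up to trillions of parameters.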

Many of the most powerful AI systems contain trillions of parameters. Because of this, the reasons AI systems make the decisions that they do are often opaque. This is the AI explainability problem – the impenetrable black box of AI decision-making.

Consider a variation of the “Trolley Problem.” Imagine that you are a passenger in a self-driving vehicle, controlled by an AI. A small child runs into the road, and the AI must now decide: run over the child or swerve and crash, potentially injuring its passengers. This choice would be difficult for a human to make, but a human has the benefit of being able to explain their decision. Their rationalization – shaped by ethical norms, the perceptions of others and expected behavior – supports trust.

In contrast, an AI can’t rationalize its decision-making. You can’t look under the hood of the self-driving vehicle at its trillions of parameters to explain why it made the decision that it did. AI fails the predictive requirement for trust.

AI behavior and human expectations

Trust relies not only on predictability, but also on normative or ethical motivations. You typically expect people to act not only as you assume they will, but also as they should. Human values are influenced by common experience, and moral reasoning is a dynamic process, shaped by ethical standards and others’ perceptions.

Unlike humans, AI doesn’t adjust its behavior based on how it is perceived by others or by adhering to ethical norms. AI’s internal representation of the world is largely static, set by its training data. Its decision-making process is grounded in an unchanging model of the world, unfazed by the dynamic, nuanced social interactions constantly influencing human behavior. Researchers are working on programming AI to include ethics, but that’s proving challenging.

The self-driving car scenario illustrates this issue. How can you ensure that the car’s AI makes decisions that align with human expectations? For example, the car could decide that hitting the child is the optimal course of action, something most human drivers would instinctively avoid. This issue is the AI alignment problem, and it’s another source of uncertainty that erects barriers to trust.

Video: AI expert Stuart Russell explains the AI alignment problem.

Critical systems and trusting AI

One way to reduce uncertainty and boost trust is to ensure that people are involved in the decisions AI systems make. This is the approach taken by the U.S. Department of Defense, which requires that for all AI decision-making, a human must be either in the loop or on the loop. In the loop means the AI system makes a recommendation but a human is required to initiate an action. On the loop means that while an AI system can initiate an action on its own, a human monitor can interrupt or alter it. The control-flow difference between the two is sketched below.
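
A minimal sketch of that difference, assuming nothing about real systems: the function names, the sensor threshold and the grid-segment action are all hypothetical, invented here for illustration rather than taken from the Department of Defense's actual interfaces.

```python
# Hypothetical sketch of "in the loop" vs "on the loop" oversight.
# None of these names or interfaces come from a real system.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Recommendation:
    action: str
    confidence: float

def ai_recommend(reading: float) -> Recommendation:
    # Stand-in for a real model: recommend a shutdown when a sensor spikes.
    if reading > 0.9:
        return Recommendation("shut_down_grid_segment", 0.97)
    return Recommendation("no_action", 0.99)

def human_in_the_loop(reading: float,
                      approve: Callable[[Recommendation], bool]) -> str:
    """The AI only recommends; nothing happens until a human initiates it."""
    rec = ai_recommend(reading)
    if rec.action != "no_action" and approve(rec):
        return f"executed: {rec.action}"
    return "no action taken"

def human_on_the_loop(reading: float,
                      veto: Callable[[Recommendation], bool]) -> str:
    """The AI acts on its own; a human monitor can only interrupt it."""
    rec = ai_recommend(reading)
    if rec.action == "no_action":
        return "no action taken"
    if veto(rec):                          # the monitor interrupts the action
        return f"aborted: {rec.action}"
    return f"executed: {rec.action}"       # otherwise the action proceeds

# In the loop blocks on a person; on the loop acts unless a person objects.
print(human_in_the_loop(0.95, approve=lambda rec: True))
print(human_on_the_loop(0.95, veto=lambda rec: False))
```

Note the asymmetry: in the first function, inaction is the default and a person must opt in; in the second, action is the default and a person must opt out, which is why rapid, nested decision-making squeezes out the chance to intervene.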

While keeping humans involved is a great first step, I am not convinced that this will be sustainable long term. As companies and governments continue to adopt AI, the future will likely include nested AI systems, where rapid decision-making limits the opportunities for people to intervene. It is important to resolve the explainability and alignment issues before the critical point is reached where human intervention becomes impossible. At that point, there will be no option other than to trust AI.

Avoiding that threshold is especially important because AI is increasingly being integrated into critical systems, which include things such as electric grids, the internet and military systems. In critical systems, trust is paramount, and undesirable behavior could have deadly consequences. As AI integration becomes more complex, it becomes even more important to resolve issues that limit trustworthiness.

Can people ever trust AI?

AI is alien – an intelligent system into which people have little insight. Humans are largely predictable to other humans because we share the same human experience, but this doesn’t extend to artificial intelligence, even though humans created it.

If trustworthiness has inherently predictable and normative elements, AI fundamentally lacks the qualities that would make it worthy of trust. More research in this area will hopefully shed light on this issue, ensuring that AI systems of the future are worthy of our trust.

Mark Bailey, Faculty Member and Chair, Cyber Intelligence and Data Science, National Intelligence University

This article is republished from The Conversation under a Creative Commons license. Read the original article.
