Opinion

What’s Preventing Artificial Intelligence from Taking the Next Big Leap?

By Contributor · 2017-04-28 · 5 Mins Read

by Ben Dickson

No one will deny that Artificial Intelligence has taken great strides in recent years. Thanks to AI we're getting targeted and personalized ads, and we're getting better at education, healthcare, agriculture and much more.

So what’s preventing Artificial Intelligence from taking the next big leap? Maybe it’s intelligence.

The fact of the matter is, AI algorithms are becoming very smart and efficient at specific tasks, but they're not smart enough to explain their decisions. And neither can their creators.

How does that amount to a problem? It doesn't, as long as AI is making suggestions rather than decisions. For things such as advertisements and purchase suggestions, it's okay to put the robots in charge. Even in domains such as the diagnosis and treatment of illness, AI can make some very good recommendations and help physicians make decisions about patient treatment. AI can help glean traffic patterns and make recommendations for reducing congestion in cities. In fact, this has already been enough to disrupt the employment landscape.

But when it comes to making critical decisions, we're still not ready to put autonomous AI-powered systems in full control, because everyone makes mistakes, whether human or not. And when those mistakes have critical or fatal consequences, someone will have to be held to account.

But there are plenty of fields where automation has already fully replaced humans. Manufacturing is just one example, and those systems seldom go awry. In fact, a well-trained AI has a negligible margin of error. Self-driving cars, for instance, are bound to reduce road accidents by over 90 percent. So what makes AI any different from other software?

A woman kissing a male robot (Photo Credit: www.shutterstock.com)

Transparency—or opacity, depending on your perspective.

Past generations of software were totally transparent. Everything relied on source code, which could be examined to determine the cause of errors. Open-source software is available to all for scrutiny. Even closed-source software can be reverse-engineered or, failing that, opened for examination with the right legal warrant. So if a piece of software stops working as it should and causes damage, it's relatively easy to determine culpability. Investigators can determine whether the user was to blame for misusing the application, or whether the developer was responsible for not fixing the bugs.

Things are not so clear-cut with Artificial Intelligence. Developers create algorithms, provide them with data, train them, and then let them learn on their own. Those algorithms usually end up finding patterns and tricks that even their creators can't fathom. They become opaque, as engineers say. AlphaGo, the famous Google AI that beat the world champion at the ancient board game Go, made moves that left its creators (and the world) stunned.
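To make that opacity concrete, here is a minimal, hypothetical sketch (using scikit-learn, which the article itself does not mention): a small model learns to predict accurately, yet the only artifact it leaves behind is a pile of numeric weights, not human-readable reasons. The dataset, network size and library choice are illustrative assumptions.

```python
# A minimal sketch of "opaque" machine learning: the model learns useful
# patterns, but its internal parameters are just numbers, not explanations.
# Dataset, model size and library are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic data standing in for any real-world problem (loans, diagnoses, ...)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a small neural network; its "knowledge" ends up in weight matrices.
model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X_train, y_train)

print("test accuracy:", model.score(X_test, y_test))

# The model answers confidently for a single case...
print("predicted probabilities:", model.predict_proba(X_test[:1]))

# ...but the only "explanation" on offer is thousands of raw weights.
print("learned parameters:", sum(w.size for w in model.coefs_))
```

The point of the sketch is simply that nothing in the trained object maps back to a statement like "this case was approved because of X", which is exactly the gap the author is describing.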

And therein lies the problem. No one will object if AlphaGo makes a wrong move, or if the most sophisticated advertising algorithm makes a bad suggestion. The media will talk when Google Photos labels black people as gorillas, or when an AI judge favors white contestants in a beauty contest. Microsoft's disastrous chatbot will be remembered as a bad joke. But no one gets hurt (at least not seriously) by these systems, and so we shrug off their mistakes.

A hand holding AI chipsets against a robot arm in a smart factory (Photo Credit: www.shutterstock.com)

But what about more critical circumstances? What happens with the remaining 10 percent of fatalities that self-driving cars can't prevent? Who will explain why a self-driving car ran over a pedestrian, even if the probability of that happening is near zero? Who will be held to account? The engineers will say that they can't explain every decision their product makes. The driver, if the owner can be called that at all, will have had no control over the situation. The car, which committed the act, stays mum.

The same concerns extend to other critical fields such as healthcare, crime fighting and law. Mistakes in those fields can have social, political, and even fatal repercussions. And while humans make mistakes all the time, and even more frequently than machines, they take responsibility for their mistakes: they go to court, stand trial, pay fines, go to jail.

So where do we go from here? First, we need more transparency. This means AI developers should make sure that both the software artifacts (source code, components…) and the data science (stats, formulas, math…) that power their products are open to scrutiny. This goes against the current norm, which is to keep secrets away from prying eyes. Fortunately, we're seeing some systematic efforts in this field, but more needs to be done.
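As one concrete, admittedly modest, illustration of opening the data science up to scrutiny, the hypothetical sketch below uses permutation importance, a standard model-inspection technique available in scikit-learn. It is my assumption that this is the kind of tooling such transparency efforts involve; it is not something named in the article, and the model and data are illustrative.

```python
# A sketch of elementary model scrutiny: permutation importance measures how
# much a model's accuracy drops when each input feature is shuffled, giving
# outsiders at least a coarse view of what the model relies on.
# The model and data are illustrative assumptions, not the article's.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle each feature several times and record the drop in test accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: mean importance {importance:.3f}")
```

Scores of this kind don't explain individual decisions, but publishing them alongside a model is one small step toward the scrutiny the author calls for.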

Naturally, not everything can be made completely transparent, especially where deep learning is involved. There will still be major opacity where complex functionality is involved. So second, I strongly believe that in those cases, humans should remain in exclusive charge. AI systems can complement human efforts, providing experts with research results, patterns, and useful data, and helping them make critical decisions. The point is, the red button must be pressed by someone who can assume responsibility for their actions.
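In engineering terms, this "human presses the red button" stance is often built as a human-in-the-loop gate: the model only recommends, and anything critical or uncertain is routed to a person who owns the decision. The sketch below is a minimal, hypothetical illustration of that pattern; the threshold, class names and review queue are my assumptions, not anything described in the article.

```python
# A minimal human-in-the-loop sketch: the AI recommends, a person decides.
# Threshold, data structures and the review queue are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List

CONFIDENCE_THRESHOLD = 0.95  # below this, a human must review

@dataclass
class Recommendation:
    case_id: str
    action: str
    confidence: float

@dataclass
class ReviewQueue:
    pending: List[Recommendation] = field(default_factory=list)

    def submit(self, rec: Recommendation) -> None:
        self.pending.append(rec)

def handle(rec: Recommendation, queue: ReviewQueue) -> str:
    """Route a model recommendation: auto-apply only trivial, high-confidence
    suggestions; everything critical or uncertain goes to a human reviewer."""
    if rec.confidence >= CONFIDENCE_THRESHOLD and rec.action == "suggest":
        return f"{rec.case_id}: suggestion shown automatically"
    queue.submit(rec)
    return f"{rec.case_id}: escalated to human reviewer"

queue = ReviewQueue()
print(handle(Recommendation("ad-001", "suggest", 0.99), queue))       # harmless, automate
print(handle(Recommendation("patient-042", "treat", 0.97), queue))    # critical, escalate
print(len(queue.pending), "case(s) awaiting a human decision")
```

The design choice is that accountability, not accuracy, decides the routing: even a highly confident recommendation is escalated when the action itself is critical.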

Things will not stay this way forever. Artificial General Intelligence is just around the corner (but we’ve been saying this for quite a while), and when it becomes a reality, we’ll have robots and machines that can reason, make decisions, explain those decisions and bear the consequences. Some say it’ll take decades. Others say it’ll never come.

Until it does though, AI will still have to take a back seat and let the grownups decide.

This article was originally published on Tech Talks. Read the original article here.

Tags: Artificial intelligence, deep learning, machine learning, politics, tech
