Opinion

Artificial Intelligence Researchers Must Learn Ethics

By Contributor · 2017-08-30 · 5 mins read

By James Harland, RMIT University

Scientists who build artificial intelligence and autonomous systems need a strong ethical understanding of the impact their work could have.

More than 100 technology pioneers recently published an open letter to the United Nations on the topic of lethal autonomous weapons, or “killer robots”.


Read More: How to make robots that we can trust


These people, including the entrepreneur Elon Musk and the founders of several robotics companies, are part of an effort that began in 2015. The original letter called for an end to an arms race that it claimed could be the “third revolution in warfare, after gunpowder and nuclear arms”.

The UN has a role to play, but responsibility for the future of these systems also needs to begin in the lab. The education system that trains our AI researchers needs to school them in ethics as well as coding.

Autonomy in AI

Autonomous systems can make decisions for themselves, with little to no input from humans. This greatly increases the usefulness of robots and similar devices.

For example, an autonomous delivery drone only requires the delivery address, and can then work out for itself the best route to take – overcoming any obstacles that it may encounter along the way, such as adverse weather or a flock of curious seagulls.
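The route-finding step described above can be pictured as a shortest-path search over a map in which obstacle cells are blocked off. The following is a minimal illustrative sketch (not any real drone's software): a breadth-first search on a toy grid, where `#` marks a cell the drone must avoid.

```python
from collections import deque

def plan_route(grid, start, goal):
    """Breadth-first search for a shortest obstacle-free route.

    grid: list of strings; '#' marks a blocked cell (e.g. bad weather).
    start, goal: (row, col) tuples. Returns a list of cells, or None
    if no route exists.
    """
    rows, cols = len(grid), len(grid[0])
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(path + [(nr, nc)])
    return None  # the goal is unreachable

# A wall of obstacles forces a detour around the bottom of the grid.
grid = [
    ".#.",
    ".#.",
    "...",
]
route = plan_route(grid, (0, 0), (0, 2))
```

Real systems use far richer planners (continuous space, wind, replanning in flight), but the principle is the same: the human supplies only the destination, and the vehicle computes the path.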

Drones deliver more than just food.
www.routexl.com, CC BY-NC-SA

There has been a great deal of research into autonomous systems, and delivery drones are currently being developed by companies such as Amazon. Clearly, the same technology could easily be used to make deliveries that are significantly nastier than food or books.

Drones are also becoming smaller, cheaper and more robust, which means it will soon be feasible for flying armies of thousands of drones to be manufactured and deployed.

The potential for the deployment of weapons systems like this, largely decoupled from human control, prompted the letter urging the UN to “find a way to protect us all from these dangers”.

Ethics and reasoning

Thomas Aquinas.
Wikipedia Commons

Whatever your opinion of such weapons systems, the issue highlights the need for consideration of ethical issues in AI research.

As in most areas of science, acquiring the necessary depth to make contributions to the world’s knowledge requires focusing on a specific topic. Often researchers are experts in relatively narrow areas, and may lack any formal training in ethics or moral reasoning.

It is precisely this kind of reasoning that is increasingly required. For example, driverless cars, which are being tested in the US, will need to be able to make judgements about potentially dangerous situations.

For instance, how should a driverless car react if a cat unexpectedly crosses the road? Is it better to run over the cat, or to swerve sharply to avoid it, risking injury to the car’s occupants?

Hopefully such cases will be rare, but the car will need to be designed with some specific principles in mind to guide its decision making. As Virginia Dignum put it when delivering her paper “Responsible Autonomy” at the recent International Joint Conference on Artificial Intelligence (IJCAI) in Melbourne:

The driverless car will have ethics; the question is whose?

A similar theme was explored in the paper “Automating the Doctrine of Double Effect” by Naveen Sundar Govindarajulu and Selmer Bringsjord.

The Doctrine of Double Effect is a means of reasoning about moral issues, such as the right to self-defence under particular circumstances, and is credited to the 13th-century Catholic scholar Thomas Aquinas.

The name Double Effect comes from obtaining a good effect (such as saving someone’s life) as well as a bad effect (harming someone else in the process). This is a way to justify actions such as a drone shooting at a car that is running down pedestrians.
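To see why this doctrine lends itself to automation, note that it reduces to a small set of checkable conditions. The sketch below is a deliberately crude toy, not the formalisation in Govindarajulu and Bringsjord’s paper (which is built on a formal logic); every attribute name here is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Action:
    # Toy attributes; real formalisations are expressed in a formal logic
    # and are far richer than boolean flags and two numbers.
    act_is_permissible: bool   # the act itself is not intrinsically wrong
    harm_is_the_means: bool    # the good effect is achieved *through* the harm
    harm_is_intended: bool     # the harm is intended, not merely foreseen
    good: float                # magnitude of the good effect
    harm: float                # magnitude of the bad effect

def passes_double_effect(a: Action) -> bool:
    """Check the informal conditions of the Doctrine of Double Effect."""
    return (a.act_is_permissible
            and not a.harm_is_the_means
            and not a.harm_is_intended
            and a.good >= a.harm)  # proportionality, crudely, as a comparison

# Stopping a car that is running down pedestrians: the harm to its driver
# is foreseen but not intended, and is not the means of saving the pedestrians.
intervene = Action(act_is_permissible=True, harm_is_the_means=False,
                   harm_is_intended=False, good=10.0, harm=3.0)
```

The hard research problems lie precisely in what this toy glosses over: formalising intention, means, and proportionality so that a machine can evaluate them reliably.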

What does this mean for education?

The emergence of ethics as a topic for discussion in AI research suggests that we should also consider how we prepare students for a world in which autonomous systems are increasingly common.

The need for “T-shaped” graduates is now well established. Companies are looking not only for a specific area of technical depth (the vertical stroke of the T), but also for professional skills and personal qualities (the horizontal stroke). Graduates who combine the two can see problems from different perspectives and work effectively in multidisciplinary teams.

A Google self-driving car.
Roman Boed, CC BY-NC

Most undergraduate courses in computer science and similar disciplines include a course on professional ethics and practice. These are usually focused on intellectual property, copyright, patents and privacy issues, which are certainly important.

However, it seems clear from the discussions at IJCAI that there is an emerging need for additional material on broader ethical issues.


Read More: Never mind killer robots – even the good ones are scarily unpredictable


Topics could include methods for determining the lesser of two evils, legal concepts such as criminal negligence, and the historical effect of technology on society.

The key point is to enable graduates to integrate ethical and societal perspectives into their work from the very beginning. It also seems appropriate to require research proposals to demonstrate how ethical considerations have been incorporated.

As AI becomes more widely and deeply embedded in everyday life, it is imperative that technologists understand the society in which they live and the effect their inventions may have on it.

  • James Harland, Associate Professor in Computational Logic, RMIT University
  • This article was originally published on The Conversation. Read the original article.
