AI Has Social Consequences, But Who Pays The Price? Tech Companies’ Problem With ‘Ethical Debt’

By The Conversation | 2023-04-20 | 7 min read
You don’t have to see the future to know that AI has ethical baggage. Wang Yukun/Moment via Getty Images

By Casey Fiesler, University of Colorado Boulder

As public concern about the ethical and social implications of artificial intelligence keeps growing, it might seem like it’s time to slow down. But inside tech companies themselves, the sentiment is quite the opposite. As Big Tech’s AI race heats up, it would be an “absolutely fatal error in this moment to worry about things that can be fixed later,” a Microsoft executive wrote in an internal email about generative AI, as The New York Times reported.

In other words, it’s time to “move fast and break things,” to quote Mark Zuckerberg’s old motto. Of course, when you break things, you might have to fix them later – at a cost.

In software development, the term “technical debt” refers to the implied cost of making future fixes as a consequence of choosing faster, less careful solutions now. Rushing to market can mean releasing software that isn’t ready, knowing that once it does hit the market, you’ll find out what the bugs are and can hopefully fix them then.
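To make the software analogy concrete, here is a minimal, hypothetical Python sketch of what technical debt can look like in practice; the scenario and function names are invented for illustration and are not taken from this article.

    # Hypothetical example: a currency converter rushed out to meet a deadline.
    # The quick version hard-codes an exchange rate. It works today, but every
    # future change (new currencies, updated rates) costs extra rework later --
    # that deferred cost is the technical debt.
    def convert_usd_to_eur_quick(amount_usd: float) -> float:
        return amount_usd * 0.92  # FIXME: hard-coded rate, stale as soon as rates move

    # The slower, more careful version takes the rate as a parameter, so it can
    # later be wired to a live rate feed without rewriting every caller.
    def convert_currency(amount: float, rate: float) -> float:
        if rate <= 0:
            raise ValueError("exchange rate must be positive")
        return amount * rate

    if __name__ == "__main__":
        print(convert_usd_to_eur_quick(100.0))  # 92.0 today; silently wrong tomorrow
        print(convert_currency(100.0, 0.92))    # same answer, no accumulated debt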

However, negative news stories about generative AI tend not to be about these kinds of bugs. Instead, much of the concern is about AI systems amplifying harmful biases and stereotypes and students using AI deceptively. We hear about privacy concerns, people being fooled by misinformation, labor exploitation and fears about how quickly human jobs may be replaced, to name a few. These problems are not software glitches. Realizing that a technology reinforces oppression or bias is very different from learning that a button on a website doesn’t work.

As a technology ethics educator and researcher, I have thought a lot about these kinds of “bugs.” What’s accruing here is not just technical debt, but ethical debt. Just as technical debt can result from limited testing during the development process, ethical debt results from not considering possible negative consequences or societal harms. And with ethical debt in particular, the people who incur it are rarely the people who pay for it in the end.

Off to the races

As soon as OpenAI released ChatGPT in November 2022, firing the starter pistol for today’s AI race, I imagined the debt ledger starting to fill.

Within months, Google and Microsoft released their own generative AI programs, which seemed rushed to market in an effort to keep up. Google’s stock prices fell when its chatbot Bard confidently supplied a wrong answer during the company’s own demo. One might expect Microsoft to be particularly cautious when it comes to chatbots, considering Tay, its Twitter-based bot that was almost immediately shut down in 2016 after spouting misogynist and white supremacist talking points. Yet early conversations with the AI-powered Bing left some users unsettled, and it has repeated known misinformation.

Not all AI-generated writing is so delightful. Smith Collection/Gado/Archive Photos via Getty Images

When the social debt of these rushed releases comes due, I expect that we will hear mention of unintended or unanticipated consequences. After all, even with ethical guidelines in place, it’s not as if OpenAI, Microsoft or Google can see the future. How can someone know what societal problems might emerge before the technology is even fully developed?

The root of this dilemma is uncertainty, which is a common side effect of many technological revolutions, but magnified in the case of artificial intelligence. After all, part of the point of AI is that its actions are not known in advance. AI may not be designed to produce negative consequences, but it is designed to produce the unforeseen.

However, it is disingenuous to suggest that technologists cannot accurately speculate about what many of these consequences might be. By now, there have been countless examples of how AI can reproduce bias and exacerbate social inequities, but these problems are rarely publicly identified by tech companies themselves. It was external researchers who found racial bias in widely used commercial facial analysis systems, for example, and in a medical risk prediction algorithm that was being applied to around 200 million Americans. Academics and advocacy or research organizations like the Algorithmic Justice League and the Distributed AI Research Institute are doing much of this work: identifying harms after the fact. And this pattern doesn’t seem likely to change if companies keep firing ethicists.

Speculating – responsibly

I sometimes describe myself as a technology optimist who thinks and prepares like a pessimist. The only way to decrease ethical debt is to take the time to think ahead about things that might go wrong – but this is not something that technologists are necessarily taught to do.

Scientist and iconic science fiction writer Isaac Asimov once said that sci-fi authors “foresee the inevitable, and although problems and catastrophes may be inevitable, solutions are not.” Of course, science fiction writers do not tend to be tasked with developing these solutions – but right now, the technologists developing AI are.

So how can AI designers learn to think more like science fiction writers? One of my current research projects focuses on developing ways to support this process of ethical speculation. I don’t mean designing with far-off robot wars in mind; I mean the ability to consider future consequences at all, including in the very near future.

Learning to speculate about tech’s consequences – not just for tomorrow, but for the here and now. Maskot/Getty Images

This is a topic I’ve been exploring in my teaching for some time, encouraging students to think through the ethical implications of sci-fi technology in order to prepare them to do the same with technology they might create. One exercise I developed is called the Black Mirror Writers Room, where students speculate about possible negative consequences of technology like social media algorithms and self-driving cars. Often these discussions are based on patterns from the past or the potential for bad actors.

Ph.D. candidate Shamika Klassen and I evaluated this teaching exercise in a research study and found that there are pedagogical benefits to encouraging computing students to imagine what might go wrong in the future – and then brainstorm about how we might avoid that future in the first place.

However, the purpose isn’t to prepare students for those far-flung futures; it is to teach speculation as a skill that can be applied immediately. This skill is especially important for helping students imagine harm to other people, since technological harms often disproportionately impact marginalized groups that are underrepresented in computing professions. The next steps for my research are to translate these ethical speculation strategies for real-world technology design teams.

Time to hit pause?

In March 2023, an open letter with thousands of signatures advocated a pause on training AI systems more powerful than GPT-4. Unchecked, AI development “might eventually outnumber, outsmart, obsolete and replace us,” or even cause a “loss of control of our civilization,” its writers warned.

As critiques of the letter point out, this focus on hypothetical risks ignores actual harms happening today. Nevertheless, I think there is little disagreement among AI ethicists that AI development needs to slow down – that developers throwing up their hands and citing “unintended consequences” is not going to cut it.

We are only a few months into the “AI race” picking up significant speed, and I think it’s already clear that ethical considerations are being left in the dust. But the debt will come due eventually – and history suggests that Big Tech executives and investors may not be the ones paying for it.

Casey Fiesler, Associate Professor of Information Science, University of Colorado Boulder

This article is republished from The Conversation under a Creative Commons license. Read the original article.
