Biased AI Can Be Bad For Your Health – Here’s How To Promote Algorithmic Fairness

By The Conversation · 2021-03-09 · Updated 2021-03-12
[Image: Robots. Photo by C Technical from Pexels]

Artificial intelligence (AI) holds great promise for improving human health by helping doctors make accurate diagnoses and treatment decisions. But it can also lead to discrimination that harms minorities, women and economically disadvantaged people.

The question is, when health care algorithms discriminate, what recourse do people have?

A prominent example of this kind of discrimination is an algorithm used to refer chronically ill patients to programs that care for high-risk patients. A study in 2019 found that the algorithm favored whites over sicker African Americans in selecting patients for these beneficial services. This is because it used past medical expenditures as a proxy for medical needs.

Poverty and difficulty accessing health care often prevent African Americans from spending as much money on health care as others. The algorithm misinterpreted their low spending as indicating they were healthy and deprived them of critically needed support.
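The effect of this proxy choice can be sketched with synthetic data: two groups with identical underlying medical need, one of which spends less on care because of access barriers. All numbers below are invented for illustration, not drawn from the study.

```python
import random

random.seed(0)

# Two groups with the same distribution of true need; group "B" spends
# roughly half as much due to access barriers (invented parameters).
patients = []
for i in range(1000):
    group = "A" if i % 2 == 0 else "B"
    need = random.uniform(0, 100)            # true medical need (unobserved)
    access = 1.0 if group == "A" else 0.5    # spending suppressed for group B
    spending = need * access + random.uniform(-5, 5)
    patients.append({"group": group, "need": need, "spending": spending})

# Select the "highest-risk" 20% by the proxy (spending), as the algorithm did.
selected = sorted(patients, key=lambda p: p["spending"], reverse=True)[:200]
share_b = sum(p["group"] == "B" for p in selected) / len(selected)
print(f"Group B share of selected patients: {share_b:.0%}")  # far below 50%
```

Even though both groups are equally sick by construction, ranking on the spending proxy routes almost all of the high-risk care slots to group A.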

As a professor of law and bioethics, I have analyzed this problem and identified ways to address it.

How algorithms discriminate

What explains algorithmic bias? Historical discrimination is sometimes embedded in training data, and algorithms learn to perpetuate it.

For example, doctors often diagnose angina and heart attacks based on symptoms that men experience more commonly than women. Women are consequently underdiagnosed for heart disease. An algorithm designed to help doctors detect cardiac conditions that is trained on historical diagnostic data could learn to focus on men’s symptoms and not on women’s, which would exacerbate the problem of underdiagnosing women.
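A toy simulation makes the mechanism concrete: if the true disease rate is equal across sexes but historical labels under-record diagnoses for women, any model fit to those labels inherits the gap. The prevalence and miss rate below are invented for illustration.

```python
import random

random.seed(1)

def historical_label(sex, has_disease):
    """Return the historically recorded diagnosis, not the true state."""
    if not has_disease:
        return 0
    # Assumption for this sketch: women's atypical symptoms were
    # missed about 40% of the time in the historical record.
    return 0 if sex == "F" and random.random() < 0.4 else 1

data = []
for _ in range(10_000):
    sex = random.choice("MF")
    has_disease = random.random() < 0.10   # same true prevalence for both
    data.append((sex, historical_label(sex, has_disease)))

# The simplest possible "model": per-sex base rates learned from the labels.
rate = {s: sum(label for x, label in data if x == s)
           / sum(x == s for x, _ in data)
        for s in "MF"}
print(rate)  # learned "risk" for women is well below that for men
```

The learned rate for women is roughly 40% lower than for men, even though the true prevalence is identical: the model faithfully reproduces the underdiagnosis in its training data.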

Also, AI discrimination can be rooted in erroneous assumptions, as in the case of the high-risk care program algorithm.

In another instance, electronic health records software company Epic built an AI-based tool to help medical offices identify patients who are likely to miss appointments. It enabled clinicians to double-book potential no-show visits to avoid losing income. Because a primary variable for assessing the probability of a no-show was previous missed appointments, the AI disproportionately identified economically disadvantaged people.

These are people who often have problems with transportation, child care and taking time off from work. When they did arrive at appointments, physicians had less time to spend with them because of the double-booking.
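A sketch with invented numbers shows how a model keyed on prior missed appointments flags disadvantaged patients at a far higher rate, even when the model never sees their economic status directly.

```python
import random

random.seed(2)

# Hypothetical population: barriers (transport, child care, inflexible
# work) add extra prior no-shows for disadvantaged patients.
patients = []
for _ in range(5000):
    disadvantaged = random.random() < 0.3
    barrier_misses = random.randint(1, 3) if disadvantaged else 0
    prior_misses = random.randint(0, 2) + barrier_misses
    patients.append((disadvantaged, prior_misses))

def flag_rate(group_value):
    """Share of a group flagged for double-booking (>= 3 prior misses)."""
    misses = [m for d, m in patients if d == group_value]
    return sum(m >= 3 for m in misses) / len(misses)

print(f"flagged, disadvantaged: {flag_rate(True):.0%}")
print(f"flagged, others:        {flag_rate(False):.0%}")
```

The feature is a stand-in for disadvantage, so the disparate impact appears without any explicit reference to income or group membership.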

[Image: A man in a white lab coat and facemask looks at several computer screens showing medical images. AI systems can be a big help in health care, like this Brazilian system that detects lung injuries that indicate COVID-19 infection. The key is ensuring that such systems don't discriminate based on race or gender. Nelson Almeida/AFP via Getty Images]

Some algorithms explicitly adjust for race. Their developers reviewed clinical data and concluded that generally, African Americans have different health risks and outcomes from others, so they built adjustments into the algorithms with the aim of making the algorithms more accurate.

But the data these adjustments are based on is often outdated, suspect or biased. These algorithms can cause doctors to misdiagnose Black patients and divert resources away from them.

For example, the American Heart Association heart failure risk score, which ranges from 0 to 100, adds 3 points for non-Blacks. It thus identifies non-Black patients as more likely to die of heart disease. Similarly, a kidney stone algorithm adds 3 of 13 points to non-Blacks, thereby assessing them as more likely to have kidney stones. But in both cases the assumptions were wrong. Though these are simple algorithms that are not necessarily incorporated into AI systems, AI developers sometimes make similar assumptions when they develop their algorithms.
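A minimal sketch of such a hard-coded race adjustment: the base-score formula here is invented for illustration, and only the "+3 points for non-Blacks" adjustment mirrors the heart failure score described above.

```python
def risk_score(age, systolic_bp, is_black):
    """Toy risk score on a 0-100 scale; base formula is invented."""
    base = min(100, age // 2 + max(0, 140 - systolic_bp))
    # The race adjustment in question: +3 points for non-Black patients,
    # assessing them as more likely to die of heart disease.
    return base + (0 if is_black else 3)

# Two clinically identical patients receive different scores:
print(risk_score(70, 120, is_black=True))   # 55
print(risk_score(70, 120, is_black=False))  # 58
```

The adjustment shifts every non-Black patient up the ranking, so on the margin it diverts attention and resources away from Black patients with the same clinical presentation.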

Algorithms that adjust for race may be based on inaccurate generalizations and could mislead physicians. Skin color alone does not explain different health risks or outcomes. Instead, differences are often attributable to genetics or socioeconomic factors, which is what algorithms should adjust for.

Furthermore, almost 7% of the population is of mixed ancestry. If algorithms suggest different treatments for African Americans and non-Blacks, how should doctors treat multiracial patients?

Promoting algorithmic fairness

There are several avenues for addressing algorithmic bias: litigation, regulation, legislation and best practices.

  1. Disparate impact litigation: Algorithmic bias typically does not constitute intentional discrimination. AI developers and doctors using AI likely do not mean to hurt patients. Instead, AI can lead them to unintentionally discriminate by having a disparate impact on minorities or women. In the fields of employment and housing, people who feel that they have suffered discrimination can sue for disparate impact discrimination. But the courts have determined that private parties cannot sue for disparate impact in health care cases. In the AI era, this approach makes little sense. Plaintiffs should be allowed to sue for medical practices resulting in unintentional discrimination.
  2. FDA regulation: The Food and Drug Administration is working out how to regulate health-care-related AI. It is currently regulating some forms of AI and not others. To the extent that the FDA oversees AI, it should ensure that problems of bias and discrimination are detected and addressed before AI systems receive approval.
  3. Algorithmic Accountability Act: In 2019, Senators Cory Booker and Ron Wyden and Rep. Yvette D. Clarke introduced the Algorithmic Accountability Act. In part, it would have required companies to study the algorithms they use, identify bias and correct problems they discover. The bill did not become law, but it paved the path for future legislation that could be more successful.
  4. Make fairer AIs: Medical AI developers and users can prioritize algorithmic fairness. It should be a key element in designing, validating and implementing medical AI systems, and health care providers should keep it in mind when choosing and using these systems.
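One concrete form the fairness work in point 4 can take is an audit that compares a model's true-positive rate across groups (an "equal opportunity" check). The groups, predictions, and outcomes below are synthetic placeholders.

```python
def true_positive_rate(preds, outcomes):
    """Share of truly positive cases the model correctly flags."""
    positives = [(p, o) for p, o in zip(preds, outcomes) if o == 1]
    return sum(p for p, _ in positives) / len(positives)

# Synthetic audit data: same outcomes, different model behavior per group.
group_a = {"preds": [1, 1, 0, 1, 0, 1], "outcomes": [1, 1, 0, 1, 1, 0]}
group_b = {"preds": [0, 1, 0, 0, 0, 1], "outcomes": [1, 1, 0, 1, 1, 0]}

tpr_a = true_positive_rate(group_a["preds"], group_a["outcomes"])
tpr_b = true_positive_rate(group_b["preds"], group_b["outcomes"])
print(f"TPR gap between groups: {abs(tpr_a - tpr_b):.2f}")
```

A gap like this means sick patients in one group are flagged for care far less often than equally sick patients in the other; validation would flag any gap above a chosen tolerance for investigation before deployment.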

AI is becoming more prevalent in health care. AI discrimination is a serious problem that can hurt many patients, and it’s the responsibility of those in the technology and health care fields to recognize and address it.

Sharona Hoffman, Professor of Health Law and Bioethics, Case Western Reserve University

This article is republished from The Conversation under a Creative Commons license. Read the original article.
