Opinion

AI in Medicine Raises Legal and Ethical Concerns

By Contributor · 2019-09-04 · 5 min read
[Image: Medical technology concept, electronic medical record. metamorworks / Shutterstock.com]

By Sharona Hoffman

The use of artificial intelligence in medicine is generating great excitement and hope for treatment advances.

AI generally refers to computers’ ability to mimic human intelligence and to learn. For example, by using machine learning, scientists are working to develop algorithms that will help them make decisions about cancer treatment. They hope that computers will be able to analyze radiological images and discern which cancerous tumors will respond well to chemotherapy and which will not.

But AI in medicine also raises significant legal and ethical challenges. These include concerns about privacy, discrimination, psychological harm and the physician-patient relationship. In a forthcoming article, I argue that policymakers should establish a number of safeguards around AI, much as they did when genetic testing became commonplace.

Potential for discrimination

AI involves the analysis of very large amounts of data to discern patterns, which are then used to predict the likelihood of future occurrences. In medicine, the data sets can come from electronic health records and health insurance claims but also from several surprising sources. AI can draw upon purchasing records, income data, criminal records and even social media for information about an individual’s health.

[Image: The hope is that AI will be able to read radiological images more efficiently than a human. AP Photo/David Goldman]

Researchers are already using AI to predict a multitude of medical conditions. These include heart disease, stroke, diabetes, cognitive decline, future opioid abuse and even suicide. As one example, Facebook employs an algorithm that makes suicide predictions based on posts with phrases such as “Are you okay?” paired with “Goodbye” and “Please don’t do this.”

This predictive capability of AI raises significant ethical concerns in health care. If AI generates predictions about your health, I believe that information could one day be included in your electronic health records.

Anyone with access to your health records could then see predictions about cognitive decline or opioid abuse. Patients’ medical records are seen by dozens or even hundreds of clinicians and administrators in the course of medical treatment. Additionally, patients themselves often authorize others to access their records: for example, when they apply for employment or life insurance.

Data broker industry giants such as LexisNexis and Acxiom are also mining personal data and engaging in AI activities. They could then sell medical predictions to any interested third parties, including marketers, employers, lenders, life insurers and others. Because these businesses are not health care providers or insurers, the HIPAA Privacy Rule does not apply to them. Therefore, they do not have to ask patients for permission to obtain their information and can freely disclose it.

Such disclosures can lead to discrimination. Employers, for instance, are interested in workers who will be healthy and productive, with few absences and low medical costs. If they believe certain applicants will develop diseases in the future, they will likely reject them. Lenders, landlords, life insurers and others might likewise make adverse decisions about individuals based on AI predictions.

Lack of protections

The Americans with Disabilities Act does not prohibit discrimination based on future medical problems. It applies only to current and past ailments. In response to genetic testing, Congress enacted the Genetic Information Nondiscrimination Act. This law prohibits employers and health insurers from considering genetic information and making decisions based on related assumptions about people’s future health conditions. No law imposes a similar prohibition with respect to nongenetic predictive data.

AI health prediction can also lead to psychological harm. For example, many people could be traumatized if they learn that they will likely suffer cognitive decline later in life. It is even possible that individuals will obtain health forecasts directly from commercial entities that bought their data. Imagine obtaining the news that you are at risk of dementia through an electronic advertisement urging you to buy memory-enhancing products.

When it comes to genetic testing, patients are advised to seek genetic counseling so that they can thoughtfully decide whether to be tested and better understand test results. By contrast, we do not have AI counselors who provide similar services to patients.

Yet another concern relates to the doctor-patient relationship. Will AI diminish the role of doctors? Will computers be the ones to make predictions, diagnoses and treatment suggestions, so that doctors simply implement the computers’ instructions? How will patients feel about their doctors if computers have a greater say in making medical determinations?

These concerns are exacerbated by the fact that AI predictions are far from infallible. Many factors can contribute to errors. If the data used to develop an algorithm are flawed – for instance, if they use medical records that contain errors – the algorithm’s output will be incorrect. Therefore, patients may suffer discrimination or psychological harm when in fact they are not at risk of the predicted ailments.

A call for caution

What can be done to protect the American public? I have argued in past work for the expansion of the HIPAA Privacy Rule so that it covers anyone who handles health information for business purposes. Privacy protections should apply not only to health care providers and insurers, but also to commercial enterprises. I have also argued that Congress should amend the Americans with Disabilities Act to prohibit discrimination based on forecasts of future diseases.

Physicians who provide patients with AI predictions should ensure that they are thoroughly educated about the pros and cons of such forecasts. Experts should counsel patients about AI just as trained professionals do about genetic testing.

The prospect of AI can over-awe people. Yet, to ensure that AI truly promotes patient welfare, physicians, researchers and policymakers must recognize its risks and proceed with caution.


Sharona Hoffman, Professor of Health Law and Bioethics, Case Western Reserve University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Tags: AI, AI ethics, artificial intelligence, medicine