Opinion

How To Tell If A Photo’s Fake? You Probably Can’t. That’s Why New Rules Are Needed

By Martin Bekker | 2025-05-09
[Image: Deepfake. AI-generated with Freepik]

The problem is simple: it’s hard to know whether a photo’s real or not anymore. Photo manipulation tools are so good, so common and easy to use, that a picture’s truthfulness is no longer guaranteed.

The situation got trickier with the uptake of generative artificial intelligence. Anyone with an internet connection can cook up just about any image, plausible or fantasy, with photorealistic quality, and present it as real. This affects our ability to discern truth in a world increasingly influenced by images.

I teach and research the ethics of artificial intelligence (AI), including how we use and understand digital images.

Many people ask how we can tell whether an image has been changed, but detecting manipulation is fast becoming too difficult. Instead, I suggest a system in which the creators and users of images openly state what changes they have made. Any similar system would do, but new rules are needed if AI images are to be deployed ethically – at least among those who want to be trusted, especially the media.

Doing nothing isn’t an option, because what we believe about media affects how much we trust each other and our institutions. There are several ways forward. Clear labelling of photos is one of them.

Deepfakes and fake news

Photo manipulation was once the preserve of government propaganda teams, and later, expert users of Photoshop, the popular software for editing, altering or creating digital images.

Today, digital photos are automatically subjected to colour-correcting filters on phones and cameras. Some social media tools automatically “prettify” users’ pictures of faces. Is a photo taken of oneself (a selfie) even real anymore?

The basis of shared social understanding and consensus – trust regarding what one sees – is being eroded. This is accompanied by the apparent rise of untrustworthy (and often malicious) news reporting. We have new language for the situation: fake news (false reporting in general) and deepfakes (deliberately manipulated images, whether for waging war or garnering more social media followers).

Misinformation campaigns using manipulated images can sway elections, deepen divisions and even incite violence. Scepticism towards trustworthy media has untethered ordinary people from fact-based accounts of events, and has fuelled conspiracy theories and fringe groups.

Ethical questions

A further problem for producers of images (personal or professional) is the difficulty of knowing what’s permissible. In a world of doctored images, is it acceptable to prettify yourself? How about editing an ex-partner out of a picture and posting it online?

Would it matter if a well-respected Western newspaper used AI to publish a photo of Russian president Vladimir Putin pulling a face in disgust (an expression he has surely made at some point, but of which, say, no actual image has been captured)?

The ethical boundaries blur further in highly charged contexts. Does it matter that opposition political ads in the US deliberately darkened then-presidential candidate Barack Obama’s skin?

Would generated images of dead bodies in Gaza be more palatable, perhaps more moral, than actual photographs of dead humans? Is a magazine cover showing a model digitally altered to unattainable beauty standards, while not declaring the level of photo manipulation, unethical?

A fix

Part of the solution to this social problem demands two simple and clear actions. First, declare that photo manipulation has taken place. Second, disclose what kind of photo manipulation was carried out.

The first step is straightforward: in the same way pictures are published with author credits, a clear and unobtrusive “enhancement acknowledgement” or EA should be added to caption lines (for example: “Photo: Jane Doe. EA: E”).

The second step concerns disclosing what kind of alteration has been made. Here I propose five “categories of manipulation” (not unlike a film rating system). Accountability and clarity create an ethical foundation.

The five categories could be:

C – Corrected

Edits that preserve the essence of the original photo while refining its overall clarity or aesthetic appeal – like adjustments to colour balance or contrast, or corrections for lens distortion. Such corrections are often automated (for instance by smartphone cameras) but can be performed manually.

E – Enhanced

Alterations that are mainly about colour or tone adjustments. This extends to slight cosmetic retouching, like the removal of minor blemishes (such as acne) or the artificial addition of makeup, provided the edits don’t reshape physical features or objects. This includes all filters involving colour changes.

B – Body manipulated

This is flagged when a physical feature is altered. Changes in body shape, like slimming arms or enlarging shoulders, or the altering of skin or hair colour, fall under this category.

O – Object manipulated

This declares that the physical position of an object has been changed. A finger or limb moved, a vase added, a person edited out, a background element added or removed.

G – Generated

Entirely fabricated yet photorealistic depictions, such as a scene that never existed, must be flagged here. So, all images created digitally, including by generative AI, but limited to photographic depictions. (An AI-generated cartoon of the pope would be excluded, but a photo-like picture of the pontiff in a puffer jacket is rated G.)

The suggested categories are value-blind: they are (or ought to be) triggered simply by the occurrence of any manipulation. So, colour filters applied to an image of a politician trigger an E category, whether the alteration makes the person appear friendlier or scarier. A critical feature for accepting a rating system like this is that it is transparent and unbiased.

The CEBOG categories above aren’t fixed, and there may be overlap: B (Body manipulated) might often imply E (Enhanced), for example.
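
To make the scheme concrete, here is a minimal sketch in Python of how an EA tag might be modelled and composed for a caption line. The class and function names are illustrative assumptions on my part, not part of any existing standard.

from enum import Enum

class Manipulation(Enum):
    # The five CEBOG categories described above.
    CORRECTED = "C"           # colour balance, contrast, lens distortion
    ENHANCED = "E"            # tone adjustments, minor retouching, colour filters
    BODY_MANIPULATED = "B"    # physical features reshaped, skin or hair colour altered
    OBJECT_MANIPULATED = "O"  # objects moved, added or removed
    GENERATED = "G"           # photorealistic images created digitally, e.g. by AI

def enhancement_acknowledgement(categories):
    # Compose the caption-line EA tag; categories may overlap (e.g. B often implies E).
    if not categories:
        return "EA: none"
    return "EA: " + "".join(sorted(c.value for c in categories))

# Example: a portrait that was colour-filtered and had an object edited out.
print(enhancement_acknowledgement({Manipulation.ENHANCED, Manipulation.OBJECT_MANIPULATED}))
# prints "EA: EO"

An editor could then print this tag alongside the usual photographer credit.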

Feasibility

Responsible photo manipulation software could automatically indicate to users the class of manipulation carried out. If needed, it could watermark the image, or simply capture the category in the picture’s metadata (alongside data about the source, owner or photographer). Automation could well ensure ease of use and perhaps reduce human error, encouraging consistent application across platforms.
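
As a rough illustration of the metadata route, the following sketch uses the Pillow imaging library to write and read an EA tag as a PNG text chunk. The field name “EnhancementAcknowledgement” is a made-up example for this sketch, not an existing metadata standard.

from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_ea(src_path, dst_path, ea_tag):
    # Re-save the image with the EA tag recorded in a tEXt metadata chunk.
    img = Image.open(src_path)
    info = PngInfo()
    info.add_text("EnhancementAcknowledgement", ea_tag)  # hypothetical field name
    img.save(dst_path, pnginfo=info)

def read_ea(path):
    # Return the EA tag, or None if the image carries no declaration.
    return Image.open(path).text.get("EnhancementAcknowledgement")

save_with_ea("original.png", "labelled.png", "EA: EO")
print(read_ea("labelled.png"))  # prints "EA: EO"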

Of course, displaying the rating will ultimately be an editorial decision, and good users, like good editors, will do this responsibly, hopefully maintaining or improving the reputation of their images and publications. While one would hope that social media would buy into this kind of editorial ideal and encourage labelled images, much room for ambiguity and deception remains.

The success of an initiative like this hinges on technology developers, media organisations and policymakers collaborating to create a shared commitment to transparency in digital media.

Martin Bekker, Computational Social Scientist, University of the Witwatersrand

This article is republished from The Conversation under a Creative Commons license. Read the original article.
