TechFinancials
Opinion

Text-To-Image AI: Powerful, Easy-To-Use Technology For Making Art – And Fakes

By The Conversation | 2022-12-06 | 6 min read
A synthetic image generated by mimicking real faces, left, and a synthetic face generated from the text prompt ‘a photo of a 50-year man with short black hair,’ right. Hany Farid using StyleGAN2 (left) and DALL-E (right), CC BY-ND

Type “Teddy bears working on new AI research on the moon in the 1980s” into any of the recently released text-to-image artificial intelligence image generators, and after just a few seconds the sophisticated software will produce an eerily pertinent image.

Seemingly bound by only your imagination, this latest trend in synthetic media has delighted many, inspired others and struck fear in some.

Google, research firm OpenAI and AI vendor Stability AI have each developed a text-to-image generator powerful enough that some observers are questioning whether in the future people will be able to trust the photographic record.

This image was generated from the text prompt ‘Teddy bears working on new AI research on the moon in the 1980s.’ Hany Farid using DALL-E, CC BY-ND

As a computer scientist who specializes in image forensics, I have been thinking a lot about this technology: what it is capable of, how each of the tools has been rolled out to the public, and what lessons can be learned as this technology continues its ballistic trajectory.

Adversarial approach

Although their digital precursor dates back to 1997, the first synthetic images splashed onto the scene just five years ago. In their original incarnation, so-called generative adversarial networks (GANs) were the most common technique for synthesizing images of people, cats, landscapes and anything else.

A GAN consists of two main parts: a generator and a discriminator. Each is a type of large neural network, which is a set of interconnected processors roughly analogous to neurons.

Tasked with synthesizing an image of a person, the generator starts with a random assortment of pixels and passes this image to the discriminator, which determines if it can distinguish the generated image from real faces. If it can, the discriminator provides feedback to the generator, which modifies some pixels and tries again. These two systems are pitted against each other in an adversarial loop. Eventually the discriminator is incapable of distinguishing the generated image from real images.
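
The adversarial loop described above can be sketched in a few lines of code. This is a deliberately tiny one-dimensional illustration, assuming a linear generator and a logistic discriminator with hand-derived gradients; real GANs use deep networks and synthesize images, not single numbers.

```python
import numpy as np

# Toy 1-D GAN sketch (illustrative only, not any production system).
# "Real" data: samples from N(4, 0.5). Generator: g(z) = a*z + b.
# Discriminator: d(x) = sigmoid(w*x + c). Both trained with manual gradients.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a, b = 1.0, 0.0       # generator parameters (fakes start near 0)
w, c = 0.0, 0.0       # discriminator parameters
lr_d, lr_g = 0.1, 0.05

for _ in range(3000):
    real = 4.0 + 0.5 * rng.standard_normal()
    z = rng.standard_normal()
    fake = a * z + b

    # Discriminator step: push d(real) toward 1 and d(fake) toward 0.
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w -= lr_d * ((d_real - 1.0) * real + d_fake * fake)
    c -= lr_d * ((d_real - 1.0) + d_fake)

    # Generator step (non-saturating loss): push d(fake) toward 1.
    d_fake = sigmoid(w * fake + c)
    grad_fake = -(1.0 - d_fake) * w   # gradient of -log d(fake) w.r.t. the sample
    a -= lr_g * grad_fake * z
    b -= lr_g * grad_fake

# After the adversarial loop, generated samples (mean roughly b) should sit
# near the real-data mean of 4, fooling the discriminator.
```

Once the two systems reach equilibrium, the discriminator scores real and generated samples about equally, which is exactly the stopping point described above.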

Text-to-image

Just as people were starting to grapple with the consequences of GAN-generated deepfakes – including videos that show someone doing or saying something they didn’t – a new player emerged on the scene: text-to-image deepfakes.

In this latest incarnation, a model is trained on a massive set of images, each captioned with a short text description. The model progressively corrupts each image until only visual noise remains, and then trains a neural network to reverse this corruption. Repeating this process hundreds of millions of times, the model learns how to convert pure noise into a coherent image from any caption.
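
The corruption process has a convenient closed form: after t noising steps, the image is just a scaled-down copy of the original plus Gaussian noise. A minimal sketch, assuming the common linear noise schedule (the variable and function names are illustrative, not a real library's API):

```python
import numpy as np

# Forward "corruption" process of a diffusion model, as described above.
# A linear beta schedule is a common choice; by the final step almost no
# signal remains and the image is essentially pure noise.

T = 1000
betas = np.linspace(1e-4, 0.02, T)   # per-step noise variances
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)       # cumulative fraction of signal retained

def corrupt(x0, t, rng):
    """Sample the t-step-corrupted image in closed form: shrink the clean
    image by sqrt(alpha_bar[t]) and fill the rest with Gaussian noise."""
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * noise

rng = np.random.default_rng(0)
x0 = rng.standard_normal((8, 8))     # stand-in for a tiny "image"
mid = corrupt(x0, T // 2, rng)       # partly corrupted
last = corrupt(x0, T - 1, rng)       # nearly pure noise: sqrt(alpha_bar[-1]) ~ 0.006
```

Training then amounts to asking a neural network to undo one corruption step at a time; run in reverse from pure noise, with the caption steering each step, the network produces a coherent image.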

This photolike image was generated using Stable Diffusion with the prompt ‘cat wearing VR goggles.’ Screen capture by The Conversation, CC BY-ND

While GANs can only create an image of a general category, text-to-image synthesis engines are more powerful. They can create nearly any image, including ones depicting specific, complex interactions between people and objects, for instance “The president of the United States burning classified documents while sitting around a bonfire on the beach during sunset.”

OpenAI’s text-to-image generator, DALL-E, took the internet by storm when it was unveiled on Jan. 5, 2021. A beta version of the tool was made available to 1 million users on July 20, 2022. Users around the world have found seemingly endless ways to prompt DALL-E, yielding delightful, bizarre and fantastical imagery.

However, a wide range of people, from computer scientists to legal scholars and regulators, have pondered the potential misuses of the technology. Deepfakes have already been used to create nonconsensual pornography, commit small- and large-scale fraud, and fuel disinformation campaigns. These even more powerful image generators could add jet fuel to these misuses.

Three image generators, three different approaches

Aware of the potential abuses, Google declined to release its text-to-image technology. OpenAI took a more open, and yet still cautious, approach when it initially released its technology to only a few thousand users (myself included). They also placed guardrails on allowable text prompts, including no nudity, hate, violence or identifiable persons. Over time, OpenAI has expanded access, lowered some guardrails and added more features, including the ability to semantically modify and edit real photographs.
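
As a toy illustration of what a prompt guardrail can look like, the sketch below rejects prompts by keyword category before they ever reach the image model. The categories and blocklist here are hypothetical examples invented for this article; OpenAI's actual filters are not public and certainly go far beyond keyword matching, typically using trained classifiers and human review.

```python
# Hypothetical keyword-based prompt filter: a simplified stand-in for the
# guardrails described above, not any vendor's actual policy or system.

BLOCKLIST = {
    "violence": ["gore", "massacre"],
    "identifiable persons": ["president of the united states"],
}

def check_prompt(prompt: str):
    """Return (allowed, reason) using case-insensitive substring matching."""
    lowered = prompt.lower()
    for category, terms in BLOCKLIST.items():
        for term in terms:
            if term in lowered:
                return False, f"blocked: {category}"
    return True, "ok"

print(check_prompt("teddy bears working on AI research on the moon"))
# → (True, 'ok')
```

Loosening a guardrail, as OpenAI did over time, then corresponds to shrinking the blocked categories, while an open-source release with no filter at all skips this check entirely.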

Stability AI took a different approach still, opting for a full release of its Stable Diffusion with no guardrails on what can be synthesized. In response to concerns of potential abuse, the company’s founder, Emad Mostaque, said “Ultimately, it’s peoples’ responsibility as to whether they are ethical, moral and legal in how they operate this technology.”

Nevertheless, the second version of Stable Diffusion removed the ability to render NSFW content and images of children because some users had created child abuse images. Responding to charges of censorship, Mostaque pointed out that because Stable Diffusion is open source, users are free to add these features back at their discretion.

The genie is out of the bottle

Regardless of what you think of Google’s or OpenAI’s approach, Stability AI made their decisions largely irrelevant. Shortly after Stability AI’s open-source announcement, OpenAI lowered their guardrails on generating images of recognizable people. When it comes to this type of shared technology, society is at the mercy of the lowest common denominator – in this case, Stability AI.

Video: Text-to-image generators could make it easier for people to create deepfakes.

Stability AI boasts that its open approach wrestles powerful AI technology away from the few, placing it in the hands of the many. I suspect that few would be so quick to celebrate an infectious disease researcher publishing the formula for a deadly airborne virus created from kitchen ingredients, while arguing that this information should be widely available. Image synthesis does not, of course, pose the same direct threat, but the continued erosion of trust has serious consequences ranging from people’s confidence in election outcomes to how society responds to a global pandemic and climate change.

Moving forward, I believe that technologists will need to consider both the upsides and downsides of their technologies and build mitigation strategies before predictable harms occur. I and other researchers will have to continue to develop forensic techniques to distinguish real images from fakes. Regulators are going to have to start taking more seriously how these technologies are being weaponized against individuals, societies and democracies.

And everyone is going to have to learn how to become more discerning and critical about how they consume information online.

This article has been updated to correct the name of the company Stability AI, which was misidentified.

Hany Farid, Professor of Computer Science, University of California, Berkeley

This article is republished from The Conversation under a Creative Commons license. Read the original article.
