Trending News

Deepfake Audio Has A Tell – Researchers Use Fluid Dynamics To Spot Artificial Imposter Voices

By The Conversation | 2022-09-22 | 5 min read
[Image: With deepfake audio, that familiar voice on the other end of the line might not even be human, let alone the person you think it is. Knk Phl Prasan Kha Phibuly/EyeEm via Getty Images]

Imagine the following scenario. A phone rings. An office worker answers it and hears his boss, in a panic, tell him that she forgot to transfer money to the new contractor before she left for the day and needs him to do it. She gives him the wire transfer information, and with the money transferred, the crisis has been averted.

The worker sits back in his chair, takes a deep breath, and watches as his boss walks in the door. The voice on the other end of the call was not his boss. In fact, it wasn’t even a human. The voice he heard was that of an audio deepfake, a machine-generated audio sample designed to sound exactly like his boss.

Attacks like this using recorded audio have already occurred, and conversational audio deepfakes might not be far off.

Deepfakes, both audio and video, have been possible only with the development of sophisticated machine learning technologies in recent years. Deepfakes have brought with them a new level of uncertainty around digital media. To detect deepfakes, many researchers have turned to analyzing visual artifacts – minute glitches and inconsistencies – found in video deepfakes.

[Video: This is not Morgan Freeman, but if you weren’t told that, how would you know?]

Audio deepfakes potentially pose an even greater threat, because people often communicate verbally without video – for example, via phone calls, radio and voice recordings. These voice-only communications greatly expand the possibilities for attackers to use deepfakes.

To detect audio deepfakes, we and our research colleagues at the University of Florida have developed a technique that measures the acoustic and fluid dynamic differences between voice samples created organically by human speakers and those generated synthetically by computers.

Organic vs. synthetic voices

Humans vocalize by forcing air over the various structures of the vocal tract, including vocal folds, tongue and lips. By rearranging these structures, you alter the acoustical properties of your vocal tract, allowing you to create over 200 distinct sounds, or phonemes. However, human anatomy fundamentally limits the acoustic behavior of these different phonemes, resulting in a relatively small range of correct sounds for each.

[Video: How your vocal organs work.]

In contrast, audio deepfakes are created by first allowing a computer to listen to audio recordings of a targeted victim speaker. Depending on the exact techniques used, the computer might need to listen to as little as 10 to 20 seconds of audio. This audio is used to extract key information about the unique aspects of the victim’s voice.

The attacker selects a phrase for the deepfake to speak and then, using a modified text-to-speech algorithm, generates an audio sample that sounds like the victim saying the selected phrase. This process of creating a single deepfaked audio sample can be accomplished in a matter of seconds, potentially allowing attackers enough flexibility to use the deepfake voice in a conversation.

Detecting audio deepfakes

The first step in differentiating speech produced by humans from speech generated by deepfakes is understanding how to acoustically model the vocal tract. Luckily, scientists have techniques to estimate what someone – or some being such as a dinosaur – would sound like based on anatomical measurements of its vocal tract.

We did the reverse. By inverting many of these same techniques, we were able to extract an approximation of a speaker’s vocal tract during a segment of speech. This allowed us to effectively peer into the anatomy of the speaker who created the audio sample.
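The article does not publish the researchers' exact inversion pipeline, but a classical way to "peer into" a speaker's anatomy from audio is linear predictive coding (LPC): the reflection coefficients produced by the Levinson-Durbin recursion map onto the area ratios of a concatenated lossless-tube model of the vocal tract (Wakita's method). The sketch below is a minimal, illustrative stand-in for that general idea, not the authors' implementation; the toy "speech" frame and the analysis order of 10 are invented for the example.

```python
import math
import random

def autocorrelation(frame, order):
    """r[k] = sum_n frame[n] * frame[n + k], for lags k = 0..order."""
    n = len(frame)
    return [sum(frame[i] * frame[i + k] for i in range(n - k))
            for k in range(order + 1)]

def reflection_coefficients(r):
    """Levinson-Durbin recursion over autocorrelations r[0..p];
    returns the reflection (PARCOR) coefficients k_1..k_p."""
    order = len(r) - 1
    a = [1.0] + [0.0] * order   # prediction polynomial coefficients
    err = r[0]                  # residual prediction-error energy
    ks = []
    for m in range(1, order + 1):
        acc = r[m] + sum(a[i] * r[m - i] for i in range(1, m))
        k = -acc / err
        ks.append(k)
        new_a = a[:]
        for i in range(1, m):
            new_a[i] = a[i] + k * a[m - i]
        new_a[m] = k
        a = new_a
        err *= 1.0 - k * k
    return ks

def tube_areas(ks, lip_area=1.0):
    """Map reflection coefficients to successive cross-sectional areas
    of a lossless concatenated-tube vocal tract model (Wakita-style
    inversion; sign conventions differ between texts)."""
    areas = [lip_area]
    for k in ks:
        areas.append(areas[-1] * (1.0 + k) / (1.0 - k))
    return areas

# Toy "speech" frame: a damped sinusoid plus a little noise,
# standing in for one short analysis window of real audio.
random.seed(0)
frame = [0.99 ** n * math.sin(0.3 * n) + 0.05 * random.gauss(0.0, 1.0)
         for n in range(240)]

ks = reflection_coefficients(autocorrelation(frame, order=10))
areas = tube_areas(ks)
```

In a real detector, this kind of inversion would run over many consecutive frames of speech, and the resulting area profiles would be compared against the range of shapes human anatomy can actually produce.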

[Figure: Line drawing of two vocal tract reconstructions, one wider and more variable than the other. Deepfaked audio often results in vocal tract reconstructions that resemble drinking straws rather than biological vocal tracts. Logan Blue et al., CC BY-ND]

From here, we hypothesized that deepfake audio samples would fail to be constrained by the same anatomical limitations humans have. In other words, when deepfaked audio samples are analyzed, these techniques would estimate vocal tract shapes that do not exist in people.

Our testing results not only confirmed our hypothesis but revealed something interesting. When extracting vocal tract estimations from deepfake audio, we found that the estimations were often comically incorrect. For instance, it was common for deepfake audio to result in vocal tracts with the same relative diameter and consistency as a drinking straw, in contrast to human vocal tracts, which are much wider and more variable in shape.

This realization demonstrates that deepfake audio, even when convincing to human listeners, is far from indistinguishable from human-generated speech. By estimating the anatomy responsible for creating the observed speech, it’s possible to identify whether the audio was generated by a person or a computer.
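The "drinking straw" tell can be caricatured as a single statistic: the relative variability of the estimated cross-sectional areas along the tract, with suspiciously uniform tracts flagged as synthetic. The area values and the 0.2 threshold below are invented purely for illustration; the actual detector in the research is far more sophisticated than this toy rule.

```python
import statistics

def straw_score(areas):
    """Relative variability (coefficient of variation) of the estimated
    cross-sectional areas along a vocal tract. A low score means the
    tract is nearly uniform -- straw-like."""
    return statistics.pstdev(areas) / statistics.fmean(areas)

def looks_synthetic(areas, threshold=0.2):
    """Flag a tract whose areas are suspiciously uniform.
    The threshold is invented for this toy example."""
    return straw_score(areas) < threshold

# Hypothetical area estimates (arbitrary units) along two tracts:
human_like = [1.2, 3.8, 5.5, 2.1, 4.9, 1.6, 0.9]        # wide, variable
straw_like = [0.30, 0.31, 0.29, 0.30, 0.32, 0.31, 0.30]  # uniform
```

Run on these toy profiles, `looks_synthetic(straw_like)` fires while `looks_synthetic(human_like)` does not, mirroring the paper's observation that deepfake reconstructions lack the width and variability of biological tracts.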

Why this matters

Today’s world is defined by the digital exchange of media and information. Everything from news to entertainment to conversations with loved ones typically happens via digital exchanges. Even in their infancy, deepfake video and audio undermine the confidence people have in these exchanges, effectively limiting their usefulness.

If the digital world is to remain a critical resource for information in people’s lives, effective and secure techniques for determining the source of an audio sample are crucial.

Logan Blue, PhD student in Computer & Information Science & Engineering, University of Florida, and Patrick Traynor, Professor of Computer and Information Science and Engineering, University of Florida

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Tags: AI, artificial intelligence, deepfake, digital media, fluid dynamics, technology