To Spot a Fake, You Must First Build One
I wondered if my parents would detect the voice on the line. It was mine, yet not me. It said hello, asked my dad how he was. He paused. 'What is that, Gaby?' he asked. He knew. I admitted it was a trick. 'It didn’t work,' he confirmed. 'It sounded like a robot.'
That robotic voice was built by Reality Defender, a company in the emerging field of synthetic media detection. As tools for creating fake audio and video become commonplace, a counter-industry has formed. Firms like Reality Defender, Pindrop, and GetReal apply machine learning to uncover digital forgeries, a deepfake-detection market estimated at $5.5 billion. Their method is counterintuitive: to identify a deepfake, you must first learn to generate one.
The misuse of this technology is widespread. Beyond memes, it fuels fraud, harassment, and political disinformation. Scammers clone voices for fake ransom calls. Corporate deepfake fraud has become industrialized, with one study noting businesses lose an average of $450,000 per incident.
Reality Defender’s approach trains AI against AI. 'Our foundational model uses a student-teacher paradigm,' explained CTO Alex Lisle. 'We show it real media and fake media, and it learns the difference.' For my test, they fine-tuned a model using scant public audio of me speaking Spanish. The result was a functional, if impersonal, conversational agent. With more data—like the English version used on my brother—the mimicry improved, but it still wasn’t perfect for those who know us best.
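The "student-teacher paradigm" Lisle mentions is a standard machine-learning idea: a capable teacher model produces soft probability labels, and a smaller student model is trained to match them. Reality Defender's actual architecture and data are not public, so the sketch below is purely illustrative, with invented one-dimensional "audio features" standing in for real media and a hand-set teacher standing in for a pretrained detector:

```python
# Minimal sketch of student-teacher (knowledge-distillation) training for
# real-vs-fake classification. All features, models, and numbers here are
# invented for illustration; they do not reflect Reality Defender's system.
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy "audio features": real samples cluster near +1, fakes near -1.
real = [[random.gauss(1.0, 0.3)] for _ in range(200)]
fake = [[random.gauss(-1.0, 0.3)] for _ in range(200)]
data = [(x, 1.0) for x in real] + [(x, 0.0) for x in fake]

# "Teacher": a fixed model that already separates real from fake,
# emitting a soft probability rather than a hard 0/1 label.
def teacher(x):
    return sigmoid(4.0 * x[0])

# "Student": a one-weight logistic model trained to imitate the
# teacher's soft outputs (the distillation step).
w, b, lr = 0.0, 0.0, 0.5
for _ in range(300):
    for x, _ in data:
        t = teacher(x)               # soft target from the teacher
        s = sigmoid(w * x[0] + b)    # student's current prediction
        grad = s - t                 # cross-entropy gradient w.r.t. the logit
        w -= lr * grad * x[0]
        b -= lr * grad

# The student, trained only on soft labels, should still recover the
# hard real/fake decision boundary.
correct = sum((sigmoid(w * x[0] + b) > 0.5) == (y == 1.0) for x, y in data)
accuracy = correct / len(data)
print(f"student accuracy: {accuracy:.2f}")
```

The point of the pattern is that the student never sees ground-truth labels directly; it learns the teacher's judgment, which is how a large detector's knowledge can be compressed into a model cheap enough to deploy widely.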
The real battleground is enterprise. 'How do my institutions know it’s me?' is the question driving adoption, said Nicholas Holland of Pindrop. Companies face fake job applicants and sophisticated phishing where fraudsters impersonate entire teams using digital masks. The old detection tricks, like asking someone to hold fingers to their face, no longer work.
Lisle describes a broken 'trust boundary' we’ve relied on for millennia: seeing and hearing is believing. Hackers now exploit that. Detection tools are currently for organizations with resources and high stakes—banks, not individuals. Consumer solutions aren’t yet viable, partly due to awareness gaps. Reality Defender envisions a future where detection, like antivirus software, is embedded in the platforms we use, scanning before content reaches us.
My family wasn’t fooled. But as the technology advances, the voice pleading for help in a fake kidnapping call might be convincing enough. That’s the fear pushing this industry forward: building fakes to fortify our reality.
Source: The Verge