Trust is the foundation of cooperation, trade, and enterprise decision-making. In the digital age, trust is established through signatures, voices, and virtual interactions. But as deepfake technology rapidly advances, that trust erodes, creating new risks that bypass decades of cybersecurity investment.
In this episode of The AI Forecast, Paul Muller speaks with Jim Brennan, Chief Product and Technical Officer at GetReal Security, about how AI-powered authenticity threats change the enterprise security equation. Their conversation reveals why deepfakes are the new face of social engineering, why technology—not the human eye—must lead the defense, and how leaders can protect their businesses and people.
Paul: Decades of digital transformation gave us the ability to collaborate instantly. But now the very thing we rely on—the little window on our screens—has become the new attack surface. If I can’t trust what I see, the only fallback is expensive, slow, physical interactions.
Jim: A CIO told me, ‘This little window is where I run my business and now, I can’t trust anything coming through it.’ That’s profound. The human eye can’t detect this level of sophistication. Most people are guessing 50/50. That’s why technology, not instinct, has to lead the defense.
Trust fuels cooperation, and cooperation powers business. But deepfakes undermine that trust at its most personal level: the daily conversations and video calls leaders depend on. Jim describes this as a new human-facing interaction layer, which he calls the “display layer” and Paul jokingly dubbed “Layer 8,” an entirely new attack surface. Unlike the territory patrolled by firewalls and intrusion detection systems, this layer is human rather than technical: the very medium executives use to communicate and make decisions is now open to manipulation.
Paul: Do boards risk dismissing deepfakes as something that could never happen to them?
Jim: It only takes seeing it once to believe it’s real. However, the real challenge is showing boards what it means for their business. If you lean on big sensational stories, they may shrug them off. The reality is that smaller, everyday incidents are already happening, and those resonate far more.
He points to fraudulent hiring as a prime example. Attackers are using deepfakes to impersonate candidates and slip through HR processes. Sometimes the motive is simple financial gain, like pocketing a sign-on bonus. Other times, it’s far more serious: nation-state actors planting impostors inside companies for espionage or large-scale fraud.
Jim: In the last three months, every Fortune 500 and 1000 company I’ve spoken to has told us it’s having issues with fraudulent hiring. HR teams aren’t built to think like attackers, making hiring an easy target.
Paul: We’ve always used technology to fight technology—firewalls, antivirus, intrusion detection. Can we do the same against deepfakes?
Jim: You can’t simply train your way out of this problem. Standing up a black-box model and feeding it real and fake examples won’t cut it. The better approach is to use digital forensics to study the artifacts deepfakes leave behind, whether it’s facial distortions, audio noise, or lighting inconsistencies, and then use machine learning to find those signals at scale.
Jim explained that effective defenses must go beyond generic AI, getting “under the covers” of generation tools to identify subtle traces and artifacts. Practically, enterprises can deploy these protections through APIs from platforms like Zoom or Teams, avoiding endpoint installs and keeping defenses scalable. At the same time, awareness is critical—webinars, demos, and simulations give employees the context to pause and think before acting. Technology and training form the two layers needed to protect digital trust.
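The forensic approach Jim describes can be illustrated with a toy sketch. This is not GetReal Security’s actual method; it is a hypothetical example (invented feature names, toy thresholds, 4×4 grayscale “frames”) of the general idea: compute hand-crafted artifact features, such as texture smoothness and lighting consistency, rather than feeding raw frames to a black-box model.

```python
# Hypothetical sketch of artifact-based scoring, assuming simple
# hand-crafted features; real systems use far richer forensics and ML.

def high_freq_energy(row):
    # Mean squared pixel-to-pixel difference; generated faces are often
    # over-smoothed, so unusually LOW energy can be a red flag.
    return sum((b - a) ** 2 for a, b in zip(row, row[1:])) / max(len(row) - 1, 1)

def lighting_gradient(frame):
    # Change in mean brightness between adjacent rows; an abrupt jump
    # suggests a lighting inconsistency.
    means = [sum(r) / len(r) for r in frame]
    return [b - a for a, b in zip(means, means[1:])]

def artifact_score(frame, smooth_thresh=5.0, light_thresh=40.0):
    # Combine the two artifact checks into a suspicion score in [0, 1].
    flags = 0
    avg_energy = sum(high_freq_energy(r) for r in frame) / len(frame)
    if avg_energy < smooth_thresh:                      # suspiciously smooth texture
        flags += 1
    if any(abs(g) > light_thresh for g in lighting_gradient(frame)):
        flags += 1                                      # abrupt lighting jump
    return flags / 2

# Toy 4x4 grayscale "frames": one noisy/natural, one flat and over-smoothed.
natural = [[10, 40, 5, 60], [70, 15, 55, 20], [30, 80, 10, 65], [50, 5, 75, 25]]
smooth = [[50, 50, 50, 50]] * 4
print(artifact_score(natural))  # 0.0 – no artifact flags
print(artifact_score(smooth))   # 0.5 – over-smoothed texture flagged
```

The design point is the one Jim makes: each feature is an interpretable forensic signal, and machine learning’s role is to weight and scale such signals, not to replace them with an opaque real-vs-fake guess.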
Jim: We live in an age where you can’t trust anything in this window or screen. That calls for new policies for organizations, and new ways of operating as well.
The threat landscape has shifted. Deepfakes are not just a futuristic risk. They are here, undermining both enterprise decision-making and personal safety. From fraudulent hires to AI-cloned ransom calls, digital trust is no longer guaranteed.
The path forward is threefold: deploy forensic, technology-led detection; build employee awareness through demos and simulations; and adopt new policies and ways of operating for an era when seeing is no longer believing.
Catch the whole conversation with Jim Brennan on The AI Forecast on Spotify, Apple Podcasts, and YouTube.