Can Employees Spot Deepfakes?
How accurate are employees at detecting deepfakes?
Employees correctly identify deepfakes 38% of the time. This means 62% of AI-generated impersonations go undetected when organisations rely on human visual or auditory detection.
This finding comes from Breacher.ai red team assessments across 300 targets, as discussed in the webinar embedded below. We test organisations using live voice cloning, deepfake video calls, and multi-channel attack simulations. The detection rate measures how often targets correctly identified that they were interacting with AI-generated content before completing the attacker’s objective.
Key insight: Training employees to spot deepfakes visually is a failed approach. Defence strategies must shift from detection to verification.
Who should read this research?
This research is for security leaders, awareness training managers, and CISOs who are building defences against AI-powered social engineering. It addresses three questions: Can employees be trained to detect deepfakes? Which factors affect detection accuracy? What should organisations do instead of relying on human detection?
What does the detection research show?
Breacher.ai assessments measure human detection across voice cloning, deepfake video, and AI-generated text. The following data represents findings from 300 targets across finance, technology, manufacturing, and professional services.
What is the overall detection accuracy rate?
| Metric | Finding |
| --- | --- |
| Overall deepfake detection accuracy | 38% |
| Failure rate (undetected deepfakes) | 62% |
Source: Breacher.ai red team assessments across 300 targets. See webinar below for full methodology.
Does training improve detection rates?
Organisations that implement awareness training programmes show approximately 35% better performance against deepfake social engineering compared to organisations without structured training, according to Breacher.ai assessment data. However, even with training, detection remains unreliable.
This confirms that while training provides measurable improvement, human detection alone is not a scalable defence. The gap between trained and untrained employees narrows as deepfake quality improves.
Why does visual deepfake detection fail?
Many awareness programmes teach employees to look for visual artefacts: unnatural blinking, lighting inconsistencies, or audio sync issues. Breacher.ai testing data shows this approach fails for three reasons.
Why is deepfake quality improving faster than detection training?
Generation quality improves monthly. According to industry analysis, 68% of deepfakes are now indistinguishable from genuine media. Artefacts that were detectable six months ago are now invisible. Training based on identifying flaws becomes obsolete faster than it can be deployed.
Why do cognitive biases override visual cues?
When an employee sees their CEO on a video call making an urgent request, authority bias and urgency override analytical thinking. Even employees who notice something seems “off” often comply because the social pressure outweighs uncertainty. Research shows 68% of workers knowingly break security policy despite training.
Why do real-world conditions make detection harder?
Laboratory detection studies use optimal viewing conditions. Real attacks happen on mobile phones with poor lighting, compressed video calls, and ambient noise. These conditions mask the artefacts training teaches employees to spot.
What should organisations do instead of detection training?
Given that human detection fails more often than it succeeds, organisations should shift focus from detection to verification.
How should verification replace detection?
Train employees to verify requests through out-of-band channels, regardless of how convincing the communication appears. Establish callback procedures using known numbers, not numbers provided in the request. Create pause protocols for high-risk requests including wire transfers, access changes, and data sharing.
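To make the pattern concrete, here is a minimal Python sketch of a callback verification step. The directory, contact details, and action names are hypothetical placeholders; the point it illustrates is that the callback number comes from an internal source, never from the request itself.

```python
# Illustrative sketch of an out-of-band callback check (hypothetical data).
# The employee never trusts contact details supplied in the request itself.

KNOWN_CONTACTS = {
    # Directory of verified numbers, maintained independently of any
    # inbound communication channel.
    "cfo@example.com": "+44 20 7946 0000",
}

HIGH_RISK_ACTIONS = {"wire_transfer", "access_change", "data_share"}


def requires_pause(action: str) -> bool:
    """High-risk requests trigger the pause protocol before any action."""
    return action in HIGH_RISK_ACTIONS


def handle_request(requester: str, action: str) -> str:
    if not requires_pause(action):
        return "proceed"
    # Look up the known-good number; an attacker controls the request
    # channel, so a number quoted in the request is never used.
    number = KNOWN_CONTACTS.get(requester)
    if number is None:
        return "escalate"  # no verified channel exists; do not proceed
    return f"pause: verify by calling {number} before acting"
```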
How should process controls support verification?
Require dual authorisation for transactions above defined thresholds. Implement mandatory waiting periods for urgent requests. Build verification steps into workflows so they happen automatically, not as exceptions.
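A minimal sketch of how these controls might be expressed as policy checks. The threshold and waiting period below are placeholder values for illustration, not recommendations:

```python
from datetime import datetime, timedelta

# Placeholder policy values; set these to match your own risk appetite.
DUAL_AUTH_THRESHOLD = 10_000          # transactions above this need two approvers
URGENT_WAIT = timedelta(hours=4)      # mandatory cooling-off for "urgent" requests


def transaction_approved(amount: float, approvers: set[str]) -> bool:
    """Dual authorisation: large transactions need two distinct approvers."""
    required = 2 if amount > DUAL_AUTH_THRESHOLD else 1
    return len(approvers) >= required


def urgent_request_ready(submitted_at: datetime) -> bool:
    """Urgency is the attacker's lever; a fixed waiting period removes it."""
    return datetime.now() - submitted_at >= URGENT_WAIT
```

Encoding the rules this way means the pause happens automatically in the workflow, rather than depending on an employee remembering to apply it under pressure.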
How should regular testing validate controls?
Test verification workflows with realistic AI social engineering simulations. Measure whether employees follow verification procedures, not whether they detect deepfakes. Track process compliance rates as the key metric, replacing detection accuracy.
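One way to score simulations against that metric, with illustrative field names: a target who follows the verification procedure passes, whether or not they consciously detected the deepfake.

```python
# Illustrative scoring of simulation results: the metric is process
# compliance (did the target verify out-of-band?), not detection.

simulations = [
    {"target": "finance-01", "verified_out_of_band": True,  "detected_fake": False},
    {"target": "hr-03",      "verified_out_of_band": False, "detected_fake": False},
    {"target": "it-07",      "verified_out_of_band": True,  "detected_fake": True},
]

compliant = sum(1 for s in simulations if s["verified_out_of_band"])
compliance_rate = compliant / len(simulations)

# finance-01 passes despite never identifying the content as AI-generated,
# because the callback procedure was followed.
print(f"Process compliance: {compliance_rate:.0%}")  # 67%
```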
How significant is this threat?
The broader threat context reinforces why detection-based approaches fail. Deepfake fraud in North America grew 1,740% between 2022 and 2023. According to Deloitte research, AI-enabled fraud is projected to reach $40 billion annually by 2027. The Verizon Data Breach Investigations Report found that 60% of breaches involve the human element.
Voice cloning technology now requires just 3-5 seconds of audio to create a convincing clone. This means any executive with public speaking footage, podcast appearances, or investor calls is vulnerable to voice impersonation attacks.
Frequently Asked Questions
Will detection accuracy improve as employees gain experience?
Breacher.ai data shows marginal improvement with exposure. However, deepfake quality improves faster than human detection capability. The gap will widen, not close.
Should we stop teaching employees about deepfakes entirely?
No. Employees should understand that deepfakes exist and that they cannot reliably detect them. This awareness supports the shift to verification-based defences. The message is: “Don’t try to detect—verify instead.”
Are some employees naturally better at detecting deepfakes?
Individual variation exists, but no reliable predictors identify high-detection individuals. More importantly, even the best performers only marginally exceed the 38% average, still failing more often than they succeed.
Can technology detect deepfakes more reliably than humans?
Deepfake detection technology shows promise but also has limitations. Detection tools engage in an arms race with generation tools. Technology should supplement verification processes, not replace them.
Which departments are most at risk?
Breacher.ai assessments show finance departments have the highest click-through rates for deepfake social engineering, followed by human resources. Manufacturing organisations showed the highest overall risk across industries tested. However, risk varies significantly between organisations.
What are the key findings from this research?
Based on Breacher.ai red team assessments: Human detection of deepfakes fails 62% of the time. Training provides approximately 35% improvement but detection remains unreliable. Finance and HR departments show elevated risk. Manufacturing organisations showed highest risk among industries tested.
The implication is clear: organisations cannot rely on employees to spot deepfakes. Defence strategies must focus on verification processes, procedural controls, and regular testing of those controls against realistic attack simulations.
Next Step
Breacher.ai assessments measure how your organisation’s verification processes perform against live AI social engineering attacks. We test people, processes, technology, and workflows—providing ground truth on your actual resilience.
Contact us at breacher.ai to schedule a strategic consultation.