CISO Deepfake Defense Guide: Voice Fraud, Vishing & AI Social Engineering 2026
People. Process. Technology.
Why Deepfake Defense Requires All Three.
UNC1069 didn't break into its targets. It was invited in. Nation-state actors are now weaponizing employee identities, deepfake video, and AI-generated personas to bypass every technical control your organization has. Your awareness training has never actually been tested against this threat.
This Is a Human Problem. And a Trust Problem.
There is no silver bullet for deepfakes. No EDR alert fires when a threat actor impersonates your CFO on a Zoom call using AI-generated video convincing enough to deceive an experienced finance professional.
The attack surface that deepfake-enabled social engineering occupies (voice, video, messaging platforms, employee identity) sits almost entirely outside the coverage of your existing security stack. What makes this threat categorically different is not the technology. It is the trust architecture it exploits. These attacks don't impersonate strangers. They impersonate colleagues, executives, and business partners, in some cases operating from the actual compromised accounts of real people. When a trusted identity initiates contact, your employees don't apply skepticism. They respond. And that is precisely the gap that nation-state actors are walking through right now.
Deepfake fraud attempts surged 3,000% in 2023, according to Onfido's Identity Fraud Report.1 North America saw a 1,740% regional increase in deepfake incidents that same year, per Sumsub research.2 More than 10% of companies have already dealt with attempted or successful deepfake fraud, with damages from successful attacks reaching as high as 10% of annual profits.3 These attacks are no longer experimental. They are operational, industrialized, and deployed at scale by threat actors with nation-state resources, while criminal markets sell the same tooling to anyone with a motive. The question is no longer if. It is whether your people will recognize the attack when it happens.
UNC1069: What a Real Infiltration Looks Like
UNC1069 is a North Korean advanced persistent threat group tracked by Google Mandiant since 2018: financially motivated, operationally disciplined, and as of 2025 deploying weaponized AI at every stage of the attack chain.4 Understanding this group's playbook is not an academic exercise. It is a preview of the techniques your employees will face, because the TTPs are being replicated and commoditized across the broader threat ecosystem.
What distinguishes UNC1069 is not technical sophistication alone. It is the group's systematic exploitation of human trust, starting not with malware, but with identity.
Stage 1: Compromise a Trusted Identity First
UNC1069 doesn't cold-contact its targets. It first compromises the accounts of real, known executives within its target industry on the messaging platforms they use every day. When the attack initiates, it arrives from a familiar name: a colleague, a known business contact, a recognized investor, with no spoofed domain and no suspicious sender to flag. The target has no rational basis for skepticism. The message is coming from someone they already trust.
This is employee impersonation weaponized at scale. The attacker is not pretending to be an unknown party. They are operating inside the identity of a real person, using that person's established relationships and professional reputation as the attack vector. Your security awareness training almost certainly never simulated this scenario.
Stage 2: A Calendly Link. A Spoofed Room. A Deepfake CEO.
After rapport is established through the compromised account, UNC1069 sends a Calendly scheduling link, a routine business interaction that triggers no suspicion. The link routes to a spoofed Zoom page hosted on attacker-controlled infrastructure. During the call, the victim encounters an AI-generated deepfake video impersonating a CEO from a separate, legitimate company. The video is convincing enough that victims engage through an entire business meeting as though it is authentic.
Once the target is engaged, the attacker introduces a fabricated technical issue (an audio failure, a screen share glitch) and asks the victim to run diagnostic commands to resolve it. Those commands are malware installers. The victim executes them willingly, believing they are resolving a technical problem on a legitimate call. This is the ClickFix technique: social engineering a human being into self-executing the payload so no exploit is required.
Google Mandiant published its detailed UNC1069 intrusion report on February 9, 2026, confirming the group had crossed from using AI for productivity into deploying AI-generated deepfake lures in active, targeted operations.5 Seven distinct malware families, including SILENCELIFT, DEEPBREATH, and CHROMEPUSH, were deployed in a single engagement. This is not a proof-of-concept. This is operational, nation-state-grade tradecraft available in a commoditized form to any motivated attacker.
Stage 3: Infiltration That Never Looked Like an Attack
UNC1069's operational pattern extends beyond targeted intrusions into outright organizational infiltration. North Korean actors within the same Lazarus Group ecosystem (documented in the KnowBe4 incident) used AI-generated profile photos, fabricated credentials, and deepfake-supported video interviews to embed a threat actor inside an American cybersecurity firm as a full-time remote employee. The hire passed every standard screening process. The threat was not a phishing email. It was a person, or something engineered to appear as one, operating inside the organization's trust perimeter with authorized access from day one.
This is the endpoint of the deepfake infiltration trajectory: not attacks against your organization from outside, but attacks conducted from within it by identities your own hiring and onboarding processes authenticated.
- UNC1069: Deepfake CEO on Spoofed Zoom, February 2026. Attacker compromised a real executive's messaging account, scheduled a meeting via Calendly, and deployed a deepfake CEO during a fake Zoom call. The victim self-executed malware via the ClickFix technique. Seven malware families were deployed in one intrusion. Confirmed by Google Mandiant, February 9, 2026.5
- UNC1069 / Lazarus: Axios npm Supply Chain Compromise, March 2026. Formally attributed by Google's Threat Intelligence Group on March 31, 2026.6 A library with 100M+ weekly downloads was backdoored, with potential exposure of AWS and GitHub credentials across hundreds of thousands of developer environments. Social engineering was the initial access method.
- KnowBe4: North Korean Actor Hired as a Full-Time Employee. A North Korean threat actor used an AI-generated photo, fabricated work history, and deepfake-assisted video interviews to pass KnowBe4's hiring process and gain authorized insider access. Detected not by security screening, but by post-hire behavioral anomalies. Publicly disclosed by KnowBe4, July 2024.7
- Arup: $25 Million Deepfake Video Conference Heist. Attackers staged a live video call using simultaneous deepfake likenesses of multiple senior executives. A finance professional authorized a $25 million wire transfer after the synthetic CFO and colleagues appeared on screen and directed it. No technical control intervened. Reported February 2024.8
Why This Threat Evades Every Defense You Have
The deepfake social engineering threat is structurally resistant to the defensive stack most enterprises have built. This is not a flaw in your tools. It is a category mismatch. Your tools were built for a different threat model, and the attackers know it.
Your email gateway doesn't inspect a direct message sent through a compromised account. Your EDR doesn't monitor a phone call. Your SIEM doesn't log a Zoom meeting held on a spoofed domain behind a convincing-looking URL. Your MFA doesn't protect against an employee who has already been socially engineered into authorizing a transaction, running a diagnostic command, or granting remote access, because from the system's perspective, that employee authenticated correctly and acted of their own volition.
The attack succeeds not by breaking your controls but by operating entirely in the layer they do not reach: human judgment under social pressure from a trusted identity.
The conventional advice to look for lip-sync errors, unnatural blinking, and lighting artifacts is already obsolete. UNC1069's deepfake video in February 2026 was convincing enough that an experienced professional engaged with it through an entire business meeting. Real-time generation quality will continue to close the remaining gap. The visual tells will not exist. The only durable defense is behavioral: your people knowing how to respond when trust is being weaponized against them, built through experience under realistic adversarial conditions.
Your Awareness Training Has Never Been Tested. That's the Real Problem.
Most organizations equate having awareness training with having a defense. They are not the same thing. A training module tells your employees what a deepfake attack looks like in theory. It cannot tell you, or your employees, how they will actually respond when one is targeting them in real time, with authority pressure, urgency framing, and a trusted identity on the other end of the line.
This is the measurement failure at the center of modern security awareness. You can track completion rates. You can report on quiz scores. You can produce documentation that satisfies your compliance requirements. None of that tells you whether your people's behavioral conditioning holds under realistic adversarial pressure, because no training platform applies realistic adversarial pressure. These platforms simulate email, and email is not the only space where this threat operates.
The MGM breach ($100 million, one phone call to a help desk) happened inside an organization with documented procedures.9 The training had been delivered. The policy existed. It failed at the human execution layer, under exactly the social pressure a completion certificate is incapable of measuring. You don't know whether your processes hold until someone actually tests them. Most organizations have never had that test run.
Why Orchestrated Simulations Are Required for Accurate Measurement
Accurate measurement of human security posture requires adversarial conditions that cannot be manufactured by a vendor with a financial interest in your training outcomes. It requires multi-vector, multi-stage simulations that replicate how threat actors actually operate. Not isolated phishing emails. Coordinated attack chains combining email, voice, SMS, Teams, and video into a single orchestrated campaign timed to exploit cognitive load and authority bias simultaneously.
The reason this matters is diagnostic, not punitive. A single-vector email simulation answers one narrow question: whether your employees click links in suspicious emails. It tells you nothing about whether they would authorize a wire transfer if a deepfake executive appeared on a video call. Nothing about whether your finance team's secondary approval procedure actually executes under pressure when a synthetic CFO is telling them to move fast. Nothing about whether a realistic IT help desk impersonation on Teams would produce a remote access grant.
You cannot measure what you have not simulated. And you cannot simulate the UNC1069 threat model with phishing email templates.
We Are Not a Traditional Awareness Training Company
This distinction matters more than it might appear.
We firmly advocate for awareness training when it is done well. We build custom awareness training content to help organizations close the knowledge gaps our simulations surface. But that is not where we start, and it is not what defines us.
Breacher.ai is a security research firm. Our team carries more than 15 years of practitioner experience spanning enterprise blue team operations, threat intelligence, and adversarial red team engagements. That is the kind of depth that only comes from spending years on the defensive side of real incidents before moving to the offensive side to understand exactly how attackers think, adapt, and exploit the gaps defenders leave open.
92% of organizations we assess are vulnerable to deepfake social engineering. 78% are highly vulnerable, meaning a realistic OSES™ simulation produces a successful outcome against their people or their processes. These are organizations that have awareness training in place. Training completion is not behavioral resilience. The only way to know the difference is an independent adversarial test conducted by practitioners who have no stake in the outcome.
The OSES™ Platform: Measurement That Actually Means Something
Breacher.ai's OSES™ Platform (Orchestrated Social Engineering Simulations™) is the only commercial platform purpose-built to assess human security posture across the full attack surface that deepfake and social engineering threats actually occupy. Not email alone. Not a single channel. The complete kill chain, run by practitioners.
Every OSES™ engagement is designed, executed, and analyzed by security professionals who have run real incident response, built real detection programs, and spent careers understanding the gap between what an attacker does and what a training vendor simulates.
What OSES™ Measures That No Other Platform Can
- Employee Identity Impersonation and Infiltration Scenarios We replicate the UNC1069 playbook: trust established through a known or plausible identity, escalated via a scheduled engagement, measured against whether your people and processes detect or authorize the follow-on action. We test the full infiltration chain, not a suspicious email.
- AI-Powered Vishing with Deepfake Voice Cloning Our ElevenLabs-integrated voice agents deliver executive and IT support personas over live phone calls. These are adaptive conversations with voices engineered using OSINT-derived organizational context. We run real conversations under adversarial conditions, not scripted recordings.
- Multi-Platform Orchestration: Voice, Teams, SMS, Email We coordinate attack chains across every channel the threat actually uses, sequenced and timed to produce the cognitive load and authority pressure that makes these attacks work. We simulate the kill chain, not isolated channels.
- Process Control Validation Under Adversarial Pressure We test whether your secondary approval protocols, callback requirements, and passphrase procedures actually execute when a convincing synthetic authority figure is on the line demanding urgency. Documentation is not the same as execution under pressure. We find the gap.
- Independent Third-Party Assessment of Your Awareness Program We measure whether your existing awareness training has produced durable behavioral change, not whether employees completed modules. We deliver that finding as an independent security research firm with no interest in selling you the remediation that follows.
What Your Organization Should Do Right Now
Assume Your Awareness Training Has Not Been Independently Validated
If the only measurement of your program's effectiveness is completion rates and simulated email click-throughs, you have compliance documentation, not validated behavioral resilience. Commission an independent adversarial assessment that tests your people the way UNC1069 tests them: across all channels, under realistic pressure, with adversarial tradecraft drawn from current threat intelligence.
Map Infiltration Risk, Not Just Phishing Risk
The UNC1069 threat pattern is an infiltration attack, not a phishing campaign. It begins with identity compromise and progresses through your organization's trust architecture rather than its technical controls. Identify which roles and processes represent your highest infiltration exposure. Finance, IT help desk, HR, and executive-adjacent roles carry disproportionate risk and require targeted simulation, not general awareness training.
Test Your Process Controls Before an Adversary Does
Document the controls you have in place for high-risk transactions, then have them stress-tested by practitioners whose job is to defeat them under realistic adversarial conditions. If your secondary approval process has never been tested against a convincing synthetic executive persona, you don't know whether it holds. MGM's help desk had a process. The process failed under pressure. The test you haven't run is the one that reveals the gap.
Build a Reporting Path for Suspected Deepfake
Your employees need to know what to do when they suspect a deepfake, not just what to look for. Define a clear escalation chain, a verification workflow, and a reporting channel. Without a structured response path, even employees who recognize an attack may not know how to act on that recognition before the damage is done.
Find Out Whether Your Awareness Training Actually Works
Breacher.ai is an independent security research firm, not an awareness training vendor. We run OSES™ simulations that put your people, your processes, and your organization's real behavioral resilience under the same adversarial pressure that UNC1069 and other sophisticated threat actors are applying right now.
References
1. Onfido. Identity Fraud Report 2024. A 3,000% year-over-year increase in deepfake fraud attempts in 2023, driven by the accessibility of cheap generative AI tools. onfido.com
2. Sumsub. Identity Fraud Report 2023. A 10x global increase in deepfake incidents from 2022 to 2023, including a 1,740% surge in North America. sumsub.com
3. Business.com. Deepfake Fraud Statistics, 2024. More than 10% of companies have dealt with attempted or successful deepfake fraud; damages from successful attacks reached as high as 10% of annual profits.
4. Google Mandiant / GTIG. UNC1069 Threat Actor Profile. Tracking of UNC1069 (CryptoCore / BlueNoroff-affiliated) as a North Korea-nexus financially motivated APT since at least April 2018. cloud.google.com
5. Google Mandiant. UNC1069 Intrusion Report, February 9, 2026. Deployment of AI-generated deepfake video lures and seven malware families (SILENCELIFT, DEEPBREATH, CHROMEPUSH, and others) in a single targeted intrusion against a FinTech entity. gbhackers.com
6. Google Threat Intelligence Group (GTIG). Axios npm Supply Chain Attribution, March 31, 2026. Formal attribution of the Axios JavaScript library backdoor to UNC1069, affecting a package with over 100 million weekly downloads. cyberwarrior76.substack.com
7. KnowBe4. North Korean IT Worker Incident Disclosure, July 2024. A North Korean threat actor used an AI-generated profile photo and fabricated credentials to pass KnowBe4's hiring process and gain authorized remote access as a software engineer. blog.knowbe4.com
8. Arup / CNN. $25 Million Deepfake Video Conference Fraud, February 2024. A finance employee at Arup was deceived by a deepfake video conference impersonating multiple executives simultaneously, resulting in 15 wire transfers totaling $25 million. weforum.org
9. MGM Resorts International. Q3 2023 Financial Disclosure. MGM confirmed a $100 million impact to its third-quarter 2023 results following a ransomware attack initiated by a vishing call to its IT help desk, in which Scattered Spider (UNC3944) impersonated an employee to obtain credentials. netwrix.com

