UNC1069 Is Using Deepfakes To Steal Millions. Are You Ready?
North Korean threat actors just deployed AI-generated deepfake video in a live attack against a cryptocurrency executive—impersonating a CEO on a fake Zoom call to deliver malware. Mandiant confirmed the intrusion this week. Here's what happened, what it means, and why most organizations would fail the same test.
This Isn't Theoretical Anymore
On February 9, 2026, Google's Mandiant published a detailed investigation into a targeted intrusion attributed to UNC1069—a North Korean state-sponsored threat group that has been conducting financially motivated cyber operations since at least 2018. The findings confirmed what offensive security professionals have been warning about for years: deepfake technology has crossed from proof-of-concept demonstrations into active, weaponized deployment against real targets.
The attack wasn't a mass phishing campaign. It wasn't a spray-and-pray credential harvester. It was a surgically targeted social engineering operation that used a compromised executive's Telegram account, a spoofed Zoom meeting, and a real-time deepfake video of a known cryptocurrency CEO—all orchestrated to deliver seven distinct malware families onto a single victim's machine.
This is the attack surface we've been building simulations against at Breacher.ai. And the fact that a nation-state threat actor just operationalized it against a live target should be the loudest wake-up call your security team has heard this year.
This analysis is based on Mandiant's published investigation: "UNC1069 Targets Cryptocurrency Sector with New Tooling and AI-Enabled Social Engineering" (February 9, 2026). Mandiant attributes UNC1069 to North Korea with high confidence. The group is also tracked as CryptoCore and MASAN by the broader cybersecurity community.
The Attack: Step by Step
Understanding how this attack unfolded is critical—because every phase exploited a human decision point, not a technical vulnerability. The perimeter didn't fail. The firewall didn't fail. The endpoint protection didn't fail. A person made a series of reasonable decisions that any untrained employee would make, and those decisions handed full system access to a hostile nation-state.
Phase 1: Compromised Identity
UNC1069 hijacked the Telegram account of a legitimate executive at a cryptocurrency company. The true owner later posted warnings from a separate social media account that their Telegram had been compromised—but by then, the threat actor had already used that trusted identity to initiate contact with the target. The victim had no reason to question the authenticity of the outreach. It came from a known, trusted contact.
Phase 2: The Deepfake Meeting
After building rapport via Telegram, UNC1069 sent a Calendly link to schedule a 30-minute meeting. The meeting link redirected to a spoofed Zoom page hosted on attacker-controlled infrastructure. During the call, the victim was presented with a deepfake video impersonating a CEO from another cryptocurrency company. The video was convincing enough that the victim continued engaging with the call as though it were legitimate.
Phase 3: The ClickFix Trap
Once the victim was engaged in the fake meeting, the attackers staged a ruse: the call appeared to have audio issues. The attacker then directed the victim to run "troubleshooting" commands that actually carried a malicious payload. The victim executed them, initiating the infection chain. On macOS, the command chain included a curl request that piped a remote script directly into zsh; on Windows, the attack used mshta to execute a remote payload.
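The command shapes described above (a remote script piped straight into a shell, or mshta fetching a remote payload) are distinctive enough to flag heuristically before execution. A minimal sketch of such a check, with illustrative patterns that are not a complete detection ruleset:

```python
import re

# Heuristic patterns for ClickFix-style "troubleshooting" commands:
# a remote script piped into a shell, mshta invoking a remote payload,
# or an encoded PowerShell command. Illustrative only, not exhaustive.
CLICKFIX_PATTERNS = [
    re.compile(r"curl\s+[^|]*\|\s*(zsh|bash|sh)\b"),     # curl ... | zsh
    re.compile(r"\bmshta(\.exe)?\s+https?://", re.I),    # mshta http(s)://...
    re.compile(r"powershell[^|]*-enc(odedcommand)?\b", re.I),
]

def looks_like_clickfix(command: str) -> bool:
    """Return True if a pasted command matches a known ClickFix shape."""
    return any(p.search(command) for p in CLICKFIX_PATTERNS)
```

A check like this could sit in a clipboard monitor or terminal wrapper: `looks_like_clickfix("curl -s https://host/fix.sh | zsh")` returns True, while ordinary commands pass through.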
Phase 4: Full System Compromise
From that single execution, UNC1069 deployed seven distinct malware families onto the victim's machine—an unusually aggressive volume of tooling for a single host. The arsenal included WAVESHAPER (a C++ backdoor), HYPERCALL (a Golang downloader), HIDDENCALL (a hands-on-keyboard backdoor), SUGARLOADER (a known DPRK downloader), SILENCELIFT (a new toehold backdoor), DEEPBREATH (a data miner targeting credentials, browser data, and Telegram), and CHROMEPUSH (a malicious browser extension disguised as a Google Docs editor).
Why This Attack Matters Beyond Crypto
It's easy to read this report and think: "We're not in crypto. This doesn't apply to us." That's exactly the kind of thinking that gets organizations compromised.
UNC1069 targeted a cryptocurrency executive because that's where the immediate financial return is. But the attack methodology—compromised identity, deepfake video on a spoofed meeting platform, social engineering into malware execution—works against any organization in any sector. The victim didn't fall for this because they work in crypto. They fell for it because the social engineering was indistinguishable from a legitimate business interaction.
Consider how this same playbook translates to your environment:
- Finance & Treasury: A deepfake video call impersonating your CFO instructs a controller to authorize an emergency wire transfer. The voice matches. The face matches. The Zoom background looks right. No technical exploit required—just a human being doing what their "boss" told them to do.
- Legal & M&A: A deepfake of outside counsel requests sensitive deal documents during a fabricated "urgent" video call. The attorney's voice has been cloned from a publicly available conference recording. The associate on the receiving end complies.
- IT & Engineering: A deepfake of a CISO on a Teams call instructs a sysadmin to run a script to "patch a critical vulnerability." The script installs a backdoor. The sysadmin followed what appeared to be a direct order from leadership.
- Executive Leadership: A deepfake video call from a "board member" pressures the CEO into sharing credentials for a board portal ahead of a fabricated emergency session. The CEO has no established protocol to verify the request.
The common thread isn't the industry. It's the absence of any organizational process to verify that the person on the other end of a video call is who they claim to be. That gap exists in almost every enterprise today.
The volume of tooling deployed on a single host indicates a highly determined effort to harvest credentials, browser data, and session tokens to facilitate financial theft.
The AI Escalation Is Accelerating
UNC1069 didn't develop its deepfake capabilities overnight. Google's Threat Intelligence Group has been tracking this group's progression toward AI-enabled operations since at least November 2025, when they first documented UNC1069's shift from using generative AI for basic productivity tasks to deploying AI-powered lures in active operations.
The evolution is worth understanding, because it mirrors what we're seeing across the broader threat landscape:
- Stage 1: AI for Productivity. Threat actors use large language models to draft phishing emails, translate lures into target languages, and conduct reconnaissance. This is where most threat groups operated 12–18 months ago.
- Stage 2: AI for Content Generation. Threat actors begin using AI to generate deepfake images, clone voices from publicly available audio (earnings calls, conference talks, podcasts), and create synthetic personas for social engineering. UNC1069 was documented at this stage in late 2025.
- Stage 3: AI in Active Operations. Threat actors deploy deepfake video and cloned audio in real time during live attacks—impersonating specific individuals on video calls to manipulate targets into taking actions. This is where UNC1069 is now.
Mandiant also noted that UNC1069 is using tools like Google's Gemini to develop malware, conduct operational research, and assist with reconnaissance. Kaspersky separately reported that the overlapping threat group BlueNoroff is using GPT-4o to modify images—further confirmation that generative AI tooling is now standard in the North Korean offensive playbook.
This isn't a single threat actor experimenting. This is the industrialization of AI-enabled social engineering at the nation-state level.
Mandiant confirmed the attack served two objectives: immediate cryptocurrency theft and harvesting victim identity data to fuel future social engineering campaigns. Every compromised identity becomes source material for the next deepfake. Every stolen credential enables the next account takeover. The attack loop is self-reinforcing.
What the UNC1069 Attack Reveals About Your Defenses
If you're a CISO reading this report and thinking about your own organization's exposure, here are the questions you should be asking—and the honest answers most enterprises would give:
- Do we have a verification protocol for video calls? Most organizations don't. There's no established process to confirm that the person on a Zoom, Teams, or WebEx call is who they appear to be—especially for ad-hoc or externally scheduled meetings. UNC1069 exploited exactly this gap.
- Could our employees detect a deepfake in a live video call? Almost certainly not. Our assessment data shows 63% of employees can't distinguish synthetic content from real content even when they know they're being tested. In a high-pressure, real-time scenario—where they believe they're speaking with a known executive—the detection rate drops further.
- Would our finance team execute a wire transfer based on a video call? If the call came from a spoofed meeting link sent by a compromised executive's account, featured a convincing deepfake of the CFO, and included a plausible business justification—many would. The social engineering in this attack was designed to bypass exactly the kind of "does this feel right?" gut check that most organizations rely on.
- Have we ever tested our organization against a deepfake attack? This is the critical question. If the answer is no, then you have no empirical data on your exposure. You're relying on assumptions about your team's ability to detect threats they've never encountered. UNC1069 just demonstrated that these attacks work against sophisticated targets. The only way to know whether your organization is prepared is to test it.
The Layer 7 Problem
The UNC1069 intrusion is a textbook example of what happens when adversaries bypass every technical control by targeting the one layer that can't be patched: the human layer.
Layer 7 is the new perimeter. Every dollar your organization has invested in endpoint protection, network segmentation, SIEM correlation, and zero trust architecture was irrelevant in this attack. The victim's machine had no EDR agent installed—but even if it had, the initial compromise vector was a user voluntarily executing commands on their own system. No malware signature was triggered at the point of entry. No network anomaly was flagged. The deepfake video call didn't generate a security alert.
The entire kill chain—from initial contact through full system compromise—was enabled by social engineering. The technical sophistication came after the human was already compromised. And that's the pattern we see in every deepfake attack we simulate: the technology isn't the vulnerability. The absence of human-layer testing is the vulnerability.
UNC1069 just proved that deepfake social engineering works against real targets in real operations. This isn't a future threat—it's a current one, deployed by a nation-state threat actor with a documented track record of financial theft. If your organization hasn't been tested against these attacks, you don't know your exposure. And if you don't know your exposure, you can't defend against it.
What CISOs Should Do Right Now
This isn't a checklist of aspirational recommendations. These are the specific, actionable steps that would have disrupted the UNC1069 kill chain at multiple points—and that your organization can implement immediately:
- Conduct a deepfake resilience assessment: Test your organization against the same attack techniques UNC1069 used—voice clones, synthetic video, compromised identity social engineering, and multi-channel attack chains. You need empirical data on your exposure, not assumptions.
- Establish video call verification protocols: Implement out-of-band verification for any video call that involves sensitive decisions, financial transactions, or access requests. A callback to a known number, a pre-shared passphrase, or a secondary channel confirmation would have broken the UNC1069 attack chain before malware was ever delivered.
- Train specifically for AI-enabled social engineering: Traditional phishing simulations—the "click the link" tests—don't prepare employees for deepfake attacks. Your training program needs to include exposure to synthetic voice, synthetic video, and blended multi-channel social engineering scenarios. Employees need to experience these attacks in a controlled environment before they encounter them in the wild.
- Harden meeting platform policies: Enforce policies that prevent employees from joining meetings on unrecognized or externally hosted meeting platforms. UNC1069 used a spoofed Zoom domain—a simple URL inspection policy or a managed meeting platform requirement would have flagged this.
- Deploy EDR on every endpoint: The victim in this case had no EDR agent installed. While EDR alone wouldn't have prevented the initial social engineering, it would have detected the post-compromise malware deployment and potentially contained the damage. No endpoint should be unmonitored.
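The meeting-platform hardening above can start with something as simple as hostname inspection on meeting links. A minimal sketch, assuming a managed allowlist of sanctioned platforms (the domain set here is illustrative; a real deployment would source it from policy, not hardcode it):

```python
from urllib.parse import urlparse

# Illustrative allowlist of sanctioned meeting platforms.
ALLOWED_MEETING_DOMAINS = {"zoom.us", "teams.microsoft.com", "webex.com"}

def is_sanctioned_meeting_link(url: str) -> bool:
    """Allow only HTTPS links whose host is an allowed domain or a
    true subdomain of one (suffix match on '.' + domain, so lookalikes
    such as zoom.us.evil.example do not pass)."""
    parsed = urlparse(url)
    if parsed.scheme != "https" or not parsed.hostname:
        return False
    host = parsed.hostname.lower()
    return any(host == d or host.endswith("." + d)
               for d in ALLOWED_MEETING_DOMAINS)
```

Note the exact-suffix match: a spoofed domain like `zoom.us.evil.example` fails the check, which is precisely the class of infrastructure UNC1069 used.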
Only 8% of the organizations we assess show no susceptibility to deepfake social engineering. The other 92% have never been tested with the attacks that are actually being deployed against them.
The Threat Is Evolving. Your Testing Should Too.
UNC1069 is not the only threat actor building these capabilities. They're just the one that got caught this week. The tooling to create deepfake video, clone voices, and orchestrate multi-channel social engineering attacks is becoming more accessible, more convincing, and less expensive every month. The barrier to entry for AI-enabled social engineering is dropping fast—and the organizations that wait to test their defenses until after they've been compromised will be the ones funding North Korea's next operation.
At Breacher.ai, we run the same attack simulations that UNC1069 just deployed in the wild—voice cloning, deepfake video calls, compromised identity chains, and multi-channel social engineering—against Fortune 500 security teams. We test people, processes, and technology simultaneously, because that's how real adversaries operate. And we deliver the empirical data CISOs need to understand their actual exposure to the threats that are actually being used against enterprises today.
If this report made you ask whether your organization would pass the same test that UNC1069's target just failed—that's the right question. The next step is finding out the answer.
Test Your Deepfake Resilience
UNC1069 just proved these attacks work in the real world. Find out whether your organization would detect them—before a threat actor does it for you.