$11 Million Lost to a Deepfake Video Call: The UXLINK Hack

Categories: Deepfake | Published on: January 23rd, 2026

A deepfake video conference gave attackers access to UXLINK’s systems, resulting in $11 million in losses. The attacker didn’t exploit a code vulnerability—they exploited trust, impersonating a business partner convincingly enough to compromise a team member’s device and Telegram account.

This incident demonstrates why technical security controls alone cannot protect organisations against AI-powered social engineering.

What Happened

On September 22nd, 2025, an attacker posed as a legitimate business partner using deepfake video technology. Through convincing video conferences, they built trust with a UXLINK team member and gained access to their personal device and communications.

With that access, the attacker:

  • Seized control of a critical smart contract
  • Minted billions of fraudulent tokens
  • Accessed treasury and ecosystem funds
  • Inflated the token supply to over 10 trillion tokens

The attack was initially mistaken for a rug pull—an inside job. Investigation has now revealed it was external, enabled entirely by AI-generated impersonation.

Why This Attack Worked

The attacker succeeded because verification processes didn’t account for AI impersonation.

Video conferencing creates inherent trust. When someone appears on camera claiming to be a known contact, most people don’t question authenticity. There’s no standard protocol for verifying that the person on a video call is who they appear to be.

This is the gap attackers now exploit.

Voice cloning requires 3-5 seconds of audio. Deepfake video can be generated in under an hour with accessible tools. The technology barrier that once protected organisations from impersonation attacks no longer exists.

The Process Failures

This wasn’t a failure of individual judgment. It was a failure of verification workflows.

No out-of-band confirmation: Access was granted based on video call trust alone, without secondary verification through a separate channel.

No callback procedures: There was no protocol requiring the team member to independently contact the supposed partner through known, verified contact details.

No escalation trigger: Requests involving system access didn’t trigger additional approval steps or security team involvement.

These are process gaps—not human errors. The team member responded to what appeared to be a reasonable interaction with a trusted contact. The processes in place simply weren’t designed for a world where video and audio can be fabricated.
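To make the gap concrete, here is a minimal sketch of what those three missing controls could look like when expressed as a simple request gate. The names (AccessRequest, send_out_of_band_challenge, notify_security_team) are hypothetical illustrations, not part of UXLINK’s actual systems.

```python
# A minimal sketch of the three missing controls as a request gate.
# AccessRequest, send_out_of_band_challenge and notify_security_team are
# hypothetical names for illustration, not part of any real UXLINK system.
from dataclasses import dataclass

HIGH_RISK_ACTIONS = {"grant_system_access", "transfer_funds", "modify_contract"}

@dataclass
class AccessRequest:
    requester: str   # identity claimed on the video call
    action: str      # what is being asked for
    channel: str     # how the request arrived, e.g. "video_call"

def send_out_of_band_challenge(requester: str) -> bool:
    """Confirm the request through a separate, pre-verified channel.

    In practice this would contact the partner via details already on file,
    never via details supplied during the call itself.
    """
    print(f"Requesting confirmation from the verified contact for {requester}")
    return False  # default-deny until an explicit confirmation arrives

def notify_security_team(request: AccessRequest) -> None:
    print(f"Escalating '{request.action}' from {request.requester} for review")

def approve(request: AccessRequest) -> bool:
    # Escalation trigger: high-risk actions always involve the security team.
    if request.action in HIGH_RISK_ACTIONS:
        notify_security_team(request)
    # Out-of-band confirmation: trust built on a video call is never sufficient.
    if request.channel == "video_call":
        return send_out_of_band_challenge(request.requester)
    return True

if __name__ == "__main__":
    req = AccessRequest("partner@example.com", "grant_system_access", "video_call")
    print("Approved" if approve(req) else "Denied pending verification")
```

The point is not the code itself but the default-deny shape: no single interaction, however convincing, can grant access on its own.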

What Organisations Should Learn

The UXLINK incident is a case study in why security programs must extend beyond code audits and technical controls.

Verification workflows need updating. Any request involving system access, financial transfers, or sensitive information should require out-of-band confirmation—regardless of how convincing the requestor appears.

Video calls are no longer proof of identity. Policies that treat video presence as verification are now exploitable. Organisations need callback procedures that don’t rely on contact information provided during the suspicious interaction.
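As a rough illustration of that principle, a callback procedure reduces to one rule: the verification channel comes from a directory recorded before the interaction, never from the caller. The directory and helper names below are hypothetical, sketched only to show the shape of the rule.

```python
# Hypothetical callback sketch: verification details come from a directory
# recorded when the relationship was established, never from the caller.
VERIFIED_CONTACTS = {
    "acme-partners": "+44 20 7946 0000",  # example number recorded at onboarding
}

def place_callback(number: str) -> None:
    print(f"Calling {number} to confirm the request independently")

def callback_verify(claimed_org: str, number_offered_on_call: str | None = None) -> bool:
    known_number = VERIFIED_CONTACTS.get(claimed_org)
    if known_number is None:
        return False  # no pre-existing contact on file: treat as unverified
    # Deliberately ignore number_offered_on_call; an impersonator controls it.
    place_callback(known_number)
    return True  # proceed only once the known contact confirms the request

callback_verify("acme-partners", number_offered_on_call="+1 555 0100")
```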

Training must address AI threats specifically. General phishing awareness doesn’t prepare teams for deepfake impersonation. Employees need to understand what’s now possible and what verification steps to take.

Testing should simulate real attack patterns. Template-based phishing simulations don’t reveal whether your organisation would catch a deepfake video call. Testing must reflect how attackers actually operate.

The Broader Pattern

UXLINK isn’t an isolated incident.

  • Arup lost $25 million to a deepfake video conference impersonating their CFO and executives
  • A UK energy firm transferred €220,000 after a CEO voice clone call
  • Ferrari blocked an attempted attack only because an employee asked a verification question the deepfake couldn’t answer

Deepfake fraud exceeded $200 million in Q1 2025 alone—and that figure only reflects reported incidents. There’s no mandatory disclosure requirement for AI-enabled fraud, meaning most attacks go unreported.

Key Takeaways

The UXLINK hack succeeded because:

  1. Deepfake technology made impersonation convincing
  2. Verification processes didn’t require out-of-band confirmation
  3. Video call presence was treated as proof of identity
  4. No escalation protocol existed for access requests

Organisations with similar process gaps are vulnerable to the same attack pattern.

The question isn’t whether your team would recognise a deepfake. The question is whether your processes would catch one even if individuals were fooled.

Frequently Asked Questions

How did the attacker gain access to UXLINK’s systems?

The attacker used deepfake video technology to impersonate a business partner during video conferences. This built enough trust to compromise a team member’s device and Telegram account, which provided access to critical smart contracts.

Could this attack have been prevented?

Yes. Out-of-band verification—confirming requests through a separate, independently verified channel—would have exposed the impersonation. Callback procedures requiring contact through known details (not information provided by the caller) are effective against this attack pattern.

What makes deepfake attacks different from traditional phishing?

Traditional phishing relies on text-based deception. Deepfake attacks use AI-generated video and audio to impersonate known, trusted contacts in real-time. This exploits the inherent trust people place in video communication.

How can organisations test whether they’re vulnerable?

Template-based phishing simulations don’t test for deepfake resilience. Organisations need assessments that simulate live AI social engineering—including voice cloning and video impersonation—against their actual verification workflows.

Are deepfake attacks becoming more common?

Yes. Voice cloning now requires only 3-5 seconds of audio. Deepfake video creation takes under an hour with accessible tools. Reported deepfake fraud exceeded $200 million in Q1 2025, though actual figures are likely higher due to underreporting.

