How Multi-Channel AI Deepfake Attacks Bypass Security Controls


Categories: Deepfake | Published On: January 20th, 2026

What is a multi-channel deepfake attack?

A multi-channel deepfake attack is a staged social engineering sequence that uses AI-generated content across multiple communication channels to establish credibility and bypass security controls.

These attacks succeed because each channel reinforces the others. A voicemail transcribes to text, creating a written record. A follow-up SMS references the voicemail. A video call confirms the “identity” established in previous contacts. By the time the target is asked to take action, they have experienced multiple touchpoints that all support the same narrative.

Breacher.ai red team assessments show that multi-stage attack sequences are highly effective at bypassing security controls that protect individual channels. This document explains why, based on findings from assessments across 300 targets.

Who should understand multi-channel attack techniques?

This threat briefing is for security leaders, SOC teams, and awareness training managers who need to understand how sophisticated AI attacks evade current defences. It addresses:

  • How multi-channel attacks are structured
  • Why they bypass traditional security controls
  • Which attack patterns pose the highest risk
  • How organisations can defend against staged sequences

Why do multi-channel attacks bypass security controls?

Most security controls are designed to protect individual channels. Email security scans attachments and links. Voice systems may flag known spoofed numbers. Video platforms check for uninvited participants. Multi-channel attacks exploit gaps between these siloed defences.

Why do single-channel controls fail against staged attacks?

Breacher.ai testing demonstrates this with the voicemail-to-SMS technique. A deepfake voicemail arrives and is transcribed to text, so voice security has no malicious link to flag, while the transcription creates a written record that looks legitimate. Once the transcription completes, an SMS with a link follows. Because the target has already “interacted” with the sender via voicemail, iPhone Safe Links treats the SMS link as trusted and renders it clickable. The security control is bypassed entirely.

Why does credibility compound across channels?

By the time a target receives a video call, they have already received a voicemail and SMS from “the same person.” They expect the call. They have seen the sender’s name multiple times. The video call does not need to establish credibility from scratch—it inherits credibility from previous touchpoints. This is why Breacher.ai assessments find that combinations of phone calls, voicemails, SMS, and email are highly effective.

What attack patterns do attackers commonly use?

Breacher.ai assessments identify recurring attack patterns that organisations face.

  • Voicemail → SMS: voicemail transcription creates a written record; the SMS link bypasses Safe Links because a prior “interaction” is detected
  • Calendar → Video Call: the target expects the meeting because it is on their calendar, so the deepfake call feels legitimate rather than suspicious
  • Video + Voice Combined: the most convincing attack method in Breacher.ai testing; video reinforces voice, increasing perceived legitimacy

Source: Breacher.ai red team assessments. Video + voice combinations identified as most effective attack vector.

Voicemail-to-SMS attack sequence

  1. Attacker leaves voicemail using cloned executive voice
  2. Voicemail transcribes to text and appears in target’s inbox
  3. Attacker waits a couple of minutes for transcription to complete
  4. SMS arrives: “Did you get my voicemail? I need this handled urgently.”
  5. SMS link is now clickable because Safe Links perceives prior interaction

As Jason Thatcher explains: “By dropping the voicemail and then waiting for the transcription of the text message, we actually bypass that safety feature entirely. So once we send the follow-up SMS link, it’s hot and it’s clickable and it’s active.”

Calendar invite-to-video call attack sequence

  1. Attacker sends calendar invite from spoofed executive account
  2. Subject: “Quick sync—confidential matter”
  3. Target accepts meeting or it auto-populates on calendar
  4. At meeting time, target joins video call
  5. Deepfake executive appears, makes urgent request

What real-world incidents demonstrate multi-channel attacks?

Arup: $25 million loss

In February 2024, engineering firm Arup lost $25 million to a deepfake attack (CNN report). The attack used a staged video conference where the target saw deepfake versions of the CFO and other executives. Prior communications established the context for the call. The target had multiple data points confirming the meeting was legitimate before the wire transfer request was made.

Ferrari: Attack stopped by verification question

In a widely reported incident, Ferrari executives received what appeared to be communications from their CEO requesting an urgent financial action. The attack was only stopped because one executive asked a verification question that the attacker could not answer (Bloomberg report).

Which departments and industries face the highest risk?

  • Finance departments: highest click-through rates, outsized compared to the company baseline
  • Human Resources: second-highest risk for deepfake social engineering
  • Manufacturing (industry): highest overall risk among industries tested
  • Organisation variation: click-through rates range from 5% to 30-40%

Source: Breacher.ai red team assessments across 300 targets.

How can organisations defend against multi-channel attacks?

Defending against staged attacks requires defences that work across channels, not within them.

Verification protocols

  • Out-of-band verification: Verify requests through a channel not used in the attack. If you receive a video call, verify via a known phone number.
  • Known-good contact methods: Never call back numbers provided in suspicious communications. Use internal directories.
  • Verification questions: Establish pre-arranged questions for high-value requests. The Ferrari attack was stopped this way.

Process controls

  • Dual authorisation: Require two people to approve high-value actions. Forces attackers to compromise multiple individuals.
  • Cooling-off periods: Implement waiting periods for urgent requests. Defeats urgency-based manipulation.
  • Request escalation: Unexpected high-value requests trigger automatic escalation regardless of apparent sender.
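Dual authorisation and cooling-off periods are simple enough to express as a gate in a payments or approvals workflow. This is a minimal sketch of the idea; the threshold and the 24-hour delay are illustrative assumptions, not Breacher.ai recommendations:

```python
# Sketch of dual authorisation plus a cooling-off period for high-value
# requests. The threshold and delay below are illustrative assumptions.
from datetime import datetime, timedelta

HIGH_VALUE_THRESHOLD = 10_000      # illustrative amount
COOLING_OFF = timedelta(hours=24)  # illustrative waiting period

def can_execute(amount: float, approvers: set[str],
                requested_at: datetime, now: datetime) -> bool:
    """A high-value request needs two distinct approvers AND must wait out
    the cooling-off period, regardless of how urgent the sender claims it is."""
    if amount < HIGH_VALUE_THRESHOLD:
        return True
    if len(approvers) < 2:
        return False  # dual authorisation not met
    return now - requested_at >= COOLING_OFF
```

The point of the design is that urgency, the attacker's main lever, has no input: a request cannot be executed faster by claiming it is confidential or time-critical.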

Training approach

  • Scenario-based training: Train on multi-step sequences, not just single-channel examples.
  • Verification focus: Previous communications do not validate current requests. Verify independently.
  • Multi-channel testing: Single-channel phishing tests do not prepare employees for coordinated attacks.

Breacher.ai assessment data shows organisations with awareness training programmes perform approximately 35% better against deepfake social engineering compared to organisations without structured training.

Frequently Asked Questions

Can technology detect multi-channel attack sequences?

Cross-channel correlation is an emerging capability but remains limited. Most security stacks operate in silos—email security does not share signals with voice security. As Steven Shapiro notes, bringing signals together from various channels is critical because cross-channel signals are what give accuracy to detection.
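The correlation idea can be sketched in a few lines: flag a target who receives events on several different channels from the same claimed sender within a short window. The event shape, channel names, and 30-minute window below are illustrative assumptions, not a description of any particular product:

```python
# Sketch of cross-channel correlation: flag a staged sequence when one
# claimed sender reaches a target on 3+ channels inside a short window.
# Event shape and the 30-minute window are illustrative assumptions.
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=30)

def staged_sequence(events: list[dict]) -> bool:
    """events: [{'channel': 'voicemail' | 'sms' | 'video' | 'email',
                 'sender': str, 'time': datetime}, ...] for one target."""
    events = sorted(events, key=lambda e: e["time"])
    for i, first in enumerate(events):
        channels = {first["channel"]}
        for later in events[i + 1:]:
            if later["time"] - first["time"] > WINDOW:
                break  # outside the correlation window
            if later["sender"] == first["sender"]:
                channels.add(later["channel"])
        if len(channels) >= 3:  # e.g. voicemail -> SMS -> video
            return True
    return False
```

No single event here is suspicious on its own, which is exactly why siloed controls miss the pattern; only the combination across channels trips the rule.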

Why is voice particularly dangerous in multi-channel attacks?

Breacher.ai testing indicates that voice is one of the most dangerous factors for deepfake social engineering, especially in corporate environments. Even in a video deepfake, it is the voice that does most of the social engineering work.

Key takeaways

Based on Breacher.ai red team assessments across 300 targets:

  • Multi-channel staged attacks are highly effective at bypassing security controls designed for individual channels
  • Credibility compounds across touchpoints, making later requests more convincing
  • Video-plus-voice combinations are the most convincing attack method
  • Finance and HR departments face elevated risk; manufacturing shows highest industry risk
  • Defence requires cross-channel verification and process controls
  • Testing must include multi-channel scenarios to prepare employees for real attacks

Next Step

Breacher.ai assessments include multi-channel attack simulations that test how your organisation responds to staged sequences. We use live voice cloning, deepfake video calls, and coordinated attack chains to identify vulnerabilities in your cross-channel defences. Contact us at breacher.ai to schedule a strategic consultation.

About the Author: Emma Francey

Specializing in Content Marketing and SEO with a knack for distilling complex information into easy reading. Here at Breacher we're working on getting as much exposure as we can to this important issue. We'd love you to share our content to help others prepare.
