Stop Waiting for Deepfake Watermarks: Why Regulation Won’t Protect Your Organisation
Will deepfake watermarks and regulation protect organisations from criminal attacks?
No. Deepfake watermarks and regulatory disclosure requirements will not protect organisations from criminal attacks. Technical markers can be stripped or bypassed. Open-source models operate outside regulatory frameworks. Criminal attackers do not comply with disclosure laws. Organisations waiting for regulation to solve the deepfake problem are waiting for a solution that will not materialise in time to address current threats.
Can technical watermarks identify deepfake content?
In theory, yes. In practice, not reliably for security purposes. Some AI platforms embed technical markers in generated content, including metadata tags, invisible watermarks, or statistical patterns that detection systems can identify. However, these markers have fundamental limitations that prevent them from serving as security controls.
Technical markers can be removed. Any marker added to content can be stripped, modified, or masked by someone with sufficient technical skill. Watermarks embedded in video can be degraded through re-encoding, cropping, or post-processing.
More fundamentally, markers can be avoided entirely by using tools that do not add them. Open-source deepfake models that anyone can download and run locally do not include watermarking. Attackers choosing their tools will simply select those without built-in markers.
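To make this concrete, the sketch below shows roughly what a metadata-based check looks like: scan an image's EXIF fields for generation-related hints. The keyword list and field handling are illustrative assumptions, not any platform's documented marker scheme, and the check comes back empty for content that was re-encoded or produced by tools that never write such fields.

```python
# Minimal metadata check: scan EXIF fields for hints of AI generation.
# The keywords below are illustrative assumptions, not any platform's
# documented marker scheme; stripped or absent metadata yields no flags.
from PIL import Image
from PIL.ExifTags import TAGS

SUSPECT_KEYWORDS = {"generated", "diffusion", "synthetic", "ai-generated"}

def metadata_flags(path: str) -> list[str]:
    """Return EXIF fields whose values suggest the image was AI-generated."""
    flags = []
    exif = Image.open(path).getexif()
    for tag_id, value in exif.items():
        name = TAGS.get(tag_id, str(tag_id))
        if any(keyword in str(value).lower() for keyword in SUSPECT_KEYWORDS):
            flags.append(f"{name}: {value}")
    return flags

if __name__ == "__main__":
    print(metadata_flags("sample.jpg") or "No generation markers in metadata")
```

Re-encoding the file, or generating it with a local open-source model, leaves nothing for a check like this to find, which is why metadata markers cannot serve as a security control.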
Why won’t regulation solve the deepfake threat?
Regulatory approaches to deepfake disclosure face structural barriers that limit their effectiveness against criminal use.
| Regulatory Approach | Limitation |
| --- | --- |
| Mandatory watermarking | Only applies to compliant platforms. Open-source tools and foreign services operate outside the requirement. |
| Disclosure requirements | Criminals conducting fraud do not disclose that their content is synthetic. Laws against fraud already exist. |
| Platform liability | Attacks often bypass major platforms entirely. Deepfake video calls happen directly on Teams/Zoom without platform upload. |
| Criminal penalties | Deepfake fraud is already illegal under existing fraud laws. Additional penalties do not deter attackers who already risk prosecution. |
What about open-source deepfake models?
Open-source deepfake tools represent the core challenge for any regulatory approach. These models can be downloaded, modified, and run locally without any connection to regulated platforms or services.
Once an open-source model is released, it cannot be recalled. Code repositories may remove files, but copies proliferate across mirrors, file-sharing networks, and private channels. Attackers with technical capability can access and deploy these tools regardless of regulatory status.
Regulatory requirements that apply only to commercial AI services create a two-tier system: legitimate users work with marked, regulated tools while criminal attackers use unmarked, unregulated alternatives. The markers help identify legitimate content but do not impede malicious use.
Don’t criminals still face consequences if caught?
Deepfake-enabled fraud is already illegal under existing criminal statutes. Wire fraud, identity theft, and impersonation carry significant penalties in most jurisdictions. Additional laws specifically targeting deepfakes add to the legal framework but do not create new deterrence for attackers who already accept legal risk.
The challenge is not legal ambiguity but enforcement. Many deepfake attacks originate from jurisdictions with limited cooperation on cybercrime prosecution. Attribution is difficult. By the time an attack is investigated, funds may already be irrecoverable.
For organisations, the legal consequences that attackers may eventually face do not prevent the initial attack or recover losses. Prevention must rely on controls within the organisation’s own environment.
What should organisations do instead of waiting for regulation?
Organisations should treat deepfake risk as a current operational threat requiring immediate controls, not a future problem that regulation will solve.
Implement verification procedures that do not depend on trusting voice or video identity. Out-of-band verification through channels attackers cannot control provides protection regardless of deepfake sophistication or regulatory environment.
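As one concrete pattern, the sketch below gates a high-risk request on a callback through a channel the attacker does not control: a one-time code sent to a contact number held in the organisation's own directory, never to details supplied in the request itself. The directory data, send_sms transport, and escalate_to_security hook are hypothetical placeholders for internal systems, not any specific product's API.

```python
# Sketch of an out-of-band verification gate for high-risk requests,
# e.g. payment or bank-detail changes. Directory data, send_sms and
# escalate_to_security are hypothetical placeholders for internal systems.
import secrets

# Registered contact details come from an internal directory, never from
# the inbound request (the attacker controls that channel).
DIRECTORY = {"j.smith": "+44 7700 900123"}  # example data only

def send_sms(number: str, message: str) -> None:
    print(f"[SMS to {number}] {message}")  # placeholder transport

def escalate_to_security(employee_id: str) -> None:
    print(f"Escalating request attributed to {employee_id} for manual review")

def issue_code(employee_id: str) -> str | None:
    """Deliver a one-time code via the registered channel and return it."""
    number = DIRECTORY.get(employee_id)
    if number is None:
        return None
    code = f"{secrets.randbelow(1_000_000):06d}"
    send_sms(number, f"Code to confirm your pending request: {code}")
    return code

def approve_request(employee_id: str, code_read_back: str, issued: str | None) -> bool:
    """Approve only if the requester can read back the out-of-band code."""
    if issued is None or not secrets.compare_digest(code_read_back, issued):
        escalate_to_security(employee_id)
        return False
    return True
```

The point of the design is that approval never rests on how convincing a voice or video was; it rests on control of a channel registered before the request arrived.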
Test whether your controls work through AI social engineering red team assessments. Assumptions about employee behaviour and process compliance should be validated before attackers test them.
Train employees on the reality that voice and video confirmation is no longer sufficient for identity verification. This is a fundamental shift in security assumptions that awareness programmes must address.
Will detection technology eventually solve this?
Detection technology is improving but faces an asymmetric challenge. Detection systems must identify all deepfakes while attackers only need to evade detection once. As detection improves, generation techniques adapt.
Detection tools can be part of a defence strategy but should not be the primary control. Process-based defences that do not depend on detecting manipulation provide more reliable protection across different attack sophistication levels.
Frequently Asked Questions
Aren’t major AI companies implementing watermarks?
Some commercial AI platforms include watermarking. However, criminal attackers typically use open-source tools or modified versions of commercial tools specifically to avoid detection. Watermarks on legitimate content do not prevent illegitimate content from being created without them.
Won’t the EU AI Act address this?
The EU AI Act includes disclosure requirements for AI-generated content. However, it applies to entities operating within EU jurisdiction and using compliant tools. Criminal deepfake operations typically operate outside these boundaries intentionally.
Should we ignore regulatory developments entirely?
No. Regulatory frameworks may provide useful standards and may affect legitimate use cases. However, security planning should not depend on regulatory solutions materialising. Build defences that work regardless of external developments.