When What You’re Hearing Is Fake: The Menace of Deepfake Audio
Deepfakes: What’s Hype and What’s Real?
Deepfakes are a complex problem, and not just a workplace problem. They impact families too, which is why we so strongly advocate Security Awareness Training. Simply raising awareness is one of the most effective tools against this threat.
But what should you focus on, and how do you cut through all the noise? What’s hype, and where should you be paying attention?
Audio is the answer. Why? Look at the data: roughly 70% of publicly known deepfake security incidents are audio based. People do not fall victim to silent videos. With audio-only attacks, our human senses are reduced to a single channel, making detection harder. The subtle irregularities and visual tells you might use to spot a deepfake… aren’t there.
When it comes to protecting an organization, audio is currently the vector to watch. In 100% of successful deepfake attacks, audio was used. Video plays a role too, not to downplay it, but even when video is part of the attack, it’s the audio that actually tricks someone.
Much of today’s emphasis is on video and on preventive controls for that vector. Video is not the threat to focus on exclusively.
Audio is where you should actually be focused. Voice phishing, or vishing, is one of the top tactics to watch.
Here’s where it gets tricky and why it’s a problem: there are far fewer security controls on mobile devices.
Granted, a ransomware attack won’t launch from a mobile phone. But it’s a good first entry point for fraud or a foothold into an organization.
The attack surface to watch is increasingly mobile. Your security team doesn’t have a great way to protect employees on personal mobile devices beyond the email gateway. That leaves human decision-making as one of the best controls against this threat.
The emerging landscape to watch: deepfake audio, voice phishing, employee personal phones, and social engineering.