Voice cloning on the rise
2024 is here, and it came quickly. Along with it comes the rise of AI. I didn’t really buy the AI hype at first. Then I got hands-on, and now I’m convinced that AI will fundamentally alter our lives over the next few years. AI is a genie out of the bottle, and there’s no putting it back at this point. What concerns me is that there is no good governance for AI today, and that gap will lead to emerging threats for Cybersecurity teams. AI will change the way we work and live.
Mostly for good.
But also for bad.
At the end of the day, Cybersecurity is a risk management domain: you plan for, respond to, address, avoid, and/or mitigate risks. AI is just one of the many risks an organization should be aware of.
As a cyber threat, AI is currently a blip on the radar. But I don’t think that will be the case for much longer in 2024. I predict a rise in social engineering attacks that leverage AI as a force multiplier. Going a step further, I believe we’ll see a move away from desktop attacks toward mobile. And in 2024, the human factor will define defensive posture and strategy. There is a misperception that humans are the “weakest link,” and that thinking is wrong. Humans are the most “vulnerable,” and it’s within the domain of Cybersecurity to ensure people are educated and trained against emerging threats.
Voice cloning poses a unique challenge to security teams, and we will see a proliferation of these attacks in 2024. It’s easier to create a voice clone and use it in a phish than to exploit a vulnerability to gain initial access. Phishing and social engineering continue to be the most effective entry into an organization. Period. As detection and blocking tools improve on desktop, attackers will pivot to less protected surfaces, and that means mobile. The humans you protect and their mobile devices are the simplest way into an organization, and they lack all the defensive tooling that exists on desktop today. Mobile is the side door that’s not nearly as protected as the front. Follow the path of effectiveness and least resistance, and that’s where attacks will rise.
With the amount of PII available today from numerous breaches, like phone numbers, names, and email addresses, it’s a no-brainer in the shoes of an attacker. Always think like your enemy and pick your most effective tactic. AI just made that tactic much easier, much more convincing, and much, much more dangerous.
Consider this: today I can clone someone’s voice and manipulate it to say anything I want. I can then spoof a phone number and target someone with that cloned voice. This is not a dystopian future, but something that exists today… and it’s frighteningly easy to do. If we think for a moment that AI won’t be abused for social engineering, then we are not adhering to the fundamentals of the cybersecurity domain. Know your risks, know their impact, know the probability of them occurring, and prepare for them. If you’re not following the first part, the “knowing,” then your approach to cybersecurity is flawed. Downplaying the risk that AI poses isn’t just flawed, it’s dangerous.
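To make that “impact times probability” thinking concrete, here is a minimal sketch of a likelihood-by-impact risk score, the kind of back-of-the-napkin register entry you might keep for a threat like AI voice-clone phishing. The 1–5 scales, the thresholds, and the example entries are all illustrative assumptions, not a standard or a recommendation.

```python
# Minimal risk-scoring sketch: score = likelihood x impact, each rated 1 (low) to 5 (high).
# The scales, cut-offs, and example register entries below are illustrative only.

def risk_score(likelihood: int, impact: int) -> int:
    """Return a simple risk score from two 1-5 ratings."""
    return likelihood * impact

def risk_level(score: int) -> str:
    """Bucket a score into a coarse level; the cut-offs are arbitrary examples."""
    if score >= 15:
        return "critical"
    if score >= 8:
        return "high"
    if score >= 4:
        return "medium"
    return "low"

# Hypothetical risk register entries, including AI-assisted voice-clone phishing.
risks = [
    ("AI voice-clone phishing via mobile", 4, 5),
    ("Credential stuffing from breached PII", 4, 3),
    ("Unpatched VPN appliance exploit", 2, 5),
]

for name, likelihood, impact in risks:
    score = risk_score(likelihood, impact)
    print(f"{name}: score={score} ({risk_level(score)})")
```

The point isn’t the math; it’s that once you rate likelihood and impact honestly, an “unlikely but easy and devastating” tactic like voice-clone phishing stops looking like something you can safely ignore.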
AI is truly one of the most terrifying threats we’ve faced. I don’t think we’ve even begun to comprehend how dangerous it is, nor have we seen the end stage of what AI can do. In 2024 there will be additional AI-based threats we haven’t even considered yet. If we aren’t planning for this threat today, imagine the dangers to your user base. I am truly terrified for them, the users who sit in various departments across the organization. They are about to be targeted with some of the most advanced social engineering we will ever see. I truly hope I’m wrong about this. But time will tell.
As Cybersecurity professionals we forget that the average user may not have the level of cybersecurity knowledge we possess, nor do they keep cybersecurity at the forefront of their minds in their day-to-day job. Someone in marketing is thinking about their next ad placement or next social media post. Cybersecurity awareness and education is our job: to help them keep Cybersecurity in mind. Cybersecurity awareness training is probably the most hated training there is, and painfully, also the most important. Promoting an open and collaborative culture of Cybersecurity awareness is fundamental. Remember, Cybersecurity is a cross-functional domain; it’s everyone’s job.
TL;DR: you should be thinking about AI and the threat it presents using a risk-based approach, and you should be taking steps to protect your users through education and awareness regarding AI. Do not downplay this risk; it may be small today, but the impact is huge and the probability grows every day. We offer NextGen training and testing to help users protect themselves against this threat. We take a mobile-first approach and give users the ability to report targeted phishing directly from their phones, then alert other users to an active attack. It’s a novel approach, but it shouldn’t be: alert, educate, train, simulate, promote inclusion, and empower people to defend and protect themselves.