Your Guide to Preparing for an AI Deepfake Attack
Available for free download: here
A recent report from the Treasury Department highlights the risk of AI being used in cyber attacks against financial organizations – here. Any company that handles financial transactions should be on heightened alert as well. This includes law firms, real estate, crypto, and credit unions.
Wire transfers are a logical target for Deepfake used with malicious intent to defraud or divert funds. As the barrier to creating a realistic deepfake has dropped, the skill required has dropped with it. This is what makes Deepfake dangerous: what was once possible only with high skill can now be done in a few clicks and a few minutes. The quality has also improved exponentially; how far it has come in just the past year is remarkable.
2024 will be the year of the Deepfake. Regulation and governance are lagging behind.
Recently, a Hong Kong multinational was duped into wiring $25 million in funds by attackers using deepfake video. The attack was lucrative enough that its success has spawned copycat attacks.
Looking at the data, Southeast Asia is the current hotspot (Vietnam, Hong Kong, Thailand). The volume of publicly reported Deepfake attacks appears low, but activity is starting to move to Europe, with publicly reported deepfake attacks in the Netherlands targeting financial institutions. North America hasn't seen the volume of attacks that other hotspots have, but that doesn't mean this is not the time to be vigilant. Now is the best time to prepare and ensure appropriate controls and verification checks are in place. Prepare ahead or prepare for the worst… the worst has yet to come.
There is also a massive amount of bad advice about Deepfake: look for subtle clues or irregularities such as robotic tones, and so on. That advice works today, sure. But in the near future, Deepfake will be indistinguishable from reality. The videos being teased by OpenAI's Sora are truly remarkable. Suggesting that Deepfake is not a threat because of its current state is poor advice. Soon it will be like for like, and those subtle clues and irregularities won't be there to spot. This advice trains people on the wrong behavior and can create a false sense of safety.
The best approach: focus on the situation and apply a healthy amount of skepticism. Ensure the appropriate communication paths and channels are in place to verify requests and transactions. Have processes in place to double-verify a request. Instead of training employees to spot subtle irregularities, focus on the behavior and the scenario. Is something unusual being asked? Is there a sense of urgency? Empowering people to challenge and verify is key. But most importantly: awareness and education. You can't defend against something you don't know about… Our motto is defend with knowledge, and it is the first and most important step. Know your adversary.
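As a rough illustration of what "double-verify" can look like in practice, here is a minimal sketch of a callback-style check for payment requests. Everything in it (the names, the dollar threshold, the callback directory) is a hypothetical example of a policy, not a real system or API.

```python
# Hypothetical sketch of an out-of-band ("double") verification policy for
# payment requests. Names, thresholds, and the callback directory are
# illustrative assumptions, not a real product or API.
from dataclasses import dataclass

@dataclass
class PaymentRequest:
    requester: str   # who asked (e.g., "CFO" on a video call)
    channel: str     # channel the request arrived on ("video", "voice", "email")
    amount: float
    urgent: bool     # did the requester press for urgency?

# Pre-registered callback numbers, maintained independently of the request.
# Never use contact details supplied inside the request itself.
KNOWN_CALLBACK_NUMBERS = {"CFO": "+1-555-0100"}

def requires_out_of_band_check(req: PaymentRequest, threshold: float = 10_000) -> bool:
    """Flag requests that must be confirmed on a second, independent channel."""
    unusual = req.amount >= threshold or req.urgent
    # Video and voice alone are no longer proof of identity.
    return unusual or req.channel in {"video", "voice", "email"}

def approve(req: PaymentRequest, confirmed_by_callback: bool) -> bool:
    """Approve only if the out-of-band confirmation actually happened."""
    if requires_out_of_band_check(req):
        return confirmed_by_callback
    return True

# Example: a $25M "urgent" request made over video is held until someone
# calls the requester back on the number from the directory and confirms it.
req = PaymentRequest(requester="CFO", channel="video", amount=25_000_000, urgent=True)
print(approve(req, confirmed_by_callback=False))  # False -> hold the transfer
```

The point of the sketch is the behavior, not the code: the decision to release funds hinges on a confirmation that travels over a channel the attacker does not control.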
With the barrier to creating a deepfake lowered, and the skill required reduced, your default level of trust has to be lowered as well. Your adversary has become 10x more capable and is armed with better tools. The notion that Deepfake requires a hoodie-wearing hacker is absolutely wrong; anyone can create a deepfake today. As reality becomes more muddied by Deepfake, it's important to approach media with a lower level of trust. Verify the source and accuracy of the information before making a decision. It's a sad truth, but it is the state we are in today.
TL;DR: Verify that the source of information is real and authentic. Stay knowledgeable about emerging Deepfake threats: what they are and what they can do.
NOW is the time to prepare and ensure you have effective controls in place against Deepfake attacks.
If you need help verifying and testing your controls, we can help.