Deepfake AI Voice Cloning Attack

What are they?

In a deepfake voice cloning attack, an attacker captures a sample of a target's voice and clones it to trick someone into performing an action. With AI, replicating a person's voice is fairly easy. We'll show you one example of how these attacks are carried out.

Anyone can be targeted and have their voice replicated and manipulated. These are emerging threats; watch the video linked below to learn more about deepfake-based attacks, and see our tips on how to protect yourself.

-Audio File Extracted

The original audio is either downloaded from the internet or captured from a call; any recording can be used, so anyone's voice can be impersonated and cloned. The recording is uploaded into AI voice cloning software, which trains a model on the speaker's voice. That model is then driven by AI text-to-speech software to say whatever the attacker types. Bad actors clone a person's voice and craft audio that tricks victims into performing actions such as opening a malicious attachment or diverting payments.
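To illustrate how low the bar is, here is a minimal sketch using the open-source Coqui TTS package (an assumption; many other tools expose similar interfaces). Given a short reference recording, it synthesizes arbitrary text in that voice. This capability should only ever be exercised with the speaker's consent, for example in authorized awareness testing:

```python
# Minimal sketch: consumer-grade voice cloning with Coqui TTS (XTTS v2).
# Assumptions: `pip install TTS` has been run, and the file paths below
# are hypothetical placeholders. Use only with the speaker's consent,
# e.g., for authorized security-awareness testing.
from TTS.api import TTS

# Load a pretrained multilingual voice-cloning model (downloads on first run).
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# A few seconds of reference audio are enough to mimic a voice.
tts.tts_to_file(
    text="Hi, it's me. I need you to process that payment today.",
    speaker_wav="reference_sample.wav",  # hypothetical captured/downloaded clip
    language="en",
    file_path="cloned_message.wav",
)
```

The point of the sketch is not the specific tool but the effort involved: a few lines of publicly available code and a short audio sample are all an attacker needs.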

-Phishing Attack Launched

The cloned audio is then delivered to the victim: sent by SMS to their phone number, attached to a message, or played during a phone call, tricking them into believing it is a message from someone they know.

Most security tools cannot detect these kinds of attacks; the best defenses are awareness and testing.
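As a toy illustration of why detection is hard, the sketch below (assuming the librosa audio library; the features chosen are illustrative, not a real detector) computes naive spectral statistics. Modern cloning models produce audio whose simple statistics overlap heavily with genuine speech, so heuristics like this cannot reliably separate the two:

```python
# Toy illustration, NOT a working deepfake detector: naive spectral
# statistics are easy to compute but overlap heavily between cloned
# and genuine speech, which is why tooling alone is not enough.
# Assumption: librosa is installed (pip install librosa).
import numpy as np
import librosa

def spectral_summary(path: str) -> dict:
    """Return coarse spectral statistics for an audio file."""
    y, sr = librosa.load(path, sr=16000)
    flatness = librosa.feature.spectral_flatness(y=y)
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)
    return {
        "flatness_mean": float(np.mean(flatness)),
        "centroid_mean_hz": float(np.mean(centroid)),
    }

# Comparing a suspect clip against known-genuine samples of the same
# speaker gives, at best, a weak signal; treat it as a prompt for human
# verification, never as proof either way.
print(spectral_summary("suspect_voicemail.wav"))  # hypothetical file
```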

-To Protect Yourself

Audio cloning is an emerging AI-enabled threat in which a user's voice can be manipulated or cloned.

There are a few clues that can reveal cloned audio or a phishing attempt, and a few actions you can take to protect yourself:

1. Use a passphrase or “Safe Word” at the workplace or at home to verify that the person you are talking to is real.

2. Scrutinize any message that conveys urgency or asks you to perform an abnormal action on behalf of someone else (a toy sketch of automating this check appears after this list).

3. Don't answer calls from numbers you don't recognize; let them go to voicemail. The caller may be recording your voice to use in a later attack.
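Tip 2 can be partially automated. Here is a minimal sketch of the idea, with hypothetical keyword lists, that flags message or voicemail transcripts combining urgency with a request to act. A flag means "verify through another channel," not "this is fake":

```python
# Toy heuristic for tip 2: flag transcripts that pair urgency with a
# request to act. The keyword lists are hypothetical and illustrative;
# a coarse filter like this only prompts human verification.
URGENCY = {"urgent", "immediately", "right now", "asap", "before end of day"}
ACTIONS = {"wire", "payment", "gift card", "password", "attachment", "transfer"}

def flag_transcript(transcript: str) -> bool:
    text = transcript.lower()
    urgent = any(word in text for word in URGENCY)
    action = any(word in text for word in ACTIONS)
    return urgent and action

# Example: a message combining urgency and a payment request gets flagged.
print(flag_transcript("It's urgent, please send the wire transfer right now."))  # True
```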

These threats are emerging alongside broader AI use and adoption, and they are becoming more common.

To see how voice cloning is done, consider watching this video: https://www.youtube.com/watch?v=xgORvK0FpAM