Listen to the podcast here: “Future Focused” podcast


When Christopher Lind was a kid, a simple system his family had in place saved him from being kidnapped after school one day. Without that system, his life would look very different today—if he had been lucky enough to survive at all.

Find out what that simple system was, along with many more valuable insights, in this episode of the “Future Focused” podcast, where host Christopher Lind interviews Jason Thatcher, founder of Breacher AI, and delves into the complexities of deepfake technology and its implications for security and safety in the world today.

They discuss the growing threats posed by deepfakes, the dual nature of AI, and the measures needed to protect against these risks without losing one’s sense of trust.

Meet the Host

Christopher Lind is a globally recognized, digital-first HR leader living at the intersection of business, technology, and the human experience. He is the author of Relentless Intention and serves as the Chief Learning Officer at ChenMed. 

Beyond his professional achievements, Christopher is a devoted husband and father of seven children, all under the age of 12.

On Christopher’s channel, you’ll find the latest in developing skills and technology, navigating career and corporate culture, and finding harmony between personal and professional life.

Understanding Deepfakes and Their Implications

In the podcast, Jason emphasized that deepfakes represent a significant threat. These AI-generated manipulations of audio, video, and images can deceive even the most discerning individuals, making it difficult to distinguish between reality and fabricated content.

“We are entering the age of distrust where it’s going to be harder and harder to know what is real anymore,” said Jason Thatcher.

Critical Threats Posed by Deepfakes

  • Cybersecurity Risks: Deepfakes facilitate social engineering attacks, identity theft, and financial fraud. In one recent case, a company lost $25 million to a deepfake audio call.

“If someone had taken the 30 seconds to call the help desk and say, ‘Hey, I’m going to test to see if this is a vulnerability,’ you could have saved $100 million,” Thatcher noted regarding the MGM attack.

  • Misinformation: With elections approaching, the potential for deepfakes to spread misinformation and sway public opinion is a growing concern.
  • Personal Exploitation: From bullying in schools to extortion, deepfakes can be used to target individuals on a personal level.

The Duality of AI

While AI and deepfake technology can be used for malicious purposes, Jason also pointed out the positive applications. For instance, individuals who have lost their ability to speak can use voice cloning technology to communicate again. However, the balance between beneficial and harmful uses of AI is delicate.


Trust in the Age of Deepfakes

The conversation also touched on the importance of maintaining trust in the age of deepfakes. The goal, they agreed, is not to abandon trust entirely but to implement systems and processes that let individuals and organizations verify authenticity and continue to trust to some degree.

“You have to be able to take a step back and say, do I really trust what I’m being told or what I’m seeing? It’s about verifying that trust and not just taking things at face value,” Jason emphasized.

Steps to Mitigate Deepfake Threats

  1. Awareness and Education: Jason stressed the importance of educating employees and the public about the existence and risks of deepfakes. Understanding that these threats are real is the first step in combating them.

“It starts with one: you have to test your susceptibility to it. It’s understanding your risk and your vulnerability around deepfakes.”

  2. Testing Vulnerabilities: Conducting regular assessments to identify potential vulnerabilities within an organization’s security infrastructure is crucial. Deepfake penetration testing and attack simulations are especially valuable here.
  3. Verification Processes: Simple measures, such as playing background music in videos to disrupt AI voice cloning or establishing passphrases for verification, can help mitigate risks.
  4. AI Detection Tools: It is essential to develop and deploy AI-based detection tools that can identify deepfake content. These tools can analyze subtle irregularities that human senses miss.

“Using an AI model, you can reasonably determine what’s deepfake versus not. But it’s going to be this constant battle, right? As it gets better, you’ve got to improve the model too,” Jason said.
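The passphrase verification mentioned in the steps above can be made concrete. The sketch below is a minimal, hypothetical illustration (not a Breacher AI tool, and the names are invented for this example): two parties agree on a passphrase out of band, only a hash of it is stored, and a caller is accepted only after repeating it.

```python
# Hypothetical sketch of a shared-passphrase check for verifying a caller.
# The function names and flow are illustrative assumptions, not a real API.
import hashlib
import hmac

def store_passphrase(passphrase: str) -> bytes:
    """Keep only a hash of the agreed passphrase, never the plain text."""
    return hashlib.sha256(passphrase.strip().lower().encode()).digest()

def verify_caller(stored_hash: bytes, spoken_passphrase: str) -> bool:
    """Hash what the caller says and compare in constant time."""
    candidate = hashlib.sha256(spoken_passphrase.strip().lower().encode()).digest()
    return hmac.compare_digest(stored_hash, candidate)

# Example: a family or help desk agrees on "blue heron" ahead of time.
record = store_passphrase("Blue Heron")
print(verify_caller(record, "blue heron"))   # matches despite casing
print(verify_caller(record, "grey heron"))   # an impostor's guess fails
```

Storing only a hash means a leaked record doesn’t reveal the passphrase, and the constant-time comparison avoids leaking information through timing. The same idea scales from a family code word to a help-desk verification step.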

Looking Forward

Jason remains optimistic about the future of AI despite the challenges. He believes that with the proper measures in place, the benefits of AI can outweigh the risks. However, he also stresses the need for ongoing vigilance and adaptation to stay ahead in this technology arms race.

As deepfake technology continues to evolve, it’s essential for individuals and organizations to stay informed and implement robust security measures. By doing so, they can protect themselves and others from the potential threats.

Learn more about deepfake technology and how to protect yourself in the full episode here:

Securing Reality in the Age of Deepfakes: Protecting Integrity in an AI-Driven World with Jason Thatcher


About the Author: Emma Francey

Emma specializes in content marketing and SEO, with a knack for distilling complex information into easy reading. Here at Breacher AI, we’re working to bring as much exposure as we can to this important issue. We’d love for you to share our content to help others prepare.