Blog Post | Feb 24, 2022

What Deepfakes Mean for Cybersecurity

With deepfake technology, bad actors can impersonate others and gain access to sensitive data. Learn more about this threat to cybersecurity and how to prevent it.

Business email compromise (BEC) and other spear phishing attacks have long been favorite tactics of bad actors looking to steal cash from unsuspecting victims.

The idea is simple: impersonate a person in a position of power via email and convince employees to send money or sensitive information. These days, employees may consider themselves experts at sniffing out untrustworthy communications. Unfortunately, bad actors know this, and they’ve added a new component to their schemes: artificial intelligence (AI). We’ve entered the era of deepfake attacks, one that could have far-reaching security implications.

What Is a “Deepfake”?

Deepfake technology allows users to impersonate others with startling accuracy. Deepfakes are AI-generated media (images, video, or audio) fabricated to show a real person doing or saying something they never did. While the most prominent examples focus on celebrities and politicians, just about anyone can use the technology to create fake media of anyone else. All the creator needs are images, videos, or audio recordings of the target.

How They Work

Bad actors have access to a branch of machine learning called deep learning, which trains neural networks to produce convincing fakes by learning from existing images, video, and audio of the target.
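To make that concrete, classic face-swap tools rest on a simple architecture: one shared encoder paired with a separate decoder per identity. Each decoder learns to reconstruct its own person from the common latent code, and swapping decoders at inference produces the fake. The PyTorch sketch below is an illustrative toy under those assumptions, with random tensors standing in for aligned face crops; real pipelines add face detection and alignment, much larger networks, and adversarial losses.

```python
# Toy sketch of the shared-encoder, two-decoder deepfake autoencoder.
# Illustrative only; not a production face-swap pipeline.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, 256),  # shared latent "face code"
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder, decoder_a, decoder_b = Encoder(), Decoder(), Decoder()
params = (list(encoder.parameters()) + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
opt = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.MSELoss()

# Random stand-ins for aligned 64x64 face crops of persons A and B.
faces_a, faces_b = torch.rand(8, 3, 64, 64), torch.rand(8, 3, 64, 64)

for step in range(100):
    # Each decoder learns to rebuild *its own* person from the shared code.
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    opt.zero_grad()
    loss.backward()
    opt.step()

# The swap: encode A's face, decode with B's decoder to get B's face
# wearing A's pose and expression.
fake = decoder_b(encoder(faces_a[:1]))
```

Commodity deepfake apps automate exactly this kind of training behind a point-and-click interface, which is why the barrier to entry is so low.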

Most people don’t realize how far deepfake technology has come in recent years—or how easy it is to use. It’s not a technology reserved for computer whizzes and cybercriminal masterminds. Anyone can seek out deepfake software and services on the internet and have a relatively convincing representation of another person within minutes.

The widespread availability of AI deepfake technology invites two questions:

  1. Can we trust anything we see or hear?
  2. What do deepfakes mean for identity verification?

Is It Easy to Spot a Deepfake?

The short answer is no. Not anymore. Deepfake videos can be hard to identify, especially when the impersonated individual appears to be acting in a reasonable manner. Detection is even more challenging when the attack shifts to a medium we are more comfortable with.

Many people view text-based internet communication with skepticism, but what about a phone call from a manager, client, or CEO? A bank manager in the United Arab Emirates fell victim to a voice-deepfake phishing attack executed this way in 2020. The scam, which resulted in the manager transferring $35 million, was at least the second time a deepfake enabled a successful phishing scheme. In the first instance, in 2019, malicious actors impersonated a company’s CEO over the phone to get an executive to transfer €220,000.

How Deepfakes Pose a Threat to Security

The growing sophistication of deepfakes and the availability of the technology needed to make them may have serious implications for security procedures. As passwords are used less and less, biometrics have risen as a trusted form of identity validation. It makes sense. Until very recently, most people would never have imagined it possible to create such realistic representations of another person. However, deepfake technology allows physical attributes—like irises, voices, and faces—to be replicated with relative ease.

Preventing Cyberattacks & Threats

It’s essential that security teams and individuals alike keep in mind that deepfakes may be used against them; don’t let that possibility be an afterthought. Anyone who could be considered a high-value target should choose biometric authentication methods with care, and with the understanding that, as deepfakes become more sophisticated, some biometric authentication methods may be rendered useless.
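One practical takeaway is to treat a biometric match as a single signal rather than a verdict. The sketch below illustrates that idea for a high-risk approval flow; every helper name and threshold in it (voice_match_score, liveness_check, out_of_band_confirm) is a hypothetical placeholder, not a real vendor API.

```python
# Illustrative sketch only: a biometric match is combined with a liveness
# check and an out-of-band confirmation before a high-risk action is
# approved. All helpers below are hypothetical stubs, not a real API.
from dataclasses import dataclass

@dataclass
class VerificationResult:
    approved: bool
    reason: str

def voice_match_score(sample: bytes, enrolled_profile: bytes) -> float:
    # Stub: a real speaker-verification model would return a 0.0-1.0 score.
    return 0.97

def liveness_check(sample: bytes) -> bool:
    # Stub: a real check would look for signs of replayed or synthetic audio.
    return True

def out_of_band_confirm(user_id: str) -> bool:
    # Stub: a real flow would push a challenge to a second, pre-registered
    # channel (e.g., an authenticator app) the caller does not control.
    return True

def approve_high_risk_action(user_id: str, sample: bytes,
                             enrolled: bytes) -> VerificationResult:
    # 1. A strong biometric match is necessary but never sufficient.
    if voice_match_score(sample, enrolled) < 0.95:
        return VerificationResult(False, "biometric score below threshold")
    # 2. Reject media that fails liveness analysis outright.
    if not liveness_check(sample):
        return VerificationResult(False, "liveness check failed")
    # 3. Require confirmation on an independent channel.
    if not out_of_band_confirm(user_id):
        return VerificationResult(False, "out-of-band confirmation denied")
    return VerificationResult(True, "all factors passed")

print(approve_high_risk_action("mgr-001", b"voice-sample", b"profile"))
```

The design point is the ordering: even a perfect biometric score never approves the action on its own, so a cloned voice or face alone cannot complete the transaction.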

See how you can get ahead of these potential cybersecurity threats with Sectigo’s Digital Identity Management for Zero Trust.

To learn more about how deepfakes may affect security, listen to Root Causes episode 198, “Deep Voice Fakes.”