In 2024, the FBI's Internet Crime Complaint Center recorded $16.6 billion in cybercrime losses — a 33% increase from the previous year. A growing share of that damage is driven by deepfake technology: AI-generated video, audio, and text so convincing that even trained professionals are falling for it. One British engineering firm, Arup, lost $25.6 million from a single deepfake video call that impersonated the company's CFO. These are no longer futuristic threats. They are happening now, and traditional scam-spotting advice is no longer enough.
What Are Deepfake Scams?
Deepfake scams use artificial intelligence to create synthetic but realistic-looking video, audio, or text that impersonates real people. The technology has advanced to the point where a short audio clip or a few social media photos can be enough to generate a convincing fake. Scammers are using these capabilities in several distinct attack types.
Executive Impersonation
This is the corporate version of deepfake fraud, and it is already causing massive financial damage. Attackers create AI-generated video or audio of company executives — typically the CEO or CFO — and use it to instruct employees to transfer funds, share sensitive data, or authorize fraudulent transactions. The Arup attack is the most publicized example, but security researchers believe most corporate deepfake incidents go unreported.
AI-Driven Phishing
By some estimates, AI-generated phishing emails achieve click-through rates up to four times higher than human-crafted phishing messages. Unlike traditional phishing, which often contains grammatical errors and generic language, AI phishing produces polished, context-aware messages that mimic the writing style of real individuals or organizations. These emails are personalized using data scraped from social media, data broker sites, and previous breaches.
Deepfake Job Interview Fraud
Scammers are using deepfake video and voice technology to impersonate job candidates during remote interviews. The goal is to get hired at a company — particularly in IT or finance — and then exploit internal access to steal data, install malware, or redirect payments. This attack vector has grown rapidly with the normalization of remote work.
AI Romance Scams
Romance scams have been around for decades, but AI has supercharged them. Scammers now use deepfake video calls to "prove" they are who they claim to be, and AI chatbots maintain ongoing text conversations that feel emotionally authentic. Experian's 2026 forecast identifies emotionally intelligent AI bots as one of the top emerging threats, specifically because they can sustain long-term relationships with victims before requesting money.
The Old Rules No Longer Apply
Traditional advice like "look for spelling errors" or "check if the sender's name matches" is increasingly useless against AI-powered attacks. AI-generated phishing emails are grammatically flawless, contextually relevant, and personalized. Deepfake video calls can replicate a person's face and voice in real time. You need new detection strategies — and a healthy dose of skepticism — to stay safe in 2026.
How to Spot a Deepfake
Deepfake technology is improving rapidly, but current AI-generated content still has detectable flaws. Train yourself to look for these signs during video calls, voice calls, and video messages:
- Unnatural facial movements: Watch for faces that seem slightly "off" — awkward lip syncing, stiff expressions, or movements that don't match the audio. The area around the jawline and hairline often shows artifacts.
- Lack of blinking: Many deepfake models still struggle with natural blink patterns. If someone on a video call barely blinks, that is a red flag (a simple programmatic blink-rate check is sketched just after this list).
- Flat or inconsistent tone: AI-generated voices may sound slightly monotone, overly smooth, or have unnatural pauses. Listen for a voice that lacks the micro-variations present in natural speech.
- Slurred or distorted speech: When deepfake audio encounters certain sounds or words, it can produce subtle distortions — words that blend together or syllables that sound slightly warped.
- Irregular breathing: Real people breathe. Deepfake audio often lacks natural breathing patterns between sentences.
- Visual glitches: Look for flickering around the edges of the face, inconsistent lighting on different parts of the image, or backgrounds that warp or shift unnaturally.
- Mismatched context: If someone claims to be calling from their office but the background doesn't match, or if their clothing or setting seems inconsistent with what you would expect, dig deeper.
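For the technically inclined, the blink-rate cue above can even be checked programmatically. The sketch below is a rough heuristic rather than a real deepfake detector: it uses OpenCV and MediaPipe's FaceMesh to estimate blinks per minute in a recorded clip via the eye aspect ratio (EAR). The landmark indices are commonly used approximations, and the thresholds and file name are illustrative assumptions.

```python
# Heuristic blink-rate check for a recorded video clip.
# Requirements: pip install opencv-python mediapipe
# This is a weak signal, not a deepfake detector.
import cv2
import mediapipe as mp

# Commonly used MediaPipe FaceMesh indices for the eye-aspect-ratio (EAR).
LEFT_EYE = [33, 160, 158, 133, 153, 144]
RIGHT_EYE = [362, 385, 387, 263, 373, 380]
EAR_THRESHOLD = 0.21    # below this, treat the eye as closed (assumption)
MIN_CLOSED_FRAMES = 2   # consecutive closed frames that count as one blink

def ear(lm, idx, w, h):
    """Eye aspect ratio: (two vertical gaps) / (2 * horizontal gap)."""
    p = [(lm[i].x * w, lm[i].y * h) for i in idx]
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    return (dist(p[1], p[5]) + dist(p[2], p[4])) / (2.0 * dist(p[0], p[3]))

def blinks_per_minute(path):
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    blinks = closed_run = frames = 0
    with mp.solutions.face_mesh.FaceMesh(max_num_faces=1) as mesh:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            frames += 1
            res = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if not res.multi_face_landmarks:
                continue
            lm = res.multi_face_landmarks[0].landmark
            h, w = frame.shape[:2]
            avg = (ear(lm, LEFT_EYE, w, h) + ear(lm, RIGHT_EYE, w, h)) / 2
            if avg < EAR_THRESHOLD:
                closed_run += 1
            else:
                if closed_run >= MIN_CLOSED_FRAMES:
                    blinks += 1  # eye reopened after a closed stretch
                closed_run = 0
    cap.release()
    minutes = frames / fps / 60.0
    return blinks / minutes if minutes else 0.0

rate = blinks_per_minute("call_recording.mp4")  # hypothetical file name
print(f"Blinks per minute: {rate:.1f}")
if rate < 5:  # humans typically blink roughly 15-20 times per minute
    print("Unusually low blink rate -- treat this video with suspicion.")
```

A low blink count is only one weak signal, and newer models increasingly blink convincingly; combine it with the other cues above and, most importantly, with out-of-band verification.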
Six Rules for Protecting Yourself
1. Slow down before acting
Deepfake scams exploit urgency. Whether it is a boss demanding an immediate wire transfer or a loved one claiming to be in an emergency, the pressure to act fast is intentional. Take a breath. Verify before you respond.
2. Verify through a secondary trusted channel
If you receive a suspicious video call, voice message, or email from someone you know, hang up and contact them directly using a phone number or communication method you already have on file — not one provided in the suspicious message. This single habit can prevent the vast majority of deepfake scams.
3. Never send money based on a video or voice call alone
No matter how real the person on the other end looks or sounds, never authorize financial transactions, share account details, or send money based solely on a video or voice interaction. Establish verification protocols — such as a shared code word with family members or a multi-person approval process at work.
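To make the multi-person idea concrete, here is a toy Python sketch of a dual-approval rule. The names, threshold, and data model are illustrative assumptions; in practice this control lives in your bank's or payment platform's workflow, not in a script.

```python
# Toy sketch of dual control for outgoing payments (illustrative only).
from dataclasses import dataclass, field

REQUIRED_APPROVALS = 2  # assumption: two independent sign-offs per wire

@dataclass
class WireRequest:
    amount: float
    destination: str
    approvers: set = field(default_factory=set)

    def approve(self, employee_id: str) -> None:
        # Each approver must confirm through an independent channel --
        # a video or voice request alone never counts as an approval.
        self.approvers.add(employee_id)

    def can_release(self) -> bool:
        return len(self.approvers) >= REQUIRED_APPROVALS

req = WireRequest(amount=250_000.00, destination="ACME Supplier Ltd")  # hypothetical
req.approve("alice")
print(req.can_release())  # False: still needs a second, independent approver
req.approve("bob")
print(req.can_release())  # True
```

The point of the pattern is that no single person, however convincingly impersonated, can move money alone.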
4. Use AI detection tools
A growing number of tools can analyze audio and video for synthetic content markers. While not foolproof, these tools add another layer of defense. Some email security platforms and browser extensions now include built-in deepfake detection capabilities.
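As a concrete illustration, many commercial detection services expose a simple upload-and-score API. The snippet below is a hypothetical sketch: the endpoint URL, header, field name, and response shape are all placeholders, so substitute the values from your vendor's documentation.

```python
# Hypothetical example of scoring a suspicious voice message with a
# deepfake-detection API. Endpoint, auth, and response format are placeholders.
import requests

API_URL = "https://api.example-detector.com/v1/analyze"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"                                 # placeholder credential

def synthetic_score(path: str) -> float:
    with open(path, "rb") as f:
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"media": f},
            timeout=60,
        )
    resp.raise_for_status()
    # Assumed response shape: {"synthetic_probability": 0.0-1.0}
    return resp.json().get("synthetic_probability", 0.0)

score = synthetic_score("voicemail.wav")  # hypothetical file
print(f"Synthetic-content score: {score:.2f}")
if score > 0.7:
    print("Likely AI-generated -- verify through a trusted channel before acting.")
```

Treat any score as advisory: detection models lag the latest generators, so a "clean" result should never override the verification habits above.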
5. Limit your public digital footprint
Deepfake scammers need source material — photos, video, and audio of their targets — to create convincing fakes. They also need personal details to make their approach feel credible. The more information about you that exists online, the easier it is for scammers to target you with a personalized deepfake attack.
6. Educate your workplace and family
Make sure your colleagues, family members — especially elderly relatives — and friends understand that video and audio can be faked. Establish verification protocols with the people in your life so that everyone knows to double-check before acting on unusual requests.
Create a Family Verification Code
Agree on a secret word or phrase with close family members that you would use to verify identity during an unexpected call or video chat. If someone claiming to be your child or spouse cannot provide the code word, do not send money or share sensitive information — no matter how real they look or sound on the screen.
What to Do If You Are Targeted
If you suspect you have been the target of a deepfake scam — or have already fallen victim to one — take these steps immediately:
- Stop all communication with the suspected scammer and do not send any additional money or information.
- Document everything: Save screenshots, call logs, emails, and any recordings of the interaction.
- Contact your bank immediately if money was transferred. Request a hold or reversal on the transaction.
- Report to the FBI's IC3 at ic3.gov — deepfake fraud is a federal concern and the IC3 tracks these cases.
- File a report with the FTC at reportfraud.ftc.gov.
- Alert your employer if the scam targeted you through a work context.
- Monitor your accounts closely for unauthorized activity in the weeks following the incident.
The Data Broker Connection
Deepfake scams do not happen in a vacuum. To create a convincing impersonation, scammers need more than just AI tools — they need personal information about their targets. Your name, employer, family members, phone number, email address, home address, and social connections all make a deepfake approach more believable. Where do scammers find this information? Overwhelmingly, from data broker and people-search sites that compile and sell your personal details to anyone willing to pay.
Consumers reported losing $12.5 billion to fraud in 2024, according to the FTC, and a significant portion of that fraud begins with personal data harvested from data brokers. The less information about you that is publicly available, the harder it is for scammers to build a convincing deepfake scenario targeting you specifically.
How PrivacyOn Helps
PrivacyOn removes your personal data from 100+ data broker and people-search sites, cutting off the supply of information that scammers use to research and target their victims. By continuously monitoring for and removing your data from these sites, PrivacyOn reduces the raw material available for deepfake impersonation, AI-driven phishing, and other personalized attacks. In an era where AI makes scams more convincing than ever, reducing your data exposure is one of the most effective defenses you have.