Security · April 25, 2026 · 9 min read

How to Protect Yourself From AI Impersonation Scams


By Sarah Chen

Head of Privacy Research


Artificial intelligence has made impersonation scams terrifyingly effective. In 2026, AI can clone a voice from just a few seconds of audio, generate realistic deepfake videos in real time, and craft personalized phishing messages that are nearly indistinguishable from legitimate communication. One in four Americans has already received an AI-generated deepfake voice call, and losses from AI-powered fraud are projected to exceed $40 billion annually by 2027.

How AI Impersonation Scams Work

AI impersonation scams use synthetic media — cloned voices, deepfake video, and AI-generated text — to trick you into believing you are communicating with someone you trust. The most common forms include:

Voice Cloning Scams

Scammers use AI tools to clone the voice of a family member, friend, or business associate using audio scraped from social media videos, voicemail greetings, or public recordings. They then call you pretending to be that person — often claiming an emergency that requires immediate action, like wiring money or sharing account information.

Voice cloning technology has crossed what researchers call the "indistinguishable threshold" — meaning human listeners can no longer reliably tell the difference between a cloned voice and the real person. Vishing (voice phishing) attacks surged by 442% between 2024 and 2025.

Deepfake Video Scams

Real-time deepfake technology allows scammers to impersonate someone on a video call. In one high-profile case, a deepfake video call impersonating multiple executives at a global engineering firm resulted in a $25.6 million loss. Deepfake video scams surged 700% in 2025 and continue to accelerate.

AI-Generated Phishing

Large language models enable scammers to craft phishing emails and text messages that perfectly mimic the writing style, tone, and vocabulary of real people and organizations. These are far more convincing than the poorly written phishing attempts of the past.

The Scale of the Problem

One in 10 Americans has already been directly targeted by a voice clone scam. The United Nations has issued a global wake-up call about weaponized AI fraud, and the U.S. Congress is considering the AI Fraud Accountability Act of 2026 (Senate bill S.3982) to create federal criminal penalties for digital impersonation fraud.

The Family Safe Word: Your Best Defense

The single most effective defense against AI voice cloning scams is also the simplest: a family safe word. This is a secret code word or phrase that you establish with your family members and close contacts. If someone calls you claiming to be your child, parent, or spouse, you ask them for the safe word. An AI clone cannot provide it, because the word exists only in the memories of the people who agreed on it.

How to set up a family safe word:

  1. Choose something unusual — Pick a word or phrase that would not come up in normal conversation and is not related to public information about your family. Avoid pet names, birthdays, or anything someone could guess from social media.
  2. Share it in person — Tell your family members the safe word face-to-face or through an already-verified secure channel. Never share it over email, text, or social media.
  3. Practice using it — Make sure everyone in the family knows the safe word and understands that they should ask for it if they receive an unexpected call requesting money or sensitive information.
  4. Change it periodically — Update the safe word every few months to maintain security.

What If You Do Not Have a Safe Word Yet?

If you receive a suspicious call and do not have a safe word established, hang up and call the person back at their known number. Do not use a number provided during the suspicious call. If you cannot reach them, contact another family member or friend to verify the story before taking any action.

How to Recognize AI Impersonation Attempts

While AI-generated media is increasingly convincing, there are still warning signs to watch for:

Voice Calls

  • Urgency and pressure: The caller insists you must act immediately — send money, share account details, or click a link right now.
  • Unusual requests: A family member asking you to wire money, buy gift cards, or share passwords is almost always a scam.
  • Emotional manipulation: The caller claims to be in danger, arrested, injured, or kidnapped to override your rational thinking.
  • Audio artifacts: Listen for slight robotic undertones, unnatural pauses, or background noise that does not match the claimed location.
  • Inability to go off-script: Ask unexpected questions that a real person would easily answer but a script-based AI might stumble on.

Video Calls

  • Lighting inconsistencies: Deepfake videos sometimes have unnatural lighting on the face compared to the background.
  • Edge artifacts: Look for blurring or flickering around the edges of the face, especially near the hairline, ears, and jawline.
  • Unnatural blinking: Early deepfakes struggled with realistic blinking patterns. While this has improved, it can still be a tell.
  • Request to keep the camera one-way: If the other party insists that you keep your camera off or refuses to show their face at certain angles, be suspicious.

Text and Email

  • Unusual sender addresses: Check email addresses carefully for subtle misspellings or extra characters.
  • Requests for sensitive information: Legitimate organizations and people you know will not ask for passwords, Social Security numbers, or financial details via email or text.
  • Links and attachments: Do not click links or download attachments from unexpected messages, even if they appear to come from someone you know.

How to Reduce Your Exposure to Voice Cloning

AI voice cloning requires audio samples of your voice. The less audio of you available online, the harder it is for scammers to clone you:

  • Limit public video and audio: Be selective about posting videos with your voice on social media. Consider making your accounts private.
  • Update your voicemail greeting: Keep it short and generic rather than recording a long, detailed greeting that provides ample cloning material.
  • Be cautious with unknown callers: Scammers sometimes call you first to record your voice saying specific words. Do not engage with suspicious callers — hang up.
  • Remove personal data from broker sites: The less information scammers can find about you online, the less convincing their impersonation attempts will be. Your phone number, address, family relationships, and employer details all help scammers craft more believable scenarios.

What to Do If You Have Been Targeted

  1. Do not send money or share information. If you suspect a scam, stop all communication immediately.
  2. Verify independently. Contact the person being impersonated through a known, trusted channel.
  3. Report the scam. File a report with the FTC at reportfraud.ftc.gov and with your local law enforcement.
  4. Alert your bank. If you shared financial information or sent money, contact your bank or credit card company immediately to freeze accounts and dispute transactions.
  5. Warn others. Let friends and family know about the scam so they are not targeted next.
  6. Document everything. Save call logs, messages, and any other evidence of the scam attempt.

Protect Your Data to Prevent Impersonation

AI impersonation scams are most effective when scammers know a lot about you — your family members' names, where you work, your home address, and your daily routine. Much of this information is freely available on data broker sites.

PrivacyOn removes your personal data from over 100 data broker sites, making it significantly harder for scammers to research you, your family, and your social connections. We also include dark web monitoring to alert you if your personal information — including phone numbers, email addresses, and passwords — surfaces in breach databases used by fraudsters.

With AI-powered scams growing more sophisticated every month, reducing your digital footprint is one of the most practical steps you can take to protect yourself and your family. Plans start at $8.33/month with coverage for up to 5 family members.

Sarah Chen

Head of Privacy Research

CIPP/US Certified · IAPP Member · B.S. Computer Science

CIPP/US-certified privacy researcher with over a decade of experience helping consumers remove their personal information from data brokers.

Ready to Protect Your Privacy?

Let PrivacyOn automatically remove your personal information from data broker sites and keep it removed.