Security · May 14, 2026 · 10 min read

How to Protect Yourself From AI-Powered Identity Fraud


By Sarah Chen

Head of Privacy Research


Artificial intelligence has supercharged identity fraud. In 2025, generative AI-enabled fraud surged by 1,210%, and losses from AI-facilitated fraud in the United States are projected to reach $40 billion by 2027. From deepfake video calls to synthetic identities assembled from stolen data fragments, AI is giving criminals tools that are faster, cheaper, and more convincing than ever before.

How AI Is Transforming Identity Fraud

Identity fraud isn't new, but AI has fundamentally changed the economics and sophistication of attacks. Here's what's different in 2026:

Deepfake Impersonation

AI-generated deepfakes can now convincingly replicate a person's face and voice in real time. In one high-profile case, engineering firm Arup lost $25.6 million after an employee was deceived by a deepfake video call that impersonated senior executives. In 2025, 30% of high-impact corporate impersonation incidents involved deepfakes.

These attacks aren't limited to corporations. Criminals use deepfakes to impersonate family members in emergency scams, bypass video-based identity verification (KYC) systems, and create fake social media profiles to build trust before launching scams.

Synthetic Identity Fraud

Synthetic identity fraud combines real data fragments — a stolen Social Security number, a fabricated name, a legitimate address — to create entirely fictional identities that pass standard verification checks. These synthetic identities can open credit cards, take out loans, and even build credit histories over months or years before "busting out" with maximum fraud.

The barrier to entry is shockingly low: synthetic identity kits sell for approximately $5 on dark web marketplaces, and dark LLM subscriptions (AI tools designed for criminal use) cost between $30 and $200 per month.

Your Data Fuels These Attacks

AI-powered identity fraud depends on real personal data as raw material. Data brokers, people-search sites, and leaked databases provide the names, addresses, Social Security numbers, and other details that criminals need to create convincing deepfakes and synthetic identities. Removing your data from these sources directly reduces your attack surface.

AI-Enhanced Phishing

AI-generated phishing emails now achieve click-through rates more than four times higher than human-crafted versions. Large language models can generate perfectly written, contextually personalized messages at scale — gone are the typos and awkward phrasing that once helped people spot phishing attempts.

AI also enables "spear phishing at scale," where attackers scrape your social media, public records, and data broker profiles to craft highly personalized messages that reference real details from your life.

The 2026 Threat Landscape by the Numbers

  • 73% of organizations were directly affected by cyber-enabled fraud in 2025
  • 72% of business leaders identified AI-enabled fraud and deepfakes as top operational challenges for 2026
  • 78% of security professionals expect AI-powered fraud to increase further throughout 2026
  • 60% of US companies reported increased fraud losses between 2024 and 2025
  • Only 26% of companies have tested a formal fraud response plan covering AI-powered attacks

How to Protect Yourself

While no defense is foolproof against AI-enhanced threats, a layered approach significantly reduces your risk:

1. Minimize Your Data Exposure

The less personal data available about you online, the harder it is for criminals to craft convincing deepfakes, synthetic identities, or personalized phishing attacks.

  • Remove yourself from data brokers — Opt out of people-search sites like Spokeo, BeenVerified, and Whitepages. A service like PrivacyOn automates this across 100+ brokers.
  • Audit your social media — Limit what you share publicly. Criminals mine social profiles for voice samples (for AI cloning), photos (for deepfakes), and personal details (for social engineering).
  • Search for yourself regularly — Google your name, phone number, and address to see what's publicly available. Remove what you can.

2. Establish Verification Protocols

AI can replicate voices and faces, so you need out-of-band verification for anything involving money or sensitive information.

  • Create family code words — Establish a pre-shared passphrase with family members that can be used to verify identity during unexpected calls or video chats
  • Use callback verification — If someone calls claiming to be from your bank, employer, or a government agency, hang up and call back using a number you find independently
  • Require dual approval — For significant financial transactions, require two people to approve the transfer through separate communication channels
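The dual-approval rule above can be expressed as a simple check. This is a toy sketch in Python, not any real banking or approval API — the function and field names are illustrative:

```python
# Toy model of dual approval: a transfer proceeds only when at least two
# different people sign off, reached over at least two different channels.
def approve_transfer(approvals: list[dict]) -> bool:
    people = {a["person"] for a in approvals}
    channels = {a["channel"] for a in approvals}
    return len(people) >= 2 and len(channels) >= 2

# Two approvers on two separate channels: passes.
ok = approve_transfer([
    {"person": "alice", "channel": "phone"},
    {"person": "bob", "channel": "email"},
])

# One person "confirming" over two channels fails — a cloned voice
# plus a spoofed email could supply both, so one identity is never enough.
spoofed = approve_transfer([
    {"person": "alice", "channel": "phone"},
    {"person": "alice", "channel": "email"},
])
```

The point of the separate-channel requirement is that a deepfake compromises one channel at a time; forcing the fraudster to compromise two independent channels and two people raises the cost of the attack sharply.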

The Code Word Defense

A simple family code word remains one of the most effective defenses against AI voice cloning and deepfake video scams. Choose something memorable but not guessable from public information — not a pet's name, birthday, or anything posted on social media. Share it only in person.
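If you want a code phrase that is genuinely unguessable, generate it randomly rather than inventing one. A rough illustration using Python's standard `secrets` module (a cryptographically secure random source) — the short wordlist here is a placeholder; a real diceware-style list has thousands of entries:

```python
import secrets

# Placeholder wordlist for illustration; use a full diceware-style
# list (thousands of words) for real entropy.
WORDS = [
    "copper", "lantern", "orbit", "meadow", "pixel", "harbor",
    "velvet", "quartz", "ember", "willow", "falcon", "cobalt",
    "maple", "anchor", "summit", "prism",
]

def make_code_phrase(n_words: int = 3) -> str:
    """Pick words with a secure RNG so the phrase can't be derived
    from birthdays, pet names, or anything posted publicly."""
    return " ".join(secrets.choice(WORDS) for _ in range(n_words))
```

A phrase like `"willow cobalt ember"` is easy for a family to remember but has no connection to your public footprint — which is exactly what defeats an attacker working from scraped data.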

3. Strengthen Your Account Security

  • Enable hardware security keys — Physical FIDO2/WebAuthn keys (like YubiKey) are resistant to phishing attacks, even AI-enhanced ones
  • Use unique, strong passwords — A password manager generates and stores unique passwords for every account, eliminating credential reuse
  • Freeze your credit — Place freezes at Equifax, Experian, and TransUnion to prevent synthetic identity fraud using your SSN
  • Enable transaction alerts — Set up real-time notifications for all financial accounts so you spot unauthorized activity immediately

4. Monitor for Identity Misuse

  • Check your credit reports — Review all three bureau reports at AnnualCreditReport.com for accounts you don't recognize
  • Monitor the dark web — Services that scan dark web marketplaces and forums can alert you when your data appears for sale
  • Watch for synthetic identity signals — Unexpected credit inquiries, unfamiliar accounts, or mail from financial institutions you've never contacted may indicate synthetic identity fraud using your SSN

5. Verify Before You Trust

  • Be skeptical of urgency — AI-powered scams often create artificial time pressure ("wire the money now or lose the deal"). Legitimate requests can wait for verification.
  • Check for deepfake artifacts — While rapidly improving, deepfakes may still show inconsistencies: unnatural eye blinking, audio-lip sync issues, strange lighting around the face edges, or odd hand movements
  • Verify unexpected requests through a second channel — If you get an email from your boss requesting a wire transfer, confirm via phone or in person before acting

What to Do If You're a Victim

If you suspect AI-powered identity fraud:

  1. File a report at IdentityTheft.gov — The FTC's recovery site creates a personalized action plan
  2. Place fraud alerts — Contact one of the three credit bureaus to place a fraud alert (it will be shared with the others)
  3. File a police report — This creates documentation you'll need to dispute fraudulent accounts
  4. Contact affected institutions — Alert your bank, credit card companies, and any other affected accounts immediately
  5. Document the AI element — If the fraud involved a deepfake or AI-generated content, save any evidence. This information helps law enforcement track emerging tactics.

How PrivacyOn Helps

PrivacyOn provides a comprehensive defense against AI-powered identity fraud by removing your personal data from 100+ data broker sites — cutting off the raw material criminals need for synthetic identities and personalized attacks. Combined with dark web monitoring that alerts you when your data appears in underground marketplaces, and 24/7 continuous surveillance for new exposures, PrivacyOn reduces your vulnerability to the AI-driven threats that define 2026's fraud landscape.

Sarah Chen

Head of Privacy Research

CIPP/US Certified · IAPP Member · B.S. Computer Science

CIPP/US-certified privacy researcher with over a decade of experience helping consumers remove their personal information from data brokers.

Ready to Protect Your Privacy?

Let PrivacyOn automatically remove your personal information from data broker sites and keep it removed.