The creation and distribution of non-consensual deepfake pornography is now a federal crime in the United States. The TAKE IT DOWN Act, signed into law in May 2025, gives victims powerful tools to demand removal of AI-generated intimate imagery. Here's what you need to know to protect yourself and take action if you're targeted.
The Scale of the Problem
AI-generated non-consensual intimate imagery has exploded in recent years. Anyone with a clear photo of your face — from social media, a professional headshot, or even a casual snapshot — can potentially be targeted. The technology requires no technical expertise, and the resulting images are increasingly realistic.
Victims include everyday people, students, professionals, and public figures. The impacts are devastating: psychological trauma, reputational damage, career consequences, and in extreme cases, self-harm. Understanding your rights and the tools available to fight back is essential.
Federal Protection Is Now Law
As of May 2025, the TAKE IT DOWN Act makes it a federal crime to knowingly publish non-consensual intimate imagery — whether real or AI-generated. Violators face up to three years in prison and substantial fines. Platforms must remove reported content within 48 hours.
Your Legal Protections in 2026
The TAKE IT DOWN Act (Federal)
The Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act provides:
- Criminal penalties: Knowingly publishing non-consensual intimate imagery (real or AI-generated) carries up to 3 years imprisonment and fines
- Platform removal mandate: Covered platforms must remove reported content within 48 hours of receiving a valid report
- Reporting mechanism: Platforms must establish clear notice-and-removal processes (compliance deadline: May 19, 2026)
- Broad coverage: Applies to both real intimate images shared without consent and AI-generated deepfakes
The DEFIANCE Act (Federal Civil Remedy)
The Disrupt Explicit Forged Images and Non-Consensual Edits Act, passed by the Senate in January 2026, provides:
- Civil damages: Victims can sue creators of deepfake intimate imagery for monetary damages
- Pseudonymous litigation: Victims can file lawsuits without publicly revealing their identity
- Accountability: Holds perpetrators financially responsible for the harm caused
State Laws
The majority of states have enacted laws addressing at least one category of AI-generated non-consensual imagery. States with particularly strong protections include Florida, Illinois, Washington, Oregon, New Jersey, Michigan, Pennsylvania, and Arizona. Check your state's specific statutes for additional protections beyond federal law.
What to Do If You're a Victim
Step 1: Document the Content
Before requesting removal, document the evidence:
- Take screenshots with visible URLs and timestamps
- Save the page URL (web address) where the content appears
- Note the platform name and any user profiles associated with the content
- Record when you first discovered the content
- Save any messages from the perpetrator if known
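The documentation checklist above can be kept in a simple local log. The sketch below is a minimal example in Python (standard library only); the field names and the `evidence_log.json` filename are illustrative, not any legal standard. It appends each sighting with a UTC timestamp and a SHA-256 of the URL so the record is easy to hand to counsel or law enforcement:

```python
import hashlib
import json
from datetime import datetime, timezone

def log_evidence(url, platform, profile=None, note="", log_path="evidence_log.json"):
    """Append one documented sighting to a local JSON evidence log.

    Field names are illustrative; use whatever format your attorney
    or local police prefer. The SHA-256 of the URL string gives each
    entry a stable fingerprint.
    """
    entry = {
        "url": url,
        "platform": platform,
        "associated_profile": profile,
        "note": note,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "url_sha256": hashlib.sha256(url.encode("utf-8")).hexdigest(),
    }
    try:
        with open(log_path) as f:
            log = json.load(f)
    except FileNotFoundError:
        log = []
    log.append(entry)
    with open(log_path, "w") as f:
        json.dump(log, f, indent=2)
    return entry
```

A plain text file or dated folder of screenshots works just as well; the point is a contemporaneous, timestamped record you never edit after the fact.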
Do Not Share the Content Further
While documenting is important, do not forward or share the images with anyone else, even to ask for advice. Minimize the number of people who see the content; share evidence only with trusted legal counsel and law enforcement.
Step 2: Report to the Platform
Under the TAKE IT DOWN Act, platforms must remove reported content within 48 hours. Use these reporting channels:
- Meta (Facebook/Instagram): Dedicated reporting form for non-consensual intimate images
- Google: Report content for removal from search results via their removal request tool
- X (Twitter): Report non-consensual nudity through their safety reporting
- Reddit: Report under their involuntary pornography policy
- Pornhub and other Aylo (formerly MindGeek) sites: Content removal request form
Include in your report: the specific URLs, a statement that the imagery is non-consensual, and that you are requesting removal under the TAKE IT DOWN Act.
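Where a platform's reporting channel accepts free text, the elements listed above can be assembled into a consistent notice. The template below is an illustrative sketch, not legal advice, and the function name and wording are assumptions; a platform's own reporting form always takes precedence:

```python
from datetime import date

def takedown_notice(urls, platform, contact_email):
    """Draft a removal request citing the TAKE IT DOWN Act's
    48-hour removal requirement. Wording is an illustrative
    template only, not legal advice.
    """
    url_lines = "\n".join(f"  - {u}" for u in urls)
    return (
        f"To: {platform} Trust & Safety\n"
        f"Date: {date.today().isoformat()}\n\n"
        "I am reporting non-consensual intimate imagery depicting me.\n"
        "I did not consent to its creation or publication, and I request\n"
        "its removal under the TAKE IT DOWN Act, which requires covered\n"
        "platforms to remove reported content within 48 hours.\n\n"
        f"Content URLs:\n{url_lines}\n\n"
        f"Contact: {contact_email}\n"
    )
```

Keep a copy of every notice you send, and note the date: the 48-hour clock starts when the platform receives a valid report.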
Step 3: Contact Law Enforcement
File a police report with your local law enforcement agency. Additionally:
- Report to the FBI's Internet Crime Complaint Center (IC3) at ic3.gov
- Contact the National Center for Missing & Exploited Children (NCMEC) if the victim is a minor
- Consider reaching out to your state attorney general's office
Step 4: Seek Support
The Cyber Civil Rights Initiative (CCRI) offers free crisis support specifically for victims of non-consensual intimate imagery. They provide:
- Crisis counseling and emotional support
- Help navigating platform takedown processes
- Referrals to pro bono legal assistance
- Guidance on law enforcement reporting
Step 5: Remove From Search Results
Even after the source platform removes the content, it may still appear in search engine results or cached copies:
- Submit a removal request to Google Search through their personal content removal tool
- Request removal from Bing via Microsoft's content removal process
- Check image search results and request removal of cached thumbnails
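One way to confirm a takedown actually took effect at the source is a simple liveness check. The sketch below is a standard-library Python example (a HEAD request, treating any 4xx/5xx response or connection failure as "gone"); it only verifies the original URL, not search caches, which still need the removal requests described above:

```python
import urllib.request
import urllib.error

def still_live(url, timeout=10):
    """Return True if the URL still resolves to live content.

    A 404/410 or connection failure suggests the source page is
    gone; cached search results and thumbnails may persist and
    need separate removal requests from each search engine.
    """
    request = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(request, timeout=timeout) as resp:
            return resp.status < 400
    except urllib.error.HTTPError as e:
        return e.code < 400
    except urllib.error.URLError:
        return False
```

Re-check removed URLs after a few days as well; some sites restore content or mirror it at new addresses.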
Prevention Strategies
Limit Facial Image Availability
While you can't eliminate all photos of yourself online, you can reduce easy access:
- Review social media privacy settings — limit who can see your photos
- Opt out of facial recognition databases where possible
- Remove unnecessary photos from public profiles
- Be cautious about high-resolution face photos in public posts
Remove Personal Information From Data Brokers
Data brokers make it easy for perpetrators to identify and target victims. Broker profiles can expose:
- Full names linked to photos from social media
- Addresses that enable in-person harassment alongside digital abuse
- Employment information that facilitates workplace harassment campaigns
- Family connections that can be used for threats or extortion
Monitor for Your Images Online
Set up regular monitoring to catch new appearances of non-consensual content:
- Use Google Alerts for your name in combination with terms that might indicate explicit content
- Periodically reverse-image search your professional photos
- Monitor dark web forums where this content is sometimes shared
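Exact re-uploads of images you are already tracking can be caught with plain cryptographic hashes. The sketch below is standard-library Python and matches only byte-identical copies; recompressed, resized, or cropped copies require perceptual hashing, which dedicated matching services such as StopNCII are built around. The file paths are illustrative:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Fingerprint raw file bytes with SHA-256."""
    return hashlib.sha256(data).hexdigest()

def build_known_hashes(paths):
    """Hash local copies of the images you are tracking
    (e.g., evidence you have already documented)."""
    hashes = set()
    for path in paths:
        with open(path, "rb") as f:
            hashes.add(sha256_of(f.read()))
    return hashes

def is_known_copy(downloaded: bytes, known_hashes: set) -> bool:
    """True only for a byte-identical re-upload. Exact hashing
    misses edited or recompressed copies; perceptual hashes
    exist for that case."""
    return sha256_of(downloaded) in known_hashes
```

This kind of exact matching is best treated as a cheap first pass alongside alerts and reverse-image searches, not a replacement for them.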
How PrivacyOn Helps Protect You
PrivacyOn reduces your vulnerability to deepfake targeting by removing the personal information that perpetrators use to identify, harass, and extort victims. By scrubbing your data from 100+ broker sites, PrivacyOn makes it harder for bad actors to connect your face to your real identity, workplace, or home address. The service's dark web monitoring also alerts you if your personal information — including images — surfaces in places it shouldn't be, giving you early warning to take action.