Privacy Guide · May 13, 2026 · 7 min read

Privacy Risks of AI Image Generators


By Sarah Chen

Head of Privacy Research


AI image generators like Midjourney, DALL-E, and Stable Diffusion have exploded in popularity, but they come with significant privacy risks that most users never consider. From facial data harvested through uploaded photos to deepfake threats enabled by the technology itself, here's what you need to know before you upload your next selfie to an AI art tool.

What Data Do AI Image Generators Collect?

When you use a cloud-based AI image generator, you're sharing more than just a text prompt. These platforms typically collect:

  • Text prompts: Every prompt you type is stored and may be used for model training
  • Uploaded images: Photos you upload for editing, style transfer, or face-swapping
  • Account information: Name, email, payment details, and IP address
  • Device metadata: Browser type, operating system, and device identifiers
  • Usage patterns: How often you use the service, what features you access, and how you interact with results

Midjourney makes all generated images publicly visible by default — private mode requires a higher-tier paid plan. This means anyone can browse and search through your creations, including prompts that may reveal personal information or interests.

How Your Photos Get Used for Training

Most AI image platforms reserve the right to use your inputs — including uploaded photos — to train and improve their models. OpenAI's policy states it may use user-provided content for model improvement, though users can opt out through their privacy portal. Midjourney retains similar rights to use prompts and generated images for model training.

Once your data enters a model's training pipeline, removal is effectively impossible. The model has learned from your data, and there's no way to "unlearn" specific inputs. A notable case involved a California patient's surgical photos appearing in AI training datasets without consent — illustrating how uploaded images can end up in places their owners never intended.

Think Before You Upload

Every photo you upload to an AI image generator may be stored indefinitely and used to train future models. Once your facial data is embedded in a training dataset, it cannot be removed. Treat every upload as permanent.

The Deepfake and Impersonation Threat

AI image generation technology has supercharged the deepfake threat. In 2025, 179 out of 346 recorded AI incidents involved deepfakes — making it the single largest category of AI harm. The consequences are severe:

  • Financial fraud: An Arup finance worker was tricked into wiring $25 million after a deepfaked video call impersonated the company's CFO
  • Deepfake-as-a-Service: Turnkey platforms now offer voice, video, and image cloning to anyone willing to pay
  • Deepfake surge: Deepfake fraud increased 1,740% in North America between 2022 and 2023, and the trend continues to accelerate
  • Personal harassment: Non-consensual intimate images generated using AI have become a growing crisis, prompting 61 data protection authorities to issue a joint statement in February 2026

Every personal photo you share online — on social media, dating apps, or AI platforms — provides raw material for potential deepfake creation. The more photos of you that exist online, the easier it is for someone to create convincing fakes.

Copyright and Ownership Concerns

The legal landscape around AI-generated images remains unsettled. The U.S. Supreme Court declined to hear the Thaler case in February 2026, leaving in place the ruling that purely AI-generated works cannot be copyrighted. This means:

  • Unmodified AI outputs likely lack legal copyright protection
  • Midjourney paid subscribers own their outputs with commercial rights, but free users own nothing
  • OpenAI grants users ownership of DALL-E outputs, but copyright protection may not attach
  • Companies earning over $1 million annually must use Midjourney's Pro or Mega plan to retain commercial rights

How to Protect Your Privacy

  1. Run AI locally when possible: Tools like Stable Diffusion and Flux can run entirely on your own computer, eliminating cloud data collection. This is the strongest privacy option.
  2. Never upload identifiable faces: Avoid uploading photos of yourself, family members, or friends to cloud-based AI services. Once uploaded, you lose control of that biometric data.
  3. Strip metadata before uploading: Remove EXIF data (GPS coordinates, timestamps, device information) from any images before uploading. Most phones embed this data automatically.
  4. Opt out of training: Use platform privacy settings to opt out of having your data used for model training. OpenAI offers this through their privacy portal.
  5. Use pseudonymous accounts: Create AI image accounts with email addresses that aren't linked to your real identity.
  6. Read the privacy policy: Before using any new AI tool, understand what data it collects and how it's used. Pay special attention to training data clauses.

Reduce Your Deepfake Attack Surface

The fewer photos of you available online, the harder it is for someone to create a convincing deepfake. Removing your personal information and photos from data broker sites is an important defensive step. PrivacyOn automatically removes your data from 100+ broker sites and monitors for reappearance, reducing the raw material available to bad actors.

The Broader Privacy Picture

AI image generators are part of a larger shift where personal data — photos, biometrics, behavioral patterns — is being harvested at unprecedented scale. The images you share on social media, the photos stored in cloud accounts, and the selfies you upload to AI tools all contribute to a growing digital footprint that can be exploited.

Protecting yourself requires a multi-layered approach: limiting what you share with AI platforms, removing existing personal data from data broker sites, monitoring for misuse, and staying informed about evolving threats. PrivacyOn helps with the data broker layer — continuously removing your personal information from over 100 sites and scanning the dark web for exposure — so you can reduce your overall vulnerability in an increasingly AI-driven world.

Sarah Chen

Head of Privacy Research

CIPP/US Certified · IAPP Member · B.S. Computer Science

CIPP/US-certified privacy researcher with over a decade of experience helping consumers remove their personal information from data brokers.

Ready to Protect Your Privacy?

Let PrivacyOn automatically remove your personal information from data broker sites and keep it removed.