
xAI Locks Grok Image Tools Behind Paywall After Backlash Over Nonconsensual Deepfakes

xAI restricts Grok image generation to paid users after backlash over nonconsensual sexual deepfakes. California AG launched investigation, 35 state attorneys general demanded action.

March 11, 2026

xAI has restricted access to Grok's AI image generation features on X to paid subscribers only, following massive backlash over users creating nonconsensual sexualized deepfakes of real people. The policy shift, implemented in January 2026, marks one of the most significant platform responses to AI-generated explicit content abuse.

What Triggered the Restrictions

Grok's image capabilities became infamous for allowing users to upload real photos and prompt the AI to generate explicit edits, often removing clothing from women to produce nude images. Users discovered they could create "undressed" versions of any person, and these images were posted publicly on X by default. The abuse scaled rapidly, with thousands of nonconsensual intimate images generated every hour.

The controversy intensified when researchers discovered the system was also producing sexualized content depicting minors. This prompted California's Attorney General Rob Bonta to launch a formal investigation and issue a cease-and-desist order to xAI, alleging violations of state laws prohibiting nonconsensual explicit material.

Regulatory Pressure Mounts

The investigation triggered a cascade of regulatory action. A bipartisan coalition of 35 state attorneys general, led by New York's Letitia James, demanded xAI implement immediate bans on all nonconsensual intimate images, including content depicting individuals in suggestive poses or minimal clothing.

International regulators also applied pressure. The European Union demanded documentation from xAI regarding its content moderation practices, and officials from the UK, India, Malaysia, and France sought explanations about how Grok was enabling what they characterized as unlawful image creation. India's IT ministry went further, threatening to revoke the safe harbor protections that typically shield platforms from liability for user-generated content.

U.S. Senators Ron Wyden, Ben Ray Luján, and Ed Markey directly urged Apple and Google to remove the X and Grok apps from their app stores over concerns about the deepfake abuse.

Current State of Grok's Image Tools

Under the new restrictions, free X users can no longer access Grok's image generation or editing capabilities; only paid subscribers on the X Premium and Premium+ tiers can use them. The standalone Grok web and mobile apps initially remained accessible to all users, and xAI has not clarified whether additional restrictions have been applied there.

Elon Musk and X's safety team stated that users creating illegal content through Grok face the same penalties as those directly uploading nonconsensual material, including account suspensions and potential legal referral.

Industry Implications

The Grok restrictions represent a turning point in how AI platforms handle NSFW content generation. Enterprise-focused AI models saw 64% growth in 2025, and industry analysts project that regulatory requirements for watermarking, provenance tagging, and accountability will become mandatory by 2027.

The incident also highlights the broader challenge facing AI image generators: the gap between capability and safety. While these systems can produce increasingly realistic outputs, preventing abuse requires continuous policy updates, technical safeguards, and responsiveness to emerging harms.

Source: Business Insider / TechCrunch