
Wikipedia Votes to Ban AI-Generated Articles

English Wikipedia editors voted 44-2 to ban LLMs from generating or rewriting article content, citing citation hallucinations and cleanup burden on volunteers.

March 26, 2026

English Wikipedia editors have adopted a new guideline explicitly prohibiting the use of large language models to generate or rewrite article content, marking one of the most significant policy responses to AI-generated text by a major knowledge platform.

Overwhelming Vote for Stricter Rules

The Request for Comment (RfC) that closed on March 20, 2026, passed with 44 votes in favor and just 2 opposed, according to reports from the Wikipedia community. The new guideline expands on previous rules that only banned AI for creating entirely new articles from scratch, now explicitly prohibiting LLMs from generating wholesale content or rewriting existing articles.

"Use of LLMs to generate new article content is prohibited," the guideline states, citing frequent violations of Wikipedia's core content policies including neutrality, verifiability, and accuracy. The policy allows narrow exceptions: editors may use LLMs to suggest refinements to their own writing, but only after human review and without incorporating original LLM-generated content.

Hallucinations and Cleanup Burden

The decision was driven by growing concerns about AI-generated content problems: citation hallucinations, mass article creation with factual errors, and the burden on volunteer editors to clean up low-quality text. Wikipedia editors reported spending significant time correcting AI-produced inaccuracies that appeared credible but lacked reliable sources.

"The cleanup effort outweighs the creation speed that LLMs provide," noted one editor in the RfC discussion. "Wikipedia prioritizes human-written content that can be verified."

Administrators now have the authority to block or topic-ban editors for violations. Enforcement is based on the quality of the output rather than on detection tools, which remain unreliable; because some human writing now mimics LLM patterns, reviewing an editor's history is expected before sanctions are applied.

Part of Broader Wikipedia AI Policy Evolution

This builds on earlier actions: German Wikipedia enacted a broader ban on AI content in articles and discussions in February 2026, voting 208-108 in favor. Other language editions have taken softer approaches, with English Wikipedia's new guideline described as a "placeholder" or "stepping stone" toward potentially stricter measures.

The timing coincides with increased awareness of AI's impact on information quality. As LLMs become more capable of producing coherent-seeming text, platforms that rely on verifiability face mounting challenges in maintaining content standards.

This marks a notable reversal from earlier optimism about AI-assisted writing tools. Wikipedia's move signals that major knowledge repositories are drawing hard lines on AI-generated content, at least for now.

Source: The Verge / Wikipedia