Wikipedia Enacts Ban on AI-Generated Articles, Tightens Editing Policy
Wikipedia has officially banned the use of AI-generated text in its articles, drawing a firm line against the automated creation of encyclopedic content. The new policy, a direct response to the proliferation of generative AI tools, aims to safeguard the platform's foundational principle of human-sourced and verified knowledge. While the blanket prohibition is clear, the rules carve out a narrow, strictly regulated exception for AI-assisted copyediting, signaling a nuanced but cautious approach to the technology's role in the editing process.
The policy was adopted through Wikipedia's volunteer community consensus process; the Wikimedia Foundation, the nonprofit that hosts the site, leaves editorial policy to its editors. The rules address concerns over accuracy, reliability, and the potential for AI to introduce undetected bias or factual errors at scale. Editors are now prohibited from using large language models (LLMs) such as ChatGPT to generate article text. However, they may use AI tools for limited tasks such as correcting grammar, spelling, and simple formatting, provided they disclose the use and take full responsibility for the final output. This creates a new layer of procedural scrutiny for editors.
The move places Wikipedia at the forefront of a broader institutional struggle to define the boundaries of AI assistance in knowledge curation, and it sets a precedent for other reference and educational platforms grappling with similar integrity challenges. The policy also increases the burden on volunteer editors to police contributions vigilantly, potentially slowing editing workflows even as it aims to fortify the encyclopedia's trustworthiness against a rising wave of synthetic content.