How Generative AI is Affecting Wikipedia

Generative AI is slowly changing Wikipedia’s content.

When you try out tools like OpenAI’s ChatGPT, you’ll see they produce text that sounds convincingly human. The problem, though, is that they can also introduce inaccurate details.

Wikipedia, known for offering well-sourced information to millions, is now seeing these AI tools used to create, summarize, and update its articles.

This article will show how generative AI impacts Wikipedia.

Content

  • What is Generative AI?
  • Generative AI and Wikipedia
  • Effects of Generative AI on Wikipedia
  • Conclusion

What is Generative AI?

Generative AI refers to a type of artificial intelligence that can produce fresh content, designs, or concepts using machine learning algorithms. It works by analyzing a prompt you provide, which could be text, an image, a video, or a design, and then creating new content based on that input.

A lot of Wikipedia contributors use generative AI tools like OpenAI’s ChatGPT for their writing. These tools, however, sometimes “hallucinate” and create fake citations, which can lead to spreading false information.

Jimmy “Jimbo” Wales, the founder of Wikipedia and the Wikimedia Foundation, acknowledges that the information from generative AI isn’t always reliable. He shared a story about asking ChatGPT whether an airplane had ever hit the Empire State Building. The bot initially said no, but then described how a B-25 bomber did crash into the building, contradicting its own earlier response.

Generative AI and Wikipedia

For over two decades, Wikipedia has been built on content written and edited by volunteers from around the globe. Today, the site is available in 334 languages, offering information on almost any topic.

Lately, though, there has been growing concern about AI generating more of the site’s articles and summaries. These summaries might seem accurate at first but turn out to be entirely wrong on closer inspection.

Beyond concerns about inaccurate information, members of the Wikipedia community have found that generative AI sometimes cites sources and research papers that don’t exist.

The danger for Wikipedia is that each time someone adds unchecked content, the site’s quality might drop.

Effects of Generative AI on Wikipedia

  1. Misinformation and Disinformation

Every day, millions turn to Wikipedia for trustworthy information on topics that impact their lives and choices. But with AI-generated content appearing on the site, it’s tough to tell if the seemingly accurate content is actually verified. This could lead to people doubting Wikipedia’s credibility when they find misleading information.

  2. Fake Citations

Tools such as OpenAI’s ChatGPT draw on information from many sources but usually don’t reveal where it comes from. This can enable new forms of plagiarism and infringe on the rights of original authors. Because citing sources is crucial for researchers, it can also lead to academic work built on incorrect citations.

  3. Lack of Empathy

Generative AI is just a tool. It can’t feel emotions the way humans do, such as empathy. This shows in its writing, which tends to be flat and emotionless. Editors end up working twice as hard, constantly revising AI-generated articles and summaries to match the site’s tone.

  4. Problems for Future Models

Many AI firms train their models on data from Wikipedia. If Wikipedia articles are themselves written by AI, future models may end up depending on that data, which could contain errors and falsehoods.

A report indicates that the Wikimedia Foundation, which manages the free encyclopedia, plans to build tools to help volunteers spot content produced by bots. Even so, editors will still face oversight challenges.

Conclusion

Though some believe that generative AI could spell the end of Wikipedia, that claim is likely overstated. Still, as more AI-produced content appears on the site, Wikipedia could gradually lose the trust of users worldwide.