Artificial Intelligence Could Revolutionize Wikipedia, But Raises Concerns Over Accuracy

A new study published in Nature Machine Intelligence suggests that using artificial intelligence (AI) to police entries on open-source encyclopedias like Wikipedia could make them more reliable sources of information. The study, conducted by researchers who developed an AI system called “SIDE,” found that AI tools can help identify and rectify inaccurate or incomplete references on Wikipedia, improving its quality and reliability.

The AI system analyzed Wikipedia references to identify missing or broken links and to determine whether the references actually supported the claims made in the article. Where a reference fell short, the system suggested a better alternative. The researchers tested the system by having it review Wikipedia’s featured articles and propose references; in nearly 50% of cases, the AI’s top choice was already cited in the article. In a user survey, 21% of participants preferred the AI-suggested citations, 10% preferred the existing human-chosen citations, and 39% expressed no preference.
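The workflow described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration, not the SIDE system itself: the real system uses a learned neural verifier trained on Wikipedia data, whereas this toy stand-in checks claim support with simple word overlap. All function names and thresholds below are assumptions for illustration.

```python
# Toy sketch of a SIDE-style citation check. The overlap heuristic is a
# crude stand-in for the learned claim-verification model the study describes.

def supports_claim(claim: str, source_text: str, threshold: float = 0.5) -> bool:
    """Return True if enough of the claim's words appear in the source text."""
    claim_words = {w.lower().strip(".,") for w in claim.split()}
    source_words = {w.lower().strip(".,") for w in source_text.split()}
    if not claim_words:
        return False
    overlap = len(claim_words & source_words) / len(claim_words)
    return overlap >= threshold

def review_citation(claim: str, cited_text: str, candidate_sources: list[str]) -> dict:
    """Flag a citation and, if it fails, suggest the best-supported alternative."""
    if supports_claim(claim, cited_text):
        return {"status": "ok", "suggestion": None}
    # Rank candidate replacements by raw word overlap with the claim.
    ranked = sorted(
        candidate_sources,
        key=lambda s: len(set(claim.lower().split()) & set(s.lower().split())),
        reverse=True,
    )
    return {"status": "unsupported", "suggestion": ranked[0] if ranked else None}
```

In the actual study, the verification and ranking steps are performed by neural models over retrieved web documents; the structure of the loop — verify each reference, then propose a replacement when verification fails — is the same.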

One claimed advantage of an AI-driven encyclopedia is the reduction of human bias: AI can weigh multiple interpretations, verify facts, and continuously monitor research or reporting around an entry. It can also operate around the clock without tiring, ensuring high productivity. However, there are potential pitfalls. Proprietary algorithms, for example, may make it difficult for users to understand how the AI arrives at the information it presents.

Samuel Mangold-Lenett, a staff editor at The Federalist, believes an AI-run Wikipedia could produce better results than the human-edited version. He suggests that an AI-generated Wikipedia could rapidly process massive datasets and draw on every relevant available source, resulting in ironclad fact-checking and potentially eliminating human error and bias.

Phil Siegel, the founder of the Center for Advanced Preparedness and Threat Response Simulation (CAPTRS), believes the definition of a “better” version of Wikipedia could be debatable. However, he acknowledges that AI would likely lead to a more comprehensive encyclopedia with better grammar and improved management of links across entries. Siegel also highlights the need for a prompt management process to update the encyclopedia quickly with new information.

While AI could revolutionize Wikipedia, Siegel argues for a human-AI partnership. Humans would still be responsible for editing and quality control to ensure completeness, timeliness, and accuracy of information.

In conclusion, the study demonstrates the potential benefits of using AI to enhance the reliability and quality of information on platforms like Wikipedia. However, concerns remain regarding the transparency of AI algorithms and the need for human oversight to maintain accuracy and relevance. As technology continues to advance, finding the right balance between human and AI collaboration will be crucial in creating a more trustworthy and comprehensive online encyclopedia.
