AI bots have been plaguing Wikipedia for a long time, causing huge surges in bandwidth usage and straining its infrastructure—but the Wikimedia Foundation has rolled out a proactive potential solution.
Bots often cause more trouble than run-of-the-mill human users because they are more likely to access obscure, sometimes almost forgotten content rather than popular or trending articles—pages that are less likely to be cached and therefore more expensive for servers to deliver, Ars Technica notes.
Amid these issues, the Foundation has announced a partnership with Google-owned firm Kaggle to roll out a new dataset in beta, filled with structured Wikipedia content in English and French. Since the dataset is built with machine learning workflows in mind, Kaggle said it will be “immediately usable for modeling, benchmarking, alignment, fine-tuning, and exploratory analysis.”
AI developers using the dataset will benefit from “high-utility elements” including article abstracts, short descriptions, infobox-style key-value data, image links, and clearly segmented article sections.
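Kaggle's exact schema isn't specified in this article, but a structured record containing those elements might be handled roughly as follows. This is a minimal sketch: the field names (`abstract`, `description`, `infoboxes`, `sections`) are illustrative assumptions, not the dataset's confirmed layout.

```python
import json

# Hypothetical record in the spirit of the dataset described above.
# Field names here are assumptions for illustration only.
raw = json.dumps({
    "name": "Ada Lovelace",
    "abstract": "Ada Lovelace was an English mathematician and writer.",
    "description": "English mathematician (1815-1852)",
    "infoboxes": [{"name": "born", "value": "10 December 1815"}],
    "sections": [{"name": "Early life"}, {"name": "Legacy"}],
})

record = json.loads(raw)

# Pull out the "high-utility" elements a modeling pipeline might want.
summary = f"{record['name']}: {record['description']}"
section_titles = [s["name"] for s in record["sections"]]

print(summary)
print(section_titles)
```

Because the content arrives pre-segmented like this, a fine-tuning or benchmarking pipeline can select just the fields it needs rather than parsing raw wikitext.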
All the content is derived from Wikipedia and is freely available under two open licenses: the Creative Commons Attribution-ShareAlike 4.0 license and the GNU Free Documentation License (GFDL), though public domain or alternative licenses may apply in some cases.
We’ve seen organizations use plenty of other, less collaborative, approaches to dealing with the threat of AI bots. Reddit, another highly popular source of AI training data, has introduced progressively stricter controls to stop bots from accessing the platform, after instituting a highly controversial change to its API policies in 2023 to force AI firms to pay up for its data.
Many other organizations, such as The New York Times, have resorted to legal means to address AI scraping bots, though their motivation was financial rather than performance-related.
The lawsuit, the first launched by a major publisher against an AI firm, alleged that ChatGPT maker OpenAI was liable for billions in damages due to using millions of its articles to train the company’s AI models without permission.
But plenty of tech giants have pursued the diplomatic approach to data scraping. Reports emerged in late 2023 that Apple had offered major publishers like NBC News, Condé Nast, and IAC up to $50 million for the rights to use their content for AI training.
About Will McCurdy
Contributor
