On Wednesday, Wikimedia Deutschland revealed a new database designed to make Wikipedia’s extensive information more easily available to AI systems.
Named the Wikidata Embedding Project, the system applies vector-based semantic search, a technique that lets computers capture the meanings of words and the relationships between them, to the data in Wikipedia and its sister platforms, which together comprise nearly 120 million entries.
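To make the idea concrete, here is a minimal sketch of vector-based semantic search in Python, using the open-source sentence-transformers library. The model name and the toy entries are illustrative stand-ins rather than the project's actual pipeline; the point is that queries and records are compared as vectors, by meaning instead of by exact keywords.

```python
# Minimal sketch of embedding-based semantic search (illustrative only).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedding model

# Toy stand-ins for database entries; the real project covers ~120 million records.
entries = [
    "Marie Curie: physicist and chemist, pioneer of radioactivity research",
    "Bell Labs: American industrial research and development laboratory",
    "Photosynthesis: process by which plants convert light into chemical energy",
]

entry_vectors = model.encode(entries)            # one vector per entry
query_vector = model.encode("famous scientist")  # query lives in the same vector space

# Cosine similarity ranks entries by semantic closeness, not keyword overlap:
# Marie Curie scores high even though the string "scientist" never appears.
scores = util.cos_sim(query_vector, entry_vectors)[0]
for entry, score in sorted(zip(entries, scores), key=lambda pair: -pair[1]):
    print(f"{float(score):.3f}  {entry}")
```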
By adding support for the Model Context Protocol (MCP), a standard that lets AI systems communicate with data sources, the project also makes the data easier for large language models (LLMs) to reach through natural-language queries.
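As a rough sketch of what MCP-based access could look like, the snippet below uses the official MCP Python SDK to start a server, list its tools, and call a search tool with a plain-English question. The server command ("wikidata-mcp") and the tool name and arguments are hypothetical placeholders; the project's actual MCP interface may differ.

```python
# Hypothetical MCP client session; the server command and tool name are placeholders.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Launch a (hypothetical) Wikidata MCP server as a local subprocess.
    params = StdioServerParameters(command="wikidata-mcp", args=[])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()  # discover what the server exposes
            print([tool.name for tool in tools.tools])
            # A natural-language query instead of a SPARQL statement.
            result = await session.call_tool(
                "search", arguments={"query": "nuclear scientists who worked at Bell Labs"}
            )
            print(result.content)

asyncio.run(main())
```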
Wikimedia’s German branch developed the project in partnership with the neural search company Jina.AI and DataStax, a real-time data company owned by IBM.
For years, Wikidata has offered machine-readable data from Wikimedia properties, but earlier tools supported only keyword searches and SPARQL, a specialized query language. The new system is better suited to retrieval-augmented generation (RAG) pipelines, which let AI models pull in external information, giving developers a way to ground their models in knowledge verified by Wikipedia editors.
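For contrast, the sketch below shows roughly what the older route involves: a SPARQL query sent to Wikidata's public query service, asking for a handful of entities that are humans (Q5) with the occupation scientist (Q901). The endpoint and the P31/P106 identifiers are part of Wikidata's existing public interface; the specific query is just an example.

```python
# Example SPARQL lookup against the public Wikidata Query Service.
import requests

query = """
SELECT ?person ?personLabel WHERE {
  ?person wdt:P31 wd:Q5 ;      # instance of: human
          wdt:P106 wd:Q901 .   # occupation: scientist
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 5
"""

response = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": query, "format": "json"},
    headers={"User-Agent": "example-script/0.1 (demo)"},  # the service asks clients to identify themselves
)
for row in response.json()["results"]["bindings"]:
    print(row["personLabel"]["value"])
```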
The data is organized to deliver essential semantic context. For example, searching for “scientist” in the database will yield lists of notable nuclear scientists, scientists affiliated with Bell Labs, translations of “scientist” in various languages, an approved Wikimedia image of scientists at work, and related terms like “researcher” and “scholar.”
Anyone can access the database on Toolforge. Additionally, Wikidata will host a webinar for developers interested in the project on October 9th.
This initiative arrives as AI developers urgently search for reliable, high-quality data sources to refine their models. Training setups have grown more sophisticated, often assembled as intricate environments rather than simple datasets, but they still depend on carefully curated information. For applications that demand high accuracy, trustworthy data is crucial. And while Wikipedia has its critics, its content is far more fact-oriented than broad collections like Common Crawl, which aggregates vast numbers of web pages scraped from across the internet.
Sometimes, the pursuit of top-tier data can get expensive for AI companies. In August, for instance, Anthropic agreed to pay $1.5 billion to settle a lawsuit brought by a group of authors whose works had been used as training material.
In a statement to the media, Wikidata AI project manager Philippe Saadé highlighted the project’s independence from major tech firms or leading AI labs. “The launch of this Embedding Project demonstrates that advanced AI doesn’t need to be dominated by a few corporations,” Saadé said. “It can be open, collaborative, and designed to benefit everyone.”