Elon Musk’s ‘Grokipedia’ Is Certainly No Wikipedia

Wikipedia is a valuable online resource that, despite massive changes to the web, has managed to remain truly great to this day. Along with millions of other users, I visit the site daily to learn something new or check my existing knowledge. In an age of relentless AI, Wikipedia is something of an antidote.
If you look at Wikipedia and think “that’s good, but an AI version would be much better”, you might just be Elon Musk. Musk’s AI company, xAI, just launched Grokipedia (yes, that’s the name), an online encyclopedia that closely resembles Wikipedia in name and appearance. But under the hood, the two could hardly be more different. Even though the new “encyclopedia” is still in its infancy, I would say it’s not worth using, at least not for anything real.
The Grokipedia experience
When you load the Grokipedia website, it looks pretty standard. You see the name Grokipedia next to the version number (v0.1, at the time of writing), a search bar, and an "Available Items" counter (885,279). Searching is simple: you enter a query and a list of matching articles appears for you to select from. Once you open an article, it looks like Wikipedia, only extremely basic: there are no images, just text, though you can use the sidebar to jump between sections. You'll also find citations, marked by numbers, which correspond to the References section at the bottom of each article.
The main difference between Grokipedia and a stripped-down Wikipedia is that these articles are not written and edited by real people. Instead, each article is generated and "verified" by Grok, xAI's large language model (LLM). LLMs can generate large amounts of text in a short time and cite the sources they pull from, which might make Grokipedia's pitch appealing to some. However, LLMs also tend to hallucinate, or, in other words, make things up. Sometimes the sources the AI draws on are unreliable or dubious; other times, the AI simply generates text that isn't true. Either way, the information can't be trusted, least of all at a glance, which is why it's troubling that so much of this experiment is powered entirely by Grok, without human oversight.
Grokipedia vs. Wikipedia
Musk presents Grokipedia as a "massive improvement" over Wikipedia, which he has criticized for pushing propaganda, particularly in favor of left-wing ideas and politics. It is therefore ironic that some Grokipedia entries are themselves taken from Wikipedia. As The Verge's Jay Peters points out, articles like MacBook Air note the following at the bottom: "Content is adapted from Wikipedia, licensed under the Creative Commons Attribution-ShareAlike 4.0 license." Additionally, Peters discovered that some Grokipedia articles, such as those for the PlayStation 5 and the Lincoln Mark VIII, are near-identical copies of the corresponding Wikipedia articles.
If you've followed Musk's politics and political activities in recent years, you won't be surprised to learn that he falls on the right side of the political spectrum. This should give pause to anyone considering using Grokipedia as an unbiased reference, especially since Musk has repeatedly retooled Grok to generate responses more favorable to right-wing views. Musk and other critics claim that Wikipedia is biased to the left, but Grokipedia is produced entirely by an AI model with a demonstrable bias of its own.
You'll have very different experiences reading certain topics on Wikipedia versus Grokipedia. Wikipedia's Tylenol article, for example, says the following:
In 2025, Donald Trump made several statements about a controversial and unproven link between autism and Tylenol. These statements, concerning Tylenol use during pregnancy and autism, are based on unreliable sources and lack scientific evidence.
Compare this to Grokipedia, which devotes three paragraphs to the subject, the first of which begins:
Multiple observational studies and meta-analyses have identified associations between prenatal exposure to acetaminophen (the active ingredient in Tylenol) and increased risks of neurodevelopmental disorders (NDDs) in offspring, including attention-deficit/hyperactivity disorder (ADHD) and autism spectrum disorders (ASD).
That said, the second paragraph highlights some of the problems with these studies, while the third points out that some agencies suggest that “the benefits outweigh the unproven risks.”
Likewise, as spotted by WIRED, Grokipedia's Transgender article highlights the belief that social media may have acted as a "contagion" driving the increase in transgender identification. Not only is this a common right-wing claim, but that particular word appears to have been drawn from a post by a right-wing X account. The Wikipedia article, as one would expect, does not entertain this claim at all.
Grokipedia also welcomes unproven, controversial, or downright absurd claims. As Rolling Stone points out, it refers to "Pizzagate," a conspiracy theory that led to an actual shooting, as "allegations," a "hypothesis," and a "narrative." It also gives credence to the "Great Replacement," a racist theory originated by white supremacists.
Should you use Grokipedia?
Here's the short answer: no. My problem with Grokipedia is twofold. First, no encyclopedia will be reliable if it is created almost entirely by AI models. Sure, some information may be accurate, and it's nice that you can see the sources the bot uses, but when the risk of hallucination is built into the technology with no way around it, forgoing human oversight wholesale guarantees that inaccuracies will permeate a large portion of Grokipedia's knowledge base.
As if that weren't enough, Grokipedia is built on an LLM that Musk openly tinkers with to generate results that more closely match his worldview and that of a particular political ideology. Hallucinations and bias: just the ingredients you need for an encyclopedia.
The strength of Wikipedia is that it is written and edited by humans. Those humans can hold other human authors accountable, adding new information when it becomes available and correcting errors when they encounter them. It may be frustrating to read that your favorite Secretary of Health and Human Services "has promoted vaccine misinformation and public health conspiracy theories," but that is the objective, documented reality. Stripping out these objective descriptions and reframing the discussion to fit a distorted worldview doesn't make Grokipedia better than Wikipedia; it makes it useless.




