AI can’t make good video game worlds yet, and it might never be able to

This is Hindsight, a weekly newsletter featuring an essential story from the world of technology. For more on the games industry’s resistance to generative AI, follow Jay Peters. Hindsight arrives in our subscribers’ inboxes at 8 a.m. ET. Register for Hindsight here.
Long before the explosion of generative AI, video game developers were creating games that could generate their own worlds. Think of titles like Minecraft, or even the original Rogue from 1980, the basis of the term “roguelike”; these games and many others create worlds on the fly within certain rules and parameters. Human developers work painstakingly to ensure that the worlds their games can create are engaging to explore and filled with things to do, and at their best, these types of games can be replayable for years because of how fresh the environments and experiences can feel every time you play.
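To make the "rules and parameters" idea concrete, here is a minimal sketch of one classic roguelike technique, a seeded "drunkard's walk" that carves floors out of a solid grid. The function name and parameters are hypothetical, for illustration only; the key point is that the same seed always reproduces the same world, which is how these games offer endless variety while staying testable.

```python
import random

def generate_map(width, height, steps, seed):
    """Carve a cave-like map with a seeded random walk.

    The same (width, height, steps, seed) always yields the same
    world -- the procedural-generation property the article describes.
    """
    rng = random.Random(seed)                      # seeded RNG: deterministic worlds
    grid = [["#"] * width for _ in range(height)]  # start as solid wall
    x, y = width // 2, height // 2                 # walker begins in the center
    for _ in range(steps):
        grid[y][x] = "."                           # carve a floor tile
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x = min(max(x + dx, 1), width - 2)         # clamp to keep the border walls
        y = min(max(y + dy, 1), height - 2)
    return ["".join(row) for row in grid]

# Same parameters and seed -> an identical map on every run.
world = generate_map(width=20, height=10, steps=80, seed=42)
```

Real games layer many more rules on top (room placement, item and enemy spawning, connectivity checks), but the human-authored constraints are what make the output worth exploring.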
But just as it has in other creative industries that are pushing back against it, generative AI is coming for video games, too, even though it may never catch up to the best of what humans can do today.
Generative AI in video games has become a lightning rod, with players getting upset over AI errors in games and half of developers saying generative AI is bad for the industry.
The big video game companies are jumping into the murky waters of AI anyway. PUBG maker Krafton is transforming into an “AI first” games company, EA has partnered with Stability AI on “transformative” game creation tools, and Ubisoft, in a major reorganization, promised “accelerated investments behind player-facing generative AI.” The CEO of Nexon, owner of the company behind last year’s megahit Arc Raiders, perhaps puts it most ominously: “I think it’s important to assume that all gaming companies are now using AI.” (Some indie developers disagree.)
The biggest video game companies often frame these commitments as a way to streamline and support game development, which is becoming increasingly expensive. But the adoption of generative AI tools also poses a potential threat to jobs in a sector already infamous for waves of layoffs.
Last month, Google launched Project Genie, a research prototype that lets users generate sandbox worlds from text or image prompts and explore them for 60 seconds. Currently, the tool is only available in the United States to those subscribed to Google’s AI Ultra plan for $249.99 per month.
Project Genie is powered by Google’s Genie 3 world model, which the company touts as a “key stepping stone on the path to AGI” that can enable “AI agents that can reason, problem-solve, and act in the real world,” and Google says the model’s potential uses go “well beyond gaming.” But it got a lot of attention in the industry: it was the first real indication of how generative AI tools could be used for video game development, just as tools like OpenAI’s DALL-E and Sora showed what might be possible with AI-generated images and videos.
In my testing, Project Genie was barely able to generate even vaguely interesting experiences. The “worlds” don’t allow users to do much besides wander around using the arrow keys. Once the 60 seconds are up, you can’t do anything with what you’ve generated except upload a recording of what you’ve done, which means you can’t plug what you’ve generated into a traditional video game engine either.
Of course, Project Genie allowed me to generate terrible unauthorized Nintendo knockoffs (apparently based on the online videos Genie 3 is trained on), which raised many familiar concerns about copyright and AI tools. But they weren’t even in the same quality universe as the worlds of a hand-crafted Nintendo game. The worlds were silent, the physics were sloppy, and the environments seemed rudimentary.
The day after Project Genie’s announcement, the stock prices of some of the biggest video game companies, including Take-Two, Roblox, and Unity, plummeted. Take-Two president Karl Slatoff helped limit the damage a little, pushing back on Genie in an earnings call a few days later and arguing that it was not yet a threat to traditional games. “Genie is not a game engine,” he said, emphasizing that the technology “is certainly not a replacement for the creative process” and that, to him, the tool is more like “an interactive, procedurally generated video at this point.” (Stock prices rallied in the days that followed.)
Google will almost certainly continue to improve its Genie world models and tools for generating interactive experiences. It’s unclear whether the company will want to develop those experiences into games, or whether it will instead focus on finding ways for Genie to aid its ambitious march toward AGI.
However, other AI company executives are already pushing for interactive AI experiences. xAI’s Elon Musk recently claimed that “real-time” and “high-quality” video games “personalized for each individual” would be available “next year,” and in December he said building an “AI game studio” was a “major project” for xAI. (As with many of Musk’s claims, take his predictions and timelines with a grain of salt.) Meta’s Mark Zuckerberg, who is now betting on AI after the company cut jobs in its metaverse group, envisions a future in which people create a game from a prompt and share it with people in their feeds. Even Roblox, a gaming company, has described how creators will be able to use AI world models and prompts to generate and modify game worlds in real time, something it calls “real-time dreaming.”
But even in the most ambitious vision where AI technology is capable of generating worlds as responsive and interesting to explore as a video game running locally on a home console, a PC, or your smartphone, creating a video game involves much more than just creating a world. The best games have engaging gameplay, include interesting things to do, and feature original artwork, sounds, writing, and characters. And it can take years for human developers to make sure all the pieces work perfectly together.
AI technology is not yet ready to generate games, and anyone who says otherwise is kidding themselves. Then again, AI-generated video is still bad, and it was still used to make a bunch of bad Super Bowl commercials, so tech companies will probably keep pouring effort into games created with generative AI. In an already volatile industry, even the idea that AI tools could rival what humans can create could have far-reaching long-term consequences.
But the complexity of games is different from AI video, which has improved significantly in a short time but has fewer variables to consider. AI game creation tools will almost certainly improve, but the results may never close the gap to what humans can create.
- In a lengthy article on
- Although the gaming industry probably shouldn’t feel threatened by AI world models yet, generative AI tools will continue to be controversial in game development. Even Larian Studios, beloved for games like Baldur’s Gate 3, is not immune to backlash.
- Steam requires developers to disclose when their games use generative AI to generate content, but in a recent change, developers no longer have to disclose whether they used “AI-based tools” in their development environments.
- Some games, such as the text-based Hidden Door and Amazon’s Snoop Dogg game on its Luna cloud gaming service, are embracing generative AI as a central aspect of the experience.
- Joost van Dreunen, a professor of games at NYU, shared his take on the situation surrounding Project Genie.
- Scientific American has an excellent explainer on how world models work.




