Google Shakes Up Its Browser Agent Team Amid OpenClaw Craze

Google is shaking up the team behind Project Mariner, its AI agent that can navigate the Chrome browser and perform tasks on a user’s behalf, WIRED has learned. In recent months, some Google Labs staff who worked on the research prototype have moved on to higher-priority projects, according to two people familiar with the matter.
A Google spokesperson confirmed the changes but said the computer-use capabilities developed under Project Mariner will now be folded into the company’s broader agent strategy. Google has already integrated some of these features into other agent products, including the recently launched Gemini Agent, the spokesperson added.
The change comes as Google and other AI labs race to respond to the rise of high-performance agents like OpenClaw. Although these tools are primarily used by developers today, Silicon Valley believes they could soon power general-purpose assistants for individuals and businesses. Nvidia CEO Jensen Huang likened the buzzy tool to a new operating system for agentic computers. “Today, every company in the world needs to have an OpenClaw strategy,” he said at the company’s developer conference earlier this week.
Google CEO Sundar Pichai highlighted Project Mariner at last year’s I/O conference. At the time, browser agents seemed like the industry’s next big bet, with OpenAI and Perplexity launching consumer agents that promised to automate online tasks for users. These agents could click, scroll, and fill out forms on a web page, much like a human. Adoption, however, has fallen short of industry expectations.
Perplexity’s Comet browsing agent had reached only 2.8 million weekly active users as of December 2025. Meanwhile, OpenAI’s ChatGPT agent has reportedly fallen to less than 1 million weekly active users in recent months. Compared to the hundreds of millions of users who talk to ChatGPT every week, browser agent usage amounts to a rounding error.
New agents in town
The dynamics in the AI world have shifted dramatically over the past year in favor of agents like Claude Code and OpenClaw (whose creator was hired by OpenAI). Unlike browser agents, these systems control computers via the command line, which has proven a more reliable way to accomplish tasks. Some of these products include computer use as one feature among other agent capabilities. By comparison, browser-navigation agents now look limited as a standalone product.
Kian Katanforoosh, CEO of the AI development platform Workera, who lectures on AI at Stanford, says part of the reason computer-use agents haven’t taken off is their enormous computing needs. Most of these agents work by taking a series of screenshots of a web page, feeding them into an AI model, and then acting on what they see. Processing all this information can be slow and sometimes unreliable.
“What Claude Code and OpenClaw have shown is that it’s actually much more efficient to work with the terminal, because the terminal is text-based and LLMs are text-based,” Katanforoosh said. “It probably takes 10 to 100 times fewer steps to achieve the same results.”
This is not to say that browser agents are not improving, or that research into computer use has stalled.
Last month, the startup Standard Intelligence released a computer-use model trained on videos rather than screenshots. The startup claims to have developed a video encoder that can compress video to fit within an AI model’s context window, an approach it says is 50 times more efficient than previous computer-use models. To show off its model’s capabilities, the startup connected it to a car, a live video feed, and a computer keyboard. The model was briefly able to drive autonomously around San Francisco.