Nvidia, Microsoft back Anthropic in a $45 billion bid for AI scale

Three tech giants have just reshaped the AI race by tightening their grip on each other. Nvidia, Microsoft and Anthropic announced Tuesday that they have formed a three-way partnership connecting Nvidia’s next-generation chips and systems, Azure’s data center infrastructure and Anthropic’s Claude models, creating a loop that locks billions in spending, and influence, across all three.
Anthropic has committed about $30 billion to purchase compute on the Azure platform, an amount large enough to secure 1 gigawatt of capacity. Meanwhile, Nvidia has pledged up to $10 billion and Microsoft up to $5 billion in direct investment in Anthropic. Microsoft said Claude will be trained and deployed on Azure using Nvidia accelerators. And these commitments come with a deeper technical alignment: Nvidia and Anthropic will collaborate on design and engineering, forming a “deep technology partnership.”
Overall, this means Nvidia gets deeper visibility into how frontier systems are built, Microsoft gets Claude as its second flagship model and can wean itself off its OpenAI partnership, and Anthropic gets the industrial-scale infrastructure it needs to continue growing.
“We’re going to be customers of each other more and more,” Microsoft CEO Satya Nadella said in a video featuring the three companies’ CEOs. “We will use Anthropic models, they will use our infrastructure and we will market together to help our customers realize the value of AI.” Dario Amodei, CEO of Anthropic, added: “We are very excited to obtain additional capacity that we can use both to train our models…and to sell together.”
Investors, however, reacted more cautiously. Shares of Microsoft and Nvidia both slipped on the news, down about 3% and 2% respectively at midday, reflecting Wall Street’s broader skepticism toward big AI bets as investors question whether the current wave of infrastructure spending is running ahead of near-term returns.
Tuesday’s deal comes amid aggressive infrastructure expansion — and a market trying to figure out how long AI development can run at full capacity. Demand for high-end GPUs continues to outstrip supply; Microsoft is adding capacity to its data centers at a rate normally associated with national infrastructure projects; and Anthropic scaled Claude quickly enough to secure multi-year cloud commitments from multiple hyperscalers.
A three-way deal of this size gives each of them a clearer runway – and sends a message to rivals about who plans to dominate the next training cycle.
The partnership is part of an increasingly aggressive wave of AI alliances and the broader realignment occurring across the model ecosystem. OpenAI moved much of its cloud pipeline to Amazon earlier this month as part of a multi-year, $38 billion deal with AWS. Google continues to build out its Gemini ecosystem across its consumer and enterprise products. Meta is still pursuing its open-source strategy with Llama. These moves have pushed competition into a new phase, in which model labs, chipmakers and hyperscalers lock in together under long-term contracts rather than short-cycle infrastructure purchases.
Nadella said in the CEO video that the partnership only works if companies stop treating capacity as something to be hoarded rather than shared, saying the industry needs to “move beyond any kind of zero-sum narrative or win-win hype.” But the money currently flowing through AI still rewards scale over cooperation, and the biggest players continue to structure deals to secure their own lanes first.
Anthropic enters the arrangement with the most to gain. The company already has multibillion-dollar agreements with Google and Amazon (which will remain Anthropic’s primary cloud provider and training partner); it now adds a third hyperscaler with direct ownership ties. Claude’s trajectory has been steep (a year ago the startup was fighting for cloud capacity), but sustaining that momentum requires access to massive, predictable compute and the capital to secure it. The Azure commitment delivers both: it lets Anthropic train larger and more frequent model iterations, strengthens its business case, and aligns its roadmap with two of the industry’s most powerful players.
Nvidia CEO Jensen Huang called the collaboration a “dream come true,” praised Anthropic’s “seminal” work in AI safety, and said Nvidia engineers “love Claude Code.” He framed the relationship as a chance to move Anthropic’s workloads onto Blackwell and then Vera Rubin, with the goal of delivering an order-of-magnitude performance gain that could reframe cost and speed for frontier models.
“The world is just realizing where we are in the AI journey,” he said. “What’s really cool is they’re going to need a lot more Azure Compute resources, and they’re going to need a lot more GPUs, and we’re just excited to partner with you, Dario, to bring AI to the world.”
Right now, Nvidia’s chips sit at the center of almost every frontier model, Azure’s capacity is growing at a breakneck pace, and Claude’s rise has given Anthropic the leverage to win long-term backing from both sides. The partnership effectively builds a protected lane for all three, one that keeps competitors guessing and capital flowing.
The deal puts the three companies at the center of the AI race. Each brings a different part of the stack, and each now has multi-year commitments linking its AI roadmap to the other two. As rivals deepen their own alliances, the Microsoft-Nvidia-Anthropic bloc shows how far the industry is willing to go to secure the infrastructure behind the next generation of AI.
