Microsoft and OpenAI Are Rewriting Their Deal, and It Actually Makes Sense


Microsoft and OpenAI just announced they’re reworking their partnership. Again. But this time, the changes feel less like a power grab and more like two grown-up companies figuring out what actually works.

The original deal from 2019 was a mess of exclusivity clauses, vague revenue splits, and mutual dependencies that made sense when OpenAI was a scrappy research lab and Microsoft was desperate for AI talent. Fast forward to 2026, and both sides have changed. OpenAI is a multi-billion-dollar operation running its own infrastructure. Microsoft has Azure, Copilot, and a dozen other AI products that don’t need a single OpenAI API call to function.

So what’s different now? The headline is that Microsoft no longer has exclusive rights to OpenAI’s models. That’s a big shift. Previously, Microsoft had a lock on commercializing GPT models through Azure. Now OpenAI can shop its tech around to other cloud providers, which honestly was inevitable. No company building foundation models wants to be married to one cloud vendor forever, especially when AWS and Google Cloud are throwing money at AI startups.

But don’t read this as a breakup. Microsoft still has a significant financial stake in OpenAI, and the revenue-sharing model has been simplified. Instead of a complex tiered system that required quarterly audits and lawyers on speed dial, it’s now a straightforward percentage of API revenue. Microsoft also gets continued access to OpenAI’s frontier models for its own products. Copilot isn’t going anywhere.

The term is longer too. The original agreement had rolling clauses that created uncertainty every couple of years. Now both sides have committed to at least a decade, which gives engineering teams breathing room to plan product roadmaps without worrying about the rug getting pulled.

One thing that caught my attention: the compute provisions are now uncapped. Microsoft had previously limited how much Azure compute OpenAI could use, presumably to keep costs predictable. That cap is gone. OpenAI can scale its training and inference workloads as needed, and Microsoft gets paid for the hardware. This is a bigger concession than I expected, given how much compute these models consume. But I suppose Microsoft decided it’s better to have OpenAI burning through Azure GPUs than moving to Google TPUs.

The original announcement was full of corporate-speak about “continued innovation” and “long-term clarity,” but the reality is simpler: both companies realized the old deal was too restrictive for the world they now live in. OpenAI needs flexibility to compete with Anthropic, Google, and Meta. Microsoft needs to sell Azure to everyone, not just OpenAI customers. This new structure lets them do both without pretending they’re best friends.

If there’s a loser here, it’s probably the smaller cloud providers who were hoping OpenAI would become a multi-cloud neutral player. They’ll get access to the models, sure, but Microsoft still has preferential pricing and first look at new capabilities. It’s not a level playing field, but it’s more open than it was.

I’ve been watching this partnership since the early days, and this is the most sensible version of it yet. No more awkward exclusivity, no more quarterly renegotiations. Just two companies that have figured out how to coexist profitably. Let’s see if it lasts another seven years.
