Microsoft will continue purchasing AI chips from companies like Nvidia and AMD even after introducing its own in-house processor, according to CEO Satya Nadella.
This week, Microsoft began deploying its first internally developed AI chips at one of its data centers, with additional rollouts planned over the coming months. The new chip, called Maia 200, is built specifically for AI inference — the demanding task of running trained AI models in real-world applications. Microsoft claims the processor delivers higher performance than Amazon’s latest Trainium chips and Google’s newest Tensor Processing Units.
Major cloud providers have increasingly turned to designing their own AI hardware, driven largely by the high cost and limited availability of Nvidia’s most advanced GPUs. Despite this shift, Nadella emphasized that Microsoft has no intention of abandoning external chip suppliers.
He highlighted Microsoft’s strong partnerships with Nvidia and AMD, noting that innovation is happening on all sides. Nadella added that developing custom hardware does not mean the company will rely exclusively on its own technology, stressing the importance of flexibility rather than full vertical integration.
The Maia 200 chip will be used internally by Microsoft’s Superintelligence team, which is responsible for building the company’s most advanced AI models. The team is led by Mustafa Suleyman, a co-founder of DeepMind, the AI lab later acquired by Google. Microsoft is also developing its own AI models as part of a longer-term strategy to reduce dependence on external providers such as OpenAI and Anthropic.
In addition to internal use, Maia 200 will support OpenAI models running on Microsoft’s Azure cloud platform. However, access to cutting-edge AI hardware remains highly competitive, both for customers and for internal teams.
Suleyman acknowledged this challenge in a post on X, expressing excitement that his team would be the first to use the Maia 200 chip as they work on next-generation AI systems.