OpenAI has taken another major step in shaping the future of artificial intelligence by partnering with semiconductor giant Broadcom to design and develop its own custom AI chips. The move signals OpenAI’s growing ambition to control not only its AI models and software but also the powerful hardware that fuels them.
The two California companies didn’t disclose the financial terms of the deal but said they will start deploying the new racks of customized “AI accelerators” late next year.
It’s the latest big deal between OpenAI, maker of ChatGPT, and the companies building the chips and data centers required to power AI.
According to Reuters, this is a multi-year strategic collaboration that will see OpenAI take the lead in chip and system design, while Broadcom focuses on manufacturing, networking, and large-scale deployment. The companies plan to begin rolling out the first generation of these custom AI accelerators in 2026, expanding toward 10 gigawatts of compute capacity by 2029.
The global demand for AI computing power has skyrocketed since the release of OpenAI’s ChatGPT in 2022. The company’s reliance on external chip suppliers such as Nvidia and AMD has come with high costs, long wait times, and limited flexibility. By working directly with Broadcom, OpenAI aims to reduce dependency on third-party suppliers, gain more control over its hardware, and build chips that are specifically optimized for its own large language models.
“What’s real about this announcement is OpenAI’s intention of having its own custom chips,” said analyst Gil Luria, head of technology research at D.A. Davidson. “The rest is fantastical. OpenAI has made, at this point, approaching $1 trillion of commitments, and it’s a company that only has $15 billion of revenue.”
This partnership will enable OpenAI to fine-tune chips for its unique workloads, improving efficiency, cutting energy consumption, and increasing processing speed. That, in turn, could help OpenAI deliver faster responses and more powerful AI tools to users worldwide.
The deal also helps OpenAI diversify its chip supply, reducing its reliance on Nvidia’s GPUs, which currently dominate the market. While Nvidia remains a key partner, supplying billions of dollars’ worth of hardware to OpenAI, having a custom solution built in partnership with Broadcom adds resilience and long-term flexibility to OpenAI’s compute strategy.
One of the most remarkable aspects of this collaboration is how AI itself is being used to design chips. OpenAI President Greg Brockman revealed that the company’s own models have already been used to optimize chip layouts—an engineering process that typically takes human designers weeks to complete. These AI-generated optimizations have shown potential to make designs more compact and efficient, accelerating the overall development cycle.
Brockman clarified that the AI isn’t doing something entirely beyond human capability; it’s simply doing it faster and more efficiently, freeing engineers to focus on higher-level innovation. This approach reflects a broader trend in the industry: using AI not only to build products but to enhance the engineering process itself.
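OpenAI has not described how its models optimize chip layouts, but the underlying problem, searching a placement space to shorten wiring, can be sketched with a classic simulated-annealing placer. The cells, nets, and parameters below are purely illustrative, not anything from the Broadcom collaboration:

```python
import math
import random

def wire_length(placement, nets):
    """Half-perimeter wire length: for each net, the half-perimeter
    of the bounding box enclosing its cells' (x, y) positions."""
    total = 0.0
    for net in nets:
        xs = [placement[c][0] for c in net]
        ys = [placement[c][1] for c in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def anneal_placement(cells, nets, grid, steps=5000, seed=0):
    """Repeatedly swap two cells; keep swaps that shorten wiring, and
    occasionally keep worse ones (more often while the "temperature"
    is high) so the search can escape local minima."""
    rng = random.Random(seed)
    slots = [(x, y) for x in range(grid) for y in range(grid)]
    placement = dict(zip(cells, rng.sample(slots, len(cells))))
    cost = wire_length(placement, nets)
    temp = 10.0
    for _ in range(steps):
        a, b = rng.sample(cells, 2)
        placement[a], placement[b] = placement[b], placement[a]
        new_cost = wire_length(placement, nets)
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / temp):
            cost = new_cost          # accept the swap
        else:
            placement[a], placement[b] = placement[b], placement[a]  # undo
        temp = max(0.01, temp * 0.999)  # cool down gradually
    return placement, cost

# Toy example: six cells connected by four nets on a 4x4 grid.
cells = list("ABCDEF")
nets = [("A", "B"), ("B", "C", "D"), ("D", "E"), ("E", "F", "A")]
final_placement, final_cost = anneal_placement(cells, nets, grid=4)
```

Production placement tools work on millions of cells with far richer cost models (timing, congestion, power), but the loop above captures the shape of the search that AI models are now being used to guide or replace.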
OpenAI’s decision to move into custom hardware has wide-ranging implications for the AI ecosystem. It shows that leading AI firms are moving toward vertical integration, controlling every layer of the technology stack, from data to software to silicon. Companies like Google, Amazon, and Meta have already invested in developing their own AI chips, and now OpenAI is joining that race.
Industry experts believe this partnership could gradually challenge Nvidia’s dominance in AI hardware, though that may take years. Nvidia’s stronghold lies not only in hardware but also in its mature software ecosystem: CUDA, libraries, and developer tools that competitors still struggle to match. Nevertheless, OpenAI’s collaboration with Broadcom represents a strategic move to carve out independence and reduce reliance on a single supplier.
Analysts estimate that scaling AI infrastructure to this level (10 gigawatts of compute) could require tens of billions of dollars in investment. Even so, OpenAI’s growing ecosystem, which includes partnerships with Microsoft, AMD, and Nvidia, provides the foundation needed to sustain this expansion.
This partnership goes beyond chips; it’s a statement about the future of AI innovation. By integrating hardware and software design, OpenAI hopes to build systems that are more powerful, energy-efficient, and tightly optimized for next-generation AI models. The move could also inspire a wave of AI-driven hardware design, where algorithms assist in every stage of creation, from architecture planning to physical layout.
However, challenges remain. Designing custom chips at scale is extremely complex, involving yield management, cooling, power distribution, and data center integration. There’s also the risk that rapid model evolution might outpace hardware development cycles. Despite these hurdles, OpenAI’s long-term commitment suggests confidence in its ability to innovate across both hardware and software.
This marks one of the boldest infrastructure expansions in OpenAI’s history, positioning the company as a true full-stack AI provider, one that builds not only intelligent systems but also the machines that power them.
OpenAI’s collaboration with Broadcom is more than a business deal. It’s a vision for the future of AI infrastructure. By combining Broadcom’s hardware expertise with OpenAI’s deep understanding of AI workloads, the partnership could redefine how AI systems are built and scaled. As AI demand continues to grow globally, this move gives OpenAI the flexibility, efficiency, and control it needs to stay ahead in the competitive landscape.
Whether these chips will eventually challenge Nvidia’s dominance remains to be seen, but one thing is certain: OpenAI is no longer just creating AI models; it’s building the foundation on which the next generation of artificial intelligence will run.