OpenAI, the developer behind ChatGPT and other artificial intelligence (AI) models, has struck a multiyear computing deal with Cerebras Systems, an emerging alternative to Nvidia.
The agreement, valued at around $10 billion, will provide up to 750 megawatts (MW) of high-performance computing capacity between 2026 and 2028, aiming to shorten AI response times and strengthen the company’s infrastructure.
The AI developer will integrate Cerebras’ systems to support large language models (LLMs) and other AI applications that demand significant computing power.
The deal covers a phased deployment of up to 750 MW, roughly the power output of a small nuclear plant. This capacity is intended to let LLMs run faster and more efficiently, improving performance across ChatGPT and other AI services.
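The 750 MW figure can be put in rough perspective with some quick arithmetic. The reactor values below are generic ballpark numbers for illustration, not figures from the deal or from any specific plant:

```python
# Rough comparison of the deal's 750 MW of computing capacity with
# typical nuclear reactor output. Reactor figures are ballpark
# assumptions, not tied to any specific plant.
deal_capacity_mw = 750
small_reactor_mw = 300    # small modular reactor, ballpark
large_reactor_mw = 1000   # typical large reactor, ballpark

print(deal_capacity_mw / small_reactor_mw)  # → 2.5 small reactors
print(deal_capacity_mw / large_reactor_mw)  # → 0.75 of a large reactor
```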
Cerebras' Wafer-Scale Engine (WSE) turns an entire silicon wafer into a single massive chip, unlike traditional graphics processing units (GPUs), which are diced from a wafer into many smaller processors. This approach reduces the bottleneck between memory and computation, allowing AI models to produce real-time responses (inference) with lower latency. The move also reduces OpenAI's reliance on Nvidia hardware while increasing infrastructure diversity and energy efficiency.
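Why the memory bottleneck matters can be illustrated with a back-of-the-envelope model: when token generation is memory-bandwidth bound, each generated token requires reading roughly all model weights once, so latency per token scales with weight size divided by memory bandwidth. All figures below are illustrative assumptions, not Cerebras or OpenAI specifications:

```python
# Back-of-the-envelope: token-generation latency when inference is
# memory-bandwidth bound. Per generated token, roughly all model
# weights must be read once, so:
#   time_per_token ≈ weight_bytes / memory_bandwidth
# All figures are illustrative assumptions, not vendor specs.

def time_per_token_ms(params_billions: float, bytes_per_param: float,
                      bandwidth_tb_s: float) -> float:
    """Estimate milliseconds per generated token."""
    weight_bytes = params_billions * 1e9 * bytes_per_param
    bandwidth_bytes_s = bandwidth_tb_s * 1e12
    return weight_bytes / bandwidth_bytes_s * 1e3

# A hypothetical 70-billion-parameter model with 16-bit (2-byte) weights:
gpu_like = time_per_token_ms(70, 2, 3.0)     # ~3 TB/s off-chip memory
wafer_like = time_per_token_ms(70, 2, 1000)  # ~1000 TB/s on-wafer memory

print(f"off-chip memory: {gpu_like:.2f} ms/token")
print(f"on-wafer memory: {wafer_like:.3f} ms/token")
```

Under these assumed numbers, the higher on-chip bandwidth cuts per-token latency by orders of magnitude, which is the intuition behind wafer-scale inference speedups.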
Integrating Cerebras’ low-latency systems will significantly shorten response times for user queries.
The upgrade is expected to enhance natural language processing (NLP), visual content generation, code creation, and AI agents. Faster inference will improve the experience for users of ChatGPT and other AI services.
Deployment is scheduled to be completed by 2028, strengthening both the resilience and efficiency of the company’s AI infrastructure.
The OpenAI-Cerebras partnership is expected to increase competition in the AI computing market while promoting alternatives to GPU-centric solutions.
For Cerebras, the deal provides a substantial revenue boost and a high-profile reference ahead of a potential initial public offering (IPO). The collaboration also encourages other AI infrastructure providers to innovate, enhancing technological diversity and competition across the industry.
OpenAI has historically relied heavily on Nvidia GPUs. This agreement reflects a broader strategy to reduce computing costs and expand infrastructure options through custom chip solutions.
Alongside projects with Broadcom and Advanced Micro Devices (AMD), the partnership helps ensure that AI models operate reliably while reducing supply chain risk.