OpenAI Collaborates with Chipmakers for In-House AI Chip Development

Oct 29, 2024

OpenAI is partnering with Broadcom (AVGO) and TSMC (TSM) to develop its first in-house AI chips. The initiative is aimed at supporting the company's AI systems while it continues to rely on chips from AMD (AMD) and Nvidia (NVDA) to meet growing infrastructure needs. OpenAI has explored a range of strategies to diversify its chip supply and reduce costs, including building its own network of chip-fabrication foundries, a plan it ultimately shelved as too costly and time-consuming.

OpenAI's approach highlights a strategy of leveraging industry partnerships alongside a mix of internal and external chips to secure supply and manage costs, similar to larger competitors such as Amazon (AMZN), Meta (META), Google (GOOGL), and Microsoft (MSFT). Even as it develops in-house silicon, OpenAI will keep procuring chips from multiple manufacturers, a sourcing model that could reshape supplier dynamics across the technology sector.

OpenAI's collaboration with Broadcom on its first AI chip, designed for inference, has been underway for months. While demand today is greater for training chips, analysts expect demand for inference chips to surge as AI applications become more widespread. OpenAI is still deciding whether to develop or acquire other elements of its chip design in-house or to work with additional partners.

The company has assembled a chip team of about 20 people, led by engineers who previously worked on Google's Tensor Processing Units, including Thomas Norrie and Richard Ho. Notably, OpenAI has reserved manufacturing capacity at TSMC for 2026 to produce its first custom-designed chip, although that timeline could shift. Nvidia's GPUs currently hold more than 80% of the AI chip market, but shortages and rising costs have pushed major customers such as Microsoft, Meta, and OpenAI to seek alternatives.

OpenAI plans to use AMD's chips through Microsoft's Azure cloud, a sign that AMD's new MI300X chips are winning a share of a market dominated by Nvidia; AMD has estimated $4.5 billion in 2024 sales for the chip. Training AI models and operating services like ChatGPT are expensive: OpenAI expects revenue of $3.7 billion and a loss of $5 billion this year. Computing expenses, spanning hardware, electricity, and cloud services, are its largest cost, driving efforts to optimize usage and diversify suppliers.

OpenAI remains cautious about hiring away Nvidia employees, seeking to preserve a good relationship with the chipmaker, particularly as it works to secure access to Nvidia's next-generation Blackwell chips. Nvidia declined to comment on the matter.

Disclosures

I/We may personally own shares in some of the companies mentioned above. However, those positions are not material to either the company or to my/our portfolios.