When it comes to AI compute, the big cloud platform providers are chip-agnostic, adopting processors from the big chip companies (mostly NVIDIA) alongside their own custom-designed AI chips, known as ASICs. OpenAI is now adopting this strategy, with a vengeance.
Not only has OpenAI carved out massive GPU deals with NVIDIA and AMD in recent weeks, now it's collaborating with Broadcom to design its own AI compute processors. The two companies announced a collaboration to deploy 10 gigawatts of custom AI accelerators.
OpenAI will design the accelerators and systems, which will be developed and deployed in partnership with Broadcom. By designing its own chips and systems, the company said, it can embed what it has learned from developing frontier models and products directly into the hardware, “unlocking new levels of capability and intelligence.”
The racks, scaled with Ethernet and other connectivity solutions from Broadcom, are intended to meet surging demand for AI, with deployments across OpenAI’s facilities and partner data centers.
For Broadcom, this collaboration reinforces the importance of custom accelerators and the choice of Ethernet as the technology for scale-up and scale-out networking in AI data centers.
OpenAI has grown to more than 800 million weekly users, with growing enterprise adoption.
OpenAI and Broadcom have long-standing agreements covering the co-development and supply of the AI accelerators. The two companies have now signed a term sheet to deploy racks incorporating those accelerators and Broadcom networking solutions.
“Partnering with Broadcom is a critical step in building the infrastructure needed to unlock AI’s potential and deliver real benefits for people and businesses,” said Sam Altman, co-founder and CEO of OpenAI. “Developing our own accelerators adds to the broader ecosystem of partners all building the capacity required to push the frontier of AI to provide benefits to all humanity.”
“Broadcom’s collaboration with OpenAI signifies a pivotal moment in the pursuit of artificial general intelligence,” said Hock Tan, President and CEO of Broadcom. “OpenAI has been in the forefront of the AI revolution since the ChatGPT moment, and we are thrilled to co-develop and deploy 10 gigawatts of next generation accelerators and network systems to pave the way for the future of AI.”
“Our collaboration with Broadcom will power breakthroughs in AI and bring the technology’s full potential closer to reality,” said OpenAI co-founder and President, Greg Brockman. “By building our own chip, we can embed what we’ve learned from creating frontier models and products directly into the hardware, unlocking new levels of capability and intelligence.”
“Our partnership with OpenAI continues to set new industry benchmarks for the design and deployment of open, scalable and power-efficient AI clusters,” said Charlie Kawwas, Ph.D., President of the Semiconductor Solutions Group at Broadcom. “Custom accelerators combine remarkably well with standards-based Ethernet scale-up and scale-out networking solutions to provide cost and performance optimized next generation AI infrastructure. The racks include Broadcom’s end-to-end portfolio of Ethernet, PCIe and optical connectivity solutions, reaffirming our AI infrastructure portfolio leadership.”