ChatGPT developer OpenAI has been musing over building its own AI chips for a while now, but it looks like the project is definitely going ahead, as United Daily News reports the company is paying TSMC to make the new chips. But rather than using its current N4 or N3 process nodes, OpenAI has booked manufacturing slots for the 1.6 nm, so-called A16, process node.
The report from UDN (via Wccftech) doesn't provide any concrete evidence for this claim, but the Taiwanese news outlet is usually fairly accurate when it comes to tech forecasts like this. At the moment, OpenAI spends huge amounts of money to run ChatGPT, partly because of the very high cost of Nvidia's AI servers.
Nvidia's hardware dominates the industry, with Alphabet, Amazon, Meta, Microsoft, and Tesla spending hundreds of millions of dollars on its Hopper H100 and Blackwell superchips. While designing and developing a competitive AI chip is just as expensive, once you have a working product, the ongoing costs are much lower.
UDN suggests that OpenAI had originally planned to use TSMC's relatively low-cost N5 process node to manufacture its AI chip, but that has apparently been dropped in favour of a system that is still in development: A16 will be the successor to N2, which itself isn't yet being used to mass-produce chips.
TSMC states that A16 is a 1.6 nm node, but the number itself is fairly meaningless now. It will use the same gate-all-around (GAAFET) nanosheet transistors as N2, but it will be the first TSMC node to use backside power delivery, known as Super Power Rail.
But why would OpenAI want to use something that's still a few years away from being ready for bulk orders? UDN's report suggests that OpenAI has approached Broadcom and Marvell to handle the development of the AI chips, but neither company has much experience with TSMC's cutting-edge nodes, as far as I know.
One possibility is that the whole project is being done in collaboration with Apple, which uses ChatGPT in its own AI system. That's currently powered via Google's AI servers, but given how much Apple prefers using its own technology these days, I wouldn't be surprised if it was also looking to develop new AI chips.
With mounting losses and an AI market full of rivals, OpenAI's future looks somewhat uncertain, though rumours of investment from Apple and Nvidia could help turn things around. The regular influx of millions of dollars of funding is certainly helping it maintain its value, for example.
But if OpenAI is ultimately bought by Microsoft, Meta, or even Nvidia (or perhaps part-owned by all three), then it's unlikely that the OpenAI chip project would ever be completed, as Nvidia certainly wouldn't want to lose any valuable sales.
Even if it does come to fruition, don't expect it to make any performance headlines outside of handling GPT, simply because that's the nature of all ASICs (application-specific integrated circuits): they're designed to do one job very well, but that's it. OpenAI's chip might be great for OpenAI, but few other companies would be interested.