Cerebras Systems has unveiled its Wafer Scale Engine 3 (WSE-3), dubbed the "fastest AI chip in the world."
The WSE-3, which powers the Cerebras CS-3 AI supercomputer, reportedly offers twice the performance of its predecessor, the WSE-2, at the same power consumption and price.
The chip is capable of training AI models with up to 24 trillion parameters, a significant leap over its predecessor.
The WSE-3 is built on TSMC's 5nm process and features 44GB of on-chip SRAM. It packs four trillion transistors and 900,000 AI-optimized compute cores, delivering peak AI performance of 125 petaflops, the theoretical equivalent of about 62 Nvidia H100 GPUs.
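As a rough sanity check on that comparison, dividing the WSE-3's quoted peak by a single H100's peak FP16 Tensor Core throughput (with sparsity, roughly 2 petaflops; this per-GPU figure is an assumption, not stated in the article) lands close to the quoted number:

```python
# Back-of-envelope check of the H100 comparison.
# Assumption (not from the article): ~1.98 petaflops of FP16 Tensor Core
# throughput per H100 with sparsity; dense FP16 would roughly double the count.
wse3_peak_pflops = 125.0          # WSE-3 peak AI performance, from the article
h100_fp16_sparse_pflops = 1.979   # assumed per-GPU peak

equivalent_h100s = wse3_peak_pflops / h100_fp16_sparse_pflops
print(f"~{equivalent_h100s:.0f} H100-equivalents")  # ~63, in line with the quoted ~62
```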
The CS-3 supercomputer, powered by the WSE-3, is designed to train next-generation AI models up to 10 times larger than GPT-4 and Gemini. With a memory system of up to 1.2 petabytes, it can reportedly store 24-trillion-parameter models in a single logical memory space, simplifying training workflows and boosting developer productivity.
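A quick sizing sketch shows why 1.2 petabytes is plausible for models at that scale; the bytes-per-parameter figure below is an assumption about typical mixed-precision training state, not a number from the article:

```python
# Rough sizing sketch: does a 24-trillion-parameter model fit in 1.2 PB?
# Assumption (not from the article): ~16 bytes per parameter for mixed-precision
# training state (FP16 weights + FP32 master copy + Adam optimizer moments).
params = 24e12
bytes_per_param = 16
training_state_pb = params * bytes_per_param / 1e15
print(f"~{training_state_pb:.2f} PB of training state")  # ~0.38 PB, well under 1.2 PB
```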