An AI computer from Cerebras Systems (Los Altos, CA, www.cerebras.net) that is powered by its Wafer Scale Engine (WSE). Introduced in 2019 and designed for machine learning, the CS-1 computer uses an 8.5-inch-square wafer containing 84 processing tiles. The CS-2 debuted in 2021.
Touted as the "world's largest chip" and the "industry's fastest AI accelerator," a Cerebras system delivers the equivalent of hundreds of GPUs. The wafer in the CS-2 contains 2.6 trillion transistors comprising 850,000 Sparse Linear Algebra Compute (SLAC) cores and 40GB of on-chip memory, and the system connects to the outside world through a dozen 100 Gigabit Ethernet ports.
How Did Cerebras Do It?
Wafer scale integration was attempted decades ago but never materialized. Even today, no matter the process, the manufacturer, or how small the chips are, every wafer contains bad chips that must be discarded. Making an entire wafer work as a single chip was not achieved until Cerebras devised failover circuits that provide automatic redundancy. See wafer scale integration.
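The yield problem above can be made concrete with a little arithmetic. The sketch below is purely illustrative and is not Cerebras's actual scheme: the defect rate and spare-core count are hypothetical numbers chosen for demonstration. It shows why a wafer with hundreds of thousands of cores is virtually guaranteed to contain defects, and how mapping defective cores to spares keeps the whole wafer usable.

```python
import random

# Illustrative sketch (NOT Cerebras's actual design): why wafer-scale
# integration needs redundancy, and how spare cores can map out defects.
# DEFECT_RATE and spare_cores are hypothetical values for demonstration.

CORES = 850_000          # cores on the CS-2 wafer (per the published spec)
DEFECT_RATE = 1e-5       # hypothetical per-core defect probability

# Probability that every single core is good -- effectively zero at wafer scale.
p_all_good = (1 - DEFECT_RATE) ** CORES
print(f"P(zero defects on the wafer) = {p_all_good:.4f}")

def map_out_defects(total_cores, spare_cores, defect_rate, seed=0):
    """Count randomly occurring defective cores and remap each one to a
    spare; the wafer is usable as long as the spares are not exhausted."""
    rng = random.Random(seed)
    defects = sum(1 for _ in range(total_cores) if rng.random() < defect_rate)
    return defects <= spare_cores, defects

usable, defects = map_out_defects(CORES, spare_cores=1_000,
                                  defect_rate=DEFECT_RATE)
print(f"defective cores: {defects}, wafer usable: {usable}")
```

With these assumed numbers, the chance of a defect-free wafer is a small fraction of a percent, yet the expected defect count (around 8 or 9 cores) is tiny compared with a modest pool of spares, which is why redundancy turns an unmanufacturable part into a reliable one.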
The Cerebras Wafer Chip
Co-founder and chief architect Sean Lie holding the chip, which is housed in a custom water-cooled enclosure. (Image courtesy of Cerebras Systems, www.cerebras.net)