An AI computer from Cerebras Systems, Los Altos, CA, www.cerebras.net, powered by the company's Wafer Scale Engine (WSE). Introduced in 2019 and designed for machine learning, the CS-1 uses an 8.5-inch-square wafer containing 84 processing tiles.
Touted as the "world's largest chip" and the "industry's fastest AI accelerator," the Cerebras system is claimed to be equivalent to hundreds of GPUs. In total, the wafer contains more than 1.2 trillion transistors, 18GB of on-chip memory, a dozen 100 Gigabit Ethernet ports and 400,000 cores specialized for linear algebra.
How Did Cerebras Do It?
Attempted decades ago, wafer scale integration never materialized. Even today, no matter the process, the manufacturer, or how small the chips are, every wafer has bad chips that must be discarded. Making an entire wafer work as one chip was never achieved until Cerebras figured out how to build failover circuits that provide automatic redundancy, routing around defective cores. See wafer scale integration.
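The idea behind such redundancy can be sketched in a few lines. The following is an illustrative model only, not Cerebras's actual circuitry: spare columns of cores are fabricated on the wafer, and a mapping routes each logical core position to a physical core that passed manufacturing test.

```python
# Illustrative sketch (hypothetical, not Cerebras's real scheme): map a
# logical core grid onto a physical grid that includes spare columns,
# skipping columns with cores that failed manufacturing test.

def build_core_map(physical_cols, logical_cols, defective):
    """Map each logical column index to a working physical column.

    physical_cols: total columns fabricated (includes spares)
    logical_cols:  columns that software actually sees
    defective:     set of physical column indices that failed test
    """
    working = [c for c in range(physical_cols) if c not in defective]
    if len(working) < logical_cols:
        raise RuntimeError("not enough spare columns to cover defects")
    # Logical column i is served by the i-th surviving physical column.
    return {i: working[i] for i in range(logical_cols)}

# Example: 10 physical columns, 8 logical ones, columns 2 and 5 defective.
core_map = build_core_map(10, 8, {2, 5})
print(core_map)  # {0: 0, 1: 1, 2: 3, 3: 4, 4: 6, 5: 7, 6: 8, 7: 9}
```

Because every wafer has some defects, building in more cores than are exposed to software lets any individual wafer present a uniform, fully working grid.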
The Cerebras Wafer Chip
Founder and chief architect Sean Lie holds the chip, which is housed in a custom water-cooled case. (Image courtesy of Cerebras Systems, www.cerebras.net)