The World's Largest Computer Chip That's Accelerating AI

Imagine a thin, square sheet of silicon the size of an iPad, glistening under the cleanroom lights. Etched onto that 21.5 × 21.5 cm surface are 850,000 tiny cores with billions of connections between them – forming the world's largest computer chip ever manufactured.

This marvel of modern engineering is Cerebras Systems' Wafer-Scale Engine, an audacious attempt to provide the infrastructure for the next generation of artificial intelligence algorithms.


The Wafer-Scale Engine contains 850,000 cores across a 46,225 mm² surface, making it bigger than any chip ever produced. Image credit: Cerebras Systems

As neural networks grow more complex and data-intensive, legacy computing architectures struggle to keep pace. Graphics cards and central processing units, originally built for gaming and office work, were never optimized for AI's grueling number crunching. Something new and bigger was needed – a platform engineered from the ground up to deliver the parallel processing horsepower modern AI demands.

Enter Cerebras and the Wafer-Scale Engine. By condensing an entire supercomputer cluster onto one mammoth piece of silicon, Cerebras removes the bottlenecks holding AI advancement hostage. The results speak for themselves:

Specs                   Wafer-Scale Engine   Nvidia A100 GPU
Transistor Count        2.6 trillion         54 billion
On-Chip Memory          40 GB                40 GB
Memory Bandwidth        20 PB/sec            1.6 TB/sec
Manufacturing Process   7 nm                 7 nm
Die Area                46,225 mm²           826 mm²

With 100x more cores and 12,500x more memory bandwidth than even the top-shelf Nvidia A100 data center GPU, the WSE brings extreme parallelization to AI workloads. This allows complex neural networks to be trained in minutes rather than weeks or months.
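Those headline ratios can be checked back-of-the-envelope against the table above. In this sketch, the A100's CUDA core count is an assumption drawn from Nvidia's public specifications, since it does not appear in the table:

```python
# Back-of-the-envelope check of the WSE vs. A100 ratios quoted above.
# Table figures, with units normalized to TB/sec for bandwidth.
wse_cores = 850_000
a100_cores = 6_912            # assumption: CUDA cores on an Nvidia A100

wse_bandwidth_tb_s = 20_000   # 20 PB/sec expressed in TB/sec
a100_bandwidth_tb_s = 1.6     # 1.6 TB/sec

core_ratio = wse_cores / a100_cores            # roughly 123x, i.e. "100x more"
bandwidth_ratio = wse_bandwidth_tb_s / a100_bandwidth_tb_s   # 12,500x

print(f"~{core_ratio:.0f}x the cores")
print(f"~{bandwidth_ratio:,.0f}x the memory bandwidth")
```

The core ratio works out to roughly 123x, which the article rounds down to "100x more cores"; the bandwidth ratio is exactly the 12,500x claimed.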

Why Bigger is Better for AI Chips

Artificial intelligence lives and dies by data. Whether training image recognition models or optimizing recommendations, feeding more information to neural networks consistently improves their accuracy.

However, fitting gigabytes of examples into memory is only half the battle. Once loaded, each data point must be cycled through potentially billions of mathematical operations to fine-tune the network's parameters.

This compute-intensive process demands massively parallel infrastructure – splitting the number crunching over thousands of cores working in unison. More cores joined by fast interconnects mean faster training.
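A minimal sketch of what "splitting the number crunching" means in practice: in data-parallel training, each core computes gradients on its own shard of the batch, and the shards are then averaged into a single update. Everything here is illustrative plain Python, not tied to any Cerebras API:

```python
# Toy data-parallel training step: each "core" handles a shard of the
# batch, then the per-shard gradients are averaged (an all-reduce).
def local_gradients(shard):
    # Stand-in for a forward/backward pass: gradient = mean of the shard.
    return sum(shard) / len(shard)

def parallel_step(batch, n_cores):
    shard_size = len(batch) // n_cores
    shards = [batch[i * shard_size:(i + 1) * shard_size]
              for i in range(n_cores)]
    grads = [local_gradients(s) for s in shards]  # runs on all cores at once
    return sum(grads) / len(grads)                # average the gradients

batch = list(range(16))          # 16 illustrative training examples
print(parallel_step(batch, 4))   # same result as one core, computed 4-way
```

The averaged result matches what a single core would compute over the whole batch – the speedup comes from the shards being processed simultaneously, which is exactly why core count and interconnect speed dominate training time.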

Traditional architectures struggle to scale in this parallel fashion due to physical limitations. High-speed links between standalone GPU chips burn energy and add latency moving data back and forth. At a certain point, cramming more cores into a cluster plateaus as communication bottlenecks emerge.

By constructing an entire computing system on one expansive chip, Cerebras sidesteps these pitfalls. Data travels millimeters rather than meters between cores, enabling seamless coordination between them. Think of it as an entire supercomputer condensed onto a single wafer.
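The "millimeters rather than meters" advantage can be made concrete with a toy cost model: the time to move a block of activations or gradients at the two bandwidth figures from the table. The payload size is an arbitrary assumption, and real systems involve topology and latency effects this ignores – it only illustrates the orders of magnitude:

```python
# Illustrative only: time to move data at on-wafer vs. off-chip bandwidth.
# Bandwidths are the table figures; the 10 GB payload is an assumption.
def transfer_time_s(payload_bytes, bandwidth_bytes_per_s):
    return payload_bytes / bandwidth_bytes_per_s

payload = 10 * 10**9                                 # 10 GB of tensors
on_wafer = transfer_time_s(payload, 20 * 10**15)     # 20 PB/sec fabric
off_chip = transfer_time_s(payload, 1.6 * 10**12)    # 1.6 TB/sec link

# ~0.5 microseconds on-wafer vs. ~6 milliseconds off-chip
print(f"on-wafer: {on_wafer * 1e6:.1f} us, off-chip: {off_chip * 1e3:.1f} ms")
```

Four orders of magnitude in data-movement time is the gap that makes keeping the whole model on one wafer attractive.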

And because today's largest AI models still demand more muscle, each WSE ships inside a tower system called the CS-2, and customers can snap together clusters of 16 CS-2 systems. Delivering performance akin to a 30-rack GPU cluster, this brings elite-scale AI within reach of more enterprises.

The Road Ahead

Five years on from its founding in 2016, Cerebras continues plowing ahead, driven by massive investor backing and a growing roster of commercial partnerships. Having raised over $720 million to date, its latest funding round in 2021 valued the company at $4 billion – cementing its status as a leader in bleeding-edge AI infrastructure.

Along with providers like Nvidia, Intel, and Graphcore, Cerebras is ensuring the hardware foundations keep pace with AI and machine learning's surge across industries. As algorithms become more ubiquitous throughout business and society, scaling up specialized silicon in data centers will only grow in importance.

"We have created the processor architecture the AI industry needs and deserves. With the Cerebras Wafer Scale Engine product, we will deliver unprecedented compute power for AI work to our customers." – Andrew Feldman, CEO and Co-Founder of Cerebras

Whether enabling pharmaceutical researchers to accelerate drug discovery or allowing financial institutions to refine fraud detection, the practical dividends from progress in AI hardware and software will prove staggering. With the largest chip ever conceived at the heart of its solutions, Cerebras seems poised to shape this landscape for decades ahead.
