| Attribute | Value |
|---|---|
| Ticker | Private (no public listing) |
| Headquarters | Sunnyvale, California, USA |
| Founded | 2015 |
| Funding | $720M+ |
| Valuation | $4.1B |
| Founder | Andrew Feldman (co-founder and CEO) |
Cerebras Systems is a semiconductor company specializing in artificial intelligence compute hardware. While best known for the Wafer-Scale Engine (WSE), the largest chip ever manufactured, Cerebras has also emerged as a significant player in neuroscience computing, providing the computational infrastructure for major brain research initiatives[1].
Founded in 2015 by Andrew Feldman and a team of industry veterans, Cerebras has pioneered a fundamentally different approach to AI computing. The Wafer-Scale Engine departs sharply from traditional GPU architectures, offering far greater on-chip compute and memory for workloads that require massive parallelism and high memory bandwidth[2].
Cerebras has partnered with the Allen Institute for Brain Science to accelerate single-neuron morphology reconstruction[3], and with the Jülich Supercomputing Centre in Germany on large-scale brain simulation[4].
Cerebras hardware supports multiple neurodegenerative disease research applications:
| Disease Area | Application | Computational Advantage |
|---|---|---|
| Alzheimer's Disease | Protein folding simulations | Faster amyloid-beta aggregation modeling |
| Parkinson's Disease | Alpha-synuclein analysis | Large-scale neural network simulations |
| Multiple Sclerosis | Myelin modeling | Circuit-level simulations |
| Brain-Computer Interfaces | Real-time processing | Low-latency neural decoding |
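The circuit-level simulations listed above share one computational pattern: many independent neuron updates punctuated by all-to-all spike exchange. A minimal leaky integrate-and-fire network in NumPy (a toy sketch, not Cerebras code; all sizes and constants are illustrative) makes the pattern concrete:

```python
import numpy as np

# Toy leaky integrate-and-fire (LIF) network. Each neuron's state is
# updated independently every step, and spikes propagate through a
# weight matrix -- the parallel, bandwidth-heavy pattern that brain
# simulation workloads present to the hardware.
rng = np.random.default_rng(0)

N = 2_000                                # neurons (real studies run far larger)
dt, tau, v_thresh, v_reset = 1e-3, 20e-3, 1.0, 0.0
W = rng.normal(0.0, 0.05, (N, N)) * (rng.random((N, N)) < 0.01)  # sparse-ish weights

v = np.zeros(N)                          # membrane potentials
spikes = np.zeros(N)                     # spike indicators from previous step

for step in range(100):
    i_syn = W @ spikes                   # synaptic input: the communication-bound part
    i_ext = rng.normal(1.2, 0.5, N)      # external drive
    v += dt / tau * (-v + i_syn + i_ext) # leaky integration: embarrassingly parallel
    spikes = (v >= v_thresh).astype(float)
    v = np.where(spikes > 0, v_reset, v) # reset neurons that fired

print(f"active neurons in final step: {int(spikes.sum())} of {N}")
```

The `W @ spikes` line is the communication-heavy step, while the per-neuron integration parallelizes trivially; scaling both at once is what the hardware discussion below is about.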
Cerebras compute infrastructure supports Alzheimer's disease research in several ways:
- **Protein folding and aggregation:** large-scale simulation of amyloid-beta aggregation kinetics (see the sketch after this list)[5]
- **Machine learning for early detection:** training models that flag disease signatures in imaging and biomarker data
- **Drug discovery:** AI-driven screening of candidate compounds[6]
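For the protein-aggregation item, a standard modeling approach is nucleation-elongation moment equations. The sketch below is a generic toy of that family, with made-up rate constants rather than fitted amyloid-beta values; real studies sweep many such parameter sets, which is where large-scale compute matters:

```python
# Toy nucleation-elongation model of fibril growth (moment equations).
# All rate constants are illustrative, not fitted to amyloid-beta data.
k_n, n_c = 1e-2, 2.0      # primary nucleation rate constant and reaction order
k_plus = 5e4              # elongation rate constant
m_total = 5e-6            # total monomer concentration (M)

dt, steps = 1.0, 20_000   # forward-Euler integration over 20,000 s
P, M = 0.0, 0.0           # fibril number and fibril mass concentrations

for _ in range(steps):
    m = m_total - M                  # free monomer remaining
    dP = k_n * m ** n_c              # new fibrils via primary nucleation
    dM = 2.0 * k_plus * m * P        # monomer consumed by fibril elongation
    P += dt * dP
    M += dt * dM

print(f"fraction aggregated after {steps * dt:.0f} s: {M / m_total:.2%}")
```

With these toy constants the aggregated fraction follows the characteristic sigmoid, staying near zero before nucleated fibrils accumulate and then saturating as free monomer is consumed.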
For Parkinson's disease, Cerebras systems enable large-scale alpha-synuclein analysis and brain-scale neural network simulations[7].
Cerebras compute power also supports emerging neurotechnology applications, most notably real-time, low-latency neural decoding for brain-computer interfaces.
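As an illustration of the decoding step such systems must keep fast, here is a minimal linear decoder sketch in NumPy. It is a toy with synthetic data, not Cerebras software, and every dimension and parameter below is illustrative:

```python
import numpy as np

# Toy brain-computer-interface decoder: map binned firing rates to 2-D
# cursor velocity with ridge regression. Real decoders are trained on
# recorded neural activity and must run within milliseconds per update.
rng = np.random.default_rng(1)

n_channels, n_samples = 96, 5_000                 # e.g., one electrode array
true_map = rng.normal(0, 1, (n_channels, 2))      # hidden rate-to-velocity mapping
rates = rng.poisson(5.0, (n_samples, n_channels)).astype(float)
velocity = rates @ true_map + rng.normal(0, 2.0, (n_samples, 2))  # noisy targets

# Ridge regression: w = (X^T X + lambda*I)^(-1) X^T y
lam = 1.0
XtX = rates.T @ rates + lam * np.eye(n_channels)
w = np.linalg.solve(XtX, rates.T @ velocity)

# "Online" decoding of one new rate bin is a single matrix-vector
# product -- the low-latency step the hardware must keep fast.
new_bin = rng.poisson(5.0, n_channels).astype(float)
print("decoded velocity:", new_bin @ w)
```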
The third-generation Wafer-Scale Engine (WSE-3) is the company's current flagship[8]:
| Specification | WSE-3 | Comparison |
|---|---|---|
| Transistors | 4 trillion | ~50x a leading GPU (~80 billion transistors) |
| Compute cores | 900,000 | ~52x the largest GPU |
| On-chip memory | 44 GB SRAM | Largest on-chip memory of any processor |
| Interconnect bandwidth | 100 petabits/s | ~10,000x a single GPU's interconnect |
| Die size | 46,225 mm² | Largest chip ever made |
This architecture is particularly well suited to brain simulation workloads, which demand exactly this combination of massive parallelism and memory bandwidth.
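A back-of-envelope calculation from the table's figures shows why (a rough plausibility check that ignores activations, precision choices, and software overhead; the 4-byte-per-synapse figure is an assumption):

```python
# Back-of-envelope sizing from the WSE-3 table above. A rough
# plausibility check, not a performance claim.
sram_bytes = 44e9                  # 44 GB on-chip SRAM
fabric_bps = 100e15                # 100 petabits/s interconnect bandwidth
bytes_per_synapse = 4              # assume one FP32 weight per synapse

synapses_on_chip = sram_bytes / bytes_per_synapse
print(f"synapses resident in SRAM: {synapses_on_chip:.2e}")   # ~1.1e10

# If every synaptic weight crossed the fabric once per timestep
# (a pessimistic bound), how many full sweeps per second fit in 100 Pb/s?
sweep_bytes = synapses_on_chip * bytes_per_synapse
sweeps_per_sec = (fabric_bps / 8) / sweep_bytes
print(f"fabric-limited full sweeps/s: {sweeps_per_sec:.0f}")  # ~284,000
```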
While it does not develop brain implants itself, Cerebras supplies compute to the broader neurotechnology ecosystem[9]:
| Capability | Cerebras WSE | Traditional GPU clusters |
|---|---|---|
| Memory per device | 44 GB on-chip SRAM | 80 GB HBM per GPU, spread across many devices |
| Interconnect bandwidth | 100 Pb/s | <1 Pb/s |
| Neural network training | Hours for large models | Days to weeks |
| Brain simulation | Near real-time | Limited by inter-device communication |
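The "near real-time" row follows from a simple communication-bound cost model. In the sketch below, only the two bandwidth figures come from the table; the compute time and spike-traffic volume are hypothetical workload parameters:

```python
# Simple communication-bound timestep model for a spiking simulation,
# using only the bandwidth figures from the comparison table.
def step_time(compute_s: float, spike_bytes: float, bandwidth_bps: float) -> float:
    """Per-timestep wall time = local compute + spike exchange."""
    return compute_s + spike_bytes * 8 / bandwidth_bps

compute_s = 50e-6          # assumed 50 us of local compute per step
spike_bytes = 10e9         # assumed 10 GB of spike traffic per step

wse = step_time(compute_s, spike_bytes, 100e15)   # 100 Pb/s on-wafer fabric
gpu = step_time(compute_s, spike_bytes, 1e15)     # <1 Pb/s cluster interconnect

print(f"WSE-style step: {wse * 1e6:.0f} us")      # ~51 us: compute-dominated
print(f"cluster step:   {gpu * 1e6:.0f} us")      # ~130 us: communication-dominated
```

Under these assumptions the on-wafer fabric keeps the step compute-bound, while the cluster's step time is dominated by spike exchange; the gap widens as the assumed traffic grows.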
Cerebras occupies a unique position in the neurotechnology landscape, providing the foundational computational infrastructure that enables other players in the ecosystem:
| Company | Technology | Neuroscience Focus |
|---|---|---|
| Cerebras | Wafer-scale engine | Brain modeling, drug discovery |
| NVIDIA | GPUs | General AI, some neuroscience |
| Graphcore | IPU | AI training |
| Groq | LPU | AI inference |
| Tenstorrent | RISC-V-based AI processors | General AI workloads |
Likely future developments and growth areas include successor generations of the WSE and expanded neuroscience and drug-discovery partnerships of the kind described above.
This page covers Cerebras Systems. For the latest product and program information, please refer to the company's official website.
| Program | Stage | Focus | Status |
|---|---|---|---|
| Core wafer-scale technology | Development | Wafer-scale compute | Active |
| WSE-3 | Production | AI compute | Shipping |
| Research programs | Research | Neuroscience partnerships | Ongoing |
1. GPU vs AI accelerator benchmarking for neural network training. arXiv. 2024.
2. Morphology of a single neuron in the mouse visual cortex. Nature. 2021.
3. Human Brain Project: achievements and legacy. Neuron. 2023.
4. Machine learning approaches for amyloid-beta aggregation prediction. Nat Mach Intell. 2023.
5. AI-driven drug discovery for Alzheimer's disease. Nat Rev Drug Discov. 2024.
6. Brain-scale neural network modeling for Parkinson's disease. Brain. 2023.
7. Accelerating connectomics with AI accelerators. Nat Methods. 2023.