Moonshot – 21/05/2024
Welcome to the first edition of our AI newsletter: Moonshot.
Working on and with AI every day, we decided to curate and share the interesting things we find along the way, on a roughly quarterly basis. Our goal is not a comprehensive roundup of AI news, but a more opinionated digest of noteworthy innovations, thoughts on industry shifts, and updates on our own projects at Moonfire.
In this first issue, we’re focusing on the backbone of AI: chips.
Compute hardware is a critical resource for all kinds of workloads, and AI is both an unusually useful and an unusually compute-intensive one. This issue of Moonshot introduces a few recent innovations and ideas.
We hope you enjoy it, and let us know what you think.
– The Moonfire engineering team: Mike, Jonas, and Tom 🌗🔥
It’s been a quarter of new chips – from Google’s and Meta’s in-house accelerators to a moonshot “thermodynamic” chip.
Hardware is the once and future battleground of AI. There simply aren’t enough GPUs to meet training demand, which throttles progress on both fronts: dollars and compute.
This is partly a geopolitical issue. Around 90% of the most advanced semiconductors are produced in Taiwan: a single, fragile point of failure, particularly given China’s contested claim over the island and its tensions with the US. There are other centres of chip production – the US, Japan, South Korea – and Europe is driving to onshore its own capacity, with plants proposed in Germany, France, and Italy, but the cost of production there is much higher. However it plays out, AI progress relies on cooler heads prevailing.
But, more fundamentally, it’s a technology problem.
While specialised hardware accelerators – like GPUs and TPUs – have been the main driver of recent AI progress, they may eventually hold the field back. Much of this hardware is built around a circuit called a systolic array: a grid of multiply-accumulate units well suited to the matrix multiplication at the heart of almost all AI models. The risk, however, is that we over-optimise both our hardware and the entire field of AI for matrix multiplication, narrowing the space of model and hardware designs we’re prepared to explore.
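To make that concrete, here is a toy, cycle-by-cycle simulation of an output-stationary systolic array in Python with NumPy. It is a sketch of the general idea, not any particular chip’s design: operands from A flow left to right, operands from B flow top to bottom, their entry is skewed so the right pairs meet in the right cell, and every processing element does one multiply-accumulate per cycle.

```python
# Toy simulation of an output-stationary systolic array computing C = A @ B.
# Illustrative only: real accelerators (e.g. TPUs) use pipelined,
# weight-stationary variants, but the dataflow idea is the same.

import numpy as np

def systolic_matmul(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must match"

    C = np.zeros((n, m))      # one accumulator per processing element (PE)
    a_reg = np.zeros((n, m))  # A operand sitting in each PE this cycle
    b_reg = np.zeros((n, m))  # B operand sitting in each PE this cycle

    # k + n + m - 2 cycles lets the skewed inputs fill and drain the grid.
    for cycle in range(k + n + m - 2):
        # Operands march one PE per cycle: A left-to-right, B top-to-bottom.
        a_reg = np.hstack([np.zeros((n, 1)), a_reg[:, :-1]])
        b_reg = np.vstack([np.zeros((1, m)), b_reg[:-1, :]])

        # Feed the edges, delaying row i (column j) by i (j) cycles so that
        # A[i, s] and B[s, j] arrive at PE(i, j) on the same cycle.
        for i in range(n):
            s = cycle - i
            a_reg[i, 0] = A[i, s] if 0 <= s < k else 0.0
        for j in range(m):
            s = cycle - j
            b_reg[0, j] = B[s, j] if 0 <= s < k else 0.0

        # Every PE performs one multiply-accumulate, all in lockstep.
        C += a_reg * b_reg

    return C

A = np.random.rand(3, 4)
B = np.random.rand(4, 5)
assert np.allclose(systolic_matmul(A, B), A @ B)
```

The appeal is clear: every cell does useful work every cycle using only local communication with its neighbours. But the same property that makes matrix multiplication so cheap makes anything that isn’t matrix multiplication comparatively expensive.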
We need new science when it comes to AI hardware accelerators and silicon-based compute in general. If we want to move AI out of the data centre and into people’s hands, we need chips that can handle modern AI workloads in compact or energy-constrained environments. That’s why moonshots like Extropic’s thermodynamic chip, or the DARPA-funded attempt to use mixed-signal circuits and architectures for more energy-efficient AI computation, are exciting. They’re reimagining the traditional von Neumann architecture and the physics of computing.
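For a flavour of why mixed-signal computation is both attractive and hard, here is a small, entirely illustrative sketch (our own construction, not anything from Extropic or the DARPA programme). Analog multiply-accumulate saves energy by computing with physical quantities such as currents, but the read-out is noisy, so the result is an approximation whose error grows with the noise floor:

```python
# Toy model of an analog matmul: exact result plus Gaussian read-out noise,
# standing in for a crossbar whose output currents encode the partial sums.
# The noise level is a made-up parameter for illustration.

import numpy as np

rng = np.random.default_rng(0)

def noisy_analog_matmul(A, B, noise_std=0.01):
    exact = A @ B
    noise = rng.normal(scale=noise_std * np.abs(exact).mean(),
                       size=exact.shape)
    return exact + noise

A = rng.standard_normal((64, 64))
B = rng.standard_normal((64, 64))

for noise_std in (0.001, 0.01, 0.1):
    approx = noisy_analog_matmul(A, B, noise_std)
    rel_err = np.linalg.norm(approx - A @ B) / np.linalg.norm(A @ B)
    print(f"noise_std={noise_std}: relative error {rel_err:.4f}")
```

The engineering question these moonshots have to answer is whether the energy savings survive once you have paid for error correction, calibration, and conversion between the analog and digital domains.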
Until next time, all the best,
– Mike, Jonas, and Tom 🌗🔥