Sometimes it is hard to understand how semiconductors work – all this talk about ASICs, nanometers, SoCs, and a sea of other acronyms can distract from the underlying magic that is Moore's Law. Saying that computing power doubles every few years, or that a chip has so many billion transistors, sounds impressive, but does not really convey a tangible sense of what a chip can accomplish.
We were reminded of this recently when discussing the most futuristic of technologies – autonomous vehicles (AV).
We are working with a company that is helping to design chips for AVs. Self-driving systems are built on neural network/machine learning math (we could call this AI, but we have already reached our acronym quota). It should go without saying that these systems require a lot of computation. Today, most machine learning compute is run on GPUs and CPUs.
As a result, the AV systems being tested today require a lot of hardware. We have seen systems running on 10 laptops wired together. More typically, we know of several systems running on two server-grade CPUs with four to six GPUs.
To put it simply, this is completely impractical. A set-up like that costs something like $10,000, which is obviously a non-starter for most vehicles. More importantly, all those computers use a lot of power. That 2-CPU/4-GPU configuration consumes close to 1 kilowatt (kW). Depending on how you do the math, that would sap the range of an electric vehicle by 20%-50%, without even factoring in the weight of the system. The impact would be far less on a gasoline engine, but it would still pose design challenges.
To be clear, no one expects to build cars using a set-up like this. So the question then is: how do you build a sufficiently powerful computing apparatus whose power consumption does not kill a car's battery?
The answer is someone needs to build a chip. All those GPUs and CPUs dedicate a lot of real estate and consume a lot of power for functions that are not needed in AVs. Instead, the big auto makers are looking for special-purpose AV processors (AVPUs?) that do one thing really well. Tesla, for one, has already announced they are working on their own chip.
These chips (ASICs to be precise) can be designed for the sole purpose of doing AV math, and pretty much nothing else. With this approach, car makers can use a single chip that replaces the CPU/GPU combo, maybe add a second one for redundancy, with much lower power consumption. We do not know the exact power budget of such a chip, but we think the comparison would look a lot like running a laptop versus running a full rack of servers.
Another useful example is Google. Ten years ago they unveiled their Tensor Processing Unit (TPU), a chip specially designed just to do machine learning in their data centers. It is very similar math to what is used in AVs, but the specific implementation is different enough that a TPU could not replace a chip purpose-built for AVs. Same math, very different requirements. By comparison, Google said that using this chip instead of their combination of GPUs and CPUs would halve the number of data centers they would have to build – a savings of tens of billions of dollars.
And that is the magic of semiconductors. Through a process of smart design, an unworkable solution becomes a seamless, powerful product. You have to dig into electricity consumption, matrix math, and semiconductor manufacturing to get this solution to work, but the potential impact is massive.
And that is the great paradox of semis. An immense amount of engineering gets boiled down to a product that works so well that the average user is unaware of it. Maybe this is why so many US venture investors avoid semis.