Every industry has its oddities – fundamental realities that seem to conflict with how everyone thinks about them. Semis has a big one: a reality distortion field around the subject of software. As much as semis are seen as the epitome of hardware – physical devices – a large and growing share of the engineers employed at semis companies are software engineers. Not just engineers who use software to design chips, but engineers whose job is to develop some form of software, often unrelated to the design of any chip. Not only does this conflict with how most people think of semis companies, most people in the industry have not fully grasped it either. We all know it is true, and intellectually we understand it, but deep in our heart of hearts we have not internalized it. Once upon a time there was a hard(ish) divide between software and hardware, but those divisions have largely dissolved, and this is going to be very important for the future of semis.
As we have written often in the past, the whole point of a chip is to run some form of software. But each side faced such different technical challenges that it made sense to specialize. You solve your problems, we will solve ours. There is a reason that most computer science classes start with the concept of abstraction. Over time, however, companies grew and realized that they could solve some of their own problems by helping others solve theirs. We know this story well from the software side, where all the Internet giants are developing their own silicon, but the trend is true on the semis side as well. We do not have exact stats, but we would estimate that 30%-50% of engineers at semis companies today are largely focused on software matters.
We could credibly argue that Intel is the company that really changed the model. In the 1980s and 1990s, Intel recognized that it had become highly reliant on Microsoft to power most of the machines running Intel CPUs. So Intel went out and started encouraging alternatives, most crucially Linux. Once Linux had become a reality, during its middle years in the 1990s and 2000s, a surprisingly large share of key contributors to the project were Intel employees. At the very least this contributed to, and arguably made possible, the rise of Linux as the de facto dominant operating system (OS) in the data center – which came to run on Intel CPUs. Intel was not alone in this, nor even the first semis company to dip its toe in software, but its efforts still mark an important milestone for the industry – the point at which software became strategically important to a semis company in its own right, rather than merely a feature of its products.
This drive towards software has only accelerated. Nvidia is probably the leading example today, as it develops its Omniverse cloud computing suite, but all the other big chip companies have sizable software teams. For the most part, these teams are tasked with building and optimizing software to run on their companies' chips. This is more than just device drivers for printers, but less than a full-blown OS. For example, all the big CPU makers spend a lot of time and resources supporting third parties in recompiling and optimizing their software to run on those CPUs. More prosaically, there are many teams building tools to give end users more fine-grained control of the silicon. At the far extreme are budding attempts by many chip designers to sell software embedded in their chips. For the most part, all these teams are seen as cost centers, and part of the problem the big chip companies have selling software is decades of ingrained practice of treating software as a cost to be borne.
This trend holds promise, but an equal amount of peril, for the industry. We have written extensively about Google's VCU, which is noteworthy in that it was largely designed by software engineers with little background in semis. This is the culmination of the broader blurring between software and semis. Ultimately, we think it will become much more common as Moore's Law slows and the focus turns towards customization. On the other hand, not everyone can afford a massive software team. Starting a chip company can be expensive, with tape-outs costing $50 million and up; few start-ups can afford to dedicate half their team to software, even if that is what customers increasingly require. Or more likely, they can afford it, but it heightens the risk for those start-ups, which will now have to deliver both a chip and software, both of which are likely to be dependent on a single large customer. This need not spell doom and gloom. Building a chip still poses enough of a technical challenge that there are plenty of problems customers will pay to have solved for them in silicon. But it does make it more likely that chip start-ups will grow their software capabilities at the same time as they develop their chip capabilities. This may turn out to be an advantage, as those companies will be better positioned to monetize their software than the chip giants, who are still learning the basics.
I’d argue there’s an enormous difference between what Intel did with Linux – software for a general-purpose microprocessor – and the blurring of lines between hardware and software.
It takes expertise that, to date, only defense-oriented software firms have exhibited to reduce the impact of the slowing of Moore’s Law. That is, highly efficient and well-structured code. The entire reason Moore’s Law remained relevant for the last two decades is precisely that the inefficiencies of software demanded ever-increasing hardware performance.
More likely, what will drive the future of hardware as Moore’s Law slows is the replacement of the general-purpose microprocessor with application-specific microprocessors (ASµPs). Focusing microprocessor functionality dramatically reduces gate count, enabling a focus on maximizing hardware performance specific to the application. ASµPs also tolerate the inefficient structure and coding of software that is the norm today.
One needs look no further than 5G cellular network radio access points to see the benefits of the ASµP approach – constant tweaks and updates to the signaling standards are easily implemented on deployed networks.
I think we’re getting to the same place via different routes.
Great article! The idea of companies building any type of “platform” (whether hardware or a software “general purpose platform”) and then needing to prove its value in the market is repeated throughout the tech industry.
Partnerships become the critical “glue,” since no one can do it alone (or at least, not very well). Ecosystem partnerships between hardware, software, and application vendors ultimately create long-term value for all of them.