The definition of a ‘computer’ continues to shapeshift. In the future, the term will become almost meaningless as highly powerful information machines get embedded ever deeper in the fabric of our lives. Apple’s announcement that it is launching its own CPUs this year marks an important milestone on this path, but a very early one.
We were thinking about this in response to a comment on yesterday’s post looking at the changes in store for the semiconductor industry.
Someone asked us how successful Huawei could be with its Harmony Operating System (OS). Wouldn’t they need to build a developer ecosystem? That process takes years and has long been viewed as a major barrier to entry for building an OS. Back in 2009 we started looking at this in a report we wrote about mobile operating systems. In the early days of smartphones, the raw number of apps available in an app store was hugely important. It is part of the reason we only have two full-scale mobile OSes today, iOS and Android.
But a lot has changed since then.
Part of the concern with Apple launching its own silicon is whether it can get developers to port applications to the new chips. Apple has clearly thought about this a lot, with a huge share of its keynote touching on things like emulators, test kits, support for legacy CPUs and the like. Apple also has experience here; the company has already survived two architecture transitions, from 68k to PowerPC and from PowerPC to Intel, and is unlikely to have made this move if it thought porting would be a major problem. They clearly believe the developer community is capable of making the shift.
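One small, concrete piece of that transition story: Apple provides a documented sysctl key, sysctl.proc_translated, that lets a process check whether it is running natively or under the Rosetta 2 translation layer. A minimal sketch in Swift:

```swift
import Foundation

// Detect whether this process is running under Apple's Rosetta 2
// translation layer. The "sysctl.proc_translated" key is documented by
// Apple for this purpose; it returns 1 when an x86_64 binary is being
// translated on Apple silicon.
func isRunningUnderRosetta() -> Bool {
    var translated: Int32 = 0
    var size = MemoryLayout<Int32>.size
    // On Intel Macs the key does not exist and sysctlbyname returns -1,
    // which we treat as "not translated".
    let result = sysctlbyname("sysctl.proc_translated", &translated, &size, nil, 0)
    return result == 0 && translated == 1
}

print(isRunningUnderRosetta() ? "Running under Rosetta 2" : "Running natively")
```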
Moreover, Apple clearly has a vision of a cross-platform development environment encompassing desktops, mobile, TVs, tablets, speakers and watches (and glasses and cars?). Apple is, in effect, carving out a siloed compute domain that is entirely defined by Apple. Compatibility with a common OS still matters, but the boundaries have become much more flexible.
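SwiftUI is the clearest expression of this vision: the same declarative view code compiles for iOS, macOS, watchOS and tvOS, with the framework mapping it onto each platform’s native controls. A minimal sketch (the names here are our own, purely for illustration):

```swift
import SwiftUI

// The same SwiftUI view builds unchanged across Apple's platforms; the
// framework decides how the text and button render on each device.
struct GreetingView: View {
    @State private var name = "world"

    var body: some View {
        VStack(spacing: 12) {
            Text("Hello, \(name)!")
                .font(.headline)
            Button("Wave back") {
                name = "Apple"
            }
        }
        .padding()
    }
}

@main
struct GreetingApp: App {
    var body: some Scene {
        WindowGroup {
            GreetingView()
        }
    }
}
```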
Apple is unique in its ability to do this, given its scale and resources. But we think more fragmentation is coming to the industry as others adapt the available tools and the cost of processors falls while capabilities grow.
We already live in a world where there are three major compute platforms in everyone’s lives – phones, laptops and game consoles, with watches, tablets and speakers close by. There will be more.
The fundamental shift enabling this is the way the Internet has become the platform for so much. Most mobile apps and desktop software are pretty skins on top of web pages. To be clear, this is not a new idea. In the ’90s, Oracle and Sun were talking about network computers: cheap terminals with all the real processing handled in the ’net. Going back further, for a long time ‘computers’ were terminals attached to some kind of mainframe. This is a pendulum that swings back and forth between centralized and distributed systems, dictated by the trade-off between the cost of compute and the cost of communications.
That being said, we seem to have reached the point of a more fundamental change. The cost of both processing and communications has fallen to the point that the vast majority of tasks humans need ‘computers’ for can be handled by incredibly low-cost hardware. Computers do not need to run a full-blown OS; for many tasks, an enhanced web browsing engine is sufficient.
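To make the “skin on top of a web page” point concrete, here is a minimal sketch of an entire iOS app that is nothing but a WKWebView pointed at a web property (the URL is a placeholder). Many shipping apps are little more than this plus some native chrome:

```swift
import SwiftUI
import WebKit

// Wrap a WKWebView so it can sit inside a SwiftUI app on iOS.
struct WebShell: UIViewRepresentable {
    let url: URL

    func makeUIView(context: Context) -> WKWebView {
        let webView = WKWebView()
        webView.load(URLRequest(url: url))
        return webView
    }

    func updateUIView(_ uiView: WKWebView, context: Context) {}
}

// The whole "app" is one full-screen browser view.
@main
struct ShellApp: App {
    var body: some Scene {
        WindowGroup {
            WebShell(url: URL(string: "https://example.com")!)
                .ignoresSafeArea()
        }
    }
}
```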
Going back to the case of Huawei, it is true that it will take years for them to build a full-blown suite of applications. But they do not need that. For their phones, they need only support a handful of platforms: WeChat, AliPay, maybe Baidu for maps, and a good browser will cover most of what 90% of Chinese consumers want. The rest of the world may want a complete Google suite, but maybe not. The same is true for laptops. There is a big business to be won providing Chinese companies and government offices with ’net computers or terminals. We have no idea if this is what Huawei is planning, and our guess is they have bigger aspirations; the point is merely that they can accomplish a lot with very little.
As the industry pendulum has swung back and forth over the years, one of the common sticking points has been “what about that specialty use case which requires intense compute power?” The history of compute shows that many of those uses can eventually be satisfied with simpler hardware.
Take gaming as an example. Gaming consoles like PlayStation and Xbox have steadily marched along a progression of ever more impressive graphics rendering, requiring intense on-device processing. Nonetheless, the gaming market has exploded on mobile. And most recently we have seen the rise of ‘cloud gaming’: the heavy game processing is done in the cloud, requiring the user’s device only to display the graphics and relay input. This will mean low-cost devices (phones or ‘dumb’ console boxes) will be able to stream hardcore games. We will have more to say on this subject soon.
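The thin-client side of that architecture is simple in principle. A rough sketch, assuming a hypothetical server that streams length-prefixed, pre-encoded video frames over TCP (the host, port, and wire format here are all made up for illustration):

```swift
import Foundation
import Network

// Connect to a hypothetical cloud-gaming server. All heavy rendering
// happens server-side; this client only receives frames for display.
let connection = NWConnection(
    host: "game.example.com",   // hypothetical streaming server
    port: 9000,                 // hypothetical port
    using: .tcp
)

// Receive one length-prefixed frame, hand it off for display, then loop.
func receiveFrame() {
    // First read a 4-byte big-endian length header...
    connection.receive(minimumIncompleteLength: 4, maximumLength: 4) { header, _, _, error in
        guard let header = header, error == nil else { return }
        let length = header.withUnsafeBytes {
            Int(UInt32(bigEndian: $0.loadUnaligned(as: UInt32.self)))
        }
        // ...then read the encoded frame body itself.
        connection.receive(minimumIncompleteLength: length, maximumLength: length) { frame, _, _, error in
            guard let frame = frame, error == nil else { return }
            display(encodedFrame: frame)
            receiveFrame()   // loop for the next frame
        }
    }
}

func display(encodedFrame: Data) {
    // A real client would feed a hardware video decoder (e.g. VideoToolbox)
    // and present the result on screen; this stub just logs.
    print("received frame of \(encodedFrame.count) bytes")
}

connection.stateUpdateHandler = { state in
    if case .ready = state { receiveFrame() }
}
connection.start(queue: .main)
RunLoop.main.run()
```

A real service adds input relaying, adaptive bitrate, and low-latency codecs, but none of that changes the basic point: the device itself needs almost no compute.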
The compute industry has shifted the abstraction layer away from the device operating system. The Internet and Moore’s Law have brought good-enough compute to pretty much everyone. There will always be demand for high-end hardware, but the market has gotten much, much broader and more diverse. The vast majority of “computers” will gradually disappear, subsumed into low-cost, mundane objects.