For as long as anyone can remember, the cost of computing has been steadily declining. This idea is enshrined as Moore’s Law, which roughly states that the number of transistors on a chip doubles every 18 months. The people who make processors have delivered on this for decades, so much so that the concept is now taken for granted.
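To make that compounding concrete, here is a minimal Python sketch of the doubling rule as stated above; the starting transistor count is hypothetical, chosen purely to illustrate the math.

```python
# Moore's Law as stated above: transistor counts double every 18 months.
# The starting count is hypothetical; only the compounding is the point.

DOUBLING_PERIOD_YEARS = 1.5  # 18 months

def projected_transistors(years: float, start_count: float = 1_000_000) -> float:
    """Transistor count after `years` of doubling every 18 months."""
    return start_count * 2 ** (years / DOUBLING_PERIOD_YEARS)

for years in (1.5, 3.0, 6.0, 15.0):
    print(f"after {years:4.1f} years: {projected_transistors(years):13,.0f}")
```

At that rate a chip gains three orders of magnitude in fifteen years, which is why the software teams mentioned below can afford to treat computing power as nearly free.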
When I talk to software teams, they tend to treat computing power as nearly free, with the assumption that a year from now they will have access to even more.
It is hard to overstate the benefits of Moore’s Law. It raises the productivity of the entire economy on the strength of investment by a very small number of companies.
Yet the truth is that Moore’s Law is not really a law; it is a model developed by Gordon Moore, a co-founder of Intel. It has held true for a long time, but there is nothing inevitable about it.
This idea has been trickling around hardware circles for a while, but has recently started to enter the mainstream. The Economist ran a good overview of the problem in November. Analysts Mark Lipacis, Sundeep Bajikar and Jonathan Lee at brokerage house Jefferies published a detailed analysis of the outlook [PDF] in September 2012.
There are two problems facing Moore’s Law. The first is that the cost of building a fab, or chip manufacturing plant, goes up every time the chips shrink. The cost of a new plant is now a few billion dollars, and a shrinking number of companies can afford that. In the Jefferies piece, the analysts do some clever math and calculate the amount of revenue coverage needed to stay competitive: companies whose revenue is less than two times the cost of a new plant tend to exit the industry. On page 4 of that report they show the dwindling number of ‘leading edge’ fabs, or plants producing the smallest transistors, and the graph on page 5 demonstrates their overall thesis about revenue coverage very well.
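As a rough sketch of that revenue-coverage arithmetic (the 2x exit threshold is the report’s; the fab cost and company revenues below are made-up figures for illustration):

```python
# Revenue coverage: annual revenue as a multiple of a new fab's cost.
# Per the Jefferies analysis, companies below roughly 2x tend to exit
# the leading edge. All dollar figures here are hypothetical.

FAB_COST_B = 5.0       # assumed cost of a leading-edge fab, in $B
EXIT_THRESHOLD = 2.0   # the report's ~2x revenue-coverage threshold

def revenue_coverage(annual_revenue_b: float, fab_cost_b: float = FAB_COST_B) -> float:
    """Annual revenue expressed as a multiple of a new fab's cost."""
    return annual_revenue_b / fab_cost_b

for name, revenue_b in (("Chipmaker A", 18.0), ("Chipmaker B", 7.0)):
    coverage = revenue_coverage(revenue_b)
    verdict = "can stay at the leading edge" if coverage >= EXIT_THRESHOLD else "likely exits"
    print(f"{name}: {coverage:.1f}x coverage -> {verdict}")
```

With a few-billion-dollar fab, that threshold implies something on the order of ten billion dollars a year in revenue, which is why the pool of leading-edge manufacturers keeps shrinking.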
The second problem facing Moore’s Law is the laws of physics. Chips are getting so small that eventually we will reach the point where the electrons coursing through them start to interfere with each other. This point was made recently by the founder and chairman of Broadcom.
All that being said, it is not time to start panicking just yet. First, people have been warning about an impending end to Moore’s Law for almost as long as it has existed. Second, we still have some time for someone to find a new solution of some sort (although those annoying laws of physics are a bit of a hassle); as it stands, there are probably two or three more shrinks left using current techniques. Third, we are discussing the leading edge of chip manufacturing, but many applications do not need that and can go for a long time before they reach any foreseeable limits.
Still, there is some reason to be concerned. Should it come to pass that Moore’s Law stops delivering (or takes a break until quantum computing starts to scale), the steady increase in productivity we are accustomed to will have to find some new source. Put simply, when we can no longer assume steady increases in performance from hardware, we will need the software industry to increase its own capabilities. As I mentioned earlier in the week, software development is still highly artisanal. Any end to Moore’s Law will require an industrialization of that process.