The End of Moore's Law: Thank God!

Gordon Moore, who co-founded the chip giant Intel in 1968, made an observation in 1965, six years after the integrated circuit "chip" was invented. He noted that the number of transistors on silicon chips was doubling every year without any increase in cost; the observation became known as "Moore's Law".

It had to be modified in the late seventies to "doubling every 18 months", but that rate has held steady ever since - almost twenty years now - which is close enough to write it as:

(Transistors per chip) = 2^((year - 1959)/1.5)

For instance, 1995 was 36 years since 1959. Divide the 36 by 1.5 to get 24 generations of chips. Take 2 to the 24th power, to get 16 million -- and sure enough, 16 Mbit memory chips (using 1 transistor per bit) were common on the market. Of course, there are always four or five generations on the market at once, depending on whether you're buying sampling quantities of the latest thing out of the labs or the oldest el-cheapo memory suitable for printer buffers; but 16Mbit was in the middle of the pack.
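For the curious, here is a little C program that turns that rule of thumb into numbers. It is strictly an illustration of the formula above - the sample years are my own arbitrary choice, not dates of real chips.

#include <math.h>
#include <stdio.h>

int main(void)
{
    int year;

    /* Transistors per chip = 2 ^ ((year - 1959) / 1.5), the rule of
       thumb given above.  The sample years are arbitrary. */
    for (year = 1971; year <= 2001; year += 6) {
        double transistors = pow(2.0, (year - 1959) / 1.5);
        printf("%d: roughly %.0f transistors per chip\n",
               year, transistors);
    }
    return 0;
}

Run it and 1995 comes out at about 16.8 million - right in line with the 16 Mbit chips mentioned above.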

This "law" has been regarded as some kind of reliable, constant, natural law, like gravity, by industry investors, consumers, and writers. It is not. It has been achieved through the the endless work, bleeding ulcers and failed marriages of industry engineers, and the huge investment of both industry and public sources. The "constant" improvement has not actually been that smooth - it has occurred in fits and starts; only the average of a whole industry across many years has been smooth. The relentless pressure of competition, and the huge profits to be gained from having the best chip on the market at the lowest manufacturing costs, have kept this development going - a scaling up unprecedented in the history of technology for its sheer magnitude.

Everybody has always known in some distant way that it cannot go on forever. Heck, at this rate, before 2020, the circuits will be so small that electrons will freely wander between them, even through non-conductors. Optimists love to predict that semiconductors will keep shrinking right up to that point - and by then we'll be on to some new kind of technology like optical circuits, or have found a way to harness quantum effects to compute with individual quantum-state changes in single particles.

Maybe, but the fact is that the Guys In The Labs don't have anything to show yet that is going to reach the factory floor; and they've been working hard at those ideas for decades.

The magazine Scientific American has been writing about the issue for a decade, warning that the merry-go-round will slow down not long after the year 2000.

The first warning appears in the October, 1987 issue with the "Next Computer Revolution" article by Abraham Peled, a V.P. in IBM's Research Division. After charting out the (then) coming decade of advances with excellent accuracy (the chart shows personal computers ranging from 6 to 40 MIPS in 1995), he remarks that advances will continue "for at least the next 10 or 15 years." That is, he couldn't commit to improvements beyond about 2002.

More recently, Scientific American has devoted two articles in 11 months to the subject of limits to semiconductor manufacture.

In the February, 1995 issue, staff writer Gary Stix summed up directions in the industry in "Toward Point One" - referring to the hope that it may be possible to work with elements as small as 0.1 microns, a third of the best we can do today. While the physics problem referred to above would not happen until 0.03 microns, Stix notes "manufacturing difficulties could cause the technology to expire before then".

That article shows a timeline with lithography techniques including high ultraviolet light, X-rays and electron beams continuing on to 2010 and beyond. However, it also quotes industry experts who are skeptical that X-ray lithography will ever bear fruit, after billions in research. "I won't quote anybody, but I was in a meeting where people said that when we get out of optics, we're out of the business," Stix quotes Karen H. Brown, director of lithography for Sematech, the U.S. industry's R&D consortium. A later article states that "20 years of research on x-ray lithography have produced only modest results. No commercially available chips have been made with X-rays."

If no breakthroughs occur to make X-ray lithography and electron beams work economically in a factory setting, then Moore's Law would appear to be in serious trouble by the early 2000's, around the development of one-to-four-gigabit RAM chips.

This period is treated authoritatively by top experts in the January, 1996, issue. Dan and Jerry Hutcheson are a father and son who have collectively spent over 40 years - their whole careers - advancing semiconductor manufacturing. In their article, "Technology and Economics in the Semiconductor Industry", they get down to exact figures on the now perhaps visible end of "runaway growth".

The Hutchesons bring in some historical perspective as well. They point out that aircraft improved exponentially for decades, from 1903 to about 1970. Around that time came two milestones: the 747, still unmatched for size, and the Concorde, for speed. Certainly, larger and faster aircraft are possible - but they are not economical in commercial use. Improvement has levelled off. They offer similar case histories for railroad locomotives and car factories.

In all cases, the leading indicator was factory cost. As it gets harder to make improvements, great industry profits are finally matched by soaring investments needed to build factories. They note that factory costs in semiconductors have increased at about half the rate of Moore's Law, doubling every three years.

The 4K and 16K chips of the 70's were built in multi-million-dollar factories; late-1990's factories cost one to three billion. The Hutchesons' analysis of investment-payoff cycles shows that it may take a consortium of all the industry giants to build a single factory by the early 2000's.
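To see how quickly that compounds, here is a back-of-the-envelope calculation in C. The $10 million starting figure is my own guess at "multi-million" for a mid-70's factory, not a number from the Hutchesons; the doubling-every-three-years rate is theirs.

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Assumed mid-70's factory cost of $10 million -- an assumption,
       not a Hutcheson figure -- doubling every three years. */
    double base_cost = 10e6;
    int year;

    for (year = 1975; year <= 2005; year += 10) {
        double cost = base_cost * pow(2.0, (year - 1975) / 3.0);
        printf("%d: about $%.0f million per factory\n",
               year, cost / 1e6);
    }
    return 0;
}

By this reckoning the assumed $10 million grows to around a billion dollars by the mid-1990's, which squares with the one-to-three-billion figure above, and to roughly ten billion by 2005.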

They are careful to note that "The semiconductor industry is not likely to come to a screeching halt any time soon." No doubt, the end will be a long-drawn-out affair in which the doubling time increases to two years, then three, then five - and more.

At some point, however (assuming that a big, basic-physics breakthrough does not step in and change all the rules), we who buy and use the things will not be able to assume that today's waste will be cured by tomorrow's hardware. But, also, we won't have to worry about converting to new hardware (and the new software it makes possible) when figuring out next year's budget and projects.

The End of Moore's Law will mean an end to certain kinds of daydreaming about amazing possibilities for the Next Big Thing; but it will also be the end of a lot of stress, grief, and unwanted work.

Consider the endless cycle of reconfiguration. New capabilities bring about new applications, and ever-fancier new operating systems to support them. As hardware, operating systems, "middleware", and applications are issued every year (never at the same time), it becomes a certainty that you are always reconfiguring something. People who would rather be writing, calculating, or drawing let hours fly by as they tinker with device drivers and memory upgrades.

Consider the bugs. When something is always new, it is always buggy. I heard a story the other day about a man who created an embedded control system in the 80's using MOS Technology's venerable 6502 chip (that worthy of Apple II and Commodore 64 fame), not because it was the best-designed chip, but because it was the one he knew every bug in, every quirk of, every feature. Who can be bothered learning everything about the Pentium they got in 1995 when they know they'll have a Pentium Pro or an 80786 next year?

The new software developed to use the new power is of course buggy for a year or more. Does anybody believe that Netscape got all the bugs out of version 2.0 before they went on to focus on putting features into 3.0? Now we still have bugs in the old features and more bugs in the new ones!

Consider the maintenance problem. After you've developed a certain number of solutions where you work, suddenly you discover that all your time is consumed by rewriting them to work correctly with the new OS, the new hardware, the new development platform. Finally, you can no longer do R&D.

Don't get me wrong. I've been in this game since keypunches and teletypes. I've owned home machines since the 70's. It's been a grand ride and I'm not nearly tired of it yet. But I'm starting to think ahead. By 2005, the electronic computer will be 60 and the "home computer" will be 30. They will be able to store a high-definition copy of Gone With The Wind - in RAM. And manipulate it to make Miz Scarlett look Chinese - in real time.

We will, indeed, be able to store and manipulate just about any kind of artistic medium, simulate anything that mathematics can describe, and transmit or copy the lot like a one-box publishing company. And some of us still won't have had time to really have fun with it, since the optical bus won't be talking to the fourth-level cache ever since the new OS upgrade came in - again.

So if, at that point, the Guys In The Lab come out of it with fishing rods on their shoulders, shrug, and say "We're out of bright ideas, folks. That's all she wrote." - well, I for one will have a certain sense of relief as I grab a good book and follow them to the fishin' hole. I'll need a rest.

Two (human) generations of people will have devoted whole 30-year careers to surfing on the wave of change, continually balancing as the improvements constantly ruin stable work environments. While our sea legs might feel wobbly for a while after the wave slides into the beach, we might find that on solid ground we can learn to dance.

Roy Brander

