For all the attention being heaped on Moore's Law this week, there's another, more important law that chip makers must contend with as they push the limits of semiconductor technology ever further: the law of diminishing marginal returns.
Tuesday marked the 40th anniversary of an article written by Gordon Moore, cofounder and chairman emeritus of Intel, and published in the April 19, 1965, issue of Electronics magazine. In that article, which was titled "Cramming more components onto integrated circuits," Moore observed that the number of components on a silicon chip had doubled at regular intervals and he predicted this trend would continue into the future.
Over the last four decades, chip makers have basically done as Moore predicted they would, cramming more and more transistors onto silicon chips at an exponential rate. Because increased density -- the number of transistors on a chip -- has for many years been closely tied to greater performance, this achievement made possible rapid increases in the computing power offered by a single chip.
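That exponential growth is simple compound doubling. As a rough back-of-the-envelope sketch (the starting transistor count and the two-year doubling period below are illustrative assumptions, not figures from the article):

```python
def transistors_after(years, start_count=2_300, doubling_period=2.0):
    """Estimated transistor count after `years`, assuming the count
    doubles every `doubling_period` years (illustrative values only)."""
    return start_count * 2 ** (years / doubling_period)

# Twenty doublings over 40 years turns thousands of transistors
# into billions -- the exponential trend Moore described.
print(transistors_after(40))
```

The point of the sketch is only that a fixed doubling interval, sustained for decades, compounds into a billionfold-scale increase.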
"Moore's Law has definitely helped drive the industry forward because it sort of sets a target," said Nathan Brookwood, an analyst at market research firm Insight64.
But the importance of Moore's Law, which does not address chip performance, is limited. The incremental performance gains now achieved by regularly doubling the number of transistors on a chip, such as a desktop microprocessor, aren't as significant as they used to be. This is the law of diminishing marginal returns, an economic law that states the marginal return on a unit of input decreases as more inputs are added.
The problem of diminishing returns is compounded by rising costs. The law of diminishing marginal returns assumes that production costs remain constant, but advances in semiconductor technology and manufacturing know-how have become more expensive for chip makers with each new generation of technology. This has raised sharply the cost of keeping pace with Moore's Law and its call for exponential increases in density.
"Moore's Law is interesting but it's not relevant to the problems we face," said Bernie Meyerson, chief technologist at IBM's Systems and Technology Group.
One of those problems is scalability, the question of how to make transistors smaller without affecting their ability to function. Classical notions of scalability are rooted in research conducted during the early 1970s by a group of IBM researchers, including Bob Dennard, the inventor of the single-transistor DRAM (dynamic RAM) cell.
The classic scaling theory put forth in 1972 by Dennard and his colleagues outlined how the physical and chemical properties of a transistor could be shrunk to produce a transistor with a channel length of 1 micron, or one-millionth of a meter. The channel is one of the parts -- or features -- that make up a transistor and is used to conduct or block the flow of electric current when a transistor is switched on or off.
That theory guided semiconductor design for 30 years, well past the 1-micron mark, until chip makers reached the 130-nanometer production process and classic scaling practices ran into a brick wall, Meyerson said. At that point, chip makers could no longer shrink some transistor features, and power consumption rose sharply, he said. The size used to describe a chip-making process is the average feature size on a chip built with that process; one nanometer is one-billionth of a meter.
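The classical constant-field scaling relationships from that era can be sketched roughly as follows; the scaling factors are the textbook Dennard relationships, while the function and starting values are illustrative assumptions, not taken from the article:

```python
def scale(channel_length_um, supply_voltage_v, k):
    """One classical (constant-field) scaling step by a factor k > 1.

    Linear dimensions and supply voltage both shrink by k, so the
    electric field is unchanged; per-transistor power and area each
    fall by k**2, leaving power density constant. (Illustrative
    sketch of the textbook rules, not IBM's actual design flow.)
    """
    return {
        "channel_length_um": channel_length_um / k,  # feature size shrinks by k
        "supply_voltage_v": supply_voltage_v / k,    # voltage shrinks by k
        "relative_power": 1 / k**2,                  # per-transistor power
        "relative_area": 1 / k**2,                   # per-transistor area
        # power density = relative_power / relative_area = 1 (unchanged)
    }

# One 2x shrink of a hypothetical 1-micron, 5-volt transistor:
step = scale(1.0, 5.0, 2.0)
print(step["channel_length_um"])  # half the channel length
```

The "brick wall" Meyerson describes is precisely where these relationships stopped holding: below roughly 130 nanometers, voltage could no longer shrink in step with dimensions, so power density began to climb instead of staying flat.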