
May 07, 2009

The Curse of Moore's Law


As many of you know, Gordon Moore observed in his 1965 paper that the number of transistors that can be integrated on a chip would grow exponentially - a rate he revised in 1975 to a doubling roughly every two years. So far, Moore's Law has been pretty close, if not conservative.  The Intel 4004 processor of 1971 had a transistor count of around 2,300 devices (yes, two thousand three hundred transistors).  The new Intel quad-core Itanium "Tukwila" contains around two billion transistors - an increase of almost a million fold... so that's good news, right?
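The doubling arithmetic is easy to sanity-check against those two data points. A quick back-of-the-envelope sketch (the 2009 endpoint for Tukwila is an assumption made just for this calculation):

```python
# Back-of-the-envelope check of the two-year doubling trend using the
# transistor counts quoted above. The 2009 endpoint is assumed for
# illustration only.
count_1971 = 2_300                      # Intel 4004
doublings = (2009 - 1971) / 2           # one doubling every two years
predicted = count_1971 * 2 ** doublings

print(f"Doublings since 1971: {doublings:.0f}")
print(f"Predicted count:      {predicted:,.0f}")
```

Nineteen doublings predict roughly 1.2 billion transistors - the same order of magnitude as Tukwila's two billion, which is why the trend reads as close, if not conservative.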

Whether the goal is higher performance or simply better portability (think Apple iPod shuffle), higher transistor counts are a wonderful thing. However, forces at work at the atomic scale are creating problems for these amazing shrinking transistors... as the size goes down, the total energy the chips consume goes up.  Why? First, some semiconductor fundamentals...

Figure 1 shows the scale of a typical Complementary Metal Oxide Semiconductor (CMOS) FET. Like virtually all semiconductor devices today, MOSFETs (Metal Oxide Semiconductor Field Effect Transistors) are fabricated using a layering process.  Currently, the patterning is done with Deep Ultraviolet Lithography because of the extremely small line spacing used to fabricate the circuitry.  Basically, the UV light exposes sensitized areas of the device to either allow or prevent etching or ion implantation as the layers are built up to make the chip.  These are very tiny devices - the gate length of a modern MOSFET inside a digital CMOS chip is on the order of 65 nanometers, while an average human hair is roughly 100 micrometers in diameter... over 1,500 times larger!

As the transistor’s conduction channel is made smaller, so too must the insulating oxide that sits above it, which controls the charge carriers between the source and drain. As the oxide gets thinner, it becomes harder to prevent electrons from "tunneling" through the insulator to the underlying substrate, conduction channel or source-drain extensions.  This phenomenon occurs in both the "ON" and "OFF" states, causing significant losses in large-scale integrated circuits.  Sub-threshold leakage is also a problem for digital devices.  This is the current that flows between the source and drain when the gate voltage is below the "turn-on" threshold - useful for analog circuits, but a curse for digital designers.
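The exponential character of sub-threshold conduction is easy to sketch numerically. Here is a minimal version of the textbook model; every parameter value below is hypothetical, chosen only to show the shape of the curve:

```python
import math

def subthreshold_current(vgs, vth=0.3, i0=1e-6, n=1.5, vt=0.026):
    """Textbook sub-threshold model: I = I0 * exp((Vgs - Vth) / (n * VT)).

    vth: threshold voltage (V), i0: current at threshold (A),
    n: slope factor, vt: thermal voltage kT/q at room temperature (V).
    All values are illustrative assumptions, not measured data.
    """
    return i0 * math.exp((vgs - vth) / (n * vt))

# Even with the gate at 0 V ("OFF"), the current never reaches zero...
print(f"I at Vgs = 0.0 V: {subthreshold_current(0.0):.2e} A")
print(f"I at Vgs = 0.2 V: {subthreshold_current(0.2):.2e} A")
```

Multiply that tiny per-device "OFF" current by hundreds of millions of transistors, and leakage becomes a first-order design concern.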

It turns out that as we shrink transistors, all of these parasitic problems that used to be minor at larger scales are now adding up, primarily due to the sheer number of transistors found in modern devices.  Equation 1 shows the relationship between some of these characteristics - most notably, the supply voltage "V".  As the supply increases, dynamic power consumption increases with the square of the voltage, so it is often dealt with first (being the larger source of power consumption).  However, the static losses (iLEAK) are increasing as the transistor geometries shrink and the densities increase.  As the operating frequency of the device is scaled back to conserve energy, the static loss predominates - a real problem for large-scale devices with hundreds of millions of transistors.
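The relationship described is the standard CMOS power model: a dynamic term that scales as C·V²·f and a static term V·iLEAK. A quick sketch with hypothetical component values (none of these numbers describe any particular chip) shows how the static term takes over as the clock is scaled back:

```python
# Standard CMOS power model (all values hypothetical, for illustration):
#   P_total = C * V**2 * f   (dynamic, switching term)
#           + V * I_LEAK     (static, leakage term)
C = 1e-9        # effective switched capacitance, farads (assumed)
V = 1.0         # supply voltage, volts (assumed)
I_LEAK = 0.05   # aggregate leakage current, amps (assumed)

for f in (1e9, 100e6, 10e6):                 # scale the clock back
    dynamic = C * V ** 2 * f
    static = V * I_LEAK
    share = static / (dynamic + static)
    print(f"f = {f/1e6:6.0f} MHz: dynamic {dynamic:.3f} W, "
          f"static {static:.3f} W ({share:.0%} of total is static)")
```

Lowering the frequency attacks only the first term; once the clock is slow, nearly all the remaining power is leakage - which is why the supply voltage, appearing in both terms, is the lever worth pulling.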

There are structural ways to minimize these losses, but as geometries continue to shrink, power will become a much more serious issue... not only as a problem for energy consumption (i.e. operating costs) or the battery life of equipment that uses these devices, but for the heat that builds up inside the chips themselves.  As heat builds up and cannot flow away from the source, the localized temperature rises, accelerating aging in the device.  A chip will fail sooner if operated at high temperatures, so lowering the power consumption improves the lifespan of the device.

Back in 2000 my company, National Semiconductor, pioneered a cool way to lower not only the dynamic power, but the static losses as well.  It’s called Adaptive Voltage Scaling (AVS) and was used primarily in portable devices to increase run time on batteries.  However, as chips continue to grow more complex, AVS is now showing up in large-scale devices such as the Teranetics TN2022 10GBASE-T Ethernet physical layer device.  AVS exploits the fact that all digital chips are designed and implemented for worst-case process, timing and temperature.  This is like saying the most out-of-shape person will consume one gallon of water while hiking a trail… therefore, all hikers are required to carry one gallon of water - even though most will do just fine with a single canteen full.  It burdens the better members of the group with the problems of the weakest.  So by using technology, each digital chip can be “assessed” for its "water consumption" based on how "fit" it is... that is, how well the chip’s process performs.  Like humans, chips vary around a mean performance level, but the majority will fall near the center of the distribution.

AVS leverages this characteristic by placing monitor circuits inside the digital core (or cores) which report the current state of the chip to an embedded controller called the APC, or Advanced Power Controller.  The APC can then decide whether the supply voltage is too high or too low and communicate those adjustments to an external power supply device called the EMU, or Energy Management Unit. As the temperature moves, the process ages or the digital load varies, the controller again makes updates that minimize the energy consumption of the overall system.  Energy savings of 40% or greater have been observed when using this technology, especially in systems that use multiple devices.
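That APC/EMU loop can be sketched as a simple feedback controller. Everything below is hypothetical - the monitor model, thresholds and step size are invented for illustration and are not taken from National's implementation:

```python
def apc_step(margin, v_mv, v_min=700, v_max=1200, step=10):
    """One APC decision: shave the rail when timing slack is generous,
    raise it when slack is tight; the EMU clamps to a safe range.
    Thresholds and step size are illustrative assumptions."""
    if margin > 0.10:        # plenty of slack -> lower the supply
        v_mv -= step
    elif margin < 0.02:      # dangerously tight -> raise it
        v_mv += step
    return min(max(v_mv, v_min), v_max)

# Toy monitor model: timing slack grows roughly with supply voltage.
v_mv = 1200                  # start at the worst-case supply (mV)
for _ in range(60):
    margin = (v_mv - 850) / 2000
    v_mv = apc_step(margin, v_mv)

print(f"Settled supply: {v_mv} mV")   # well below the worst-case 1200 mV
```

A "fit" chip settles well below the worst-case rail, cutting both the C·V²·f and V·iLEAK terms at once; a slow-corner part would settle higher - which is exactly the per-chip "water assessment" described above.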

As Moore’s Law marches on, the number of transistors on a single chip will grow to levels never before imagined... and to keep these amazing devices running, technology will also need to address the methods for powering them.  As in my previous post, "The Personal Supercomputer in Your Pocket", the potential for new capabilities is limited only by our imagination and our ability to manipulate our physical world.  Till next time...




Comments


Fluxor

AVS wreaks havoc on analog design and actually increases power rather than decreasing it. AVS essentially lowers the voltage to a level just above the breaking point for a digital circuit, but that's usually well below the breaking point for analog circuits. Hence, analog circuits are forced to use regulators to obtain a stable supply voltage. This increases power on the analog side, although the overall SoC still saves power. It's often a marketing challenge to make customers aware of this tradeoff.

(RZ Note: You are spot on... AVS was never intended for analog processes - this is why designers create voltage islands around the digital-only cores so AVS operates only on those, keeping the analog sections - such as ADCs - powered independently as needed.)
