
May 27, 2009

The Next Fifty Years of Energy


I’ve been blogging now for over a year and have covered topics ranging from nanotechnology and the future of semiconductors to large-scale power generation and transmission.  This week marks the 50th anniversary of my company, National Semiconductor.  That milestone reminded me of how far we’ve come as a technological society.  While writing, I’ve often reflected on my past engineering experience for examples of how we have improved our way of life.  In this issue, however, I want to look forward at one of our civilization’s next big hurdles... our future energy supply.

We are approaching a point where our population will soon exceed 8 billion people - many of whom will be the first generation to use electricity or drive a car. In the midst of our current economic crisis it is hard to imagine global markets surging from the millions of new consumers who, thanks to technology’s reach, will soon have buying power.  As we continue our journey into the 21st century, energy will shape the new economy, driven by demand from manufacturing, agriculture and transportation.  As automobiles shed their gasoline engines for fully electric drives, more electricity will be required to recharge the energy storage onboard these vehicles.  A simple shift from burning gasoline to fully electric vehicles will not solve our energy crisis, since much of our electricity comes from carbon-based fuels such as coal.  Meeting the new demand will require revolutionary changes.

As at the beginning of the industrial revolution, there will be change on a scale never before seen.  Carbon-based fuels have been our energy standard for over 100 years, but they are becoming harder to find and reach, as well as being responsible for polluting our environment.  Meanwhile, every day tens of thousands of terawatts of power rain down on our planet, conveniently provided by our sun.  That energy lights and heats our world and drives our weather.  Yet we currently capture only a tiny fraction of it through hydroelectric, wind or solar energy farming.

There are millions of square miles perfect for collecting this free energy, but the technologies are fairly new, with some proposed projects reaching incredible scales.  For example, in New South Wales, Australia, a proposal has been made to build a solar chimney towering over 3000 feet tall, with a heat collector covering over a square mile.  As the air under the collector is heated, its density drops and it naturally rises (like a hot air balloon).  The chimney forms a natural draft, and as the air rushes up through it, the flow turns huge turbine generators that produce electricity.  The energy is then transferred to the grid and sent to cities where it can be consumed.  This is the scale of energy engineering that will become commonplace by 2060.

Another large scale proposal is to place gigantic solar arrays in deserts around the world.  These arrays will either convert solar energy directly into electricity or capture the heat to boil water and turn steam turbines.  It has been calculated that a photovoltaic array 100 miles on a side would be capable of providing all of the current energy needs of the United States - and that’s with current conversion efficiencies being under 20% (an incredibly poor efficiency rating). 
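As a rough check of that claim, here is a back-of-envelope calculation. The average insolation figure and the exact efficiency are my own illustrative assumptions, not numbers from the original study:

```python
# Back-of-envelope check of the "100 miles on a side" solar array claim.
# Insolation and efficiency values below are illustrative assumptions.

SIDE_MILES = 100
M2_PER_SQ_MILE = 2.59e6          # square meters per square mile
AVG_INSOLATION_W_M2 = 200        # desert average over day/night/seasons (assumed)
PV_EFFICIENCY = 0.20             # "under 20%" per the text

area_m2 = SIDE_MILES**2 * M2_PER_SQ_MILE
avg_power_w = area_m2 * AVG_INSOLATION_W_M2 * PV_EFFICIENCY
print(f"Average output: {avg_power_w / 1e12:.1f} TW")
```

Roughly a terawatt of continuous output - about double the average electrical demand of the United States at the time of writing, which is consistent with the claim for electricity, if not for all primary energy.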

In practice, placing arrays closer to where the energy is consumed provides great benefit.  Combined with "smart grid" technologies, photovoltaic arrays spread over many commercial and residential buildings will gather enough energy to begin reversing the trend of building new large-scale carbon-based power plants.  One of the biggest problems with electrical generation is getting the energy where it needs to be, when it’s needed.  The peaks and valleys of electrical demand force power plant operators to constantly struggle to keep plant output balanced with need.  With local generation spread out over a large population area, peak demands are much easier to manage.  It also adds another level of reliability, since the local solar generators can form micro-grids, allowing them to disconnect completely from the larger grid if necessary without interruption.

Along with solar will be other technologies that can be deployed locally such as wind turbines.  Large scale wind farms are a common sight today, but smaller, high efficiency, vertical draft turbine designs will continue to improve and allow almost anyone to harness the wind.  As with PV systems, these generation systems will be connected to a smart grid to provide maximum load management.

Many scientists and engineers see fusion power as an ultimate solution within this century.  With experiments underway and even small pilot plants being constructed, the consensus among them is that practical fusion plants will be a reality by 2050.  This is the ultimate replacement for the current infrastructure of carbon-fuel or nuclear-fission power plants. But we may find that the fusion reactor 93 million miles from Earth is all we need.  With PV efficiency improvements and large-scale deployment, combined with practical storage methods, our technologies may one day be driven entirely by the sun.

Think this is far-fetched?  An average home in the United States consumes around 1000 kilowatt-hours per month.  Add an electric vehicle, and that may rise to around 1500 kilowatt-hours.  So let’s round it up to 2000 kilowatt-hours to completely remove all carbon-based fuels such as natural gas and propane.  That breaks down to just under 67 kilowatt-hours per day, or roughly a continuous 2.8-kilowatt load (about the draw of three hair dryers running).
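The arithmetic above is simple enough to spell out in a few lines:

```python
# The household load arithmetic from the paragraph above.
monthly_kwh = 2000                # all-electric home plus EV, rounded up
daily_kwh = monthly_kwh / 30      # just under 67 kWh per day
continuous_kw = daily_kwh / 24    # roughly a 2.8 kW continuous load
print(f"{daily_kwh:.1f} kWh/day, {continuous_kw:.1f} kW continuous")
```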

Assuming the sun shines 8-10 hours a day, that 50% of the days are sunny (even cloudy days produce some solar power), and an energy storage efficiency of 50% (laptop batteries do far better than that), a solar PV array would only need to generate around 30 kilowatts while in daylight to meet the entire energy requirement of the home and automobile. Today, even with only 10% efficient PV arrays, 15-kilowatt systems are common and affordable (with tax credits carrying some of the burden).  It doesn’t take a large stretch of the imagination to see 30-50 kilowatt systems on every home and business within the next 50 years.
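Here is that sizing worked through, taking 9 sun-hours as the midpoint of the 8-10 hour range (my choice; the original doesn't pin it down):

```python
# PV array sizing under the stated assumptions: 9 sun-hours on sunny days
# (assumed midpoint), half the days sunny, 50% round-trip storage efficiency.
monthly_kwh = 2000
storage_efficiency = 0.50
sun_hours_per_day = 9
sunny_day_fraction = 0.50

# Generation must cover storage losses, so double the delivered energy.
generated_kwh = monthly_kwh / storage_efficiency                    # 4000 kWh/month
effective_sun_hours = 30 * sun_hours_per_day * sunny_day_fraction   # 135 h/month
array_kw = generated_kwh / effective_sun_hours
print(f"Required array size: {array_kw:.0f} kW")
```

The result lands right at the ~30 kW figure quoted above.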

So the next time you’re out on a sunny day at the gas station, filling up your car and worrying about gasoline futures, or wondering how to keep your electric bill low enough to feed your family, take a look up and realize that all the energy you will ever need is falling on the grass in your back yard - something to think about!  Till next time...

May 07, 2009

The Curse of Moore's Law


As many of you know, Gordon Moore observed in his 1965 paper that the level of integration of digital transistors increases exponentially - by his revised 1975 estimate, doubling every two years. So far, Moore’s Law has been pretty accurate, if not conservative.  The Intel 4004 processor of 1971 contained around 2300 transistors (yes, two thousand three hundred).  The new Intel quad-core Itanium "Tukwila" contains around two billion transistors - an increase of almost a million fold... so that’s good news, right?
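A quick sanity check of that growth rate, taking Tukwila as a roughly 2008-era design:

```python
# Does "doubling every two years" get from the 4004 (1971, ~2300 transistors)
# to roughly two billion by Tukwila's era (~2008 assumed here)?
transistors_1971 = 2300
years = 2008 - 1971
predicted = transistors_1971 * 2 ** (years / 2)
print(f"Predicted transistor count: {predicted:.2e}")
```

The simple model lands within about a factor of two of Tukwila's actual count - not bad for a 37-year extrapolation.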

In the scheme of higher performance or simply improved portability (think Apple iPod shuffle), higher transistor counts are a wonderful thing. However, there are forces at work at the atomic scale that are causing problems for these amazing shrinking transistors... as the size goes down, the total energy the chips consume is going up.  Why? First, some semiconductor fundamentals...

Figure 1 shows the scale of a typical Complementary Metal Oxide Semiconductor (CMOS) FET. Like all semiconductor processes today, MOSFETs (Metal Oxide Semiconductor Field Effect Transistors) are fabricated using a layering process.  Currently, the patterning is done using deep-ultraviolet lithography due to the extremely small line spacing used to fabricate the circuitry.  Basically, the UV light exposes sensitized areas of the device to either allow or prevent etching or ion implantation as the layers are built up to make the chip.  These are very tiny devices - the gate length of a modern MOSFET inside a digital chip is on the order of 65 nanometers, while an average human hair is roughly 100 micrometers in diameter... over 1500 times larger!

As the transistor’s conduction channel is made smaller, so must the insulating oxide that sits above it and controls the charge carriers between the source and drain. As the oxide gets thinner, it becomes harder to prevent electrons from "tunneling" through the insulator to the underlying substrate, conduction channel or source-drain extensions.  This phenomenon occurs in both the "ON" and "OFF" states, causing significant losses in large-scale integrated circuits.  Sub-threshold leakage is also a problem for digital devices.  This is the current that flows between the source and drain when the gate voltage is below the "turn-on" threshold - useful for analog circuits, but a curse for digital designers.
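Sub-threshold conduction follows a well-known exponential law: below threshold, the drain current falls off exponentially with gate voltage rather than switching cleanly off. The parameter values in this sketch are illustrative assumptions only:

```python
import math

# Toy model of sub-threshold leakage. All parameter values are illustrative.
def subthreshold_current(v_gs, v_th=0.3, i_0=1e-7, n=1.5, v_t=0.026):
    """Approximate sub-threshold drain current in amps.

    v_gs: gate-source voltage (V); v_th: threshold voltage (V, assumed)
    i_0: current at threshold (A, assumed); n: slope factor; v_t: thermal voltage
    """
    return i_0 * math.exp((v_gs - v_th) / (n * v_t))

# Even with the gate grounded, the transistor is not fully "off":
leak_off = subthreshold_current(0.0)
print(f"Leakage at Vgs=0: {leak_off:.2e} A")
```

Multiply that tiny per-transistor current by a billion transistors and the total static loss becomes very real.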

It turns out that as we shrink transistors, all of these parasitic problems that used to be minor at larger scales now add up, primarily due to the higher number of transistors found in modern devices.  Equation 1 shows the relationship between some of these characteristics - most notably, the supply voltage "V".  As the supply increases, dynamic power consumption increases with the square of the voltage and is often dealt with first (being the larger source of power consumption).  However, the static losses (iLEAK) are increasing as the transistor geometries shrink and the densities increase.  As the frequency of operation is scaled back to conserve energy, the static loss predominates - a real problem for large-scale devices with hundreds of millions of transistors.
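A minimal sketch of that relationship, in the usual form P = α·C·V²·f + V·I_leak (the capacitance, activity factor and leakage numbers here are my own illustrative values, not those from Equation 1):

```python
# Dynamic power scales with the square of the supply voltage and linearly
# with frequency; static (leakage) power scales with voltage alone.
# All numeric values below are illustrative assumptions.
def cmos_power(v, f_hz, c_farads=1e-9, activity=0.1, i_leak=0.05):
    dynamic = activity * c_farads * v**2 * f_hz   # switching losses (W)
    static = v * i_leak                           # leakage losses (W)
    return dynamic, static

full_speed = cmos_power(v=1.0, f_hz=2e9)   # dynamic dominates at high frequency
throttled = cmos_power(v=1.0, f_hz=2e8)    # static dominates once clocks slow down
print(full_speed, throttled)
```

Note what happens when the clock is scaled back tenfold: dynamic power drops tenfold, but the leakage term is untouched and takes over - exactly the problem described above.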

There are structural ways to minimize these losses, but as geometries continue to shrink, power will become a much more serious issue... not only for energy consumption (i.e., operating costs) and the battery life of equipment that uses these devices, but for the heat that builds up inside the chips themselves.  As heat builds up and cannot flow away from the source, the localized temperature rises, accelerating aging in the device.  A chip will fail sooner if operated at high temperatures, so lowering the power consumption improves the lifespan of the device.

Back in 2000 my company, National Semiconductor, pioneered a cool way to lower not only the dynamic power, but the static losses as well.  It’s called Adaptive Voltage Scaling (AVS) and was used primarily in portable devices to increase run time on batteries.  However, as chips continue to grow more complex, AVS is now showing up in large-scale devices such as the Teranetics TN2022 10GBase-T Ethernet physical layer device.  AVS leverages the fact that all digital chips are designed and implemented for worst-case process, timing and temperature.  This is like saying the most out-of-shape person will consume one gallon of water while hiking a trail… therefore, all hikers are required to carry one gallon of water - even though most will do just fine with a single canteen.  It burdens the better members of the group with the problems of the weakest.  By using technology, each digital chip can be “assessed” for its "water consumption" based on how "fit" it is... that is, how well the chip’s process performs.  Like humans, chips vary around a mean performance level, but the majority will fall near the center of the distribution.

AVS leverages this characteristic by placing monitor circuits inside the digital core (or cores) which report the current state of the chip to an embedded controller called the APC, or Advanced Power Controller.  The APC then decides whether the supply voltage is too high or too low and communicates those adjustments to an external power supply device called the EMU, or Energy Management Unit. As the temperature changes, the process ages or the digital load varies, the controller again makes updates that minimize the energy consumption of the overall system.  Energy savings of 40% or greater have been observed when using this technology, especially in systems that use multiple devices.
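A toy feedback loop can illustrate the closed-loop idea. Everything here - the timing-slack monitor, the margin target, the step size, the voltage floor - is hypothetical; it sketches the behavior, not the real APC/EMU interface:

```python
# Hypothetical AVS-style control step: nudge the supply voltage toward the
# minimum that still meets timing. All values are illustrative assumptions.
def avs_step(supply_v, timing_slack_ns, target_slack_ns=0.5, step_v=0.01):
    if timing_slack_ns < target_slack_ns:
        return supply_v + step_v            # margin too thin: raise the supply
    return max(supply_v - step_v, 0.7)      # comfortable margin: lower it (assumed floor)

v = 1.2  # start at the worst-case supply voltage
for slack in [2.0, 1.8, 1.5, 0.9, 0.4]:     # slack shrinks as voltage drops
    v = avs_step(v, slack)
print(f"Settled supply: {v:.2f} V")
```

A "fit" chip (lots of slack) settles well below the worst-case supply, cutting both the V² dynamic term and the V·I_leak static term; a slow chip simply stays near the top of the range.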

As Moore’s Law marches on, the number of transistors on a single chip will grow to levels never before imagined... and to keep these amazing devices running, technology will also need to address the methods for powering them.  As in my previous post, "The Personal Supercomputer in Your Pocket", the potential is limited only by our imagination and our ability to manipulate our physical world.  Till next time...