
August 2009

August 30, 2009

In Pursuit of Efficient Lighting

What will relegate Edison to the history books? As a technologist I am often asked what single change would bring about a more stable energy infrastructure - it’s not quite that simple.  Our infrastructure has evolved over the past several hundred years into the distributed, fairly reliable source of electrical and chemical energy that we now enjoy.  Posing this question is like asking what single change could be made in the human body to allow us to live longer - again, not so simple.  Improve one area, and you may well degrade another.

This brings up some of the controversy over moving to electric vehicles in an effort to reduce greenhouse gases and remove the dependency on foreign oil.  If you could simply convert every carbon-fuel-based vehicle to electric, the entire electrical grid would suddenly be overwhelmed by the charging load.  It would also create a need for potentially hundreds of new power plants - many of them burning coal or natural gas and producing greenhouse gases!  Not a simple solution...

But possibly there is a single thing that could make a significant difference in our energy consumption - at least for now.  I have mentioned it before in several blogs, and it is fundamental to how modern humans live.  It is lighting - the artificial light that allows us to see when the sun goes down.  I cannot imagine a world without artificial light sources.  However, I periodically fly from coast to coast on a "red-eye" flight, and as I look down from 25,000 feet I am constantly amazed at the amount of power being fed to tens of thousands of street lamps - all lit brightly regardless of whether anyone is there.  I even pick out the lone 500-watt mercury-vapor lamp on some mountaintop and wonder why it’s there...

According to the U.S. Department of Energy’s Energy Information Administration (EIA), in 2007 the U.S. used roughly 526 billion kilowatt-hours of electricity for lighting (commercial and residential combined). In the following year a typical nuclear power plant produced roughly 12.4 billion kilowatt-hours, so U.S. lighting needs alone require roughly the equivalent of more than 42 nuclear power plants. In addition, the world population is growing and requiring more energy, which means the rate of increase in consumption is itself increasing.
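The arithmetic behind that "42 plants" figure is worth making explicit. A minimal sketch in Python, using only the two figures quoted above:

```python
# Back-of-the-envelope check of the "more than 42 nuclear plants" figure.
# Both numbers are from the post: EIA 2007 U.S. lighting consumption and
# a representative annual output for a single nuclear power plant.

US_LIGHTING_KWH = 526e9      # kWh/year used for lighting (EIA, 2007)
NUCLEAR_PLANT_KWH = 12.4e9   # kWh/year from a typical nuclear plant

plants_needed = US_LIGHTING_KWH / NUCLEAR_PLANT_KWH
print(f"Equivalent nuclear plants: {plants_needed:.1f}")  # → 42.4
```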

You cannot simply stop using power, but you can be more efficient with what you have.  As it turns out, Light Emitting Diodes, or LEDs, are on the fast track to replace both incandescent and fluorescent bulbs.  LEDs today are already more efficient than incandescent bulbs and are closing fast on fluorescent designs. One problem (among several) slowing adoption is the luminous intensity of an LED.

The problem stems from the way photons are created within the band gap of the diode structure.  As electrons recombine across the band gap (a forbidden range of energy levels), they transition from a higher energy state to a lower one.  In most diodes this transition is non-radiative (no light) and the energy is simply converted to heat.  In a direct band-gap material, the transition emits a photon whose energy is roughly that of the gap.  This is the basic operating principle of the LED.  However, most of those photons are trapped in waveguide modes within the semiconductor material and never contribute to the light output - only additional heat as they are reabsorbed within the material.
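To connect band-gap energy to emitted light: a photon created across the gap has wavelength λ = hc/E. A small sketch - the GaN and GaAs band-gap values below are typical textbook figures, not numbers from the post:

```python
# Convert a semiconductor band-gap energy (eV) to the wavelength of the
# photon emitted when an electron-hole pair recombines radiatively.

H = 6.626e-34   # Planck's constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electron-volt

def gap_to_wavelength_nm(gap_ev: float) -> float:
    """Emitted photon wavelength in nanometers for a given band gap."""
    return (H * C) / (gap_ev * EV) * 1e9

# GaN (blue/near-UV LEDs) vs. GaAs (infrared) - typical band-gap values.
print(f"GaN  (3.4 eV):  {gap_to_wavelength_nm(3.4):.0f} nm")
print(f"GaAs (1.42 eV): {gap_to_wavelength_nm(1.42):.0f} nm")
```

This is why wide-gap materials like GaN were the key to visible-light LEDs: narrow-gap semiconductors emit in the infrared.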

Well, over the last several years some very clever people at MIT began looking at regularly spaced nano-structures that act as waveguides, channeling those lost photons out of the depths of the LED material.  These are called photonic crystals, and they have driven the luminous intensity and efficacy of LEDs to new highs.  The researchers formed a company around the technology, called Luminus, to manufacture these ultra-bright LEDs.  This innovation may very well be the first step toward a solid-state lighting future.

Now there are still sources of inefficiency: the Stokes shift (found in white LEDs that use phosphors), thermal conduction requirements (unlike incandescent bulbs, LEDs cannot shed their heat as IR emission), and higher cost plus the additional electronics required to power and monitor these devices.  However, simply improving the efficiency of every light bulb in the U.S. by 50% would immediately remove more than 30 coal-burning power plants from operation.  Now that’s significant. Till next time...
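The coal-plant claim can be sanity-checked the same way. The per-plant output below is my own assumption (roughly a 1 GW plant at an 80% capacity factor), not a figure from the post:

```python
# Sanity check: energy saved by a 50% lighting efficiency improvement,
# expressed as equivalent coal plants. The lighting figure is from the
# post; the per-plant output assumes a ~1 GW plant at 80% capacity.

US_LIGHTING_KWH = 526e9              # kWh/year (EIA, 2007)
COAL_PLANT_KWH = 1e6 * 8760 * 0.80   # kW * hours/year * capacity factor

saved_kwh = US_LIGHTING_KWH * 0.5    # halve lighting consumption
plants_retired = saved_kwh / COAL_PLANT_KWH
print(f"Coal plants' worth of energy saved: {plants_retired:.0f}")
```

Under those assumptions the answer lands in the high 30s, consistent with the "30 plus" estimate above.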

August 24, 2009

The Energy Loss of Poor User Interface Designs

I was fueling my car the other day, and the pump I was using had one of the worst user interface designs I have ever come across (the brand of pump will remain nameless... but you know who you are).  As I struggled with the poor response time, lack of feedback and just overall bad programming (and this was a simple fuel pump), it made me think... how much energy is lost when users take extra time on a system with a poor UI design?

I’m sure you know what I mean... most software is delivered with very little user testing.  Of course the designer knows how to use it, but the real test is someone with absolutely no knowledge of the software.  How fast can you use it and get the information you need?  I see this in web designs and other information-server applications.  If I have to drill... and drill... and drill... to get to the level I need, I go crazy - especially if I made a bad choice somewhere along the way.  It’s like the old-style "wizard" help dialogs: you get to the end, the software announces it is about to do unnatural things to your data and asks if you are sure you want to use the original file - and that’s when you realize that twenty steps back you should have specified a new file name!  That’s what I’m talking about.

Or what about unresponsive code - oh, this is really high on my list of bad software behavior.  If I have to wait for one task to complete before starting another (especially if they are unrelated), I start counting the seconds like I’m in prison.  Some software engineers didn’t get the memo that we’re in the 21st century and multi-threaded applications are not some lab curiosity!  Or how about the lack of user feedback - when pushing buttons on a piece of equipment yields nothing in return?  Is the equipment not working? Is it busy doing something else?  Is the button broken?  We just don’t know, but the time it takes to complete whatever task I’m doing certainly increases.

OK, so much for the rant.  But how much energy, if any, is lost?  Surely something must be. Let’s examine a fictitious but realistic example - an ATM, or Automated Teller Machine - something almost everyone is familiar with.  The ATM has an LCD touch screen, and the unit sleeps when no one is around to conserve energy.  Only when someone walks up to it (detected by motion sensing) do the LCD, backlight and processor wake up.

In this thought experiment, two revisions of software were released: revision A, which has UI issues, and revision B, which was revised to improve the UI.  The only difference between the two releases is the user interface - everything else is the same.  The ATM is in a high-traffic area (of course) so that it generates the most revenue for the bank through access fees.  Revision A lacks the "beep" for user feedback and uses one thread for all functions. Revision B has a "beep" when the screen is touched and is multi-threaded, so the UI is independent of other activity.

Input speed is slower for revision A due to the single thread, and users may think they have not entered their PIN correctly due to the lack of audible (or tactile) feedback, causing them to touch twice.  Revision B has a thread dedicated to the UI, so the "beep" and character echo for each touch are almost instantaneous.  I would imagine that 50% of the time, revision A will cause an incorrect entry of the PIN - at least during the first attempt - adding roughly 10 seconds.  The next hurdle is entering the amount for deposits or withdrawals - probably 90% of the machine’s usage.  Assume the same 10-second error recovery time when an error is made. Using these simple estimates and assuming an average of 30 users per hour, Table 1 shows the total time the systems are up (not sleeping) for each revision (assuming only withdrawals and deposits).

Table 1 – Software revision unit run time comparison

                          Rev A time (sec)               Rev B time (sec)
  PIN entry               (7 + (10 * 0.5)) * 30 = 360    5 * 30 = 150
  Transaction selection   6 * 30 = 180                   4 * 30 = 120
  Amount entry            (10 + (10 * 0.5)) * 30 = 450   8 * 30 = 240
  Communications time     15 * 30 = 450                  15 * 30 = 450
  Accept / dispense time  5 * 30 = 150                   5 * 30 = 150
  TOTALS                  1590 (26.5 minutes)            1110 (18.5 minutes)
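Table 1’s entries follow directly from the stated estimates. A sketch that reproduces them - all timings are the post’s assumptions (30 users/hour, a 10-second error penalty, and a 50% error rate on revision A’s two text-entry steps):

```python
# Reproduce Table 1: total run time per hour for each software revision.
# Base step times (seconds) and penalties are the estimates from the post.

USERS_PER_HOUR = 30
ERROR_RATE_A = 0.5     # revision A causes a re-entry 50% of the time
ERROR_PENALTY = 10     # seconds lost per entry error

# (step name, rev A base seconds, rev B seconds, error-prone on rev A?)
steps = [
    ("PIN entry",              7,  5, True),
    ("Transaction selection",  6,  4, False),
    ("Amount entry",          10,  8, True),
    ("Communications time",   15, 15, False),
    ("Accept/dispense time",   5,  5, False),
]

total_a = total_b = 0
for name, a, b, error_prone in steps:
    a_time = (a + (ERROR_PENALTY * ERROR_RATE_A if error_prone else 0)) * USERS_PER_HOUR
    b_time = b * USERS_PER_HOUR
    total_a += a_time
    total_b += b_time
    print(f"{name:22s} A: {a_time:5.0f} s   B: {b_time:5.0f} s")

print(f"{'TOTALS':22s} A: {total_a:5.0f} s ({total_a / 60:.1f} min)"
      f"   B: {total_b:5.0f} s ({total_b / 60:.1f} min)")
```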

So, looking at this hypothetical ATM, we see that every hour, a bad user interface that causes entry errors may increase the run time by 8 minutes.  If the unit sleeps for the remainder of each hour (a simplification), then the run time increases by 8 minutes per hour, or 3.2 hours per day (assuming an average of 30 users per hour around the clock).  A more accurate model would account for usage and sleep patterns over the entire day, but it is obvious that a poor UI increases the run time of the machine.

In this model, if the ATM consumes 200 watts in run mode and 20 watts in standby, the energy consumption increases by 576 watt-hours per day, or an additional 210.4 kWh per year, simply due to errors caused by the poor user interface… makes you think about the next time you start writing code, doesn’t it? Till next time...
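Those closing numbers can be checked in a few lines, using only the figures above:

```python
# Extra energy consumed per day and per year due to the poor UI
# (revision A). Power figures and the 8 extra run-minutes per hour
# are the estimates from the post.

RUN_W, STANDBY_W = 200, 20
EXTRA_RUN_HOURS_PER_DAY = 8 / 60 * 24   # 8 extra minutes every hour

# Each extra run-minute replaces a standby-minute, so the delta is 180 W.
extra_wh_per_day = (RUN_W - STANDBY_W) * EXTRA_RUN_HOURS_PER_DAY
extra_kwh_per_year = extra_wh_per_day * 365.25 / 1000

print(f"Extra energy: {extra_wh_per_day:.0f} Wh/day, "
      f"{extra_kwh_per_year:.1f} kWh/year")  # → 576 Wh/day, 210.4 kWh/year
```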

August 03, 2009

Lower Power - It’s All in the Architecture...

Better Architecture = Lower Power

I was wandering around the show floor of the Design Automation Conference (DAC 2009) in San Francisco last week, talking to various vendors of EDA software and other interesting semiconductor design tools. I was amazed at how many vendors had that special tool for "lowering power by up to 20%" - just press the button and our tool will magically reduce your system power.  Oh, if only it were true - the problem is much deeper and more complex than it appears.

We now live in a world where machines build machines - yes... it’s true... The Matrix is real - well, the "machines building machines" part, anyway.  Ask a system-on-a-chip (SoC) designer to tell you exactly how his massive billion-gate device works.  Not the blocks, inputs or outputs ("here we have a 128-bit bus for memory, and over here we have five megabytes of static RAM, and here..."), but the real gates of the design at the transistor level.  This is like asking a software programmer to explain the machine code spit out of a C++ compiler - possible, but unlikely.

These tools optimize and streamline the design based on embedded rules that are under the user’s control. Their limit is that no tool can improve an engineer’s bad design (software or otherwise).  So if engineers want to build lower-power SoCs, they need to use the most powerful tool available - the one between their ears.

We are now at a point where shrinking process geometries are causing new problems that are unlikely to go away with the next generation of tools.  When we were shrinking from 0.5 micron gate lengths to 0.35 micron, problems with leakage and other structural artifacts were much easier to deal with.  Today’s 45 nanometer gate lengths (0.045 micron - more than 10x smaller) bring an entirely new set of problems.  First, there are far more transistors than when we were building chips on a 0.5 micron process. Second, they run much faster. Third, they leak current like a torpedoed ship leaks water - over three orders of magnitude more than a typical 250 nm process (at 30 degrees C, roughly 3000 nA/um for a typical 45 nm process vs. 1 nA/um for 250 nm).

So what’s a designer to do to decrease power consumption?  Engineers need to start thinking of new ways to architect their designs. Here are some ideas:

1. Partition the system and provide isolation so that sections can be turned off.  Today’s tools allow for this, but very often engineers do not use this technique.

2. Don’t be afraid to provide multiple voltage islands.  Yes, it’s scary, but foundries provide models of their processes at different operating points.  If you don’t need to go at warp 9.9, you don’t need your antimatter reactors running at full power... close your timing at the lower voltage and level-shift that section.  Remember, dynamic power varies as the square of the supply voltage, and leakage drops with voltage as well...

3. Gate or dynamically scale your clocks.  If the system doesn’t need to run at full speed, gate or slow the clocks.  This is an architectural issue, and some systems cannot use this method (e.g., video accelerators).  However, with some re-thinking there may be areas that can slow down under lighter loading or other conditions that do not require full performance.

4. Dynamically scale the supply voltage.  This can be done in combination with clock scaling, using either open-loop, table-based methods such as Dynamic Voltage Scaling (DVS) or more advanced closed-loop techniques such as Adaptive Voltage Scaling (AVS), which continuously monitors the silicon for adequate performance and automatically adjusts the supply voltage to maintain timing closure.
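The payoff of combining points 3 and 4 follows from the classic dynamic-power relation P ≈ α·C·V²·f. A sketch - the capacitance, voltage, frequency, and activity values here are purely illustrative, not tied to any real process:

```python
# Illustrate voltage/frequency scaling: dynamic power scales as C * V^2 * f,
# so scaling V and f together yields roughly cubic savings. All numbers
# below are illustrative only.

def dynamic_power(c_eff_f: float, vdd: float, freq_hz: float,
                  activity: float = 0.15) -> float:
    """Dynamic switching power in watts: alpha * C * V^2 * f."""
    return activity * c_eff_f * vdd**2 * freq_hz

C_EFF = 2e-9   # effective switched capacitance, farads (illustrative)

full = dynamic_power(C_EFF, vdd=1.1, freq_hz=1.0e9)
# Light load: drop to 60% frequency and close timing at 0.9 V.
scaled = dynamic_power(C_EFF, vdd=0.9, freq_hz=0.6e9)

print(f"Full speed: {full:.3f} W   Scaled: {scaled:.3f} W "
      f"({100 * (1 - scaled / full):.0f}% dynamic power saved)")
```

Even this modest operating-point change cuts dynamic power by roughly 60% - the V² term does most of the work, which is why voltage islands and AVS are worth the design effort.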

Any of these techniques will improve your power numbers. So employ them - if not to save energy for the planet or reduce the system’s carbon footprint, then to shrink heat sinks or improve your mean-time-to-failure numbers (lower junction temperature means longer life).  At this point, the greater gains come from improving the architecture - at least until we make the move to quantum-well transistors! Long live Moore’s Law...

Till next time...