

Energy Efficiency

February 02, 2009

The Role of Semiconductors in Energy Conservation


I’ve been hearing a great deal about how various technologies will be deployed to help reduce our carbon footprint as well as provide a sustainable energy future for all... these include alternative energy generation, smart grids, new solid-state lighting, and more.  The most interesting thing is that underlying all of those technologies (and many others) are the semiconductors that provide the computational engines, the sensing and signal conditioning, as well as the power conversion.  It is the humble "chip" that defines the semiconductor industry and has made such amazing strides in the 50 years since its debut.  Now it’s time to leverage that technology in saving energy - not just consuming it.

Without semiconductors, very few of our modern technologies would exist.  It would either be impossible to manufacture them or they would simply be too complex to implement (think "mechanical or vacuum tube" based computers).  Today, with energy on everyone’s mind, conservation is at the forefront along with improved efficiency.  Considering that almost no one owned a computer in 1981 (except geeks like me), the conversion efficiency of the power supplies was not a major issue - cost might have been a higher priority.  However, today just about everyone has at least one computer and the energy consumption of the system is a high priority.  Building computing platforms that use less energy is a focus for the major microprocessor vendors as well as the system designers.

Extending the view out into the Internet, the picture becomes cloudy on exactly where the power is going.  However, it is going somewhere - and in gigantic quantities.  Yahoo and Google are both building new data centers in the Pacific Northwest to move closer to sources of hydroelectric power, which is (for now) plentiful and less expensive.  With the growth of the Internet continuing for the foreseeable future, the power consumed by this infrastructure will continue to climb.  Estimates are that by 2050 an additional 300 one-gigawatt power plants (coal, nuclear, natural gas, etc.) will need to be built to support the increasing consumption of electrical power.

So, if the semiconductor industry has enabled so much through higher levels of integration and performance, why can’t the next big challenge be to make these systems more energy efficient? I have no doubt that is exactly the thought on everyone’s mind.  In the past, the goal was to put as many active devices on a single "chip" as possible.  Today, a billion transistors is standard operating procedure.  Now the goal is to reduce how much energy each transistor uses to do the same job.  New technology such as quantum well transistors holds the promise of reducing the energy consumed to around 1/10th that of today’s modern CMOS (Complementary Metal Oxide Semiconductor) transistors.  Other technologies such as carbon nanotubes will also play a role, but we may not see the fruits of these technologies for another 5-10 years.

So, remember while you’re talking on your cellular phone or watching that brand new 50" flat panel HDTV... without the semiconductor industry most of modern life would not exist... and the future is more dependent on the success of that industry than most think.  Till next time...

January 07, 2009

Energy Consumption in Consumer Electronics


Happy New Year... with CES around the corner, it is interesting to think about consumer electronics and how energy efficient they are today.  I often think that with so many new devices available to consumers, the amount of energy consumed is actually growing at an ever-increasing rate.  This is most likely true simply due to the decreasing cost of certain technologies.  As consumers buy more, it is even more imperative that the energy efficiency of these devices continues to improve.

An interesting twist in the mix is large format flat-screen HDTVs.  What most people don’t realize when they rush out and buy that new 50" panel is that it probably consumes quite a bit more energy than their old 27" tube model.  A 50" plasma HDTV can draw nearly 500 watts, whereas an older 27" tube model draws closer to 300 watts.  With the prices of the larger displays dropping, consumers would rather have the bigger picture than simply replace their 27" model with a similarly sized LCD HDTV, which actually draws far less - around 150 to 200 watts.

A similar phenomenon exists for LCD monitors for PCs.  With the performance of PCs improving by orders of magnitude over the last several years, gaming and other display-intensive applications are now driving consumers to purchase larger LCD displays - or even multiple displays for the same PC.  I’m guilty of the latter since I find having multiple displays for my PC desktop provides me a larger, more efficient work-space.  You’ve seen the stock trading setups with 6 LCD panels... I’m not quite that far yet... but it does make you think that if more monitors are hooked up to PCs, then the energy consumption - as a whole - also rises.

Speaking of PCs, they have come extremely far in energy efficiency.  However, you need to look beyond the watts that flow into the machine and consider what it is doing for the energy it consumes.  My first PC, which ran at 4.77 MHz, could only squeak out 0.25 MIPS for the 100 watts it consumed (not including the CRT monitor) - an efficiency rating of only 0.0025 MIPS/watt.  A modern PC can exceed 1000 MIPS (considering the graphics engine as well) and may only consume 250 watts including the LCD monitor.  That provides an efficiency rating of 4 MIPS/watt - an improvement of 1600 times over my PC of 28 years ago.
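Since I occasionally dabble in Delphi on this blog, here is that arithmetic as a tiny console program - the MIPS and wattage figures are the rough estimates from the paragraph above, not measurements:

program MipsPerWatt;
{$APPTYPE CONSOLE}
uses SysUtils;

// Efficiency metric: how much computing you get per watt consumed
function Efficiency(Mips, Watts: Double): Double;
begin
  Result := Mips / Watts;
end;

var
  OldPC, NewPC: Double;
begin
  OldPC := Efficiency(0.25, 100);    // 1981-era PC: 0.25 MIPS at 100 W
  NewPC := Efficiency(1000, 250);    // modern PC + LCD: ~1000 MIPS at 250 W
  WriteLn(Format('1981 PC:     %.4f MIPS/watt', [OldPC]));
  WriteLn(Format('Modern PC:   %.1f MIPS/watt', [NewPC]));
  WriteLn(Format('Improvement: %.0f times', [NewPC / OldPC]));
end.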

Also, software is very important in the efficiency equation.  The operating system is well aware of the user’s current processes and can greatly reduce the PC’s energy consumption by various methods such as turning off hard drives, powering down LCD monitors or simply lowering the backlight intensity while you’re pondering your next move in game-land.  All modern PCs that carry an energy efficiency rating will have these features.

And there is always the ubiquitous cell phone.  The only problem is that you really can’t call them simply cell phones anymore.  Most modern cellular phones include MP3 players, video games, calendars, contact management tools, cameras, and many more features.  What’s also interesting is the size of the battery - not just the capacity, but the mechanical size.  I took the battery out of my Blackberry and it measures approximately 1" x 1.5" x 0.25".  The old "bag" phone I used to carry in the 1980s had a battery that looked more like a laptop pack.  The lack of cell towers and early analog (AMPS) cell technology required a 3 watt transmitter in the phone, which needed a big battery to provide adequate talk-time.  Today, CDMA or GSM technology is far more efficient with bandwidth and power consumption.  Also, the mobile processors use energy-saving technologies such as Adaptive Voltage Scaling, pioneered by my company, National Semiconductor, to further improve the efficiency of digital cores.

So to sum it up, consumer electronics have come a long way - not only in features and performance, but in their efficient use of energy.  Just about every consumer electronic device or appliance carries an EnergyStar rating sticker so you know exactly how much energy the device will use.  It even spells it out in dollars so you can compare equipment.  With energy prices bound to rise again, most savvy consumers will look at those stickers and think about their monthly power bill before they make their purchase... till next time!

October 01, 2008

Performance and Energy Consumption... Are They Exclusive?


In my position I hear a great deal of discussion regarding the physical trade-offs between performance and power consumption.  "If you want to accelerate quickly in a car, you need power to overcome inertia."  I agree... but increasing the size of the power plant in a car isn’t the only way to get it to accelerate faster.  The force required is a function of mass (F = ma), so by decreasing the mass, you can get faster acceleration with the same power plant.

This is a very common approach to improving either performance or fuel economy in today’s modern sports cars as well as jets, boats and other vehicles.  But these principles apply to electronic systems as well.  Complementary Metal Oxide Semiconductor (CMOS) based devices define modern digital and mixed signal electronics, and power issues are baked into the very design of these devices as performance is increased.  For example, DRAM designs have capitalized on the supply voltage vs. power relationship of CMOS processes to reduce the power consumed (see the equation below).

P = C × V² × f

where P is the dynamic power consumed, C is the switched capacitive load, V is the supply voltage and f is the switching frequency.

This equation shows that the frequency and capacitive load terms contribute linearly to the power consumption.  Reduce the frequency by half and the power will also be cut in half.  However, the supply voltage is a square-law term, so by reducing the supply voltage from 1.8V (DDR2 memory) to 1.5V (DDR3 memory), the power consumption is reduced by roughly 30% - a major savings.
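A quick sanity check of that 30% claim in Delphi - the only inputs are the two supply voltages above, with frequency and load held constant:

program DramScaling;
{$APPTYPE CONSOLE}
uses SysUtils;
var
  Ratio: Double;
begin
  // Dynamic power scales with V squared, so the DDR2 -> DDR3 saving
  // at the same frequency and capacitive load is 1 - (1.5/1.8)^2
  Ratio := Sqr(1.5 / 1.8);
  WriteLn(Format('DDR3 draws %.1f percent of DDR2 power', [Ratio * 100]));
  WriteLn(Format('A savings of %.1f percent', [(1 - Ratio) * 100]));
end.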

As process geometries continue to shrink, the conduction channel gets shorter (good) and the gate insulator gets thinner (bad).  To reduce leakage (electrons that "tunnel" through the thin insulator), manufacturers have moved to lower supply voltages.  By reducing the voltage across the transistor, the associated electric field that exists between the gate and the conduction channel is reduced as well.  New materials such as nitrided hafnium silicates (HfSiON) are being used to replace silicon dioxide in an effort to prevent leakage and electron tunneling (Intel is already shipping processors using a hafnium-based high-k dielectric in its 45 nm process).

No matter how you slice the problem, when you have a billion transistors each using a tiny amount of power, you end up with a large amount of power being consumed.  Processors and digital systems require huge numbers of transistors, and for the foreseeable future densities will only increase.  To keep increasing performance without a runaway power budget, there must be another way...

My impression is that the industry can take several paths in an effort to increase performance while minimizing power consumption.  One path (currently the path of choice) is to continue to shrink the process geometries to 20 nm or below, which becomes extremely hard to fabricate.  This will allow more transistors on the same size die and utilize sub one-volt supply voltages.  Another avenue is to migrate away from silicon processes altogether and find another way to make transistors.  There is on-going research in the area of quantum well transistors made in indium antimonide, which may be the next step for higher performance digital functions with extremely low power - one tenth of today’s power consumption.  There is a large capital investment in integrated circuit fabrication technology, so the least painful next step will be one similar to silicon-based manufacturing.  Research is also being done in diamond-based semiconductors as well as carbon nanotube technologies to reduce power while improving performance.

But what about revolutionary change?  What if we abandon semiconductors altogether and move to optical non-linear crystal based computing and analog functions?  Is this even possible on the scale at which we currently build processors, analog-to-digital converters, amplifiers or other electronic components?  Maybe our industry needs to take a step back and consider the new horizon in front of us...  a world where energy consumption is as much a factor as how fast we go... something to think about.  Till next time...

September 02, 2008

The Energy Content of Software – Part II


In part one of this series I talked about moving more digital signal processing back to the analog domain to save energy.  The savings come both from reducing processor cycles and from moving the functions to lower power analog solutions.  But what about the software code itself?  Does it have energy content, and can it be optimized to reduce a system’s power consumption?  The answer to all of these questions is YES.

To prove the point you can examine any modern laptop computer architecture.  Laptop processors are designed to speed up and slow down depending on the system load.  If you are simply idling the CPU by typing a blog post (like I’m doing while I write this), the processor doesn’t need to run at its full processing capacity.  Likewise, the operating system is aware of the processes that are running and adjusts the system accordingly.  All of these advancements provide much longer run times in today’s portable computers.

However, fundamentally the software industry has not embraced the concept of "Code Energy" or how many joules a subroutine or algorithm uses.  Object oriented languages have enabled modular code reuse and tremendously improved the time it takes to create software.  In fact, it’s so easy even this author can create fairly powerful software without spending 20 years perfecting my skill.  This arguably can result in poorly written code - software that is wasteful in both CPU cycles and energy. 

For example, many GUI (Graphical User Interface - pronounced "gooie") routines like a list update can be handled in several ways.  The correct way is to suspend the graphical refresh thread while updating the list.  This action prevents the GUI from continuously trying to refresh the visible list the user sees while the processing thread adds contents to the list.  I see this error many times in software and it’s very annoying. It wastes cycles and slows down the user interface - sometimes to a crawl...  See my Delphi example below of a correct way to update a ListBox object:

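Something along these lines - the procedure wrapper and names here are mine, for illustration, but Items.BeginUpdate and Items.EndUpdate are the standard VCL calls that suspend and resume repainting:

// In a VCL form unit (uses Classes, StdCtrls)
procedure RefreshListBox(ListBox: TListBox; NewItems: TStrings);
var
  i: Integer;
begin
  ListBox.Items.BeginUpdate;   // suspend repainting while the list changes
  try
    ListBox.Items.Clear;
    for i := 0 to NewItems.Count - 1 do
      ListBox.Items.Add(NewItems[i]);
  finally
    ListBox.Items.EndUpdate;   // one repaint at the end, not one per Add
  end;
end;

Without the BeginUpdate/EndUpdate pair, the control repaints after every single Add - exactly the wasted cycles described above.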

Another example is bloated code.  Many times libraries of functions are used to simplify the creation of a software project.  This is an extremely powerful concept allowing great reuse of routines.  These libraries can be dynamically loaded or unloaded from memory as needed and can be shared among many programs or processes running simultaneously.  Now imagine you have a library that contains a quick sort routine as well as 100 other routines all packaged into one library (for Windows fans, a DLL or Dynamic Link Library).  This is a precompiled object that stands completely alone and is loaded by calls to the operating system.  If all you wanted was the quick sort routine, the operating system would still need to load (and maintain) the entire library of routines since they are a complete unit.  This takes cycles, memory and other resources.  It is the equivalent of buying (and storing somewhere) the entire contents of a clothing shop when all you wanted was a single T-shirt.

Many software engineers have become complacent with the belief that they’ll always have enough memory, disk space or CPU cycles.  The tools for creating software today are incredibly powerful and allow amazing amounts of code reuse and optimization.  It is very easy to become dependent on the tools to make sure our code is written efficiently.  The tool, like a hammer or screwdriver, is only as good as the user at the control end.  Poor architecture leads to poor performance and thus, high energy content.

If you turn back the hands of time 30 years to the day when microcontroller systems had 16KB (16,384 bytes) of RAM, engineers were exceptionally creative in building systems that ran efficiently.  They didn’t have object-oriented language tools and any code that was reused was from routines written in ASSEMBLER for that particular processor - not a universal module that could be targeted to any system.  In those days it took years to create complex systems, so with the advent of high-level object-oriented languages such as C++, engineers were quick to adopt them.  Unfortunately, many of these tools were targeted toward code reusability and time to completion - not energy efficiency.

As it turns out, many of these inefficiencies were noted in the poor performance of early operating systems.  The offending routines, usually in the kernel or core of the system, were re-written in highly efficient assembler code and optimized for quicker execution.  This action, in effect, lowered the energy content of the OS kernel and improved the performance of the overall system.  Engineers today don’t necessarily need to hand code every little routine in machine language to get better efficiency from their software.  Let’s look at a list of things that can lower the energy content of software - all of these ideas apply to both embedded systems and full-blown computer systems and servers.  I’m going to do them in "Letterman Style" ascending order of importance for fun and effect - now, no looking ahead!

10. Use a stop watch : If you want to know exactly how much energy is in your software, measure it.  That is, if the process takes seconds to complete you have a great deal of processing going on.  I like the comparisons of various sorting techniques - Bubble, Bi-Directional Bubble, Selection, Shell, Quick Sort and others.  On large data sets you can actually time how long it takes for these routines to complete and compare one with the other (see the timing sketch after this list)... a stop watch is a handy programming tool.  In this case, shorter is better...

9. Use a power meter : This is really important for embedded applications, but can apply to most systems.  The power meter should be a chart recording type (digital or analog) and calibrated to the range of the system.  You can actually watch when the supply power fluctuates due to sections of the system turning on or off or system clocks speeding up or slowing down.  Oh, you don’t dynamically control the subsystems in your project to save energy?  Bad programmer... what have we been talking about here?

8. Ask your colleagues : Has anyone in your department ever done something like this before? I’ve heard stories of engineers sitting one cube apart where one of them has worked on a similar project and the other never knew.  Ask around your department before you start writing code - maybe someone perfected the ideal and highly optimized routine for you to use...

7. Think of CPU cycles as money : Imagine having one million dollars available and you need to build a project using those funds.  What if, at the end of the project, any money left over was yours to keep?  How would you spend the money while building the project - I’d imagine very frugally... Think of your computer cycles the same way.  Remember, cycles are not free and neither is memory or communication bandwidth.  These all have finite costs associated with them - and they all draw power.

6. Know your operating system : This is one of my biggest pet peeves! I cannot tell you how many times I’ve seen code written that intentionally works around the benefits built into the OS and runs completely unreliably or with abysmal performance - and thus, high energy content.  I mentioned a few examples above, but the list could include hundreds of examples if someone gave me the time!

5. Learn your language : Just because you can write a simple hashing routine doesn’t mean you fully understand the programming language you are using.  I, for one, don’t always know every OS call or library routine - and that’s all right since that’s what reference documents are for.  However, you should know the best way to code a solution to a problem. Just like a written language, there are ways to communicate things that are more efficient and precise.  I find badly written code so often it makes me wonder if these programmers ever went to school to learn how to write code!  Poorly written code wastes energy...

4. Change your perspective : This is true for many problem-solving ventures.  A new or fresh perspective can sometimes expose methods that are superior to those used in the past.  I often find myself taking a coffee break, or simply walking around for a while when I’m looking for a software bug or before starting a new design. Have someone else look at it as well.  I once had my wife look at some broken code (she is not an engineer or programmer) to see if she could find anything that seemed out of place.  She noticed that when letters and numbers appeared in parentheses (an "if" test), there were always two equal signs... except in one.  There was the problem - a valid line of code that didn’t do what I wanted! I had looked at that code for days and never saw the flaw.

3. Improve your hardware knowledge : Software engineers may sometimes feel that all they need to know is software. Knowing more about the effects of your code on the performance of the system is absolutely a requirement - no options here.  For example, our cable company recently "upgraded" our set-top boxes with "new and improved" software.  Not only does this new release lack important functions that were previously there, but the overall performance is much slower - and all on the same hardware as we’ve been using!  Bad programmers...

2. Be modular : Once you’ve taken all the bloat out of your routines, clean up the entry and exit points to allow encapsulation.  Why waste all that great coding?  Even with my primitive coding skills, I’ve created routines that have simple entry and exit points.  Some I never thought I’d use again. But every once in a while I find a great piece of code in my stash of routines which saves hours of re-engineering the same solution!

1. Understand your tools : I cannot stress this point enough!  The tools that compile and assemble your high level object oriented code into machine code have many options to control the process.  Sometimes you want to optimize for space and other times you may want to optimize for performance - these are trade-offs. The tool sometimes will include diagnostic code in the compile to make debugging easier... did you remember to take it out in the final distribution?  I know these sound like simple things, but I’m sure there’s a large amount of production code out there that was released with the wrong flags set during the compile... we all get busy and forget things!

0. (Bonus) Always document EVERYTHING : You thought I was ranting on the tools issues - this is my biggest complaint when I look at someone else’s code.  If you did something clever - TELL ME...  If you know that setting a bit in the processor during a particular operation lowers the power, make a note as to why you did it.  Encapsulate your routines as well.  Tell me a story about why this routine exists - why was it important enough to encapsulate?  Oh, I could rant on this all day, but this is about saving energy - not hair on my head (it’s too late for that now).
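And here is the timing sketch promised in item 10 - a Delphi console program with a made-up data size and a fixed random seed so runs are comparable.  It puts a stop watch around a bubble sort; swap in your Shell or Quick Sort on the same data and compare:

program SortTimer;
{$APPTYPE CONSOLE}
uses Windows, SysUtils;

const
  N = 20000;
var
  Data: array[0..N - 1] of Integer;

procedure FillRandom;
var
  i: Integer;
begin
  RandSeed := 42;                  // fixed seed: every run sorts identical data
  for i := 0 to N - 1 do
    Data[i] := Random(MaxInt);
end;

procedure BubbleSort;
var
  i, j, t: Integer;
begin
  for i := 0 to N - 2 do
    for j := 0 to N - 2 - i do
      if Data[j] > Data[j + 1] then
      begin
        t := Data[j]; Data[j] := Data[j + 1]; Data[j + 1] := t;
      end;
end;

var
  t0: Cardinal;
begin
  FillRandom;
  t0 := GetTickCount;              // start the "stop watch" (coarse, but fine here)
  BubbleSort;
  WriteLn(Format('Bubble sort of %d items: %d ms',
                 [N, Integer(GetTickCount - t0)]));
  // A shorter time means fewer cycles - and fewer joules.
end.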

I’m sure you have similar rants and raves or simply good ideas on how to write efficient code.  Do share if you have some!  Hope you enjoyed this two-part series and gained some insights on how to lower the energy content of your software.  Till next time...

August 26, 2008

The Energy Content of Software – Part I


I’m frequently asked what is being done to improve the efficiency of semiconductor devices.  Digital core companies such as Altera and Xilinx, as well as processor companies like Intel, AMD, Freescale and many others, have been working to reduce both static and dynamic power in their products, especially as geometries move beyond 40 nanometers.  Most modern analog processes have much larger feature sizes and do not suffer from the same static power losses.  In fact, there is a renaissance of analog functionality being implemented in today’s designs, driven primarily by a retreat from the all-digital approach.

During the birth of the DSP age, digital processing was the Holy Grail of the electronics industry, and suppliers’ products wouldn’t move off the shelf unless they had a "Powered by a DSP" sticker on them.  The mantra was to convert analog signals into the digital domain as soon as possible so that algorithms running on digital signal processors could process what was once done with operational amplifiers.  This was a love affair with the ease of use and relatively short design cycles associated with coding the solution.  This science also provided abilities completely unrealizable in the analog domain - thus the romance with digital signal processing and the shift of curricula away from analog design across the world’s universities.

It is now recognized that DSP cycles, along with high performance analog-to-digital and digital-to-analog converters, all consume large amounts of power.  In many applications analog solutions were overlooked since many engineers were trained to solve the problems digitally.  A case in point is a technology familiar to many people - active noise cancellation.  Early attempts at removing ambient noise from either headphones or microphones employed DSPs with CODECs (coder/decoders) along with high performance digital filters and noise cancellation algorithms running on the processor.  This approach is fine for equipment plugged into power lines, but what happens when you want to use this technology in a set of headphones or a cellular phone running on a tiny lithium-ion battery?

Run time is king when using portable devices - especially consumer equipment such as cellular phones.  Many phone developers assume that the cost of the DSP in the phone is already amortized, so they look to continue to bloat the code by adding this function into the already taxed DSP.  Processing cycles are not free even if the hardware is... and this problem shows up in poor run time or a lack of performance.  Moving some of the functionality back to the analog domain can save a large amount of the power budget.  This is not only due to the function being moved out of the DSP’s software to a lower power analog solution, but by reducing the cycles executed by the processor. 

An example of this is the National LMV1088 Far Field Noise Suppression device.  This IC uses two microphones to listen to the near field audio signal or speech and cancel out the far field noise.  The noise suppression function is completely done in the analog domain greatly saving energy.  This technique is also referred to as A-to-I for Analog-to-Information.  It means that the analog signals are directly converted into something useful - in this case, improved speech recognition in high noise environments.

Next week in Part II of this post I’ll discuss the fundamentals of coding software and the effects it has on the power consumption of our computers and infrastructure.  Till next time...

August 21, 2008

Creating Efficient Architectures


Back in April, I sat on a panel at the 2008 Globalpress Electronics Summit held in San Francisco along with several esteemed colleagues.  We were asked a question at the end of the discussion - "What single thing could the electronics industry do to help save energy?"  My original answer was to implement a set of metrics to allow engineers to quickly evaluate the energy consumed by a device or system in relation to the function that device or system provides.  This would provide engineers a way to accurately evaluate competing devices and sub-systems to find the most energy efficient solutions.  National Semiconductor has adopted a set of metrics for its PowerWise® Solutions family of devices which defines these parameters.  But over the months since the conference, I’ve also come to think there could be more.

Today we are facing a crisis... one that is undeniable and visible every day we turn on the news.  Energy is in growing demand and heavily influences our economy.  Without it, our modern civilization would grind to a halt.  For example, at the current growth rate of electrical consumption it is predicted that by 2030 over 300 new one-gigawatt power plants will need to be built to keep pace.  There are two choices - we can find more usable energy or we can be more efficient with what we have.  The final solution will be a combination of both.

Part of being efficient is understanding what is required and what is superfluous.  Efficiency experts study process flows to see where time is being wasted in the production of some item or items.  Assembly lines are case studies for this type of exercise.  Early in the 20th century, Charles Sorensen and Charlie Lewis, working at Ford Motor Company, figured out how to build a moving assembly line for the Model T, which greatly simplified production of the vehicle, reducing both time and cost.  This change - moving the car to each work station instead of moving the workers - was a major efficiency improvement in the production of automobiles.

Engineers have a responsibility to be efficient, one that is often overlooked.  In 1908 William A. Smith stated, "Engineering is the science of economy, of conserving the energy, kinetic and potential, provided and stored up by nature for the use of man. It is the business of engineering to utilize this energy to the best advantage, so that there may be the least possible waste."  Just because you have 2000 watts of power available at an electrical outlet doesn’t mean you can use it all to solve your problem.  Sure, it’s easier to provide a function without concern for the method or implementation of the product.  However, engineers must consider their choice of components to best solve the problem using the least amount of energy.

But even more important is how these components are used together to solve a problem.  This is the architecture of the design - the combination of subsystems that best solves the problem.  Architectures can evolve over time based on the availability of certain technologies.  For example, modern LCD HDTV units have used LCD glass with red, green and blue (RGB) color filters to turn the white light of the cold cathode fluorescent lamp (CCFL) tubes in the back-light into color pixels.  Over 85% of the backlight energy is absorbed by these color filters.  Converting the backlight to white LEDs and controlling the brightness dynamically across the image frame greatly reduces power and improves the contrast ratio.  This modification of the backlight is an architectural change.

Possibly the next big architectural change in LCDs will be frame-sequential scan technology, which removes the color filters completely.  It also replaces the white LEDs with RGB LEDs which are sequenced - red, green, blue - in step with the LCD image (running 3 to 6 times faster than the frame rate).  Each complete image is composed of a red, a green and a blue frame which sequence so fast the human eye cannot detect it.  A 42 inch HDTV that today draws 500 watts could be reduced to fewer than 100 watts using this architecture.

Some of the greatest gains in energy efficiency can be found in improving the system architecture.  Intel and AMD have known this for years.  Single execution pipelines can only run so fast, so by creating multiple execution paths, the processing speed can be increased.  From a perspective of saving energy, the parts can run slower and provide the same amount of processing power, thus improving the ratio of performance to energy consumed.  This advancement in architecture enabled the notebook computing revolution we see today and is also helping reduce the power consumption of server farms.

In the analog semiconductor world, architecture is equally important.  High speed analog-to-digital converters (ADCs) can require a large amount of energy to perform the function of converting analog signals to digital bits.  In medical imaging systems such as ultrasound equipment, large numbers of ADCs are required to convert the analog image data into digital form so the image processors can create a picture.  To create a portable ultrasound that runs from batteries, you either need very large battery packs or a way to reduce the overall power consumption.

Typical ADCs used in ultrasound equipment use a pipeline architecture, which is very similar to the assembly line approach envisioned by Sorensen and Lewis.  Each step of the way, the system converts a piece of the analog signal to bits and passes the analog remainder to the next stage.  This architecture works very well and is found in many high speed ADCs.  However, it does take a certain amount of power to keep the "assembly line" running.  Alternative ADC architectures, such as delta-sigma ADCs, use switched-capacitor modulators and filters, which become very inefficient at higher conversion speeds.  National Semiconductor figured out how to substitute continuous-time versions for the switched-capacitor stages, which dramatically lowers the power.  The first component, the ADC12EU050, debuted with 30% lower power consumption than its nearest competitor.  This type of ADC also eliminates the requirement for external anti-aliasing filters, which further reduces the cost and power consumption of the overall system.

As engineers, we need to continuously examine how we design things and try to look beyond our normal methods.  Sorensen and Lewis looked beyond the limits of their current methods to fundamentally change the way automobiles were assembled - methods still in use almost 100 years later.  Thinking "outside the box" sometimes provides an insight never seen before and could solve many of the troubling problems we face today...

So, the next time you are about to design a new product, try to avoid starting the project with the "let’s look at how we did it last time and start there" approach.  One of my old professors once told me "a new problem requires a clean sheet of paper..." - pretty good advice even 30 years later.  Till next time...

August 05, 2008

The True Cost of an Internet “Click”


Did you ever stop and think about how much energy you consume?  Yes, you personally... and your family.  I think about it all the time.  I turn off lights, adjust the thermostat, consolidate my trips to reduce fuel consumption and turn off the TV when not watching.  I’m sure you do the exact same thing.  The cost of all forms of energy has been continuously increasing, especially in the last few years.  But have you ever thought about how much energy you consume when you click a link on a web page or send an email... probably not - and neither had I until now.

I do a great deal of research into how efficiently energy is used in various systems and processes, and I’m constantly on the Internet accessing websites.  Recently in a meeting, a fellow executive commented that hiding behind the cost of your Internet broadband connection and home computers are other energy drains: the infrastructure and servers that make up the information super-highway.  How much power was consumed because you wanted to see the latest top video on YouTube?  What if you didn’t click it?  How much power would you save?  How much carbon dioxide would you keep out of our atmosphere?  I thought, "Wow, what an interesting question..."  Now, can we answer it?  This is a monumental task... and difficult to estimate (but that’s never stopped me before), so we’ll have to examine exactly what happens when you access a website and make some assumptions to reach a reasonable conclusion.  Here goes...

First we need to consider what happens when you "click" a link in a browser.  The browser must first connect to the target server so it can request the page associated with the link.  This is accomplished using the Transmission Control Protocol (TCP) and is similar to placing a phone call to the server.  Once the server "answers" and establishes the connection, the browser forms a request packet for the page tied to the link.  This request asks the server to send the contents of the page back to the browser.  If the page address is valid, the server then responds with a stream of packets that identify it as a valid server response along with all of the Hypertext Markup Language (HTML) contents plus other information such as scripts, meta-data and formatting.  Once all of the contents of the request are delivered to the browser, the connection is ended and the information is rendered into something the user can see and read.  Modern browsers actually make multiple connections and requests simultaneously to fill in images and other sections of the page.  This makes the rendering much faster and provides a smoother appearance to the user (see below).

[Diagram: the browser-to-server HTTP transaction - TCP connect, page request, response packets, render]
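For the code-minded, the entire transaction above collapses into a few lines of Delphi using Indy’s TIdHTTP component - a sketch for illustration only, with example.com standing in for any site:

program PageFetch;
{$APPTYPE CONSOLE}
uses SysUtils, IdHTTP;
var
  Http: TIdHTTP;
  Body: string;
begin
  Http := TIdHTTP.Create(nil);
  try
    // One TCP connection, one GET request, one response - the whole
    // transaction described above in a single call
    Body := Http.Get('http://example.com/');
    WriteLn(Format('Received %d bytes of page content', [Length(Body)]));
  finally
    Http.Free;
  end;
end.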

The process above takes place between two computers usually separated by a vast distance.  It is very reasonable to expect most web accesses made by a user reach servers located anywhere from hundreds to thousands of miles away.  Between the two computers is a vast network of switches and routers - a "highway" for the data packets.  Like railroad trains, the packets travel from your cable modem over your cable network (the local spur line) to the central office.  There the packets are switched to higher bandwidth fiber optic cables (main rail lines) using very short pulses of laser light which travel extremely far.  The packets may transit several major switching stations before being routed to the local network connected to the distant server.  What’s interesting is that the messages between the computers will often require multiple packets and, like trains, these may arrive at different times, out of order from what was sent.  This occurs due to traffic conditions along the way; like trains on a railroad, the data routers find the most efficient path to deliver the packets, resulting in varying arrival times.  One job of the receiving computer is to re-order the information and pass it on to the higher level software for interpretation.

All of the technology to accomplish this transaction requires power - from the computers at both ends (yours and the distant server) to the networking equipment and networks in between.  As mentioned earlier, to estimate the power consumed in loading a web page we need to make some assumptions.  For this estimate, we’ll ignore the power in the local computer and home network infrastructure - this would be considered already spent in the local budget regardless of the Internet accesses.  We will only consider power consumed by everything external to your location.

Next we’ll need to consider the page contents and how many packets would be required to move the information back to your browser.  Our "typical" page will have no video since that is most often streamed and holds the connection open (like a long phone conversation with your best friend - only they do all the talking).  It will have 3 graphics that average 100 kB each and about 5000 characters of information (e.g. a Wikipedia or news page).  The total page contents will require approximately 310 kB to be transferred from the server to the browser.  Upstream from the browser, there will be at least 4 requests (1 for the page, 3 for the images).  The requests will occupy only a few hundred bytes of data, so in total the one web page request will move about 315 kB of data (which includes all the connection overhead) between the two computers.

Now that we have an understanding of how much information is transferred between the two machines, we need to examine how much additional networking equipment the information crosses and the power consumed.  We’ll assume the cable head end has a modem, a switch and a router - totaling approximately 200 watts.  The high speed connection on the Internet side of the router probably has a fiber link with an interface box (another 100 watts).  We’ll assume the packets make 3 hops to other routers along the way.  Each hop will have 2 fiber boxes and a high speed router (to simplify) for a total of 300 watts per hop.  The server farm will have one fiber box, a router and switches, which adds an additional 300 watts.  The total network power for that link is approximately 1500 watts.  Last, we need to consider the average power of a modern blade server - let’s assume it averages around 40 watts.

Now that we have a scientific guess at the power numbers, it gets a bit complicated.  We need to know how much time your data used each piece of equipment so we can get watt-hours, a measure of energy.  Let’s examine the various speeds starting with the cable side.  A typical Data Over Cable Service Interface Specification (DOCSIS) cable plant will have an aggregate bandwidth of around 152 Mbps (megabits per second) downstream and 108 Mbps upstream (toward the server).  To simplify the calculation of the time the packets spend on that leg of the network, we’ll use the upstream data rate of 108 Mbps.  We’ll also assume the fiber legs are OC-12 (Optical Carrier 12) with data rates of around 601 Mbps (622 Mbps minus 21 Mbps of overhead).  The final leg inside the web server’s infrastructure will most likely be a 1 Gbps (gigabit per second) Ethernet path.

To normalize all of these varying power-speed numbers, we’ll turn to a metric used by my company, National Semiconductor, to rate the power consumption of interface devices.  This breaks the speed and power numbers down into one unit of measure: energy per bit (joules/bit - see PowerWise® Solution Metrics).  I’ve also mentioned this method in a previous blog (The Efficiency of Moving Bits) and it allows us to greatly simplify calculating all the various speed-power numbers.  Table 1 shows how we figure out the energy per bit for each hop the data takes.  The total comes to roughly 4.6 microjoules (µJ) per bit.

Table 1 - Network Energy Consumption

Network Equipment    Power            Data Rate    Energy per Bit
Cable (DOCSIS)       300 W            108 Mbps     2.8 µJ/bit
Fiber (OC-12)        900 W (3 x 300)  601 Mbps     1.5 µJ/bit
Ethernet             300 W            1000 Mbps    0.3 µJ/bit
TOTAL                                              4.6 µJ/bit

The server blade’s load will vary, but we’ll assume a fully loaded server delivering 2000 pages per second.  Your page will then occupy one access of that, or 1/2000 of a second of a 40 watt machine - 0.02 watt-seconds (joules).  Now let’s see how this all adds up for your web page view.

We concluded that the average page request moves about 315,000 bytes of data.  That’s 2.52 x 10^6 bits.  At roughly 4.6 x 10^-6 joules per bit, the network energy comes to about 11.59 joules.  We add in the server energy of 0.02 joules for a total of roughly 11.61 watt-seconds (joules) for each page view.  Again, this is not streaming video (I’ll look at that in a future blog post), but a static web page access from a server.  If you now multiply that single access by 1 million every second (a medium-sized city’s population browsing the web), you get a sustained load of around 11,610 kilowatts to keep the data moving...  a single hour of that is enough energy to power roughly 13 US households for a month!  For you, viewing 100 pages in a day works out to about 323 milliwatt-hours of energy - roughly what a 2 watt night light uses in 10 minutes - an interesting thought.
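If you’d like to play with the assumptions, here is the whole estimate as a small Delphi console program.  Every constant in it is one of the guesses made above, and since it carries full precision it lands a shade under the rounded 11.6 joule figure:

program ClickEnergy;
{$APPTYPE CONSOLE}
uses SysUtils;

// Energy per bit for one piece of equipment: watts / (bits per second)
function JoulesPerBit(Watts, Mbps: Double): Double;
begin
  Result := Watts / (Mbps * 1e6);
end;

var
  JPerBit, PageBits, PageJoules: Double;
begin
  JPerBit := JoulesPerBit(300, 108)      // cable head end (DOCSIS upstream rate)
           + JoulesPerBit(900, 601)      // three OC-12 fiber hops
           + JoulesPerBit(300, 1000);    // server-farm Gigabit Ethernet
  WriteLn(Format('Network energy: %.2f microjoules per bit', [JPerBit * 1e6]));

  PageBits   := 315000 * 8;              // ~315 kB page, including overhead
  PageJoules := PageBits * JPerBit       // network share
              + 0.02;                    // one access to a loaded blade server
  WriteLn(Format('Energy per page view: %.2f joules', [PageJoules]));

  // One million such views every second is a sustained electrical load:
  WriteLn(Format('1 million views per second: %.0f kW', [PageJoules * 1e6 / 1000]));
end.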

As expected, it seems that the contribution of any individual is extremely small, but the sum of the population makes a much larger impact.  Maybe you’ve got a better estimate or have looked at this before more closely... let me know what you think!  Till next time...

July 22, 2008

The Leaky Bucket Syndrome


Residential Energy Leakage
Most people have heard about "vampire power" - the leakage power that continues to flow even when a device is turned off.  Many home entertainment systems can draw several watts while they are in stand-by mode - the equivalent of "off".  Many set-top cable boxes keep most of their circuitry on even when the unit is not "on".  This is required for the cable infrastructure to maintain communications with the box.

More curious still is the power that appliances draw when "off".  For instance, if you have a 1000 watt microwave oven with a digital display, it may actually use more energy when not cooking than in actual use.  This is due to the electronics drawing power continuously while the oven component itself is used only periodically.  If the oven is used for an average of 4 minutes per day, it will use roughly 24 kW-hrs of energy in a year.  The electronics use about 3 watts and are on 24/7/365, so that component consumes roughly 26 kW-hrs of energy - a bit more than the oven.

If you have 4 incandescent bulb night lights that each use 3 watts of power and you leave them on 24/7 (many people do), in a year that adds up to 105 kW-hrs of energy consumed.  Switching to 1 watt LED based units with daylight sensors - which keep them off for roughly half of each day - would save nearly 88 kW-hrs of energy in a year.  The LED units will last for 100,000 hours, effectively never requiring replacement.  If you consider that incandescent bulbs require replacement periodically, the payback for the LED units with improved efficiency and reliability would be under a year.
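Here is the arithmetic behind both examples as a Delphi sketch - the one assumption I’ve added is that the daylight sensors keep the LEDs off half of each day:

program VampireMath;
{$APPTYPE CONSOLE}
uses SysUtils;
const
  HoursPerYear = 24 * 365;    // 8760 hours
begin
  // Microwave: 1000 W oven run 4 minutes/day vs. a 3 W display on 24/7
  WriteLn(Format('Oven:        %.1f kWh/yr', [1000 * (4 / 60) * 365 / 1000]));
  WriteLn(Format('Electronics: %.1f kWh/yr', [3 * HoursPerYear / 1000]));
  // Night lights: four 3 W incandescents on 24/7 vs. four 1 W LEDs that a
  // daylight sensor keeps off half of each day (my assumption)
  WriteLn(Format('Incandescent night lights: %.1f kWh/yr',
                 [4 * 3 * HoursPerYear / 1000]));
  WriteLn(Format('LED units with sensors:    %.1f kWh/yr',
                 [4 * 1 * (HoursPerYear / 2) / 1000]));
end.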

OK, so saving a watt here or a watt there is like saving a penny here or a penny there - does it really make a difference?  Here’s a similar analogy: does dropping some loose change into a charity’s bucket during the holidays really make a difference?  If 10 million people dropped an average of 25 cents into buckets across the country, that would add up to 2.5 million dollars!  If every household in America dropped an average of 10 watts from their daily consumption (240 watt-hours per day), then with over 100 million households in the US that would add up to over 24 million kW-hrs of energy per day.  That’s the around-the-clock output of a 1 gigawatt power plant!

Industrial Size Problems
So if saving a few watts here or there can really add up, imagine the potential savings for industrial users, whose consumption can be thousands of times higher than a single family home’s.  It is estimated that 65% of industrial electrical use goes to powering motors.  Motors are most efficient when run at their rated loads; however, they quickly lose efficiency when run at lighter loads.  This is similar to the efficiency loss seen in switching power supplies when run at lighter than designed loads.  A Department of Energy (DoE) study of nearly 2000 industrial motors from various applications nationwide showed 44% of them were operating at less than 40% of their recommended loading.

So what can be done to improve motor efficiency?  One method is to replace the direct drive systems with VSD or Variable Speed Drive systems.  It turns out that the speed of an AC synchronous electric motor is proportional to the frequency of the AC line current.  By implementing a variable drive system, a savings of anywhere from 5% to 50% can be realized.  With the cost of electricity increasing, the equipment cost can be quickly amortized and true savings can be realized.

What about lighting?  We talked about saving energy in our homes with dimmers and CFLs last week (Living With Less - Are Dimmers Better than CFLs?).  Now imagine how much could be saved by moving to active systems that lower the light level of fluorescent lights or HID (High Intensity Discharge) units near windows when it’s sunny.  Natural light coming through glass adds to the total available lighting in buildings.  If the lighting system can monitor that amount and make subtle changes to the light from artificial sources, a tremendous amount of energy can be saved.  Take for instance a parking garage.  Normally during the day, only the interior HID units would need to be working since the exterior units are near the open spaces and receive natural daylight.  Only at night would the units near the periphery need to be turned on.  Additionally, systems that can dim the HID units could gradually increase the brightness as it gets darker.

For example, a 4 story above-ground parking garage might use approximately 280 HID units (each consuming around 215 watts of power).  Each floor would have 70 units arranged in a matrix of 7 x 10.  So on each story there would be 30 units along the periphery that could be turned off completely during daylight and 22 units in the second ring that could be dimmed.  Each story could save 215 watts on each of the outer 30 units and about 107 watts on each of the 22 units in the next ring.  Assuming roughly 12 daylight hours per day, this would save roughly 423 kW-hrs per day, or 154,350 kW-hrs a year.  At US$0.10 per kilowatt-hour, this would save the garage owner US$15,400 per year!  This does not include the savings in lamp replacement due to reduced wear - just a few thoughts on stopping the leaks in your buckets.
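The same numbers as a quick Delphi calculation, with my 12-hour daylight assumption pulled out as a constant you can change:

program GarageSavings;
{$APPTYPE CONSOLE}
uses SysUtils;
const
  Stories       = 4;
  DaylightHours = 12;    // assumed hours per day the daylight strategy applies
var
  WattsShed, kWhPerDay: Double;
begin
  // Per story: 30 perimeter HIDs switched fully off (215 W each) and
  // 22 second-ring HIDs dimmed to shed about 107 W each
  WattsShed := Stories * (30 * 215 + 22 * 107);
  kWhPerDay := WattsShed * DaylightHours / 1000;
  WriteLn(Format('Load shed during daylight: %.0f W', [WattsShed]));
  WriteLn(Format('Savings: %.0f kWh/day, %.0f kWh/yr, US$%.0f/yr at $0.10/kWh',
                 [kWhPerDay, kWhPerDay * 365, kWhPerDay * 365 * 0.10]));
end.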

Till next time...

July 14, 2008

Living With Less – Are Dimmers Better than CFLs?


Have you ever wondered if you installed a dimmer whether you’d save any energy in your home?  I have tons of networked dimmers installed throughout our house on every incandescent light bulb we have - including floor rope lights used for night time lighting.  "Why?" you might ask.  Besides being completely crazy about controlling and monitoring things around my house, it makes good sense to adapt the energy consumption of a particular light to the current requirements.  The interesting argument is, "how much do I save and are they better than CFLs?"

I’m going to propose a standard house.  One scenario will use incandescent bulbs and no dimmers, one will use compact fluorescent lights (CFLs) without dimming, and the last will use incandescent bulbs with dimmers.  We will then run a simulation of a standard day’s usage pattern to find out which one of these makes the most sense in reducing a household’s lighting energy consumption.

OK, so we need a standard house.  The US average electricity consumption for homes is around 900 kW-hrs per month.  The US Department of Energy (DoE) states that around 8% of that is consumed by lighting (on average).  The rest is HVAC, refrigeration, water heating, TVs, electric appliances, pumps, etc.  So the amount of energy consumed by the lights in a month would be around 72 kW-hrs, which provides us roughly 2.4 kW-hrs per day for lighting.

Figures 1 and 2 below show two separate scenarios based on the probable usage patterns of a family of 3 (i.e. husband, wife and teenager) during a normal weekday for a house with 16 bulbs.  The blocks indicate 30 minute periods to simplify the charts.  Figure 1 was filled to roughly 2400 W-hrs for 65W incandescent bulbs and no dimmers, which is roughly our standard US household.  By replacing all 16 bulbs with CFLs, the total energy consumption drops to 622.5 W-hrs.  This is a 75% savings in energy for the lighting.  Figure 2 shows the effect of adding dimmers to the same lights and lowering the brightness according to various tasks.  Many times lights are left on simply to navigate through a house and rarely need to be at full brightness.  Also, while watching TV, lowering the lights to a comfortable viewing level makes it easier to see.  TVs with adaptive brightness may also lower their backlight or projection brightness to adapt, saving power as well.  The calculation with the dimmers drops the energy consumption to 1837.5 W-hrs.  This is a 26% savings, which is much less than the 75% savings of the CFLs.

You can download the Excel spreadsheet I created by clicking here so you can run your own scenarios.

So in a year’s time how much money does that save?  For a normal year of days like those above (365.25 of them), 2490 W-hrs * 365.25 equals 909.5 kW-hrs.  At an average rate of $0.10 per kW-hr, the power would cost approximately US$90.  The CFLs’ power would only cost US$23 per year, and if we had installed dimmers, we’d spend US$67.  To convert the incandescent bulbs to CFL units would probably cost around US$64, which would pay for itself in the first year.  Dimmers could cost anywhere from US$4 to over US$100 each, so the payback (best case) would be roughly 2.7 years...
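Here are those payback numbers as a small Delphi calculation - bulb and dimmer prices are the same rough guesses as above:

program LightingPayback;
{$APPTYPE CONSOLE}
uses SysUtils;
const
  Rate = 0.10;                       // US$ per kWh
  Days = 365.25;
var
  Incand, CFL, Dimmed: Double;       // annual lighting cost, US$
begin
  Incand := 2490.0 * Days / 1000 * Rate;    // ~US$90
  CFL    := 622.5 * Days / 1000 * Rate;     // ~US$23
  Dimmed := 1837.5 * Days / 1000 * Rate;    // ~US$67
  WriteLn(Format('Annual cost: $%.0f incandescent, $%.0f CFL, $%.0f dimmed',
                 [Incand, CFL, Dimmed]));
  // Payback = hardware cost / annual savings vs. the incandescent base case
  WriteLn(Format('CFL payback:    %.1f years (16 bulbs at about $4 each)',
                 [64 / (Incand - CFL)]));
  WriteLn(Format('Dimmer payback: %.1f years (16 dimmers at $4, best case)',
                 [64 / (Incand - Dimmed)]));
end.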

Additional considerations would be the environmental factors of CFLs - they all contain mercury, which is extremely toxic.  Newer versions use less, but the mercury is required for the bulb to operate.  Non-toxic LED bulbs will eventually emerge and drop in price, enabling cost savings as well as an environmentally safe solution.

Got a comment?  Drop me an email or comment here on the blog.  Till next time...

Figure 1 - No Dimming, Incandescent and CFL bulbs [chart not reproduced]
Figure 2 - Dimming, incandescent bulbs [chart not reproduced]

July 07, 2008

The Whole Can Be Less than the Sum – Adaptively Reducing Power


The title of this week’s blog sounds incorrect.  Isn’t it called synergy when the action of combining pieces together results in something greater than all the parts?  This is usually true, but today I’m going to discuss increasing energy efficiency through active means - that is, we’ll discuss what happens when you combine parts together in a design and the resulting power consumption of the system is actually lower (a great deal lower) than that of all the individual components.  How can this be?  It’s actually quite simple, so let’s take a look.

Here’s the concept - put together a bunch of components that monitor some power-consuming process and continuously "adapt" to the current required conditions to lower the energy consumed.  An example could be the back-light power supply for a personal media player.  While a video is playing, the player’s power supply is fed ambient light information from a photodiode.  As the ambient light changes, the drive current to the white LED backlight is adjusted.  It rarely needs to be at full power (direct sunlight), so it adapts to the current surroundings.  In this way, the battery life is increased and the total energy consumed is reduced.

Now you may argue, "yes, but in direct sunlight the overall power consumption is actually larger due to the additional circuitry required to monitor the ambient light".  I will concede that argument is true.  However, let’s use some statistics (I love math!) to prove my point.  My theory is that if you were to create a usage pattern of the ambient lighting conditions of a large number of personal mobile devices (especially media players), you’d find most of them (around 84%) are used to watch video in much less than full sunlight.  I arrived at this fairly large number by assuming a Gaussian distribution of the ambient lighting conditions in which most people watch - from completely dark to full sunlight.  If I add 1 standard deviation on either side of the mean (68%) and then add in the remaining distribution on "the dark side" (couldn’t resist the pun - another 16%), I calculate that about 84% of the time users are not watching video in full sun (or near full sun).  Actually, most of the time users are in less than full sun for other reasons, such as preventing a sunburn or simply being comfortable - I live in Florida and I should know!

So if you accept my theory on the usage patterns, then you must agree that if a PMP is simply designed for full-sunlight viewing, it will use considerably more power than a device designed to "adapt" to the ambient conditions.  The next question is how much power is saved by being adaptive... this requires a bit more math.  First, let’s assume again a Gaussian distribution of light conditions during playback over normal usage.  We’ll use the standard Gaussian distribution formula as part of our calculations, shown in Equation 1:

Equation 1:  f(x) = (1 / (σ·√(2π))) · e^(−(x−μ)²/(2σ²))

We’ll also assume a 3σ (standard deviation) spread to set the limits for full darkness and full brightness (daylight viewing with the LEDs at 100%).  This will include 99.7% of the usage cases (with a 0.3% error).  To clarify, -3σ = 0% ambient brightness and +3σ = 100% ambient brightness (see Drawing 1 and Equation 2).  We will also assume a continuously variable drive to the LED back-lights (as opposed to a stepped approach) based on the ambient conditions, one that will never drop below 20% even in complete darkness:

Equation 2:  B(x) = 20% + 80% · (x + 3σ) / (6σ)

[Drawing 1: the Gaussian distribution of ambient light with the backlight drive level overlaid - original chart not reproduced]

By applying the backlight level function in Equation 2 to our distribution function shown in Equation 1, we can then calculate the total percentage of full backlight power used by the adaptive backlight.  Equation 3 shows the total power calculation:

Equation 3:  P_avg = ∫ B(x)·f(x) dx, evaluated from -3σ to +3σ ≈ 59.8%

This calculation is for the entire probability distribution, including very bright and full sun conditions.  If we evaluate the integral from -3σ to +1σ, which represents 84% of the population (as mentioned before), the power is reduced from 59.8% down to 47.1% of full backlight power - an incredible savings.

 
Now let’s take a look at the system impact of the back-light savings on the total run time of the device.  Assume the back-light LEDs represent 40% of the total power consumption of the device when at full brightness, and assume a run time of 2 hours with a non-adaptive back-light.  If we reduce the backlight power by 40% for all users (the 59.8% average above), the overall system run time improves to 2 hours and 23 minutes.  That’s an overall system improvement of 19%.  Now, if we reduce the backlight consumption by 53% for the 84% of the population that never watches video in bright light, the run time goes up to 2 hours and 32 minutes - a 27% improvement in performance.
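As a numeric check, here is a Delphi sketch that integrates the backlight function of Equation 2 against the normal distribution of Equation 1.  It reproduces the 59.8% and 47.1% figures (within rounding) and is easy to re-run with your own assumptions:

program BacklightSavings;
{$APPTYPE CONSOLE}
uses SysUtils;

// Equation 1: the standard normal density (mu = 0, sigma = 1)
function Pdf(x: Double): Double;
begin
  Result := Exp(-0.5 * x * x) / Sqrt(2 * Pi);
end;

// Equation 2: backlight drive level - 20% floor, 100% at +3 sigma
function Drive(x: Double): Double;
begin
  Result := 0.2 + 0.8 * (x + 3) / 6;
end;

// Equation 3: average backlight power over [a, b], by the midpoint rule
function AvgPower(a, b: Double): Double;
const
  Steps = 100000;
var
  i: Integer;
  x, dx: Double;
begin
  dx := (b - a) / Steps;
  Result := 0;
  for i := 0 to Steps - 1 do
  begin
    x := a + (i + 0.5) * dx;
    Result := Result + Drive(x) * Pdf(x) * dx;
  end;
end;

begin
  WriteLn(Format('All users (-3 to +3 sigma): %.1f percent of full power',
                 [100 * AvgPower(-3, 3)]));
  WriteLn(Format('The 84 percent in lower light (-3 to +1 sigma): %.1f percent',
                 [100 * AvgPower(-3, 1)]));
end.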

In this discussion we have not considered the power consumed by the adaptive circuit, so we’ll assume it’s negligible relative to the backlight.  In many cases it takes very little additional circuitry to perform these types of tasks.  As you can imagine, if you are watching video in almost complete darkness with this adaptive PMP, then you could probably get an additional hour of play from it.  Just some thought provoking ideas...  If you want to know more about adaptive power reduction, check out National’s PowerWise® Solutions page at http://www.national.com/powerwise.

Till next time...