

Energy Efficiency

September 14, 2011

Will Binary Communications Survive?


In my last post, "Going Faster Has a Price," I discussed the issues with transmitting bits represented by two states at faster data rates and the problems of inherent loss in the media, ISI and the many other phenomena that screw up the signal. Through careful channel design and active means, engineers can transmit and recover bits over copper cable and backplanes at ever greater rates. For example, National Semiconductor and Molex demonstrated 25+ Gbps communications over a backplane at DesignCon 2011 this year. But how long can the industry keep doing this without changing the way we define a bit on a backplane?

This problem is not a new one... as a matter of fact, it is a very old one going back to the early telecom days of modems. In the early days of circuit-switched (voice) networks, filters were placed in the system to limit the bandwidth of the signal to around 3 kHz, which was enough to reconstruct a human female voice without distortion. This was done primarily as a means to frequency-multiplex multiple telephone circuits onto a single microwave transmission between towers (before fiber-optic lines). So when people tried to move "bits," they were limited to that 3 kHz of bandwidth.
Enter the Shannon-Hartley capacity theorem:

C = B * log2(1 + S/N)

What this says is that the maximum capacity (C) of a channel to carry information is a function of the bandwidth (B) in hertz and the signal-to-noise ratio (S/N), which has no units. So as your noise goes up, your capacity to move information goes down. This plagued early engineers and limited the amount of information that could be moved through the network. Early modems used Frequency Shift Keying (FSK): one frequency was used to indicate a "0" state and another to represent a "1" state. The frequencies were chosen so that they would pass through the 3 kHz limit of the channel and could be filtered from the noise. The problem is that you couldn't switch between them faster than the bandwidth of the channel, so you were still limited by the 3 kHz... so how did they get around this? They used symbol coding.
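
To put numbers on it, here is a quick back-of-the-envelope sketch in Python; the 30 dB signal-to-noise ratio is an assumed, illustrative figure, not a measured one:

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley channel capacity in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A voice-band telephone channel: roughly 3 kHz of bandwidth.
# Assume a 30 dB signal-to-noise ratio (a factor of 1,000).
snr = 10 ** (30 / 10)                                # convert dB to a linear ratio
print(f"{shannon_capacity(3000, snr):.0f} bits/s")   # about 29,900 bits/s
```

That is in the same neighborhood as where analog voice-band modems eventually topped out.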

Symbol coding basically combines groups of bits into a single symbol. That symbol can be represented by a carrier frequency with a particular combination of amplitude and phase. This led to the development of Quadrature Phase Shift Keying (QPSK) and Quadrature Amplitude Modulation (QAM) techniques, which are still in use today in modern cable modems. A group of bits can be sent all at once instead of one bit at a time... clever! However, it comes at a cost: a fair amount of complexity relegated to the world of digital signal processing.
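
Here is a small sketch of why this helps, assuming an illustrative 2,400-baud symbol rate that fits within the voice channel; the constellation sizes are examples, not a claim about any particular modem standard:

```python
import math

def bit_rate(symbol_rate_baud, constellation_size):
    """Bit rate when each symbol carries log2(M) bits."""
    return symbol_rate_baud * math.log2(constellation_size)

# Hold the symbol rate at 2,400 baud (inside the voice channel) and grow
# the constellation -- more bits move per symbol without switching faster.
for m in (2, 4, 16, 64):
    print(f"{m:>2}-point constellation: {bit_rate(2400, m):,.0f} bits/s")
# 2 -> 2,400   4 (QPSK) -> 4,800   16-QAM -> 9,600   64-QAM -> 14,400 bits/s
```

More points in the constellation mean more bits per symbol, at the price of needing a cleaner channel and more DSP to tell the points apart.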

But what about the high-speed digital signal path between two systems in our modern Internet? Today they use scrambled Non-Return-to-Zero (NRZ) coding, which prevents DC wander and EMI issues... but it is still either a "0" or a "1" state - two levels representing the state of a bit. Will this medium ever move to other coding schemes to get more data through the channel, as the early telephone system did? It might. Intel and Broadcom are both pushing for a standard that uses multiple levels and symbol encoding for 25 Gbps and beyond. This has the added benefit that more bits can be sent in a single transmission of a symbol. This is already being done today in Ethernet for the 10/100/1000 CAT-5/6/7 standards over UTP cable, where the bandwidth of the channel is limited to around 350 MHz. Will we see this at 25 Gbps and beyond? Possibly...

The problem with this method is power. It takes DSP technology at each end of the channel to code and recover the signals, adding energy consumption to the mix. With thousands of channels in a modern data center, that power can add up really fast. NRZ techniques are very low in power consumption. National Semiconductor has produced devices that can move data at rates of 28 Gbps over copper media and backplanes at very low power consumption - something multi-level systems will find difficult to do. The industry agrees and is pushing back on the multi-level proposals.

There may come a day beyond 28 Gbps where there is no alternative but to go to multi-level symbol encoded systems, but I think that may be some time off in our future when 100 Gbps is far more common - perhaps even to your cell phone!  Till next time...

December 16, 2010

Get Active - Lowering Networking Power in Data Centers


In the past I’ve discussed topics such as virtualization and digital power to help improve data center processing efficiency. I may even have discussed additions to the 802.3 standard to idle Ethernet drops when they are not in use. However, I have not addressed the interconnect power itself, and what I found was surprising.
In medium scale data centers such as those run by financial institutions, large retailers or corporations you will find thousands of server blades and the networking equipment to connect them together.  What is interesting about this architecture is that the majority of networking traffic occurs within the data center itself.  The reason for this is partially due to the litigious nature of our society and the never ending quest for information to help us understand ourselves.  For example, simply performing an on-line stock trade - which to the user is a single transaction - will spawn dozens of additional inter-server transactions to secure, execute, verify and log the event as well as extract statistics used in market analysis.  So when millions of people are on-line day trading stocks, billions of transactions are occurring within the data centers.
This huge amount of traffic needs bandwidth, and traditionally this has been provided by fiber-optic cable. Fiber has the advantage of a very small diameter, thus providing space for air flow to cool the systems. Larger copper wire could be used for short hauls, but its diameter would block the air flow and cause overheating.
Fiber requires light (lasers) to operate, and different distances and data rates require different modes of optical transmission. To allow flexibility, equipment manufacturers have created connectors that accept a module containing the laser and receiver electronics. There are many variants, but the most accepted standards are SFP+ (Small Form-factor Pluggable), QSFP (Quad SFP), CFP ("C" or x100 Form-factor Pluggable), XFP (10 Gigabit Small Form-factor Pluggable), and CXP. These modules are actively powered and consume 400-500 milliwatts each! When you have thousands of them, the power quickly adds up. Additionally, the heat generated must be dealt with, and the modules are also very expensive.
Now what’s most interesting is that the majority of interconnects within the data center are only a few meters long! Normally, passive copper cables would work fine, but as mentioned above they would decrease the airflow at the back of the equipment. So a clever solution is to use smaller-diameter copper wire (28-30 AWG), which suffers from higher loss, and place active drivers and equalizers such as the DS64BR401 in the connectors, which fit these standard module sockets. This technique is called "Active Copper" or "Active Cable" and has many benefits in runs of less than 20 meters. The first benefit is cost - these cables can be less than half the cost of the fiber module and cable. The second is power - active cables can reduce the power consumption significantly if properly designed (< 200 mW vs. 400+ mW for fiber).
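
For a feel of what that difference means at scale, here is a rough sketch; the port count is an assumption for a mid-size data center, and the per-port figures are taken from the ranges above:

```python
# Assumed figures: 5,000 ports, optical modules at ~450 mW per port and
# active copper cable assemblies at ~200 mW per port.
ports = 5000
optical_mw = 450
active_copper_mw = 200

saved_w = ports * (optical_mw - active_copper_mw) / 1000
saved_kwh_per_year = saved_w * 24 * 365 / 1000
print(f"{saved_w:.0f} W saved, about {saved_kwh_per_year:,.0f} kWh per year")
# ~1,250 W, or roughly 11,000 kWh per year - before any cooling overhead
```
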
Fiber will always have a place for carrying data over long distances, where it excels. However, in the data center, copper wire is regaining ground with the help of active electronics and may be the medium carrying your next stock trade! Till next time...

May 14, 2010

Saving Energy Takes Getting Your Hands Dirty…


I often write about saving energy, improving efficiency, and lowering your planetary impact... so it’s time for me to come clean and show you my efforts to reduce my carbon footprint. When I first designed my home back in 2000, I wasn’t thinking energy costs were going to skyrocket. Instead I went for good efficiency, but not great efficiency... I’m paying for that now. Even though our home is built from solid poured-concrete walls with very high "R" factors, the overall open design allows large amounts of leakage through many avenues such as doors and windows. Even with basic window films, improved insulation, and better living habits, we still struggle to keep our home comfortable yet efficient in its use of electricity and propane gas...

I’ve given this much thought over the years and have recently embarked (as mentioned in my prior post, "Ignorance is Bliss") on a massive project to automate, well... just about everything that can be automated in our home. The idea is to instrument everything (or most things that use power) to understand where the energy is going and to use that information for making decisions on energy use. For example, if the TV and lights are on in the family room and the alarm system is set to "AWAY" mode, then the system should turn off the TV, adjust the thermostat to save power and turn off all the lights. In everyday life, we are so caught up in our schedules that remembering to do these simple things falls far down on our list. A "Smart Home" that knows your lifestyle can save you power if it is properly equipped - that’s where "getting your hands dirty" comes in... however, it feels like I’m trying to move an eight-lane highway without disrupting the flow of traffic - not so easy.
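
As an illustration of the kind of rule I have in mind, here is a minimal sketch in Python; the device names, data structure and setback temperature are hypothetical, not my actual home-automation software:

```python
def away_mode_rule(house):
    """If the alarm is set to AWAY, shed entertainment loads and set back
    the thermostat. Names and setpoints are made up for illustration."""
    if house["alarm_mode"] == "AWAY":
        for load in ("family_room_tv", "family_room_lights"):
            house["loads"][load] = "off"
        house["thermostat_setpoint_f"] = 85   # assumed setback temperature
    return house

state = {
    "alarm_mode": "AWAY",
    "loads": {"family_room_tv": "on", "family_room_lights": "on"},
    "thermostat_setpoint_f": 75,
}
print(away_mode_rule(state))
```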

I never thought about how isolated our home's systems were until I began this project.  The lights were originally manual switches (I was the automation) which I replaced over a period of a year with Universal Power-line Bus (UPB) smart switches that are networked together over the power line (no new wires).  The thermostats were individual manual units without setback or other communications ability.  The hot water heater is gas (propane) and is simply on or off... same with the recirculation pump.  The appliances have no power metering or timing ability and cannot communicate with anything - except a human operator. The list goes on... so you can see the complexity of trying to tie all of these disjointed systems together as well as adding the sub-metering ability.

So I’ve begun by prioritizing the largest users of power that I can control... My list is HVAC, lighting and hot water (propane). I need to know what’s on as well as the state of the home (security set to away or home, time of day, weather conditions, etc.) to make proper decisions. The lighting system is 90% complete - almost all switches are automated and networked so I can address a single unit or, using the protocol, address all units at once via "links". These links are pre-programmed to take a switch to a certain level (on, off, 20%, etc.). So using an "all off" link, I can turn all the lights in the house off with one command.

The HVAC is a bit more complicated due to multiple air handlers... they need to be coordinated so they are not fighting each other to cool or heat the home. A zoned system would have been much better (I wasn’t minding the store that day...), but we have what we have. So, replacing each thermostat with a computerized setback version was the first step (and the most reasonably priced solution). This has worked to greatly reduce our consumption in general, but there’s still money on the table. The next step is automated thermostats with communications ability. These can be networked (RS-485, UPB, etc.) so that computer software can force a condition (off, setback, etc.).

The hot water is simpler since we have a recirculating pump that can be turned off - this limits how much water is being heated, and the pump can dynamically be turned on when people are home, thus saving propane. The appliances are a different matter. There have been talks for years of appliance communication standards so that HA systems can have control (the universal remote for everything, etc.). Each appliance manufacturer had its own proprietary scheme for how it should be done, and after years of trying to come to a common standard, it fell apart. This was partially due to a lack of "need" - no one could rationalize why someone might want to control their washer, dryer, oven or dishwasher from a home computer... until oil prices shot up, sending electricity costs through the roof. I personally felt that one and I’m sure you did too. However, no one in our home has finished cooking and left the oven on (so far), so that’s pretty low on my priority list...

If you want to know more about the ancient attempts at unifying everything in the home, check out the EIA-600 CEBus standard... some really great OOP concepts, but it never flew. Also, UPB, INSTEON, and Z-Wave are all lighting (and other equipment) control standards with products available today... I’ll keep you updated as I try to finish what I’ve started, but as they say, "The blacksmith’s kitchen often has wooden utensils". Till next time...

October 13, 2009

The Energy of Information


Is the energy content of information increasing? As a technologist it is very interesting to me that in the twenty-first century our world still prints newspapers and books on paper. More amazingly, the computer printer market is booming, especially in areas such as photo printers. In the late twentieth century it was predicted that by the next millennium, paper would be obsolete as a medium for sharing information... I'm pretty sure not everyone got that memo...

So what happened? We are now in a world where the internet almost completely permeates our environment including locations so remote, only a satellite link and solar-recharged batteries will work to power the nodes (think "Antarctica").  We have advanced social networking, file storage and even complete applications that exist solely in a nebulous cloud of computers spread across a vast infrastructure... and we still print out the map to a local restaurant on plain old paper.

My theory is that everything migrates to the lowest possible energy level and paper requires very little energy to provide information - it only requires a small amount of light to shine on it so a human can observe what is stored there. In fact it requires zero energy to store the information (or read it if it's in Braille) and potentially has a long retention life of several hundred years (not so for a DVD).

So paper is not such a bad medium for sharing information - mankind has been doing that for thousands of years.  But it has one major flaw... it is hard to update.  If you manufacture encyclopedias on paper, then the second you set the type for the printing, they are obsolete.  Information does not stand still.  It is fluid as our understanding of the universe expands and history moves behind us in time. And worse, information can be useless.  Think about a billion books randomly arranged in a gigantic library without a card catalog.  Even with an index, searching millions of pages of information for knowledge may never yield fruit. 

So is the energy content of information increasing? I would suggest it is. As we accumulate more information, the energy required to store, search and display it increases - possibly exponentially with the quantity of information. The amount of new information being created daily is unfathomable, since people are sharing what they know more freely and indexing of that information has greatly improved. Additionally, information that was previously in print is now being converted to share electronically, increasing the energy that information requires. Google did some math several years ago and predicted that even with the advance of computing power as it is, it would still take roughly 300 years to index all the information on the World Wide Web... Wow! Guess how much energy that will take! Till next time...

October 01, 2009

Ignorance is Bliss... How Knowing Too Much Can Ruin Your Day


Don't watch your power meter too closely... you may lose sleep!

My name is Rick Zarr and I am a geek. OK, I’ve said it publicly for the record. I get excited over reading articles on quantum well transistors and photonic lattice light emitting diodes. Yes, I live to learn about technology and how it can be employed to improve our lives. Most of all... I like to build things - always have and always will. I have a "home project" worksheet that looks more like a broker’s stock trading analysis, complete with Gantt charts and status updates. I am a consummate data collector and home automation enthusiast - much to the dismay of my wonderful, loving wife who tolerates all the lights going out at the press of an incorrect button... but I digress. Information is power over your environment and it helps immensely with decision making processes - most of the time...

I love to instrument things so much that over the past nine years I have been equipping our home with sensors, custom software and automation to know exactly what’s going on. My goal was to improve our "living" efficiency as if our house were some giant manufacturing machine kicking out sneakers or soda bottles. I will admit it is a work in progress... engineers' minds never sleep and we are always coming up with new ways to solve problems or improve processes. So goes my "smart" house - which should be more aptly referred to as a "modestly clever" house with SLD.

I am usually intelligent in my decision processes, but when I started this project I learned that knowing the truth can sometimes be less favorable than simply being ignorant of the topic. The power consumption of our house is a classic example. Now, I knew I was a large consumer of energy. I write about the topic all the time and I’m painfully aware of the "average" consumption in America. I was on a mission to find where every milliwatt was going...

My quest started me on a crazed path to rid our home of energy waste... this lasted about 10 minutes until I realized that the rest of the family wasn’t buying into it. It’s much easier to say, "Would you mind turning off the TV when you’re done watching America’s Next Top Model?" as opposed to "Here’s a detailed report of the family’s energy consumption for the last week - we have a consumption goal of Y, and your quota is X, so please adjust your lifestyle accordingly"... my daughters would simply laugh.

Following my rant I was lovingly banished to my home office. I sat at my computer and watched the machine in action - lights going on here and there, air conditioners cycling on and off, pumps starting and stopping - and realized that to make this type of thing work, my family (including me) needed to be out of the equation.

I am now working on adding rules to the system that (here’s a stretch) "learn" what we’re doing and adjust the house accordingly. For example, if the thermostat in the bonus room is set to 75°F and there’s no one moving around, the TV is off or, better still, the security alarm is set in "away" mode, then it’s probably safe to set the thermostat back to 85°F until someone changes it (or someone enters the room). There are many other examples, which made me realize that I just added another item to my long list of things to build... this is going to take a while. Until next time...

August 30, 2009

In Pursuit of Efficient Lighting


What will relegate Edison to the history books?

As a technologist I am often asked what single change would bring about a more stable energy infrastructure - it’s not quite that simple. Our infrastructure has evolved over the past several hundred years into the distributed, fairly reliable source of electrical and chemical energy that we now enjoy. To pose this question is like asking what single change could be made in a human body to allow us to live longer - again, not so simple. If you improve one area, you possibly degrade another.

This brings up some controversy over moving to electric vehicles in an effort to reduce greenhouse gases and remove the dependency on foreign oil. If you could simply convert all carbon-fuel-based vehicles to electric, suddenly the entire electrical grid would be overwhelmed by the charging requirements. In addition, it would create a need for potentially hundreds of new power plants - many of them burning coal or natural gas and producing greenhouse gases! Not a simple solution...

But possibly there is a single thing that could make a significant difference in improving our energy consumption - at least for now. I have mentioned this before in several blogs, but it is fundamental to how modern humans live. It is lighting - the artificial light that allows us to see when the sun goes down. I cannot imagine a world without artificial light sources. However, I periodically fly from coast to coast on a "red-eye" flight, and as I look down from 25,000 feet I am constantly amazed at the amount of power being fed to tens of thousands of street lamps - all lit brightly regardless of who might be there. I even pick out the lone 500 watt mercury vapor lamp on some mountaintop location and wonder why it’s there...

According to the U.S. DoE Energy Information Administration (EIA), in 2007 the U.S. used roughly 526 billion kilowatt-hours of electricity for lighting (both commercial and residential). In the following year, a typical nuclear power plant produced roughly 12.4 billion kilowatt-hours, so the U.S. lighting needs alone require roughly the equivalent of over 42 nuclear power plants. In addition, the world's population is growing and requiring more energy, which means consumption is not just rising but accelerating.
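
The arithmetic behind that comparison, using the two figures quoted above:

```python
lighting_kwh_per_year = 526e9        # U.S. lighting, EIA figure for 2007
nuclear_plant_kwh_per_year = 12.4e9  # typical nuclear plant output quoted above

plants = lighting_kwh_per_year / nuclear_plant_kwh_per_year
print(f"Equivalent nuclear plants: {plants:.1f}")   # about 42.4
```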

You cannot simply stop using power, but you can be more efficient with what you have. As it turns out, Light Emitting Diodes, or LEDs, have been on the fast track to replace both incandescent and fluorescent bulbs. LEDs today are already more efficient than incandescent bulbs, and closing fast on fluorescent designs. One problem (among several) that is slowing adoption is the luminous intensity of an LED.

The problem stems from the way photons are created within the band-gap of the diode structure.  As electrons cross the band-gap (a forbidden energy level), they transition from a higher energy state to a lower one.  In most diodes, this transition is non-radiative (no light) and is simply converted to heat.  If the band-gap energy is high enough, a photon is created.  This is the basic operating principle of LEDs.  However, most of the photons are caught in wave modes within the semiconductor material and do not add to the light emission - only additional heat as they recombine within the material.

Well, over the last several years some very clever people at MIT started looking at regularly spaced nano-structures that act as waveguides to tunnel those lost photons out of the depths of the LED material.  These are called Photonic Crystals and have driven the luminous intensity and efficacy of LEDs to new highs.  They formed a company around the technology called Luminus to manufacture these ultra-bright LEDs.  This innovation may very well be the first step in realizing a solid-state lighting future.

Now, there are still problems with inefficiency due to a phenomenon called Stokes shift (found in white LEDs using phosphors), thermal conduction requirements (no IR emission as in incandescent bulbs), and higher cost, plus the addition of electronics required to power and monitor these devices. However, simply improving the efficiency of every light bulb in the U.S. by 50% would immediately remove 30-plus coal-burning power plants from operation. Now that's significant. Till next time...
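
P.S. - as a sanity check on that last figure, here is the math under an assumed plant size; the 1.3 GW rating and 75% capacity factor are illustrative assumptions, not DoE data:

```python
lighting_kwh_per_year = 526e9              # EIA figure used above
savings_kwh = lighting_kwh_per_year * 0.5  # every bulb 50% more efficient

# Assumed: a large coal-fired plant, roughly 1.3 GW at a 75% capacity factor.
coal_plant_kwh = 1.3e6 * 8760 * 0.75       # kW * hours/year * capacity factor
print(f"Coal plants displaced: {savings_kwh / coal_plant_kwh:.0f}")   # about 31
```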

August 24, 2009

The Energy Loss of Poor User Interface Designs


I was fueling my car the other day and the pump I was using had one of the worst user interface designs I have ever come across (the brand of pump will remain nameless... but you know who you are). As I struggled with the poor response time, lack of feedback and just overall bad programming (and this was a simple fuel pump), it made me think... what energy is lost due to users taking extra time on a system with a poor UI design?

I’m sure you know what I mean... most software is delivered with very little user testing. Of course the designer knows how to use it, but the real test is someone with absolutely no knowledge of the software. How fast can you use it and get the information you need? I see this in web designs and other information server applications. If I have to drill... and drill... and drill... to get to the level I need, I go crazy - especially if I made a bad choice somewhere along the way. It’s like the old-style "wizard" help dialogs: you get to the end, it tells you that the software is about to do unnatural things to your data and asks if you are sure you want to use the original file... and that's when you realize that twenty steps back you should have specified a new file name! That’s what I’m talking about.

Or what about unresponsive code - oh, this is really high on my list of bad software behavior. If I have to wait for a task to complete before starting another one (especially if they are unrelated), then I start counting the seconds like I’m in prison. Some software engineers didn’t get the memo that we’re in the 21st century and multi-threaded applications are not some lab curiosity! Or how about the lack of user feedback... when pushing buttons on some piece of equipment yields nothing in return? Is the equipment not working? Is the equipment busy doing something else? Is the button broken? We just don’t know, but the time it takes to complete whatever task I’m doing certainly increases.

OK, so much for the rant. But what amount of energy is lost, if any? Something certainly must be lost. Let’s examine a fictitious but real-world example - an ATM, or Automatic Teller Machine - something most everyone is familiar with. The ATM has an LCD touch screen, and the unit sleeps when no one is around to conserve energy. Only when someone walks up to it (motion sensing) do the LCD backlight and the processor wake up.

In this thought experiment, two revisions of software were released - revision A, which has UI issues, and revision B, which was revised to improve the UI. The only difference between the two releases is the user interface - everything else is the same. The ATM is in a high-traffic area (of course) so that it generates the most revenue for the bank through access fees. Revision A lacks the "beep" for user feedback and uses one thread for all the functions. Revision B has a "beep" when the screen is touched and is multi-threaded so the UI is independent from other activity.

Input speed is slower for revision A due to the single thread, and users may think they have not entered their PIN correctly due to the lack of audible (or tactile) feedback, causing them to touch twice. Revision B has a thread dedicated to the UI, so the "beep" and character representation for the touch is almost instantaneous. I would imagine that 50% of the time, revision A will cause an incorrect entry of the PIN - at least during the first attempt. The total time delay would be roughly an additional 10 seconds. The next hurdle would be entering the amount for deposits or withdrawals - probably 90% of the usage of the machine. Assume the same 10-second error recovery time when an error is made. Using these simple estimates and assuming an average of 30 users per hour, Table 1 shows the total run time the systems are up (not sleeping) for each revision (assuming only withdrawals and deposits).

Table 1 – Software revision unit run time comparison

  Operation                Rev A time (sec)                 Rev B time (sec)
  PIN entry                (7 + (10 * 0.5)) * 30 = 360      5 * 30 = 150
  Transaction selection    6 * 30 = 180                     4 * 30 = 120
  Amount entry             (10 + (10 * 0.5)) * 30 = 450     8 * 30 = 240
  Communications time      15 * 30 = 450                    15 * 30 = 450
  Accept / dispense time   5 * 30 = 150                     5 * 30 = 150
  TOTALS                   1590 (26.5 minutes)              1110 (18.5 minutes)

So, looking at this hypothetical ATM, we see that in every hour, a bad user interface which causes errors in entry may increase the run time by 8 minutes.  If the unit is sleeping the remainder of the hour (simplified), then every hour the run time increases by 8 minutes or 3.2 hours per day (assuming an average of 30 users per hour for 24 hours).  A more accurate model would take into account usage and sleep times for the entire day, but it is obvious that a poor UI will increase the run time of the machine. 

In this model, if the ATM consumes 200 watts in run mode and 20 watts in standby, the energy consumption increases by 576 watt-hours per day, or roughly an additional 210 kWh per year, simply due to errors caused by the poor user interface… makes you think about the next time you start writing code, doesn’t it? Till next time...
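
P.S. - for the curious, here is the arithmetic behind those figures, using the same numbers as the table and the power levels above:

```python
extra_run_hours_per_day = (26.5 - 18.5) / 60 * 24   # 8 extra min/hour -> 3.2 h/day
run_w, standby_w = 200, 20                          # stated power levels

extra_wh_per_day = extra_run_hours_per_day * (run_w - standby_w)
extra_kwh_per_year = extra_wh_per_day * 365 / 1000
print(f"{extra_wh_per_day:.0f} Wh/day, {extra_kwh_per_year:.0f} kWh/year")
# 576 Wh/day, about 210 kWh/year
```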

May 07, 2009

The Curse of Moore's Law


As many of you know, Gordon Moore stated in his 1965 paper that the level of integration of digital transistors would increase exponentially, doubling every two years. So far, Moore’s Law has been pretty close if not conservative. Looking at the Intel 4004 processor of 1971, it represented a transistor count of around 2,300 devices (yes, two thousand three hundred transistors). The new Intel quad-core Itanium "Tukwila" contains around two billion transistors - an increase of almost a million-fold... so that’s good news, right?

In the scheme of higher levels of performance or simply improved portability (think Apple iPod shuffle), higher transistor counts are a wonderful thing. However, there are forces at work at the atomic scale that are making problems for these amazing shrinking transistors... as the size is going down, the total energy the chips are consuming is going up.  Why? First some semiconductor fundamentals...

Figure 1 shows the scale of a typical Complementary Metal Oxide Semiconductor (CMOS) FET. Like all semiconductor processes today, MOSFETs (Metal Oxide Semiconductor Field Effect Transistors) are fabricated using a layering process. Currently, the patterning is done using deep-ultraviolet lithography due to the extremely small line spacing used to fabricate the circuitry. Basically, the UV light exposes sensitized areas of the device to either allow or prevent etching or ion implantation as the layers are built up to make the chip. These are very tiny devices - the gate length of a modern MOSFET inside a digital chip is on the order of 65 nanometers, while an average human hair is roughly 100 micrometers in diameter... over 1,500 times larger!

As the transistor’s conduction channel is made smaller, so must be the insulating oxide that sits above it and controls the charge carriers between the source and drain. As the oxide gets thinner, it becomes harder to prevent electrons from "tunneling" through the insulator to the underlying substrate, conduction channel or source-drain extensions. This phenomenon occurs in both the "ON" and "OFF" states, causing significant losses when considering large-scale integrated circuits. Also, sub-threshold leakage is a problem for digital devices. This is the current that flows between the source and drain when the gate voltage is below the "turn-on" threshold - useful for analog circuits, but a curse for digital designers.

It turns out that as we shrink transistors, all of these parasitic problems that used to be minor at larger scales are now adding up, primarily due to the higher number of transistors found in modern devices. Equation 1 shows the relationship between some of these characteristics - most notably the supply voltage "V":

P = C * V^2 * f + V * iLEAK   (Equation 1)

As the supply increases, dynamic power consumption (the first term) increases with the square of the voltage and is often dealt with first, being the larger source of power consumption. However, the static losses (the iLEAK term) are increasing as the transistor geometries shrink and the densities increase. As the frequency of operation of the device is scaled back to conserve energy, the static loss predominates - a real problem for large-scale devices with hundreds of millions of transistors.
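
A small numeric sketch of Equation 1; the capacitance, clock frequency and leakage current are illustrative values only, not figures for any real device:

```python
def cmos_power(c_farads, v_volts, f_hz, i_leak_amps):
    """Equation 1: dynamic switching power plus static leakage power."""
    dynamic = c_farads * v_volts ** 2 * f_hz
    static = v_volts * i_leak_amps
    return dynamic, static

# Illustrative values: 1 nF of switched capacitance, a 1 GHz clock and
# 100 mA of aggregate leakage across the die.
for v in (1.2, 1.0, 0.8):
    dyn, stat = cmos_power(1e-9, v, 1e9, 0.1)
    print(f"V = {v} V: dynamic = {dyn:.2f} W, static = {stat:.2f} W")
```

Dropping the supply from 1.2 V to 0.8 V cuts the dynamic term by more than half, which is exactly why adaptive voltage scaling (below) is so attractive.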

There are structural ways to minimize these losses, but as geometries continue to shrink, power will become a much more serious issue... not only as a problem for energy consumption (i.e. operating costs) or battery life of equipment that uses these devices, but for the heat that builds up inside of the chips themselves.  As heat builds up and cannot flow away from the source, the localized temperature will also increase causing aging in the device.  A chip will fail sooner if operated at high temperatures, so by lowering the power consumption, the lifespan of the device improves.

Back in 2000 my company, National Semiconductor, pioneered a cool way to lower not only the dynamic power, but the static losses as well. It’s called Adaptive Voltage Scaling (AVS) and was used primarily on portable devices to increase run time when running on batteries. However, as chips continue to grow more complex, AVS is now showing up in large-scale devices such as the Teranetics TN2022 10GBASE-T Ethernet physical layer device. AVS technology utilizes the principle that all digital chips are designed and implemented for worst-case process, timing and temperature. This is like saying the most out-of-shape person will consume one gallon of water while hiking a trail… therefore, all hikers are required to carry one gallon of water - even though most will do just fine with a single canteen full. It burdens the better members of the group with the problems of the worst constituents. So by using technology, each digital chip can be "assessed" for its "water consumption" based on how "fit" it is... that is, how well the chip’s process performs. Like humans, they will all vary around a mean or average performance level, but the majority will fall near the center of the distribution.

AVS leverages this characteristic by placing monitor circuits inside the digital core (or cores) which report the current state of the chip to an embedded controller called the APC, or Advanced Power Controller. The APC can then decide whether the supply voltage is too high or too low and communicate those adjustments to an external power supply device called the EMU, or Energy Management Unit. As temperature moves, the process ages or the digital load varies, the controller again makes updates that minimize the energy consumption of the overall system. Energy savings of 40% or greater have been observed when using this technology, especially in systems that use multiple devices.
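
To make the idea concrete, here is a conceptual sketch of such a closed loop; it is not the actual APC/EMU implementation, and the slack thresholds and voltage step are made-up illustrative values:

```python
def avs_step(supply_mv, slack_ps, target_ps=50, step_mv=10):
    """Nudge the supply toward the lowest voltage that still meets timing."""
    if slack_ps > target_ps + 20:    # comfortably fast: lower the supply
        return supply_mv - step_mv
    if slack_ps < target_ps:         # too close to failing: raise it
        return supply_mv + step_mv
    return supply_mv                 # inside the target band: hold

supply = 1100                        # starting supply in millivolts
for measured_slack_ps in (120, 95, 70, 48, 60):
    supply = avs_step(supply, measured_slack_ps)
    print(f"slack = {measured_slack_ps} ps -> supply = {supply} mV")
```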

As Moore’s Law marches on, the number of transistors on a single chip will grow to levels never before imagined... and to keep these amazing devices running, technology will also need to address the methods for powering them. As in my previous post, "The Personal Supercomputer in Your Pocket," the potential for capabilities is limited only by our imagination and our ability to manipulate our physical world. Till next time...

March 19, 2009

AC / DC Wars Continue... Part II


AC or DC? You decide...

My previous post "AC vs. DC - The Westinghouse / Edison War Continues..." has generated some very active feedback and thus compelled me to create a "part two" post on the subject. Surprisingly, there are individuals on both sides of the fence. Some are very pro-DC, others pro-AC. It’s fascinating to see the reasons for each point of view.

Some readers are promoting DC for use in HV transmission systems, where the higher voltage (as in AC transmission systems) lowers the resistive losses (see my previous blog for the math). For example, ABB (a manufacturer of HVDC equipment) makes very good statements on the advantages of HVDC transmission, such as connecting grids of differing frequency (60 Hz to 50 Hz) and lower cost over long distances (only 1 wire is required, not 3 or 4 - makes sense).

Others are promoting DC in the home, providing a universal DC bus for equipment such as PCs and other electronics. Since there would be only one DC supply, it would be much easier to back it up with batteries - possibly fed from solar panels. Why have Uninterruptible Power Supplies (UPS) on every computer, DVR or gaming console when you can have one low-voltage DC supply? Again, this makes sense.

On the other side are some very good reasons for alternating current. AC power transmission systems are extremely reliable, well understood, fairly universal and ubiquitous. We have had AC with us for over 100 years, and to universally convert our transmission systems to DC would be unrealistic (for now... in 100 years, who knows). Home and commercial systems are all engineered to work with AC, from circuit breakers to fluorescent lights (an incandescent bulb would work either way). Today’s incandescent bulb dimmers require AC power to work... a completely different technology would be required to dim them using DC power. LEDs will eventually replace CFLs and incandescent bulbs and could greatly benefit from a DC lighting bus.

I take the position that there’s a place for both. In many applications, local DC buses could provide a uniform, uninterruptible supply of power that easily integrates with local power generation (solar, wind, etc.). An AC/DC "gateway" can provide the interface between the energy producers (e.g. the power company) and the local generation capability (e.g. solar panels) and manage the power flow. The direction of flow can be from the grid to the home, or vice versa when the sun is shining and local consumption is low. This device can also provide a gateway to the power-consuming devices in the home to help manage and lower the consumption.

The future of energy management is bright and will require a rethinking of how we use power whether it is AC or DC in nature.  I believe that AC and DC power can live in harmony where each has a place that simplifies the application.  Let me know what you think!  Till next time...

February 26, 2009

AC vs. DC – The Westinghouse / Edison War Continues...


Did you know that if Edison had his way, all generation and transmission of electrical power, including the outlets in your house, would provide direct current (DC) instead of the alternating current (AC) we have today? Around the turn of the 20th century, Nikola Tesla invented alternating current generation, transmission and AC induction motors. He then licensed his patents to George Westinghouse and the war with Edison began. Edison went as far as electrocuting animals with AC power to show how lethal it was compared to direct current. The fact is ANY electrical current can be fatal. It does take more current to place your heart into fibrillation with DC than AC (around 60 milliamps for line-power AC, and 300 to 500 milliamps for DC). Above 200 milliamps, muscles contract so violently that the heart cannot pump at all... That's the reason you should always throw the circuit breaker when working on an electrical project... I do (well, most of the time).

We all know that Tesla and Westinghouse won the battle. AC power has the advantage of easily being "transformed" to higher and lower voltages, allowing transmission over vast distances. Additionally, AC power propagates down a wire with lower loss than direct current. DC power suffers seriously from Ohm's Law (R = V / I, where "R" is the resistance in ohms of the wire, "V" is the voltage drop across the length of the wire in volts, and "I" is the current flowing through the wire in amperes). To calculate the power lost due to the resistance of a wire carrying DC, you simply combine Ohm's Law with the power equation (P = I * V) and find P = I^2 * R, where P is power in watts. If you consider a transmission line carrying DC power with a current of 10,000 amperes and a transmission resistance of only 0.1 ohm, you will be losing 10 million watts of power! Also, there would be a voltage loss (a drop in voltage) of 1,000 volts from one end to the other. Depending on the length of the wire, it will either get warm, catch on fire or explode! Since it was known that the transmission resistance would always be much higher than zero ohms (unless the wires were made from superconducting materials), DC transmission was considered impractical and abandoned. But interestingly, the battle still rages on in pockets of our industry.
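
The loss example above, worked out with those same two equations:

```python
current_a = 10_000       # amperes on the line
resistance_ohm = 0.1     # total line resistance

power_loss_w = current_a ** 2 * resistance_ohm   # P = I^2 * R
voltage_drop_v = current_a * resistance_ohm      # V = I * R
print(f"{power_loss_w / 1e6:.0f} MW lost, {voltage_drop_v:.0f} V dropped")
# 10 MW of loss and a 1,000 V drop end to end
```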

There are complexities with AC power namely maintaining the correct frequency (50 or 60 Hertz depending on your country) and phase synchronization.  When generators are brought on-line, they must exactly match the phase and frequency of the "grid" otherwise "seriously bad things happen".  Consider what would occur if a 100 megawatt generator was switched into the grid with as little as 1 degree of phase difference between the generator and the grid. The phase angle of 1 degree at the zero crossing (the point where the sine wave power goes to zero before reversing) would be equal to a power loss of over 1.74 megawatts! Well, in reality the power wouldn’t be lost... it would show up somewhere you wouldn’t want it to -  like a high voltage transmission transformer (i.e. imagine a large boom followed by much panic). That’s why our transmission grids have safeguards - like high power circuit breakers the size of automobiles. There are other problems with large distributed networks that span a nation - the phase of the power will be different along the grid and there is always the issue of Power Factor.
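
And the one-degree example, worked out numerically:

```python
import math

generator_w = 100e6      # a 100 megawatt machine
phase_error_deg = 1.0    # connected 1 degree out of phase with the grid

mismatch_w = generator_w * math.sin(math.radians(phase_error_deg))
print(f"{mismatch_w / 1e6:.2f} MW")   # about 1.75 MW
```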

With all the problems associated with AC power, our modern world runs on it. What's interesting is that in most homes, the electronics (including your PC) immediately turn the AC power into high-voltage DC and then, using a switching power supply, convert the power into the lower DC voltages required by the system. Most electronic subsystems run on DC voltages that range from less than 1 volt to around 48 volts. There are losses with the conversion from one DC voltage to another, but most designs can provide about 80% efficiency, with many above 90%. To learn more about switching power supplies, check out National's Analog University tutorial on switching power. Also check out the WEBENCH tools, which allow you to design a complete switching power supply on-line.

Another reason for converting to DC is the ever-increasing need for alternative energy sources such as wind and solar. For instance, the photovoltaic panels used for solar installations supply DC power, which must then be converted to AC. As LED lighting begins to overtake traditional incandescent bulbs and CFLs, it, too, will require direct current. This again is supplied by switching power supplies that convert the power into a constant-level direct current for the LEDs.

But this begs the question, "What about our existing infrastructure?" I doubt anyone would say, "Sure, come on over and tear up my entire house and rewire it for DC power." Just the issue with appliances is enough to stall any initiative. However, a dual power system might actually have some merit. For those systems that can benefit from DC power (such as charging your electric vehicle's batteries), making a DC gateway into the home might provide some benefits. You would have one very efficient DC power supply that would convert the AC line power down to around 48 volts DC. Then, any appliance or electronics that require DC could start at the 48-volt point and easily convert it to whatever the system requires.

There is a quiet movement back to DC power for some of the reasons above, at least at the final destination. I seriously doubt that Edison will finally win a war that is pretty much over at this point, but as applications for direct current emerge in the home, a master DC home gateway may one day show up in your garage. Something to think about... till next time...