
June 2008

June 30, 2008

The Quest for Energy Independence - Improving Solar Efficiency

Every month I get a unique feeling - and not the kind I enjoy. It's similar to the feeling you get when you see blue flashing lights behind your car while driving. It's that sudden "Oh... No..." sinking feeling produced by knowing you are completely helpless. I get this feeling every time I open my electric bill, and I'm sure I'm not alone - especially these days. Oil and natural gas have both had major increases in cost, which makes me think, "How long before my electricity cost doubles?"

So, each month I look around the house wondering what I can do to be more efficient, and realize that I've already trimmed to the point of discomfort. Surely there is an answer to the rising cost of energy. A few months ago I was repeating this exercise and walked outside to clear my mind. As I stood in the sun (in Florida) and began to get uncomfortably hot, I realized how much power was raining down on my house. Maybe I could improve the insulation or install an attic fan? "If only I could turn my entire roof into a solar panel for free," I thought. I'd need a bunch of panels, since the efficiency of even the best photovoltaic cells is only around 20 to 23 percent.

Since I'm involved with energy efficiency matters at National Semiconductor, I often daydream about complete energy independence. There are many issues with disconnecting your home or office from the grid: cost is the first and efficiency is the second. To power an average home, you'd need a large number of panels due to today's solar conversion efficiencies. For example, assuming an average US household uses 900 kW-hrs per month and receives 10 hours of sunlight a day, you'd need panels that average an output of 3 kilowatts. The energy could be stored in batteries for times when there is no sun, or the home could remain connected to the grid and use it as storage for the excess electricity generated.
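As a sanity check, here is a minimal sketch of that sizing estimate in Python (the 900 kW-hr per month and 10 hours of sun per day figures are the assumptions from the paragraph above, not measured data):

# Rough sizing of a residential PV array using the post's assumptions.
monthly_usage_kwh = 900.0     # average US household consumption (kW-hr per month)
sun_hours_per_day = 10.0      # assumed hours of usable sunlight per day
days_per_month = 30.0

daily_usage_kwh = monthly_usage_kwh / days_per_month        # 30 kW-hr per day
required_output_kw = daily_usage_kwh / sun_hours_per_day    # 3 kW average output

print(f"Required average array output: {required_output_kw:.1f} kW")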

With this estimate, I had to ask the question, "Once you have enough panels for a 3 kilowatt output, will you always get 3 kilowatts?" The answer is, "probably not." Systems are usually deployed with the panels arranged in strings to increase the voltage, and the strings connected in parallel (see drawing 1) to increase the current. Think of them as batteries - when you want a higher voltage, you connect them in series; when you want more current, you connect them in parallel. The solar industry does the exact same thing. But what happens if one of your "batteries" or solar panels goes dead in the series string? Depending on the failure, the string can go dead or lose voltage, just like a string of batteries.

So, solar power panels suffer the same fate! If something as simple as shade from an adjacent building falls on a single panel - or even bird droppings land on it - the entire string of panels can be affected. So my residential installation now needs to be a 6 kilowatt system to make sure this doesn't happen - even less cost effective than my initial estimate. But what if you could fix the series issue with the panels and increase the overall efficiency of the string? Well, National Semiconductor has entered the photovoltaic business with just such a solution. It is called SolarMagic™, and the technology addresses the issue of variable output from each panel, greatly improving the efficiency of the array. A simplified sketch of the series-string effect follows.
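To see why a single shaded panel drags down the whole string, here is a deliberately simplified model of my own (this is not how SolarMagic works - it just assumes the string current is limited by the weakest panel and ignores bypass diodes and maximum power point tracking):

# Simplified series-string model: each panel adds its voltage, but the string
# current is limited by the weakest (most shaded) panel.  Illustration only.
panel_voltage = 30.0                     # volts per panel at its operating point (assumed)
panel_currents = [8.0, 8.0, 8.0, 2.0]    # amps; the last panel is heavily shaded

string_voltage = panel_voltage * len(panel_currents)
string_current = min(panel_currents)     # series connection: one current for all panels
string_power = string_voltage * string_current

ideal_power = sum(panel_voltage * current for current in panel_currents)
print(f"String power: {string_power:.0f} W out of an ideal {ideal_power:.0f} W")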

If you want to know more about SolarMagic Technology, go to:

http://www.national.com/solarmagic

There is even a cool video on solar powered golf carts - one with SolarMagic technology and the other without it - guess who wins?  While I’m waiting for my own photovoltaic system to be installed (one day...), I’ll continue my efforts at reducing my own consumption and keep that sinking feeling each month to a minimum.  Got comments on solar energy harvesting?  Then drop me a comment or an email.  Till next time...

June 23, 2008

A Case Study of Electric Vehicles

Over at National Semiconductor, we've been working on metrics for measuring and improving the performance-to-power ratios of our devices.  During some recent meetings I was talking to a colleague about electric vehicles.  The question came up, "If your car doesn't have a fuel tank, how do you measure the MPG (KPL) of the vehicle?" Does the efficiency of the charger affect your mileage? Does the battery replacement cost get included in the cost of the fuel or the cost of ownership?  Here's my take on this - and it makes me think I want a fully electric car - but maybe not yet...

So you are the proud owner of a new "Zolt" (a fictitious car company) "Wunderwatt" (a fictitious automobile) fully electric 4-door sedan. It came complete with regenerative braking. You find this "feature" a bit strange - when you take your foot off the accelerator pedal, the car immediately starts to slow down, much sooner than in a conventional car. This is due to the motor now running as a generator to charge the battery. This feature really improves the "time between charges" - a measure of how efficiently the automobile uses the energy it stores. Here are Zolt's specifications for the Wunderwatt:

Battery Capacity: 75 kW-hr (Lithium Ion)
Mileage on a single charge: 400 miles (644 kilometers) - based on non-stop driving
Time to recharge: 9 hours (110VAC), 4.5 hours (220VAC)
Battery Life: approximately 4 years (degrades with age, accelerated by temperature)
Battery Replacement Cost: approximately $5000 US (hopefully less)

To figure out how the charge relates to mileage, we must first calculate the equivalent of the MPG rating - in this case miles (or kilometers) per kW-hr of stored energy. This will be an average based on the car manufacturer's rated distance on a charge, but the real value might be lower (or higher, thanks to regenerative braking). The mileage on a single charge could be calculated by running the motor on a dynamometer at some fixed speed until the battery goes dead. This is not a real world method, so there needs to be some standard for measuring this value. To calculate the MPkW-hr you simply divide the total mileage on a single charge by the battery capacity. In the case of the Wunderwatt, it is 400 / 75 = 5.33 MPkW-hr or 644 / 75 = 8.59 KPkW-hr. So for each kilowatt-hour of storage you can drive approximately 5.33 miles (8.59 kilometers).
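In code form, this step is just one division (the range and capacity numbers are the fictitious Wunderwatt specifications above):

# Miles (and kilometers) per kW-hr of stored energy for the Wunderwatt.
range_miles = 400.0
range_km = 644.0
battery_capacity_kwh = 75.0

miles_per_kwh = range_miles / battery_capacity_kwh    # about 5.33
km_per_kwh = range_km / battery_capacity_kwh          # about 8.59
print(f"{miles_per_kwh:.2f} miles per kW-hr, {km_per_kwh:.2f} km per kW-hr")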

Next, we need to estimate the annual usage of the vehicle. Most families will average about 15000 miles (24140 kilometers) per year. This may vary, but it's the number used by many automobile leasing companies for annual usage, so it's probably pretty accurate. To calculate annual energy consumption we simply divide this figure by the MPkW-hr (KPkW-hr) number, which yields the kilowatt-hours consumed by the vehicle. In our case we get 15000 / 5.33, or roughly 2813 kilowatt-hours (the metric calculation gives the same result). The average US household uses roughly 900 kilowatt-hours per month - so the car uses roughly the same energy over a year that an average US household does in 3 months - but there are several other factors we need to consider.

The car does not have a 400 mile long extension cord; it has batteries. Batteries are not 100% efficient at charging or discharging, so we need to introduce a battery efficiency loss factor - for Lithium Ion we'll use 0.998, which is negligible, so we'll ignore it. Additionally, the charger converts line power into charge current. This process can be anywhere from 70% to 90% efficient (and possibly higher), so let's split the difference at 80% and introduce a charge efficiency loss factor of 0.8. We now divide the total energy used by the vehicle by our loss factor: 2813 / 0.8, or roughly 3516 kilowatt-hours of input energy into the car.

Now that we have an energy consumption number, the annual "fuel" cost can be calculated. We simply take the energy consumed and multiply by the cost per unit energy. For an electricity cost of $0.15 US per kW-hr, we get 3516 kW-hr * 0.15 = $527 annual electricity cost. Your amount may be higher or lower depending on the local cost of electricity. But there is still another factor - the replacement cost of the batteries, which have a finite lifespan. The question is whether to include that in the depreciation of the vehicle, the maintenance cost, or the fuel cost.

If we include the cost of replacing the batteries in the fuel cost, then we need to amortize the cost of the battery over its lifespan. Lithium Ion batteries age - the aging process has been slowed down in modern cells, but due to elevated temperatures in a vehicle (e.g. sitting in a hot parking lot every day), a pack may only last 2-3 years. For our Wunderwatt model, the lifespan is specified at 4 years with a replacement cost of $5000 US. That amortizes out to $1250 per year. So the total cost of fuel for the vehicle is roughly $1777 annually.

To compare that with an average gasoline powered sedan that gets 25 MPG (10.6 KPL), we'll need to calculate its fuel cost. Using an average cost of $4.00 US per gallon of regular (87 octane) gasoline (as of June 15th, 2008), the cost of driving the same 15000 miles would be (15000 / 25) * 4.00 = $2400 per year. This was quite a surprise to me! Even including the battery replacement cost on an annual basis, driving this mythical electric car is still cheaper than a conventional gasoline powered vehicle. Overall ownership costs should also be lower, since the electric car needs no oil changes and has far fewer moving parts, and regenerative braking (which was not considered in the electric car's mileage above) would stretch each charge even further.

But would I buy this car if it came out tomorrow? The answer is maybe... My perfect electric car would have the performance of a gasoline powered sedan, but use a battery system that does not degrade with time and outlasts the vehicle. There is ongoing research in the area of double-layer carbon nanotube supercapacitors.  See this article from Science Daily:
http://www.sciencedaily.com/releases/2005/02/050217224708.htm

The ability to densely pack carbon nanotubes inside these capacitors provides much more surface area to store charge. They effectively never wear out and can handle a virtually endless number of charge-discharge cycles. Initially these capacitors may find their way into the regenerative braking system, reabsorbing as much of the vehicle's kinetic energy as possible and reusing it for acceleration, which would reduce the size and weight of the on-board batteries.

The complete equation is shown below in case you want to enter your own values.  If you can think of any additional terms or you have an improved equation, drop me an email or comment here on the blog.  Till next time...

Equation 1 - Electric Vehicle Cost of Ownership:

CoO = (Sa x Ec x Ce) / (Sc x eff)

Where:
- CoO is Annual Cost of Ownership
- Sc is Distance traveled on a charge
- Sa is Distance traveled annually (varies by user)
- Ec is the energy capacity of the battery (usually in kW-hr)
- eff is the charger conversion efficiency
- Ce is the cost of energy (usually in $/kW-hr)
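Here is a small Python sketch of Equation 1 using the numbers from this post, with the battery amortization and the gasoline comparison from the text added for convenience (all of the values are the assumptions stated above, not real vehicle data):

# Equation 1 with the values used in this post.
Sa = 15000.0     # distance traveled annually (miles)
Sc = 400.0       # distance traveled on a charge (miles)
Ec = 75.0        # energy capacity of the battery (kW-hr)
eff = 0.8        # charger conversion efficiency
Ce = 0.15        # cost of energy ($ per kW-hr)

annual_kwh_from_wall = (Sa / (Sc / Ec)) / eff            # about 3516 kW-hr
electricity_cost = annual_kwh_from_wall * Ce             # about $527

battery_cost = 5000.0                                    # replacement cost ($)
battery_life_years = 4.0
amortized_battery = battery_cost / battery_life_years    # $1250 per year

total_fuel_cost = electricity_cost + amortized_battery   # about $1777 per year

# Comparison with a 25 MPG gasoline sedan at $4.00 per gallon
gasoline_cost = (Sa / 25.0) * 4.00                       # $2400 per year

print(f"Electric 'fuel' cost: ${total_fuel_cost:,.0f} per year")
print(f"Gasoline fuel cost:   ${gasoline_cost:,.0f} per year")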

June 15, 2008

The Case of the Missing 42 Minutes

My job requires me to travel quite often. This interferes with my ability to watch the few TV shows that I enjoy, namely "House, M.D.," which airs on the FOX network on Monday nights. To facilitate my ability to watch my shows whenever I want, I rely on a digital video recorder (DVR) type set top box (STB) to record these programs while I'm away. A while back I was on a business trip and eagerly looked forward to watching my favorite show when I returned home. I ran through the door, dropped my bags, grabbed a bottle of water, jumped over the couch and turned on my home entertainment system to watch the show. I looked on in horror as the screen menu showed only 18 minutes of "House, M.D." had recorded... How could this have happened? I was on the case immediately.

My first thought was equipment failure; however, other shows had recorded normally. I started looking through the entire list of recorded shows and found a few programs that also reported short record times - the DVR had not recorded the entire show. This was a mystery that I had to solve. Being an engineer, I knew I could figure out what was going on. My first call went to the cable company's customer service line. I told them what had happened and they said, "Oh, you probably have a bad DVR. Bring it in and we'll give you another one" - and so I did. I reprogrammed all of my shows (as well as my wife's) and went on believing the problem was solved.

Several days later, the same scenario occurred again - this time with a different show... What was causing the exact same failure? The statistical odds of losing another DVR cable box to the same failure mechanism are astronomically low - unless this was a design defect. Again, I was on the case. As before, I checked all the recorded shows and found that several of them had only recorded 15-20 minutes of the program. All the other shows were intact and would play correctly. Now I wondered if the programs that did not record properly had something in common... so I began to look more closely.

On initial inspection, none of the shows had anything in common. They were on different channels, which led me to believe that the cable was healthy (no attenuation at a specific frequency to cause data or transport errors). The only thing that even remotely looked suspicious was the time of recording - all the shows that failed to record properly were recorded during the daytime. But "House, M.D." was the exception... it aired at night (9:00 PM ET) - however, it had been recorded on the original HD DVR. On the new DVR, all the failed shows were recorded during the day. What was going on here? Was the cable system suffering from some failure during the day? Had sunspot activity increased, causing a loss of the satellite carrier, or had there been a power failure? But what about my other show, "House, M.D."? It was not on during the day! So again, I called the customer service line and they said, "Oh, you probably have a bad DVR..." Once again, I changed out the DVR and reprogrammed all of the shows.

As you can predict, once again the same symptom showed up.  This time it didn’t matter what time of the day a show was aired – it started failing to record properly all the time on almost all the programs.  This was getting worse and there was no solution in sight… until fate gave me a clue. That evening I was standing in front of the rack of equipment where the DVR is located and felt a breeze of warm air coming from the rack. As I thought about the problem it hit me – thermal failure!

I had recently upgraded from an SD or standard definition set-top box without recording capability to the HD DVR version and moved the SD box to the bedroom.  The new HD box was slightly taller than the original – most likely from the additional HD electronics and DVR functions.  Additionally, in comparing the new HD DVR with the older SD box in our bedroom, I noticed the HD DVR drew considerably more power.  I pulled the rack out of the wall and removed the HD DVR box.  On inspection, I noticed that the new design had vent holes on the top of the box where the older SD version did not (it had them on the sides and back).  My theory on the failure was that when I installed the new DVR into the rack space of the original set-top box, I cut off the air circulation through the taller chassis since the vent holes were on the top – not the sides.

For an experiment, I set the HD DVR on top of the rack in open air, leaving the equipment out in the hallway (much to the dismay of my family, who had to walk around it). Again, I watched to see if the failures continued. After two days of recording, there was not a single failure. I called customer service and asked, "Have you had any thermal problems with these new HD DVRs getting hot and not recording?" The person on the other end immediately responded, "Oh yes, these get much hotter than the older ones - you should make sure they have adequate ventilation or they will reset. We get a great deal of returns due to this problem." Thus the mystery of the missing 42 minutes of "House, M.D." was solved. Here's how it played out.

The new recorder was installed the week prior to my travels and seemed to run fine. We had only programmed a few shows, so the "recorder" portion (hard drive, associated power sections, etc.) was only being used late at night when it was cool. We started adding more shows - many of which aired at the same time, causing both tuners to be active. The increase in active circuitry generated additional heat in the system, which caused the temperature in the box to rise. If the DVR had been recording for several hours non-stop, the internal temperature would rise above the maximum operating conditions and cause the unit to fail. On failure, it would reset and stop recording, cool down, and operate normally again.

The replacement boxes showed the same symptom, only at different failure thresholds - the last box being the most sensitive. In a way, the more sensitive unit made finding the problem much easier. The daytime failures suddenly made sense as well - the air conditioning during the day is set to 82 degrees F (28 degrees C), which is much warmer than during the evening when we are home - so that particular box was failing only during the day.

So, what is the moral of this story? I’m glad you asked.  In a residential (or commercial) space, having thousands of watts available from a wall socket doesn’t mean it’s acceptable to draw as much power as you want (or can).  Power that goes into electronics comes out as heat.  Heat flows from higher to lower temperatures (a temperature differential) at a rate defined by the thermal impedance of the system.  If heat is continuously added to a closed system, temperatures will rise until a temperature differential is achieved that allows the heat to flow. In the case of my new HD DVR, my original rack space was fine for an SD unit, but inadequate for the HD DVR’s increased power consumption. 
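As a rough illustration of that last point, the steady-state temperature inside an enclosure is roughly the ambient temperature plus the dissipated power times the enclosure's thermal resistance. The numbers below are illustrative guesses of mine, not measurements of any actual set-top box:

# Very rough steady-state estimate: T_internal = T_ambient + P * theta
ambient_c = 28.0       # room at 82 F / 28 C during the day
power_w = 35.0         # assumed DVR dissipation with both tuners and the drive active

theta_open = 0.8       # deg C per watt with good airflow (assumed)
theta_racked = 1.8     # deg C per watt with the top vents blocked (assumed)

for label, theta in [("open air", theta_open), ("in the rack", theta_racked)]:
    print(f"{label}: internal temperature ~ {ambient_c + power_w * theta:.0f} C")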

Designers should always strive to lower their power consumption, since there is no guarantee that proper ventilation will be available. In the case of the new HD DVR, the designers dealt with the extra heat by simply drilling more vent holes in the top of the box - a fix that, in my rack installation, led to 3 consecutive failures and many returns to the cable company from other customers. A lower power design would have slipped right back into the rack and never missed a single episode of "House, M.D."

Got a similar story?  Drop me an email or comment here on the blog.  I’d love to hear from you!  Till next time…

June 08, 2008

If Houses Grew Like Hard Drives

Every once in a while I like to throw out some thought provoking analogies, and this week's posting is actually quite amusing (well... I think so). The question is, "What if the cost per square foot of a residence decreased at the same rate as the cost of a megabyte of storage in a typical hard drive? How big of a house could you buy today?" This also sparks another question: "If the same rate of energy efficiency improvement of hard drives (by space) applied to an average home in the U.S., how much power would you use per month?"

To answer the first question, we need some baselines. In 1983 I was designing a computer system that required a hard disk drive. We ordered a full-height 5¼" unit made by Control Data (which was compatible with the Shugart ST-506 drive - the first 5¼" drive). When it arrived in our lab, we eagerly unpacked the drive, placed it on one of the benches, and called all the other engineers into the lab to see it. We all marveled at this amazing piece of hardware, which weighed in at about 4 ½ pounds and could fit in your hand. The total unformatted capacity was 5 megabytes (a megabyte is defined in this post as 10^6 bytes for simplicity) and we all thought, "What would anyone ever do with that much storage capacity?" In a world where 16KB of main memory was a lot, this seemed unbelievable! Also, this predated integrated drive electronics (IDE), so the disk controller was a large circuit card loaded with components to read from and write to the disk. Those were the days!

At the time (and my memory is fading) the hard drive cost us roughly $1000 US. Around the same time, I (and the bank) purchased my first home - an 1100 square foot condominium for $72,000 US. If you adjust for inflation since July of 1983 (roughly 115%), today the hard drive would cost about $2150 US, which works out to a cost per megabyte of $430 US. The condo would cost roughly $155,000 US today (unadjusted for increased material costs due to supply), which works out to about $141 US per square foot. We'll use these numbers as our baselines.

Today, a 1 terabyte drive (200,000 times the capacity of my 1983 vintage 5 megabyte drive) was advertised at just under $200 US. That works out to $0.0002 US, or 2/100ths of 1 cent US, per megabyte. This equates to roughly a 2,150,000 times improvement in storage capacity per dollar. Now if housing costs had done the same thing since 1983, today residential space would cost roughly $0.000066 US per square foot. You could purchase the equivalent space of the Empire State Building for around $180 US (the Empire State Building would probably be a bit more since it's considered commercial property and is located in Manhattan).
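Here is that arithmetic in a short sketch (the 1983 prices are the inflation-adjusted baselines above; the Empire State Building floor area is my own rough assumption):

# Cost-per-megabyte scaling using the 1983 baselines from the post.
old_drive_cost = 2150.0      # $ (inflation-adjusted 1983 price)
old_drive_mb = 5.0
new_drive_cost = 200.0       # $ for a 1 TB drive
new_drive_mb = 1.0e6         # 1 TB = 10^6 MB (MB defined here as 10^6 bytes)

old_cost_per_mb = old_drive_cost / old_drive_mb      # $430 per MB
new_cost_per_mb = new_drive_cost / new_drive_mb      # $0.0002 per MB
improvement = old_cost_per_mb / new_cost_per_mb      # about 2,150,000x

housing_1983 = 141.0                                 # $ per sq ft, inflation adjusted
scaled_housing = housing_1983 / improvement          # about $0.000066 per sq ft

empire_state_sqft = 2.77e6                           # rough floor area (assumption)
print(f"Improvement: {improvement:,.0f}x")
print(f"Scaled housing cost: ${scaled_housing:.6f} per sq ft")
print(f"Empire State Building equivalent: ${scaled_housing * empire_state_sqft:,.0f}")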

The more interesting view of this comparison is stated in question 2. That is, "if the same rate of energy efficiency improvement of hard drives (by space) applied to an average home in the U.S., how much power would you use per month?" We know from the above calculations that over the last 25 years there has been a 200,000 times improvement in storage capacity along with a decrease in power consumption. The 1 terabyte drive uses roughly 38W worst case, compared to the early ST-506 drives which consumed about 27W nominally (60W worst case). For this discussion, let's use the worst case numbers. The vintage 5 megabyte drive consumed 12 watts per megabyte of storage. The modern 1 terabyte drive consumes about 38 microwatts per megabyte, which is roughly a 300,000 times improvement in energy efficiency. A typical US home uses about 900 kilowatt-hours per month. If we applied the same efficiency improvement to an average US household, the home would consume only about 3 watt-hours per month! Disregarding the voltage and frequency conversion losses, this "modern" house could run off of a single AA battery for over a month!
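And the matching sketch for the energy side (the drive power figures are the worst case numbers quoted above; the AA battery capacity is an assumed typical value):

# Energy-efficiency scaling using the worst case power figures from the post.
old_power_w = 60.0           # ST-506 class drive, worst case
old_capacity_mb = 5.0
new_power_w = 38.0           # 1 TB drive, worst case
new_capacity_mb = 1.0e6

old_w_per_mb = old_power_w / old_capacity_mb     # 12 W per MB
new_w_per_mb = new_power_w / new_capacity_mb     # 38 microwatts per MB
improvement = old_w_per_mb / new_w_per_mb        # about 316,000x (post rounds to 300,000)

home_kwh_per_month = 900.0
scaled_wh = home_kwh_per_month * 1000.0 / improvement    # about 3 W-hr per month

aa_battery_wh = 3.9          # typical alkaline AA capacity (assumption)
print(f"Scaled household usage: {scaled_wh:.1f} W-hr per month")
print(f"A single AA battery would last about {aa_battery_wh / scaled_wh:.1f} months")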

Consider that today, manufacturers and consumers are very conscious of the amount of power a product consumes - the electronics industry has actually made extremely large improvements over the last 25 years. If you wanted to build a 1 terabyte array out of the old ST-506 class units, it would draw roughly 12 megawatts worst case (not including the controller power consumption and air conditioning) and occupy 17,300 cubic feet - the equivalent of a 2000 square foot home stacked floor to ceiling. Today a terabyte of storage fits in your hand and draws less power than one of the original 5 megabyte drives - amazing!

If you’d like to comment on my post, please drop me either an email or comment on this blog – I’d love to post your story as well.  Till next time…

June 01, 2008

The Efficiency of Moving Bits - Part 2

In my last post I talked about life as a designer in the early 1980’s… it’s funny to look back and think of what we thought was “amazing technology” – some more of which I’ll discuss today.  I’m sure someone reading this could comment on engineering marvels of the 1960’s as well.

I'm going to continue my discussion on moving bits with some comparisons of bus architectures from that time and today. We'll take a look at how much has changed and how the ways we connect systems and subsystems together have evolved. I'd like to start with the S-100 bus, which was essentially the Intel 8080 processor's external bus, though not many engineers may remember that architecture. I had friends who owned (yes, owned) the IMSAI 8080 and built automated amateur radio repeaters using these machines as controllers. My first "computer" designs used the ISA bus made famous by IBM in the PC released in August of 1981 (I purchased one of those early machines and was the first geek on the block to show it off!).

The ISA or Industry Standard Architecture bus was not a standard (yet) when IBM introduced it in their PC. Actually, the fact that IBM published a technical manual that included a listing of the BIOS (Basic Input Output System) source code and complete schematics made it quite easy to clone... I still don't understand that decision. This basic 8 bit bus originally ran at 4.77 MHz (as my old machine did), but was later enhanced to support an 8 MHz clock, so at best 8 bits could be transferred over the bus every 125 ns. To actually move data into RAM or read an I/O address, several cycles of the bus were required to latch the address, allow for settling of the bus, and give the peripheral time to respond.

As processors improved in speed, buses needed to keep up. The first logical step was to simply speed up the clock (as IBM did in going from the original PC's 4.77 MHz clock to the 8 MHz clock of the AT) or use both edges of the clock. Making the bus "wider" with more communication lines (e.g. going from 8 bits to 16 bits and beyond) also improved performance. The bus wars raged for years, with buses getting ever wider and faster. As engineers, we continued to look for ways to move more data between subsystems, and this led to us bumping into the physical laws of nature... primarily skew between bus lines and waveform distortion. It was even more difficult if you wanted to extend the bus farther than the 10 inches inside the chassis, which at times seemed almost impossible.

It seemed counter-intuitive that the solution would be to move away from wider buses and serialize the data, but this is exactly what happened. As the speed of the buses increased, there was little margin between each communication line (i.e. data, address, or control signals). A tiny amount of skew would cause errors in the transfer of data. Additionally, the mechanics of connecting large numbers of lines to a circuit card added expense. Serializing the data reduced the number of mechanical connections, reduced or removed issues with skew, and had one additional benefit - it reduced the overall power consumed. This was accomplished by moving away from large voltage-swing technologies such as TTL and using devices based on LVDS. Additionally, bus designers could now have point-to-point connections to each peripheral due to the low connection count (e.g. PCI Express), which greatly improves bus bandwidth. An example is the DS92LV16 SERDES (Serializer / Deserializer) transceiver. This device simply takes 16 data lines, serializes them, embeds the clock, and transports the stream to another DS92LV16, which reconstructs the 16 data lines and clock. Since it is a transceiver, it has an upstream and a downstream path, and being LVDS based, it uses 2 wires for each path (4 wires total).

To compare old bus architectures such as ISA with serialized LVDS, we'll need to define the parameters. In my old ISA designs, I decided to use buffers to drive the data over the back-plane (rows of 62 pin edge connectors). I needed enough drive to make sure the loading caused by a full complement of circuit cards would not degrade the speed of the edges. The buffers were industry standard 74LS24x TTL level parts - one 74LS245 transceiver for the data lines and two 74LS244 buffers for the address lines. There were others for control and bus management, but we'll use only the 74LS245 for simplicity. The bus was about 10 inches long (0.254 meters) and the transceiver consumed about 250 milliwatts (with a supply current of roughly 50 mA at 5V).

If we apply the equation from last week's blog post to calculate energy per bit-meter for the old ISA bus, we get 15.4 nJ/bit-meter. This uses the bus clock rate, not the actual bus transfer rate. For the serialized bus running full duplex to the peripheral, we get 1.6 nJ/bit-meter over the same circuit card - an almost 10 to 1 improvement in data transfer energy efficiency. These calculations are for a single end of the connection. The ISA example does not take into account all the supporting bus electronics, since it was a shared bus. The serialized bus is much simpler since it connects directly to a single peripheral and can have multiple peripherals communicating with the host at the same time.
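Here is the calculation spelled out (the ISA numbers are from the paragraph above; the SERDES power and clock figures are my own assumptions chosen to roughly reproduce the 1.6 nJ/bit-meter result, not datasheet values):

# Energy per bit-meter: E = P / (bit_rate * distance)
def energy_per_bit_meter(power_w, bit_rate_bps, distance_m):
    return power_w / (bit_rate_bps * distance_m)

distance_m = 0.254    # roughly a 10 inch backplane / circuit card

# Old ISA bus: one 74LS245 at about 250 mW, 8 bits per clock at 8 MHz
# (the bus clock rate, not the actual transfer rate, as in the post).
isa = energy_per_bit_meter(0.25, 8.0e6 * 8, distance_m)

# Serialized LVDS link: assumed values - about 0.52 W moving 16 bits
# per 80 MHz clock in one direction.
serdes = energy_per_bit_meter(0.52, 80.0e6 * 16, distance_m)

print(f"ISA:    {isa * 1e9:.1f} nJ/bit-meter")     # about 15.4
print(f"SERDES: {serdes * 1e9:.1f} nJ/bit-meter")  # about 1.6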

If you look deeper, the older parallel bus architectures are even less efficient at moving data than stated above. If you think about extending the bus to an external chassis (i.e. 1 meter or more), the problems really pile up for parallel buses. Serialized buses simplify everything from the connector to the cable (with fewer wires) and even reduce the number of connections to processors or FPGAs, and they are far more efficient at saving energy when moving data.

Let me know your stories or opinions by commenting on this blog or dropping me an email.  I’d love to hear from you.  Until next week…