
January 19, 2010

The End Of The Carbon Age

In studies of the progress of mankind, periods of time have been given specific names to identify the level of our development.  For instance, the Stone Age, Bronze Age and Iron Age describe the materials that were in wide use for tools and weapons.  Today, we have similar names such as the Steam or Industrial Age (roughly 1770 to 1914) and, more recently, the Information Age (roughly the 1980s to the present).

However, a broader category began with the controlled use of fire - the burning of wood, oils and other natural fuels: the Carbon Age (a term coined by Eric Roston in his book "The Carbon Age," where he traces the element all the way back to the Big Bang).  In my view, this was the first time that man-made, controlled sources of energy began releasing carbon gases into the atmosphere.  It covers all of the previously mentioned ages as well as today's modern age of technology; civilization has simply kept finding new carbon-based fuels to consume to drive our technology.

Estimates vary, but evidence points to mankind using controlled fire between 100,000 and 200,000 years ago (and possibly earlier).  The use of carbon-based fuels such as petroleum has increased exponentially with inventions such as the internal combustion engine and the gas turbine.  Coal has been used widely throughout history as well, and its consumption surged at the start of the industrial age.  Coal, along with natural gas, drives many of our modern power plants and contributes large amounts of CO2 to our atmosphere.  So it is fairly safe to say the carbon age started a long time ago and is in full swing today...

However, it is quite possible we are living at or near the end of the carbon age - at least for use as an energy source.  With the world population growing at unbelievable rates and the demand for energy to drive technology increasing, we are simply running out of naturally occurring carbon-based resources.  Again, estimates vary depending on how much of these resources remains and how quickly we move away from them, but most seem to fall within the 100-year mark (some much lower, down to mere decades).

Increased global political pressure and unstable sources of oil have focused a great deal of research on alternative energy sources.  They have also driven the development of electric vehicles and other sources of electric power such as solar, wind and geothermal.  Nuclear power has long promised clean, virtually unlimited electricity; however, current designs produce a great deal of hazardous waste along with by-products usable in weapons of mass destruction.

Besides natural sources of energy such as solar and wind, there are at least two nuclear approaches available to meet the growing energy demand.  One is nuclear fusion (not fission, which traditional nuclear plants use), where isotopes of hydrogen are fused together to produce helium isotopes and a proton or neutron, along with tremendous energy.  Current research is hampered by the difficulty of producing and sustaining the temperatures required for controlled fusion.  Once fusion power is commercially feasible, power plants based on the technology will begin replacing both fission-based nuclear plants and traditional coal- and natural-gas-burning plants. But this technology has some time to go...

The second is a lesser-known nuclear technology based on thorium.  The latest designs incorporate thorium dissolved in fluoride salts; these are called Liquid Fluoride Thorium Reactors or LFTRs (pronounced "lifters").  They are straightforward designs with many inherent safety features.  The liquid salt carrying the fuel expands as it gets hot, limiting the fission process; as the reaction slows, the salt cools and contracts, once again increasing the rate of reaction - the system is self-regulating.  Designs also incorporate a dump tank sealed by a salt plug that melts if the core gets too hot, automatically draining the core material and shutting off the reaction.

In researching thorium-based reactors, it quickly became evident that the history of nuclear reactor design was driven by the DOD to promote the production of plutonium for weapons programs.  Since thorium-based reactors burn most of their fuel, they produce very little waste and little or no weapons-grade material… adoption of such technology could lead to the end of the carbon age, where we run everything on electricity. Lawrence Livermore Laboratories and others are even designing small reactors that can fit on the back of a rail car and be safely transported anywhere.  Widespread adoption of this technology could solve many of the nuclear industry's current problems with waste, but more importantly the problems that come with carbon-based fuels and their use.

For more information on thorium LFTR technology, see the Thorium Energy Alliance.  It was said at the dawn of nuclear power generation that electricity would be produced so cheaply that there would be no need to meter it... maybe that day is upon us!

Till next time...

October 13, 2009

The Energy of Information

Is the energy content of information increasing? As a technologist, it is very interesting to me that in the twenty-first century our world still prints newspapers and books on paper.  More amazingly, the computer printer market is booming, especially in areas such as photo printers.  In the late twentieth century it was predicted that by the next millennium paper would be obsolete as a medium for sharing information... I'm pretty sure not everyone got that memo...

So what happened? We are now in a world where the internet almost completely permeates our environment, including locations so remote that only a satellite link and solar-recharged batteries can power the nodes (think "Antarctica").  We have advanced social networking, file storage and even complete applications that exist solely in a nebulous cloud of computers spread across a vast infrastructure... and we still print out the map to a local restaurant on plain old paper.

My theory is that everything migrates to the lowest possible energy level, and paper requires very little energy to provide information - just a small amount of light shining on it so a human can observe what is stored there. In fact, it requires zero energy to store the information (or to read it, if it's in Braille), and it potentially has a retention life of several hundred years (not so for a DVD).

So paper is not such a bad medium for sharing information - mankind has been doing that for thousands of years.  But it has one major flaw... it is hard to update.  If you manufacture encyclopedias on paper, then the second you set the type for printing, they are obsolete.  Information does not stand still; it is fluid as our understanding of the universe expands and history recedes behind us.  And worse, information can be useless.  Think of a billion books randomly arranged in a gigantic library with no card catalog.  Even with an index, searching millions of pages of information for knowledge may never bear fruit.

So is the energy content of information increasing? I would suggest it is.  As we accumulate more information, the energy required to store, search and display it increases - possibly exponentially with the quantity of information.  The amount of new information being created daily is unfathomable, since people are sharing what they know more freely and indexing of that information has greatly improved.  Additionally, information that was previously in print is now being converted for electronic sharing, increasing the energy that information requires.  Google did some math several years ago and predicted that, even with computing power advancing as it is, it would still take roughly 300 years to index all the information on the world-wide-web... Wow!  Guess how much energy that will take!  Till next time...

October 01, 2009

Ignorance is Bliss... How Knowing Too Much Can Ruin Your Day

Don't watch your power meter too closely... you may lose sleep!

My name is Rick Zarr and I am a geek.  OK, I've said it publicly for the record.  I get excited reading articles on quantum well transistors and photonic lattice light emitting diodes.  Yes, I live to learn about technology and how it can be employed to improve our lives. Most of all... I like to build things - always have and always will. I have a "home project" worksheet that looks more like a broker's stock trading analysis, complete with Gantt charts and status updates. I am a consummate data collector and home automation enthusiast - much to the dismay of my wonderful, loving wife, who tolerates all the lights going out at the press of an incorrect button... but I digress. Information is power over your environment, and it helps immensely with decision making - most of the time...

I love to instrument things so much that over the past nine years I have been equipping our home with sensors, custom software and automation to know exactly what's going on.  My goal was to improve our "living" efficiency as if our house were some giant manufacturing machine kicking out sneakers or soda bottles.  I will admit it is a work in progress... engineers' minds never sleep, and we are always coming up with new ways to solve problems or improve processes.  So goes my "smart" house - which should be more aptly referred to as a "modestly clever" house with SLD.

I am usually intelligent in my decision processes, but when I started this project I learned that knowing the truth can sometimes be less comfortable than simply being ignorant of it.  The power consumption of our house is a classic example.  Now, I knew I was a large consumer of energy; I write about the topic all the time and I'm painfully aware of the "average" consumption in America.  I was on a mission to find out where every milliwatt was going...

My quest started me on a crazed path to rid our home of energy waste... this lasted about 10 minutes, until I realized that the rest of the family wasn't buying into it.  It's much easier to say, "Would you mind turning off the TV when you're done watching America's Next Top Model?" as opposed to "Here's a detailed report of the family's energy consumption for the last week - we have a consumption goal of Y, and your quota is X, so please adjust your lifestyle accordingly"... my daughters would simply laugh.

Following my rant I was lovingly banished to my home office.  I sat at my computer and watched the machine in action - lights going on here and there, air conditioners cycling on and off, pumps starting and stopping - and realized that to make this sort of thing work, my family (me included) needed to be taken out of the equation.

I am now working on adding rules to the system that (here's a stretch) "learn" what we're doing and adjust the house accordingly.  For example, if the thermostat in the bonus room is set to 75°F and there's no one moving around, the TV is off or, even more telling, the security alarm is set to "away" mode, then it's probably safe to set the thermostat back to 85°F until someone changes it (or enters the room).  There are many other examples, which made me realize that I just added another item to my long list of things to build... this is going to take a while.  Until next time...

August 30, 2009

In Pursuit of Efficient Lighting

What will relegate Edison to the history books? As a technologist, I am often asked what single change would bring about a more stable energy infrastructure - it's not quite that simple.  Our infrastructure has evolved over the past several hundred years into the distributed, fairly reliable source of electrical and chemical energy that we now enjoy.  Posing this question is like asking what single change could be made in the human body to allow us to live longer - again, not so simple.  If you improve one area, you may well degrade another.

This brings up some controversy over moving to electric vehicles in an effort to reduce greenhouse gases and remove the dependency on foreign oil.  If you could simply convert all carbon-fueled vehicles to electric overnight, the entire electrical grid would be overwhelmed by the charging requirements.  It would also create a need for potentially hundreds of new power plants - many of them burning coal or natural gas and producing greenhouse gases!  Not a simple solution...

But possibly there is a single thing that could make a significant difference in our energy consumption - at least for now.  I have mentioned it before in several blogs, but it is fundamental to how modern humans live.  It is lighting - the artificial light that allows us to see when the sun goes down.  I cannot imagine a world without artificial light sources.  However, I periodically fly from coast to coast on a "red-eye" flight, and as I look down from 25,000 feet I am constantly amazed at the amount of power being fed to tens of thousands of street lamps - all lit brightly regardless of whether anyone is there.  I even pick out the lone 500 watt mercury vapor lamp on some mountain-top location and wonder why it's there...

According to the U.S. DOE's Energy Information Administration (EIA), in 2007 the U.S. used roughly 526 billion kilowatt-hours of electricity for lighting (commercial and residential combined). In the following year a typical nuclear power plant produced roughly 12.4 billion kilowatt-hours, so U.S. lighting alone requires the equivalent of over 42 nuclear power plants.  On top of that, the world population keeps growing and requiring more energy, which means consumption is not just rising - the rate at which it rises is increasing.
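Those two figures make for an easy sanity check; here is the arithmetic as a quick Python sketch (both inputs come straight from the EIA numbers above, nothing else is assumed):

```python
# Rough check of the "over 42 nuclear plants just for lighting" figure.
us_lighting_kwh_2007 = 526e9        # kWh used for U.S. lighting in 2007 (EIA)
typical_nuke_kwh_per_year = 12.4e9  # kWh from a typical nuclear plant in a year

equivalent_plants = us_lighting_kwh_2007 / typical_nuke_kwh_per_year
print(f"Lighting alone is equivalent to about {equivalent_plants:.0f} nuclear plants")
# -> about 42 plants, matching the figure in the text
```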

You cannot simply stop using power, but you can be more efficient with what you have.  As it turns out, Light Emitting Diodes or LEDs have been on the fast track to replace both incandescent and fluorescent bulbs.  LEDs today are already more efficient than incandescent bulbs, and they are closing fast on fluorescent designs.  One problem (among several) that is slowing adoption is the luminous intensity of an LED.

The problem stems from the way photons are created within the band-gap of the diode structure.  As electrons cross the band-gap (a forbidden energy region), they transition from a higher energy state to a lower one.  In most diodes this transition is non-radiative - no light, just heat.  In LED materials the transition is radiative and releases a photon whose energy (and therefore color) is set by the band-gap; this is the basic operating principle of an LED.  However, many of those photons are trapped in optical modes within the semiconductor material and never escape, adding only heat as they are reabsorbed - they do not contribute to the light emission.
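Since the band-gap sets the photon's energy (and thus the color), a one-line conversion shows the relationship; the 2.7 eV value in this little Python sketch is just an illustrative figure for a blue-ish LED, not a number from any datasheet:

```python
# Photon wavelength from band-gap energy: lambda = h * c / E
PLANCK = 6.626e-34      # J*s
C_LIGHT = 3.0e8         # m/s
EV = 1.602e-19          # joules per electron-volt

def wavelength_nm(bandgap_ev):
    """Wavelength (in nm) of a photon carrying the full band-gap energy."""
    return PLANCK * C_LIGHT / (bandgap_ev * EV) * 1e9

print(f"{wavelength_nm(2.7):.0f} nm")   # ~460 nm: blue light
```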

Well, over the last several years some very clever people at MIT started looking at regularly spaced nano-structures that act as waveguides to channel those lost photons out of the depths of the LED material.  These are called photonic crystals, and they have driven the luminous intensity and efficacy of LEDs to new highs.  The researchers formed a company around the technology, Luminus, to manufacture these ultra-bright LEDs.  This innovation may very well be the first step in realizing a solid-state lighting future.

There are still inefficiencies due to a phenomenon called the Stokes shift (found in white LEDs that use phosphors), thermal conduction requirements (LEDs do not shed heat as infrared radiation the way incandescent bulbs do), higher cost, and the additional electronics required to power and monitor these devices.  However, simply improving the efficiency of every light bulb in the U.S. by 50% would immediately remove 30-plus coal-burning power plants from operation.  Now that's significant. Till next time...

August 24, 2009

The Energy Loss of Poor User Interface Designs

I was fueling my car the other day, and the pump I was using had one of the worst user interface designs I have ever come across (the brand of pump will remain nameless... but you know who you are).  As I struggled with the poor response time, lack of feedback and just plain bad programming (on a simple fuel pump, no less), it made me think... how much energy is lost when users take extra time to operate a system with a poor UI design?

I'm sure you know what I mean... most software is delivered with very little user testing.  Of course the designer knows how to use it, but the real test is someone with absolutely no knowledge of the software: how fast can they use it and get the information they need?  I see this in web designs and other information-serving applications.  If I have to drill... and drill... and drill... to get to the level I need, I go crazy - especially if I made a bad choice somewhere along the way.  It's like the old-style "wizard" dialogs: you get to the end, it warns you that the software is about to do unnatural things to your data and asks whether you are sure you want to use the original file... and that's when you realize that twenty steps back you should have specified a new file name!  That's what I'm talking about.

Or what about unresponsive code - oh, this is really high on my list of bad software behavior.  If I have to wait for a task to complete before starting another one (especially if they are unrelated), then I start counting the seconds like I'm in prison.  Some software engineers didn't get the memo that we're in the 21st century and multi-threaded applications are not some lab curiosity!  Or how about the lack of user feedback... when pushing buttons on some piece of equipment yields nothing in return?  Is the equipment not working? Is it busy doing something else?  Is the button broken?  We just don't know, but the time it takes to complete whatever task I'm doing certainly increases.

OK, so much for the rant.  But how much energy is lost, if any?  Something certainly must be.  Let's examine a fictitious but realistic example - an ATM, or Automated Teller Machine - something most everyone is familiar with.  The ATM has an LCD touch screen, and the unit sleeps when no one is around in order to conserve energy.  Only when someone walks up to it (motion sensing) do the LCD backlight and processor wake up.

In this thought experiment, two revisions of the software were released: revision A, which has UI issues, and revision B, which fixes them.  The only difference between the two releases is the user interface - everything else is the same.  The ATM is in a high-traffic area (of course), so it generates the most revenue for the bank through access fees.  Revision A lacks the "beep" for user feedback and uses one thread for all functions.  Revision B beeps when the screen is touched and is multi-threaded, so the UI is independent of other activity.
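To make the single-thread versus multi-thread difference concrete, here is a minimal Python sketch in the spirit of revision B (the structure, names and timings are my own illustration, not the ATM's actual code): a dedicated UI thread acknowledges every touch immediately while a slow back-end task runs on another thread.

```python
# Revision-B style split: the UI thread beeps on every touch,
# even while a slow transaction is in flight on another thread.
import threading
import time
import queue

touches = queue.Queue()

def ui_thread():
    while True:
        key = touches.get()
        if key is None:
            break
        print(f"beep ({key})")           # instant feedback, independent of the back end

def back_end():
    time.sleep(3)                        # stand-in for host communications
    print("transaction approved")

ui = threading.Thread(target=ui_thread)
worker = threading.Thread(target=back_end)
ui.start()
worker.start()

for digit in "1234":                     # PIN entry still gets immediate beeps
    touches.put(digit)
    time.sleep(0.2)

worker.join()
touches.put(None)                        # shut down the UI thread
ui.join()
```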

Input is slower on revision A due to the single thread, and users may think they have not entered their PIN correctly because of the missing audible (or tactile) feedback, causing them to touch a key twice.  Revision B has a thread dedicated to the UI, so the beep and the on-screen character for each touch are almost instantaneous.  I would imagine that 50% of the time revision A will cause an incorrect PIN entry, at least on the first attempt, adding roughly 10 seconds.  The next hurdle is entering the amount for a deposit or withdrawal - probably 90% of the machine's usage - with the same 10-second recovery time when an error is made.  Using these simple estimates and assuming an average of 30 users per hour, Table 1 shows the total time each revision is running (not sleeping), assuming only withdrawals and deposits.

Table 1 – Software revision unit run time comparison

                            Rev A time (sec)                 Rev B time (sec)
  PIN entry                 (7 + (10 * 0.5)) * 30 = 360      5 * 30 = 150
  Transaction selection     6 * 30 = 180                     4 * 30 = 120
  Amount entry              (10 + (10 * 0.5)) * 30 = 450     8 * 30 = 240
  Communications time       15 * 30 = 450                    15 * 30 = 450
  Accept / dispense time    5 * 30 = 150                     5 * 30 = 150
  TOTALS                    1590 (26.5 minutes)              1110 (18.5 minutes)

So, looking at this hypothetical ATM, we see that every hour a bad user interface that causes entry errors may increase the run time by 8 minutes.  If the unit sleeps for the remainder of the hour (simplified), then the run time increases by 8 minutes every hour, or 3.2 hours per day (assuming an average of 30 users per hour around the clock).  A more accurate model would account for actual usage and sleep patterns over the day, but it is obvious that a poor UI increases the run time of the machine.
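That 8-minutes-per-hour delta is easy to verify; here is the run-time arithmetic as a quick Python check, using only the totals from Table 1 and the simplifying assumptions above:

```python
# Run-time penalty of revision A, from the Table 1 totals.
rev_a_sec_per_hour = 1590
rev_b_sec_per_hour = 1110

extra_sec_per_hour = rev_a_sec_per_hour - rev_b_sec_per_hour   # 480 s = 8 minutes
extra_hours_per_day = extra_sec_per_hour / 3600 * 24           # 24 busy hours assumed

print(f"Extra run time: {extra_sec_per_hour / 60:.0f} minutes per hour")
print(f"Extra run time: {extra_hours_per_day:.1f} hours per day")   # 3.2 h/day
```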

In this model, if the ATM consumes 200 watts in run mode and 20 watts in standby, the energy consumption increases by 576 watt-hours per day, or an additional 210.4 kW-hrs per year, simply due to errors caused by the poor user interface… makes you think about the next time you start writing code, doesn't it? Till next time...

August 03, 2009

Lower Power - It's All in the Architecture...

Better Architecture = Lower Power

I was wandering around the show floor of the Design Automation Conference (DAC2009) in San Francisco last week, talking to various vendors of EDA software and other interesting semiconductor design tools. I was amazed at how many vendors had that special tool for "lowering power by up to 20%" - just press the button and our tool will magically reduce your system power.  Oh, if only it were true - the problem is much deeper and more complex than it appears.

We now live in a world where machines build machines - yes... it's true... The Matrix is real - well, the "machines building machines" part anyway.  Ask a system-on-a-chip (SoC) designer to tell you exactly how their massive billion-gate device works.  Not the blocks, inputs or outputs ("here we have a 128-bit bus for memory, and over here we have five megabytes of static RAM, and here..."), but the real gates of the design at the transistor level.  This is like asking a software programmer to explain the machine code spit out by a C++ compiler - possible, but unlikely.

These tools optimize and streamline the design based on embedded rules that are under the user's control.  They have limits, though: no tool can improve an engineer's bad design (software or otherwise).  So if engineers want to build lower-power SoCs, they need to use the most powerful tool available - the one between their ears.

This is now a time when shrinking process geometries are causing new problems that are unlikely to go away with the next generation of tools.  When we were shrinking from 0.5 micron gate lengths to 0.35 micron, problems with leakage and other structural artifacts were much easier to deal with.  Today, 45 nanometer gate lengths (0.045 micron - over 10x smaller) bring an entirely new set of problems.  First, there are far more transistors than when we were building chips on a 0.5 micron process.  Second, they are running much faster.  And third, they leak current like a torpedoed ship leaks water - over three orders of magnitude more than a typical 250 nm process (at 30 degrees C, roughly 3000 nA/um for a typical 45 nm process vs. 1 nA/um at 250 nm).

So what’s a designer to do to decrease power consumption?  Engineers need to start thinking of new ways to architect their designs. Here are some ideas:

1. Partition the system and provide isolation so that sections can be turned off.  Today’s tools allow for this, but very often engineers do not use this technique.

2. Don't be afraid to provide multiple voltage islands.  Yes, it's scary, but foundries provide models of their processes at different operating points.  If you don't need to go warp 9.9, you don't need your antimatter reactors running at full power... close your timing at the lower voltage and level-shift that section.  Remember, dynamic power varies with the square of the supply voltage, and leakage varies roughly linearly with it as well...

3. Gate or dynamically scale your clocks.  If the system doesn't need to be running at full speed, gate or slow down the clocks.  This is an architectural issue, and some systems cannot use this method (e.g. video accelerators).  However, with some re-thinking there may be areas that can slow down under lighter loading or other conditions that do not require full performance.

4. Dynamically scale the supply voltage.  This can be done in combination with clock scaling, using either open-loop, table-based methods such as Dynamic Voltage Scaling (DVS) or more advanced closed-loop techniques such as Adaptive Voltage Scaling (AVS), which continuously monitors the silicon for adequate performance and automatically adjusts the supply voltage to maintain timing closure.  (A rough numerical sketch of why items 2 through 4 pay off follows this list.)
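As a rough illustration of why voltage and clock scaling pay off, here is a toy Python model of the usual CMOS power relationship (a dynamic term proportional to C·V²·f plus a leakage term proportional to V); every number in it is invented for the example and does not describe any real process:

```python
# Toy CMOS power model: P = C * V^2 * f + V * I_leak
def power_watts(c_farads, v_volts, f_hz, i_leak_amps):
    dynamic = c_farads * v_volts**2 * f_hz   # switching power
    static = v_volts * i_leak_amps           # leakage power
    return dynamic + static

C, I_LEAK = 2e-9, 0.05                       # illustrative capacitance and leakage

full_speed = power_watts(C, 1.1, 1.0e9, I_LEAK)   # "warp 9.9"
scaled     = power_watts(C, 0.9, 0.5e9, I_LEAK)   # lower voltage, half the clock

print(f"Full speed: {full_speed:.2f} W")
print(f"Scaled:     {scaled:.2f} W  ({100 * (1 - scaled / full_speed):.0f}% lower)")
```

With these made-up numbers the scaled operating point cuts power by roughly two-thirds, which is why dropping voltage (a squared term) buys so much more than dropping frequency alone.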

Any of these techniques will improve your power performance.  So employ them - if not to save energy for the planet or reduce the system's carbon footprint, then to save on heat sinks or improve your mean-time-to-failure numbers (lower junction temperature means longer life).  At this point, greater gains can be made by improving the architecture - at least until we make the move to quantum well transistors! Long live Moore's Law...

Till next time...

July 22, 2009

Engineer This!

Turn those ideas into reality! So there’s an energy crisis... I don’t know about you, but when someone tells me they’re in a "crisis" state, it usually involves 911, paramedics, attorneys or counselors.  Somehow I don’t get the same feeling about our energy "crisis" when I’m driving to work and I’m stuck in traffic, or when I’m flying a "red eye" home from the west coast and I’m looking out the window at thousands of square miles of street lights blazing.

A crisis is upon us, but we seem to be going about our normal lives not really worried about unplugging that phone charger or adjusting the HVAC to save some energy.  Let me propose a future that could be only a few years away and that, without a change in behavior, could produce a true crisis.  Here's the scenario:

It's 2015, and the economies of both China and India are booming again.  People who had never owned even a motor scooter are now buying the latest Tata Motors Nano and other sub-subcompacts - and at over 60 miles per gallon, they are economical to own, since gasoline is now $5.50US per gallon in the US and over $15.00US per gallon almost everywhere else.  There are now over 1 billion vehicles in operation worldwide, and the oil-consuming nations do not have the capacity to refine enough crude oil into gasoline and diesel fuel, driving the cost through the roof.

This high cost has rippled into everyday life, driving the price of other fuels such as natural gas to new highs.  Modern gas-burning power plants are paying excessive prices for supply and passing that on to consumers, driving the price of a kilowatt-hour over $0.35.  Electric bills that used to run around $150 per month are now over $400, and cities are turning off their street and building lights to conserve power.  The world is now in an energy crisis...

This scenario is not too far-fetched if you consider that the US has not built a new oil refinery since 1976... If people start migrating to electric vehicles, which need to be plugged into the grid, increased demand will be placed on power plants, once again driving up costs.  It is simple economics... when demand for a commodity grows faster than the supply, the price goes up.

So here's my call to action for our next generation of engineers about to enter the workforce (or those already in it) - do something about it! There are several key technologies that still need to be developed, and those who succeed will not only be heroes of our age (the carbon age), but will surely reap the financial benefits as well.  Below, in order, find my list of areas that need to be developed and commercialized to reduce our energy consumption.

1. Inexpensive, safe and reliable electrical energy storage. This could include a new generation of batteries, but better still... a solid-state device such as a mega-capacitor that never wears out.  Battery technology has not evolved much since the 1950s... abundant energy storage can drive the adoption of electric vehicles and energy harvesting (e.g. solar).
2. Smarter Everything.  If everyday stuff had more brains and could communicate with a common protocol (language) to everything else (scary Matrix-esque thought...), our "stuff" could work together to conserve energy.  Examples could include smart appliances and equipment (that know the price of energy and act accordingly - see the trivial sketch after this list), smarter cars (that know the price of fuel and tolls, shortest distances, driver's habits, etc.), smarter power grids that can tell consumers what's going on, street lights that dim when no motion is detected in the area, and more. Even coffee makers that only brew one cup made to order by reading an RFID tag on the bottom of a user's mug would save energy (not to mention coffee - I want one of those, by the way...).
3. Better use of existing technology.  For example, laser printers that fuse the toner using high-power LEDs instead of an old-fashioned quartz tube that must heat a roller (and keep it heated even when no one is printing anything...). I see power savings (and waste) everywhere I look.  Take a look around and get inspired.
4. More efficient systems.  For example, the basic heat-pump air conditioning or refrigeration system hasn't evolved much in 50 years (other than refrigerant changes, which actually hurt efficiency).  The basic system uses gas/liquid phase changes to absorb heat and then pump it somewhere else.  Not much life left here.  How about using an endothermic magnetocaloric material such as gadolinium (or other composites) in a strong magnetic field to cool your fridge or house?  The Brits are working on just such a design for a solid-state refrigerator. Hey, isn't that the same nation that gave us the guts of our microwave ovens? They invented the high-power cavity magnetron in the early 1940s.
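As a trivial example of item 2's "appliances that know the price of energy," here is a rough Python sketch; the price feed, threshold and load names are all hypothetical stand-ins, purely for illustration:

```python
# Toy "price-aware" rule: defer flexible loads when electricity is expensive.
DEFERRABLE_LOADS = ["dishwasher", "water heater", "EV charger"]
PRICE_THRESHOLD = 0.20          # dollars per kWh, arbitrary for the example

def loads_allowed_now(price_per_kwh):
    """Return which deferrable loads may run at the quoted price."""
    if price_per_kwh > PRICE_THRESHOLD:
        return []               # wait for a cheaper hour
    return DEFERRABLE_LOADS

print(loads_allowed_now(0.35))  # peak pricing: [] - everything waits
print(loads_allowed_now(0.11))  # off-peak: all deferrable loads may run
```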

So you probably get my point.  Humans are driven by environmental pressures. If there’s not enough water at our watering hole, then let’s move to a new one.  But let’s not wait until the hole has gone dry, the vegetation has died and all of the water buffalo have moved on to other places to say, "Hey, we should move to another watering hole."  Let’s be a bit more proactive. Till next time...

June 24, 2009

Telepresence – The Next Best Thing to Being There

Just about every week I step onto some form of aircraft - mostly turbine-powered, kerosene-burning jets.  I leave my carbon footprint trailing all over the friendly skies and often reflect on that fact.  After running for a flight and finally settling into my generously spacious coach seat, I get a chance to breathe and relax.  My mind often goes to past episodes of Star Trek or other futuristic science fiction shows where people simply press a button and are instantly connected via real-time video communications with anyone, anywhere - even between star systems (a bit of a physics problem there with the faster-than-light information propagation thing, but I digress).

So, this week I opened a copy of USA Today and found an interesting article in the Money section on the re-emergence of video conferencing technology.  In a world where reducing energy consumption and the dependence on carbon-based fuels is paramount, it would seem a no-brainer to shove some horsepower, in the form of incentives, into the video conferencing industry.  Companies such as Citrix and Cisco are already providing services to the masses via the web.  Services such as GoToMeeting and WebEx provide a shared desktop environment for viewing each other's PowerPoint slides or for using shared drawing and mark-up tools. This is extremely handy when you need a quick meeting to "touch base," which in my industry seems to happen every hour.

On the higher end are companies such as Tandberg and Polycom, which supply specialized equipment, services and software to enable multi-user, high-definition audio/video conferencing, or "telepresence."  These systems are really something and are geared toward corporate-level service. However, they often require a significant investment in equipment and infrastructure, and they suffer from interoperability issues between competing vendors.

Beyond cost, there are a few issues with telepresence that have limited the adoption of the technology.  One is simply "eye contact" and shaking hands.  As humans we extrapolate a great deal of information from body language - especially someone's eyes.  Body language can be extremely telling when you are in the same room.  Place someone in front of a camera, and you may not see the detail, or the body language may be influenced by the Hawthorne Effect, also known as the "observer effect."  A person may act differently if they feel they are being watched.  The knowledge that the camera is sending a video stream to possibly unknown individuals, or even being recorded, will change a person's natural behavior.  This can interfere with what would otherwise be a normal conversation.

As people get used to the idea of telepresence, those issues will fade, but today the lack of ubiquitous access and standards continues to plague the industry.  I would love to have a camera built into one of the monitors in my office so I could simply answer a call and "see" the individual - not to mention instantly share information as you would in person.  Seeing someone over a video link reminds you of your connection to them and builds the relationship through repeated virtual "in-person" meetings.  However, many systems cannot interoperate, which limits many calls to pre-arranged meetings.  Not that arranging a meeting beforehand is bad, but it limits the ability to impulsively place a video call to someone.

As the telepresence industry evolves, the issues with interoperability and viewer self-consciousness will be solved or fade away.  By that time I will probably have a wall-size OLED display and a persistent connection with my comrades worldwide.  People all across our organization will be able to walk by my virtual "cube" and see if I'm in - maybe not such a great idea if I'm trying to get something time-critical completed! Something to think about... but when it happens I will surely miss the posh and lavish comforts of modern airline travel. Till next time...

May 27, 2009

The Next Fifty Years of Energy

I've been blogging now for over a year and have covered topics ranging from nano-technology and the future of semiconductors to large scale power generation and transmission.  This week marks the 50th anniversary of my company, National Semiconductor.  This milestone reminded me of how far we've come as a technological race.  While writing I've often reflected on my past engineering experience to look for examples of how we have improved our way of life. However, in this issue I wanted to take a look forward at one of our civilization's next big hurdles... our future energy supply.

We are reaching a critical point where our population will soon exceed 8 billion people - many of whom will be the first generation to use electricity or drive a car. In the midst of our current economic crisis it is hard to imagine global markets surging from the millions of new consumers who will have buying power in the near future thanks to technology's reach.  As we continue our journey into the 21st century, energy will shape the new economy, driven by the demands of manufacturing, agriculture and transportation.  As automobiles begin to shed their gasoline engines for fully electric drives, more electricity will be required to recharge the energy storage onboard these vehicles.  A simple shift from burning gasoline to fully electric vehicles will not solve our energy crisis, since much of our electricity comes from carbon-based fuels such as coal.  These shifts will require revolutionary changes to meet the new demands.

As at the beginning of the industrial revolution, there will be change on a scale never before seen.  Carbon-based fuels have been our energy standard for over 100 years, but they are becoming harder to find and extract, and they pollute our environment.  It is well understood that tens of thousands of terawatts of power rain down on our planet every day, conveniently provided by our sun.  It lights and heats our world and drives our weather.  However, we currently capture only a tiny fraction of this energy through hydroelectric, wind or solar energy farming.

There are millions of square miles perfect for collecting this free energy, but the technologies are fairly new, with some proposed projects reaching incredible scales.  For example, in New South Wales, Australia, a proposal has been made to build a solar chimney towering over 3000 feet tall with a heat collector covering over a square mile.  As the air under the collector is heated, it naturally wants to rise due to its lower density (like a hot air balloon).  As it rushes into the chimney, which forms a natural draft, the flowing air turns huge turbine generators that produce electricity.  The energy is then transferred to the grid and sent to cities where it can be consumed.  This is the scale of energy engineering that will become commonplace by 2060.

Another large-scale proposal is to place gigantic solar arrays in deserts around the world.  These arrays would either convert solar energy directly into electricity or capture the heat to boil water and turn steam turbines.  It has been calculated that a photovoltaic array 100 miles on a side would be capable of providing all of the current energy needs of the United States - and that's with current conversion efficiencies under 20% (an incredibly poor efficiency rating).

In practice, placing arrays closer to where the energy is consumed provides great benefit.  Combined with "smart grid" technologies, photovoltaic arrays spread over many commercial and residential buildings will gather enough energy to begin reversing the trend of building new large-scale carbon-based power plants.  One of the biggest problems with electrical generation is getting the energy where it needs to be, when it's needed.  The peaks and valleys of electrical demand force power plant operators to constantly struggle to keep plant output balanced with need.  By having local generation spread out over a large population area, peak demands become much easier to manage.  It also adds another level of reliability, since the local solar generators can form micro-grids, allowing them to disconnect completely from the larger grid if necessary without interruption.

Along with solar will come other technologies that can be deployed locally, such as wind turbines.  Large-scale wind farms are a common sight today, but smaller, high-efficiency, vertical-axis turbine designs will continue to improve and allow almost anyone to harness the wind.  As with PV systems, these generators will be connected to a smart grid to provide maximum load management.

Many scientists and engineers see fusion power as an ultimate solution arriving this century.  With experiments and even small pilot plants under construction, the consensus among them is that practical fusion plants will be a reality by 2050.  This would be the ultimate replacement for today's infrastructure of carbon-fueled and nuclear-fission power plants.  But we may find that the fusion reactor we have 93 million miles from Earth is all that we need.  With PV efficiency improvements and large-scale deployment, combined with practical storage methods, our technologies may one day be driven completely by the sun.

Think this is far-fetched?  An average home in the United States consumes around 1,000 kilowatt-hours per month.  Add an electric vehicle, and that may rise to around 1,500 kilowatt-hours.  So let's round it up to 2,000 kilowatt-hours to completely remove all carbon-based fuels such as natural gas and propane.  That breaks down to just under 67 kilowatt-hours per day, or roughly a continuous 2.8 kilowatt load (about what three hair dryers draw).

Assuming the sun shines 8-10 hours a day, that 50% of days are sunny (even cloudy days produce some solar power) and an energy-storage efficiency of 50% (laptop batteries do far better than that), a solar PV array would only need to generate around 30 kilowatts while in daylight to meet the entire energy requirement of the home and the automobile. Today, even with PV arrays of only 10% efficiency, 15-kilowatt systems are common and affordable (with tax credits carrying some of the burden).  It doesn't take a large stretch of the imagination to see 30-50 kilowatt systems on every home and business within the next 50 years.
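The sizing arithmetic is easy to check; here is a quick Python sketch using the same assumptions as above (2,000 kWh per month, about 9 hours of sun, half the days sunny, 50% storage efficiency):

```python
# Rough PV array sizing for a fully electric household, per the assumptions above.
monthly_kwh = 2000
daily_kwh = monthly_kwh / 30                 # ~67 kWh/day, ~2.8 kW continuous

sun_hours = 9                                # middle of the 8-10 hour range
sunny_fraction = 0.5                         # half the days produce power
storage_efficiency = 0.5                     # round-trip losses through storage

# On a sunny day the array must cover today plus a cloudy day, through storage.
kwh_to_generate = daily_kwh / sunny_fraction / storage_efficiency
array_kw = kwh_to_generate / sun_hours

print(f"Continuous load:   {daily_kwh / 24:.1f} kW")
print(f"Array size needed: {array_kw:.0f} kW")   # ~30 kW, as in the text
```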

So, while you’re out on a sunny day at the gas station filling up your car and wondering about gasoline futures or wondering what to do to keep your electric bill low enough so you can afford to feed your family, take a look up and realize that all the energy you will ever need is falling on the grass in your back yard - something to think about!  Till next time...

May 07, 2009

The Curse of Moore's Law

As many of you know, Gordon Moore observed in his 1965 paper that the number of transistors that can be integrated on a chip increases exponentially - a doubling he later pegged at roughly every two years. So far, Moore's Law has held up well, if not conservatively.  The Intel 4004 processor of 1971 contained around 2,300 transistors (yes, two thousand three hundred).  The new Intel quad-core Itanium "Tukwila" contains around two billion transistors - an increase of almost a million fold... so that's good news, right?

In the scheme of higher performance or simply improved portability (think Apple iPod shuffle), higher transistor counts are a wonderful thing. However, there are forces at work at the atomic scale that are creating problems for these amazing shrinking transistors... as their size goes down, the total energy the chips consume goes up.  Why? First, some semiconductor fundamentals...

Figure 1 shows the scale of a typical Complementary Metal Oxide Semiconductor (CMOS) FET. Like all semiconductor devices today, MOSFETs (Metal Oxide Semiconductor Field Effect Transistors) are fabricated using a layering process.  Currently the layering is patterned using deep-ultraviolet lithography because of the extremely small line spacing used to fabricate the circuitry.  Basically, the UV light exposes sensitized areas of the device to either allow or prevent etching or ion implantation as the layers are built up to make the chip.  These are very tiny devices - the gate length of a modern MOSFET inside a digital chip is on the order of 65 nanometers, while an average human hair is roughly 100 micrometers in diameter... over 1500 times larger!

As the transistor's conduction channel is made smaller, so must be the insulating oxide that sits above it and controls the charge carriers between the source and drain. As the oxide gets thinner, it becomes harder to prevent electrons from "tunneling" through the insulator to the underlying substrate, conduction channel or source-drain extensions.  This occurs in both the "on" and "off" states, causing significant losses in large-scale integrated circuits.  Sub-threshold leakage is also a problem for digital devices: this is the current that flows between the source and drain when the gate voltage is below the "turn-on" threshold - useful for analog circuits, but a curse for digital designers.

It turns out that as we shrink transistors, all of these parasitic problems that used to be minor at larger scales now add up, primarily because of the sheer number of transistors in modern devices.  Equation 1 shows the relationship between some of these characteristics - most notably the supply voltage "V": P ≈ C·V²·f + V·iLEAK, where C is the switched capacitance and f is the clock frequency.  As the supply increases, dynamic power consumption (the first term) grows with the square of the voltage and is usually dealt with first, being the larger source of power consumption.  However, the static losses (the V·iLEAK term) keep growing as transistor geometries shrink and densities increase.  And as the operating frequency is scaled back to conserve energy, the static loss predominates - a real problem for large-scale devices with hundreds of millions of transistors.
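That last point - static loss taking over at low clock rates - falls straight out of Equation 1. Here is a small Python sketch with invented numbers (they describe no real chip) that shows leakage's share of the total as the clock is scaled back:

```python
# Dynamic vs. static power as the clock is scaled back: P = C*V^2*f + V*I_leak
C, V, I_LEAK = 1e-9, 1.0, 0.2        # illustrative values only

for f_mhz in (1000, 500, 100, 10):
    dynamic = C * V**2 * f_mhz * 1e6
    static = V * I_LEAK
    share = 100 * static / (dynamic + static)
    print(f"{f_mhz:5d} MHz: dynamic {dynamic:.3f} W, static {static:.3f} W "
          f"({share:.0f}% of total is leakage)")
```

With these numbers leakage is under 20% of the total at full speed but over 95% once the clock drops to 10 MHz - the dynamic term shrank with frequency, while the leakage term did not.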

There are structural ways to minimize these losses, but as geometries continue to shrink, power will become a much more serious issue... not only for energy consumption (i.e. operating costs) and the battery life of equipment that uses these devices, but for the heat that builds up inside the chips themselves.  As heat builds up and cannot flow away from the source, the localized temperature increases, accelerating aging in the device.  A chip will fail sooner if operated at high temperatures, so lowering the power consumption improves the lifespan of the device.

Back in 2000 my company, National Semiconductor, pioneered a cool way to lower not only the dynamic power but the static losses as well.  It's called Adaptive Voltage Scaling and was used primarily in portable devices to increase run time on batteries.  However, as chips continue to grow more complex, AVS is now showing up in large-scale devices such as the Teranetics TN2022 10GBase-T Ethernet physical layer device.  AVS exploits the fact that all digital chips are designed and implemented for worst-case process, timing and temperature.  This is like saying the most out-of-shape person will consume one gallon of water while hiking a trail… therefore, all hikers are required to carry one gallon of water - even though most will do just fine with a single canteen full.  It burdens the better members of the group with the problems of the weakest.  So, using technology, each digital chip can be "assessed" for its "water consumption" based on how "fit" it is... that is, how well the chip's process performs.  Like humans, chips vary around a mean performance level, but the majority fall near the center of the distribution.

AVS leverages this characteristic by placing monitor circuits inside the digital core (or cores) that report the current state of the chip to an embedded controller called the APC, or Advanced Power Controller.  The APC can then decide whether the supply voltage is too high or too low and communicate the adjustment to an external power supply device called the EMU, or Energy Management Unit.  As the temperature moves, the process ages or the digital load varies, the controller continues to make updates that minimize the energy consumption of the overall system.  Energy savings of 40% or greater have been observed with this technology, especially in systems that use multiple devices.
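Conceptually, the closed loop behaves something like the sketch below - a highly simplified Python illustration with an invented monitor, thresholds and step sizes, not National's actual APC/EMU implementation:

```python
# Conceptual AVS-style loop: read timing margin, nudge the supply voltage.
# Every number and function here is invented purely for illustration.
import random

V_MIN, V_MAX, STEP = 0.85, 1.20, 0.01      # volts

def read_timing_margin(v):
    """Stand-in for the on-die monitors: more supply voltage, more slack."""
    return (v - 0.80) * 0.5 + random.uniform(-0.01, 0.01)

def control_step(v):
    margin = read_timing_margin(v)
    if margin > 0.10 and v > V_MIN:
        v -= STEP                          # plenty of slack: shave the supply
    elif margin < 0.05 and v < V_MAX:
        v += STEP                          # margin getting thin: back off
    return v                               # in hardware, this goes to the supply

v = 1.20                                   # start at the worst-case voltage
for _ in range(50):
    v = control_step(v)
print(f"Settled supply: {v:.2f} V")        # this "fit" chip needs only ~1.0 V
```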

As Moore's Law marches on, the number of transistors on a single chip will grow to levels never before imagined... and to keep these amazing devices running, technology will also need to address the methods for powering them.  As in my previous post, "The Personal Supercomputer in Your Pocket", the potential is limited only by our imagination and our ability to manipulate the physical world.  Till next time...