Computing

September 14, 2011

Will Binary Communications Survive?


In my last post, "Going Faster Has a Price," I discussed the issues with transmitting bits represented by two states at faster data rates and the problems of inherent loss in the media, ISI and many other phenomena that screw up the signal.  Through careful channel design and active means, engineers can transmit and recover bits over copper cable and backplanes at ever greater rates.  For example, National Semiconductor and Molex demonstrated 25 Gbps+ communications over a backplane at DesignCon 2011 this year.  But how long can the industry keep doing this without changing the way we define a bit on a backplane?

This problem is not a new one... as a matter of fact, it is a very old one, going back to the early telecom days of modems.  In the early days of circuit-switched (voice) networks, filters were placed in the system to limit the bandwidth of the signal to around 3 kHz, which was enough to reconstruct a human female voice without distortion.  This was done primarily as a means to frequency-multiplex multiple telephone circuits onto a single microwave transmission between towers (before fiber-optic lines).  So when people tried to move "bits," they were limited to that 3 kHz of bandwidth.
Enter the Shannon-Hartley Capacity theorem (see below).

C = B · log2(1 + S/N)

What this says is that the maximum capacity (C) of a channel to carry information is a function of the bandwidth (B) in hertz and the signal-to-noise ratio (S/N), which has no units.  So as your noise goes up, your capacity to move information goes down.  This plagued early engineers and limited the amount of information that could be moved through the network.  Early modems used Frequency Shift Keying (FSK): one frequency was used to indicate a "0" state and another to represent a "1" state.  The frequencies were chosen so that they would pass through the 3 kHz limit of the channel and could be filtered from the noise.  The problem is that you couldn’t switch between them faster than the bandwidth of the channel, so you were still limited by the 3 kHz... so how did they get around this?  They used symbol coding.
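To get a feel for what that limit means in practice, here is a minimal sketch (mine, not from the original post) that plugs a 3 kHz voice channel and a few assumed signal-to-noise ratios into the Shannon-Hartley formula:

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley limit: C = B * log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A ~3 kHz voice channel at a few representative signal-to-noise ratios.
for snr_db in (10, 20, 30, 40):
    snr_linear = 10 ** (snr_db / 10)      # convert dB to a plain ratio
    c = shannon_capacity(3000, snr_linear)
    print(f"SNR {snr_db:2d} dB -> capacity ~ {c / 1000:.1f} kbps")
```

At the 30-40 dB of a good phone line, the ceiling works out to a few tens of kilobits per second - roughly where dial-up modems eventually topped out.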

Symbol coding basically combines groups of bits into a single symbol.  That symbol can be represented by a carrier frequency with a particular combination of amplitude and phase.  This led to the development of Quadrature Phase Shift Keying (QPSK) and Quadrature Amplitude Modulation (QAM) techniques, which are in use today in modern cable modems.  A whole group of bits can be sent at once instead of one bit at a time... clever!  However, it comes at a cost and a fair amount of complexity, relegated to the world of digital signal processing.
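As an illustration (a toy sketch of the idea, not any particular standard's mapping), here is a minimal 16-QAM mapper that turns every four bits into one amplitude-and-phase symbol:

```python
import cmath

# Toy 16-QAM mapper: every 4 bits select one of 16 points on a square
# constellation; the mapping below is illustrative, not taken from any
# particular standard.
LEVELS = (-3, -1, 1, 3)

def bits_to_symbol(bits):
    """Map 4 bits to one complex constellation point (I + jQ)."""
    i = LEVELS[bits[0] * 2 + bits[1]]
    q = LEVELS[bits[2] * 2 + bits[3]]
    return complex(i, q)

payload = [1, 0, 1, 1, 0, 0, 1, 0]          # 8 bits become only 2 symbols on the wire
symbols = [bits_to_symbol(payload[k:k + 4]) for k in range(0, len(payload), 4)]
for s in symbols:
    print(f"symbol {s}: amplitude {abs(s):.2f}, phase {cmath.phase(s):.2f} rad")
```

The symbol rate is still pinned by the 3 kHz channel, but each symbol now carries four bits, so the bit rate is four times higher.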

But what about the high-speed digital signal path between two systems in our modern Internet?  Today it uses scrambled Non-Return-to-Zero (NRZ) coding, which prevents DC wander and EMI issues... but it is still either a "0" or a "1" state - two levels representing the state of a bit.  Will this medium ever move to other coding schemes to get more data through the channel, as the early telephone system did?  It might.  Intel and Broadcom are both pushing for a standard that uses multiple levels and symbol encoding for 25 Gbps and beyond.  This has the added benefit that more bits can be sent in a single transmission of a symbol.  This is already being done today in Ethernet for the 10/100/1000 standards over CAT-5/6/7 UTP cable, where the bandwidth of the channel is limited to around 350 MHz.  Will we see this at 25 Gbps and beyond?  Possibly...
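To make the "more bits per symbol" point concrete, here is a minimal sketch of four-level (PAM-4-style) signaling next to NRZ; the level values and mapping are illustrative only:

```python
# Minimal sketch of multi-level signaling: with four amplitude levels, each
# transmitted symbol carries two bits, so the same symbol rate on the channel
# moves twice the bit rate of two-level NRZ. Levels are illustrative.
NRZ_LEVELS = {0: -1, 1: +1}
PAM4_LEVELS = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}   # Gray-coded bit pairs

bits = [1, 0, 0, 1, 1, 1, 0, 0]

nrz = [NRZ_LEVELS[b] for b in bits]                                           # 8 symbols
pam4 = [PAM4_LEVELS[(bits[k], bits[k + 1])] for k in range(0, len(bits), 2)]  # 4 symbols

print("NRZ :", nrz)    # one bit per transmitted level
print("PAM4:", pam4)   # two bits per transmitted level, half as many symbols
```

The price, as discussed below, is the signal processing needed to generate and recover those tighter levels.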

The problem with this method is power.  It takes DSP technology at each end of the channel to code and recover the signals, adding energy consumption to the mix.  With thousands of channels in a modern data center, that power can add up really fast.  NRZ techniques are very low in power consumption.  National Semiconductor has produced devices that can move data at rates of 28 Gbps over copper media and backplanes at very low power consumption - something multi-level systems will find difficult to do.  The industry agrees and is pushing back on the multi-level proposals.

There may come a day beyond 28 Gbps where there is no alternative but to go to multi-level symbol encoded systems, but I think that may be some time off in our future when 100 Gbps is far more common - perhaps even to your cell phone!  Till next time...

 

 

 

March 17, 2011

Going Faster Has a Price


As you know, if you want a car that you can drive at 150 MPH, you will pay a premium, since it requires additional technology to keep you connected to the road and overcome the frictional forces of the air - as well as the "Gee, I look really cool in this car" effect, which at times comes with an even greater price tag.  In the physical world of high-speed data and signal integrity, these laws also apply.  Dr. Howard Johnson knows this well and has published several books on the subject.  Even the subtitle "Advanced Black Magic" implies the difficulty of designing high-speed systems.


Well folks, it isn’t getting easier.  In fact, it is getting far worse.  What is interesting about our world is our fundamental quest for knowledge - and the richer the content of the information, the quicker people learn or share it.  There is also the desire to communicate, and the same applies there... the richer the content (photos, videos, music, etc.), the more appealing the media.  With the passage of DMCA Title II, which shields service providers from copyright-infringement liability when making local copies to stream (or when unscrupulous pirates steal the content from them), along with the deployment of DOCSIS cable modems (now at version 3.0 and exceeding 100 Mbps up and down - if the OSP is willing), the stage is set for one of the largest bandwidth explosions ever witnessed by man.


This expansion of bandwidth is driving data center equipment to ever increasing capacities... it wasn’t long ago that 1 Gbps was fast... not any more.  The norm in data centers now is 10 Gbps Ethernet (802.3ae - optical), and the industry is quickly moving to 100G Ethernet!  The latter has been accomplished via 10 lanes of 10 Gbps, but is moving to 4 lanes of 25 Gbps, which matches the number of lasers and receivers found in most 100G modules.  Do you know what happens to a 25G signal when it travels over a backplane?  It isn’t pretty.  In fact, 10G has issues as well, and it’s amazing that it works at all...


For example, take a look at the image below.  This is a comparison of PCI Express signals (generations 1 through 3) over 26 inches of differential traces on a PCB (FR-4).  As the speed of the signal increases, the eye opening decreases.  What used to work without issue now requires either a change in board material or active circuitry to restore the signal.  And these signals are far slower than the 25-28 Gbps streams now being considered for the electrical interface to optical modules.  Without signal conditioning, careful layout (thank you, Dr. Johnson), and good impedance control... no bits, just noise...

[Figure: eye diagrams of PCI Express Gen 1, 2, and 3 signals after 26 inches of FR-4 differential traces]
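To see why the eye collapses as the bit rate climbs, here is a deliberately crude sketch (my simplification, not the measurement above): treat the trace as a single-pole low-pass channel with an assumed 1.5 GHz bandwidth and ask how much of a worst-case transition settles within one bit period.

```python
import math

# Crude intuition only: real FR-4 loss is frequency-dependent (skin effect,
# dielectric loss) and causes ISI; a single-pole low-pass with an assumed
# 1.5 GHz corner is just a stand-in to show the trend with bit rate.
F_3DB_HZ = 1.5e9
TAU = 1 / (2 * math.pi * F_3DB_HZ)

def worst_case_eye_fraction(bitrate_bps):
    """Signal level reached, as a fraction of full swing, one bit time after a
    worst-case opposite-polarity transition; zero or less means a closed eye."""
    ui = 1 / bitrate_bps                      # unit interval (one bit period)
    return 1 - 2 * math.exp(-ui / TAU)

for label, rate in (("PCIe Gen1 2.5G", 2.5e9), ("PCIe Gen2 5.0G", 5e9),
                    ("PCIe Gen3 8.0G", 8e9), ("25G serial", 25e9)):
    eye = worst_case_eye_fraction(rate)
    print(f"{label:>14}: ~{max(eye, 0) * 100:3.0f}% of full swing left")
```

Even this toy model shows the trend: plenty of margin at 2.5 Gbps, a squeezed eye at 8 Gbps, and essentially nothing left at 25 Gbps without equalization.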

 
If you want to know more about fixing this, visit http://www.national.com/datacom and watch some of the cool videos on how it’s done... as Spock would say... "Fascinating"... Till next time...

December 16, 2010

Get Active - Lowering Networking Power in Data Centers


In the past I’ve discussed topics such as virtualization and digital power to help improve data center processing efficiency.  I may even have discussed additions to the 802.3 standard that idle Ethernet drops when they are not in use.  However, I had not looked at the power of the interconnect itself, and what I found was surprising.
In medium-scale data centers, such as those run by financial institutions, large retailers or corporations, you will find thousands of server blades and the networking equipment to connect them together.  What is interesting about this architecture is that the majority of networking traffic occurs within the data center itself.  The reason for this is partially due to the litigious nature of our society and the never-ending quest for information to help us understand ourselves.  For example, simply performing an on-line stock trade - which to the user is a single transaction - will spawn dozens of additional inter-server transactions to secure, execute, verify and log the event, as well as to extract statistics used in market analysis.  So when millions of people are on-line day trading stocks, billions of transactions are occurring within the data centers.
This huge amount of traffic needs bandwidth, and traditionally this has been provided by fiber-optic cable.  Fiber has the advantage of a very small diameter, thus leaving space for the air flow that cools the systems.  Larger copper wire could be used for short hauls, but its diameter would block the air flow and cause overheating.
Fiber requires light (lasers) to operate, and different distances and data rates require different modes of optical transmission.  To allow flexibility, equipment manufacturers have created connectors that accept a module containing the laser and receiver electronics.  There are many variants, but the most accepted standards are SFP+ (Small Form-factor Pluggable), QSFP (Quad SFP), CFP ("C", i.e. x100, Form-factor Pluggable), XFP (10 Gigabit Small Form-factor Pluggable), and CXP.  These modules are actively powered and consume 400-500 milliwatts of power each!  When you have thousands of them, the power quickly adds up.  Additionally, the heat generated must be dealt with, and the modules are also very expensive.
Now what’s most interesting is that the majority of interconnects within the data center are only a few meters long!  Normally, passive copper cables would work fine, but as mentioned above they would decrease the airflow at the back of the equipment.  So a clever solution is to use smaller-diameter copper wire (28-30 AWG), which suffers from higher loss, and to place active drivers and equalizers such as the DS64BR401 in connectors that fit these standard module sockets.  This technique is called "Active Copper" or "Active Cable" and has many benefits in runs of less than 20 meters.  The first benefit is cost - these cables can be less than half the cost of the fiber module and cable.  The second is power - active cables can reduce the power consumption significantly if properly designed (< 200 mW vs. 400 mW for fiber).
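A quick back-of-the-envelope using the per-port numbers quoted above shows why this matters at scale; the link count and the assumption of powered electronics at both ends of every cable are mine, for illustration only:

```python
# Rough comparison using the post's per-end figures: ~450 mW for an optical
# module vs. ~200 mW for an active copper cable end. The 10,000-link count is
# an illustrative assumption, not a measurement.
LINKS = 10_000                      # short (< 20 m) links inside one data center
OPTICAL_W = 0.45                    # per end, middle of the quoted 400-500 mW range
ACTIVE_CU_W = 0.20                  # per end, from the quoted < 200 mW figure
ENDS_PER_LINK = 2                   # electronics at both ends of every cable

def total_kw(per_end_watts):
    return LINKS * ENDS_PER_LINK * per_end_watts / 1000

optical_kw = total_kw(OPTICAL_W)
copper_kw = total_kw(ACTIVE_CU_W)
print(f"optical modules: {optical_kw:5.1f} kW")
print(f"active copper  : {copper_kw:5.1f} kW")
print(f"savings        : {optical_kw - copper_kw:5.1f} kW, before counting the "
      f"cooling needed to remove that heat")
```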
Fiber will always have a place in carrying data over the long distances at which it excels.  However, in the data center, copper wire is regaining ground with the help of active electronics and may soon be the majority of the media carrying your next stock trade!  Till next time...

April 05, 2010

The World Electric – Part III


Imagine that it’s now the year 2093, two hundred years after the World’s Columbian Exposition of 1893, where Westinghouse lit the event with 100,000 incandescent bulbs, amazing the Victorian visitors with artificial electric light.  In this future at the end of the 21st century, the electronics industry has greatly matured and also diverged.  Micro-electromechanical Systems (MEMS) have merged with analog semiconductor technology to create entire laboratories on silicon and diamond that fit on a pinhead, while digital functions have moved to the quantum mechanical realm of matter.

Digital chips are no longer referred to as microelectronics but rather nano-electronics, utilizing groups of quantum dots to form interconnects and logic.  Small-geometry CMOS processes faded away around 2020 (the last were sub-16-nanometer 3D structures) with the introduction of production-grade, high-temperature Double Electron-layer Tunneling Transistors (DELTT) and some limited quantum interference devices.  These quantum-well devices were unipolar, exhibiting both positive and negative transconductance depending on gate voltage.  This eliminated the need for complementary device types ("n" and "p"), resulting in greatly simplified structures.  These devices were eventually replaced with Quantum Dot Transistor (QDT) variants.

Logic is no longer based on electron currents but rather on electron position... using this method, molecular-size gates and functions are commonplace.  Computers now run at equivalent clock speeds exceeding 40,000 GHz (although no clock is running), thanks in part to quantum effects and new quantum architectures.  An additional byproduct is extremely high energy efficiency, resulting in almost no waste heat.

Analog semiconductors, on the other hand, have merged with nano-scale machines to form complete sensor and analysis engines on a single device.  These devices are so small and consume so little power that a complete health monitor fits into a ring the size of a wedding band.  Utilizing mechanical and thermal energy harvesting, no batteries are required, and communication with the "net" is accomplished through large arrays of nano-access-points spread like fertilizer across the countryside.  Even structures have pea-size sensors embedded right in the concrete mix during construction that use RF energy harvesting to relay their state and stresses, providing status in real time.

The world of 2093 as seen in this vision would not exist if not for the never ending march of semiconductor performance - both analog and digital.  Many of these "proposals" of our future are based on research going on right now across many disciplines.  Economic progress dictates growth and if there is to be growth in the semiconductor industry, engineers and researchers will find ways to navigate the physical laws of our world to make gains in performance.  My vision is not everyone’s, but with some historical review, a look at current technologies in production, and a little bit of imagination I’m sure you can imagine the world of 2093 as I do... potentially so far advanced that we (like our Victorian predecessors holding an iPhone) would not even recognize the technology!  Comments are always welcome - until next time...

October 13, 2009

The Energy of Information


Is the energy content of information increasing?  As a technologist, I find it very interesting that in the twenty-first century our world still prints newspapers and books on paper.  More amazingly, the computer printer market is booming, especially in areas such as photo printers.  In the late twentieth century it was predicted that by the next millennium paper would be obsolete as a medium for sharing information... I'm pretty sure not everyone got that memo...

So what happened?  We are now in a world where the internet almost completely permeates our environment, including locations so remote that only a satellite link and solar-recharged batteries will power the nodes (think "Antarctica").  We have advanced social networking, file storage and even complete applications that exist solely in a nebulous cloud of computers spread across a vast infrastructure... and we still print out the map to a local restaurant on plain old paper.

My theory is that everything migrates to the lowest possible energy level, and paper requires very little energy to provide information - it only needs a small amount of light to shine on it so a human can observe what is stored there.  In fact, it requires zero energy to store the information (or to read it, if it's in Braille) and potentially has a retention life of several hundred years (not so for a DVD).

So paper is not such a bad medium for sharing information - mankind has been doing that for thousands of years.  But it has one major flaw... it is hard to update.  If you manufacture encyclopedias on paper, then the second you set the type for the printing, they are obsolete.  Information does not stand still.  It is fluid as our understanding of the universe expands and history moves behind us in time. And worse, information can be useless.  Think about a billion books randomly arranged in a gigantic library without a card catalog.  Even with an index, searching millions of pages of information for knowledge may never yield fruit. 

So is the energy content of information increasing?  I would suggest it is.  As we accumulate more information, the energy required to store, search and display it increases - possibly exponentially with the quantity of information.  The amount of new information being created daily is unfathomable, since people are sharing what they know more freely and the indexing of that information has greatly improved.  Additionally, information that was previously in print is now being converted for electronic sharing, increasing the energy that information requires.  Google did some math several years ago and predicted that even with computing power advancing as it is, it would still take roughly 300 years to index all the information on the World Wide Web... Wow!  Guess how much energy that will take!  Till next time...

April 02, 2009

The Personal Supercomputer in Your Pocket


OK, imagine it’s 1984 (for a glimpse into the past, see my previous post, "If Houses Grew Like Hard Drives").  Someone walks up to you on the street (possibly dressed in a black suit) and hands you an iPhone 3G.  What would you think?  Remember, 1983 was the year Motorola introduced the DynaTAC 8000X "brick" phone - and it was just a (very large mobile) phone...  Technical issues with using a 3G phone in the 1980s aside, I’m sure you’d think it was of extraterrestrial origin (or from some other advanced civilization at the earth’s core).  And that was only around 25 years ago.

As an engineer I’ve watched the evolution and fusion of personal portable devices - I’ve owned many of them as well.  It was predicted in the late 1990s that portable devices (i.e. cell phones, music players, video camcorders, DVD players) would somehow "merge" into a single device that you'd carry in your pocket.  I remember having those discussions around the lunch table with my fellow engineers circa 1998 (only ten years ago).  It went something like this... "Hey guys, I just got a Rio MP3 player (from Diamond Multimedia)... totally cool gadget!  It holds up to twelve songs with no moving parts... it hooks up to my parallel port and I can download any song I want - I just need to compress the CD song with the Rio software and I’m mobile. Someday there’ll be a unit like that with five hundred megabytes of storage and a full color LCD that could hold pictures too!"

It was hard to imagine what would be possible with shrinking semiconductor process geometries, FLASH memory densities, display technology and power management.  We could only see so far into the future before it became cloudy.  The best we could do was to envision evolutionary progress - improving on what we already knew.  But what was happening in the labs at Apple, Nokia, Samsung, LG and others was revolutionary, made possible by semiconductor manufacturers and other technology suppliers.  We never saw the coming of CMOS image sensors with optics so small you could fit an entire video camera into the volume of a sugar cube (or less).  We could not imagine an 80-gigabyte rotating-media hard drive that was only one inch on a side and no thicker than a matchbook.  We might have imagined a few hundred megabytes of FLASH memory in a device, but not tens of gigabytes - that was science fiction.

Along with the functionality, we missed the connectivity completely.  In the late 1990s the World Wide Web was just taking off.  It was an era of the "New Economy" where stores were virtual and information was just a click away... that is, if you had a personal computer, a modem (56 kilobits per second) and a phone line.  We never would have imagined the 3G mobile web supported by a "cloud" of millions of computers spread around the globe supplying every imaginable variation of endless content.

All of what I’ve mentioned is now old news... things that have come and gone within a six-month design cycle.  Moore’s law continues to march us forward into the future, possibly jumping to quantum-well transistors and saying goodbye to shrinking CMOS processes and the power they consume.  Display technology will continue to improve, providing either projected images (i.e. pico-projectors) or screens that roll up.  Battery technology may get a boost from new materials that allow lithium-chemistry batteries to charge in seconds instead of hours.

So what’s next?  As Yoda might say, "The future I cannot see... very cloudy it has become."  What I can see is the evolutionary component of our technology.  It is quite clear that as a civilization we will continue to push the thresholds of our knowledge and provide continuous improvements in the methods used to facilitate the tools of everyday existence.  OK, that’s a bit poetic, but what does it mean to you?  Pull your Personal Mobile Device (PMD) out of your pocket, hold it in your hand, and imagine the kids of 2034 laughing at how primitive a device it is!  They will all have the equivalent of a modern supercomputer in their pockets that never needs charging, is always connected to the "cloud" at gigabit speeds, uses gesture, facial and voice recognition, is flexible and self-cleaning, and can project full 3D images directly onto their retinas... our present devices will be their "brick" phones.  Something to think about!  Till next time...

P.S. Check out Nokia's "Morph" concept for a glimpse into the future...


 

October 29, 2008

The Energy Impact of Grid Computing


I was exploring the Internet to see what happened to the SETI@home project - I was once a member and ran its client on a Windows 2000 machine in my office.  I was quite surprised to see that the project had evolved and was still alive and well.  More interesting than that was the myriad of projects now using distributed, or "grid," computing.

It started me thinking about problem solving in general.  If you need large amounts of computing power (e.g., a very expensive supercomputer and its infrastructure), you may limit which problems you try to solve.  However, if large amounts of computing power are available and inexpensive (or free), then those seeking to solve complex problems tend to take advantage of it.

For those of you not familiar with grid computing, here’s a brief tutorial.  Traditional computers solve problems in a linear or serial fashion, similar to working through long division: you work on a piece of the problem and, when it is complete, you move on to the next section.  The results of the first calculations are used in the next step, so it proceeds serially.  Distributed computing uses many computers to solve a problem by breaking it up into tiny pieces.  Each piece is assigned to a single computer for processing, so they can all work in parallel, greatly speeding up the result.

Only certain types of problems can be solved this way. For instance, the long division example above does not segment well for distributed computing. It does segment well for vectored computing which works like an assembly line, but requires dedicated processing elements. Problems like weather forecasting, computational fluid dynamics and certain mathematical problems like fractals can all be broken into small pieces and solved in this manner.  In the worlds of physics, pharmaceutical research, genome analysis and others there are many problems well suited to this type of computing.  The pieces are independent of one another and somewhat easily isolated to individual computer elements.
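As a toy illustration of the split-and-farm-out idea (my own sketch, not BOINC code), here is an embarrassingly parallel job - counting primes in independent ranges - chopped into work units that separate processes grind through before the partial results are summed:

```python
from multiprocessing import Pool

def count_primes(bounds):
    """Count primes in [lo, hi); each range is an independent work unit."""
    lo, hi = bounds
    def is_prime(n):
        if n < 2:
            return False
        return all(n % d for d in range(2, int(n ** 0.5) + 1))
    return sum(1 for n in range(lo, hi) if is_prime(n))

if __name__ == "__main__":
    # Split 0..200,000 into 8 independent chunks of 25,000 numbers each.
    chunks = [(start, start + 25_000) for start in range(0, 200_000, 25_000)]
    with Pool() as pool:                         # one worker per CPU core
        partial_counts = pool.map(count_primes, chunks)
    print("primes found:", sum(partial_counts))  # combine the partial results
```

A grid does the same thing, except the "pool" is made of volunteer machines scattered across the Internet instead of cores in one box.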

Typically, distributed computing is done with supercomputers designed with many processing elements, such as those built by IBM and other vendors.  These systems have anywhere from 64 to over 1024 independent computing elements (in many cases using multi-core processor chips, which multiply the processing capability even further).  This effectively provides thousands of times the computing power that would be available from a single high-speed computer.

Now, imagine millions of computers tied together into one massive supercomputer.  You no longer have merely 1000 computing elements, but millions of them.  By tying together average home computers (which are usually pretty snappy by today’s standards) using the Internet, this is exactly what you have.  This is grid computing: tying together computing elements with a communication grid such as the Internet.  The computers are not tightly coupled as they are in dedicated cluster supercomputers; they can come and go as users take the tasks on and off line.  Through the sheer number of computers in the grid, large problems can be solved using this technique.

It does take software to coordinate the distribution of data sets and the collection of results.  One such technology is BOINC, which stands for Berkeley Open Infrastructure for Network Computing.  The BOINC platform allows users to distribute their problems over the grid and collect the results.  Many projects, such as SETI@home, have moved to BOINC.

While looking at several of the grid computing sites, I started thinking about the power consumed by the computers in the grid.  Typically, the clients are screen savers that scavenge computer time when the user is away (or not using the machine).  When you walk away from your computer and the screen saver kicks in, instead of moving images around to prevent CRT burn-in (old school), it starts up the compute tasks assigned to the machine. 

If you’re like me, you rarely turn off your computers, to avoid the time it takes to "boot up" the machine.  Instead, I set the computer to enter a sleep mode while idle, which greatly reduces its power consumption.  In this mode, small parts of the system stay powered up to monitor mouse or keyboard activity and alert the computer to "wake up" and go into full-power mode.  This can dramatically lower the power consumption from 150 watts (full speed with the LCD on) to 20 watts or less (LCD off, hard drives powered down, graphics off-line, processor speed reduced to a crawl, etc.).

Looking at this, 20 watts may still seem high considering it is on all the time, but compared to 150 watts it’s a considerable savings.  If you consider that a single grid system like BOINC has over 500,000 computers in its cluster, the additional compute time increases the overall power consumption dramatically.  If you assume a computer sleeps 70% of the day while the user is away from the machine, its power draw during that idle time drops to roughly 13% of the full-power level (I think I’m going to do some measurements and report back in a later post).  For 500,000 machines entering sleep mode, that is a reduction of over 65 megawatts of power!  Over the period of one day, each computer would consume about 1.4 kW-hrs, and for 500,000 units the total daily energy consumption is roughly 708,000 kW-hrs.

Since the BOINC client does not allow the computer to enter sleep mode, the power consumption of the machine stays relatively flat all day.  Only the LCD display can be powered down (or enter stand-by mode).  To calculate the average active power of the system, let’s assume the LCD is allowed to enter stand-by mode while the client runs.  A modern LCD such as the Dell E228WFP consumes roughly 40 watts while running and 2 watts in stand-by.  So the LCD can power down, but the computer is still running at full power, since it is reading and writing to the hard drive and doing intensive calculations.  The power of the system is therefore only reduced to roughly 112 watts by the LCD entering stand-by (see diagram below).

[Figure: grid computing power comparison - a PC sleeping at roughly 20 W while idle vs. one running the grid client at roughly 112 W]

If you now consider that each machine running the client will consume roughly 112 watts for 70% of the time, each machine uses a little over 2.9 kW-hrs per day (compared with 1.4 kW-hrs per day for a non-grid computer). At US$0.16 per kilowatt-hour, that’s an increase in cost of only US$0.24 per day (US$7.20 per month) for any one user.  However, the grid now consumes 1.481 million kW-hrs per day compared to 708,000 kW-hrs which is an increase of 773,000 kW-hrs per day.
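Here is a small script (mine, just reproducing the back-of-the-envelope arithmetic above) that walks through the same numbers:

```python
# Reproduces the rough numbers above for a 500,000-machine volunteer grid.
MACHINES = 500_000
FULL_W, SLEEP_W = 150, 20          # full-power vs. sleep-mode draw per machine
GRID_IDLE_W = 150 - 40 + 2         # client keeps the PC busy; only the LCD stands by
IDLE_FRACTION = 0.70               # portion of the day the user is away
PRICE_PER_KWH = 0.16               # US$ per kilowatt-hour

def daily_kwh(active_w, idle_w):
    """Energy per machine per day, mixing active and idle hours."""
    return (24 * (1 - IDLE_FRACTION) * active_w + 24 * IDLE_FRACTION * idle_w) / 1000

sleeping = daily_kwh(FULL_W, SLEEP_W)      # ~1.4 kWh/day without the grid client
gridding = daily_kwh(FULL_W, GRID_IDLE_W)  # ~3.0 kWh/day with the client running
extra = gridding - sleeping

print(f"per machine: {sleeping:.2f} vs {gridding:.2f} kWh/day "
      f"(+US${extra * PRICE_PER_KWH:.2f} per day)")
print(f"whole grid : +{extra * MACHINES:,.0f} kWh per day across {MACHINES:,} machines")
```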

If you assume an average U.S. household consumes roughly 30 kW-hrs per day, that increase is the equivalent of adding 25,700 average homes to the power grid. This is not necessarily a bad thing, since solving incredibly compute-intensive problems could lead to a better world for humanity... but it does make you think "wow, that’s a lot of power!" Till next time...