Comms Infrastructure

September 14, 2011

Will Binary Communications Survive?

In my last post, "Going Faster Has a Price," I discussed the issues with transmitting bits represented by two states at ever-faster data rates: the inherent loss in the media, inter-symbol interference (ISI) and the many other phenomena that corrupt the signal.  Through careful channel design and active means, engineers can transmit and recover bits over copper cables and backplanes at ever-greater rates.  For example, National Semiconductor and Molex demonstrated 25+ Gbps communications over a backplane at DesignCon 2011 this year.  But how long can the industry keep doing this without changing the way we define a bit on a backplane?

This problem is not a new one... as a matter of fact, it is a very old one, going back to the early telecom days of modems.  In the early days of circuit-switched (voice) networks, filters were placed in the system to limit the bandwidth of each signal to around 3 kHz, which was enough to reconstruct a human female voice without distortion.  This was done primarily to frequency-multiplex multiple telephone circuits onto a single microwave transmission between towers (before fiber-optic lines).  So when people tried to move "bits", they were limited to that 3 kHz bandwidth.
Enter the Shannon-Hartley Capacity theorem (see below).

The theorem states that the maximum capacity C of a channel to carry information (in bits per second) is

C = B log2(1 + S/N)

where B is the bandwidth in Hertz and S/N is the signal-to-noise ratio (which has no units).  So as your noise goes up, your capacity to move information goes down.  This plagued early engineers and limited the amount of information that could be moved through the network.  Early modems used Frequency Shift Keying (FSK): one frequency indicated a "0" state and another represented a "1" state.  The frequencies were chosen so that they would pass through the 3 kHz limit of the channel and could be filtered from the noise.  The problem is that you couldn’t switch between them faster than the bandwidth of the channel allowed, so you were still limited by the 3 kHz... so how did they get around this?  They used symbol coding.
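The theorem is easy to evaluate as a sanity check. Here is a minimal Python sketch; the 30 dB signal-to-noise ratio is just an illustrative assumption, chosen to show why even a 3 kHz voice channel can carry tens of kilobits per second:

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley: maximum error-free channel capacity in bits/second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 3 kHz voice channel with an assumed 30 dB signal-to-noise ratio:
snr = 10 ** (30 / 10)            # convert dB to a linear ratio (1000)
c = shannon_capacity(3000, snr)
print(round(c), "bits/second")   # roughly 30 kbps
```

This is why clever coding (not faster switching) was the way past the 3 kHz wall: the channel had far more capacity than the simple FSK schemes were using.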

Symbol coding combines a group of bits into a single symbol.  That symbol can be represented on a frequency carrier by a combination of amplitude and phase.  This led to the development of Quadrature Phase Shift Keying (QPSK) and Quadrature Amplitude Modulation (QAM), techniques still in use today in modern cable modems.  A group of bits can be sent all at once instead of one bit at a time... clever!  However, it comes at a cost and a fair amount of complexity, relegated to the world of digital signal processing.
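The payoff is easy to quantify: an M-point constellation carries log2(M) bits in each symbol, so at a fixed symbol (baud) rate a richer constellation moves proportionally more data. A quick sketch:

```python
import math

def bits_per_symbol(constellation_points):
    """Bits carried by one symbol in an M-ary modulation scheme."""
    return int(math.log2(constellation_points))

# At the same symbol rate, richer constellations move more bits:
for m in (2, 4, 16, 64, 256):   # 2-FSK/BPSK, QPSK, 16-QAM, 64-QAM, 256-QAM
    print(f"{m:>3}-point constellation -> {bits_per_symbol(m)} bits/symbol")
```

So a 256-QAM cable modem moves 8 bits per symbol through the same bandwidth that carried 1 bit per symbol with FSK.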

But what about the high-speed digital signal path between two systems in our modern Internet?  Today they use scrambled Non-Return-to-Zero (NRZ) coding, which prevents DC wander and EMI issues... but it is still either a "0" or a "1" state - two levels representing the state of a bit.  Will this medium ever move to other coding schemes to get more data through the channel, as the early telephone system did?  It might.  Intel and Broadcom are both pushing for a standard that uses multiple levels and symbol encoding for 25 Gbps and beyond.  This has the added benefit that more bits can be sent in a single transmission of a symbol.  This is already done today in Ethernet's 10/100/1000 standards over CAT-5/6/7 UTP cable, where the bandwidth of the channel is limited to around 350 MHz.  Will we see this at 25 Gbps and beyond?  Possibly...

The problem with this method is power.  It takes DSP technology at each end of the channel to encode and recover the signals, adding energy consumption to the mix.  With thousands of channels in a modern data center, that power adds up very fast.  NRZ techniques are very low in power consumption.  National Semiconductor has produced devices that can move data at 28 Gbps over copper media and backplanes at very low power - something multi-level systems will find difficult to match.  Much of the industry agrees and is pushing back on the multi-level proposals.

There may come a day beyond 28 Gbps where there is no alternative but to go to multi-level symbol encoded systems, but I think that may be some time off in our future when 100 Gbps is far more common - perhaps even to your cell phone!  Till next time...

March 17, 2011

Going Faster Has a Price

As you know, if you want a car that you can drive at 150 MPH, you will pay a premium, since it takes additional technology to keep you connected to the road and overcome the frictional forces of the air - as well as the "Gee, I look really cool in this car" effect, which at times comes with an even greater price tag.  In the physical world of high-speed data and signal integrity, the same laws apply.  Dr. Howard Johnson knows this well and has published several books on the subject.  Even the subtitle "Advanced Black Magic" implies the difficulty of designing high-speed systems.


Well folks, it isn’t getting easier.  In fact, it is getting far worse.  What is interesting about our world is our fundamental quest for knowledge - the richer the content, the quicker people learn or share information.  There is also the desire to communicate, and the same applies there... the richer the content (photos, videos, music, etc.), the more appealing the media.  With the passage of DMCA Title II, which protects service providers from copyright-infringement liability when making local copies to stream (or when unscrupulous pirates steal the content from them), along with the deployment of DOCSIS cable modems (now at version 3.0, exceeding 100 Mbps up and down - if the service provider is willing), the stage is set for one of the largest bandwidth explosions ever witnessed.


This expansion of bandwidth is driving data center equipment to ever-increasing capacities... it wasn’t long ago that 1 Gbps was fast... not any more.  The norm in data centers now is 10 Gbps Ethernet (802.3ae - optical), quickly moving toward 100G Ethernet!  The latter has been accomplished via 10 lanes of 10 Gbps, but is moving to 4 lanes of 25 Gbps, which matches the number of lasers and receivers found in most 100G modules.  Do you know what happens to a 25G signal when it travels over a backplane?  It isn’t pretty.  In fact, 10G has issues as well, and it’s amazing that it works at all...


For example, take a look at the image below: a comparison of PCI Express signals (generations 1 through 3) over 26 inches of differential traces on a PCB (FR-4).  As the speed of the signal increases, the eye opening decreases.  What used to work without issue now requires either a change in board material or active circuitry to restore the signal.  And these signals are far slower than the 25-28 Gbps streams now being considered for electrical interfaces to optical modules.  Without signal conditioning, careful layout (thank you, Dr. Johnson), and good impedance control... no bits, just noise...

[Figure: PCI Express Gen 1-3 eye diagrams over 26 inches of FR-4 differential traces]
If you want to know more about fixing this, visit http://www.national.com/datacom and watch some of the cool videos on how it’s done... as Spock would say... "Fascinating"... Till next time...

December 16, 2010

Get Active - Lowering Networking Power in Data Centers

In the past I’ve discussed topics such as virtualization and digital power to help improve data center processing efficiency.  I may even have discussed additions to the 802.3 standard that idle Ethernet drops when they are not in use.  However, I have not addressed the interconnect power itself, and what I found was surprising.
In medium-scale data centers, such as those run by financial institutions, large retailers or corporations, you will find thousands of server blades and the networking equipment to connect them together.  What is interesting about this architecture is that the majority of networking traffic occurs within the data center itself.  The reason for this is partially the litigious nature of our society and the never-ending quest for information to help us understand ourselves.  For example, simply performing an online stock trade - which to the user is a single transaction - will spawn dozens of additional inter-server transactions to secure, execute, verify and log the event, as well as to extract statistics used in market analysis.  So when millions of people are online day-trading stocks, billions of transactions are occurring within the data centers.
This huge amount of traffic needs bandwidth, and traditionally this has been supplied by fiber-optic cable.  Fiber has the advantage of a very small diameter, leaving space for the airflow that cools the systems.  Larger copper wire could be used for short hauls, but its diameter would block the airflow and cause overheating.
Fiber requires light (lasers) to operate, and different distances and data rates require different modes of optical transmission.  To allow flexibility, equipment manufacturers have created connectors that accept a module containing the laser and receiver electronics.  There are many variants, but the most accepted standards are SFP+ (Small Form-factor Pluggable), QSFP (Quad SFP), CFP (C Form-factor Pluggable, the "C" denoting 100 Gbps), XFP (10 Gigabit Small Form-factor Pluggable), and CXP.  These modules are actively powered and consume 400-500 milliwatts each!  When you have thousands of them, the power quickly adds up.  Additionally, the heat generated must be dealt with, and the modules are very expensive.
Now, what’s most interesting is that the majority of interconnects within the data center are only a few meters long!  Normally, passive copper cables would work fine, but as mentioned above they would decrease the airflow at the back of the equipment.  So a clever solution is to use smaller-diameter copper wire (28-30 AWG) - which suffers from higher loss - and place active drivers and equalizers such as the DS64BR401 in connectors that fit the standard module sockets.  This technique is called "Active Copper" or "Active Cable" and has many benefits in runs of less than 20 meters.  The first benefit is cost - these cables can be less than half the cost of a fiber module and cable.  The second is power - properly designed active cables can reduce power consumption significantly (< 200 mW vs. 400+ mW for fiber).
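Using the per-end power figures above, a back-of-envelope estimate shows how quickly the savings accumulate. The 5,000-link data center here is an illustrative assumption, not a figure from any particular installation:

```python
# Rough savings from using active-copper cables instead of optical modules
# on short runs. Per-end power figures come from the numbers quoted above;
# the link count is a hypothetical mid-size data center.
OPTICAL_MW = 450          # midpoint of the 400-500 mW per optical module
ACTIVE_COPPER_MW = 200    # a properly designed active cable, per end
LINKS = 5000
ENDS_PER_LINK = 2         # a driver/equalizer at each end of the cable

saved_w = LINKS * ENDS_PER_LINK * (OPTICAL_MW - ACTIVE_COPPER_MW) / 1000
print(f"Power saved: {saved_w:.0f} W")   # 2.5 kW, before cooling savings
```

And since every watt dissipated must also be removed by the cooling plant, the real facility-level savings are larger still.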
Fiber will always have a place in carrying data over the long distances at which it excels.  In the data center, however, copper wire is regaining ground and, with the help of active electronics, may become the majority of the media carrying your next stock trade!  Till next time...

June 24, 2009

Telepresence – The Next Best Thing to Being There

Just about every week I step onto some form of aircraft - mostly turbine-powered, kerosene-burning jets.  I leave my carbon footprint trailing all over the friendly skies and often reflect on that fact.  After running for a flight and finally settling into my generously spacious coach seat, I get a chance to breathe and relax.  My mind often goes to past episodes of Star Trek or other futuristic science fiction shows where people simply press a button and are instantly connected via real-time video communications with anyone, anywhere - even between star systems (a bit of a physics problem there with the faster-than-light information propagation thing, but I digress).

So, this week I opened a copy of USA Today and found an interesting article in the Money section on the re-emergence of video conferencing technology.  In a world where reducing energy consumption and our dependence on carbon-based fuels is paramount, it would seem a no-brainer to put some horsepower, in the form of incentives, behind the video conferencing industry.  Companies such as Citrix and Cisco are already providing services to the masses via the web.  Services such as GoToMeeting and WebEx provide a shared desktop environment for viewing each other’s PowerPoint slides or for using shared drawing and mark-up tools.  This is extremely handy when you need a quick meeting to "touch base" - which in my industry seems to be every hour.

On the higher end are companies such as Tandberg and Polycom who supply specialized equipment, services and software to enable multi-user high-definition audio / video conferencing or "telepresence."  These systems are really something and are geared to the corporate level of service. However, they often require a significant investment in equipment and infrastructure to take advantage of the technology as well as having issues with interoperability between competitors.

Beyond cost, a few issues with telepresence have limited the adoption of the technology.  One is simply "eye contact" and shaking hands.  As humans we extract a great deal of information by watching body language - especially someone’s eyes.  Body language can be extremely telling when you are in the same room.  Place someone in front of a camera, and you may not see that detail, or the body language may be influenced by the Hawthorne Effect, also known as the "Observer Effect": a person may act differently if they feel they are being watched.  The knowledge that the camera is sending a video stream to possibly unknown individuals, or even recording, will change a person’s natural behavior and could interfere with what would otherwise be a normal conversation.

As people get used to the idea of telepresence, those issues will fade, but today the lack of ubiquitous access and standards continues to plague the industry.  I would love to have a camera built into one of the monitors in my office so I could simply answer a call and "see" the individual - not to mention instantly share information as you would in person.  Seeing an individual over a video link reminds you of your connection to them and builds the relationship through repeated virtual "in-person" meetings.  However, many systems cannot interoperate, which limits many calls to pre-arranged meetings.  Not that arranging a meeting beforehand is bad, but it limits the ability to place a video call to someone on impulse.

As the telepresence industry evolves, the issues with interoperability and viewer consciousness will be solved or fade away.  By that time I will probably have a wall-size OLED display and a persistent connection with my comrades world-wide.  People all across our organization will be able to walk by my virtual "cube" and see if I’m in - maybe not such a great idea if I’m trying to get something time-critical completed! Something to think about... but when it happens I will surely miss the posh and lavish comforts of modern airline travel. Till next time...

August 05, 2008

The True Cost of an Internet “Click”

Did you ever stop and think about how much energy you consume? Yes, you personally... and your family.  I think about it all the time.  I turn off lights, adjust the thermostat, consolidate my trips to reduce fuel consumption and turn off the TV when not watching.  I’m sure you do the exact same thing.  The cost of all forms of energy is continuously increasing especially in the last few years.  But have you ever thought about how much energy you consume when you click a link on a web page or send an email... probably not - and neither have I until now. 

I do a great deal of research into how efficiently energy is used in various systems and processes, and I’m constantly on the Internet accessing websites.  Recently, in a meeting, a fellow executive commented that behind the cost of your broadband connection and home computers lie hidden energy drains: the infrastructure and servers that make up the information super-highway.  How much power was consumed because you wanted to see the latest top video on YouTube?  What if you didn’t click it?  How much power would you save?  How much carbon dioxide would you keep out of our atmosphere?  I thought, "Wow," what an interesting question...  Now, can we answer it?  This is a monumental task and difficult to estimate (but that’s never stopped me before), so we’ll have to examine exactly what happens when you access a website and make some assumptions to reach a reasonable conclusion.  Here goes...

First we need to consider what happens when you "click" a link in a browser.  The browser must first connect to the target server so it can request the page associated with the link.  This is accomplished using the Transmission Control Protocol (TCP) and is similar to placing a phone call to the server.  Once the server "answers" and establishes the connection, the browser forms a request packet for the page tied to the link, asking the server to send the page's contents back.  If the page address is valid, the server responds with a stream of packets that identify it as a valid server response, along with all of the Hypertext Markup Language (HTML) contents plus other information such as scripts, meta-data and formatting.  Once all of the contents of the request are delivered to the browser, the connection is ended and the information is rendered into something the user can see and read.  Modern browsers actually make multiple connections and requests simultaneously to fill in images and other sections of the page, which makes rendering much faster and provides a smoother experience for the user (see below).
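The connect-request-respond-close sequence can be sketched with Python's standard library. To keep the example self-contained rather than depending on a real remote server, it spins up a tiny local HTTP server to play the role of the distant machine:

```python
import http.client
import http.server
import threading

class Page(http.server.BaseHTTPRequestHandler):
    """A stand-in for the distant web server."""
    def do_GET(self):
        html = b"<html><body>hello</body></html>"
        self.send_response(200)                       # "valid server response"
        self.send_header("Content-Length", str(len(html)))
        self.end_headers()
        self.wfile.write(html)                        # stream the HTML back
    def log_message(self, *args):
        pass                                          # keep the demo quiet

server = http.server.HTTPServer(("127.0.0.1", 0), Page)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The browser side: open a TCP connection (the "phone call"), request the
# page, read the contents back, then end the connection.
conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
conn.request("GET", "/")
resp = conn.getresponse()
body = resp.read()
conn.close()
server.shutdown()
print(resp.status, body)
```

A real browser repeats this exchange, often over several parallel connections, for every image, script and stylesheet the page references.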

[Figure: browser-server HTTP transaction, showing parallel connections and requests]

The process above takes place between two computers usually separated by a vast distance.  It is very reasonable to expect most web accesses to reach servers located anywhere from hundreds to thousands of miles away.  Between the two computers is a vast network of switches and routers - a "highway" for the data packets.  Like railroad trains, the packets travel from your cable modem over your cable network (the local spur line) to the central office.  There the packets are switched onto higher-bandwidth fiber-optic cables (the main rail lines) using very short pulses of laser light that travel extremely far.  The packets may transit several major switching stations before being routed to the local network connected to the distant server.  What’s interesting is that the messages between the computers will often require multiple packets and, like trains, the packets may arrive out of the order in which they were sent.  This occurs because of traffic conditions along the way: like trains on a railroad, the data routers find the most efficient path to deliver the packets, resulting in varying arrival times.  One job of the receiving computer is to re-order the information and pass it on to the higher-level software for interpretation.

All of the technology to accomplish this transaction requires power - from the computers at both ends (yours and the distant server) to the networking equipment and networks in between.  As mentioned earlier, to estimate the power consumed in loading a web page we need to make some assumptions.  For this estimate, we’ll ignore the power used by the local computer and home network infrastructure - consider it already spent in the local budget regardless of the Internet accesses.  We will only consider the power consumed by everything external to your location.

Next we’ll consider the page contents and how many packets would be required to move the information back to your browser.  Our "typical" page will have no video, since video is most often streamed and holds the connection open (like a long phone conversation with your best friend - only they do all the talking).  It will have 3 graphics averaging 100 kB each and about 5000 characters of text (e.g., a Wikipedia or news page).  The total page contents will require approximately 310 kB to be transferred from the server to the browser.  Upstream from the browser, there will be at least 4 requests (1 for the page, 3 for the images), each occupying only a few hundred bytes.  In total, the one web page request will move about 315 kB of data (including all the connection overhead) between the two computers.

Now that we understand how much information is transferred between the two machines, we need to examine how much networking equipment the information crosses and the power consumed.  We’ll assume the cable head end has a modem termination system, switches and a router - totaling approximately 200 watts.  The high-speed connection on the Internet side of the router probably has a fiber link with an interface box (another 100 watts).  We’ll assume the packets make 3 jumps to other routers along the way; each jump will have 2 fiber boxes and a high-speed router (to simplify), for a total of 300 watts per jump.  The server farm will have one fiber box, a router and switches, adding another 300 watts.  The total network power for that link is approximately 1500 watts.  Last, we need to consider the average power of a modern blade server - let’s assume it averages around 50 watts.

Now that we have a scientific guess at the power numbers, it gets a bit complicated.  We need to know how much time your data used each piece of equipment so we can get watt-hours, a measure of energy.  Let’s examine the various speeds, starting with the cable side.  A typical Data Over Cable Service Interface Specification (DOCSIS) cable plant will have an aggregate bandwidth of around 152 Mbps (megabits per second) downstream and 108 Mbps upstream (toward the server).  To simplify the calculation of the time the packets stay on that leg of the network, we’ll use the upstream rate of 108 Mbps.  We’ll also assume the fiber legs are OC-12 (Optical Carrier 12), with a data rate of around 601 Mbps (622 Mbps minus 21 Mbps of overhead).  The final leg inside the web server’s infrastructure will most likely be a 1 Gbps (gigabit per second) Ethernet path.

To normalize all of these varying power-speed numbers, we’ll turn to a metric used by my company, National Semiconductor, to rate the power consumption of interface devices.  It breaks the speed and power numbers down into a single unit of measure: energy per bit (joules/bit - see PowerWise® Solution Metrics).  I also described this method in a previous blog post (The Efficiency of Moving Bits); it greatly simplifies working with all the various speed-power combinations.  Table 1 shows the energy per bit for each hop the data takes.  The total is roughly 4.6 microjoules per bit.

Table 1 - Network Energy Consumption
Network Equipment   Power             Data Rate   Energy per Bit
Cable (DOCSIS)      300 W             108 Mbps    2.8 uJ/bit
Fiber (OC-12)       900 W (3 x 300)   601 Mbps    1.5 uJ/bit
Ethernet            300 W             1000 Mbps   0.3 uJ/bit
TOTAL                                             4.6 uJ/bit
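The table reduces to one line of arithmetic per leg: power divided by data rate gives joules per bit. A quick Python check of the figures:

```python
# Reproducing Table 1: energy per bit for each network leg is simply
# power divided by data rate (joules per bit). Figures are the estimates
# from the paragraphs above.
legs = {
    "Cable (DOCSIS)": (300, 108e6),     # (watts, bits/second)
    "Fiber (OC-12)":  (900, 601e6),     # 3 hops x 300 W each
    "Ethernet":       (300, 1000e6),
}

total = 0.0
for name, (watts, bps) in legs.items():
    e_bit = watts / bps                 # joules per bit for this leg
    total += e_bit
    print(f"{name:15s} {e_bit * 1e6:.1f} uJ/bit")
print(f"{'TOTAL':15s} {total * 1e6:.1f} uJ/bit")   # ~4.6 uJ/bit
```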

The server blade usage will vary, but we’ll assume a fully loaded server delivering 2000 pages per second.  Your page then occupies 1/2000 of a second of that 50-watt server, or 0.025 watt-seconds (Joules).  Now let’s see how this all adds up for your web page view.

We concluded that the average page request moves about 315,000 bytes of data - that’s 2.52 x 10^6 bits.  At 4.6 x 10^-6 Joules per bit, the network energy is about 11.6 Joules.  Adding the server energy of 0.025 Joules gives a total of roughly 11.6 watt-seconds (Joules) for each page view.  Again, this is not streaming video (I’ll look at that in a future blog post), but a static web page access from a server.  If you now multiply that single access by 1 million every second (a medium-sized city’s population browsing the web), you get a sustained load of around 11.6 megawatts - 11,600 kilowatt-hours every hour - and a single hour of that is enough energy to power roughly 13 US households for a month!  Your own 100 page views in a day amount to about 323 milliwatt-hours of energy - the equivalent of watching TV for about 10 minutes - an interesting thought.
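Pulling the whole estimate together in one place (using the 50-watt blade server figure assumed earlier):

```python
# End-to-end energy estimate for one static page view, using the
# assumptions developed in this post.
PAGE_BYTES = 315_000
ENERGY_PER_BIT = 4.6e-6          # joules/bit, total from Table 1
SERVER_W, PAGES_PER_SEC = 50, 2000

network_j = PAGE_BYTES * 8 * ENERGY_PER_BIT    # ~11.6 J on the wire
server_j = SERVER_W / PAGES_PER_SEC            # 0.025 J on the blade
page_j = network_j + server_j
print(f"Per page view:  {page_j:.2f} J")

# A million page views per second is a sustained multi-megawatt load:
print(f"Aggregate load: {page_j * 1e6 / 1e6:.1f} MW")

# One person's 100 pages a day, converted to watt-hours:
print(f"Personal share: {100 * page_j / 3600 * 1000:.0f} mWh/day")
```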

As expected, it seems that the contribution of any individual is extremely small, but the sum of the population makes a much larger impact.  Maybe you’ve got a better estimate or have looked at this before more closely... let me know what you think!  Till next time...

June 01, 2008

The Efficiency of Moving Bits - Part 2

In my last post I talked about life as a designer in the early 1980’s… it’s funny to look back and think of what we thought was “amazing technology” – some more of which I’ll discuss today.  I’m sure someone reading this could comment on engineering marvels of the 1960’s as well.

I’m going to continue my discussion on moving bits with some comparisons of bus architectures from that time and today.  We’ll take a look at how much has changed and the evolution of connecting systems and subsystems together.  I’d like to start with the S-100 bus which was essentially the Intel 8080 processor’s external bus, but not many engineers might remember that architecture.  I had friends that owned (yes, owned) the IMSAI 8080 and built automated amateur radio repeaters using these machines as controllers.  My first “computer” designs used the ISA bus made famous by IBM in the PC released in August of 1981 (I purchased one of those early machines and was the first geek on the block to show it off!).

The ISA, or Industry Standard Architecture, bus was not a standard (yet) when IBM introduced it in their PC.  Actually, the fact that IBM published a technical manual that included a listing of the BIOS (Basic Input Output System) source code and complete schematics made it quite easy to clone...  I still don’t understand that decision.  This basic 8-bit bus originally ran at 4.77 MHz (as my old machine did), but was later enhanced to support an 8 MHz clock - at that rate, 8 bits could be transferred over the bus every 125 ns.  To actually move data into RAM or read an I/O address, several cycles of the bus were required to latch the address, allow the bus to settle, and give the peripheral time to respond.

As processors improved in speed, buses needed to keep up.  The first logical step was simply to speed up the clock (as IBM and the clone makers did, from the original PC’s 4.77 MHz to 8 MHz in later machines) or to use both edges of the clock.  Making the bus "wider" with more communication lines (e.g., 8 bits to 16 bits and beyond) also improved performance.  The bus wars raged for years, with buses getting ever wider and faster.  As engineers, we continued to look for ways to move more data between subsystems, and this led to us bumping into the physical laws of nature... primarily skew between bus lines and waveform distortion.  It was even more difficult if you wanted to extend the bus farther than the 10 inches inside the chassis, which at times seemed almost impossible.

It seemed counter-intuitive that the solution would be to move away from wider buses and serialize the data, but this is exactly what happened.  As bus speeds increased, there was little timing margin between the communication lines (i.e., data, address, or control signals); a tiny amount of skew would cause errors in the transfer of data.  Additionally, the mechanics of connecting large numbers of lines to a circuit card added expense.  Serializing the data reduced the number of mechanical connections, reduced or removed the skew issues, and had one additional feature - it reduced the overall power consumed.  This was accomplished by moving away from large voltage-swing technologies such as TTL to devices based on LVDS.  Bus designers could now also use point-to-point connections to each peripheral thanks to the low connection count (e.g., PCI Express), which greatly improves bus bandwidth.  An example is the DS92LV16 SERDES (Serializer / Deserializer) transceiver.  This device takes 16 data lines, serializes them, embeds the clock, and transports the stream to another DS92LV16, which reconstructs the 16 data lines and the clock.  Since it is a transceiver, it has an upstream and a downstream path, and being LVDS-based it uses 2 wires for each path (4 wires total).

To compare old bus architectures such as ISA with serialized LVDS, we’ll need to define the parameters.  In my old ISA designs, I used buffers to drive the data over the backplane (rows of 62-pin edge connectors).  I needed enough drive to make sure the loading caused by a full complement of circuit cards would not degrade the signal edges.  The buffers were industry-standard 74LS24x TTL-level parts: one 74LS245 for the data lines and two 74LS244 buffers to drive the address lines.  There were others for control and bus management, but we’ll use only the 74LS245 for simplicity.  The bus was about 10 inches long (0.254 meters), and the transceiver consumed about 250 milliwatts (a supply current of roughly 50 mA at 5 V).

If we apply the equation from last week’s blog post to calculate energy per bit-meter for the old ISA bus, we get 15.4 nJ/bit-meter (using the bus clock rate, not the effective transfer rate).  For the serialized bus running full duplex to the peripheral, we get 1.6 nJ/bit-meter over the same circuit card - an almost 10-to-1 improvement in data-transfer energy efficiency.  These calculations are for a single end of the connection, and the ISA example does not account for all the supporting bus electronics required by a shared bus.  The serialized bus is much simpler since it connects directly to a single peripheral, and multiple peripherals can communicate with the host at the same time.
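Both figures can be reproduced with the bit-meter metric from last week's post. The ISA numbers come from the design described above; the serial-link power and rate are illustrative assumptions chosen to land in the DS92LV16 class of device, not datasheet values:

```python
def energy_per_bit_meter(power_w, bitrate_bps, length_m):
    """Data-transfer efficiency normalized to distance (joules/bit-meter)."""
    return power_w / bitrate_bps / length_m

# ISA: a 250 mW 74LS245 driving 8 bits at the 8 MHz bus clock over a
# 10-inch (0.254 m) backplane.
isa = energy_per_bit_meter(0.250, 8e6 * 8, 0.254)

# Serialized LVDS link: ~0.5 W moving 16 bits at an 80 MHz word rate
# (1.28 Gbps) over the same card - assumed figures for illustration.
serial = energy_per_bit_meter(0.52, 1.28e9, 0.254)

print(f"ISA:    {isa * 1e9:.1f} nJ/bit-m")      # ~15.4
print(f"Serial: {serial * 1e9:.1f} nJ/bit-m")   # ~1.6
print(f"Improvement: {isa / serial:.1f}x")
```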

If you look deeper, the older parallel bus architectures are even less efficient at moving data than stated above.  If you think about extending the bus to an external chassis (i.e. 1 meter or more), the problems really pile up for parallel buses.  Serialized buses simplify everything from the connector to cable (with less wires) and even the number of connections to processors or FPGAs and are far more efficient at saving energy when moving data.

Let me know your stories or opinions by commenting on this blog or dropping me an email.  I’d love to hear from you.  Until next week…

May 27, 2008

The Efficiency of Moving Bits

I started my engineering career in the early 1980s as a designer for a modem company.  It was an amazing time, since the Internet was growing by leaps and bounds and personal computers were making their big debut.  In those days the price of modems was similar to the price of gold - roughly $1 US per bit per second.  That meant if you wanted a 9600 bps modem (a museum piece today), you’d pay roughly $10,000 US for it.  Those were profitable times.

The modems of those days were in rack chassis (no custom modem ICs existed yet) with hundreds of 74 series logic devices along with discrete analog filters, modulators, demodulators and line drivers made from op-amps and transistors.  The box weighed about 20 pounds and had a gigantic 50W power supply.  Not only were those early modems expensive to buy, they were very inefficient in moving the bits in terms of the power consumed. 

Things have improved over the last 25 years to where a household network has many high performance personal computers or game stations all connected together with 100 Mbps unshielded twisted pair (UTP) Ethernet drops or even wireless 802.11 access points.  These connections are managed by an Ethernet packet switch and typically a router / firewall connecting to a cable modem to provide upwards of a 10 Mbps connection to the Internet. 

But how would we compare those early (or even more recent) communication technologies to what’s available today?  How should we compare how efficient one technology or component is over another at moving information?  Equation 1 provides a simple formula for a metric that does just that:

eb = P / (fb x ch)     (Equation 1)

Power (P) is in watts and the transfer rate (fb) is in bits per second.  The variable ch is the number of channels in a system or device, normalizing the result to a single channel.  The result is the data transfer efficiency (eb) in joules per bit (J/bit).

This equation normalizes away all coding and signal processing, allowing you to compare how good a technology (system or device) is at using the least energy to move a bit across a medium error-free (i.e., a bit error rate, or BER, of less than 10^-12).  If you want to normalize for length as well, simply divide by the length l (in meters) of the connection; the result is joules per bit-meter (J/bit-m), as shown in Equation 2:

eb-m = P / (fb x ch x l)     (Equation 2)

This allows technologies that drive various distances to be compared.

Let’s use Equation 2 to calculate the efficiency of that old modem I worked on in the 1980s.  It was capable of moving 9600 bits per second over 15,000 feet (4572 meters) using roughly 50 watts.  That yields a data transfer efficiency of 1.14 microjoules per bit-meter - every bit used roughly 1.14 uJ of energy to get from my office to the telecom central office 3 miles away (worst case).  If the telephone switch was in the office building next door (1000 feet away), the number rises to 17.1 uJ/bit-meter.  Compare this with a modern Data Over Cable Service Interface Specification (DOCSIS) cable modem, which uses about 5 watts of power, covers the same distance (over coaxial cable), and moves up to 43 Mbps downstream (using 256-QAM) of error-free data.  This equates to a data transfer efficiency of 25.4 picojoules per bit-meter - an improvement of more than a factor of 40,000 over the old modem technology.  An amazing accomplishment.
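Those two data points are easy to check with Equation 2 in code form:

```python
def energy_per_bit_meter(power_w, bitrate_bps, length_m):
    """Equation 2: data-transfer efficiency in joules per bit-meter."""
    return power_w / bitrate_bps / length_m

THREE_MILES_M = 4572    # 15,000 feet to the central office

old_modem = energy_per_bit_meter(50, 9600, THREE_MILES_M)
docsis = energy_per_bit_meter(5, 43e6, THREE_MILES_M)

print(f"1980s modem: {old_modem * 1e6:.2f} uJ/bit-m")   # ~1.14
print(f"DOCSIS:      {docsis * 1e12:.1f} pJ/bit-m")     # ~25.4
print(f"Improvement: {old_modem / docsis:,.0f}x")       # > 40,000x
```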

Next time I’ll cover more on data transfer efficiency and we’ll look at bus architectures and interface technology.  If you have any thoughts (agree / disagree / don’t care), please drop me a comment here on the blog!  Thanks for reading and I hope to hear from you soon!