
October 29, 2008

The Energy Impact of Grid Computing


I was exploring the Internet to see what happened to the SETI@home project - I was once a member and ran its client on a Windows 2000 machine in my office.  I was quite surprised to see that the project had evolved and was still alive and well.  Even more interesting was the myriad of projects now using distributed or "grid" computing.

It started me thinking about problem solving in general. If solving a problem requires large amounts of computing power (e.g. a very expensive supercomputer and its infrastructure), you may limit which problems you try to solve.  However, if large amounts of computing power are available and inexpensive (or free), then those seeking to solve complex problems tend to take advantage of it.

For those of you not familiar with grid computing, here's a brief tutorial.  Traditional computers solve problems in a linear or serial fashion, similar to working through long division: you complete one piece of the problem, then move on to the next section. The results of the first calculation feed into the next step, so the work proceeds serially. Distributed computing instead uses many computers to solve a problem by breaking it up into tiny pieces.  Each piece is assigned to a single computer for processing, so they can all work in parallel, greatly speeding up the result.
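To make the distinction concrete, here's a minimal Python sketch (a toy example of my own, not how any real grid client works). Estimating pi by random sampling is "embarrassingly parallel": every chunk of samples is independent of the rest, so workers never wait on each other.

```python
# Toy sketch: the same job done serially and split into independent pieces.
import random
from multiprocessing import Pool

def count_hits(samples: int) -> int:
    """Count random points that land inside the unit quarter-circle."""
    rng = random.Random()
    hits = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits

if __name__ == "__main__":
    total = 1_000_000
    pieces = 8  # pretend each piece goes to a different machine

    # Serial: one worker grinds through everything, start to finish.
    serial_pi = 4 * count_hits(total) / total

    # Distributed: the same work broken into independent pieces,
    # each processed in parallel, results combined at the end.
    with Pool(pieces) as pool:
        hits = pool.map(count_hits, [total // pieces] * pieces)
    parallel_pi = 4 * sum(hits) / total

    print(f"serial estimate:   {serial_pi:.4f}")
    print(f"parallel estimate: {parallel_pi:.4f}")
```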

Only certain types of problems can be solved this way. For instance, the long division example above does not segment well for distributed computing. It does segment well for vector computing, which works like an assembly line but requires dedicated processing elements. Problems like weather forecasting, computational fluid dynamics and certain mathematical problems like fractals can all be broken into small pieces and solved in this manner.  In physics, pharmaceutical research, genome analysis and other fields there are many problems well suited to this type of computing.  The pieces are independent of one another and fairly easily isolated to individual computing elements.

Typically, distributed computing is done with supercomputers designed with multiple processor cores, such as those built by IBM and other vendors.  These systems have anywhere from 64 to over 1024 independent computing elements (in many cases using multi-core processor chips, which multiply the processing capability even further).  This effectively provides thousands of times the computing power that would be available from a single high-speed computer.

Now, imagine millions of computers tied together into one massive supercomputer.  You no longer have a mere 1000 computing elements, but millions of them.  By tying together average home computers (which are usually pretty snappy by today's standards) over the Internet, this is exactly what you get. This is grid computing: tying computing elements together with a communication grid such as the Internet. The computers are not tightly coupled as in a dedicated cluster supercomputer; they can come and go as users take their machines on and off line.  Through the sheer number of computers in the grid, large problems can be solved using this technique.

It does take software to coordinate the distribution of data sets and the collection of results.  One such technology is BOINC, which stands for Berkeley Open Infrastructure for Network Computing. The BOINC platform lets projects distribute their problems over the grid and collect the results.  Many projects, such as SETI@home, have moved to BOINC.
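As a rough illustration (a hypothetical sketch of the idea only, not BOINC's actual API or protocol), a coordinator for this kind of grid boils down to handing out work units, tolerating clients that disappear, and collecting results:

```python
# Hypothetical, much-simplified picture of a grid coordinator.
import queue

class Coordinator:
    def __init__(self, work_units):
        self.pending = queue.Queue()
        for unit in work_units:
            self.pending.put(unit)
        self.results = {}

    def checkout(self):
        """A client asks for a work unit; None means nothing is left."""
        try:
            return self.pending.get_nowait()
        except queue.Empty:
            return None

    def submit(self, unit_id, result):
        """A client returns a finished result."""
        self.results[unit_id] = result

    def requeue(self, unit):
        """A client went offline mid-task; put the unit back."""
        self.pending.put(unit)

# Volunteers come and go, so unfinished units simply re-enter the
# queue and get handed to the next available machine.
coord = Coordinator([("unit-1", [1, 2]), ("unit-2", [3, 4])])
unit = coord.checkout()
coord.submit(unit[0], sum(unit[1]))
print(coord.results)  # {'unit-1': 3}
```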

While looking at several of the grid computing sites, I started thinking about the power consumed by the computers in the grid.  Typically, the clients are screen savers that scavenge computer time when the user is away (or not using the machine).  When you walk away from your computer and the screen saver kicks in, instead of moving images around to prevent CRT burn-in (old school), it starts up the compute tasks assigned to the machine. 

If you're like me, you rarely turn off your computers, to avoid the time it takes to "boot up" the machine.  Instead, I set the computer to enter a sleep mode when idle, which greatly reduces its power consumption. In this mode, small parts of the system stay powered up to monitor mouse or keyboard activity and alert the computer to "wake up" into full-power mode.  This can dramatically lower the power consumption from 150 watts (full speed with the LCD on) to 20 watts or less (LCD off, hard drives powered down, graphics off-line, processor speed reduced to a crawl, etc.).

Looking at this, 20 watts may still seem high considering the machine is on all the time, but compared to 150 watts it's a considerable savings.  And when a single grid system like BOINC has over 500,000 computers in its cluster, the additional compute time increases the overall power consumption dramatically. While asleep, the machine draws only about 13% of its normal power level, so if you assume a computer sleeps 70% of the day while the user is away, it averages about 59 watts (I think I'm going to do some measurements and advise in a later post). For 500,000 machines entering sleep mode, that is a reduction of over 65 megawatts of power while they sleep! Over the period of one day, each computer would consume about 1.4 kW-hrs, and for 500,000 units the total daily energy consumption is roughly 708,000 kW-hrs.
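Here is the arithmetic behind those numbers as a small Python sketch, using my assumed figures of 150 watts awake, 20 watts asleep, and 70% of the day spent sleeping:

```python
# Reproducing the sleep-mode numbers above.
FULL_W, SLEEP_W = 150, 20       # assumed full-power and sleep draw, watts
AWAKE, ASLEEP = 0.30, 0.70      # assumed fraction of the day in each state
MACHINES = 500_000              # computers in the grid

avg_w = AWAKE * FULL_W + ASLEEP * SLEEP_W                 # 59 W average
kwh_per_day = avg_w * 24 / 1000                           # ~1.4 kWh per machine
fleet_kwh = kwh_per_day * MACHINES                        # ~708,000 kWh per day
sleep_cut_mw = (FULL_W - SLEEP_W) * MACHINES / 1e6        # 65 MW while asleep

print(f"average draw: {avg_w:.0f} W, {kwh_per_day:.2f} kWh/day per machine")
print(f"fleet total:  {fleet_kwh:,.0f} kWh/day")
print(f"reduction while asleep: {sleep_cut_mw:.0f} MW")
```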

Since the BOINC client does not allow the computer to enter sleep mode, the power consumption of the machine stays relatively flat all day; only the LCD display can be powered down (or enter stand-by mode). To calculate the average active power of the system, let's assume the LCD is allowed to enter stand-by mode while the client runs.  A modern LCD such as the Dell E228WFP consumes roughly 40 watts while running and 2 watts in stand-by. So the LCD can power down, but the computer is still running at full power, reading and writing to the hard drive and doing intensive calculations.  The power of the system is only reduced to roughly 112 watts, thanks to the LCD display entering stand-by mode (see diagram below).

[Diagram: Grid Computing Comparison]

If you now consider that each machine running the client consumes roughly 112 watts for 70% of the time (and the full 150 watts the rest of the day), each machine uses a little over 2.9 kW-hrs per day (compared with about 1.4 kW-hrs per day for a non-grid computer). At US$0.16 per kilowatt-hour, that's an increase in cost of only about US$0.25 per day (roughly US$7.40 per month) for any one user.  However, the grid now consumes about 1.481 million kW-hrs per day compared to 708,000 kW-hrs, an increase of roughly 773,000 kW-hrs per day.
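The same arithmetic for a machine running the grid client, again using my assumed 150 W / 112 W / 20 W figures:

```python
# Energy and cost delta for a machine that crunches instead of sleeping.
FULL_W, CLIENT_W, SLEEP_W = 150, 112, 20   # assumed power levels, watts
MACHINES, RATE = 500_000, 0.16             # grid size, assumed $ per kWh

grid_kwh  = (0.30 * FULL_W + 0.70 * CLIENT_W) * 24 / 1000  # ~2.96 kWh/day
sleep_kwh = (0.30 * FULL_W + 0.70 * SLEEP_W) * 24 / 1000   # ~1.42 kWh/day
extra_kwh = grid_kwh - sleep_kwh                           # ~1.55 kWh/day

print(f"per machine: +{extra_kwh:.2f} kWh/day, ${extra_kwh * RATE:.2f}/day")
print(f"whole grid:  +{extra_kwh * MACHINES:,.0f} kWh/day")
```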

If you assume an average U.S. household consumes roughly 30 kW-hrs per day, that increase is the equivalent of adding 25,700 average homes to the power grid. This is not necessarily a bad thing, since solving incredibly compute-intensive problems could lead to a better world for humanity... but it does make you think "wow, that’s a lot of power!" Till next time...

Comments

Juan

And if you add "SolarMagic" to them you get even better results.

Of course, some of these huge computing problems do not yield accurate results if the correct initial conditions and boundaries are not set up. It would be interesting to apply RAP's critical eye regarding blind use of SPICE to see if there are blind spots in how weather projections get simulated.

Andy turudic

Intensive "binary" computing's power dissipation may actually motivate a rebirth of analog computers before we know it, and for the very reasons you cite.

Mother Nature has proven, however, that a cluster of neurons does not necessarily produce the best outcome (to wit, the Republican National Convention), so the advantages of analog computing over digital are probably always going to be application dependent.

For now, though, the boneheaded "compute it digitally" orthodoxies we now possess take the front stage, with Microsoft trying to make like they invented clusters.

Mellissa

Now this is a practical approach on how we can coordinate to reduce the amount of energy used. Talk about the increased potential for more "green" jobs!

Hugh Weinrich

It seems to me that the need for grid computing could be supplemented by the OLPC XO laptop computer population. See
http://en.wikipedia.org/wiki/OLPC_XO-1

Being solar powered and network capable, these computers could do the work but be off the power grid, saving the 30 kW-hrs per day for your 500K part-time grid computers.

It would be an interesting project to compare 10,000 full-time XOs to the "part-time grid computers" or one supercomputer.

Say for $5M you could purchase 10,000 units @ $200 each; $1M for infrastructure & housing; $1M for support (window washers, software designers & marketing); and my royalty being $1M as the idea man.

If it works, this could put a solar-powered supercomputer in the hands of most medium and large companies.
