Facebook's new energy-efficient data center





Published on Apr 28, 2011

Cloud technologies power some of the Internet's most well-known sites—Picasa, Gmail, Facebook and Zynga, to name a few—and cloud companies are striving to make the computer processing behind these sites as energy efficient as possible. With that in mind, Facebook, Dell, HP, Rackspace, Skype, Zynga and others have teamed up to form the Open Compute Project, which shares best practices for building more energy-efficient and economical data centers.

To kick-start the project, Facebook unveiled its innovative new data center and contributed the specifications and designs to Open Compute. "Cloud companies are working hard to become more and more energy efficient...[and] this is a big step forward today in having computing be more and more green," explains Graham Weston, Chairman of Rackspace.

A small team of Facebook engineers has been working on the project for two years. They custom designed the software, servers and data center from the ground up.

One of the most significant features of the facility was that Facebook eliminated the centralized UPS system found in most data centers. "In a typical data center, you're taking utility voltage, you're transforming it, you're bringing it into the data center and you're distributing it to your servers," explains Tom Furlong, Director of Site Operations at Facebook. "There are some intermediary steps there with a UPS system and with energy transformations that occur that cost you money and energy—between about 11% and 17%. In our case, you do the same thing from the utility, but you distribute it straight to the rack, and you do not have that energy transformation at a UPS or at a PDU level. You get very efficient energy to the actual server. The server itself is then taking that energy and making useful work out of it."
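The savings Furlong describes come from multiplying per-stage conversion efficiencies along the power path. The sketch below illustrates that arithmetic; the individual stage-loss figures are illustrative assumptions chosen so the conventional path loses roughly the 11–17% the article cites, not Facebook's published numbers.

```python
# Compare end-to-end electrical efficiency of a conventional power path
# (utility -> transformer -> centralized UPS -> PDU -> server) with the
# direct-to-rack distribution described above.

def path_efficiency(stage_losses):
    """Multiply out per-stage efficiencies; each loss is a fraction of power."""
    eff = 1.0
    for loss in stage_losses:
        eff *= (1.0 - loss)
    return eff

# Assumed per-stage losses (illustrative): transformer 2%, double-conversion
# UPS 8%, PDU 4% -- about 13% total, within the 11-17% range quoted.
conventional = path_efficiency([0.02, 0.08, 0.04])

# Direct-to-rack: one transformation, no centralized UPS or PDU stage.
direct = path_efficiency([0.02, 0.005])

for name, eff in [("conventional", conventional), ("direct-to-rack", direct)]:
    print(f"{name}: {eff:.1%} of utility power reaches the server")
```

Under these assumed numbers, the conventional path delivers about 86.6% of utility power to the server, while the direct path delivers about 97.5%.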

To regulate temperature in the facility, Facebook utilizes an evaporative cooling system. Outside air comes into the facility through a set of dampers and proceeds into a succession of stages where the air is mixed, filtered and cooled before being sent down into the data center itself.

"The system is always looking at [the conditions] coming in," says Furlong, "and then it's trying to decide, 'what is it that I want to present to the servers? Do I need to add moisture to [the air]? How much of the warm air do I add back into it?'" The upper temperature threshold for the center is set at 80.6 degrees Fahrenheit, but it will likely be raised to 85 degrees, as the servers have proven capable of tolerating higher temperatures than originally thought.
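The decision logic Furlong describes can be sketched as a simple rule-based controller. The 80.6°F supply ceiling comes from the article; the control structure, humidity bounds, and mixing threshold below are illustrative assumptions, not Facebook's actual building-management code.

```python
# Hedged sketch of the economizer logic: decide how to condition incoming
# outside air before presenting it to the servers.

SUPPLY_MAX_F = 80.6   # upper supply-air temperature threshold (from article)
HUMIDITY_MIN = 0.30   # assumed lower bound on relative humidity

def condition_air(outside_temp_f, outside_rh):
    """Return the conditioning actions for one pass through the air stages."""
    actions = []
    if outside_temp_f > SUPPLY_MAX_F:
        # Too warm: run the evaporative (misting) stage to cool the air.
        actions.append("evaporative cooling")
    elif outside_rh < HUMIDITY_MIN:
        # Dry intake: misting also adds the moisture the servers need.
        actions.append("add moisture")
    if outside_temp_f < SUPPLY_MAX_F - 20:
        # Cold intake: recirculate warm server exhaust back into the mix.
        actions.append("mix in return air")
    actions.append("filter")
    return actions

print(condition_air(95.0, 0.20))   # hot day
print(condition_air(40.0, 0.50))   # cold day
```

A hot, dry day triggers the evaporative stage; a cold day mixes warm return air back in before filtering, matching the mixed/filtered/cooled sequence of stages described above.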

The servers used in the data center are unique as well. They are "vanity free"—no extra plastic and significantly fewer parts than traditional servers. And, through thoughtful placement of the memory, CPU and other components, they are engineered to be easier to cool.

Now that these plans and specifications have been released as part of the Open Compute Project, the goal is for other companies to benefit from and contribute to them. "Open source, crowd sourcing, Wikipedia—these are all capitalizing on, or enabled by, the same force," explains Weston, "which is that when things are open, there's more innovation around them."

More info:

Facebook announcement: http://tinyurl.com/4x67au9
Open Compute Project web site: http://opencompute.org/

