Social networking pioneer Facebook is leading efforts to address data centre energy consumption, with the launch of the Open Compute Project – www.opencompute.org.
While the goal of the project is to develop an open forum to drive data centre efficiency, the initiative has grown out of Facebook's desire to be green.
Until recently, Facebook's website was hosted at several leased facilities, but the company has now opened its own data centre in Oregon, designed from a "clean sheet of paper".
Open Compute is the result of two years' development work by a team at Facebook's Palo Alto HQ. Hardware engineering manager Amir Michael says the project had a wide remit: everything from the power grid to gates on the processors was up for examination.
He says the best optimisations can only be achieved by modifying the way the data centre and servers work together. "The big win comes when you modify both," he states.
Along with innovative server design, Michael believes there have been wins in the way the servers are powered and cooled. "The environment within which the servers operate has a big impact," he asserts.
While the design of the servers is innovative, the processors they feature are not. For the moment, Facebook has specified what Michael calls 'same source' processors.
"We started out with devices from Intel and AMD, and each has its own strengths and weaknesses." Intel processors are used for front end tasks, while AMD's are associated with memory operations.
Michael accepts that Intel and AMD are safe choices. "We were familiar with the parts and we can bet on them to have competitive products when we want to evaluate new devices."
Meanwhile, Facebook is evaluating ARM-based devices, parts from Tilera, Intel's Atom and processors aimed at desktop applications. "Whatever we can get our hands on to see if it makes sense," Michael observes.
The team also developed a tray-based design for the servers at Oregon. Michael says that design was driven not only by the quest for greater power efficiency, but also by ease of maintenance. "Many data centre technicians spend their days swapping things in and out. We wanted to make their job as easy as possible."
A time and motion study on the new design showed a factor of 10 improvement in servicing time. "Every hour a server is out of commission costs money," he notes, "and our aim was that less than 1% of servers would be offline at any one time."
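As a rough illustration of how servicing time feeds into that sub-1% target – the steady-state model and the failure figures below are assumptions for illustration, not Facebook's own numbers – the fraction of servers offline at any moment is roughly the failure rate multiplied by the repair time:

```python
# Minimal sketch: steady-state fraction of servers out of commission,
# approximated as service events per server per year multiplied by the time
# each repair takes. The 10 events/year figure is hypothetical.

HOURS_PER_YEAR = 365 * 24

def offline_fraction(events_per_year: float, repair_hours: float) -> float:
    """Approximate share of the fleet offline at any one time."""
    return events_per_year * repair_hours / HOURS_PER_YEAR

for repair_hours in (10.0, 1.0):  # before and after a 10x servicing improvement
    print(f"{repair_hours:>4.1f} h per repair -> "
          f"{offline_fraction(10, repair_hours):.2%} of servers offline")
```

Under those illustrative assumptions, a tenfold cut in repair time takes the offline share from just over 1% to around 0.1%.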
Not only are the trays easier to remove; they also have fewer parts and weigh 3kg less than a 6U server, he adds. Trays are then grouped into triplet racking systems. "We looked at 19in racks," Michael notes, "but decided to simplify them while keeping the form factor."
Previously, there were 40 servers per rack. Now, the triplet scheme boasts 45. "A typical network switch supports up to 48 servers," he adds, "so we don't use all the capacity and this adds cost. We're now placing 45 servers, with extra capacity should we decide to expand."
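To make the arithmetic behind that quote concrete – assuming the 48 ports belong to a single top-of-rack switch serving the rack – a short sketch:

```python
# Minimal sketch: switch-port utilisation per rack, assuming a 48-port
# top-of-rack switch as quoted above. Unused ports are paid-for capacity.

SWITCH_PORTS = 48

for servers in (40, 45):
    unused = SWITCH_PORTS - servers
    utilisation = servers / SWITCH_PORTS
    print(f"{servers} servers: {utilisation:.0%} port utilisation, {unused} ports spare")
```

Moving from 40 to 45 servers per rack cuts the idle, paid-for ports from eight to three, while still leaving a little headroom for expansion.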
In building its own data centres, Facebook has also looked at the entire cost of ownership, with power one of the largest contributors. Michael points to the PUE (power usage effectiveness) ratio – the total power drawn by a facility divided by the power delivered to its IT equipment – and explains that the US Environmental Protection Agency has a target figure of 1.5, which implies a third of the total power drawn is consumed by air conditioning and other overheads.
"Our target was 1.15 and, as we got further down the design road, we revised that to 1.05. We have ended up with a figure of 1.07, where some large data centres typically have a figure of 2," states Michael.
The Oregon facility has a series of buildings, each consuming 'tens of MW'. Plans show the company looking to expand this with a power transmission line supplying 120MW. IT efficiency is complemented by an energy efficient evaporative cooling system, LED lighting and a tracking photovoltaic solar array generating 100kW.
Power supply efficiency has also been targeted. "Most devices are more than 94% efficient," claims Michael, "and some next generation parts will be more than 95% efficient."
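To put that extra percentage point in perspective – the 10MW IT load below is a hypothetical figure, not from the article – conversion losses shrink by roughly a sixth when supplies move from 94% to 95% efficiency:

```python
# Minimal sketch: power lost in conversion for a given IT load at two supply
# efficiencies. The 10MW load is a hypothetical figure for illustration.

def conversion_loss_kw(it_load_kw: float, efficiency: float) -> float:
    """Power drawn minus power delivered, for a supply of the given efficiency."""
    return it_load_kw / efficiency - it_load_kw

IT_LOAD_KW = 10_000  # hypothetical 10MW of IT load

for eff in (0.94, 0.95):
    print(f"{eff:.0%} efficient supplies: {conversion_loss_kw(IT_LOAD_KW, eff):,.0f} kW lost")
```

On the assumed 10MW load, that one point saves roughly 110kW of continuous losses, before the cost of cooling the waste heat is even considered.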