
Will Facebook's 'open hardware' initiative help server builders?

On the surface, the Open Compute Project — first announced by Facebook several months ago — is focused on sharing best practices and data center architecture approaches that can help data centers become more energy-efficient and “greener” overall.

But the theme of “open hardware” that dominated the latest summit held by the group in New York suggests that there is actually a much bigger movement afoot, one that I think could provide new momentum for system builders that integrate their own servers based on Intel technology.

Andy Bechtolsheim, chief development officer and chairman of Arista Networks (and, of course, one of the Sun Microsystems co-founders), said the information technology industry has a long history of standards development that has helped drive adoption and drive down costs. “What has been missing is a standard at the system level,” he told attendees of the second Open Compute Summit.

Bechtolsheim went on to criticize the “gratuitous differentiation” that distinguishes data center infrastructure technologies from each other and makes it tough for VARs and systems integrators — and businesses for that matter — to ensure interoperability. “This benefits the vendor more than the customer,” he said.

It is also a big reason that Facebook chose to build its own servers when constructing its data centers, said Frank Frankovsky, the Facebook director of technical operations who founded the Open Compute Project and now sits on its board. Frankovsky’s fellow directors are Bechtolsheim; Don Duet, managing director with Goldman Sachs; Mark Roenigk, chief operating officer of Rackspace Hosting; and Jason Waxman, general manager of high-density computing for the Intel data center group.

By thinking about the rack holistically (in effect, the rack is the new chassis), Frankovsky said, Facebook was able to reduce the energy consumption of its Prineville, Oregon, data center by 38 percent compared with existing data centers doing the same amount of work. The cost to build out that facility was 24 percent less, because Facebook exercised total control. Among other things, it opted for a 480-volt power distribution system to reduce losses during power conversion, and it reuses hot-aisle air to heat offices in the winter.

Here’s the interesting part. Facebook plans to make its approaches available to the Open Compute Project community. That community will operate according to the model embraced by the Apache Software Foundation, adopting the contributions it deems appropriate. Among the early contributions are motherboards from ASUS. In addition, Red Hat has said it will support Red Hat Enterprise Linux on certified systems.

How far will the Open Compute Project reach? Frankovsky said that in order for “scale computing” — the infrastructure necessary to support the cloud computing movement — to succeed, the pace of hardware innovation needs to increase.

Open Compute encourages the best brains in the community to share their ideas, including the best members of the white-box server channel. Other technology companies that have jumped on the bandwagon include Baidu, Cloudera, Dell, DRT, Future Facilities, Huawei, Hyve (Synnex), Mellanox, Nebula and Silicon Mechanics. Netflix, another company that relies on massive data centers, has also joined the community.

Check out more IT channel news and follow us on Twitter! Here’s how to follow Heather Clancy directly.
