How Facebook Aims to Reinvent Hardware

By Contributed Article | May 13, 2013

Facebook used to be a company just like many others: It bought servers, racks, and other hardware from vendors like HP and Dell and rented co-location space from providers like DuPont Fabros. But as Facebook grew very large, those traditional IT infrastructure components stopped working as well as they could, wasting money, energy, and resources.

“We knew we could be more efficient and effective,” recalls Frank Frankovsky, the vice president of hardware design and supply chain management at the world’s most popular social network. Frankovsky runs the infrastructure supporting 1.1 billion users around the globe and 250 billion photos, with 350 million new photos added each day and about 5.1 billion daily interactions on the site among the likes, posts, and comments users make. Handling that capacity requires a substantial back end, and it’s Frankovsky’s job to ensure the company’s data center operations are up to the task.

Frankovsky assembled a small team within Facebook and planted this seed in their heads: “What if we had an opportunity to start with a clean sheet of paper,” and design Facebook’s infrastructure from the utility pole supplying the power down to the server, in the most efficient way possible?

The team designed servers, storage, and networking components, built prototypes, and then put them into production. The results were better than Frankovsky had expected: a 38% gain in energy efficiency and a 24% cost savings, on top of the “pretty aggressive baselines” the team had already implemented to optimize its co-location space. Frankovsky and the Facebook team were so excited about the results that they decided to open source the project.

And hence the Open Compute Project (OCP) was born in April 2011. Now Frankovsky wants to change the way hardware is built for everyone else, too. Facebook is not alone in building its own hardware tuned specifically to its needs; Google has famously done the same, but Frankovsky proudly touts that Facebook was the first to do it in an open source fashion. As the amount of digital data in the world continues to balloon, the need for more efficient infrastructure will only increase, or else the waste of power, money, and time will grow exponentially, Frankovsky argues. Open source is the way to solve this problem, he says.

At Interop, Frankovsky used his keynote to discuss what the OCP has meant for Facebook and where the project is headed. Even as he detailed its future direction, the OCP has already evolved considerably in the two years since it launched. Founded by just four people from Facebook, the project held its most recent summit in Santa Clara in January, drawing some of the biggest names in tech: AMD, Fidelity, HP, Dell, Intel, VMware, Rackspace, Goldman Sachs, Arista, EMC, Broadcom, ARM, and Salesforce.com, among others, are all now on board.

The involvement of some of those companies is slightly paradoxical, says John Abbott, an analyst at 451 Research who has been tracking the OCP, because it puts legacy hardware vendors in a precarious position. Companies participating in the project used to buy infrastructure directly from those vendors; now OCP members like Facebook work with original equipment manufacturers (OEMs), buy commodity components at lower prices, and assemble the systems themselves. That’s what Facebook has done, and Rackspace, the open source cloud computing provider, says it is now doing the same. Every server Facebook or Rackspace builds itself is one less server Dell or HP makes money on.

If there have been any criticisms of the OCP thus far, though, it’s that the initial rollout of the project has been geared mostly to large-scale data center users and service providers. The big question has been: What does this mean for regular old enterprises?

“The way it’s developing, it could have an impact on enterprises sooner rather than later,” Abbott said.

That’s what Frankovsky is hoping for. He admits that this do-it-yourself model of assembling disparate hardware pieces may not be a good fit for everyone; many companies don’t have the scale to support staff for hardware design and supply chain management. But the OCP is making a concerted effort to create more entry-level paths for the IT shops of medium- and large-size businesses.

For example, a variety of OCP integrators have cropped up in recent months. Firms like Synnex and Avnet act as Open Compute Project system integrators, bridging OEMs and the enterprises implementing the systems, with the goal of selling preconfigured hardware.

Facebook has also tried to make OCP work more palatable for enterprises. One of the hottest parts of the project is a set of reference architectures Facebook has released for making co-location spaces more efficient. Adjusting air temperatures and intake and exhaust airflow, for example, can create efficiencies for companies using co-location space, Frankovsky noted. The OCP doesn’t want to be just a series of open source projects for building new data centers and hardware boxes. “There are optimizations that mainstream enterprises can certainly benefit from,” he said.

The overall point of the project is that infrastructure design has not had a big shakeup in a long time. Given the amount of new data being created, Frankovsky said, it will become increasingly important that businesses find better ways to manage their hardware and tune it to their individual needs rather than just buying proprietary, out-of-the-box solutions from legacy vendors.

The OCP encourages users and manufacturers of all sizes to think outside the box, beyond the monolithic infrastructure designs that have dominated the industry in the past. Components of a data center can be disaggregated, meaning that compute, storage, and the chips controlling them are physically separated and optimized for the companies using them and the software running on them. Even legacy vendors like HP and Dell can have a role in supplying those components, Frankovsky said.

While hardware vendors have discussed such ideas in the past, Frankovsky said, no measurable progress has been made. The idea of disaggregation is intrinsically linked to open source, he argues: A market can’t get behind this model unless there are common standards for manufacturers to design to and for users to implement against. That’s where the OCP comes in, creating what Frankovsky calls a hardware API. “There’s an explosion of data center capacity, and the old proprietary approach just isn’t going to keep up,” he said. “No one technology provider in a closed-source fashion will be able to provide the ability to deploy hardware in such an efficient manner.”

This article by Brandon Butler originally appeared in Network World.

Copyright (C) 2013 LexisNexis, a division of Reed Elsevier Inc. All Rights Reserved.
