The Big Chill
One way to tackle cooling is through strategic site selection. Mild climates help keep server temperatures down naturally. Data centers can also capitalize on their surrounding environment: A building on Lake Erie can take advantage of cool lake winds, while an underground building, such as Pionen, can benefit from the milder temperatures below grade.
Companies are also allowing their servers to run a little hotter to reduce the building’s cooling load. Facebook is elevating its data centers’ operating temperatures a few degrees above the standard 70 F. Increasing the running temperature of a data center by a single degree can save 2 percent on utility costs, says Garr Di Salvo, an associate principal in the New York office of Arup. This handful of percentage points is significant considering that data centers can draw upwards of 50 megawatts of power and generate annual utility bills that run easily into the millions.
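The arithmetic behind Di Salvo's point can be sketched quickly. This snippet treats the cited 2 percent per degree as a simple linear extrapolation; the bill amount and the number of degrees are hypothetical, and the real savings curve is more complex than a straight line.

```python
def setpoint_savings(annual_bill, degrees_raised, savings_per_degree=0.02):
    """Back-of-envelope utility savings from raising a data center's
    operating temperature, assuming the cited 2 percent per degree
    applies linearly (a simplification)."""
    return annual_bill * savings_per_degree * degrees_raised

# Hypothetical: a $3 million annual bill, setpoint raised 4 degrees F
print(setpoint_savings(3_000_000, 4))  # 240000.0 -- a quarter-million dollars
```

On a multimillion-dollar bill, even a few degrees of tolerance translates into six-figure annual savings, which is why operators are willing to let the racks run warm.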
Architects can also design the building shape and form to exhaust heat more effectively. Completed in 2009, Yahoo’s data center in Buffalo, N.Y., “looks very much like an oversized chicken coop,” Di Salvo says. Controlled louvers clad the entire building exterior. “The idea is that they allow the building to exhaust all that heat in an energy-efficient manner,” Di Salvo says.
The cooling systems themselves can also be improved. “Based on the selection of the mechanical system, the architects can provide both experience and innovation in wrapping that building around that mechanical system,” Sheehan says. In Prineville, Ore., his firm recently completed Facebook’s approximately 333,000-square-foot data center, which is cooled entirely with evaporative misters instead of the standard, megasize chillers. Facebook reports that the facility is 38 percent more efficient overall than the industry average, but does not specify how much of the energy savings is attributable to its cooling system, Sheehan says.
All of these efficiency measures come with a catch. Measuring actual efficiency in a data center is quite difficult. Many engineers and architects turn to the Power Usage Effectiveness (PUE) ratio, which compares the amount of power consumed by the entire building to the amount of power that goes to the servers. For a long time, this number was 2.0, meaning that data centers used just as much power to operate and condition themselves as they did to run their servers. Now, the number hovers around 1.4. Reducing this number by just another one-tenth of a point is a big deal, Sheehan says. For a data center spending $7 million per year on electricity, going from 1.4 to 1.3 can save roughly $600,000 per year. But PUE isn’t standardized among firms—everyone includes slightly different metrics—and without a standard, it’s hard to compare centers.
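The PUE math above can be worked through directly. The sketch below holds the IT load fixed and backs the server-only cost out of the article's $7 million example; under that simplifying assumption the savings come out near half a million dollars, and the gap between that and any quoted figure is itself a reminder that what gets counted in the ratio varies from firm to firm.

```python
def annual_cost(it_power_cost, pue):
    """Total facility utility cost, given the cost of powering the IT
    load alone. PUE = total facility power / IT equipment power."""
    return it_power_cost * pue

# Work backward from a $7M annual bill at PUE 1.4 (fixed IT load assumed)
it_cost = 7_000_000 / 1.4                                  # ~$5M for the servers themselves
saved = annual_cost(it_cost, 1.4) - annual_cost(it_cost, 1.3)
print(round(saved))                                        # 500000
```

A PUE of 1.0 would mean every watt entering the building reaches a server; everything above 1.0 is overhead for cooling, power conversion, and lighting.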
Because high performance and data centers have not traditionally gone together, only a few buildings in this typology have achieved LEED certification. The project type’s machine-first, people-second program doesn’t lend itself well to the current bookshelf of LEED standards. “We do only a few LEED buildings because it’s good marketing—not because it’s good data center design,” says Peter Norris, an associate at Boston-based Integrated Design Group, an architecture and engineering firm that specializes in data centers. Rather, the firm’s motivation to design efficient data centers comes from money—as in potential savings in operating costs for its clients.
Corey Enck, a LEED specialist with the USGBC, is helping lead the charge to craft guidelines tailored to data centers, which will be included in the forthcoming release, LEED v4. One challenge, he says, is that architects often have no say regarding the types of computing equipment used within the building shell. Even if one tenant’s server configuration is far more efficient, the architect can’t ask the remaining owners and operators to emulate it. As a result, data centers conceived in collaboration with IT consultants from the project’s outset are far more successful in pursuing sustainability goals.