
It’s time to consider other cooling options for Open Compute Project data centers

24 Aug 2017

The Open Compute Project (OCP) is a topic of interest in IT circles, as its reference architecture for data centers offers the potential to provide efficient, flexible and scalable compute power at relatively low cost.

But since Facebook first announced the architecture in 2011, one area has yet to be revisited: cooling.

As Facebook originally envisioned it for its Prineville, Oregon, data center, the architecture uses direct air economization for cooling. That works well in cooler climates like Oregon, where you can often take advantage of cool outside air.

But if customers consider additional cooling options, they’ll find they’re able to build OCP-compliant data centers in other, much warmer climates, as well as improve performance and minimize risk.

Indirect air economizer cooling, for example, greatly extends the geographies in which air economizer modes of cooling can be used. With indirect air economizer cooling, a heat exchanger is used to isolate indoor from outdoor air. 

This acts as a barrier between the indoor and outdoor environments, so the data center is less susceptible to outdoor humidity, which lets the economizer run more hours and deliver higher efficiency.

When combined with evaporative cooling, you can extend the hours of operation even further by leveraging the wet-bulb temperature, even on humid days, to provide partial cooling. That means you don't need colder ambient temperatures to bring the data center to an acceptable temperature.

Once again, you get more hours of economizer use over the course of the year, thus increasing overall efficiency.
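To make the mode selection concrete, here is a minimal sketch of how an indirect economizer with evaporative assist might choose between free cooling and compressor trim based on outdoor conditions. The thresholds, the 5° F heat-exchanger approach, and the function itself are illustrative assumptions, not OCP or vendor specifications.

```python
# Illustrative sketch: choosing a cooling mode for an indirect air
# economizer with evaporative assist. All thresholds are assumed
# example values, not OCP or vendor specifications.

def cooling_mode(dry_bulb_f: float, wet_bulb_f: float,
                 supply_setpoint_f: float = 75.0) -> str:
    """Pick a cooling mode from outdoor conditions (degrees F)."""
    approach_f = 5.0  # assumed heat-exchanger approach temperature
    if dry_bulb_f + approach_f <= supply_setpoint_f:
        return "dry economizer"          # outdoor air alone is cool enough
    if wet_bulb_f + approach_f <= supply_setpoint_f:
        return "evaporative economizer"  # wet bulb is low enough to help
    return "mechanical assist"           # compressor trims the remaining load

print(cooling_mode(60, 55))  # dry economizer
print(cooling_mode(85, 68))  # evaporative economizer
print(cooling_mode(95, 80))  # mechanical assist
```

The middle branch is the point of the article: on a humid 85° F day, the dry economizer can’t help, but a 68° F wet bulb still can.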

What’s more, with direct systems, there’s always a chance that poor air quality or high humidity – such as from a fire or a thunderstorm – will force you to turn the economizer off.

That means you need to size the cooling compressor such that it can handle 100% of the data center load if needed. With an indirect air economizer, because you’re minimizing that risk of air contamination and separating outdoor air from inside, you only need a compressor that can help you get through the hottest days of the year, saving you money on capital expenses.
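The sizing difference can be shown with simple arithmetic. The IT load and the worst-day economizer fraction below are assumed example numbers, not a design calculation:

```python
# Illustrative compressor-sizing comparison (assumed numbers, not a
# design calculation). A direct air economizer must be backed by
# compressors sized for 100% of the IT load, because contamination or
# humidity can force it fully off. An indirect economizer keeps
# running, so the compressor only trims the hottest hours.

it_load_kw = 1000.0  # assumed data center IT load

# Direct: economizer may be shut off entirely, so back up 100% of load.
direct_compressor_kw = it_load_kw * 1.0

# Indirect: assume it still rejects 60% of the load on the worst
# design day, so the compressor only covers the remaining 40%.
worst_day_economizer_fraction = 0.6  # assumed
indirect_compressor_kw = it_load_kw * (1 - worst_day_economizer_fraction)

print(direct_compressor_kw, indirect_compressor_kw)  # 1000.0 400.0
```

Under these assumptions, the indirect design needs less than half the mechanical cooling capacity, which is where the capital savings come from.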

And those hot days may be fewer than you think. The OCP specifies data center operating temperatures in the range of 65° to 85° F. The idea was basically to match data center temperature to whatever the temperature outside is. If it’s cool enough outside to operate at 65° F, then fine.

If not, it’s OK to let the temperature rise to as much as 85° F. The idea is to not use additional energy to make the data center any cooler than 85° F if you don’t have to.

Several years ago, ASHRAE TC 9.9 published the third edition of its Thermal Guidelines for Data Processing Environments, which outlined server reliability rates at various temperatures.

The upshot was that servers, especially newer ones, can handle operating temperatures far higher than most data centers were using at the time, and probably still are.

What’s more, we now know that in terms of reliability, operating IT gear at cooler temperatures offsets the times when you operate at higher temps, an issue I covered in this previous post.

So, if you are in a cooler climate and can operate at, say, 65° F at times during the winter, that will offset the hours you operate at 85° F in the summer.
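The offsetting effect can be sketched as a time-weighted average of relative failure rates. The annual temperature profile and the x-factors below are made-up illustrative values, not ASHRAE's published table:

```python
# Illustrative sketch of the "offset" idea: hours at cool temperatures
# pull the annual average failure-rate factor down, offsetting hours
# at the warm end. Both tables below are assumed example values.

hours_by_temp_f = {65: 3000, 75: 3760, 85: 2000}  # assumed annual profile
x_factor = {65: 0.85, 75: 1.00, 85: 1.20}         # assumed relative failure rates

total_hours = sum(hours_by_temp_f.values())
weighted = sum(hours * x_factor[t] for t, hours in hours_by_temp_f.items())
annual_factor = weighted / total_hours

print(round(annual_factor, 3))  # 0.994 -- slightly better than steady 75° F
```

With these numbers, 2,000 hours at 85° F are more than offset by 3,000 cool winter hours, leaving the annual factor just under 1.0.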

The point is, if you’re interested in the OCP data center architecture, don’t limit yourself to the direct air economization described in the guidelines.

You’ve now got better options, notably indirect air economization.

Article by John Niemann, Schneider Electric Data Center Blog
