Is it time to let the X-factor determine data center temperature?
It’s been a couple of years now since a group of experts from major server hardware vendors wrote a paper about the “x-factor,” a way to quantify server reliability at different data center temperatures.
The idea was to help companies make more informed, business-driven decisions about how to operate their data centers.
I’m wondering whether companies are now comfortable enough to control their data center temperatures around this idea of the x-factor, rather than simply setting a fixed temperature year-round.
First, a little background on how the x-factor works. The idea is that servers are sensitive to temperature: they fail more quickly at higher temperatures than at lower ones.
Several years ago, ASHRAE TC 9.9 published the third edition of its Thermal Guidelines for Data Processing Environments, which outlined server reliability rates at various temperatures.
This is where the x-factor comes in. It’s a way to measure the relative expected server reliability at different temperatures.
The TC 9.9 group used a data center operating temperature of 68°F as its baseline; this temperature represents an x-factor of 1.00. If the temperature goes higher, the x-factor goes up; when it goes lower, the x-factor goes down.
For example, at 59°F the x-factor is 0.72, meaning there is a 28% lower probability of server failure if the data center were operated constantly at that temperature vs. operating at 68°F.
At the other extreme, at 113°F the x-factor is 1.76, meaning there’s a 76% higher probability of failure vs. operating at 68°F.
It’s important to note that 76% figure is relative to what the server failure rate would normally be; it doesn’t mean there’s a 76% chance of failure.
Typical annual server failure rates are quite low, around 2% to 4%. Even using the higher 4% figure, according to ASHRAE TC 9.9 calculations, operating continuously at 113°F would raise the actual failure rate by only about 3 percentage points, to roughly 7% annually.
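The arithmetic above can be sketched in a few lines. This is a minimal illustration using the x-factor values quoted in the text; it simply scales a baseline annual failure rate by the relative x-factor.

```python
# Estimate an annual server failure rate from a baseline rate and an
# ASHRAE x-factor (relative failure rate vs. a 68°F baseline, x = 1.00).
# The x-factor values used here are the ones quoted in the text.

def failure_rate(baseline_rate: float, x_factor: float) -> float:
    """Scale the baseline annual failure rate by the x-factor."""
    return baseline_rate * x_factor

baseline = 0.04  # 4%: the upper end of the typical 2%-4% annual range

print(f"{failure_rate(baseline, 0.72):.1%}")  # constant 59°F  -> 2.9%
print(f"{failure_rate(baseline, 1.00):.1%}")  # constant 68°F  -> 4.0%
print(f"{failure_rate(baseline, 1.76):.1%}")  # constant 113°F -> 7.0%
```

Note that 0.04 × 1.76 = 0.0704, which is where the “roughly 7% annually” figure comes from.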
Here’s where things get interesting.
As it turns out, operating at a lower temperature for part of the year can offset the times when you operate at higher temperatures. This can have a profound impact on data centers that use ambient outside air for cooling at least part of the time.
Let your data center get very cool in the winter months and you can operate it at higher temperatures in the summer, so long as your time-averaged x-factor stays at a level you’re comfortable with, given your risk profile.
In fact, you could let your x-factor decision dictate the temperature at which your data center runs at different times.
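One way to picture that offsetting effect is as a time-weighted average of the x-factor over the year. The sketch below is hypothetical: the temperature bands, hour counts, and the 1.34 summer x-factor are illustrative assumptions, while 0.72 and 1.00 are the values quoted earlier.

```python
# Time-weighted annual x-factor: hours spent in each temperature band,
# weighted by that band's x-factor. Bands, hours, and the summer
# x-factor (1.34) are hypothetical; 0.72 and 1.00 come from the text.

def annual_x_factor(schedule):
    """schedule: list of (hours, x_factor) pairs covering the year."""
    total_hours = sum(hours for hours, _ in schedule)
    return sum(hours * x for hours, x in schedule) / total_hours

# A hypothetical year in which a cool winter offsets a warm summer.
schedule = [
    (3000, 0.72),  # winter hours near 59°F
    (3760, 1.00),  # spring/fall hours near the 68°F baseline
    (2000, 1.34),  # summer hours at elevated temperature (assumed)
]

print(f"annual x-factor: {annual_x_factor(schedule):.2f}")  # -> 0.98
```

In this made-up schedule, the year-long average lands just below 1.00, i.e., no worse than running at the 68°F baseline all year, despite 2,000 warm summer hours.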
Using an intelligent cooling system and a data center infrastructure management (DCIM) platform, it’s certainly possible to operate in this fashion – and perhaps realize significant gains in energy efficiency.
The question is, are companies ready to take that kind of leap?
Is the kind of reliability data we have from ASHRAE TC 9.9 enough to convince you that this approach would work?
The data is highly focused on servers; it does not address other gear such as networking equipment and storage systems to the same extent.
Is that a deal-breaker?
Article by John Niemann, Schneider Electric Data Center Blog