
Greening the data centre

01 Jun 11

In today’s 24x7 world of information availability, on-demand services, and round-the-clock commerce sites, companies are increasingly adding high-performance servers, storage and other equipment to their data centres to satisfy user and customer demand.
As a result, companies find they need more and more power to run and cool this equipment. At the same time, the cost of electricity is on the rise. And many companies are trying to be good corporate citizens by becoming green – or at least greener.
The combination of these factors is forcing many enterprises to evaluate their data centre power consumption and find ways to become more energy-efficient.
Several trends are significantly driving up data centre power requirements. Firstly, most companies need more computing power to run their web sites or business and financial applications, for which servers must often run round-the-clock. Secondly, newer computers use higher performing processors that consume more electricity. And thirdly, there is a trend to physically consolidate servers by moving to high-density rack and blade servers, thus packing more processing power into smaller spaces within data centres.
The result is that the power usage in corporate data centres is shooting through the roof. In fact, data centres typically required, on average, 1 kilowatt (kW) per rack in 2000. Six years later, the average per rack was up to 6.8 kW, and today it’s significantly higher again. The amount of electricity needed to cool the equipment in these racks has risen in a similar fashion.
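To put those rack densities in money terms, a quick back-of-the-envelope estimate can be sketched using the average figures cited above (1 kW per rack in 2000, 6.8 kW in 2006). The electricity tariff and the cooling overhead in this sketch are illustrative assumptions, not figures from the article:

```python
# Rough annual electricity cost per rack, running 24x7.
# TARIFF_PER_KWH and COOLING_OVERHEAD are assumed values for
# illustration only.

TARIFF_PER_KWH = 0.12    # assumed electricity price, $/kWh
COOLING_OVERHEAD = 0.8   # assumed: 0.8 W of cooling per W of IT load
HOURS_PER_YEAR = 24 * 365

def annual_rack_cost(it_load_kw):
    """Annual cost of running and cooling one rack, in dollars."""
    total_kw = it_load_kw * (1 + COOLING_OVERHEAD)
    return total_kw * HOURS_PER_YEAR * TARIFF_PER_KWH

for year, load in [(2000, 1.0), (2006, 6.8)]:
    print(f"{year}: {load} kW rack -> ${annual_rack_cost(load):,.0f}/year")
```

Even with these conservative assumptions, the jump from 1 kW to 6.8 kW per rack multiplies the annual bill nearly sevenfold, which is why power has become a board-level concern.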
If nothing changes, power and cooling issues (and costs) are likely to only get worse in the future, as both power requirements and the price of electricity are expected to keep rising.
Practical steps
Faced with growing power consumption requirements to run and cool data centre equipment, companies are looking for ways to reduce electrical usage and costs.
To figure out where to focus attention on energy, one must understand what contributes to power consumption. Studies have shown that up to 50% of data centre energy is consumed by IT equipment, and another 35 to 40% is for cooling.
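That breakdown can be applied to a facility's total draw to see where the kilowatts go. The 500 kW facility size here is an example input, and the cooling share uses the midpoint of the 35-40% range quoted above:

```python
# Rough breakdown of where a data centre's power goes, using the
# article's figures: ~50% IT equipment, 35-40% cooling, with the
# remainder covering distribution losses, lighting and so on.

TOTAL_KW = 500  # example facility draw, not from the article
SHARES = {
    "IT equipment": 0.50,
    "Cooling": 0.38,  # midpoint of the quoted 35-40% range
    "Other (distribution, lighting)": 0.12,
}

for component, share in SHARES.items():
    print(f"{component:32s} {TOTAL_KW * share:6.1f} kW")
```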
Given that IT equipment is the biggest energy consumer, it makes sense to start with the equipment itself when looking to reduce power usage. In practice, however, few companies do: most do not even know how much power their equipment is drawing.
A popular approach to reducing data centre power consumption is simply to use fewer servers, and that is exactly what many companies are doing today through server consolidation and virtualisation projects.
Virtualisation extends the benefits of physical consolidation, allowing applications to run on virtual machines – several virtual servers on one physical box – which consume computing resources based on an application’s needs. This allows for even more efficient use of a server’s capabilities.
Consolidation and virtualisation can produce significant results. In some cases, companies have been able to realise a 10-to-1 reduction in the number of servers they required, which would consequently cut power consumption.
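The 10-to-1 figure can be sketched numerically. The per-server wattages below are illustrative assumptions (a lightly loaded legacy box versus a heavily loaded virtualisation host), chosen to show that even when the consolidated hosts draw more power each, the fleet-level saving is large:

```python
# Illustrative power saving from a 10-to-1 server consolidation.
# WATTS_OLD and WATTS_NEW are assumed figures for illustration.

SERVERS_BEFORE = 100
CONSOLIDATION_RATIO = 10
WATTS_OLD = 300  # assumed draw of a lightly loaded legacy server
WATTS_NEW = 500  # assumed draw of a busy virtualisation host

before_kw = SERVERS_BEFORE * WATTS_OLD / 1000
after_kw = (SERVERS_BEFORE // CONSOLIDATION_RATIO) * WATTS_NEW / 1000
print(f"Before: {before_kw} kW, after: {after_kw} kW "
      f"({100 * (1 - after_kw / before_kw):.0f}% reduction)")
```

And because every watt of IT load saved also saves a proportional amount of cooling, the real saving at the meter is larger still.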
Air flow management
Another area of focus is optimising air flow within the data centre. In the past, data centre racks were typically arranged to all face the same direction. But because most equipment manufactured today is designed to draw air through the front and exhaust it from the rear, there is a more efficient way to set up racks: the hot-aisle/cold-aisle arrangement.
This approach arranges racks front-to-front so the cooling air rising into the cold aisle is pulled through the front of the racks on both sides of the aisle and exhausted at the back of the racks into the hot aisle. Only cold aisles have perforated tiles, and floor-mounted cooling is placed at the end of the hot aisles – not parallel to the row of racks. Parallel placement can cause air from the hot aisle to be drawn across the top of the racks and to mix with the cold air, causing insufficient cooling to equipment at the top of racks and reducing overall energy efficiency.
Air flow management of another sort should also be taken into account. Specifically, the high number of servers in many data centre racks often means there are many power and Ethernet cables running throughout any single rack or under the floor of a raised-floor data centre. In some cases, the cables obstruct air flow and do not allow the heat to be removed or the cool air to circulate. IT managers should thus check to be sure the cables are not obstructing air flow.
In many parts of the country, winter provides an opportunity to augment traditional data centre cooling. In particular, outside air can be used to help cool data centres. Accomplishing this requires the use of what are called economiser systems, which come in two types.
First, there are air-side economisers, which allow outside air to enter a data centre to aid in cooling. The second type is the fluid-side economiser; these systems are commonly incorporated into a chilled-water or glycol-based cooling system.
An overseas study on building control systems found that, on average, the normalised heating and cooling Energy Use Intensity of buildings with economisers was 13% lower than those without economisers.
Supplemental cooling
While raised-floor cooling has proven itself an effective approach to data centre environmental management, as rack densities exceed 5 kW, and load diversity across the room increases, supplemental cooling should be evaluated for its impact on cooling system performance and efficiency.
At higher densities, equipment in the bottom of the rack may consume so much cold air that remaining quantities of cold air are insufficient to cool equipment at the top of the rack. The height of the raised floor creates a physical limitation on the volume of air that can be distributed into the room, so adding additional room air conditioners may not solve the problem.
Rising rack densities and high room diversity can be addressed with pumped refrigerant cooling infrastructure that supports cooling modules placed directly above or alongside high-density racks to supplement the air coming up through the floor. This has a number of advantages, including increased cooling system scalability, greater flexibility and improved energy efficiency. Two factors contribute to improved energy efficiency: the location of the cooling modules and the fluid used to transport the heat. A two-phase refrigerant (R134a) is the most effective.
Higher density applications require fluid-based cooling to effectively remove the high concentrations of heat being generated. From an efficiency perspective, refrigerant performs better than water for high-density cooling. R134a can be pumped as a liquid, but converts to gas when it reaches the air, and this phase change contributes to greater system efficiency. R134a is approximately 700% more effective at moving heat than water, which in turn is roughly 700% more effective than air. Because the refrigerant evaporates rather than pooling as liquid, it also ensures that expensive IT equipment is not damaged in the event of a leak.
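The efficiency gap between a two-phase refrigerant and single-phase water comes down to how much heat a kilogram of coolant can carry. The sketch below compares water warming through an assumed 7 K design delta-T against R134a absorbing its latent heat of vaporisation; the property values are approximate textbook figures, not data from the article:

```python
# Rough comparison of heat carried per kilogram of coolant,
# illustrating why a two-phase refrigerant outperforms single-phase
# water. Property values are approximate; DELTA_T is an assumed
# design temperature rise.

DELTA_T = 7.0        # assumed coolant temperature rise, K
CP_WATER = 4.19      # specific heat of water, kJ/(kg*K)
H_FG_R134A = 217.0   # approx. latent heat of vaporisation of R134a, kJ/kg

water_kj_per_kg = CP_WATER * DELTA_T  # sensible heat only
r134a_kj_per_kg = H_FG_R134A          # the phase change does the work

print(f"Water:  {water_kj_per_kg:.1f} kJ/kg")
print(f"R134a:  {r134a_kj_per_kg:.1f} kJ/kg "
      f"(~{r134a_kj_per_kg / water_kj_per_kg:.1f}x water)")
```

Under these assumptions the refrigerant moves roughly seven times the heat per kilogram, which is consistent with the ~700% figure quoted above.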
Traditional floor-mounted cooling systems with under-floor air delivery will continue to play an essential role in data centre environmental management. It is recommended that traditional systems be configured to deliver the required cooling for the first 50-100 Watts per square foot of heat load as well as solve the room’s full humidification and filtration requirements. Supplemental cooling can be deployed for greater densities.
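That sizing rule splits the heat load between the two systems. The sketch below applies the article's suggested ceiling of 100 W per square foot for traditional cooling to an example room; the room size and density are illustrative inputs:

```python
# Sketch of the sizing rule above: traditional floor-mounted cooling
# carries the first 100 W/sq ft, supplemental cooling the remainder.
# ROOM_SQ_FT and DENSITY_W_PER_SQFT are example inputs.

ROOM_SQ_FT = 2000         # example room size
DENSITY_W_PER_SQFT = 180  # example heat load density
BASE_LIMIT = 100          # article's upper figure for traditional cooling

base_kw = ROOM_SQ_FT * min(DENSITY_W_PER_SQFT, BASE_LIMIT) / 1000
supplemental_kw = ROOM_SQ_FT * max(DENSITY_W_PER_SQFT - BASE_LIMIT, 0) / 1000
print(f"Traditional cooling: {base_kw:.0f} kW, "
      f"supplemental: {supplemental_kw:.0f} kW")
```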
Going Green
A good way to start the ‘greening process’ is through a thorough analysis of your own server room or data centre. Specialist data centre infrastructure providers can audit your specific environment – often at little or no cost to the business – and make practical recommendations on how the business can save money and lessen its environmental impact by reducing energy consumption.
It’s equally important to find a consultant that offers a holistic approach to greening the data centre. Improving your business’s efficiency, while compromising the availability or security of your business systems, will only result in complications down the line. A partner that understands how your business works and can deliver specific solutions that address your entire ecosystem of hardware, software and infrastructure, will ultimately prove a better companion as you embark on the road to greener pastures. 
