Schneider Electric’s key considerations for deploying a hybrid data centre
Mon, 22nd Jul 2019

In recent years, there has been a fundamental shift in the provision of IT services, from a point where organisations designed and built large on-premise data centers to one in which those needs are increasingly outsourced.

This shift has driven tremendous growth in the Multi-Tenant Data Center (MTDC) market, whilst also increasing the adoption of distributed edge computing applications.

Many IT applications that could easily and cost-effectively be outsourced were moved to the cloud, leaving behind on-premise IT environments that were too integrated into local operations or too expensive to move.

This change in dynamic forced a shift in the way data center owners and operators manage their assets, creating a hybrid IT environment with some applications outsourced and others kept on-premise, or at 'the edge'.

A hybrid data center architecture comprises a centralised cloud, characterised by massive compute and storage; regional data centers; and local edge computing facilities, close to where data is created, processed and consumed.

According to data gathered by 451 Research in 2019, 46% of enterprise workloads are running in on-premise IT environments, with the remainder kept off-premise.

A consequence of the hybrid IT environment is the requirement for rapid rollout of new data center resources.

To respond to increasingly diverse demands for computing, communications and storage infrastructure, operators have utilised new technologies to deliver such facilities to the highest standards whilst keeping operating costs as low as possible.

Hybrid environments and modularity

An emphasis on modularity, including the containment required for racks, has resulted in the development of pod-style architectures: standardised units of IT racks arranged in a row or pair of rows, sharing common hardware elements such as UPS, power distribution units, network routers and cooling solutions.
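
As a loose illustration of that modular idea, the sketch below models a pod as a standardised unit of racks sharing one UPS. The class names, rack counts and capacities are invented for the example, not drawn from Schneider Electric's designs.

    # Hypothetical sketch: a pod as a standardised unit of racks
    # sharing one UPS. Names and figures are illustrative only.
    from dataclasses import dataclass, field

    @dataclass
    class Rack:
        name: str
        it_load_kw: float  # power drawn by the IT equipment in the rack

    @dataclass
    class Pod:
        racks: list = field(default_factory=list)
        ups_capacity_kw: float = 0.0  # shared UPS sizing for the whole pod

        def total_it_load_kw(self):
            return sum(rack.it_load_kw for rack in self.racks)

        def has_ups_headroom(self):
            # New racks can be added only while the shared UPS can carry them
            return self.total_it_load_kw() < self.ups_capacity_kw

    # Two rows of six racks sharing one UPS, PDU set and cooling loop
    pod = Pod(racks=[Rack(f"R{i}", 8.0) for i in range(12)],
              ups_capacity_kw=120.0)
    print(pod.total_it_load_kw(), pod.has_ups_headroom())  # 96.0 True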

As legacy applications, often running on legacy hardware, are typically unsuited to run off-premise, many enterprises are trying to modernise existing workloads and develop new applications built for the cloud.

The ability to spread workloads across both on-premise and colocated hardware also allows operators to analyse the infrastructure on a case-by-case basis, allocating data center resources according to the optimal trade-off between the variables of risk, cost and IT performance.
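
To make that trade-off concrete, here is a minimal scoring sketch in Python. The weightings and the per-option penalty scores are invented for illustration; they are not a published methodology.

    # Hypothetical placement scoring: lower is better. Weightings and
    # per-option scores are invented for illustration only.
    WEIGHTS = {"risk": 0.40, "cost": 0.35, "performance": 0.25}

    def placement_score(option):
        # Weighted blend of the three variables named in the article
        return sum(WEIGHTS[key] * option[key] for key in WEIGHTS)

    # Penalty scores (0 = best, 1 = worst) for one legacy workload
    options = {
        "on-premise":   {"risk": 0.2, "cost": 0.7, "performance": 0.1},
        "colocation":   {"risk": 0.4, "cost": 0.4, "performance": 0.3},
        "public cloud": {"risk": 0.8, "cost": 0.3, "performance": 0.5},
    }
    best = min(options, key=lambda name: placement_score(options[name]))
    print(best)  # "on-premise" wins for this workload's particular inputs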

To support these decisions, companies are also investing in cloud-based Data Center Infrastructure Management (DCIM) tools to gain greater visibility into the data center and simplify the task of managing hybrid environments.
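
One concrete example of the visibility such tools provide is Power Usage Effectiveness (PUE): total facility power divided by IT power. The sketch below computes it from invented readings; real DCIM products expose such metrics through their own interfaces.

    # PUE = total facility power / IT power; readings are invented.
    def pue(total_facility_kw, it_load_kw):
        return total_facility_kw / it_load_kw

    sites = {"on-premise pod": (132.0, 96.0), "colocation hall": (540.0, 450.0)}
    for name, (total_kw, it_kw) in sites.items():
        print(f"{name}: PUE {pue(total_kw, it_kw):.2f}")
    # on-premise pod: PUE 1.38
    # colocation hall: PUE 1.20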

Increasing the speed of large data center deployments

Pod architectures allow new data center resources to be deployed in scalable increments, whilst speeding time to market.

The approach also reduces capital expenditure and operational costs.

Such pods incorporate overhead power and network cabling, busway systems and cooling ducts, eliminating much of the construction otherwise required to build these services into the fabric of a facility, or to modify an existing building prior to installation.

This results in significant savings and increased speed of deployment.

IT pods also allow greater flexibility in the choice between a hard and a raised floor: because ducting can be mounted on the frame itself, a raised floor is not necessary.

Should a raised floor be preferred, network and power cables can instead be mounted on the frame to make under-floor cooling more efficient.

The return of liquid cooling

After the IT equipment itself, cooling is the most critical consideration when seeking to improve data center efficiency, so efforts to reduce its power demand are a continuous focus for operators.

Liquid cooling has therefore re-emerged as a topic of interest, thanks to the increased processing demands of today's compute-intensive applications, including Artificial Intelligence (AI).

Two basic approaches to liquid cooling may be considered: direct liquid cooling (DLC) and total liquid cooling (TLC).

The first involves placing a small, fully sealed heat sink, a metal plate filled with cool liquid, on top of the server board or chip.

As the liquid absorbs heat from the processing element, sealed tubes transfer the liquid to a cooler that rejects the heat outdoors and routes the cooled fluid back to the heat sink.

With this approach, it's possible to absorb around 60-80% of the heat generated by a server; the remainder must still be removed by conventional air cooling.

The TLC approach involves no air-cooled components; instead, the server is completely immersed in a dielectric fluid which absorbs heat.

TLC can be chassis-based, or an entire IT rack of servers can be sealed and contained in a tub full of fluid.

In this instance, network and power cabling hangs from rails above, and heat generated by the servers is absorbed into the fluid before being pumped away and cooled.

TLC approaches make the compute environment far less susceptible to fluctuations in humidity and air quality.

The sealed immersion technique, for example, is particularly well suited to ruggedised applications and harsh or outdoor environments; because the compute is fully sealed, there are no concerns about dust, sand or other contaminants getting in.

Another advantage is that, because the immersion fluid acts as a large thermal buffer and the need for air-cooling is reduced, a liquid-cooled server could survive far longer in the event of a power outage.

Even without replenishing the liquid supply, servers could expel heat for up to an hour before the liquid would become too hot to cool the load.
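
That hour figure can be sanity-checked with a back-of-the-envelope heat-capacity calculation. The fluid mass, specific heat and allowable temperature rise below are assumptions chosen for illustration, not published specifications.

    # Rough ride-through estimate: energy = mass x specific heat x rise.
    # All figures are assumed, plausible orders of magnitude only.
    fluid_mass_kg = 600.0     # dielectric fluid in one immersion tub
    specific_heat = 2100.0    # J/(kg*K), typical order for such fluids
    allowable_rise_k = 30.0   # headroom before the fluid runs too hot
    it_load_w = 10_000.0      # 10 kW of immersed servers

    absorbable_joules = fluid_mass_kg * specific_heat * allowable_rise_k
    ride_through_minutes = absorbable_joules / it_load_w / 60
    print(f"{ride_through_minutes:.0f} minutes")  # ~63 minutes, roughly an hour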

In many instances, that is plenty of time to restore mains power, shift to backup or perform a controlled shutdown of the IT equipment.

Despite various barriers to adoption, including the complexity of designs, the requirement for new server chassis, piping and liquid-flow management, the risk of leaks, and the difficulty of servicing fully immersed systems, especially those in remote environments, the advantages offered by liquid cooling far outweigh the drawbacks.