Schneider Electric's bets for the 2019 data centre industry
Wed, 12th Dec 2018

The last twelve months have presented some seriously interesting developments in the industry. One trend in particular has gained traction: the majority of computing and storage continues to be funneled into the largest hyperscale and centralized data centers.

At Schneider Electric, we're also seeing a targeted move by the same Internet giants to occupy colocation facilities as tenants, deploying infrastructure and applications closer to their customers: the consumers of the data. This is driven in part by an insatiable demand for cloud computing, accompanied by the need to reduce latency and transmission costs, which has, in turn, resulted in the emergence of 'regional' edge computing – something that could also be described as smaller, or localized, versions of 'cloud stacks'.

We don't believe this will stop at the regional edge. As the trend continues to gather momentum, it's probable that we will begin to see smaller and more localized versions of these 'cloud stacks' popping up in the most unlikely of places.

A good example is the thinking behind Amazon's decision to purchase Whole Foods. The main driver was to move into the grocery retail business – but what about the option to use its supply rooms to house local edge data centers running AWS Greengrass or other AWS software? 2019 is the year when we will really see the edge creep closer to the users.

Against this backdrop, I've been thinking about what we can expect for 2019 and have come up with a few other predictions.

Need for speed in building hyperscale data centers

The demand for cloud computing will neither subside nor slow down. 2019 will see it accelerate, which means the Internet giants will continue to build more compute capacity in the form of hyperscale data centers. Market forces will demand they build these facilities increasingly quickly, meaning 10MW to 100MW projects will have to be designed, built and brought into operation - from start to finish - in less than a year.

One key to accomplishing such aggressive timeframes is the use of prefabricated, modular power skids. Power skids combine MW-sized UPS, switchgear (very large breakers) and management software in one factory-built package. These systems are pre-tested and placed on a lowboy trailer, ready for reliable, "plug and play" deployment once they reach the final data center site.

Since the lead time for this type of power equipment can, in some regions, run to 12 months, having the solution built and ready to deploy eliminates delays on the critical path of the design and construction phase.

Within such a facility's data halls, compute capacity will also become more modular, simply being rolled into place. Schneider Electric has created a rack-ready deployment model, already in use by many colocation data centers around the world.

In this solution, freestanding infrastructure backbones are assembled within the data halls while, at the same time, IT equipment is 'racked and stacked' inside scalable IT racks. These pre-populated IT racks can then be quickly and easily rolled into place, greatly reducing both the time and complexity for customers.

Worlds of IT and telco data centers colliding

In order for 5G to deliver on its promise of sub-1ms latency, it needs a distributed cloud computing environment that is scalable, resilient and fault-tolerant. This distributed cloud architecture will be virtualized in a new way, known as cloud-based radio access networks (cRAN).

cRAN moves processing from base stations at cell sites to a group of virtualized servers running in an edge data center. From that perspective, I believe significant buildouts of metro core clouds will occur worldwide throughout 2019 and well into 2020.

You might think of these facilities as 'regional data centers', ranging from 500 kW to 2 MW in size, combining telco functionality (data routing and flow management) with IT functionality (data caching, processing and delivery).

While they will enable performance improvements, it's unlikely that they will be able to deliver on the promise of sub-1ms latency due to their physical location. It's far more likely that we will begin to see sub-1ms latency when the massive edge core cloud deployment happens in 2021 and beyond. Here, localized micro data centers will provide the vehicle for ultra-low latency, with high levels of connectivity and availability.
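To put the physics in perspective, here is a minimal sketch of how far a signal can travel and return within a given latency budget. It assumes a propagation speed of roughly 200,000 km/s in optical fibre (an approximation, and it ignores switching, queuing and processing delays, which in practice consume much of the budget); the figures are illustrative, not a Schneider Electric model.

```python
# Rough one-way reach for a given round-trip latency budget, assuming
# light travels at ~200,000 km/s in optical fibre (about two-thirds of
# its speed in a vacuum). Switching and processing delays are ignored.

FIBRE_SPEED_KM_PER_MS = 200.0  # ~200,000 km/s = 200 km per millisecond

def max_one_way_distance_km(round_trip_budget_ms: float) -> float:
    """Upper bound on one-way fibre distance for a round-trip budget."""
    return (round_trip_budget_ms * FIBRE_SPEED_KM_PER_MS) / 2

for budget_ms in (1, 5, 10):
    print(f"{budget_ms} ms round trip -> compute at most ~"
          f"{max_one_way_distance_km(budget_ms):.0f} km from the user")
```

Even before any processing time is counted, a 1ms round trip caps the compute at roughly 100 km from the user, which is why the micro data centers have to sit so close to the edge.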

Liquid cooling is coming

Artificial intelligence (AI) has begun to hit its stride, springing from research labs into real business and consumer applications, and with it AI workloads are placing massive demands on processing in data centers worldwide.

These applications are so compute-heavy that many IT hardware architects are using GPUs for core or supplemental processing. The heat profile of many GPU-based servers is double that of more traditional servers, with a TDP (thermal design power) of 300W vs 150W, which is one of the many drivers behind the renaissance of liquid cooling.
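As a rough illustration of what that doubling means at rack level, the sketch below multiplies the per-server TDP figures quoted above by a server count per rack; the density is a hypothetical example, not a Schneider Electric specification, and real designs will vary.

```python
# Back-of-the-envelope rack heat load, using the per-server TDP figures
# quoted above (150 W traditional vs 300 W GPU-based). The servers-per-
# rack count is a hypothetical example chosen for illustration.

SERVERS_PER_RACK = 20          # hypothetical density
TDP_TRADITIONAL_W = 150        # traditional server, per the article
TDP_GPU_W = 300                # GPU-based server, per the article

def rack_heat_kw(per_server_watts: int, servers: int = SERVERS_PER_RACK) -> float:
    """Total heat to be removed from one rack, in kilowatts."""
    return per_server_watts * servers / 1000

print(f"Traditional rack: ~{rack_heat_kw(TDP_TRADITIONAL_W):.1f} kW of heat")
print(f"GPU-based rack:   ~{rack_heat_kw(TDP_GPU_W):.1f} kW of heat")
```

Doubling the heat that must be removed from every rack, before any increase in rack density, is what pushes operators to look beyond air cooling.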

Liquid cooling has of course been in use within high performance computing (HPC) and niche applications for a while, but the new core AI applications are placing similar demands at a much greater scale and intensity.

Data center management moves to the cloud

DCIM (data center infrastructure management) systems were originally designed to gather information from infrastructure equipment in a single data center and were deployed as on-premises software.

While newer cloud-based management solutions quite obviously live in the cloud, they enable the user to collect larger volumes of data from a broader range of IoT-enabled equipment. What's more, the software can be used across any number of large or small data centers, whether in one location or thousands.

These new systems, often described as DMaaS or Data Center Management as a Service, use Big Data analytics to enable the user to make more informed, data-driven decisions that mitigate unplanned events or downtime - far more quickly than traditional DCIM solutions. Being cloud-based, the software leverages "data lakes", which store the collected information for future trend analysis, helping to plan operations at a more strategic level.
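As a simplified, hypothetical illustration of that idea (the sites, field names and alert margin below are invented for the example and do not describe any DMaaS product's API), the sketch pools readings from many sites into one dataset and flags the site drifting away from the fleet-wide norm.

```python
# Simplified, hypothetical illustration of the DMaaS idea: pool telemetry
# from many sites into one dataset (a "data lake") and compare each site
# against the fleet-wide baseline. Sites, field names and the 5-degree
# alert margin are invented for the example.
from statistics import mean

readings = [
    {"site": "SYD-01", "ups_temp_c": 24.1},
    {"site": "SIN-02", "ups_temp_c": 23.8},
    {"site": "HKG-01", "ups_temp_c": 24.5},
    {"site": "TYO-03", "ups_temp_c": 31.9},  # drifting hotter than its peers
]

fleet_baseline = mean(r["ups_temp_c"] for r in readings)
ALERT_MARGIN_C = 5.0  # hypothetical threshold

for r in readings:
    # Flag any site running well above the fleet-wide average.
    if r["ups_temp_c"] > fleet_baseline + ALERT_MARGIN_C:
        print(f"{r['site']}: {r['ups_temp_c']} C is well above the fleet "
              f"average of {fleet_baseline:.1f} C - investigate before it causes downtime")
```

Spotting that kind of drift across a whole fleet, rather than one site at a time, is where the cloud-based approach pays off.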

Cloud-based systems also simplify the task of deploying new equipment and upgrading existing installations with software updates across a number of locations. Managing upgrades on a site-by-site basis, especially at the edge, with only on-premises management software leaves the user in a challenging, resource-intensive and time-consuming position.