
Redefining efficiencies for downsized, on-premise enterprise data centers

15 Jun 2017

Over the last five years, three major transformations have rocked the data center industry.

These transformations have occurred so quickly that many data center owners have been caught flat-footed, uncertain of the best way to shape their data center modernization strategies. The first transformational phase involved a rapid, virtualization-driven consolidation of on-premise data centers.

In a typical consolidation example, five medium-sized data centers would consolidate into one larger data center. This approach was replicated across many companies to varying degrees.

In the midst of all of this, the second transformational phase occurred: the mass exodus of applications to the cloud. This left behind thousands of downsized on-premise data centers that were mere shells of their former selves.

Today, the industry is undergoing a third transformational phase: a retrenchment, and even a renewed growth, of enterprise on-premise data centers.

What is driving this unanticipated third phase of transformation? Major market and technology trends such as the Internet of Things (IoT) have driven exponential growth in the amount of data that needs to be captured, stored, analyzed and connected. That data is being gathered and analyzed to create both business value and competitive advantage.

As a result, analysts are forecasting growth for both cloud/colo and on-premise data centers (in the case of on-premise, the growth forecast is 5% over the next five years).

On the enterprise side a second driver is emerging: entrenched applications (such as Lotus Notes) that integrate with many on-premise business systems are staying put. These applications are difficult to break apart and migrate cost-effectively to the cloud, so stakeholders deem them simpler and less costly to maintain on-premise. And these applications are growing.

The downsized data centers that have been left behind are quite different from their pre-cloud predecessors. As a result, the approach to managing and operating them has to be different. If not, stakeholders will be forced to sustain high OPEX costs as they preside over what are essentially very inefficient data centers.

Consider the example of power and cooling systems. Even though 50-75% of the servers may have been displaced and those applications moved to the cloud, oversized power and cooling systems remain. When downsizing IT, the utilization of power and cooling systems can drop as low as 10%.

Power and cooling systems are rarely reduced in proportion to the IT they support, and the oversized gear that remains is neither energy efficient nor cheap to maintain.
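The arithmetic behind that collapse in utilization can be sketched with assumed figures (the article only cites the 50-75% server displacement and the ~10% floor; the kW numbers below are invented for illustration):

```python
def utilization(it_load_kw: float, installed_capacity_kw: float) -> float:
    """Fraction of installed power/cooling capacity actually in use."""
    return it_load_kw / installed_capacity_kw

# Before cloud migration: 400 kW of IT load on 500 kW of installed capacity.
before = utilization(400, 500)    # 80% utilized

# After 75% of the servers move to the cloud, capacity is unchanged.
after = utilization(100, 500)     # 20% utilized

# With 2N redundancy, effective utilization of installed gear halves again,
# reaching the ~10% floor the article describes.
redundant = utilization(100, 1000)

print(f"before: {before:.0%}, after: {after:.0%}, with 2N: {redundant:.0%}")
```

The point of the sketch is simply that the denominator (installed capacity) stays fixed while the numerator (IT load) shrinks, so utilization falls far faster than the server count alone would suggest.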

Therefore, the challenge within downsized data centers is to determine which pieces of equipment are inefficient (and how inefficient they are) and to measure how much these inefficiencies are inflating operational costs. Then, once reliable data is gathered and analyzed, decisions can be made regarding changes that render the downsized data center more efficient.

Monitoring and analytics are the keys to improvement

Both on-premise and cloud-based data center infrastructure management (DCIM) tools can assist in fixing inefficiencies in downsized data centers. On-premise tools can record the power draw of every component of the data center physical infrastructure.

Then, through benchmarking, opportunities for improvement are identified. These tools are also effective in capacity planning: forecasting how much power and cooling capacity is really needed to meet the data center's current requirements.
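A minimal capacity-planning sketch might look like the following. The article names the practice, not the formula; the growth rate, planning horizon, and headroom factor below are assumptions chosen for illustration (the 5% figure echoes the on-premise growth forecast cited earlier):

```python
def required_capacity_kw(current_load_kw: float,
                         annual_growth: float,
                         years: int,
                         headroom: float = 0.2) -> float:
    """Project IT load forward at a compound growth rate, then add headroom."""
    projected = current_load_kw * (1 + annual_growth) ** years
    return projected * (1 + headroom)

# 100 kW of measured load today, 5% annual growth, a five-year horizon,
# and 20% safety headroom (all assumed values).
print(round(required_capacity_kw(100, 0.05, 5), 1))  # ~153.2 kW
```

Even a toy projection like this makes the gap visible: a facility still carrying 500 kW of installed capacity against a ~153 kW five-year requirement is a candidate for rightsizing.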

New, cloud-based tools are also emerging that are capable of capturing data center physical infrastructure asset performance data. These systems not only remotely monitor equipment performance, but they also perform predictive diagnostics that can leverage data from multiple similar data centers to create more precise performance benchmarks.

By recording factors such as operating temperature and the number of battery discharge operations sustained (e.g., in the case of a UPS), the predictive analytics will show what the probability of failure will be within a given window of time.
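As a rough illustration of how such factors might feed a failure-probability estimate, consider the toy logistic model below. This is purely hypothetical: real DCIM analytics train on fleet telemetry with proprietary models, and every weight here is invented.

```python
import math

def failure_probability(avg_temp_c: float, discharge_count: int) -> float:
    """Toy logistic score: UPS battery failure risk within a given window.

    Invented weights for illustration only: risk rises with operating
    temperature above 25 C and with each battery discharge sustained.
    """
    score = 0.15 * (avg_temp_c - 25.0) + 0.08 * discharge_count - 3.0
    return 1.0 / (1.0 + math.exp(-score))

# A cool, lightly cycled UPS versus a hot, heavily cycled one.
low_risk  = failure_probability(avg_temp_c=22.0, discharge_count=5)
high_risk = failure_probability(avg_temp_c=35.0, discharge_count=40)

print(f"cool, lightly used:  {low_risk:.1%}")
print(f"hot, heavily used:   {high_risk:.1%}")
```

The value of benchmarking across many similar data centers, as the article notes, is precisely that the weights in a model like this can be fitted to observed failures rather than guessed.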

Article by Steven Carlini, Schneider Electric Data Center Blog Network
