If you could go back in time, how would you possibly explain modern data center concepts to a late-1960s “data processing programmer”?
In those days it was not uncommon to see a paper sign posted in the data center that read “The computer is down today”. Computer systems were big and finicky, and not many people knew how to run them or fix them.
In fact, the building that housed THE computer was not called a data center at all. “Processing center” was the more familiar nomenclature.
Most of these processing centers were owned, operated and maintained by big banks. Trucks loaded with paper would arrive in the evening, and the data would be “crunched” overnight in the processing center.
Printouts would be created and sent back to bank branches the next day. State of the art. Right?
Given the realities of that bygone era, how would our late-1960s processing operator react to the following list of modern data center “unimaginables”?
Data centers used to rely on utility power alone. If utility power failed, you were out of luck. Now, in addition to elaborate power backup plans (supported by UPS and generators), power devices within racks are modular and hot swappable.
If one fails, the other power modules take on the added load. The failed power module can be replaced by simply sliding out the bad module and replacing it with a new one.
All without interruption, and invisible to the end user who may be sitting thousands of miles away.
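The load-sharing behavior described above can be illustrated with a small sketch. The module count, wattages, and function names below are hypothetical, chosen purely for illustration of the N+1 redundancy idea:

```python
# Illustrative sketch of N+1 modular power redundancy (hypothetical numbers).
# Modules share the rack load equally; when one fails, the survivors
# absorb its share without interrupting the load.

def load_per_module(total_load_w, modules_online):
    """Return the load carried by each surviving module, in watts."""
    if modules_online == 0:
        raise RuntimeError("total power failure: no modules online")
    return total_load_w / modules_online

total_load_w = 8000   # hypothetical rack load
modules = 4           # four modules, each rated 3000 W (N+1 capacity)

normal = load_per_module(total_load_w, modules)            # 2000 W each
after_failure = load_per_module(total_load_w, modules - 1) # ~2667 W each

# The surviving modules still run below their 3000 W rating, so the
# failed module can be hot-swapped with no interruption to the load.
```

The design point is that total module capacity exceeds the worst-case load by at least one module, so any single failure leaves the survivors within their ratings.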
Computer equipment was highly sensitive in the old days, and internal environmental conditions had to be precisely controlled.
Today, eco-mode cooling techniques that deploy a variety of economizer technologies allow data center owners to cut energy costs substantially by harnessing outside air and other natural resources to cool their data centers, all without risking downtime.
The concept of outsourcing was virtually unknown in the late 1960s. Trusting your computer operations to any outside organization was unthinkable. Today, even the most specialized aspects of data center operations can be outsourced to any number of highly qualified experts.
Today, the popularity of pre-fab data centers is on the rise.
The required power, cooling, and racks are pre-configured and preassembled for rapid delivery, enabling immediate “plug and play” upgrades or the quick commissioning of “edge” data centers that support bandwidth-intensive remote applications.
As the “Internet of Things” (IoT) revolution accelerates, it is now possible to gather far more precise data on data center and facility equipment performance, and to analyze that data to predict future performance and impending failures.
Such practices save millions of dollars each year compared with the break/fix and purely preventative approaches that many businesses still rely on for maintaining their data centers.
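As a minimal sketch of the predictive idea, one could fit a simple linear trend to a stream of equipment sensor readings and estimate how long remains before an alarm threshold is crossed. The readings, threshold, and function name below are invented for illustration; real predictive-maintenance systems use far richer models:

```python
# Minimal sketch of predictive maintenance on equipment sensor data.
# All readings and thresholds here are hypothetical.

def intervals_to_threshold(readings, threshold):
    """Fit a least-squares linear trend to the readings and estimate how
    many sampling intervals remain before the threshold is crossed.
    Returns None if the trend is flat or improving."""
    n = len(readings)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(readings) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, readings))
    den = sum((x - mean_x) ** 2 for x in xs)
    slope = num / den
    if slope <= 0:
        return None  # no degradation trend detected
    return (threshold - readings[-1]) / slope

# Weekly bearing-temperature readings (°C) from a hypothetical cooling fan:
temps = [61.0, 61.4, 62.1, 62.9, 63.5, 64.2]
weeks_left = intervals_to_threshold(temps, threshold=70.0)

# A positive trend lets the operator schedule the repair weeks in advance,
# instead of waiting for a break/fix event.
```

The payoff is scheduling: a repair planned during a maintenance window is far cheaper than an unplanned outage triggered by the same component failing on its own.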
Nostalgia is nice, but in this day and age, technology advancements are too good to ignore.
Data centers have come a long way, especially in the area of integrated data center architectures.
Schneider Electric’s EcoStruxure IT architecture, for example, can be delivered to end users through reference designs, pre-configured solutions, and prefabricated solutions.
It can be configured as an entire data center or it can start out as an infrastructure product, like an Uninterruptible Power Supply (UPS) that is managed through the cloud and supported with a 24/7 service bureau.
It can be deployed all at once or it can be built in stages or pieces.
EcoStruxure IT consists of three layers, connected products, edge control, and analytics, that are integrated to facilitate IoT connectivity and mobility, cloud analytics, and cybersecurity.
What is the benefit of embracing such an open data center architecture?
Your next big idea can be delivered in a fraction of the usual time, helping your data center produce business value and drive your organization’s competitive advantage.
Article by Abby Gabriel, Schneider Electric Data Center Blog