
Lessons from the field

01 Aug 11

Before the Christchurch earthquakes, a lot of businesses took the attitude of “it won’t happen to me”, and business continuity was a tick-box exercise. Since the earthquakes we are seeing more businesses taking action to improve their backup and disaster recovery plans.
Much like insurance, the question underpinning backup and disaster recovery planning has changed from “Can I afford to have it?” to “Can I afford not to?”. People have also realised that if you can’t rely on disaster recovery being there when you need it, it isn’t really worth having.
The two most significant changes we are now seeing are a greater appreciation of the physical location of protected data, and a move from manual to automated recovery processes.
Data storage: more than one site and city
In terms of the physical storage of data, I think everybody has heard at least one IT horror story from the earthquakes: backup tapes stored conveniently on top of the server, or tapes routinely inserted into servers without anyone ever checking that the data had been successfully written to them.
It is important to ensure data is routinely and successfully stored offsite and that the data is usable.
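Verifying usability does not have to be elaborate. As a minimal sketch, assuming the backup job writes a manifest of SHA-256 checksums alongside the files it copies (the paths and manifest format here are illustrative, not taken from any particular product):

    import hashlib
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        """Stream a file through SHA-256 so large backups don't exhaust memory."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1024 * 1024), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def verify_backup(backup_dir: Path, manifest: Path) -> list:
        """Compare each backed-up file with the checksum recorded at backup time.

        Assumed manifest format: one '<sha256>  <relative path>' pair per
        line, written by the backup job itself.
        """
        failures = []
        for line in manifest.read_text().splitlines():
            if not line.strip():
                continue
            expected, rel_path = line.split(maxsplit=1)
            target = backup_dir / rel_path
            if not target.exists():
                failures.append("missing: " + rel_path)
            elif sha256_of(target) != expected:
                failures.append("corrupt: " + rel_path)
        return failures

    if __name__ == "__main__":
        problems = verify_backup(Path("/backups/nightly"),
                                 Path("/backups/nightly/manifest.sha256"))
        if problems:
            # Fail loudly - an unverified backup is not a backup.
            raise SystemExit("\n".join(problems))
        print("backup verified")

Even a routine restore of a single file into a scratch directory catches most “the tapes were blank all along” failures before a disaster does.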
We are also finding that more businesses are looking for a second geographical location to store or back up their data, ensuring their systems are replicated at a data centre situated in another town or city.
Automating recovery: when people aren’t available to help
With automation, the question we ask people is “What does your recovery plan look like when your key people are affected by the disaster?”
These people may be too busy dealing with the essentials of life to focus on your business.
It is also important your provider works with you to build a solution that focuses not only on securing your data, but that also considers how you will reconnect with that data after a significant incident. It’s not likely to be from the office or from a corporate device.
Relying on service providers or integrators for manual services isn’t really any better than the DIY approach, as most service providers will be inundated with demands on their time post-disaster.
This is a key reason we have invested in network traffic management products and in creating services that enable automated failover between our data centres.
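The mechanics differ between products, but the underlying pattern of automated failover is simple: probe the primary site, and when it stops responding, redirect traffic to the secondary. The sketch below illustrates that pattern only; the endpoints and the switch_traffic_to stub are hypothetical, not a description of any vendor’s service.

    import time
    import urllib.request

    # Hypothetical endpoints and tunables - substitute your own values.
    PRIMARY = "https://primary-dc.example.com/health"
    SECONDARY = "https://secondary-dc.example.com/health"
    PROBE_INTERVAL_SECONDS = 30
    FAILURES_BEFORE_FAILOVER = 3  # require consecutive misses to avoid flapping

    def switch_traffic_to(site: str) -> None:
        # Placeholder: in a real service this would update DNS records or a
        # load-balancer configuration so clients reach the secondary site.
        print("failing over to " + site)

    def is_healthy(url: str, timeout: float = 5.0) -> bool:
        """Treat an HTTP 200 within the timeout as a healthy site."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as response:
                return response.status == 200
        except OSError:
            return False

    def monitor() -> None:
        consecutive_failures = 0
        while True:
            if is_healthy(PRIMARY):
                consecutive_failures = 0
            else:
                consecutive_failures += 1
                if consecutive_failures >= FAILURES_BEFORE_FAILOVER:
                    switch_traffic_to(SECONDARY)
                    return
            time.sleep(PROBE_INTERVAL_SECONDS)

The point of the pattern is that no person has to be reachable, awake, or in town for the switch to happen.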
Questions to ask your provider
Business needs and budgets vary greatly. When choosing a backup or disaster recovery service, there is more to consider than price.
Investigate the varying levels of management attached to the service.
The important questions to ask yourself are:


  1. What outcome are you actually seeking?
    There is a big difference between just backing up your data and having a full business continuity plan.


  2. Do you want a managed service and what level of management is required?
    Some services are self-service whereas others are fully managed.


  3. What’s your responsibility as the customer?
    For example, you may be required to run tests to confirm data is usable.


  4. Is the service dedicated or overcommitted?
    Some disaster recovery services are cheaper because they are overcommitted or contended, meaning the same recovery capacity is sold to multiple customers at a set ratio. In a widespread disaster, demand will be high, and an oversell ratio of 1:4 (four customers sold per unit of capacity) may mean you don’t get the service you were expecting; a back-of-the-envelope example follows this list.


  5. Where is the data housed and in how many locations?
    You will typically want a service at least 15km away from your primary location, if not 100km.


  6. How will you and your staff access data and services during or after a disaster?
    Protecting data is only half the solution. The ease of recovering and accessing it is also important.
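
To make the contention question (4) concrete, here is a hypothetical back-of-the-envelope calculation; all the numbers are illustrative:

    # Hypothetical numbers - a site with capacity for 100 recovered
    # environments, sold at a 1:4 contention ratio.
    recovery_capacity = 100
    oversell_ratio = 4
    customers_sold = recovery_capacity * oversell_ratio   # 400 customers

    # In a localised outage only a few customers invoke at once, so the
    # contention is invisible. In a region-wide disaster assume most do:
    invocation_rate = 0.8                                 # hypothetical share
    invoking = customers_sold * invocation_rate           # 320 customers
    served = min(invoking, recovery_capacity)             # only 100 fit
    print(f"{invoking - served:.0f} of {invoking:.0f} invoking customers miss out")

The ratio is invisible in day-to-day operation; it only bites when everyone invokes at once, which is exactly the widespread-disaster scenario you are buying the service for.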


Beyond these questions, your service provider is likely to ask about your Recovery Point Objective (RPO) and Recovery Time Objective (RTO). All this really means is that you need to figure out how much data your business can afford to lose (measured in time) and how long your business can be without the service or data (also measured in time).
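As a worked example: an RPO of four hours means you accept losing at most the last four hours of data, so backups or replication must complete at least that often, while an RTO of eight hours means systems must be usable again within eight hours of the incident. A simple self-check, again a hypothetical sketch with an illustrative path, is to alert whenever the newest backup is older than the agreed RPO:

    from datetime import datetime, timedelta, timezone
    from pathlib import Path

    RPO = timedelta(hours=4)  # hypothetical agreed recovery point objective

    def newest_backup_age(backup_dir: Path) -> timedelta:
        """Age of the most recently modified file in the backup directory."""
        # An empty directory falls back to the epoch, i.e. reads as very old.
        newest = max((p.stat().st_mtime for p in backup_dir.iterdir() if p.is_file()),
                     default=0.0)
        return datetime.now(timezone.utc) - datetime.fromtimestamp(newest, timezone.utc)

    age = newest_backup_age(Path("/backups/nightly"))  # path is illustrative
    if age > RPO:
        # Older than the RPO: a disaster right now loses more data than agreed.
        raise SystemExit(f"RPO breached: newest backup is {age} old")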
From here you will be able to at least define a fairly good list of business requirements and weigh the service options available.
