
Lessons from the field

01 Aug 2011

Before the Christchurch earthquakes, a lot of businesses took the attitude of "It won't happen to me". Business continuity was a tick-box exercise. Since the earthquakes we are seeing more businesses take action to improve their backup and disaster recovery plans. Much like insurance, the question underpinning backup and disaster recovery planning has changed from "Can I afford to have it?" to "Can I afford not to?". People have also realised that if you can't rely on disaster recovery being there when you need it, it isn't really worth having. The two most significant changes we are now seeing are a greater appreciation of the physical location of protected data, and a move from manual to automated recovery processes.

Data storage: more than one site and city

In terms of the physical storage of data, I think everybody has heard at least one IT horror story from the earthquake where backup tapes were stored conveniently on top of the server. Or how someone had routinely inserted tapes into servers but never checked that the data had been successfully written to them. It is important to ensure data is routinely and successfully stored offsite and that the data is usable. We are also finding that more businesses are looking for a second geographical location to store or back up their data, ensuring their systems are replicated at a data centre situated in another town or city.

Automating recovery: when people aren't available to help

With automation, the question we ask people is "What does your recovery plan look like when your key people are affected by the disaster?" These people may be too busy dealing with the essentials of life to be focusing on your business. It is also important that your provider works with you to build a solution that focuses not only on securing your data, but also on how you will reconnect with that data after a significant incident. It's not likely to be from the office or from a corporate device.
Relying on service providers or integrators for manual services isn't really any better than the DIY approach, as most service providers will be inundated with demands on their time post-disaster. This is a key reason we have invested in network traffic management products and in creating services that enable automated failover between our data centres.

Questions to ask your provider

Business needs and budgets vary greatly. When choosing a backup or disaster recovery service, there is more to consider than price. Investigate the varying levels of management attached to the service. The important questions to ask yourself are:

  1. What outcome are you actually seeking? There is a big difference between just backing up your data and having a full business continuity plan.
  2. Do you want a managed service and what level of management is required? Some services are self-service whereas others are fully managed.
  3. What's your responsibility as the customer? For example, you may be required to run tests to confirm the data is usable.
  4. Is the service dedicated or overcommitted? Some disaster recovery services are cheaper because they are overcommitted or contended, which means they are oversold at a particular ratio. In a widespread disaster the demand will be high. An oversell ratio of 1:4 may mean you won’t get the service you were expecting.
  5. Where is the data housed and in how many locations? You will typically want a service at least 15km away from your primary location, if not 100km.
  6. How will you and your staff access data and services during or after a disaster? Protecting data is only half the solution. The ease of recovering and accessing it is also important.
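The contention question above (point 4) comes down to simple arithmetic. As a rough sketch — the capacity figures below are hypothetical, not from any specific provider — an oversold platform only shows its limits when several customers invoke at once:

```python
def capacity_per_customer(total_capacity: float, customers_invoking: int) -> float:
    """Share of a contended DR platform each customer receives when
    several customers fail over onto it at the same time."""
    return total_capacity / customers_invoking

# A platform sized for one customer's full workload (100 units) but
# sold to 4 customers is overcommitted at a 1:4 ratio.
localised_outage = capacity_per_customer(100.0, 1)  # only you invoke: 100.0
regional_disaster = capacity_per_customer(100.0, 4)  # all 4 invoke: 25.0 each
```

In a localised outage you get the full capacity you tested against; in a widespread disaster like an earthquake, every customer invokes at once and each receives a quarter of it.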
Beyond these questions, your service provider is likely to ask about your Recovery Point Objective (RPO) and Recovery Time Objective (RTO). All this really means is that you need to figure out how much data your business can afford to lose (measured in time) and how long your business can be without the service or data (also measured in time). From here you will be able to define a fairly good list of business requirements and weigh the service options available.
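The RPO/RTO idea can be sketched as two simple checks. This is a minimal illustration with hypothetical figures, not a provider's actual assessment tool: the worst-case data loss equals the gap between successful backups, so the backup interval must fit inside the RPO, and the restore time must fit inside the RTO.

```python
from datetime import timedelta

def meets_rpo(backup_interval: timedelta, rpo: timedelta) -> bool:
    """Worst-case data loss is the time since the last successful
    backup, so the interval between backups must not exceed the RPO."""
    return backup_interval <= rpo

def meets_rto(recovery_duration: timedelta, rto: timedelta) -> bool:
    """Time to bring the service back must not exceed the RTO."""
    return recovery_duration <= rto

# Hypothetical example: against a 4-hour RPO, nightly tape backups
# fail, while hourly replication passes.
nightly_ok = meets_rpo(timedelta(hours=24), timedelta(hours=4))  # False
hourly_ok = meets_rpo(timedelta(hours=1), timedelta(hours=4))    # True
```

Working the comparison in this direction — start from the loss your business can tolerate, then test candidate services against it — is usually easier than starting from a product's feature list.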
