
How to… Implement a workable back-up plan

01 Sep 10

"Cloud services are changing the face of the disaster recovery scenarios we face. Many of our clients have adopted cloud backup services and, by and large, these have offered tremendous benefits, streamlining procedures and improving backup windows. But there are some new factors that need to be considered, in particular how network bandwidth is consumed, and the need to seed backups. Likewise, in the event of a major disaster, no-one would want to be waiting for their data restore to dribble back at a typical internet speed, so it’s vital to plan options to overcome this.
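The point about restores dribbling back at internet speed can be sanity-checked with simple arithmetic: data volume divided by effective bandwidth. A minimal sketch in Python; the 80% link-utilisation figure is an assumption for illustration, not a measured value:

```python
def transfer_time_hours(data_gb, link_mbps, utilisation=0.8):
    """Estimate how long a backup or restore takes over a WAN link.

    data_gb     -- amount of data to move, in gigabytes (decimal units)
    link_mbps   -- nominal link speed, in megabits per second
    utilisation -- fraction of the link realistically usable (assumed)
    """
    megabits = data_gb * 8 * 1000          # GB -> megabits
    effective_mbps = link_mbps * utilisation
    seconds = megabits / effective_mbps
    return seconds / 3600                  # seconds -> hours
```

For example, restoring 500 GB over a 100 Mbps link at 80% utilisation works out to roughly 14 hours, which is why seeding backups and planning alternative restore paths (such as couriered drives) matter.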

Once your data is being retained by a service provider, you need to understand what happens when you cancel your contract. Will the provider be able to delete your data immediately, or does it persist until their backup retention policy finally releases it? This might mean they hold your confidential data for months or even years after the contract is terminated.

It’s tempting to think that by moving to a hosted environment, with all its redundant services and security, you’ve done all you can to avoid a disaster. Hosted environments can offer robustness that most businesses simply can’t afford any other way. But hosting can only reduce risks; nothing can eliminate them.

Disasters come in all shapes and sizes, and the only thing you can predict is that they will happen.”

"In many organisations, information technology services underpin the business. When they fail, there is the potential for operations to cease, commercial functions to be interrupted and contractual commitments to be broken. Highly critical environments (e.g. medical, security, financial, transport) are sensitive to the slightest interruption, and prolonged outages are unthinkable. In these environments, it is irresponsible for IT executives not to put appropriate redundancy and processes in place. Many boards are now taking a direct interest in this.

The key aspect of any continuity or disaster recovery plan is that it is shown to work on a regular basis. There’s a saying that it is not the taking of backups, but the restoring that counts, and this is true at all levels.
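One way to make restores count, as the saying above suggests, is to verify them automatically: restore a sample of files and compare checksums against the originals. A minimal illustrative sketch in Python; the flat directory layout is an assumption:

```python
import hashlib
import pathlib

def file_sha256(path):
    """Compute the SHA-256 digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(original_paths, restore_dir):
    """Compare each original file against its restored copy by checksum.

    Returns the list of originals that are missing or corrupt in the
    restore directory; an empty list means the restore checked out.
    """
    failures = []
    for src in original_paths:
        restored = pathlib.Path(restore_dir) / pathlib.Path(src).name
        if not restored.exists() or file_sha256(src) != file_sha256(restored):
            failures.append(str(src))
    return failures
```

Running a check like this on a schedule, rather than annually, is what turns a backup regime into a proven recovery capability.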

Many of Datacom’s customers have disaster recovery plans that are proven on an annual basis, but the best arrangements are those that are exercised on a much more frequent basis (say daily or weekly) and as a consequence, there are no surprises.

Customers with this capability typically maintain two parallel sites, each of which is highly resilient in its own right, with more or less continuous replication of data between them, and the ability to either run processing load shared across the sites or to switch processing between them at will.

Until recently, it has only been financially practical for the largest organisations to invest in these types of infrastructure. However, sophisticated hosting services are driving costs down so quickly that most organisations will soon be able to enjoy complete continuity.

A cloud-based hosting service should provide the key building blocks for this type of environment: multiple enterprise-grade data centres, virtualised processing and storage in each location, replication and load balancing, redundant high-speed backhaul and internet access, multiple points of support, and 24/7 professional operations. Sophisticated environments, which otherwise would take months if not years to deploy, can be established in days.”

"Protect your business network with multiple layers of defence

While most attacks are countered at gateway or mail server level, protection at the workstation level is vital, especially for laptops that employees can use to connect to the Internet from hotels, internet cafés and other unsecured locations.

Establish centralised management to apply security policies across the enterprise

Servers and workstations across the business, at both headquarters and regional offices, should be managed remotely by the same IT team. That team can then apply group-based policies that automatically detect and protect newly connected workstations, while gaining more visibility into the organisation’s security status across multiple locations.

Increase IT departments’ productivity with remote endpoint auditing and management scripts

If a crisis does hit, the time it takes for the business to recover can be drastically reduced by implementing mass remote management, such as the ability to conduct an audit of installed software applications on all systems and terminate or block any malware programs simultaneously. This can also help to identify and report quickly on non-compliant systems that need to be isolated from the rest of the business network.
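As an illustration of what such an audit might look like, this sketch compares an endpoint's installed-software inventory against a blocklist and a required-agents list. The inventory source, list contents and report fields are all hypothetical assumptions for illustration, not a real endpoint-management API:

```python
# Hypothetical lists for illustration only.
BLOCKLIST = {"cryptominer.exe", "keylogger.exe"}
REQUIRED = {"endpoint-agent", "av-scanner"}

def audit_endpoint(hostname, installed):
    """Build a compliance report for one endpoint.

    hostname  -- the machine being audited
    installed -- iterable of installed software names (source assumed)
    """
    installed = set(installed)
    blocked_found = sorted(installed & BLOCKLIST)
    missing_required = sorted(REQUIRED - installed)
    return {
        "host": hostname,
        "blocked_found": blocked_found,       # malware to terminate/block
        "missing_required": missing_required, # agents that must be deployed
        "compliant": not blocked_found and not missing_required,
    }
```

Aggregating reports like this across all endpoints is what makes it possible to identify non-compliant systems quickly and isolate them from the rest of the network.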

Make sure that back-up and restore procedures are automated and rigorously tested

These procedures are crucial in the case of hardware failure or theft, as the need to deploy a new backup solution after the fact can incur higher costs and management efforts from IT staff.

Automation enables the consistent application of security policies, such as removable media scans, or restrictions on running specific applications on endpoints."

"The smart business continuity solution is one that matches your investment to your business requirements. Don’t cut corners, but don’t waste money on a solution that exceeds your needs either. It is a waste to invest in continuous replication for services the business can do without for a day, but it is critical to invest in it for the most vital and time-sensitive services.

To get a business continuity plan that delivers at a cost that makes sense, it is crucial to look at each business system separately and establish its recovery point objective (RPO) and recovery time objective (RTO). The RPO determines how up-to-date the recovered data needs to be, while the RTO is how long you can wait to get the system back in operation.
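The RPO/RTO triage described above can be sketched as a simple mapping from objectives to a backup approach. The tiers and thresholds below are illustrative assumptions only, not an industry standard:

```python
def continuity_tier(rpo_hours, rto_hours):
    """Map a system's RPO/RTO (in hours) to an indicative backup approach.

    rpo_hours -- how much data loss is tolerable (recovery point objective)
    rto_hours -- how long the system can be down (recovery time objective)
    """
    if rpo_hours < 1 and rto_hours < 1:
        return "continuous replication with automatic failover"
    if rpo_hours <= 4:
        return "frequent snapshots replicated offsite"
    if rpo_hours <= 24:
        return "nightly backup to a secondary site"
    return "weekly backup with offsite archive"
```

Classifying each business system this way keeps spend proportionate: only the systems that genuinely cannot tolerate an outage attract the cost of continuous replication.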

The business continuity solution, including the backup and replication regime, has to be designed to meet the required RPO and RTO for each business process. Once you know what has to be achieved, the question is how. Investing in two of everything is wasteful, and backing up to old equipment is suspect: those servers wouldn’t be retired if they were still up to the job. You also really need your back-ups offsite for safety.

A good costing model is one that works like insurance. Rather than being faced with significant capital expense and wearing all the risk yourself, you can pay a monthly premium for your business continuity 'cover'. Because you use shared services, you get access to high-quality, modern infrastructure and technical expertise whenever you need it. The equipment is managed and maintained on your behalf by people who do system recoveries daily.

With the sophisticated and cost-efficient services available these days, every company can have reliable business continuity measures in place.”
