
No more gaps

01 Aug 2009

Proper IT planning ensures business continuity.

Business continuity planning and disaster recovery used to be primarily the objective of IT. However, that has changed as business units have become more aware of the economic and legal jeopardy that can ensue when critical systems fail and are not quickly restored.

Today’s business continuity plans are being closely scrutinised by senior management, who have a growing demand for real-time information. Globalisation is putting pressure on businesses to be available all day, every day and it is up to the IT departments to figure out how to supply the necessary data and applications to support this always-on environment.

The key for any IT department is to carefully plan and execute a data management strategy that ensures the availability of mission-critical data and avoids costly downtime. Technologies such as replication, clustering and mirroring play a crucial role in this process. However, business continuity is more than just disaster recovery (DR); it also encompasses high availability of mission-critical applications and data. Different systems have different availability requirements, and not all systems need uptime greater than 99.9%.
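Those availability percentages translate into concrete annual downtime budgets; the arithmetic is simple enough to sketch (the figures follow directly from the percentages themselves):

```python
# Allowed downtime per year for common availability targets.
# Illustrative arithmetic only.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def annual_downtime_minutes(availability: float) -> float:
    """Maximum minutes of downtime per year at a given availability level."""
    return MINUTES_PER_YEAR * (1 - availability)

for label, availability in [("99%", 0.99), ("99.9%", 0.999),
                            ("99.99%", 0.9999), ("99.999%", 0.99999)]:
    print(f"{label:>8} uptime -> {annual_downtime_minutes(availability):.1f} min/year")
```

At 99.9% a system may be down for roughly 8.8 hours a year; at "five nines" the budget shrinks to about five minutes, which is why only truly mission-critical systems justify the cost of that target.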

Database replication

Business analysts Intelligent Solutions define database replication as enterprise software that enables companies to copy and move data bi-directionally from one database to another at a transaction level in real time. This is accomplished by reading the database log files, capturing and copying all data changes, and delivering them to distributed database targets without regard to distance limitations. This makes replication a critical piece of the real-time data management, data integration and business continuity puzzle.
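The log-reading mechanism described above can be sketched in a few lines. Everything here is a hypothetical simplification: the record layout, table names and in-memory "target" stand in for vendor-specific transaction-log formats and a real standby database.

```python
# Minimal sketch of log-based change data capture: read the source
# database's transaction log, pick up changes past the last checkpoint,
# and replay them in order on a target. All names are illustrative.
from dataclasses import dataclass

@dataclass
class LogRecord:
    lsn: int          # log sequence number: position in the transaction log
    table: str
    operation: str    # "INSERT", "UPDATE" or "DELETE"
    row: dict

def capture_changes(log, last_applied_lsn):
    """Yield committed changes newer than the target's last checkpoint."""
    for record in log:
        if record.lsn > last_applied_lsn:
            yield record

def apply_to_target(target, changes, last_applied_lsn):
    """Replay captured changes on the target, in log order; return the new checkpoint."""
    for change in changes:
        table = target.setdefault(change.table, {})
        key = change.row["id"]
        if change.operation == "DELETE":
            table.pop(key, None)
        else:  # INSERT and UPDATE both upsert the full row image
            table[key] = change.row
        last_applied_lsn = change.lsn  # checkpoint allows resumption after restart
    return last_applied_lsn
```

The checkpoint is what makes this resilient: after an interruption, the capture process resumes from the last applied log position rather than re-scanning the whole log.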

But replication has also become important for facilitating efficient reporting and data distribution processes. To deal with performance issues caused by exponential data growth, IT departments are using replication to copy data from transactional database systems onto separate reporting servers. This allows them to run processor-intensive reports against the reporting servers without affecting online transaction processing (OLTP) system performance. Replication ensures that these reports contain completely accurate and up-to-date information, because the tools capture all changes in the OLTP systems as they happen and copy them in real time to the report server.

Disk mirroring

Disk mirroring represents a different angle on data protection and is typically used in conjunction with backup. Mirroring technology continuously copies newly created or edited data to a secondary server. A mirrored copy keeps no history of previous versions; it simply overwrites the copy with the updated data. In a DR context, mirroring data over long distances is also referred to as storage replication. Because it writes synchronously, mirroring effectively prevents loss of data stored on the disk.
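Why synchronous writing prevents data loss can be shown with a minimal sketch: the application receives an acknowledgement only after both copies are durable. The `Disk` class is a stand-in for a real block device, not any vendor's API.

```python
# Sketch of synchronous mirroring: a write is "committed" only once
# BOTH the primary and the mirror have stored it, so a failure of
# either site cannot lose an acknowledged write.

class Disk:
    """Stand-in for a durable block device."""
    def __init__(self):
        self.blocks = {}

    def write(self, block_no, data):
        self.blocks[block_no] = data  # pretend this write is durable

class SynchronousMirror:
    """Every write goes to both disks before it is acknowledged."""
    def __init__(self, primary, mirror):
        self.primary, self.mirror = primary, mirror

    def write(self, block_no, data):
        self.primary.write(block_no, data)
        self.mirror.write(block_no, data)  # block until the mirror confirms
        return True                        # only now is the write acknowledged
```

A corollary follows directly: because the mirror faithfully duplicates every write, a corrupt block written on the primary is copied to the mirror as well. Asynchronous variants acknowledge after the primary write and ship changes later, trading a window of potential data loss for lower write latency over distance.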

However, mirroring cannot prevent disk corruption. Another disadvantage of storage replication as a stand-alone DR tool is the lack of a stand-by live database as provided by database replication. In the case of storage replication the database is cold. This results in significantly longer downtime, typically one to two days, in a disaster scenario. Existing hardware cannot be leveraged during the failover either. In order to achieve zero data loss, organisations are wise to combine mirroring with database replication to leverage the best from both worlds.

Database clustering

As a site-specific business continuity solution, database clustering implements clusters of highly available and scalable systems at the database level. According to Bloor Research, organisations adopt it for three main reasons: performance, because process loads are spread across multiple nodes; scalability, because a bigger system delivers better performance; and high or continuous availability, to protect the IT infrastructure against unplanned outages.

For many years, clustering meant physically moving a failed process or application from one hardware node in a cluster to another and restarting it there, a technique known as instance relocation. The problem was that application software running on these clusters could not take advantage of the additional instances for availability – that is, run on multiple instances simultaneously – so the extra nodes often sat idle or under-utilised. This made for costly investments in hardware and lengthened the time to a system's return on investment.

These challenges have been overcome by today’s high availability (HA) technology. The major difference between HA and instance relocation occurs at the database layer: by running two database instances simultaneously – one on each node of the cluster – HA eliminates the need to restart the database on the surviving node. Database clustering suits mission-critical applications that must be protected from downtime; typically, the total length of any outage is measured in minutes or even seconds. When integrating clustering into overall business continuity plans, IT departments should look for a solution with low administrative overhead and capabilities such as automated load balancing and failover.
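The difference between restarting a database on a surviving node and simply routing around a failed live instance can be sketched with a toy round-robin router. All names are illustrative; real cluster managers add health probes, quorum and connection draining on top of this idea.

```python
# Sketch of HA-style routing over an active/active two-node cluster:
# both nodes run live database instances, so failover is just a routing
# change -- no database restart -- which is why outages are measured in
# seconds rather than the minutes an instance-relocation restart takes.

class ClusterRouter:
    """Round-robin load balancing over the healthy nodes of a cluster."""
    def __init__(self, nodes):
        self.nodes = list(nodes)
        self.healthy = set(self.nodes)
        self._next = 0

    def mark_failed(self, node):
        """Failover: drop the node; surviving instances keep serving at once."""
        self.healthy.discard(node)

    def route(self):
        """Pick the next healthy node for a connection."""
        candidates = [n for n in self.nodes if n in self.healthy]
        if not candidates:
            raise RuntimeError("no healthy nodes: cluster-wide outage")
        node = candidates[self._next % len(candidates)]
        self._next += 1
        return node
```

In normal operation the router also delivers the load-balancing benefit noted above: work is spread across both nodes instead of leaving one idle as a cold standby.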

Real-world examples

Replication, mirroring and clustering have all proven their value to public and private organisations that have integrated them into their business continuity plans. One of the largest commercial banks in China, Shanghai Pudong Development Bank (SPDB), provides consumer and corporate banking as well as treasury and market services. The bank turned to Sybase Mirror Activator – a hybrid solution that complements existing disk mirroring or storage replication with database replication – to protect critical business information.

This has dramatically reduced failover time for SPDB’s database applications, thanks to the live standby database that can support re-routed traffic within minutes, compared with the hours of time-to-recovery required by cold standby systems. 
