
Zerto - The fresh face of business continuity

15 Sep 2016

Zerto was founded in 2009 to solve a problem: existing business continuity and replication solutions weren't built with virtualisation in mind, which meant they weren't as flexible as modern businesses needed them to be.


Traditionally, businesses turned off their production systems or accepted degraded performance while a backup ran, typically late at night. This became more of a pain point as society came to expect 24-hour uninterrupted availability. In response, business continuity solutions appeared, replicating all of an enterprise's applications and data to a separate site. These redundant sites sat ready to take over in the event of an issue. Replication itself became faster over the years and nowadays happens continuously and in real time.

A number of IT vendors provide this continuous replication at the hardware level, which means buying a complete redundant set of your storage hardware for the secondary site. Clearly, this is good for those selling storage hardware, but it's an inefficient use of resources for something seldom used.

A new approach

Zerto's solution is built from the ground up to be virtualised. Instead of being hardware dependent or running inside the virtualised environment, it operates in the middle, at the same level as the virtualisation management tools. It supports both VMware's and Microsoft's hypervisors.

Their focus is on businesses with around 25 virtual machines or more, which equates to enterprises, datacenter operators and cloud service providers.

They don't provide datacenter infrastructure or hardware themselves; they sell a very clever and highly automated software tool. The tool is simple to set up and easy to manage, but behind the scenes it does some incredibly complex magic.

They charge per virtual machine and offer perpetual, annual or monthly usage-based pricing options. The latter is ideal for organisations with significant growth planned.

The first version of their tool was launched in 2011. A year later they added support for multi-site and multi-tenant replication. This built upon their initial virtualisation focus and made the tool ideal for most cloud service providers. In fact, their solution is often used by cloud service providers to offer Disaster Recovery as a Service (DRaaS).

Because the tool provides continuous replication and is hardware independent, it also lends itself to use as a migration tool. For example, for a client wanting to move a production environment from on-premises to a cloud service provider, the process can be highly automated and graceful, with no downtime or lost data.

The story is getting better

In May 2015 Zerto released version 4.0, which introduced support for Amazon Web Services. In March 2016 version 4.5 followed, adding Journal File Level Recovery (JFLR). JFLR enables recovery of a single file, or a handful of files, from the last two weeks' worth of replication data. This sounds simple but is an incredibly complex feat. Version 5.0, due for release later this year, will extend this JFLR window to four weeks.

Additionally, version 5.0 will bring support for Microsoft Azure and one-to-many replication.

One-to-many replication opens up huge new potential. In effect, it allows a user to have two or more redundant systems or sites. This level of redundancy means that Zerto can offer solutions to the biggest players running massive cloud datacenters.

In summary

Clearly, Zerto is doing incredibly well. They've tapped into the growth of virtualisation and offer a more flexible and innovative solution than their competitors. The word 'disruptor' is a great description. They've made an elegant solution for a historically complex area.

Even after six years at it, they're still growing fast, with revenues doubling each year. They now have hundreds of cloud service providers using their solution and employ over 500 people.

What I like most about Zerto, though, is that they're customer-centric. They're thinking about how they can help their cloud service provider clients offer new services to the ultimate end user.
