For the past 20-plus years, storage vendors have been telling us we need Tier 1 and Tier 2 storage for our data, and that all Tier 1 data must reside on our most expensive (and supposedly most reliable) disks.
Every three to five years, those same storage vendors turn up and explain that storage has become cheaper but capacities have grown and we need more performance – so we must perform a forklift upgrade, replacing all our perfectly good, working storage with new storage and adding some flash for extra performance.
Of course, we also need a local replica and a remote DR site (now three times the hardware cost), plus their replication software (which works only with their hardware), their de-dupe, their compression – and the list goes on.
Oh, and let’s not forget everything has to be configured as RAID, so now we need nearly double the disk. Interesting.
Times are changing. Years ago, Vendor A had better replication software than Vendor B; Vendor C’s de-dupe was incredible (so Vendor A bought Vendor C); Vendor D developed a performance product based on flash; and Vendor B developed great snapshot technology (which, by the way, required that we purchase more Vendor B products).
And eventually Vendor A had four or more different storage products, none of which could talk to each other, let alone to any other vendor’s storage product.
Interestingly, Vendors A, B, C (de-dupe, now part of A) and D today talk about how they are software companies – but we have to buy their hardware to make the software work.
Really? How can they be software companies? Even better, they claim to support Software-Defined Storage (SDS), but again only on their hardware. SDS 2.0 is all about decoupling data from the hardware layer to deliver true data freedom, helping a business transform and grow.
Why won’t they sell you the software only? Quite simply because their revenue would plummet and their share price would fall even further. How else can they get users to spend three, four, five – sometimes even ten – times more than the exact same disk drive or SSD/flash drive costs online?
As the chart below illustrates, the days are nearing an end for traditional SAN and NAS products (light green).
Perhaps it is now time to explore enterprise software solutions such as Software-Defined Storage 2.0 (the 2.0 is important, as it allows true hardware independence for data) or Objective-Defined Storage, which takes SDS 2.0 to the new, higher levels of reliability, performance, scalability and value required by the new enterprise.
So let us circle back to the beginning: Your storage vendor has been insisting that you need Tier 1 storage and Tier 2 storage. Why?
Shouldn’t your applications control where data is placed? Active and hot data should sit on your fastest storage – RAM, Optane (Intel’s new high-performance SSD technology), SSD or perhaps your fastest spinning disks; warm data should move to slightly lower-performance storage; and cold data should move to high-capacity, low-cost disks or even the cloud, including AWS, Azure and Google.
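To make the idea concrete, here is a minimal sketch of an access-recency placement policy of the kind described above. The tier names and age thresholds are entirely hypothetical illustrations, not taken from any vendor’s product:

```python
from datetime import datetime, timedelta

# Hypothetical tiers, ordered fastest to cheapest. The names and
# thresholds below are illustrative assumptions only.
TIERS = [
    ("ram_cache",     timedelta(minutes=10)),  # hot: touched in the last 10 minutes
    ("nvme_ssd",      timedelta(hours=24)),    # warm: touched in the last day
    ("spinning_disk", timedelta(days=30)),     # cool: touched in the last month
    ("cloud_archive", None),                   # cold: everything else
]

def choose_tier(last_access: datetime, now: datetime) -> str:
    """Pick a storage tier based on how recently the data was accessed."""
    age = now - last_access
    for tier, max_age in TIERS:
        if max_age is None or age <= max_age:
            return tier
    return TIERS[-1][0]
```

A real policy engine would weigh far more than recency (IO patterns, application objectives, cost), but even this toy version shows the decision moving out of the storage frame and into software.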
In the new enterprise, tier 1 and 2 storage ceases to be relevant. Data needs to be always on and available. Reliability should be configurable and exceptional for all data.
Doesn’t it make sense that your absolutely business-critical Oracle database should have up to eight Live Instances – perhaps two locally, two in a remote data centre, one in a DR site, and maybe one more in each of AWS, Azure and Google?
If one Live Instance fails, another takes over instantly. They are not copies of data – they are actual, real Live Instances. You would need to lose all Live Instances at the same time to suffer downtime.
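As a back-of-envelope illustration of why this matters: if instance failures were independent (a simplifying assumption – real correlated outages are worse), the chance of every instance being down at once shrinks exponentially with the number of Live Instances:

```python
def downtime_probability(p_instance_down: float, n_instances: int) -> float:
    """Probability that all instances are unavailable simultaneously,
    assuming (simplistically) independent failures."""
    return p_instance_down ** n_instances

# Example: if each instance is independently unavailable 1% of the time,
# two instances are both down with probability 0.0001, while eight
# instances are all down with probability on the order of 1e-16.
```

The exact figures are illustrative, but the shape of the argument is the point: each additional Live Instance multiplies, rather than adds, to your protection.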
A less critical application perhaps needs only two local Live Instances, with a third in the cloud or at a remote DR site.
However, unstructured data (often up to 80 per cent of data by volume) may need only two Live Instances plus a snapshot that is ready for whichever backup product you prefer.
Remember: you no longer need RAID.
Applications and artificial intelligence, rather than a storage vendor, should control where data is placed. Data should move freely and ubiquitously across all storage – regardless of type, vendor, make or model – depending on the applications’ objectives for that data at that specific point in time.
In fact, how much of your cold data is sitting on expensive storage frames or All Flash Arrays today?
It’s your data, and if you are tired of storage silos, tell your storage vendor you want to free that data from the confines of their storage and watch your business transform.
Not so long ago, everyone was saying that no-one would ever use virtualisation on production servers, or that disk is totally unsuitable for backups.
Interestingly, storage is about to go through a similar change – only a lot faster once people realise that storage silos, forklift upgrades and vendor lock-in benefit only the storage vendor.
They also want you to believe that, magically, after four years and one day all your storage is going to fail – which is why, if you want to keep your storage frame beyond their ‘refresh’ period, they will charge an arm and a leg, because maintaining old spinning disks is supposedly so expensive.
Really? Objective-Defined Storage was designed to cope with multiple simultaneous storage failures and self-heal data so that applications continue working uninterrupted.
Next time a storage hardware vendor explains that they are a software company, ask to buy their software and deploy it across your own hardware – their faces are likely to register confusion. Next time they tell you that you need to refresh your hardware, ask them why.
Silos are great for the farm, but totally unsuitable for the new enterprise’s storage and data requirements. So leave those silos where they belong: on the farm.
Article by Greg Wyman, vice president Asia Pacific, ioFabric