Backup Alternatives

The demise of backup and rise of Intelligent Storage

Within the next five years I believe many of the top 1,000 global businesses will no longer perform backups as a daily task. The reason is the rise of intelligent storage and the sheer volume of data being generated. By 2017 the amount of data generated annually will rise to 7.5 exabytes (1 exabyte = 1,000 petabytes)! The backup task will be performed much less frequently and may only be run once a month on certain data types or applications.

For years the backup windows of many companies have been shrinking, with windows of less than four hours a day now the norm. The data businesses generate is becoming harder to keep under control and, above all, to back up. With the latest launch of HGST's helium-filled drive, hard disk capacities have reached 12TB, and herein lies the problem: a 3U, 16-bay chassis can hold almost 192TB of data, and backing that up using traditional methods (data de-duplication, hard disks or LTO-7 tapes) would take days. A 400MB/s RAID array acting as a disk backup target would take roughly 69 hours to back up 100TB. Nor is the backup the only problem; restore time matters just as much, since recovery is why we back up in the first place. Restoring individual files is fine, but what about complete servers, whether physical or virtual?
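The 69-hour figure above is straightforward throughput arithmetic. A minimal sketch (using the article's example numbers of 100TB at 400MB/s):

```python
# Back-of-the-envelope backup-window calculation using the figures
# cited above: a 400 MB/s disk target streaming 100 TB of data.
def backup_hours(data_tb: float, throughput_mb_s: float) -> float:
    """Hours needed to stream data_tb terabytes at throughput_mb_s MB/s."""
    seconds = (data_tb * 1_000_000) / throughput_mb_s  # 1 TB = 1,000,000 MB
    return seconds / 3600

print(round(backup_hours(100, 400), 1))  # ~69.4 hours
```

At that rate even a modest 12TB drive takes over eight hours to stream, which is already double the four-hour backup window mentioned above.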

Unstructured Data Types

Today unstructured data accounts for over 70% of the data businesses generate. In the past the aim has been to reduce the amount of data backup devices consume, hence the rise of data de-duplication, where ratios of 50:1 or 100:1 are relatively common on unstructured data that compresses well and contains repetitive characters and symbols. These technologies address the backup window but do not address the growing backup volumes.
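To make the ratios concrete, a small illustrative calculation (the ratios are the ones cited above; the function name is my own):

```python
# Illustrative: physical capacity needed on a de-duplicating backup target
# for a given logical backup volume and deduplication ratio.
def physical_tb_needed(logical_tb: float, dedup_ratio: float) -> float:
    return logical_tb / dedup_ratio

print(physical_tb_needed(100, 50))   # 2.0 TB at a 50:1 ratio
print(physical_tb_needed(100, 100))  # 1.0 TB at a 100:1 ratio
```

The space saving is dramatic, but note that every terabyte still has to be read and processed, which is why the backup window problem remains.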


Backup Software

The price paid for backup software has remained static for many years. Today you can purchase backup software on a per-server or per-capacity licence, along with the relevant application suites. Modern products do have better-looking GUIs, more functionality and better reporting, but the role of backup software is still to move data from source A to destination B. It normally runs daily or weekly, depending on the amount of data, and performs a full system backup. Surely, with all these intelligent backup and application agents, it could provide some form of storage tiering across heterogeneous server platforms and actually manage some of the data it is backing up. Some software companies provide add-ons with this functionality for certain platforms, but none amounts to a fully tiered heterogeneous software package. Having worked in the storage industry for over 35 years, I am still staggered that in 2017 this is still not the case. Another problem with backup software is that once you have chosen your preferred vendor you are locked in for "x" years with a support and maintenance contract that can be incredibly expensive.

Backup software can create a backup of all of the following:

  • Snapshot disks
  • Cloned disks
  • File
  • Delta block changes

Traditional Backup Method

For decades backups have tended to be made to disk or tape libraries on the same LAN, with some data replicated to a DR site and tapes removed and stored offsite. In the event of a disaster, a tape or set of files would be recalled and a bare-metal restore of the whole server could begin. This worked fine until restores started running into many TBs of data.

Technologies such as data de-duplication appliances, faster LTO-7 tape drives and bigger-capacity disk drives certainly help to reduce the backup window.

Backup is not the problem, it’s the storage

In any company, storage is usually purchased to solve a particular problem and as such becomes "dedicated storage". This storage might be hosting VMs, OLTP, Exchange, file and print, a web shop, databases and so on. The deployed storage estate also tends to contain storage from more than one vendor, for example some for NAS, some for Fibre Channel, some for iSCSI. Cost is another driver, since keeping all your data on tier 1 storage gets expensive. The result is storage from multiple vendors that does not interact; even when the storage comes from a single vendor, the chances of different models working together are extremely low, because the functionality and features of the Fibre Channel storage differ from those of the NAS. Herein lies the problem.

The Intelligent Storage of Tomorrow

Clearly, backing up data is becoming a huge headache for businesses, along with the escalating costs of backup software, data management and the ongoing maintenance of hardware and software. Our options therefore need to be looked at in more detail. Some 99% of data resides on the storage platforms themselves, and this is where I believe intelligent storage will start to play a bigger part in how companies deploy storage, rather than building islands of storage to serve particular applications or operating systems. In essence, the storage becomes an extension of the operating system or application.

What if our storage demands require storage that provides any or all of the following?

  • Fibre Channel – 32/16Gb/s
  • Infiniband – 100/40Gb/s
  • SAS – 12/6Gb/s
  • iSCSI – 10/25/40/100GbE
  • NAS
  • Cloud

This would make the storage “unified” and would involve a large network of differing topologies.

The storage needs to be tier 1, affordable, able to provide a minimum guaranteed level of performance to all operating systems and hosts and, above all, able to interact with a wide assortment of applications while remaining aware of their requirements and those of the operating system. It needs to support SSD/flash (solid-state drives) for high-speed caching of data, fast 15k SAS drives and enterprise-class high-capacity nearline drives, while consuming relatively little power and taking up minimal rack space. Other features to consider are clustering, replication, snapshots, thin provisioning, optional data encryption and storage tiering.

Clearly, if such storage existed, businesses could invest more in a highly advanced storage platform that could grow to accommodate everything the business throws at it.

By deploying a clustered storage solution you ensure continuous availability of data and better performance. In addition to this, you could replicate this information to another site using the built-in replication.

If a vendor could provide this feature set, then the money normally budgeted for backup could instead be used to enhance your storage infrastructure.

Let's look at some of the storage features used to manage data.

What is tier 1 storage? A tier 1 storage solution generally means that the storage technology:

  • Delivers greater performance
  • Scales to multiple petabytes
  • Provides 99.999% uptime, i.e. no more than 5.26 minutes of downtime per year
  • Responds to predicted failures before they occur
  • Comes with enhanced support and cover
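The downtime figure in the list follows directly from the availability percentage. A quick sketch of the conversion:

```python
# Converting an availability percentage into allowed downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def downtime_minutes_per_year(availability_pct: float) -> float:
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

print(round(downtime_minutes_per_year(99.999), 2))  # 5.26 minutes ("five nines")
print(round(downtime_minutes_per_year(99.9), 1))    # 525.6 minutes ("three nines")
```

Each extra nine cuts the permitted downtime by a factor of ten, which is why five-nines systems demand redundancy at every layer.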

As operating systems and applications gain more features and functionality, a storage solution needs to be operating-system and application aware. This ensures the storage can take regular snapshots of the data and offload processing tasks that move data around the infrastructure, for example for storage tiering or DR purposes. Another benefit of intelligent storage is its ability to provide IOPS to key applications on demand.

Thin provisioning enables storage volumes to appear bigger than the physical disk space allows, so you need to be careful when using this technology, as you can over-provision your data volumes. Used well, however, it can automatically and dynamically grow and shrink data volumes as the application or operating system demands.
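The over-provisioning risk described above can be tracked as a simple ratio of logical volume sizes to physical pool capacity. A minimal sketch (the function name and example sizes are my own, for illustration):

```python
# Hypothetical sketch of the thin-provisioning risk: the sum of logical
# (advertised) volume sizes may exceed the physical pool behind them.
def overcommit_ratio(volume_sizes_tb, physical_pool_tb):
    """Ratio of promised capacity to real capacity; > 1.0 means over-committed."""
    return sum(volume_sizes_tb) / physical_pool_tb

# Three 40 TB thin volumes backed by a 60 TB pool: 2x over-committed.
print(overcommit_ratio([40, 40, 40], 60))  # 2.0
```

Any ratio above 1.0 means the storage is writing cheques the pool cannot cash if every volume fills up, so alerting on this number is essential.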

Data encryption is also becoming a concern for many businesses. An important consideration is the option to encrypt at the volume level or at the drive level. This functionality meets the demands on data protection made by legal, corporate and government organisations.

Another key feature of a storage solution should be automated storage tiering, whereby infrequently accessed data is automatically moved from the fastest storage tiers down to, for example, nearline storage, freeing up the fastest tier to handle critical data.
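Real tiering engines work at the block level inside the array, but the policy itself is simple: demote whatever has not been touched recently. A file-level sketch of that idea, with the directory path and 30-day threshold as illustrative assumptions:

```python
import os
import time

# Illustrative sketch of a tiering policy: list files in a fast-tier
# directory that have not been accessed for max_idle_days, as candidates
# for demotion to nearline storage. Paths and threshold are assumptions.
def files_to_demote(fast_tier_dir: str, max_idle_days: int = 30):
    cutoff = time.time() - max_idle_days * 86400  # seconds in a day
    stale = []
    for name in os.listdir(fast_tier_dir):
        path = os.path.join(fast_tier_dir, name)
        if os.path.isfile(path) and os.path.getatime(path) < cutoff:
            stale.append(path)
    return stale
```

An array-based implementation would make the same decision per extent or block and move the data transparently, with no change visible to the host.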

Taking a snapshot or replicating data should not impact our applications or operating systems, and we should be able to take a number of snapshots every day in order to roll back in the event of data loss or system failure. Replication should be asynchronous or synchronous depending on the links, and it should be possible to replicate a large number of volumes, again with no impact on the systems reliant on the storage.

Above all, data protection should be the number one feature; any storage system should provide it, including the ability to protect data even in extreme circumstances when no replacement drives of comparable size or interface are available. An integrity check should also be made on all data written to disk, ensuring the data is 100% correct.

Using SSD/flash as a caching tier speeds up data transfer between hosts and hard disks, massively improving performance for all applications.

Storage Innovation – Solution Available

So we have shown that, in order to say "goodbye to backups", a storage technology needs to exist that provides the features and functionality every business needs. This storage solution does exist! If you want to know more about storage that will alleviate many of your IT headaches, call us on 01256 782030 or email: sales.
