Think of all the tools that you have at your disposal to help your customers improve their backup processes: deduplication, snapshots, replication, high-speed networks and high-speed tape devices. You could also throw out the backup software entirely and start over from scratch. But what if you could save your customers money without replacing the kitchen sink? An incremental backup strategy could make that happen.
The incremental backup has been around for as long as we've had the backup process. The typical backup schedule in the early days was a full backup once a week, once a month or even once a quarter, with incremental backups in between. The relatively small capacity of tape media drove this strategy. But as tape media capacity grew, reducing the number of tapes it took to store an entire full backup, it became easier to do full backups more frequently since tapes could be changed out less often.
Also a factor in the move toward full backups was the recovery issue. As open systems and Windows environments began to increase in number -- carrying with them more data and a slower network -- recovering that data from incrementals became more burdensome. To do a full recovery, the full backup had to be restored first, and then each subsequent incremental had to be restored in order. In the early days of open systems and Windows backups, it often took all night for the data to be restored from tape. Doing a full backup every night instead eliminated the piecemeal approach to recovery, since all of the data was likely on a single piece of media. Autoloaders and tape jukeboxes were rare at the time, and the thought of being able to load one piece of media at night and come in the next day to a recovered server was appealing.
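To see why that restore chain is burdensome, it helps to sketch it. The snippet below is a minimal illustration (hypothetical data structures, not any vendor's API): a restore from incrementals replays the last full backup, then overlays each subsequent incremental in chronological order, so every tape in the chain has to be mounted and read.

```python
def restore(full_backup, incrementals):
    """Rebuild the file set: start from the full backup, then apply each
    incremental's changed files in order. None marks a deleted file."""
    files = dict(full_backup)
    for inc in incrementals:              # oldest first
        for path, data in inc.items():
            if data is None:
                files.pop(path, None)     # file deleted since the last backup
            else:
                files[path] = data        # file added or changed
    return files

# Illustrative example: one full, then two nights of incrementals.
full = {"/etc/hosts": "v1", "/var/log/app.log": "day0"}
incs = [
    {"/var/log/app.log": "day1"},                      # Monday's changes
    {"/var/log/app.log": "day2", "/etc/hosts": None},  # Tuesday's changes
]
print(restore(full, incs))  # {'/var/log/app.log': 'day2'}
```

The point of the sketch is the loop: the more incrementals between fulls, the more media mounts and passes a full recovery requires.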
Even as autoloaders and jukeboxes became more affordable and more popular, the trend of doing multiple full backups remained, typically on a daily basis (at minimum, once per week). That's at least partly because even with the automation of the autoloaders and jukeboxes, the time it took to mount and dismount tape media was considerable. The less mounting and unmounting in the recovery process, the better.
But the downside to frequent full backups -- now and in the past -- is that all the data has to be brought across the network each time a full backup is executed. As data sets increased -- and whose haven't? -- network infrastructure needed to be expanded significantly. Faster and higher-capacity backup targets needed to be created. Many environments could not complete their full backups during the week and were forced to do incremental backups each night and then perform full backups all weekend long. Eventually, when the data set increased beyond the scope of the weekend backup, something else needed to change again, typically an infrastructure upgrade to allow more data to move across the network. Essentially, storage administrators needed to build a vast infrastructure for a once-a-week event. During the week it went virtually unused.
Compounding this is the fact that 80% to 90% of the data contained in most full backups has not changed since the prior full backup. Most data centers were -- and many still are -- doing more full backups than they need, no matter whether they do it once a day or once a week.
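A quick back-of-the-envelope calculation shows what that redundancy costs on the wire. The numbers below are illustrative assumptions, not figures from any specific environment: a 10 TB data set with roughly 10% of it changing between backups (consistent with 80% to 90% remaining unchanged).

```python
# Assumed figures for illustration only.
data_set_tb = 10.0        # total protected data
daily_change_rate = 0.10  # ~10% of data changes per day

# Data crossing the network per week under two schedules:
daily_fulls = 7 * data_set_tb                                   # full every night
weekly_full_plus_incs = data_set_tb + 6 * (data_set_tb * daily_change_rate)

print(f"Daily fulls:        {daily_fulls:.0f} TB/week")         # 70 TB
print(f"Weekly full + incs: {weekly_full_plus_incs:.0f} TB/week")  # 16 TB
```

Under these assumptions, nightly fulls move more than four times the data of a weekly full with nightly incrementals, and the gap widens further as the full-backup interval stretches to a month or a quarter.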
Fast forward to today. We now have disk-based backups in many environments, and with disk, there is no time required to mount the media. Restores are typically no longer bottlenecked at the backup target. Whether the targets are disk- or tape-based, they are now often able to deliver data significantly faster than the network or client can receive the data.
In addition, software has improved. Many backup applications can consolidate the current incremental backups with the prior full backup to create a new full backup, without having to pull all the old, unchanged data across the network again. Other backup applications have adopted a self-leveling technology that automatically load-balances the full backups across the week instead of making it a once-a-week ordeal.
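The load-balancing idea can be sketched in a few lines. This is a hedged illustration of the general technique, not any product's implementation: hash each client to a weekday so that roughly one-seventh of the estate gets its full backup each night while the rest take incrementals. The client names are made up.

```python
import hashlib

WEEKDAYS = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]

def full_backup_day(client_name):
    """Deterministically assign a client's weekly full-backup night by
    hashing its name, spreading fulls roughly evenly across the week."""
    digest = hashlib.md5(client_name.encode()).digest()
    return WEEKDAYS[digest[0] % 7]

for client in ["db01", "web02", "mail03", "file04"]:
    print(client, "->", full_backup_day(client))
```

Because the assignment is deterministic, each client's full always lands on the same night, but no single night carries the entire estate's full-backup load.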
With all these changes, it's time to go back to your customers to suggest a backup assessment to fine-tune the backup application to today's backup targets. Lower the frequency of full backup jobs to maybe once per month or even once per quarter. This will not only reduce the demands on the backup infrastructure, it will also reduce the demands on the backup targets. Your customers will need less network bandwidth, less backup disk capacity, less tape media and fewer tape drives. Leverage a modern backup application that can intelligently create a full backup data set or load-balance full backup jobs across the week. These applications will provide customers with rapid backup jobs without a huge infrastructure investment.
Beyond the opportunity around the backup assessment service, there are also opportunities for you to sell additional products. Being able to consolidate incremental backups into a full backup may be the tipping point for purchasing a new backup application. And, a new backup application might also be the justification for the customer to purchase a new disk backup system or tape library. Finally, it may free up budget dollars for an alternative project that you can get engaged in.
About the author
George Crump is president and founder of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. With 25 years of experience designing storage solutions for data centers across the United States, he has seen the birth of such technologies as RAID, NAS and SAN. Prior to founding Storage Switzerland, George was chief technology officer at one of the nation's largest storage integrators, where he was in charge of technology testing, integration and product selection. Find Storage Switzerland's disclosure statement here.