Right now, storage resellers are sitting with clients, listening sympathetically to sagas of backup systems that can no longer keep pace with primary storage growth. Backup is an expensive insurance policy, and as long as budgets strain to fund primary storage growth, backup investments are overlooked. An architecture that served business needs just a few years ago now drowns in failed backups, missed backup windows and overworked operators. Meanwhile, the backup administrator is caught in the paradox of under-funded infrastructure and unprotected data.
If storage budgets were unlimited, you could forklift replace everything! The reality is that our customers need an incremental approach, taking advantage of improvement opportunities that yield the most "bang for the buck". Value-added resellers (VARs) have an opportunity to help customers attack data backup performance problems methodically. This tech tip walks through a few methods to improve backup performance and build trust with customers.
Improve data backup throughput
The life of a backup operator is cruelly challenging, full of seemingly immovable obstacles. Operators feel that many things are outside their sphere of influence and cannot be changed. For example, even though they know that unimportant data gets a full backup every night, they may not feel empowered to change the schedules or file-include lists. Instead, they need to work within the bounds of what they can control.
The first stop in improving data backup time is to tune parameters. Every backup software application is different, but there are mountains of white papers available to help optimize everything from TCP window size to tape-drive firmware and drivers. Look to the backup software vendors' Web sites for performance tuning guides.
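As a concrete illustration of one such tunable, the sketch below adjusts the socket send buffer, which bounds the effective TCP window. This is a minimal, hypothetical example; the 4 MB figure is illustrative only, not a recommendation, and the operating system may cap the value you request.

```python
import socket

# Minimal sketch: inspect and raise the TCP send buffer (which bounds the
# effective TCP window). The 4 MB value is illustrative, not a recommendation;
# the kernel may silently cap it at a system-wide maximum.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
default_snd = sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 4 * 1024 * 1024)
tuned_snd = sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
print("default:", default_snd, "tuned:", tuned_snd)
sock.close()
```

Real tuning of this kind is normally done through the OS (e.g., kernel network parameters) and the backup application's own configuration, per the vendor's guide.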
When the backup infrastructure is performing optimally, and the bottleneck appears to be limited network bandwidth, it may be time to enable client-side data compression. Results will depend on the type of data to be compressed and the available client CPU capacity, but reducing the amount of data that hits the wire will have a positive effect. Beware that client-side compression can backfire. Slow or overloaded clients may take more time to compress the data than it would take to send data over the wire uncompressed.
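The compress-or-not tradeoff above can be reduced to simple arithmetic: compression wins only when compression time plus compressed transfer time beats sending the data raw. The helper below is a hypothetical sketch of that break-even check, not part of any backup product.

```python
import time
import zlib

def should_compress(data: bytes, link_mbps: float, level: int = 6) -> bool:
    """Hypothetical heuristic: compress only if (compression time +
    compressed transfer time) beats sending the data uncompressed."""
    start = time.perf_counter()
    compressed = zlib.compress(data, level)
    compress_s = time.perf_counter() - start

    wire_bytes_per_s = link_mbps * 1_000_000 / 8
    plain_s = len(data) / wire_bytes_per_s                    # raw transfer
    comp_s = compress_s + len(compressed) / wire_bytes_per_s  # CPU + transfer
    return comp_s < plain_s

# Highly compressible data over a modest link: compression usually wins.
sample = b"log line repeated many times\n" * 50_000
print(should_compress(sample, link_mbps=100))
```

On a slow or overloaded client, `compress_s` grows and the same check flips to False, which is exactly the backfire scenario described above.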
Data backup software features
Backup products these days are offering new features with every release. It is common, however, for IT shops to naively struggle, doing business as usual, when salvation is in the latest code release. For example, incremental forever backup techniques and synthetic full backups may drastically reduce the amount of data that must be sent through the infrastructure. These backup schemes combine multiple incremental backups on the backend so that restore times are not penalized.
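The synthetic-full idea can be modeled in a few lines. This is an assumed toy model, not vendor code: each backup is a map of file path to file version, and the backend merges incrementals over the last full so clients only ever ship changed files.

```python
def synthesize_full(full: dict, incrementals: list) -> dict:
    """Toy model of a synthetic full backup: merge incremental file sets
    over a base full on the backend. Later incrementals win; a value of
    None is a tombstone for a file deleted since the previous run."""
    synthetic = dict(full)
    for inc in incrementals:
        for path, version in inc.items():
            if version is None:
                synthetic.pop(path, None)   # file was deleted
            else:
                synthetic[path] = version   # new or changed file
    return synthetic

sunday_full = {"/etc/hosts": "v1", "/var/log/app.log": "v1", "/tmp/old": "v1"}
monday_inc  = {"/var/log/app.log": "v2"}
tuesday_inc = {"/tmp/old": None, "/home/report.doc": "v1"}

print(synthesize_full(sunday_full, [monday_inc, tuesday_inc]))
```

A restore then reads the single synthesized full rather than replaying the whole incremental chain, which is why restore times are not penalized.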
Data deduplication is all the rage these days. Blogs, industry magazines and storage conferences all suggest that dedupe is going to save the day. Many customers, however, don't understand how it works or when to apply it. Remote office backup needs are demanding, yet the bandwidth apportioned to remote sites is usually a fraction of what is available in the data center. Remote office data is also highly duplicated across sites. A centralized dedupe backup infrastructure like Symantec PureDisk, EMC Avamar or Data Domain may be the most cost-effective way to solve the remote office backup problem.
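For customers who ask how dedupe actually works, the toy sketch below shows the core idea: split data into chunks, fingerprint each chunk, and store each unique chunk only once. This is a simplified fixed-size-chunk model for illustration; shipping products typically use more sophisticated variable-size chunking, and all names here are assumptions.

```python
import hashlib

CHUNK = 4096  # fixed chunk size for this toy example

def dedupe_store(data: bytes, store: dict) -> list:
    """Split data into chunks, keep each unique chunk once under its
    SHA-256 fingerprint, and return the 'recipe' of fingerprints needed
    to reconstruct the original stream."""
    recipe = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # a duplicate chunk costs nothing new
        recipe.append(digest)
    return recipe

# Two remote offices backing up largely identical corporate data.
store = {}
site_a = b"corporate template " * 1000 + b"site A edits"
site_b = b"corporate template " * 1000 + b"site B edits"
recipe_a = dedupe_store(site_a, store)
recipe_b = dedupe_store(site_b, store)

stored = sum(len(c) for c in store.values())
print(f"raw {len(site_a) + len(site_b)} bytes -> {stored} bytes stored")
```

The shared chunks between the two sites are stored once, which is why dedupe pays off so well when remote office data is highly duplicated.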
Within the data center, some applications have grown beyond their ability to back up over the network at all. LAN-free solutions exist in all of the major backup platforms, allowing larger servers to back up directly to the storage media without traversing the network.
When everything in the backup infrastructure is running smoothly, and it is time to add more tape drives for more bandwidth, your customers may want to consider backup-to-disk options. Whether it be virtual tape or a simple SATA disk storage pool, backup to disk generally provides better throughput and flexibility than tape drives, and restore times are almost always shorter.
Reduce amount of data to be backed up
Eventually, controlling the cost of backups will require some involvement with the data owners. The 21st century data explosion is partially due to the fact that old data is rarely deleted. Several products on the market today are designed to drive the classification and management of stored files. Tools like Abrevity, Kazeon and others can deliver significant improvements by reducing the amount of data that needs to be backed up.
When the data can't be deleted, it may make sense to archive it to a platform that doesn't require backups. EMC's File Extender can transparently migrate aged files to a content-addressed storage (CAS) device, such as EMC Centera. Data archived to the Centera is protected in such a way that separate backups are not required.
Reduce the backup window's impact on applications
So the backup infrastructure is running optimally, the data is reduced as much as possible, and the backup window is still not met? It is now time for snapshot-based backups. Most storage arrays (EMC, NetApp, HDS, IBM, etc.), many volume managers (Symantec's Veritas Foundation Suite) and some SAN-based products (EMC RecoverPoint) can take point-in-time copies of data sets so that backups can complete during off-peak periods, or off the production server entirely.
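The mechanics behind most of these products is copy-on-write: the snapshot freezes a point-in-time view, and only blocks overwritten after that moment are copied aside. The sketch below is a toy model with assumed names; real snapshots are taken by the array, volume manager or SAN appliance, not in application code.

```python
class Volume:
    """Toy block device supporting copy-on-write snapshots."""

    def __init__(self, blocks):
        self.blocks = list(blocks)
        self._snapshots = []

    def snapshot(self):
        """Freeze a point-in-time view; no data is copied yet."""
        snap = {"origin": self, "saved": {}}
        self._snapshots.append(snap)
        return snap

    def write(self, index, value):
        # Copy-on-write: preserve the old block for any open snapshot
        # the first time it is overwritten, then apply the new write.
        for snap in self._snapshots:
            snap["saved"].setdefault(index, self.blocks[index])
        self.blocks[index] = value

def read_snapshot(snap):
    """Reconstruct the frozen view: saved old blocks, else current ones."""
    return [snap["saved"].get(i, b) for i, b in enumerate(snap["origin"].blocks)]

vol = Volume(["a", "b", "c"])
snap = vol.snapshot()   # the backup reads this frozen view...
vol.write(1, "B")       # ...while the application keeps writing
print(read_snapshot(snap), vol.blocks)
```

Because the backup reads the frozen view, the production application sees only the brief cost of copying changed blocks rather than a full backup window.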
Backup infrastructures have generally evolved over long periods of time. They are often the source of great pain, yet, ironically, are often neglected. Leveraging a few of these ideas for reducing backup windows will go a long way toward reducing the client's pain, and improving your relationship with them.