Complicating this is the fact that the typical overprovisioning of the physical server environment can easily make its way to the virtual world during the migration that occurs when establishing the virtual environment.
So how can you help your customers plan for storage requirements in a new virtualized server environment?
Inventory the systems to be migrated
The first step in storage capacity planning for customers' server virtualization environment is to create an inventory of the systems that are going to be migrated into the virtualized server environment. This can either be done by manually auditing each server to determine how much capacity each is using, or it can be done via a third-party capacity monitoring tool that will automate much of that work for you. Many of these tools can either be sold to the customer or, in the case of an audit, licensed on a per-use basis.
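Whichever route is taken, the end product of the audit is the same: a per-server list of used capacity that can be rolled up. A minimal sketch of that roll-up, assuming a hypothetical hand-collected inventory (a monitoring tool would automate the collection, but the math is identical):

```python
# Hypothetical inventory data: server name and capacity actually in use (GB).
# A third-party capacity monitoring tool would gather this automatically.
import csv
import io

inventory_csv = """server,used_gb
web01,120
web02,110
db01,800
mail01,450
"""

def total_used_gb(csv_text):
    """Sum the used capacity across all servers in the inventory."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return sum(float(row["used_gb"]) for row in reader)

print(total_used_gb(inventory_csv))  # 1480.0
```

This total of capacity actually in use, not raw provisioned capacity, is the starting point for the efficiency calculations that follow.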
Determine what storage efficiency techniques will be used
Once the data is collected, the next step is to factor in any storage efficiency strategies the customer would like to apply. Three data reduction techniques can typically be applied successfully in a server virtualization environment, and customers tend to be more receptive to them during a virtualization project. With the first two -- data compression and deduplication -- actual results will vary, but virtualized server environments typically contain a high degree of redundant operating system data, so space savings of 50% or more are not uncommon.
The third type of data reduction technique is writable snapshots, sometimes called cloning. With a writable snapshot there's essentially a golden master of a server image. Then that server master is snapshotted, and the new snapshot volume is assigned to another server. Unlike typical snapshots, which are read-only, this type is read/write, which means its configuration can be customized for a particular virtual instance. The only storage capacity that is required is the capacity to store the changed segments, eliminating the need to potentially store terabytes of redundant data.
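The space advantage of cloning is easy to quantify. A short sketch, with purely hypothetical numbers, comparing full copies of a golden master against writable snapshots that store only each clone's changed segments:

```python
# Illustrative arithmetic only: space consumed by writable snapshots (clones)
# versus full copies of a golden master image. All figures are hypothetical.
def clone_capacity_gb(master_gb, changed_gb_per_clone):
    """One golden master plus only the changed segments of each clone."""
    return master_gb + sum(changed_gb_per_clone)

master = 40                # golden master server image, GB
deltas = [2, 3, 1, 4, 2]   # changed data per cloned virtual instance, GB

full_copies = master * (1 + len(deltas))        # master + 5 full copies
with_clones = clone_capacity_gb(master, deltas)  # master + changed segments

print(full_copies, with_clones)  # 240 52
```

Five full copies of a 40 GB image would consume 240 GB; the same five servers as writable snapshots consume 52 GB in this example.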
In addition to those three techniques, another feature, thin provisioning, will improve utilization rates by making sure that a VM consumes only the space it actually uses. One caveat for resellers: Not all migration tools will "migrate thin." Instead, they'll migrate a server image block for block and write blocks marked for deletion as if they contained valid data, breaking the thin provisioning model. Make sure the storage system your customer is using can detect these zeroed pages of data and avoid writing them during migration.
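The zero-detection idea can be sketched in a few lines. This is a simplified illustration, not any vendor's implementation: a thin-aware migration walks the image in blocks and skips any block that contains only zeroes instead of copying it block for block.

```python
# Simplified sketch of thin-aware migration: only blocks containing
# non-zero data are written to the target; all-zero blocks are skipped
# so the thin-provisioned volume stays thin. Block size is hypothetical.
BLOCK_SIZE = 4096

def blocks_to_write(image: bytes, block_size: int = BLOCK_SIZE):
    """Yield (offset, block) only for blocks that hold non-zero data."""
    for offset in range(0, len(image), block_size):
        block = image[offset:offset + block_size]
        if block.rstrip(b"\x00"):  # empty after stripping zeroes => skip
            yield offset, block

# A toy "server image": two data blocks separated by three zeroed blocks.
image = b"A" * BLOCK_SIZE + bytes(BLOCK_SIZE * 3) + b"B" * BLOCK_SIZE
written = list(blocks_to_write(image))
print(len(written))  # 2 of the 5 blocks are actually written
```

A block-for-block tool would copy all five blocks, inflating the thin volume to its full provisioned size.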
Calculate the impact of storage efficiency techniques
With decisions made on these four factors that affect storage capacity planning, you can intelligently plan how much capacity your customer will require. In the past it was acceptable to add up the customer's total (used and unused) capacity, add 50% to accommodate growth and propose the result. Today's budgets won't allow that approach, and the efficiency techniques above can drive the actual need so far down that a solution sized with the old model will look wildly oversized.
So, when recommending capacity, first set expectations with the customer: While you will aim for a more accurate calculation than simply doubling what they have, there are significantly more variables involved than in the past.
In addition, you'll need to analyze the information you've captured about current storage capacity use. Look for existing data that will be redundant in the new environment. For example, if there are 50 Windows servers and your customer will use either cloning or deduplication, the capacity requirements of the operating systems on those servers should not be counted 50 times, though counting them more than once makes sense. Even within like servers, look for redundant application suites. For instance, are there redundant database servers or email applications? If compression will be applied, look for and measure data sets that are highly compressible. Databases are typically very compression-friendly, sometimes yielding savings of 90% or more.
Finally, look for a migration tool that is thin-aware and a storage system that can detect blocks of storage that have been zeroed out. If these are in your solution, budgeting the actual capacity in use, less the amount to be saved from the storage efficiencies discussed above, should provide a fairly accurate capacity estimate.
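Put together, the estimate reduces to simple arithmetic. The sketch below is a hypothetical roll-up under assumed savings rates -- actual deduplication, compression and growth figures will vary by environment and must come from the analysis above:

```python
# Hypothetical capacity model: start from capacity actually in use,
# apply each efficiency technique's expected savings in turn, then add
# growth headroom. All percentages here are illustrative assumptions.
def estimate_capacity_gb(used_gb, dedupe_savings=0.5,
                         compress_savings=0.3, growth_headroom=0.25):
    after_dedupe = used_gb * (1 - dedupe_savings)      # redundant OS/app data
    after_compress = after_dedupe * (1 - compress_savings)
    return after_compress * (1 + growth_headroom)      # room to grow

# Example: 10 TB in use today shrinks to well under half that.
print(round(estimate_capacity_gb(10_000), 1))  # 4375.0
```

Note that the savings are applied multiplicatively, since compression operates on what remains after deduplication; treating them as independent additive percentages would overstate the reduction.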
Don't be surprised if the capacity of the new storage system is actually less than that of the system it replaces. Explain to your customer the technique used to calculate the new capacity, and make sure they have extra budget in reserve in case the estimate misses -- though if the guidelines above are followed, it shouldn't.
Plan for growth
Beyond the initial capacity plan, it's also important to plan for growth. Even with all the storage efficiency capabilities available today, virtualized server environments are infamous for growing incredibly fast. Make sure both you and the customer understand what a storage upgrade will look like and the ramifications of installing that upgrade. If upgrades of one system are particularly limiting, that's an opportunity to propose an alternative system that may be more upgradable.
Capacity planning is now more complicated than ever. Not only are there multiple virtual servers contained in a single physical server, there are also more tools to control storage growth. You have to be careful to factor all of this into any proposed storage solution. While it's extra work, it gives you a chance to separate yourself from the competition by showing the customer that you're willing to take the time to do it right, which ultimately leads to savings.
About the author
George Crump is president of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. With 25 years of experience designing storage solutions for data centers across the United States, he has seen the birth of such technologies as RAID, NAS and SAN. Prior to founding Storage Switzerland, George was chief technology officer at one of the nation's largest storage integrators, where he was in charge of technology testing, integration and product selection.