Solution provider takeaway: Resellers implementing storage virtualization technology for customers can learn how to best harness the technology to improve storage resource performance and ease provisioning.
While there are two types of storage virtualization technologies on the market, only one has really been fruitful for resellers. Virtualization appliances, the older technology type, are generally sold directly to customers by the likes of Hitachi, IBM, EMC and Network Appliance. System-based storage virtualization, on the other hand, is a good fit for resellers; the technology is easy to implement, doesn't require a Ph.D. to allocate capacity or fine-tune performance, and enables your customers to concentrate on things other than monitoring storage.
The appeal of system-based storage virtualization lies in its simplicity. The technology exists as a core feature of storage hardware from companies such as Compellent and Xiotech, two vendors popular among resellers. In system-based storage virtualization, the drives are allocated either as one big pool or by drive type. For example, there could be three groups: one of 15,000-rpm Fibre Channel drives, the second of 10,000-rpm Fibre Channel drives and the third of 7,500-rpm SATA drives. Like other virtualization technologies, system-based storage virtualization abstracts the hardware from its function, so you don't need to be concerned about what's taking place on individual drives.
Virtualized storage systems are also more powerful and forgiving than nonvirtualized systems. With virtualized systems, there's no need to calculate the optimal number of drives that should go in a RAID group based on the workload you're going to place on that group. Instead, you simply group drives by type and speed, and the system will use every drive of that type as you assign workloads. For example, a database would be assigned to the group of high-speed drives; the virtualized system tends to scatter the data across every drive in that group. By distributing data in this way, the risk of hot spots caused by RAID parity is greatly diminished; the parity is simply spread across all the drives just like any other data set.
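The data distribution described above can be pictured with a minimal sketch: a volume's chunks are spread round-robin across every drive in the chosen tier, rather than confined to a hand-picked RAID group. The tier names, drive counts and chunk size below are all illustrative assumptions, not any vendor's actual layout policy.

```python
# Hypothetical drive tiers, grouped by type and speed as described above.
tiers = {
    "fc_15k": [f"fc15k-{i}" for i in range(12)],    # 15,000-rpm Fibre Channel
    "fc_10k": [f"fc10k-{i}" for i in range(16)],    # 10,000-rpm Fibre Channel
    "sata_7500": [f"sata-{i}" for i in range(24)],  # 7,500-rpm SATA
}

CHUNK_MB = 2  # assumed extent size; real arrays vary

def lay_out_volume(size_mb, tier):
    """Round-robin the volume's chunks across every drive in the tier,
    so data (and, in a real array, parity) lands on all spindles."""
    drives = tiers[tier]
    layout = {d: 0 for d in drives}
    for chunk in range(size_mb // CHUNK_MB):
        layout[drives[chunk % len(drives)]] += 1
    return layout

# A database volume goes to the high-speed tier and touches every drive,
# rather than concentrating load (and parity) on a small RAID group.
db_layout = lay_out_volume(1200, "fc_15k")
assert all(chunks > 0 for chunks in db_layout.values())
```

Because every drive in the tier carries a near-equal share of chunks, no single spindle becomes a hot spot for either data or parity.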
This distribution allows for an unlimited number of volume snapshots without a performance impact. Compare this with traditional storage systems, which typically have a finite limit on the number of snapshots that can be maintained and almost always suffer a significant performance impact as the number of snapshots grows.
With traditional storage systems, provisioning can be a multiweek task: The request has to be made, the storage system analyzed to decide which drives should be part of the new volume, and the volume created and then assigned to the server. With a virtualized system, though, there's no need to figure out which drives should go into a new RAID group; you simply tell the system that you need to create a 200 GB volume, and the system decides the best way to lay out that data.
Thin provisioning -- defining a virtual volume that consumes actual disk capacity only as needed -- can be used with virtualized storage to reduce costs and increase overall utilization. For example, assume your customer's Oracle DBA requests a 500 GB volume but you know that it will take him two years to use that capacity, and in fact the database will need less than 100 GB to get started. Instead of creating a 500 GB volume and wasting 400 GB of disk space plus the cost to power and cool that disk space, you can create a 500 GB virtual volume that consumes only 100 GB of actual capacity as the system comes into production. Both Oracle and the Oracle DBA think they have 500 GB of space.
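The Oracle scenario above can be sketched in a few lines: the volume advertises its full virtual size while physical extents are backed only on first write. The class and method names here are illustrative, not any vendor's API.

```python
class ThinVolume:
    """Minimal model of a thin-provisioned volume: the host sees the full
    virtual size, but physical capacity is consumed only as data is written."""

    def __init__(self, virtual_gb):
        self.virtual_gb = virtual_gb  # what the host (and the DBA) sees
        self.allocated = set()        # 1 GB extents actually backed by disk

    def write(self, extent):
        """Write to a GB-sized extent, allocating physical space on demand."""
        if not 0 <= extent < self.virtual_gb:
            raise ValueError("write past end of volume")
        self.allocated.add(extent)

    @property
    def physical_gb(self):
        return len(self.allocated)

vol = ThinVolume(500)         # the DBA's 500 GB request
for extent in range(100):     # the database only writes ~100 GB to start
    vol.write(extent)

print(vol.virtual_gb)         # 500 -- Oracle and the DBA see the full volume
print(vol.physical_gb)        # 100 -- the array consumes only what's written
```

The gap between the two numbers is the 400 GB of disk, power and cooling the customer doesn't have to buy on day one.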
Expansion of storage is also greatly simplified with storage virtualization technology. Simply attach a new drive shelf; the system recognizes the shelf and the storage is automatically added to the virtualized storage pool. The system will either allocate storage as needed to your thin-provisioned volumes, or you can expand them yourself or create new volumes as needed.
While the benefits of thin provisioning are clear, overallocation can get you into trouble. What if you overallocate your customer's storage beyond the current physical capacity and then they actually need all that capacity? You'd need to either delete old data and snapshots or order more storage and wait for it to arrive. While systems that allow overallocation have a host of built-in alerts and warnings relating to actual storage utilization, you'll need to make sure the alerts come early enough that you can order and install the new storage before it's all used. The timing of the alerts can be controlled based on the percentage of physical capacity used, and most systems on the market will allow you to add your own email alias to the alert list.
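The alerting logic described above amounts to comparing used capacity against percentage thresholds of the *physical* pool and notifying a configurable recipient list. A minimal sketch, with assumed threshold values and addresses:

```python
# Illustrative alert policy: thresholds are percentages of physical
# capacity, chosen low enough to leave lead time for ordering new shelves.
ALERT_THRESHOLDS = {75: "warning", 85: "critical"}
ALERT_RECIPIENTS = ["storage-team@example.com"]  # add your own alias here

def check_capacity(used_gb, physical_gb):
    """Return the physical utilization percentage and any alert levels
    whose thresholds have been crossed."""
    pct = 100 * used_gb / physical_gb
    fired = [level for threshold, level in ALERT_THRESHOLDS.items()
             if pct >= threshold]
    return pct, fired

pct, alerts = check_capacity(used_gb=1700, physical_gb=2000)
# At 85% utilization both thresholds have fired; in practice the warning
# level should trigger early enough to order and install storage in time.
```

The key design point is that utilization is measured against physical capacity, not the (overallocated) sum of virtual volume sizes.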
Despite the potential for problems, it's important to become comfortable with overallocation. It will enable you to save your customers money by spending less on physical capacity. On many nonvirtualized systems, volumes are hard-allocated and, as in the example above, there are terabytes upon terabytes of allocated but unused capacity. Thin provisioning allows you to offer a solution that is less expensive and more power-efficient. In many cases, storage capacity can be treated like a "just in time" inventory item, making for a more cost-effective investment for your customers.
In the end, you're likely to find that customers who implement storage virtualization technology are extremely happy with their decision. Maybe more importantly for you, the more comfortable they become with the system, the faster they'll move production workloads onto it, which in turn means they'll need more of your products and services.
About the author
George Crump is president and founder of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. With 25 years of experience designing storage solutions for data centers across the United States, he has seen the birth of such technologies as RAID, NAS and SAN. Prior to founding Storage Switzerland, George was chief technology officer at one of the nation's largest storage integrators, where he was in charge of technology testing, integration and product selection.