Storage is handled differently for virtual servers than for physical servers in a number of ways. In the physical environment, each server ran a few applications and had its own dedicated storage to support a relatively constant workload. The virtual environment, by contrast, stacks more (usually many more) applications onto a physical host. These virtual machines (VMs) share resources, including storage, with all the other VMs on that host, and the host often shares networked storage with other hosts as well. This creates a very dynamic environment, one characterized by highly randomized I/O and unpredictability.
Storage management is also different with virtual servers vs. physical servers. In a physical environment, resources were allocated to meet immediate demands plus expected future growth, with some "headroom" added for safety's sake. This largely set-and-forget method was typically effective, as resource requirements didn't change much when applications were fixed to each server. In a virtual server environment, optimizing resources, instead of simply allocating them, is key to meeting project expectations. This is especially true in storage. But establishing and maintaining this optimal resource condition is more complicated in a virtualized server environment. It requires different information and a different mindset than the legacy physical server environment did.
In a virtual server environment, the management objectives are to decrease costs through better storage resource utilization and to increase the flexibility and responsiveness of the storage systems -- all while maintaining adequate levels of performance. Putting more VMs on each host creates an aggregate workload that can change dramatically, especially with the ability to move VMs between hosts. This means managing storage for a virtual environment requires continuously reallocating storage to support the hosts with the greatest needs and moving "hot" VMs to hosts with spare resources. The task involves balancing resource consumers so that applications get what they need while minimizing the amount of storage purchased.
To effectively manage these resources in a virtual server environment, IT organizations may need to adopt a different mindset than they’ve historically had. The virtualization layer between storage hardware and guest OSes impacts most of the storage processes that occur in the storage infrastructure supporting virtual servers -- things like backup, monitoring, scaling, etc. -- and may drive some changes in the way IT approaches these tasks. Below are some areas to look at.
Storage infrastructure simplification
Flexibility is key to matching storage with changing host workloads, so consolidating a collection of direct-attached arrays into a SAN or NAS system is a good place to start. This may seem obvious, but using storage systems with good provisioning and management tools can also help administrators better support such a dynamic environment. The use of templates can further simplify the VM creation process, while snapshots and cloning features can reduce storage consumption at the same time.
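The capacity savings from cloning come from copy-on-write: a clone shares the template's disk blocks and stores only the blocks it overwrites. The following minimal sketch (the `TemplateImage` and `LinkedClone` classes are hypothetical names for illustration, not any vendor's API) shows why twenty clones of a template can consume almost no additional capacity:

```python
# Illustrative sketch of copy-on-write linked clones: N clones of a
# template consume far less capacity than N full copies, because each
# clone stores only the blocks it has written.

class TemplateImage:
    def __init__(self, blocks):
        self.blocks = blocks  # block_id -> data, shared read-only by all clones

class LinkedClone:
    def __init__(self, template):
        self.template = template
        self.delta = {}  # only the blocks this VM has overwritten

    def read(self, block_id):
        # Prefer the clone's own copy; fall back to the shared template block.
        return self.delta.get(block_id, self.template.blocks[block_id])

    def write(self, block_id, data):
        # Copy-on-write: the template is never modified.
        self.delta[block_id] = data

    def consumed_blocks(self):
        return len(self.delta)

template = TemplateImage({i: b"base" for i in range(1000)})
clones = [LinkedClone(template) for _ in range(20)]
clones[0].write(7, b"patched")
# 20 clones exist, but only 1 new block of capacity is consumed in total.
total_new_blocks = sum(c.consumed_blocks() for c in clones)
```

Real hypervisor snapshot and linked-clone implementations work at the virtual disk layer with more sophisticated metadata, but the space-saving principle is the same.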
Real-time monitoring tools
As discussed, the dynamic virtual server environment is not set-and-forget, as was common in physical server environments in the past. Optimization is the objective, and storage resource management (SRM) tools that rely on periodic polling of storage systems and hosts can leave IT administrators in the dark. Customers need real-time information; tools that focus on transactions between storage and hosts, instead of static snapshots of resource conditions, can provide this kind of data.
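The difference between the two approaches can be illustrated with a toy example (the class name and thresholds below are assumptions for illustration, not part of any SRM product): a per-transaction monitor keeps a short sliding window of I/O latencies, so a burst of slow I/O raises an alert immediately instead of being averaged away until the next polling interval.

```python
from collections import deque

class LatencyMonitor:
    """Toy per-transaction monitor: records every I/O's latency in a
    short sliding window, so latency spikes surface right away rather
    than being smoothed out by a long polling interval."""

    def __init__(self, window=100, threshold_ms=20.0):
        self.samples = deque(maxlen=window)  # most recent latencies only
        self.threshold_ms = threshold_ms

    def record(self, latency_ms):
        self.samples.append(latency_ms)

    def current_avg(self):
        return sum(self.samples) / len(self.samples) if self.samples else 0.0

    def alert(self):
        # Fires as soon as the recent window crosses the threshold.
        return self.current_avg() > self.threshold_ms

monitor = LatencyMonitor(window=100, threshold_ms=20.0)
for _ in range(100):
    monitor.record(5.0)    # healthy baseline: no alert
for _ in range(100):
    monitor.record(50.0)   # sudden slowdown: window now reflects it
```

A poller that samples, say, every five minutes would report an average diluted by the healthy period; the sliding window reflects the slowdown within a handful of transactions.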
VM lifecycle management
While most companies have some kind of process control over setting up new physical servers, for many, this structure didn’t seem to make the transition to the virtual server world. The result is the creation of more VMs, faster, which can aggravate resource problems. There are tools available to put a process around the creation, provisioning, maintenance and decommissioning of virtual servers and help to keep VM sprawl under control while improving management efficiency.
Integration between storage and virtualization platform
Integrating storage systems management with virtualization platforms can simplify the management process and improve storage efficiency. For example, VMware's vStorage APIs for Data Protection (VADP) enable the backup of VMs from one or more vSphere hosts to a single backup server without requiring agents on each VM or host. This can significantly reduce overhead on hosts by removing the backup processing tasks, and the API integrates into the backup software application for a single point of control. Similarly, the vStorage APIs for Array Integration (VAAI) allow storage array vendors to integrate their products with vSphere so that users can handle storage management alongside VM management. This integration also allows some storage processes to be offloaded from the hypervisor to the array controller, increasing storage performance.
Even when using the vStorage API described above, traditional backup applications can still move a large amount of data to the backup server. Virtualization-specific backup and recovery tools are built to leverage snapshots and virtual server images and make the backup process more efficient. Compared with traditional backup software, which maintains a complex understanding of the structure of data associated with each server and its applications, image backups manage data for the entire VM at the file image level (VMDK). By tracking changed blocks of data within that file and only handling those blocks, they greatly simplify backups. This can significantly reduce the storage capacity and IOPS consumed as image-based backups move much less data.
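The core of changed-block tracking can be sketched in a few lines. This is a simplified illustration (block size and function names are assumptions, and production CBT -- e.g., VMware's -- is maintained by the hypervisor as writes happen rather than by rehashing the whole image), but it shows why only a small fraction of the VMDK needs to move on each backup:

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative block size; real implementations vary

def block_hashes(image: bytes):
    """Hash each fixed-size block of a disk image, giving a compact
    fingerprint of the image's contents at backup time."""
    return [hashlib.sha256(image[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(image), BLOCK_SIZE)]

def changed_blocks(prev_hashes, image: bytes):
    """Return (block_index, block_data) for every block that differs
    from the previous backup -- the essence of image-based incremental
    backup: only these blocks need to be copied."""
    changed = []
    for idx, h in enumerate(block_hashes(image)):
        if idx >= len(prev_hashes) or h != prev_hashes[idx]:
            changed.append((idx, image[idx * BLOCK_SIZE:(idx + 1) * BLOCK_SIZE]))
    return changed
```

If a 40 GB VMDK has only a few hundred megabytes of changed blocks since the last backup, that is all the capacity and IOPS the incremental pass consumes.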
The consolidation of VMs onto a few physical servers creates a situation where a large number of I/O streams are being processed by each hypervisor. With each application on each VM potentially creating simultaneous storage demands, the host servers can run into problems trying to manage IOPS. To stay on top of this “I/O blender,” administrators need to ensure that the infrastructure can maintain IOPS for critical VMs as they’re moved between hosts. They also need to be aware of storage system IOPS capacity, something that’s more fixed and not typically as scalable as storage capacity is, and implement systems that can provide the required horsepower for future growth. Also, knowing the IOPS consumed by various applications and VMs is important, as is the impact of storage operations, like VM migration and backup.
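The capacity-planning arithmetic behind this is straightforward. The sketch below (all figures and the `iops_headroom` helper are illustrative assumptions, not measurements from any product) sums per-VM IOPS demand, holds back a growth reserve, and flags overcommitment against the array's rated IOPS:

```python
def iops_headroom(array_iops_capacity, vm_demands, reserve_pct=0.2):
    """Back-of-the-envelope check: can the array absorb the aggregate
    VM workload plus operational jobs (migrations, backups) while
    keeping a reserve for future growth?"""
    aggregate = sum(vm_demands.values())
    reserve = array_iops_capacity * reserve_pct       # held back for growth
    available = array_iops_capacity - reserve
    return {
        "aggregate_demand": aggregate,
        "available_after_reserve": available,
        "headroom": available - aggregate,
        "overcommitted": aggregate > available,
    }

# Hypothetical workload: note the backup job counts against the same
# IOPS budget as the application VMs -- the "I/O blender" in numbers.
demands = {"db-vm": 4000, "web-vm1": 800, "web-vm2": 800, "backup-job": 1500}
report = iops_headroom(array_iops_capacity=20000, vm_demands=demands)
```

The same arithmetic run per host shows whether a "hot" VM can be migrated without pushing its destination past the array's IOPS ceiling -- the check that matters more than raw capacity in a consolidated environment.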
While the benefits of server consolidation are very real, many end users aren't seeing the results from server virtualization projects they had hoped for. For VARs, this means a chance to step in and be the trusted advisor their customers turn to in situations like these. For example, presenting an image-based solution to upgrade a legacy backup system can improve backup performance significantly and reduce storage costs. Resource monitoring and management software designed for the virtualized infrastructure can also be a big help for administrators, providing some structure for their environment and the tools to reduce time spent troubleshooting or optimizing performance, as well as resource costs. With an understanding of the special challenges the storage infrastructure presents with virtual servers vs. physical servers -- like managing storage IOPS instead of just capacity -- VARs should find that the explosion of server virtualization presents some real opportunities.