Solution provider takeaway: Solution providers will learn about VirtualCenter's shortcomings and how to fill the gaps left by VMware's central management console.
VirtualCenter, VMware's central management console, does an amazing job at providing a single window into the virtual infrastructure, but it's not without a few gaps. An integrator's job is to fill in the holes and provide a complete solution for customers while cementing their value-add.
We've laid out where the VirtualCenter gaps are, as well as how you can fix them.
VirtualCenter can show you what portion of server resources a virtual machine is allotted and how much of those resources it is using. But the tool is somewhat myopic: it doesn't show aggregate information about all the VMs or how a resource allotment change to one VM affects the others. With VMware, performance doesn't degrade linearly with load; rather, as more virtual machines begin to request host resources, the impact on the overall environment can be exponential. It's also important to understand that processor utilization isn't the only concern; memory, network I/O and storage I/O need to be examined as well. In fact, more often than not a physical host gets maxed out by an I/O shortage long before it runs out of CPU resources.
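To make the point concrete, here is a minimal sketch of the aggregate view VirtualCenter lacks. The host capacity and per-VM figures are invented for illustration; in practice the numbers would come from VirtualCenter's performance counters or one of the third-party tools discussed below.

```python
# Illustrative sketch: sum per-VM demand on a host across four resource
# dimensions and flag which one saturates first. All numbers are invented.

HOST_CAPACITY = {"cpu_mhz": 16000, "mem_mb": 32768, "net_mbps": 1000, "disk_iops": 5000}

vms = [
    {"name": "web01",  "cpu_mhz": 1200, "mem_mb": 2048, "net_mbps": 300, "disk_iops": 400},
    {"name": "db01",   "cpu_mhz": 2500, "mem_mb": 8192, "net_mbps": 150, "disk_iops": 2600},
    {"name": "file01", "cpu_mhz":  600, "mem_mb": 1024, "net_mbps": 450, "disk_iops": 1800},
]

def utilization(vms, capacity):
    """Return each resource's aggregate utilization as a fraction of host capacity."""
    return {res: sum(vm[res] for vm in vms) / cap for res, cap in capacity.items()}

util = utilization(vms, HOST_CAPACITY)
bottleneck = max(util, key=util.get)
print(util)
print(bottleneck)  # with these numbers, disk I/O saturates long before CPU
```

Even with CPU barely a quarter used, the aggregate view shows disk I/O at 96% of capacity, which is exactly the kind of "host maxed out on I/O, not CPU" situation described above.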
The response to this lack of aggregate information is the same as it is in the nonvirtualized world: resources get overprovisioned to cover peak workload compute needs. This is particularly ironic in a virtualization project, since most are undertaken to address the overprovisioning of server compute resources.
Providing the right tools to monitor and manage this infrastructure is critical, and companies like Tek-Tools, Akorri and NetApp (via its Onaro acquisition) can address this need. You can use these tools to identify which virtual machines are heavy consumers of server resources, so you can better isolate them to keep them from affecting other virtual workloads. The tools can even identify what server resources a particular VM is using. For example, if a VM is light on compute requirements but hard on network I/O resources, you can make sure that other VMs on the same host are not big consumers of network I/O resources. Following this example through might lead you to recommend a 10 Gigabit Ethernet card with network I/O virtualization (IOV) like those from Neterion or Intel. With these monitoring tools, you could make that recommendation knowing that it will make a difference in performance, and be able to project additional VMs that could be enabled as a result.
Using this forecasting information, you may decide that it makes sense to use a tool like Scalent Systems' Virtual Operating Environment, which can provide virtual-to-physical migration so that at peak loads you can move a virtual server image to a dedicated machine and then back again. For example, if your customer has an application that would be best suited to its own server for the final three weeks of a quarter but runs just fine as a virtual machine for the rest of the quarter, Virtual Operating Environment could be used to move the workload back and forth.
Beyond the monitoring and management shortcomings, VirtualCenter has limited understanding of the storage environment. It is particularly challenging to manage the server-to-storage path when the server has the ability to move. It is important to be able to view storage from both the physical host and the virtual machine perspectives. Being able to see which virtual machines are connected to which storage resources, and how that storage is configured to handle a VMotion migration, is critical to ensuring that the migration will work correctly. This is not typically an issue during initial configuration but becomes one as the environment evolves; change management in a virtual environment is almost impossible without the right tooling. What's needed is a tool that shows not only how hosts and guests are connected but also how they were connected, as well as how connection changes would affect the environment. Tools from companies like Tek-Tools, Akorri, NetApp and others fill this gap.
And then there's the issue of wasted disk space with virtual machines; many VMs are created from a standard template that frequently overallocates storage. Products like Storage Center from Compellent can perform thin provisioning to prevent overallocation of VMs. But you'll need to stick with the default zeroedthick option for Virtual Machine Disk Format (VMDK) files; with the eagerzeroedthick option, the VMDK file is written out as zeros and consumes all of its allocated storage upon creation, defeating thin provisioning schemes.
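A quick way to show a customer the size of this problem is to compare allocated versus actually consumed space per VMDK. The figures below are invented for illustration; a monitoring tool would supply the real ones.

```python
# Sketch: estimate how much template-driven overallocation thin provisioning
# could reclaim. Allocated/used figures per VMDK are invented for illustration.

vmdks = {
    "web01.vmdk":  {"allocated_gb": 40,  "used_gb": 9},
    "db01.vmdk":   {"allocated_gb": 100, "used_gb": 62},
    "file01.vmdk": {"allocated_gb": 40,  "used_gb": 4},
}

reclaimable = sum(d["allocated_gb"] - d["used_gb"] for d in vmdks.values())
print(f"Potentially reclaimable via thin provisioning: {reclaimable} GB")
```

Here a standard 40 GB template leaves two of the three VMs using less than a quarter of what they were allocated, which is the pattern that makes thin provisioning pay off.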
The tools we mentioned from Tek-Tools, Akorri and NetApp can track storage utilization either by physical host or by individual virtual machine. They can then provide trending and forecasting based on that data. They can also track individual file activity by virtual machine.
As you know, a virtual environment makes it very easy to deploy new servers and applications. As we discussed in a previous article, that ease of deployment can lead to out-of-control virtual server growth. The problem is important enough that VMware has a product, Lifecycle Manager, to address the situation, but, like VirtualCenter, it has gaps. Tools, again from the likes of Tek-Tools and others, can monitor, trend and predict underutilized ESX hosts and underutilized VMs, giving you the information you need to prevent and correct server sprawl.
These tools can also show the storage impact of virtual server sprawl by analyzing VMDK file utilization and orphaned volumes that are no longer being accessed.
Capacity planning gaps
Capacity planning is another area where VirtualCenter falls short. The application assesses current system status but doesn't provide much "what if" logic to help with capacity planning. It would be helpful to know answers to the following types of questions: If your customer purchases a new quad-core four-processor server, which VMs are the best candidates to move to that physical host? What will be the impact on the other hosts after freeing up those virtual machines? Does it make more sense to power them off or to continue to use them as part of the virtual environment? Another use for a data center virtualization tool like Scalent's Virtual Operating Environment is to allow these machines to be powered off but still useful. With Virtual Operating Environment, you can power on a server, load an OS image (that image can be VMware) and, in the time it takes to boot a system, be ready to start migrating VMs to the box. This gives you the redundant capacity without the cost of powering and cooling that capacity until absolutely necessary.
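The "which VMs should move to the new host?" question can be sketched as a simple greedy placement exercise. The host specs, VM demands and 80% headroom target below are all invented assumptions; real capacity planning would also weigh I/O profiles and peak-load history.

```python
# "What if" sketch: which VMs are the best candidates to move to a new
# quad-core, four-processor host? A greedy pass, heaviest CPU demand first,
# stopping when CPU or memory headroom would be exceeded. Figures are invented.

new_host = {"cpu_mhz": 16 * 2400, "mem_mb": 65536}  # 16 cores at 2.4 GHz

vms = [
    {"name": "db01",  "cpu_mhz": 6000, "mem_mb": 16384},
    {"name": "app01", "cpu_mhz": 4000, "mem_mb": 8192},
    {"name": "web01", "cpu_mhz": 1500, "mem_mb": 2048},
]

def plan_moves(vms, host, headroom=0.8):
    """Pick VMs to migrate, heaviest CPU first, within a headroom budget."""
    cpu_budget = host["cpu_mhz"] * headroom
    mem_budget = host["mem_mb"] * headroom
    chosen, cpu, mem = [], 0, 0
    for vm in sorted(vms, key=lambda v: v["cpu_mhz"], reverse=True):
        if cpu + vm["cpu_mhz"] <= cpu_budget and mem + vm["mem_mb"] <= mem_budget:
            chosen.append(vm["name"])
            cpu += vm["cpu_mhz"]
            mem += vm["mem_mb"]
    return chosen

print(plan_moves(vms, new_host))
```

The same loop, run against the remaining hosts with the chosen VMs removed, answers the follow-on question of what the move frees up elsewhere.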
Beyond new hardware acquisition, capacity planning also entails determining the impact of adding new virtual servers, moving virtual servers, and enabling or broadening the use of VMotion. In addition, VMware's Site Recovery Manager tool increases the importance of capacity planning. With Site Recovery Manager, it's critical to understand the impact on network and WAN bandwidth as well as on the remote site. For example, say your customer's disaster recovery plan designates a production site as its secondary site, and in the event of a disaster your customer would move critical virtual machines to that environment. They'll need to know the impact of those new machines, the impact of monitoring the two sites, and the impact and estimated time of moving the virtual machines. All of these variables can be determined by providing your customers with tools like the ones from Tek-Tools, Akorri or NetApp.
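The "estimated time to move" piece is, at its simplest, arithmetic over data size and usable link speed. The VM sizes, link class and 70% efficiency factor below are invented assumptions for illustration; real planning would account for protocol overhead and competing replication traffic.

```python
# Back-of-the-envelope sketch of a Site Recovery Manager sizing question:
# how long to move a set of VMs to the secondary site over the WAN?
# Sizes and link speed are invented for illustration.

vm_sizes_gb = {"erp01": 120, "db01": 250, "web01": 40}
wan_mbps = 155          # e.g., an OC-3 class link
link_efficiency = 0.7   # assume ~70% of the link is usable after overhead

total_gb = sum(vm_sizes_gb.values())
usable_mbps = wan_mbps * link_efficiency
hours = (total_gb * 8 * 1024) / usable_mbps / 3600
print(f"{total_gb} GB over {wan_mbps} Mbps is roughly {hours:.1f} hours")
```

Even this rough estimate, roughly a full business day for about 400 GB, is often enough to show a customer whether their recovery-time objective is realistic on the link they have.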
Filling the gaps
From an integrator's perspective, there are two delivery options. Many of the makers of these solutions offer the software on a rental basis, so you can implement at the customer site, capture the data, create a report and -- leveraging your experience and expertise -- add significant value to the offering. And of course the products can be sold outright to your customer. A straight sale won't preclude you from adding value, though: Installation and training opportunities abound, and there's still the need to determine the right course of action for a particular environment. Even though these solutions addressing VirtualCenter gaps can tell you what the problem is, someone still needs to analyze the results and repair the situation.
About the author
George Crump is president and founder of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. With 25 years of experience designing storage solutions for data centers across the United States, he has seen the birth of such technologies as RAID, NAS and SAN. Prior to founding Storage Switzerland, George was chief technology officer at one of the nation's largest storage integrators, where he was in charge of technology testing, integration and product selection.