In this expert videocast, Jon Toigo, founder and CEO of Toigo Partners International, talks about inherent server virtualization risks, including the potential for VM failure caused by illegal resource calls or an overprovisioned storage array. Toigo also explains why Microsoft Hyper-V is less subject to these risks than other hypervisors such as VMware ESX Server and Citrix XenServer.
Read the full transcript from this video below:
Expert video: Server virtualization risks
SearchStorageChannel.com: What are your concerns around server virtualization?
Jon Toigo: Server virtualization is essentially stacking up a bunch of applications and virtual machines inside an x86 environment, usually one where the chip has special extensions supporting multitenancy. It introduces a piece of code that’s an abstraction layer, called a virtualization hypervisor. It abstracts the environment so that it takes all the resource requests from the applications and translates them into normalized requests to the infrastructure. There’s nothing wrong in principle with that. In fact, IBM’s been doing it in mainframes for 30 years. The problem is the resiliency of the environment and the kinds of troubles that can develop when you stack a lot of applications into that kind of a setting.
SearchStorageChannel.com: Why is there so much potential for trouble?
Toigo: The core technology of most virtualization engines is the hypervisor; it’s a piece of code that acts like a traffic cop. It takes the requests that are coming from the applications for resources in the environment, normalizes those requests, then passes them on to the environment. It’s actually fighting a two-front war, when you think about it. The applications weren’t designed to be virtualized or held in a multitenant kind of container, so they’re making resource calls that would be considered illegal, in some cases, from the hypervisor’s standpoint. A well-educated hypervisor deals with most of these resource calls very efficiently. However, the preponderance of applications out there are Microsoft applications, which are notorious for being illegal in their resource call construction to everybody but Microsoft. To Microsoft, of course, they seem perfectly legal.
At the end of the day, what you’ve got is the real potential for a single call to be construed incorrectly, or to be framed incorrectly, and for a VM to fail. When one VM fails, there is limited insulation between the various virtual machines that are collectivized in that machine, so oftentimes the whole stack fails, all the virtual machines. That’s a really bad day from a disaster recovery standpoint.
The other part of that two-front war, and I won’t dwell on it, is that some of the storage arrays that are out there are actually advertising resources that they really don’t have. Even if you normalize that request from the application for a resource, if it hits a back-end array that’s doing something like thin provisioning, where it’s allocated the resource to some other application, you run into the problem that you would have even in a physical server environment: all of a sudden the application isn’t getting what it asked for, and it abends. Sometimes, if the application abends, all the virtual machines fail, then the server operating system fails, and then smoke comes billowing out of the sides of the servers. Again, a really bad day. We’re looking at a situation where people are moving with a herd mentality toward a particular technology that is conceptually a very good one, but unfortunately, it doesn’t necessarily pan out in actual operations the way it says in the brochure.
SearchStorageChannel.com: Are there any hypervisors that are less subject to these problems?
Toigo: I would say that the storage infrastructure issues that I cited are going to persist, regardless of the hypervisor you’re using. However, the edge right now appears to go to Microsoft Hyper-V, because the most misbehaved applications are Microsoft applications, and if you use Microsoft code and a Microsoft hypervisor, chances are they know their code and can rectify the problem a little faster. We’re using Hyper-V with clustered servers from Microsoft in [Windows] Server 2008, and it seems to be working great. We’ve had a little less success with VMware; failover within the same subnetwork, for example, only seems to work about 40% of the time.
SearchStorageChannel.com: With all these problems, what's good about server virtualization?
Toigo: I think that the virtualization market suffers from a lot of hype. The fact is that virtualization is not a strategy. Server virtualization, regardless of what VMware says, is not a strategy; it is a tool. Like any tool, there are good things you can do with it, and there are bad things you can do with it. Some tools are appropriate for certain jobs, some aren’t. My feeling is that it pays off when used wisely, and I have evidence of this from the University of Texas at Brownsville. They have an environment that’s a highly configured cluster supporting email, Microsoft Exchange mail, talking to a highly configured back-end Fibre Channel fabric. It would be prohibitively expensive for this organization, this college, to replicate that entire infrastructure somewhere else. Instead, they’re using virtualization wisely.
At their recovery site, they’ve got one virtual machine running Exchange mail and a locally attached disk array that basically serves as the backup for this highly configured infrastructure; [it] doesn’t cost them very much, at all. The users don’t notice any difference if they’re using the virtual environment or not. Typically, in a disaster, there’s going to be less of a workload, there’s going to be less stress put on this server environment. Used judiciously, virtualization can actually be a godsend when it comes to low-cost replication of infrastructure for failover and disaster recovery.
Would I use it with all the bells and whistles that are in some of these products right now? No. I prefer using a product like CA XOsoft, putting a wrapper around my physical environment and handing it over to the virtual one, so I go physical to virtual.