Server virtualization explained

Server virtualization can improve server utilization in your customers' IT environments and allow for better workload management. Although there are many benefits, there are also drawbacks to server virtualization that can affect deployment. Find out what you need to know about server virtualization planning and management.

By Stephen J. Bigelow, Senior Technology Writer

This article is part of the Virtualization Explained series. For more information, check out our related articles on storage virtualization and application virtualization solutions.

Server virtualization inserts a layer of abstraction between the physical server hardware and the software that runs on the server. The physical machine is partitioned into one or more virtual machines (VMs). Each VM runs its own operating system and applications, and each uses an allocated portion of the server's processing resources, such as CPU, memory, network access and storage I/O.
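
As a rough illustration of that allocation (not tied to any particular hypervisor), the sketch below models a host whose CPU and memory are divided among VMs. The host capacity, VM names and figures are hypothetical.

    # Minimal sketch: a hypervisor carves one physical host into several VMs.
    # Host capacity, VM names and sizes are hypothetical examples.

    host = {"cpu_cores": 8, "memory_gb": 32}

    vms = [
        {"name": "database-vm", "cpu_cores": 4, "memory_gb": 16},
        {"name": "email-vm",    "cpu_cores": 2, "memory_gb": 8},
        {"name": "docmgmt-vm",  "cpu_cores": 2, "memory_gb": 4},
    ]

    used_cpu = sum(vm["cpu_cores"] for vm in vms)
    used_mem = sum(vm["memory_gb"] for vm in vms)

    print(f"CPU allocated: {used_cpu}/{host['cpu_cores']} cores")
    print(f"Memory allocated: {used_mem}/{host['memory_gb']} GB")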

The primary advantage with server virtualization is improved server utilization. Rather than deploying numerous servers that may not be fully utilized, server virtualization allows multiple VMs to operate on the same physical platform. "Instead of running 10 systems at less than 5% utilization each, virtualization allows a single system to run the 10 sessions at 50% utilization without the applications conflicting with one another," said Rand Morimoto, president of Convergent Computing, a solution provider located in Oakland, Calif.

For example, rather than using separate servers to run a corporate database, an email system and a document management system, all of those applications can be virtualized onto a single server. Running more virtual machines on fewer physical servers reduces hardware, service and maintenance costs, and also simplifies management.
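
The arithmetic behind Morimoto's example is straightforward; the sketch below simply restates it, assuming each workload's utilization carries over unchanged to the consolidated host.

    # Consolidation arithmetic from the 10-into-1 example above (assumed figures).
    servers_before = 10
    utilization_each = 0.05   # 5% utilization on each physical server

    combined = servers_before * utilization_each
    print(f"One host running all {servers_before} workloads: {combined:.0%} utilization")  # -> 50%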

Learn more about virtualization:
Virtualization technologies explained
Storage virtualization explained
Application virtualization solutions explained

Server virtualization also provides greater flexibility in workload arrangements. The virtual machines on one server can be moved to another virtualized server without regard for differences in the underlying hardware. As examples, a virtual machine could be moved to a newer or more powerful server, a VM could fail over to another virtualized server in the event of a fault, and processing resources (e.g., CPU or memory) could be added or removed to optimize the performance of any given virtual machine.
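
A hedged sketch of the placement decision a migration or failover tool makes: pick any host with enough spare capacity for the VM being moved. The host names, free capacities and the VM itself are hypothetical.

    # Sketch: pick a target host with enough free CPU and memory for a VM move.
    # Hosts, capacities and the VM are hypothetical examples.

    hosts = [
        {"name": "host-old", "free_cpu_cores": 1, "free_memory_gb": 2},
        {"name": "host-new", "free_cpu_cores": 6, "free_memory_gb": 24},
    ]

    vm = {"name": "email-vm", "cpu_cores": 2, "memory_gb": 8}

    def pick_target(vm, hosts):
        for host in hosts:
            if (host["free_cpu_cores"] >= vm["cpu_cores"]
                    and host["free_memory_gb"] >= vm["memory_gb"]):
                return host["name"]
        return None

    print(pick_target(vm, hosts))   # -> "host-new"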

Server virtualization also complements disaster planning. "You can keep copies of your servers in off-site locations," said Dave Sobel, CEO of Evolve Technologies, a solution provider located in Fairfax, Va. Virtual servers can even be copied manually onto USB drives for migration or movement between geographic locations.
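
As a minimal sketch of the manual copy Sobel describes, assuming a hypothetical path for the VM's disk file and an already-mounted USB drive (the VM should be shut down or snapshotted before copying):

    # Sketch: copy a powered-off VM's disk image to a removable drive for DR.
    # Both paths are hypothetical placeholders.
    import os
    import shutil

    vm_disk = "/vmstore/email-vm/email-vm.vmdk"   # hypothetical source path
    usb_target_dir = "/mnt/usb/dr-copies"         # hypothetical USB mount point

    os.makedirs(usb_target_dir, exist_ok=True)
    shutil.copy2(vm_disk, usb_target_dir)
    print("Copied", vm_disk, "to", usb_target_dir)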

Implementing server virtualization typically requires a multiprocessor system with 16 GB or more of RAM, along with multiple network connections and storage connectivity. In many cases, servers with virtualization-ready processors yield the best results. Virtualization also requires a software component to provide the abstraction layer. This includes products like ESXi from VMware, XenServer 5 from Citrix Systems Inc., Virtual Iron from Virtual Iron Software Inc. or Hyper-V with Microsoft Windows Server 2008. Servers do not exist in a vacuum, so a solution provider should also evaluate the associated storage infrastructure when considering a server virtualization deployment. A customer may need iSCSI or Fibre Channel storage network upgrades to keep storage performance acceptable once the physical server is running multiple virtual machines.
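
One quick way to check whether a candidate Linux host has virtualization-ready processors and enough memory is to look for the vmx (Intel VT) or svm (AMD-V) CPU flags. This is only a sketch, and the 16 GB threshold simply echoes the figure above.

    # Sketch (Linux only): check for hardware virtualization support and RAM.
    # Reads /proc/cpuinfo and /proc/meminfo; the 16 GB threshold follows the article.

    def has_virt_extensions():
        with open("/proc/cpuinfo") as f:
            flags = f.read()
        return "vmx" in flags or "svm" in flags   # Intel VT-x or AMD-V

    def total_ram_gb():
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("MemTotal:"):
                    return int(line.split()[1]) / 1024 / 1024  # kB -> GB
        return 0.0

    print("Virtualization-ready CPU:", has_virt_extensions())
    print("RAM >= 16 GB:", total_ram_gb() >= 16)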

What are the issues or limitations with server virtualization?

Server virtualization has limitations that a solution provider should consider. The applications themselves are one potential stumbling block. Point-of-sale (POS) programs and applications that are sensitive to TCP timing issues may not be appropriate on virtual machines, but that hasn't stopped the virtualization of other high-end programs. "You'll see people virtualizing Exchange, SQL, Oracle and having no reservations about it," said Keith Norbie, director of storage and virtualization at Nexus Information Systems in Plymouth, Minn. Norbie noted a recent EMC test with Exchange running 50,000 mailboxes on a single virtual machine while achieving about 100,000 input/output operations per second (IOPS).

Another potential issue with server virtualization is "server sprawl," where the number of virtual servers grows unchecked across a company. Most software is still licensed per physical CPU, but virtual machines are quickly changing that model. As virtual machines proliferate, the licensing cost for operating systems and applications can become prohibitive. A solution provider must understand the licensing implications and financial impact of the client's software before implementing server virtualization. Virtualization also complicates the client's environment by adding another software layer -- the virtualization hypervisor -- which needs to be managed and maintained along with the regular operating systems and applications.
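
A back-of-the-envelope sketch of that licensing impact: with per-instance licensing, costs scale with the number of VMs rather than the number of physical boxes. The prices below are purely illustrative, not vendor figures.

    # Sketch: per-VM licensing cost grows with VM count, not physical server count.
    # License prices are purely illustrative.

    vm_count = 25
    os_license_per_vm = 700      # hypothetical per-instance OS license
    app_license_per_vm = 1200    # hypothetical per-instance application license

    total = vm_count * (os_license_per_vm + app_license_per_vm)
    print(f"Licensing for {vm_count} VMs: ${total:,}")   # -> $47,500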

There are several lesser-known issues to consider. Some technologies, like vmHA (virtual machine high availability) and DRS (Distributed Resource Scheduler, used with VMware Virtual Infrastructure 3), require hosts with matching processor models and steppings. "If you've got really old servers and really new servers, those are not going to coexist in the same highly available grid as if you had all new or all old servers," said Scott Gorcester, president of Moose Logic, a solution provider headquartered in Bothell, Wash. Also, traditional multipathing software (often used for failover) does not work inside a virtualized server environment, so the hypervisor runs its own multipathing features.
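
A sketch of the kind of compatibility check Gorcester's point implies: before pooling hosts for HA/DRS, confirm their processor models and steppings match. The host inventory below is hypothetical, and real tools compare far more detailed CPU feature sets.

    # Sketch: verify hosts are CPU-compatible before pooling them for HA/DRS.
    # The inventory is hypothetical.

    hosts = [
        {"name": "host-a", "cpu_model": "Xeon 5160",  "stepping": 6},
        {"name": "host-b", "cpu_model": "Xeon 5160",  "stepping": 6},
        {"name": "host-c", "cpu_model": "Xeon E5440", "stepping": 10},
    ]

    signatures = {(h["cpu_model"], h["stepping"]) for h in hosts}
    if len(signatures) > 1:
        print("Mixed CPU generations -- these hosts may not share one HA/DRS pool:")
        for sig in sorted(signatures):
            print("  ", sig)
    else:
        print("All hosts report matching CPUs.")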

Finally, a physical server failure can knock multiple virtual machines out of service, resulting in far greater potential service disruption than in nonvirtualized environments. "Companies need to avoid putting all their eggs in one basket," Gorcester said. This means solution providers will need to build resilience into the physical design, using techniques like server clustering to share processing duties and keep workloads running even when part of the cluster fails.
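
A sketch of the "eggs in one basket" rule: spread VMs across the cluster hosts so that a single hardware failure does not take every workload offline. Host and VM names are hypothetical, and real placement tools weigh capacity and affinity rules as well.

    # Sketch: spread VMs round-robin across cluster hosts so one server failure
    # doesn't take every virtual machine offline. Names are hypothetical.

    cluster_hosts = ["node1", "node2", "node3"]
    vms = ["database-vm", "email-vm", "docmgmt-vm", "web-vm", "file-vm"]

    placement = {host: [] for host in cluster_hosts}
    for i, vm in enumerate(vms):
        placement[cluster_hosts[i % len(cluster_hosts)]].append(vm)

    for host, assigned in placement.items():
        print(f"{host}: {', '.join(assigned)}")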

This was last published in October 2008
