Solutions provider takeaway: Solutions providers have a few options when using kernel virtualization, and this chapter excerpt explains what you need to do to meet your customer's virtualization needs. Find out the advantages and disadvantages of shared kernel virtualization and whether a virtualization product such as the OpenVZ kernel is your best kernel virtualization option.
About the Book
This chapter excerpt on comparing virtualization technologies is taken from the book Practical Virtualization Solutions: Virtualization from the Trenches. The book provides vital information on virtualization management, deployment, and implementation, and offers details on how to plan enterprise virtualization projects.
Kernel-level virtualization is something of an oddball in the virtualization world in that each VM boots its guest from its own kernel and root file system, regardless of the kernel the host is running.
Linux KVM (Kernel-based Virtual Machine) uses a modified QEMU, but unlike plain QEMU, KVM relies on the processor's hardware virtualization extensions (Intel VT and AMD-V). KVM supports a large number of x86 and x86_64 guest operating systems, including Windows, Linux, and FreeBSD. It turns the Linux kernel itself into a hypervisor and runs as a loadable kernel module.
User-Mode Linux (UML) creates a VM from two components: a user-space executable kernel (the guest kernel) and a UML-created root file system. The command-line terminal session you use to connect to the remote host system becomes your VM console. UML support is included in all 2.6.x kernels.
Shared kernel virtualization, also called operating system virtualization or system-level virtualization, takes advantage of the ability of UNIX and Linux systems to share a single kernel among multiple processes. It is achieved with a feature called change root (chroot), which changes the apparent root file system of a process to isolate it and provide a measure of security. A chrooted environment is often called a chroot jail, and this approach is also known as container-based virtualization. A chrooted program, set of programs, or, in the case of shared kernel virtualization, an entire system is set up to believe that it is a standalone machine with its own root file system.
The chroot mechanism has been enhanced to mimic an entire file system so that an entire system can be chrooted, hence creating a VM. The technical advantages and disadvantages of shared kernel virtualization are listed next:
- Enhanced Security and Isolation
- Native Performance
- Higher Density of Virtualized Systems
- Host Kernel and Guest Compatibility
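To make the chroot idea concrete, here is a minimal Python sketch of how path resolution changes once a process is confined to a new root. This is an illustration of the concept, not how the kernel implements it, and the jail path used is a hypothetical example.

```python
import os.path

def resolve_in_jail(jail_root, guest_path):
    """Resolve an absolute guest path the way a chrooted process sees it:
    '/' inside the jail maps to jail_root on the host."""
    return os.path.join(jail_root, guest_path.lstrip("/"))

# A process chrooted into /srv/guest1 that opens /etc/passwd actually
# touches /srv/guest1/etc/passwd on the host -- it has no way to name
# anything above its own root.
print(resolve_in_jail("/srv/guest1", "/etc/passwd"))
```

The security benefit follows directly from this mapping: every path the jailed system can express stays underneath the jail directory.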
The chroot system offers much in the way of enhanced security and isolation; however, the greatest advantage of shared kernel virtualization is not its security, although that's certainly important to consider, but its performance. With this kind of virtualization, you get native performance for each individual system. Not only does each system perform at native speeds, but you can also host more than the standard number of VMs on a host system. By standard number, we mean the number you could logically run on a host if memory were the limiting factor -- leaving 1GB for the host and dedicating the rest of the RAM to VMs.
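As a rough illustration of that memory-based ceiling (the host size and per-VM allocations below are assumptions for the arithmetic, not benchmarks), a small calculation shows why RAM caps the count of full VMs, while chrooted systems sized like ordinary applications can go well beyond it:

```python
def max_vms(host_ram_gb, reserved_gb=1, vm_ram_mb=512):
    """Memory-limited VM count: reserve RAM for the host,
    then divide what remains by the per-VM allocation."""
    return (host_ram_gb - reserved_gb) * 1024 // vm_ram_mb

print(max_vms(32))                   # 32GB host, 512MB guests -> 62
print(max_vms(32, vm_ram_mb=1024))   # 1GB guests -> 31
```

With shared kernel virtualization there is no fixed per-VM memory reservation, so the practical limit is set by actual workload rather than this division.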
The practical limit on the number of chrooted systems a host can support more closely resembles that of a standalone system running multiple applications. If you think of each chroot system as an application instead of a VM, you'll allocate resources more accurately and enjoy performance that surpasses many other types of virtualization.
The disadvantage of shared kernel virtualization is a big one: All VMs have to be compatible with the running kernel. In other words, you can't run Windows, Solaris, Mac OS X, or any other operating system that can't run on your host system's kernel. Major web hosting providers have run this model for years so that each customer gets their own virtual server for their hosting needs. Customers generally don't know that the system is virtual, nor can they reach the host system from inside their VM.
Solaris Containers (Zones)
Solaris 10 comes with built-in virtualization. The Solaris 10 operating system itself is known as the Global Zone. Solaris Zones are conceptually similar to BSD jails: each has its own virtual root that mimics a complete operating system and file system. When you create a new zone, a full file system is copied to the new zone directory. Each zone sees only its own processes and file systems. The zone believes that it is a full, independent operating system; only the Global Zone has any knowledge of virtualization.
Each zone essentially creates a clean sandbox in which you may install applications, provide services, or test patches. Solaris zones are a scalable, enterprise-level virtualization solution providing ease of use and native performance.
We use the OpenVZ kernel on our personal Linux server system. The OpenVZ kernel is optimized for virtualization and proves to be extremely efficient at handling VM performance for other virtualization products as well.
On that system, we run VMware Server, Sun's xVM, and QEMU. Before we installed the OpenVZ kernel, we had many CPU-related performance problems with some of our VMs. OpenVZ is similar to Solaris Zones except that you can run different Linux distributions under the same kernel. Various distribution templates are available on the OpenVZ website at www.openvz.org.
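An OpenVZ kernel exposes its control files under /proc/vz, so a script can check which kernel it booted before attempting container operations. The following is a convenience sketch based on that convention, not part of any OpenVZ tooling:

```python
import os

def running_openvz_kernel(proc_vz="/proc/vz"):
    """Return True if the OpenVZ kernel's /proc/vz directory is present.
    The path is parameterized here only to make the sketch testable."""
    return os.path.isdir(proc_vz)

if running_openvz_kernel():
    print("OpenVZ kernel detected")
else:
    print("stock kernel")
```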
In the Virtual Trenches
As people who work with virtualization software on a daily basis, we can give you some pointers, opinions, and suggestions for your environment. These come from our own experience; they may be biased, and, as always, your mileage may vary.
For true Enterprise-ready virtualization, you can't beat Xen or VMware ESX. They are robust, easy to use, well supported, well documented, and ready to go to work for you. Hypervisor technology is absolutely the right decision if you need to virtualize multiple operating systems on one host system. They are both costly solutions but well worth the price you pay for the performance you receive. You should use this technology in situations where disk I/O is of major concern.
As to which one of the hypervisor technologies we prefer, we're afraid that we can't answer that for you. Either one you choose will serve you well.
Solaris Zones (containers), and jail-type virtualization in general, work extremely well on UNIX host systems where you want a consistent, secure environment with native performance. This kind of kernel-level virtualization is extremely well suited to isolating applications from each other and from the global zone (the host operating system). It is an excellent, easy-to-use choice for anyone who wants to get acquainted with virtualization at no cost and with little hassle. We highly recommend this virtualization method for your Solaris 10 systems.
Microsoft Virtual PC and VMware Server are great choices for testing new applications, services, patches, service packs, and much more. We use Virtual PC and VMware Server on a daily basis and can't live without them. We wouldn't recommend either for heavy production or Enterprise use, but for smaller environments, desktops, or IT laboratories, you can't go wrong with these. They're free, easy to use, durable, and can host a wide range of guest operating systems. In this same arena, Sun's xVM is also very good.
VMware Server and Sun xVM are both available on multiple platforms, whereas Virtual PC is available only for Windows.
We deliberately left several other virtualization products out of this discussion. We have either less experience with them, or less positive experience, than with the products mentioned previously, but we don't want to keep you from investigating them on your own. We are not diminishing their value as viable virtualization solutions; we just don't feel qualified to speak for or against them in this context.
This chapter was an overview of virtualization technology from a vendor-neutral perspective. There is always the question of which virtualization software is best. There is no single correct answer to this question unless it is either emotionally based or prejudicial in some way.
All virtualization software does the same thing: virtualize physical machines and the services that they provide. You'll have to decide what you need from virtualization and then choose the best technology that fits that need -- and worry about vendor specifics later. You may also use more than one virtualization solution to solve the various needs within your network.
If you're going to invest thousands, perhaps hundreds of thousands, in virtualization, you need to experience the software for yourself. Vendors know this and are willing to work with you. Many offer full versions for a trial period. If a trial version won't work for you, get in touch with the vendor and get the actual licensed software for evaluation.
About the Authors
Kenneth Hess is the virtualization columnist at Linux Magazine and covers all aspects of virtualization, from the server to the cloud. Hess has also published Microsoft Office Access 2007: The L Line, The Express Line to Learning.
Amy Newman has been the managing editor of ServerWatch.com since 1999 and has a weekly column, Virtually Speaking, that covers news and analysis from the virtualization world.
Printed with permission from Prentice Hall Inc. Copyright 2009. Practical Virtualization Solutions: Virtualization from the Trenches by Kenneth Hess and Amy Newman. For more information about this title and other similar books, please visit Pearson Education.