Helping customers with VMware ESX version comparisons

Some differences between VMware ESX v3 and ESX v4 are more subtle than others, such as how the system boots. This chapter excerpt details the significant changes, including devices supported in v3, such as the Intel 10/100 NIC, that are no longer supported in v4.

Solution provider’s takeaway: Being able to explain the major differences between VMware ESX v3 and ESX v4, and which version is the best fit for a customer’s environment, can open up new opportunities for VARs. Read how each version’s features vary in this chapter excerpt.

Virtualization as a technology has been around for a very long time. VMware was founded by a group out of Stanford and was one of the early companies that brought virtualization to the x86 platform. Their initial product was a “please try this, it is cool, and tell us what to fix” version of VMware Workstation. Soon after that, VMware Workstation version 2 came out, and the world of computing changed. When version 4 of VMware Workstation came out, more and more people started to use the product, and soon after came the server versions GSX and ESX. With ESX, another change to computing took place; virtualization has become the buzzword and driving force behind many datacenter choices.

VMware produces four major products with varying capabilities and functionality. The products form a triangle in which VMware Workstation, Player, and Fusion sit at the bottom with the broadest range of functionality and capability. Here VMware tries out new ideas and concepts, making this tier the leading edge of virtualization technology. The second tier is VMware ACE, which adds to VMware Workstation an assured computing environment that allows more control over the virtual machines in use.

The third tier of the triangle is VMware Server (formerly GSX Server), which could be called VMware ESX-light because it is a middle ground between VMware Workstation and ESX, providing Workstation-style functionality while running on another operating system: Windows or Linux. VMware Server is a collection of programs that includes a management interface with its own SDK, plus other programs to launch VMs in the background. The pinnacle tier is ESX and ESXi, which are their own operating systems and are the subject of the version comparison covered within this chapter.

ESX v3 and ESX v4 differ in many small ways, but both differ greatly from ESX v2. These differences revolve around how the system boots and how the functionality of the earlier version was implemented inside the new version. Between ESX v3 and ESX v4 there are changes in just about every subsystem, and all for the better. This is the release of ESX that brings many of the VMware Workstation cutting-edge technologies into the server environment, including the new virtual hardware and the new virtual disk file format and functionality (such as thin provisioning).

Because so many subsystems have had modifications and enhancements, they need to be broken out in more detail. It is easy to say it is a new operating system, but in essence ESX v4 is an enhancement to ESX v3 that simplifies administration, increases functionality and performance, and incorporates common customer-requested improvements.

The version comparison of ESX in this chapter looks at the following:

• The vmkernel (the nuts and bolts of the virtualization hypervisor)

• The boot process and tools of the console operating system or service console (SC)

• The changes to virtual networking (vNetwork)

• VMFS datastores

• Availability

• Backup methods

• Licensing methods

• Virtual hardware functionality

• VM management

• Server and VM security

• Installation differences

• VMware Certified Professional changes

Although the text of these sections discusses the differences between ESX v3 and ESX v4, the author has left the data referring to ESX v2 in the tables so that the reader can see the growth of the product. In addition, the tables include VMware ESXi where necessary. In many ways, ESXi is identical to ESX. The major differences are in how ESXi boots and the lack of a full-blown service console. Unless otherwise stated in a table or in the discussion, the differences apply to ESXi as well as to ESX.

VMware ESX/ESXi Architecture Overview
VMware ESX and ESXi share a multilayer architecture comprising multiple types of software. In Figure 2.1, we see that the top of the software stack is the application that runs within each guest operating system, which in turn runs within the virtual machine. The virtual machine is composed of the stack that contains the Application (APP) and the Guest Operating System (OS). Below the Guest OS is the virtual machine manager (VMM). Each VM talks to its own VMM, which comprises, among other things, the virtual hardware in use by the VM.

The VMM is a software layer that provides interaction between the Guest OS and the kernel layer. The kernel layer is referred to as the vmkernel, which coordinates VMM interactions with the physical hardware and schedules the VMs to run on their associated physical CPUs. The vmkernel is the guts of VMware ESX and ESXi. This coordination includes the virtual network components, such as virtual switches that can connect the virtual machines to each other as well as to physical NICs. The vmkernel provides VMs access to the physical resources of the host: it breaks down all resources into CPU, memory, network, and disk and provides these to the VMM layer for use by the VMs.

Figure 2.1 ESX/ESXi architecture in a nutshell

The vmkernel manages the physical devices within its core code but talks to them using drivers or modules that speak the language of the physical devices. Even so, a VM can talk to one of the devices directly using a pass-through mode supported by the vmkernel, which literally maps the device directly to the VM, bypassing many of the vmkernel layers (the dashed line in Figure 2.1). The VMM, kernel, and driver or module layers compose what is referred to as a hypervisor.

VMs run within the hypervisor that coordinates and schedules access to the physical hardware required by the VMs. Hosts can be combined into clusters of hypervisors. Hypervisors can also communicate directly with each other via management and other networks that have physical and virtual components. VMs can communicate with each other and physical machines via virtual and physical networks.

In short, a hypervisor runs VMs and interacts with hardware.
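For readers who want to see this layering from the service console, the esxcfg- utilities expose the vmkernel's view of the host. The following is a minimal sketch using commands present on ESX v3 and v4 service consoles; exact output and available flags vary by release.

    # Dump the vmkernel's view of the host's CPU, memory, storage, and network resources (long output)
    esxcfg-info | less
    # List the physical NICs the vmkernel has claimed
    esxcfg-nics -l
    # List the virtual switches and the physical uplinks attached to them
    esxcfg-vswitch -l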

Vmkernel Differences
The heart of ESX is the vmkernel, and future enhancements all stem from improvements to this all-important subsystem. The new vmkernel supports new and different guest operating systems and has been upgraded to support the latest service console version and driver interactions. The vmkernel looks similar to a Linux kernel, but it is not a Linux kernel. The most interesting similarity is the way modules are loaded, and the list of supported modules has changed. Table 2.1 shows the standard modules loaded by each version of ESX.

Table 2.1 Module Version Differences (ProLiant DL380; Found Using the vmkload -b Command)
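To reproduce a module listing like the one in Table 2.1 on your own host, the service console can enumerate the modules the vmkernel has loaded. This is a hedged sketch for ESX v3/v4; the module names reported will differ with the installed hardware.

    # List the modules currently loaded by the vmkernel
    vmkload_mod -l
    # Alternative listing via the esxcfg-module utility
    esxcfg-module -l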

In ESX v3, the split between the service console and the vmkernel was physical, but there was quite a bit of bleed-through nonetheless. As of ESX v3.5, this bleed-through had been nearly eliminated. The bleed-through took the form of third-party management agents needed to properly control some aspects of the hardware; with ESX v4, VMware has largely done away with the need for such agents by including new drivers to handle these tasks. Agents such as Dell OpenManage and HP Insight Management are now largely replaced by the improved IPMI support.

With the introduction of ESX v4, VMware deprecated some modules that were present in earlier versions of ESX. If the devices that these modules support are a requirement for your ESX installation, you will not be able to upgrade to ESX v4. Table 2.2 lists the devices in ESX v3 that are missing from ESX v4, whereas Table 2.3 includes the differences between ESX 2.5 and ESX v3 for historical purposes. The developers of ESX v4 preferred to settle on modern hardware, and much of the older PCI or PCI-X hardware is obsolete. From a stability point of view, this is a very good thing. Minimizing the number and type of devices that must be supported enables the development team to focus their attention on building quality support for the devices that are supported.

Table 2.2 ESX v3 Devices Obsolete in ESX v4

Table 2.3 ESX 2.5 Devices Obsolete in ESX v3

Several other vmkernel features should be noted that are different between ESX v3 and ESX v4. The first and foremost change is the exposure of internal vmkernel constructs via well-defined APIs that allow third parties to add elements into the vmkernel. These APIs are vNetwork, vStorage, vCompute, and VMsafe, which are discussed throughout the rest of this book. 

vStorage is a new marketing name for the Virtual Disk Development Kit (VDDK) that was available for ESX v3. The other APIs are all brand new and add major functionality.

In addition to these changes, with ESX v4, the vmkernel is now 64-bit and supports up to 1TB of memory and 320 VMs utilizing up to 512 virtual CPUs.

ESX Boot Differences
Simply put, the service console has been upgraded from being based on a variant of 32-bit Red Hat Enterprise Linux Enterprise Server 3 Update 8 to being based on a variant of 64-bit Red Hat Enterprise Linux Enterprise Server 5.1. ESX is in no way a complete distribution of GNU/Linux. Technically, it is not Linux at all, because the vmkernel is what is interacting with the hardware, and the service console (SC) is running within a VM. Legally, the vmkernel is not Linux either, because it is proprietary. Although the SC is a variant of GNU/Linux, it is a management appliance and not the operating system of ESX.

Even with the change in SC version, the rule that “no Red Hat updates should be used” has not been changed. All updates to the SC should come only from VMware. This is crucial. Consider the following: ESX consists of a single CD-ROM, whereas the official version of RHEL5 takes up five CD-ROMs. Therefore, they are not the same and should never be considered the same. For RHEL5, the method to configure any part of the system is to use the supplied system-config- scripts. These are not a part of ESX. Instead, there are a series of esxcfg- scripts that do not map one-to-one to the original Red Hat scripts.
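As a hedged illustration of the "updates come only from VMware" rule: patches for the SC and vmkernel are applied with ESX's own esxupdate utility rather than Red Hat's yum or up2date tooling. The bundle name below is purely illustrative, and sub-command syntax varies slightly between ESX releases.

    # Show the VMware patch bundles already installed on this host
    esxupdate query
    # Apply a downloaded VMware patch bundle (file name is illustrative)
    esxupdate --bundle=ESX400-200907001.zip update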

The esxcfg- scripts, outlined in a later chapter, do, however, map fairly well to the new management tool, the vSphere Client (vSC). The client can be used to configure an ESX host either directly or through a VMware vCenter server. Although there continues to be a web-based interface, it does not present a method to configure the ESX host or a way to create VMs.
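As a rough sketch of how the esxcfg- scripts line up with vSphere Client tasks, the following commands cover a few common host-configuration jobs; the names and flags are taken from ESX v3/v4 service consoles and may vary slightly by release.

    # List VMkernel network interfaces (vSphere Client: Configuration > Networking)
    esxcfg-vmknic -l
    # Query the service console firewall (vSphere Client: Configuration > Security Profile)
    esxcfg-firewall -q
    # List storage paths seen by the vmkernel (vSphere Client: Configuration > Storage Adapters)
    esxcfg-mpath -l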

ESX v4 has a proprietary kernel, the vmkernel, as well as a kernel modified from the stock RHEL5 kernel that runs within the service console; therefore, ESX cannot be considered Linux. ESXi v4 has only the proprietary vmkernel. The modifications to the stock kernel enable the SC to manage the ESX hypervisor. The SC in ESX v4 sees only devices presented or passed through from the vmkernel and does not interact directly with the hardware unless using a pass-through device. Granted, the modifications for ESX are limited in scope to controlling the addition and removal of device drivers to the vmkernel and the ability to control virtual machine and virtual switch objects running within the vmkernel.

In ESX versions earlier than version 3, the vmkernel would load after the SC had fully booted, and the vmkernel would usurp all the PCI devices set to be controlled by the kernel options. In ESX version 3, this changed. The vmkernel loads first, and then the SC, which runs within a specialized VM with more privileges than a standard VM. In ESX v3 the SC was installed onto a local disk, and the VM accessed the local disk through a RAW pass-through SCSI device. In ESX v4, this has changed so that the RAW pass-through SCSI device is no longer used. Instead, the GNU/Linux environment lives within a virtual machine disk file (VMDK). This change further punctuates the difference between the hypervisor and GNU/Linux. So to repeat: The hypervisor is not Linux.
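As a hedged illustration of this change, on an ESX v4 host the service console's disk typically appears as esxconsole.vmdk inside a per-host directory on a VMFS datastore. The datastore name below is an assumption, and the directory name follows the common esxconsole-<UUID> layout, which will differ per installation.

    # The SC root filesystem is backed by a VMDK on a VMFS datastore (path is illustrative)
    ls /vmfs/volumes/datastore1/esxconsole-*/
    # Expected contents include esxconsole.vmdk and its flat backing file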

About the author
Edward L. Haletky is also the author of VMware vSphere and Virtual Infrastructure Security: Securing the Virtual Environment. Edward owns AstroArch Consulting, Inc., where he provides virtualization, security, and network consulting.

Printed with permission from Pearson Publishing. Copyright 2011. VMware ESX and ESXi in the Enterprise: Planning Deployment of Virtualization Servers (2nd Edition) by Edward L. Haletky. For more information about this title and other similar books, please visit http://www.pearsoned.com/professional/.

This was first published in August 2011
