Big vSphere features that don't get much attention

The Virtual Machine Communication Interface and Dynamic Voltage and Frequency Scaling are two big vSphere features that provide great benefits but are often overlooked.

There is no shortage of new VMware vSphere features, but not all have received the attention they deserve. In this tip, we'll cover two valuable but lesser-known vSphere features that solutions providers should be aware of: the Virtual Machine Communication Interface (VMCI) and Dynamic Voltage and Frequency Scaling (DVFS). Both vSphere features are significant to solutions providers. VMCI greatly increases the speed at which virtual machines (VMs) send data to each other, and DVFS helps decrease operational costs -- a very important benefit for customers.

Overlooked vSphere features: VMCI

VMware Inc. introduced VMCI, a high-speed interface, in vSphere. The interface allows two VMs on the same host to communicate with each other without going through the network stack that typically sits between them. Additionally, the interface allows for high-speed communication between a VM and its host server. VMCI operates independently of the guest networking stack, which means the communication rate is close to that of the host server's memory bus. For an example of the speed difference compared to traditional 1 Gbps networking, use the following formula to calculate memory bandwidth and throughput:

  • Base DRAM clock frequency in MHz (millions of DRAM clock cycles per second) x Memory interface (or bus) width (64 bits for DDR, DDR2 and DDR3) x Number of interfaces (single, dual or triple channel) x Number of data transfers per clock cycle (2 for DDR, DDR2 and DDR3).

Using an HP DL385 G6 that uses PC2-6400 DDR2 DIMMs (DDR2-800, which has a 400 MHz base clock), the memory bandwidth would be

  • 400 million hertz x 64 bits x 2 interfaces x 2 transfers per clock = 102.4 Gbps, or about 12.8 GBps

The result is roughly 100 times as fast as a 1 Gbps network connection, and it is not subject to the typical latency that is inherent in most networks. This type of speed is very beneficial to multi-tier applications that have different components running on different servers, such as Web, application and database servers. While this speed sounds great, it only works with applications that are designed to communicate via VMCI instead of the network.
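As a quick sanity check, the bandwidth formula can be expressed in a few lines of Python (the function name and default values here are ours, purely for illustration):

```python
def memory_bandwidth_gbps(base_clock_mhz, bus_width_bits=64,
                          channels=2, transfers_per_clock=2):
    """Peak memory bandwidth in decimal gigabytes per second."""
    bits_per_second = (base_clock_mhz * 1_000_000 * bus_width_bits
                       * channels * transfers_per_clock)
    return bits_per_second / 8 / 1e9

# DDR2-800 (PC2-6400) has a 400 MHz base clock; dual channel, 2 transfers/clock:
print(memory_bandwidth_gbps(400))   # prints 12.8 -- roughly 100x a 1 Gbps link
```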

VMware has a special VMCI Sockets API that allows solutions providers to adapt their applications to use the VMCI interface. The drivers for VMCI are inside VMware Tools for both Windows and Linux VMs and are installed by default with a typical installation. Once you install the drivers, you need to enable VMCI -- it is disabled by default -- by editing the VM's settings. When a virtual machine is created with Virtual Hardware Version 7, a few lines are automatically added to its configuration files (.vmx) for VMCI that are listed below:

vmci0.present = "TRUE"
vmci0.pciSlotNumber = "33"
vmci0.id = "-1848525795"

The VMCI adapter is essentially a special network interface card, and it is assigned a Peripheral Component Interconnect (PCI) slot number as well as a unique ID that is equivalent to a MAC address. The ID is automatically generated and must be unique among the other VMs on a host. To enable VMCI, you edit a VM's settings, and on the hardware tab, you will see a VMCI device that is automatically created when a VM is created. This device cannot be removed and can only be enabled or disabled. Once you check the box to enable it (Figure 1), the status changes from restricted to unrestricted, which allows it to be used.

Figure 1 -- Enable VMCI on a VM by editing its hardware settings.
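The VMCI entries shown above are plain key/value pairs, so checking them across many VMs is easy to script. Here is a minimal sketch (the function names are ours, and production .vmx parsing may need to handle more edge cases):

```python
def read_vmx(path):
    """Parse a .vmx file into a dict of key -> value, with quotes stripped."""
    settings = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks, comments and anything that is not key = "value"
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            settings[key.strip()] = value.strip().strip('"')
    return settings

def has_vmci_device(settings):
    """True when the VM's configuration declares a VMCI device."""
    return settings.get("vmci0.present", "FALSE").upper() == "TRUE"
```

For example, `has_vmci_device(read_vmx("myvm.vmx"))` would return True for the configuration lines shown above. Keep in mind that the device being present is not the same as it being enabled; the restricted/unrestricted state described here is tracked separately.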

To take advantage of this new feature, you must have applications that specifically support it. At this time, there isn't much vendor support for it, despite its clear advantages. That should change soon, however, because VMCI is a great feature that greatly speeds up multi-tier applications once they are modified for VMCI use.
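To give a flavor of what VMCI-aware code looks like: on recent Linux guests, VMCI sockets are exposed through the AF_VSOCK address family, so a guest-to-host connection can be sketched with Python's standard socket module. The port number and helper names below are illustrative, and this only runs inside a VM whose kernel has vsock support (Windows guests use VMware's native VMCI Sockets C API instead):

```python
import socket

VMADDR_CID_HOST = 2  # well-known vsock context ID that addresses the host

def vsock_address(cid, port):
    """Build an AF_VSOCK address tuple (cid, port); ports are 32-bit unsigned."""
    if not 0 <= port <= 0xFFFFFFFF:
        raise ValueError("vsock ports are 32-bit unsigned integers")
    return (cid, port)

def echo_to_host(port=5000, payload=b"hello"):
    """Send a payload from a guest to a listener on the host, return the reply.

    Raises OSError when no VMCI/vsock transport is available, e.g. on a
    physical machine or an older guest kernel.
    """
    with socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM) as s:
        s.connect(vsock_address(VMADDR_CID_HOST, port))
        s.sendall(payload)
        return s.recv(len(payload))
```

The point of the sketch is that, apart from the address family and the (cid, port) addressing, the code is ordinary socket code, which is what makes adapting a networked application to VMCI practical.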

Overlooked vSphere features: DVFS

Most solutions providers are probably already familiar with the Distributed Power Management (DPM) feature that was introduced in Virtual Infrastructure 3. DPM helps save on power and cooling costs by powering off hosts during periods of inactivity.

One little-known vSphere feature, called Dynamic Voltage and Frequency Scaling (DVFS), can help your customers save even more money. DVFS uses newer CPU power management technologies: Enhanced Intel SpeedStep and AMD Enhanced PowerNow. These technologies allow a server to dynamically switch CPU frequencies and voltages based on workload demands. As a result, processors draw less power and create less heat, which allows the fans to spin slower. Earlier versions of SpeedStep and PowerNow only supported switching between high and low frequencies.

The enhanced versions also add support for varying the voltage of the processor -- an interface allows a system to change the performance states (P-states) of a CPU. P-states are fixed operating frequencies and voltages that allow a CPU to operate at different power levels. Depending on the processor, a CPU can operate in several different P-states, with the lowest level being Pmin and the upper level being Pmax. For example, a 3 GHz processor will have a Pmax of 3 GHz and might have a Pmin of 2 GHz with several in between P-states like 2.3 GHz, 2.5 GHz and 2.7 GHz.
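To see why adding voltage scaling matters, recall that a CPU's dynamic power draw is roughly proportional to frequency times voltage squared (P ~ C x V^2 x f). The sketch below applies that rule of thumb to the hypothetical 3 GHz processor above; the voltages are made-up illustrative values, not vendor specifications:

```python
def relative_dynamic_power(freq_ghz, volts, base_freq_ghz, base_volts):
    """Dynamic power relative to the top P-state, using P ~ f * V^2."""
    return (freq_ghz / base_freq_ghz) * (volts / base_volts) ** 2

# Hypothetical P-state table: name -> (frequency in GHz, core voltage)
p_states = {
    "P0 (Pmax)": (3.0, 1.25),
    "P1": (2.7, 1.20),
    "P2": (2.5, 1.15),
    "P3": (2.3, 1.10),
    "P4 (Pmin)": (2.0, 1.05),
}

for name, (f, v) in p_states.items():
    print(name, round(relative_dynamic_power(f, v, 3.0, 1.25), 2))
```

Under these assumed values, the lowest P-state runs at two-thirds of the top frequency but draws only about 47% of the dynamic power, which is why dropping voltage along with frequency saves more than frequency scaling alone would.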

Processor P-states are controlled by either the system ROM or by an operating system. In vSphere, the VMkernel controls the P-states and optimizes each CPU's frequency to match demand and improve power efficiency without affecting performance. Before you can use this feature, you need to enable it in the server's BIOS. The BIOS setting will vary from server to server but is typically labeled as Power Regulator, Demand Based Switching, PowerNow or SpeedStep. For HP servers, it is labeled Power Regulator for ProLiant and you'll find it listed under System Options in the BIOS. Using an HP ProLiant DL385 G6 as an example, you will see there are four modes listed in the BIOS for Power Regulator for ProLiant (Figure 2).

Figure 2 -- Changing the power mode settings on an HP ProLiant server

For vSphere, choose the OS Control Mode option to enable your hosts to take advantage of this feature. OS Control Mode allows the VMkernel to change P-states on the CPUs in a host, which conserves power whenever possible. If you disable this feature in the BIOS or set it to Static, then the processor configuration in the vSphere Client will show that Power Management Technology is not available. Once you set it to OS Control Mode in the server's BIOS, the processor configuration will reflect whichever technology is currently being used (PowerNow or SpeedStep), as shown below (Figure 3).

Figure 3 -- Processor power management settings on a host server

vSphere defaults to static mode. To take advantage of this feature, you must change the Power Management Policy to dynamic: click the Configuration tab in the vSphere Client, select Advanced Settings in the Software panel, select Power in the left pane, change Power.CpuPolicy from static to dynamic in the right pane and click OK.

The static and dynamic settings for Power.CpuPolicy are defined as follows:

  • Static (the default setting): The VMkernel can detect power management features available on the host but does not actively use them unless requested by the BIOS for power capping or thermal events.
  • Dynamic: The VMkernel optimizes each CPU's frequency to match demand, which improves power efficiency without affecting performance. When CPU demand increases, this policy setting ensures that CPU frequencies also increase.

Once it is enabled, there is no way to monitor the operation of this feature in vSphere, but some server management tools will show you P-states and power consumption. For example, on an HP server using the Integrated Lights-Out management board, you can see the change in P-states from 0 (highest, or Pmax) to 4 (lowest, or Pmin) after this feature is enabled (Figure 4).

Figure 4 -- Change in processor P-states on an HP server after enabling power management in vSphere

Figure 5 shows the reduction in power usage once this feature is enabled.

Figure 5 -- Reduction in power consumption on an HP server after enabling DVFS

Using DVFS along with DPM can lead to significant energy savings, especially in larger data centers that have regular periods of low workloads. These vSphere features are a must in virtual desktop environments, where activity is typically very low for almost two-thirds of the day and on weekends.

VMCI and DVFS are two very beneficial vSphere features, and solutions providers should try to take advantage of them when they can. Both features are available in all editions of vSphere, from the lower-cost Essentials editions all the way up to the Enterprise Plus edition.

About the expert
Eric Siebert is a 25-year IT veteran whose primary focus is VMware virtualization and Windows server administration. He is one of the 300 vExperts named by VMware Inc. for 2009. He is the author of the book VI3 Implementation and Administration and a frequent TechTarget contributor. In addition, he maintains vSphere-land.com, a VMware information site.
