Using VMware NetQueue to virtualize high-bandwidth servers

Sometimes high-bandwidth servers are left out of the virtualization picture due to I/O issues, but a new VMware queuing feature, NetQueue, alleviates these performance problems while allowing you to virtualize more servers.


Solution provider takeaway: Learn how to use VMware NetQueue to ease the performance problems that may arise when virtualizing high-bandwidth servers.

The whole point of a VMware server virtualization project is to consolidate server resources, thereby saving your customers money. But due to technical limitations -- such as high I/O requirements -- or organizational politics, certain servers may remain unvirtualized. While the political issues tend to iron themselves out as a virtualization project proves successful, the technical concerns have been more problematic. But a new feature in VMware ESX Server 3.5, NetQueue, will help you virtualize those high-bandwidth servers so your customers get more oomph from server virtualization.

Virtualization projects typically start with the virtualization of low-risk servers -- servers with low-bandwidth requirements are almost always first targets. But as your customers' server virtualization projects mature, you may begin to look at their unvirtualized servers and wonder if there is a way to integrate them into the virtualized environment. The I/O performance problems, however, present a real obstacle -- high network bandwidth is a serious issue.

As an integrator, your job is to solve that problem. One way of addressing the issue is to install multiple 1 Gigabit Ethernet (GbE) cards. The best practice is to dedicate one 1 GbE card to each virtual machine (VM), which avoids queuing issues and gives the VM guaranteed access to 1 Gbps of bandwidth. But that approach carries several problems. Most servers, especially densely packed ones, have a limited number of expansion slots, and those slots are not the sole property of IP networking; slots must be allocated for storage I/O as well. This means the number of available slots becomes the gating factor in how many VMs you can create, rather than the CPU or I/O resources at your disposal. Other drawbacks of multiple 1 GbE cards include cabling complexity, power consumption and cost.

Another option is to implement 10 Gigabit Ethernet. This solves the cabling, power and cost problems but does add a performance issue. It is unlikely that you will use one 10 GbE network card per VM in the host; most likely, you will use one for the whole environment and then a redundant card for failover. Since this is a shared resource for all the VMs, a queuing mechanism is needed.

VMware's standard network I/O queuing mechanism creates a bottleneck. In fact, in a test shown at VMworld in September, a standard 10 GbE card with standard queuing achieved only about 4 Gbps of total throughput.

Better queuing in high-bandwidth environments

To address the limitations of its standard queuing mechanism, VMware introduced NetQueue in ESX Server 3.5. NetQueue enhances the performance of ESX hosts with a 10 GbE infrastructure while reducing the overhead caused by heavy network I/O traffic. NetQueue requires MSI-X support from the server platform, so only specific host systems can run it (including servers from IBM, Dell and HP). The network adapter must also support NetQueue; that support is currently offered by Neterion, SolarFlare Communications and Intel.

NetQueue is disabled by default and must be enabled from the command line. Once enabled, it dramatically improves 10 GbE performance, achieving close to 9.7 Gbps of throughput. Without NetQueue, all I/O requests go to a common queue. With NetQueue, each VM is assigned a virtual NIC, and each virtual NIC has its own queue, eliminating the bottleneck. This lets you subdivide the 10 GbE card on a per-VM basis, delivering near-maximum throughput.
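For reference, enablement on an ESX 3.5 host comes down to setting a VMkernel boot option and a driver module parameter from the service console. The Python sketch below is a minimal, hypothetical automation of those steps; the esxcfg commands and the s2io (Neterion) module parameters shown are as I recall them from VMware's ESX 3.5 documentation, so verify every command and value against the current VMware and NIC vendor documentation before using anything like this.

# Hypothetical helper that enables NetQueue on an ESX 3.5 host from the
# service console. The exact esxcfg commands and module parameters must be
# verified against VMware's and the NIC vendor's documentation; the values
# below assume a Neterion (s2io) 10 GbE adapter.
import subprocess

def run(cmd):
    """Run a service-console command and fail loudly if it errors."""
    print("running:", " ".join(cmd))
    subprocess.check_call(cmd)

def enable_netqueue():
    # 1. Turn on the VMkernel boot option for NetQueue
    #    (VI Client equivalent: Advanced Settings > VMkernel >
    #     VMkernel.Boot.netNetqueueEnabled).
    run(["esxcfg-advcfg", "-k", "TRUE", "netNetqueueEnabled"])

    # 2. Ask the NIC driver for multiple receive queues with MSI-X interrupts.
    #    Parameter names are driver-specific; these are commonly cited values
    #    for the Neterion s2io driver and are assumptions here.
    run(["esxcfg-module", "-s", "intr_type=2 rx_ring_num=8", "s2io"])

    # 3. Rebuild the boot configuration; a host reboot is required afterwards.
    run(["esxcfg-boot", "-b"])

if __name__ == "__main__":
    enable_netqueue()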

How VMware NetQueue works

With standard queuing, a large number of CPU cycles must be allocated to managing Ethernet traffic flow, robbing those cycles from the applications running within the VMs. In a NetQueue-enabled environment, when the supported network interface card (NIC) receives a packet, it classifies the packet into the I/O queue assigned to the corresponding virtual NIC and, in turn, to its VM. Because this classification is done on the NIC itself, the work is offloaded from the host server, further improving performance; the NIC then notifies the VMware hypervisor that the packet has been queued. And because multiple queues are available, the solution scales with the number of processing cores, since the I/O queues can be balanced across them.
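The queuing model is simple to picture. The sketch below is a conceptual illustration only, not VMware code: packets arriving at the NIC are sorted by destination MAC address into the receive queue dedicated to that virtual NIC, and each queue can then be drained independently, for example on its own core, instead of every packet funnelling through one shared queue. The MAC addresses and VM names are made up.

# Conceptual illustration of NetQueue-style multi-queue receive processing.
# Not VMware code; it just models the idea of one receive queue per
# virtual NIC (i.e., per VM), each of which can be serviced on its own core.
from collections import defaultdict, deque

# Hypothetical mapping of virtual-NIC MAC addresses to VMs.
VNIC_TO_VM = {
    "00:50:56:aa:00:01": "vm-web",
    "00:50:56:aa:00:02": "vm-db",
    "00:50:56:aa:00:03": "vm-mail",
}

# One receive queue per virtual NIC, plus a default queue for everything else.
queues = defaultdict(deque)

def classify(packet):
    """Done in hardware on a NetQueue-capable NIC: pick a queue by dest MAC."""
    vm = VNIC_TO_VM.get(packet["dst_mac"], "default")
    queues[vm].append(packet)
    return vm  # the hypervisor is then notified that this queue has work

def service_queue(vm):
    """Each queue can be serviced independently, e.g. on its own CPU core."""
    while queues[vm]:
        packet = queues[vm].popleft()
        print(f"core for {vm}: delivering {len(packet['payload'])}-byte packet")

# A burst of inbound packets for different VMs lands in separate queues,
# so no single shared queue becomes the bottleneck.
for pkt in [
    {"dst_mac": "00:50:56:aa:00:01", "payload": b"x" * 1500},
    {"dst_mac": "00:50:56:aa:00:02", "payload": b"x" * 900},
    {"dst_mac": "00:50:56:aa:00:01", "payload": b"x" * 64},
]:
    classify(pkt)

for vm in list(queues):
    service_queue(vm)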


NetQueue-enabled cards support a limited number of queues. As a result, dedicated queues should be assigned to the VMs that consume the most I/O resources, while the remaining low-I/O VMs share a common set of queues.
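One way to think about that assignment is as a simple greedy policy: give each of the heaviest network consumers its own queue until the card runs out, and leave everyone else on the shared default queue. The sketch below is a hypothetical planning helper, not a VMware tool; the VM names and throughput figures are illustrative only.

# Hypothetical planning helper: decide which VMs get a dedicated NetQueue
# queue on a card that supports a limited number of them. Not a VMware tool;
# VM names and observed throughput numbers are made up for illustration.

def plan_queue_assignment(vm_io_mbps, dedicated_queues):
    """Give dedicated queues to the heaviest I/O consumers; the rest share."""
    ranked = sorted(vm_io_mbps.items(), key=lambda kv: kv[1], reverse=True)
    dedicated = [vm for vm, _ in ranked[:dedicated_queues]]
    shared = [vm for vm, _ in ranked[dedicated_queues:]]
    return dedicated, shared

observed = {           # average observed throughput per VM, in Mbps
    "vm-backup": 1800,
    "vm-db": 950,
    "vm-web": 400,
    "vm-file": 350,
    "vm-print": 20,
    "vm-dns": 5,
}

dedicated, shared = plan_queue_assignment(observed, dedicated_queues=4)
print("dedicated queues:", dedicated)   # the four heaviest consumers
print("shared default queue:", shared)  # low-I/O VMs share what's left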

The next step in improving network performance after implementing 10 GbE is to add a quality of service (QoS) feature on the card. Some manufacturers, like Neterion, have this capability now, and we expect to see NIC-based QoS from most network and storage I/O card manufacturers within the next year. A QoS-enabled card combined with NetQueue will allow you to make a single 10 GbE card look like ten 1 GbE cards to the virtual environment.

With NetQueue in place, a server with high bandwidth demands becomes a viable candidate for virtualization; its I/O traffic will not interfere with that of its fellow VMs. This allows you to offer your customers increased consolidation and reap further ROI from the virtualization project.

The good news for you is that this technology solves a real problem in VMware-based virtualization. These high-bandwidth servers are no longer a roadblock to a more thoroughly virtualized environment. While these solutions are straightforward to install, you can add value by assembling the right network queues for NetQueue to leverage.

About the author
George Crump is president and founder of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. With 25 years of experience designing storage solutions for data centers across the United States, he has seen the birth of such technologies as RAID, NAS and SAN. Prior to founding Storage Switzerland, George was chief technology officer at one of the nation's largest storage integrators, where he was in charge of technology testing, integration and product selection. Find Storage Switzerland's disclosure statement here.


This was first published in November 2008
