Maximize server virtualization ROI with network I/O virtualization

I/O traffic can stop your server virtualization project short by consuming too many card slots and creating traffic bottlenecks. With network I/O virtualization you can help your customers improve consolidation and performance on their networks, as well as increase system resiliency.

Solution provider takeaway: Solution providers will learn how to use network I/O virtualization to improve consolidation and performance on their customers' networks while increasing system resiliency. 

During virtualization projects, servers are typically split into two groups: servers to be virtualized and servers not to be virtualized. As the processing power of the virtualization host has increased, the determining factor between the two groups has shifted from processor resource requirements to input/output (I/O) requirements.

Today, high I/O servers are ruled out of server virtualization projects because they consume so much of the available I/O bandwidth that the virtualization host is unable to sustain more than a few other virtual machines -- even though there may be plenty of computing resources remaining. The result is a lowered ROI on the virtualization project, because fewer of the physical machines in the environment are virtualized. However, you can help clients optimize the ROI of their server virtualization projects with network input/output virtualization.

The problem: Network I/O bottlenecks

It's important to understand what causes network I/O bottlenecks when implementing server virtualization. If you have 10 virtual machines all making I/O requests, the hypervisor can quickly become overburdened with handling those requests and maintaining overall performance. However, network I/O isn't the only thing to suffer: overall system processing and memory are also affected when the hypervisor is busy handling I/O tasks. To avoid this problem, companies like VMware recommend a dedicated 1 Gigabit Ethernet (GigE) card per virtual machine, or at least a dedicated card for each virtual machine with high I/O requirements.

This practice has several problems. Physical host servers do not have enough card slots to support the number of 1 GigE cards needed; many integrators also recommend dedicated network cards for VMotion and virtual infrastructure management, and still more cards are required for storage I/O. Adding cards also erodes one of virtualization's key benefits: reduced power consumption. A typical 10 x 1 GigE implementation can consume about 80 watts of power for the cards alone. Finally, managing ten 1 GigE cards is a challenge, both from a physical cabling perspective and from a virtual assignment perspective.
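
To put rough numbers on the slot, power and cabling cost, here is a minimal back-of-the-envelope sketch in Python. The per-card wattage figures (about 8 W for a 1 GigE card, roughly 15 W for a 10 GigE card) are illustrative assumptions chosen to match the 80-watt estimate above, not vendor specifications.

```python
# Back-of-the-envelope comparison: ten 1 GigE cards vs. one 10 GigE card.
# Wattage figures are illustrative assumptions (~8 W per 1 GigE card, so
# ~80 W total as estimated above; ~15 W assumed for a 10 GigE card).

def footprint(cards: int, watts_per_card: float, cables_per_card: int = 1):
    """Return (slots, watts, cables) consumed by a NIC configuration."""
    return cards, cards * watts_per_card, cards * cables_per_card

ten_by_one_gige = footprint(cards=10, watts_per_card=8.0)
one_by_ten_gige = footprint(cards=1, watts_per_card=15.0)

print("10 x 1 GigE -> slots: %d, watts: %.0f, cables: %d" % ten_by_one_gige)
print("1 x 10 GigE -> slots: %d, watts: %.0f, cables: %d" % one_by_ten_gige)
```

Whatever the exact wattage, the direction is the same: one consolidated card saves slots, power and cables, which is why the rest of this article focuses on making a single 10 GigE card behave like many smaller ones.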

Furthermore, while installing a 1 GigE card certainly increases the I/O capacity of a virtualization host, it does not lessen the bottleneck at the hypervisor. That bottleneck grows as the number of virtualized servers increases: the more often the hypervisor is interrupted to handle the virtual machines' I/O requests, the less capable the system is of sustaining I/O performance. The I/O subsystem needs added intelligence to parse I/O traffic effectively and preserve the quality of service for each application.

The solution: Network I/O virtualization (IOV)

One solution is to apply the concepts of server virtualization to the network. Essentially, network IOV shares the I/O bandwidth of a network interface card (NIC) across several compute engines -- in this case, virtual machines. Implementing IOV at the NIC level makes the interface appear as multiple interface cards to the host machine.

Network I/O virtualization can leverage new capabilities in the virtualization OSes to bring order and control to 10 GigE. VMware, for example, calls this capability NetQueue, a performance enhancement technology within VMware Virtual Infrastructure. It provides multiple receive queues that can be assigned to individual virtual NICs. Network adapters that support NetQueue are required: these NICs can divide their bandwidth into hardware channels, and each channel can then be assigned to a different NetQueue. Together, the technologies give administrators the ability to allocate bandwidth individually to each of the hardware channels.
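
A rough way to picture the queue-per-virtual-NIC idea is the toy model below. It is a hedged Python sketch of the general technique, not VMware's NetQueue implementation: the NIC classifies each incoming frame by destination MAC address and drops it straight into the receive queue owned by that virtual NIC, so the hypervisor never has to sort the traffic itself. The class name, MAC addresses and frame format are all hypothetical.

```python
from collections import deque

class MultiQueueNic:
    """Toy model of a NIC that classifies received frames into per-VM queues.

    An illustrative sketch of hardware-assisted receive queues in general,
    not VMware NetQueue's actual interface. Each virtual NIC (keyed by its
    MAC address) gets its own queue, so frame sorting happens 'in the NIC'
    rather than in the hypervisor.
    """

    def __init__(self):
        self.queues = {}          # MAC address -> dedicated receive queue

    def register_vnic(self, mac: str) -> None:
        """Assign a dedicated receive queue to a virtual NIC."""
        self.queues[mac] = deque()

    def receive(self, frame: dict) -> None:
        """Classify an incoming frame by destination MAC (done by the NIC)."""
        queue = self.queues.get(frame["dst_mac"])
        if queue is not None:
            queue.append(frame)   # lands directly in that VM's queue
        # A real NIC would fall back to a default queue for unknown MACs.

    def poll(self, mac: str) -> list:
        """A VM's virtual NIC drains only its own queue."""
        frames, self.queues[mac] = list(self.queues[mac]), deque()
        return frames

nic = MultiQueueNic()
nic.register_vnic("00:50:56:aa:00:01")   # hypothetical MAC for VM 1
nic.register_vnic("00:50:56:aa:00:02")   # hypothetical MAC for VM 2
nic.receive({"dst_mac": "00:50:56:aa:00:01", "payload": b"to vm1"})
nic.receive({"dst_mac": "00:50:56:aa:00:02", "payload": b"to vm2"})
print(len(nic.poll("00:50:56:aa:00:01")))   # 1 frame, already sorted for VM 1
```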

These independent channels allow each virtual machine to treat its virtual network I/O path as if it were an exclusive path, removing the burden of I/O load balancing from the hypervisor. Because functions such as traffic classification are offloaded from the hypervisor, CPU cycles are no longer diverted to managing Ethernet traffic flow, and the response time of applications running on the VMs is not affected.

Network I/O virtualization results

The result is that a single card can be divided into multiple channels. For example, a 10 GigE card can function as if it were ten 1 GigE cards, with almost no speed lost to latency. Additionally, an allocation of bandwidth can still be shared across multiple VMs. For example, 3 Gbps of bandwidth can be allocated to the general-purpose, limited-I/O virtual machines, and the remaining 7 Gbps can be allocated as five 1 GigE channels and one 2 GigE channel. Failover of the high-performance virtual cards could still occur to the 3 Gbps general-purpose pool, and each individual hardware channel can be reclassified or reset by the hypervisor as needed. With network IOV, a system administrator can achieve 10 GigE line-speed performance while offloading the CPU and maintaining quality of service per virtual machine by isolating the network I/O channels.
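
The carve-up described above can be sanity-checked with a small allocation sketch. The channel sizes come from the example in the text; the VM names and the allocator itself are hypothetical illustrations of how a fixed 10 Gbps budget might be partitioned, not a real management API.

```python
# Hypothetical partitioning of a 10 GigE card into hardware channels,
# following the example in the text: a 3 Gbps shared pool plus five 1 Gbps
# channels and one 2 Gbps channel for high-I/O virtual machines.

CARD_CAPACITY_GBPS = 10.0

channels = {
    "shared-pool": 3.0,   # general-purpose, limited-I/O VMs share this
    "vm-db1":      1.0,   # VM names below are made up for illustration
    "vm-db2":      1.0,
    "vm-app1":     1.0,
    "vm-app2":     1.0,
    "vm-app3":     1.0,
    "vm-mail":     2.0,   # the single 2 GigE-equivalent channel
}

allocated = sum(channels.values())
assert allocated <= CARD_CAPACITY_GBPS, "over-committed the physical card"
print(f"Allocated {allocated:.1f} of {CARD_CAPACITY_GBPS:.1f} Gbps "
      f"across {len(channels)} channels")
```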

In addition to improved consolidation and performance, network I/O virtualization can improve system resiliency. It mitigates one risk of consolidating onto a single -- or, more likely, dual -- 10 GigE cards: in a nonvirtualized 10 GigE configuration, it is quite possible for a runaway process to consume all the I/O available to the card. With network IOV, you simulate the 10 x 1 GigE configuration, so a runaway process can consume only the queue assigned to it, and resetting that queue affects only the machines assigned to it. In a virtualized environment, it is critical to ensure that one runaway virtualized OS does not suddenly create a network storm that consumes the entire 10 GigE bandwidth.
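
Continuing the partitioning sketch above, the per-channel cap is what contains a runaway guest. This hypothetical snippet only illustrates the clamping behavior; it is not any particular hypervisor's traffic scheduler.

```python
def deliverable_gbps(demand_gbps: float, channel_cap_gbps: float) -> float:
    """A VM can never push more traffic than its hardware channel allows."""
    return min(demand_gbps, channel_cap_gbps)

# A runaway VM on a 1 Gbps channel tries to flood the wire with 9 Gbps...
print(deliverable_gbps(demand_gbps=9.0, channel_cap_gbps=1.0))  # -> 1.0
# ...so the other 9 Gbps of the card remains available to the other channels.
```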

Virtualization becomes complicated when there is a mix of virtualized and nonvirtualized workloads; the greater that mix, the more complex the overall environment is to manage, which limits the real ROI of virtualization. Network I/O virtualization enables you to expand the virtualized server count: more of the classic low-I/O servers can be consolidated onto fewer virtualization hosts, and a whole new crop of candidates is now available that was not previously considered.

About the author

George Crump is president and founder of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. With 25 years of experience designing storage solutions for data centers across the United States, he has seen the birth of such technologies as RAID, NAS and SAN. Prior to founding Storage Switzerland, George was chief technology officer at one of the nation's largest storage integrators, where he was in charge of technology testing, integration and product selection.

This was first published in August 2008
