I/O virtualization (IOV): Delivering cost and power savings to data center servers

Find out how I/O virtualization (IOV) can cut costs and save power for data center servers, how it compares with other networking technologies, what IOV products are on the market, when and how to implement it, and what the main management concerns are.

I/O virtualization (IOV) is an emerging technology area that extends the virtualization concept to the I/O handled by data center servers. The concept itself isn't entirely new; virtualization is already applied to some network I/O technologies. For instance, a virtual LAN separates the physical and logical network components so one physical network can be managed as multiple logical networks. NIC teaming, on the other hand, combines multiple network adapters to function as a single adapter with additional bandwidth. In both cases, logic at the hardware and management software levels decouples the logical functions from the underlying hardware.

Within an enterprise data center, each server typically needs access to a LAN, a SAN and direct-attached storage (DAS), and some servers also need access to high-end graphics processing. Access to these resources usually comes via an internal system bus. In a multicore server with a high-speed PCI Express (PCIe) bus, each of these I/O channels occasionally reaches peak bandwidth, but seldom at the same time or for an extended period. Even with many virtualized servers running on a single physical server, the I/O pipes stay busy but rarely hit full bandwidth simultaneously or for a sustained stretch.

What if, instead of installing separate network and storage adapters in every data center server, the PCIe bus adapters could be virtualized and shared across multiple servers? Consider the potential cost and power savings for NICs, host bus adapters (HBAs) and SAS/SATA disk controller cards that could be shared across a rack of servers. A rack full of servers could have only one cable per server connecting it to a virtualized set of I/O adapters at the top of the rack. That top-of-rack unit could then dynamically direct all LAN, SAN and DAS traffic to the appropriate location, such as end-of-row switches, leaving the servers to focus on computing. This "rack-area network" (RAN) concept gives an entire rack of servers some of the same benefits as blade servers, but without the limitations of a blade server chassis. The consolidation realized in this scenario would also mean that the rack servers could shrink to 1 rack unit (1U) or even one-half of a rack unit (1/2 U).
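
To make the potential savings concrete, here's a rough back-of-the-envelope sketch in Python. The adapter counts, prices and power figures are assumptions chosen purely for illustration, not vendor numbers; substitute real quotes and measured wattages before drawing any conclusions.

# Back-of-the-envelope comparison of dedicated adapters per server versus a
# shared top-of-rack IOV pool. All figures below are assumptions for
# illustration only, not vendor pricing or measured power draw.

SERVERS_PER_RACK = 40

# Dedicated model: every server carries its own adapters.
dedicated_per_server = {          # (quantity, unit cost $, watts each)
    "10GbE NIC":      (2, 400, 10),
    "FC HBA":         (2, 800, 12),
    "SAS controller": (1, 300,  9),
}

# IOV model: a shared pool in the top-of-rack unit sized for the whole rack,
# plus one PCIe extender (or similar) card per server.
shared_pool = {
    "10GbE NIC":      (8, 400, 10),
    "FC HBA":         (6, 800, 12),
    "SAS controller": (4, 300,  9),
}
extender_cost, extender_watts = 200, 5    # per-server connection card (assumed)

def totals(adapters, servers=1):
    cost = sum(q * c for q, c, _ in adapters.values()) * servers
    watts = sum(q * w for q, _, w in adapters.values()) * servers
    return cost, watts

ded_cost, ded_watts = totals(dedicated_per_server, SERVERS_PER_RACK)
iov_cost, iov_watts = totals(shared_pool)
iov_cost += extender_cost * SERVERS_PER_RACK
iov_watts += extender_watts * SERVERS_PER_RACK

print(f"Dedicated adapters: ${ded_cost:,} and {ded_watts} W per rack")
print(f"Shared IOV pool:    ${iov_cost:,} and {iov_watts} W per rack")
print(f"Savings:            ${ded_cost - iov_cost:,} and {ded_watts - iov_watts} W")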

Consider the movement of a virtual machine (VM) from one physical server to another physical data center server. Typically, this requires a SAN, because a SAN is separate from the physical server and can be accessed from any server, assuming all of the security, zoning and logical unit number (LUN) masking issues have been addressed. What if movement of virtual machines could be made to work with any storage, rather than requiring a SAN? I/O virtualization-capable adapters would run some of the hypervisor functions in hardware, offloading the host CPU and freeing up resources that could be used to host additional virtual machines or applications.

I/O virtualization vs. other networking technologies

Ethernet Data Center Bridging (DCB) and Fibre Channel over Ethernet (FCoE) are a pair of young technologies, but they're slightly more mature than I/O virtualization in today's marketplace. Together, DCB and FCoE allow for hardware consolidation by combining lossless Ethernet with Fibre Channel at the switch and at the host adapter. This DCB/FCoE combination provides some of the same type of consolidation that IOV provides, but it's actually complementary to IOV. Because the DCB/FCoE converged adapters run on the PCI Express bus, they can be used in an I/O virtualization environment and, therefore, could be shared across multiple servers. The host adapters that support DCB and FCoE currently support, or will soon support, IOV technologies such as Single-Root IOV (SR-IOV). An IOV environment can communicate with existing Ethernet, Fibre Channel and DCB/FCoE switches using existing adapters, and as far as the host servers are concerned, they're connected directly to those switch environments.
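
For readers who want a feel for what SR-IOV looks like from the host side: on a Linux server with an SR-IOV-capable adapter and a kernel recent enough to expose the standard sysfs controls, the physical function advertises how many virtual functions it can create and lets you enable them. The short Python sketch below is illustrative only; the interface name eth0 and the VF count are placeholders.

# Minimal illustration of how SR-IOV surfaces on a Linux host: the physical
# function (PF) of a capable adapter advertises how many virtual functions
# (VFs) it can expose, and writing a count to sriov_numvfs creates them as
# independent PCIe devices that can be handed to virtual machines.
# Assumes an SR-IOV-capable NIC and a kernel with these sysfs attributes;
# "eth0" is a placeholder interface name. Run as root.

from pathlib import Path

def show_and_enable_vfs(iface: str = "eth0", wanted: int = 4) -> None:
    dev = Path(f"/sys/class/net/{iface}/device")
    total_file = dev / "sriov_totalvfs"
    num_file = dev / "sriov_numvfs"

    if not total_file.exists():
        print(f"{iface}: adapter or kernel does not expose SR-IOV controls")
        return

    total = int(total_file.read_text())
    print(f"{iface}: adapter supports up to {total} virtual functions")

    vfs = min(wanted, total)
    num_file.write_text(str(vfs))          # creates the VFs on the PCIe bus
    print(f"{iface}: enabled {vfs} VFs; each now appears as its own PCIe device")

if __name__ == "__main__":
    show_and_enable_vfs()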

InfiniBand is another high-speed, low-latency network technology that's typically used in compute cluster environments for server-to-server communication. InfiniBand provides faster speeds than Ethernet today. The newer InfiniBand host adapters, known as host channel adapters (HCAs), run on the PCI Express bus and can support IOV. In addition, some vendors are developing I/O virtualization solutions built around InfiniBand technology, using InfiniBand as the high-speed carrier for the IOV infrastructure.

Current I/O virtualization products

The general IOV approach that most current products take is to connect the local host servers into a top-of-rack unit that holds a variety of network, storage and graphics adapters that can act as a dynamic pool of I/O connectivity resources. The top-of-rack device acts as an I/O fabric for the servers in the rack, and can communicate with other servers in the rack or can connect to end-of-row switches for more distant resources. These IOV top-of-rack units may be less expensive than some of the newer high-speed top-of-rack switches.

Two specific implementation models for IOV are emerging: PCIe- and InfiniBand-based approaches.

One approach to IOV is to extend the PCI Express bus out of the server chassis and into a separate box or chassis populated with IOV-capable adapters that can be shared across multiple data center servers. The I/O virtualization box would be installed in a rack and would function somewhat similarly to a top-of-rack switch, except that instead of only supporting Ethernet or Fibre Channel, this IOV box would act as a type of fabric switch for all LAN, SAN, DAS and possibly graphics traffic. At least three companies are working on products that extend the PCI Express bus into a separate box for the purpose of virtualizing I/O adapters. One advantage to this approach is that servers today already support PCI Express. Some IOV vendors now have first-generation products available and some are publicly discussing products that will appear this year. Some of these products require support for SR-IOV or Multi-Root IOV (MR-IOV), but others don't have that requirement. These products are built around the PCI Express 2.0 specifications, and vendors already have PCI Express 3.0 plans in their product roadmaps.

Aprius Inc. is a small vendor that's building a PCI Express gateway device that will support almost any type of PCI Express adapter (including network cards, storage controllers and graphics coprocessors) and share those adapters across multiple servers. These adapters essentially form an I/O resource pool that can be dynamically assigned to physical or virtual servers.

NextIO helped develop the PCI-SIG I/O virtualization specifications and had some IOV products as early as 2005. The company is working in several areas, including the high-performance computing (HPC) market, and is interested in virtualizing graphics coprocessing in addition to traditional networking and storage I/O traffic. It's partnering with several big-name vendors for a variety of IOV applications.

VirtenSys Inc. extends the PCIe bus with its IOV switches that can virtualize the major types of server networking and storage connectivity, as well as interprocessor communication (IPC) for HPC compute cluster environments.

Another approach to I/O virtualization is to use an existing network interconnect technology such as InfiniBand or 40 Gigabit Ethernet (GbE) as the transport for virtualizing I/O adapters. Two companies are building products to handle IOV in this fashion:

Mellanox Technologies Ltd., well-known for its InfiniBand products, provides its I/O consolidation solutions using either InfiniBand or 10 GbE as the transport for performing IOV. They're also building 40 GbE adapters that are compliant with SR-IOV.

Xsigo Systems Inc. uses InfiniBand HCAs that connect to its I/O Director, which provides the infrastructure for IOV-capable adapters. One reason for using InfiniBand is its high speed and very low latency. Inside the I/O Director are the same PCI Express network and storage adapters that would otherwise be installed in each host server. Xsigo's I/O Director has been available for approximately two years, and the company has established partnerships with a number of storage vendors, including Dell Inc. and EMC Corp.

Many network and storage adapter vendors are working on full support for I/O virtualization, especially for compliance with the SR-IOV and/or MR-IOV specifications. The vendor roster includes Emulex Corp., Intel Corp., LSI, Neterion Inc., QLogic Corp. and others. The big data center server vendors, including Dell, Hewlett-Packard (HP) Co. and IBM Corp., are beginning to demonstrate solutions that support I/O virtualization, either in their rack servers or blade servers, or both. Cisco Systems Inc. has also joined the movement with its Cisco UCS M81KR Virtual Interface Card. The big processor vendors, Advanced Micro Devices (AMD) Inc. and Intel, include virtualization technologies that help enable some of these IOV functions.

How and when to implement I/O virtualization

Implementation of IOV technologies will most likely be a slow, deliberate process. That's because the work to make all the adapters function in this manner isn't complete yet, and because the top-of-rack IOV units are still in their early stages. For I/O virtualization to work properly, development work needs to be completed on the adapter hardware and firmware, drivers, operating systems and hypervisors. Several vendors will be announcing support for various forms of IOV in 2010, and it's anticipated that IOV will emerge as one of the top new technologies for the year. However, expect I/O virtualization to take a few years to become commonplace.

Look for 10 GbE adapters to be the first to fully support IOV. IOV-capable 10 GbE adapters were demonstrated publicly at a number of trade shows in 2009. After the Ethernet adapters, you can expect storage adapters such as Fibre Channel HBAs, FCoE converged network adapters (CNAs) and SAS/SATA non-RAID adapters to support I/O virtualization. The last category of storage adapters likely to fully support IOV is RAID controllers, due to the complexity of sharing RAID functions across servers. Separately, some graphics coprocessor adapters will support IOV, with some products possibly available in 2010.

Implementing IOV-capable adapters will require top-of-rack I/O virtualization units and either PCIe bus extender cards or InfiniBand HCAs for the host servers, depending on the implementation. The IOV-capable adapters are then placed in the top-of-rack IOV units and can be shared across data center servers. Drivers for these adapters will be needed, and few production-ready drivers for any operating system are currently available.

I/O virtualization should be implemented in stages, as with the adoption of any other new technology. Implementation should begin with pilot tests on a small number of servers; the pilot should run until the products operate in a stable manner and benefits can be shown. The Demartek lab will be testing various IOV solutions during 2010, and we'll be able to provide first-hand commentary and results.

A good candidate environment for I/O virtualization might be a virtual server environment that would benefit from sharing some higher-end 10 GbE NICs or similar high-speed adapters. One goal of an IOV implementation may be to acquire I/O adapters based on the overall bandwidth needs of all the servers in a rack, rather than simply buying adapters based on raw server count. This will require adjusting the planning process to account for applications and bandwidth usage, and may require taking more bandwidth measurements in the current environment.
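
As a rough illustration of bandwidth-based purchasing, the following Python sketch sizes a shared pool of 10 GbE adapters from measured per-server peaks instead of from server count. The peak figures and the oversubscription ratio are invented for the example; real planning would use measurements from your own environment.

# Sizing a shared 10 GbE pool from measured per-server bandwidth rather than
# from raw server count. The peak figures and the oversubscription factor are
# assumed values for illustration; replace them with real measurements.

import math

# Assumed measured peak LAN bandwidth per server, in Gb/s.
measured_peaks_gbps = [1.2, 0.8, 3.5, 2.0, 0.6, 4.1, 1.0, 2.4] * 5   # 40 servers

ADAPTER_GBPS = 10.0          # one shared 10 GbE port
OVERSUBSCRIPTION = 2.5       # peaks rarely coincide, so share each port

# Naive model: one NIC per server (often two in practice, for redundancy).
dedicated_nics = len(measured_peaks_gbps)

# Bandwidth model: size the shared pool for aggregate peak demand.
aggregate_peak = sum(measured_peaks_gbps)
shared_nics = math.ceil(aggregate_peak / (ADAPTER_GBPS * OVERSUBSCRIPTION))

print(f"Servers in rack:        {len(measured_peaks_gbps)}")
print(f"Aggregate peak demand:  {aggregate_peak:.1f} Gb/s")
print(f"Per-server purchase:    {dedicated_nics} NICs (1 per server)")
print(f"Shared IOV pool:        {shared_nics} NICs (at {OVERSUBSCRIPTION}:1 oversubscription)")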

Management issues with I/O virtualization

Managing virtual pools of I/O resources will require some new thinking. The adjustment is similar to what was required to effectively manage storage systems when SANs and virtualized storage solutions were first deployed. You'll need to understand that the I/O adapters and paths will no longer be exclusively owned by a particular server, in the same way that storage on a SAN isn't owned by a specific server. Rather, these adapters and paths will be dynamically assigned to servers, and can be released or adjusted as needed. Each vendor providing top-of-rack IOV units will have its own management interface for the I/O virtualization unit itself, along with some level of adapter management. In addition, each adapter manufacturer will provide a basic element manager, similar to what's provided today.
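
To illustrate the shift from "owned by a server" to "assigned from a pool," here's a toy Python model of an IOV resource pool that lends adapters to servers and takes them back. The class and method names are invented for the example and don't correspond to any vendor's management interface.

# Toy model of a top-of-rack IOV resource pool: adapters live in the pool,
# not in any one server, and are assigned and released on demand.
# Names are invented for illustration; real IOV units expose their own
# vendor-specific management interfaces.

class IOVPool:
    def __init__(self, adapters):
        self.free = list(adapters)          # adapters not currently assigned
        self.assigned = {}                  # adapter -> server name

    def assign(self, kind, server):
        """Hand an adapter of the requested kind to a server, if one is free."""
        for adapter in self.free:
            if adapter.startswith(kind):
                self.free.remove(adapter)
                self.assigned[adapter] = server
                return adapter
        raise RuntimeError(f"no free {kind} adapter in the pool")

    def release(self, adapter):
        """Return an adapter to the pool when the server no longer needs it."""
        del self.assigned[adapter]
        self.free.append(adapter)

pool = IOVPool(["10gbe-0", "10gbe-1", "fc-hba-0", "fc-hba-1", "sas-0"])
nic = pool.assign("10gbe", "server-07")     # server-07 borrows a NIC...
hba = pool.assign("fc-hba", "server-07")
pool.release(nic)                           # ...and gives it back later
print(pool.assigned, pool.free)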

It remains to be seen how the operating systems and hypervisors will view these virtualized I/O adapters. Because ownership of the adapters will no longer be tied to a particular operating system or hypervisor, the management of these IOV resources will have to be aware that these resources can logically move around in the data center and that the I/O resources can have multiple personalities.

About the author

Dennis Martin has been working in the IT industry since 1980 and is the founder and president of Demartek, a computer industry analyst organization and testing lab.

This story was originally published in Storage magazine.

This was first published in March 2010
