Comparing I/O virtualization and virtual I/O benefits

How do you differentiate between virtual I/O and I/O virtualization? In this tip, Greg Schulz details what each method can contribute to server performance and infrastructure and also compares their distinct qualities.

There is an old industry saying that the best I/O is the I/O that does not have to be done. However, back in the real world, I/Os are an essential function for enabling IT and other information services. Solutions providers find that enabling I/O is vital for getting information to and from servers or storage. Without I/O, servers stop working, storage is unable to respond to requests from servers and there is no data to move over the network.

Virtual I/O (VIO) and I/O virtualization (IOV) sound similar, but there are distinct differences between them. VIO provides abstraction and transparency for applications, insulating them from the details of how and where I/O operations are actually performed. IOV, on the other hand, uses emulation and consolidation to improve the utilization of I/O adapters and their supporting infrastructure. For solutions providers, both VIO and IOV are valuable when selling server- and storage-related hardware and software solutions.

Converged networking solutions, such as Fibre Channel over Ethernet or InfiniBand, are forms of I/O virtualization that consolidate multiple dedicated adapters and their associated software onto a single interconnect. That consolidation benefits your clients by saving money and reducing space requirements.
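As a rough illustration of that consolidation, compare adapter and cable counts for a rack of servers outfitted with separate Ethernet NICs and Fibre Channel HBAs versus converged network adapters. All of the counts below are hypothetical, chosen only to show how the savings add up:

```python
# Hypothetical back-of-envelope comparison: separate vs. converged adapters.
servers = 20    # servers in a rack (assumed)
nic_pairs = 2   # redundant Ethernet NICs per server (assumed)
hba_pairs = 2   # redundant Fibre Channel HBAs per server (assumed)
cna_pairs = 2   # redundant converged network adapters per server (assumed)

separate_adapters = servers * (nic_pairs + hba_pairs)
converged_adapters = servers * cna_pairs

# One cable per adapter port keeps the example simple.
print(f"Separate adapters/cables:  {separate_adapters}")   # 80
print(f"Converged adapters/cables: {converged_adapters}")  # 40
print(f"Reduction:                 {separate_adapters - converged_adapters}")
```

Halving the adapter count in this sketch also halves switch ports, cables and the associated power and cooling, which is where the green and floor-space benefits listed below come from.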

I/O virtualization benefits and value propositions:

  • Doing more with resources that exist, such as people and technology, or reducing costs.

  • Single (or pair for high-availability) interconnect for networking and storage I/O.

  • Reduction of power, better cooling and floor space and other green benefits.

  • Simplified cabling and reduced complexity for server network and storage interconnects.

  • More effective use of scarce PCI or mezzanine slots, which can boost server performance.

  • Rapid redeployment to meet changing workload and I/O profiles of virtual servers.

  • Scaling I/O capacity to meet high-performance and clustered application needs.

  • Optimal use of common cabling infrastructure and physical networking facilities.

A general tip is that the faster a processor or server is, the more pronounced the performance hit when it waits on slower I/O operations. Consequently, faster servers need better-performing I/O connectivity and networks -- that is, lower latency, more input/output operations per second (IOPS) and greater bandwidth. When selling or configuring fast servers or blade systems, make sure to include fast I/O and networking interfaces that reach fast storage systems and avoid data center or IT bottlenecks.
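The effect is easy to quantify. In the sketch below, a workload performs one synchronous I/O per unit of work, so the time per operation is CPU time plus I/O wait. Doubling CPU speed barely helps while the I/O latency stays fixed; the microsecond figures are illustrative assumptions, not measurements:

```python
# Illustrative: with synchronous I/O, time per operation = CPU time + I/O wait.
def time_per_op(cpu_us: float, io_latency_us: float) -> float:
    return cpu_us + io_latency_us

ops = 1_000_000  # total operations in the workload (assumed)

slow_cpu = time_per_op(cpu_us=50, io_latency_us=500) * ops / 1e6  # seconds
fast_cpu = time_per_op(cpu_us=25, io_latency_us=500) * ops / 1e6
fast_io  = time_per_op(cpu_us=25, io_latency_us=100) * ops / 1e6

print(f"Slow CPU, slow I/O: {slow_cpu:.0f} s")  # 550 s
print(f"2x CPU,   slow I/O: {fast_cpu:.0f} s")  # 525 s -- only ~5% faster
print(f"2x CPU,   fast I/O: {fast_io:.0f} s")   # 125 s
```

Doubling CPU speed alone shaves less than 5% off the total, while cutting I/O latency cuts the total by more than three-quarters, which is why fast servers deserve fast I/O.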

IOV and VIO technologies, such as Fibre Channel N_Port ID Virtualization (NPIV), simplify adapter management and changes in dynamic virtual environments. With NPIV, each virtual machine keeps its own virtual worldwide name, so a VM can move to a different physical server, with different physical adapters, without the solutions provider having to make zoning or other management changes. Similarly, converged InfiniBand LANs, storage area networks and adapters simplify connectivity and cabling while reducing the number and types of adapters needed, supporting both consolidation and performance enhancement.

Peripheral Component Interconnect (PCI) IOV is definitely worth discussing with your customers. PCI IOV consists of a PCI Express (PCIe) bridge, which is attached to a PCI root complex, and an attachment to a separate PCI enclosure. Two PCI IOV approaches are single-root (SR-IOV) and multi-root (MR-IOV). SR-IOV lets multiple-guest operating systems on the same physical server access a single I/O device simultaneously without having to rely on a hypervisor for a virtual host bus adapter or network interface card. MR-IOV is the next step in PCI and I/O evolution that lets a PCIe or SR-IOV device be accessed through a shared PCIe fabric across different physically separated servers and PCIe adapter enclosures.

One PCI IOV advantage is that physical adapter cards located in separate enclosures can be shared by a single physical server or by multiple physical servers without incurring the potential I/O overhead of passing through a virtualization software layer.
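When qualifying hardware for SR-IOV, you can check support directly on a Linux host: the kernel exposes an sriov_totalvfs attribute in sysfs for each PCI device whose adapter advertises the capability. The helper below is a minimal sketch assuming the standard sysfs layout; device addresses and virtual function counts will vary by host and adapter:

```python
# Sketch: list PCI devices on a Linux host that advertise SR-IOV support.
# Assumes the standard sysfs layout, where an SR-IOV-capable device exposes
# /sys/bus/pci/devices/<address>/sriov_totalvfs (maximum virtual functions).
from pathlib import Path

def sriov_capable_devices(sysfs_root="/sys/bus/pci/devices"):
    """Map PCI address -> maximum number of virtual functions."""
    devices = {}
    for attr in Path(sysfs_root).glob("*/sriov_totalvfs"):
        devices[attr.parent.name] = int(attr.read_text().strip())
    return devices

for address, total_vfs in sriov_capable_devices().items():
    print(f"{address}: up to {total_vfs} virtual functions")
```

On a host with no SR-IOV-capable adapters (or on a non-Linux system) the listing is simply empty; writing a count to the sibling sriov_numvfs attribute is how an administrator actually enables the virtual functions.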

Action points for I/O:

  • Minimize the impact of I/O on applications, servers, storage and networks.

  • Improve efficiency, utilization and performance.

  • Consider latency, bandwidth effectiveness and availability in addition to cost.

  • Apply the appropriate type and tier of I/O and networking to the task at hand.

Takeaways for I/O:

  • I/O operations and connectivity are being virtualized to simplify management.

  • Convergence of networking transports and protocols continues to evolve.

About the expert
Greg Schulz is founder of The StorageIO Group, an IT consulting firm. He has worked as a programmer, systems administrator and disaster recovery consultant and capacity planner for various IT organizations, and he also worked for several vendors before joining an analyst firm and later forming StorageIO. In addition, Schulz is a prolific writer, blogger and speaker who regularly appears at conferences and other events around the world. He is the author of The Green and Virtual Data Center and Resilient Storage Networks. Find him on Twitter -- @storageio.
