IT Channel Explained

TCP/IP offload engine (TOE) cards

Our Channel Explained series provides targeted articles that flesh out channel terminology without creating information overload. This week we examine the question: What is a TOE card?

Network servers are under enormous stress. They're running more applications than ever before, while each application on those servers produces larger, richer files. Server virtualization has exacerbated the problem by introducing multiple "virtual machines" to increase resource utilization of a physical server. To make matters worse, the introduction of iSCSI can add significant storage traffic to the corporate LAN.

Some value-added resellers (VARs) are recommending additional CPUs or blade processors to handle their clients' network traffic -- and that's often a suitable solution. But VARs also need to include TCP/IP offload engine (TOE) adapters (sometimes called "network accelerators") in their portfolios. TOE cards remove the host CPU from network communication, easing LAN latency and reducing CPU overhead in certain server configurations. While TOE cards certainly support all types of network traffic, they have been adopted most aggressively in iSCSI storage environments. But TOEs are not a universal fix. Before bringing TCP/IP offload engines into a client's environment, a VAR must understand the role of TOEs, why they work, when they're best deployed and their major caveats.

What is TOE technology and how does it work?

The traditional approach to network data transfers uses processor interrupts. For example, the network interface card (NIC) receives a data packet, the NIC interrupts the host CPU, the CPU extracts the data payload from the remainder of the packet, then moves the data from the network buffer to the server's memory so that an application can receive it. When data needs to be sent, the CPU is interrupted so that it can copy data from application memory to the network buffer, where it is divided into packets for eventual transmission across the network. This interrupt-driven process is highly inefficient: the processor wastes time simply shuffling data around the server rather than doing its more important work of executing instructions and performing calculations for applications.
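To make those copy-and-packetize steps concrete, here is a minimal Python sketch. It only mimics the buffer handling described above; the 40-byte TCP/IP header and 1,500-byte packet payload are illustrative assumptions, and no real interrupts or NIC hardware are involved.

# Minimal sketch of the host-CPU copy work in the conventional path.
# Header and payload sizes are illustrative assumptions, not real driver code.

MTU = 1500                  # bytes of payload per "packet" in this illustration
HEADER = b"\x00" * 40       # stand-in for 20-byte IP + 20-byte TCP headers


def send_path(app_data: bytes) -> list[bytes]:
    """CPU copies application data into the network buffer and packetizes it."""
    return [HEADER + app_data[i:i + MTU] for i in range(0, len(app_data), MTU)]


def receive_path(packets: list[bytes]) -> bytes:
    """CPU strips headers and copies each payload into application memory."""
    app_buffer = bytearray()
    for packet in packets:
        app_buffer += packet[len(HEADER):]   # one more copy per packet
    return bytes(app_buffer)


if __name__ == "__main__":
    packets = send_path(b"x" * 10_000)       # 10 KB of application data
    delivered = receive_path(packets)
    print(len(packets), "packets;", len(delivered), "bytes handed to the application")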

TCP/IP offload engine devices offload these network data processing tasks from the host CPU. A TOE card relies on a specialized chip to intercept network packets and move the data payload from each packet directly to memory (and vice versa) without any intervention from the host processor. This concept of offloading work from the main processor is virtually identical to that governing graphics coprocessors, which offload 3D calculations and visual rendering tasks from the main CPU.
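True TOE behavior cannot be demonstrated from ordinary application code, but the "land the payload directly in pre-allocated memory" idea can be approximated in user space. The hedged Python sketch below uses socket.recv_into() over a loopback connection to fill a pre-allocated buffer without intermediate copies; it is an analogy for the copy-avoidance principle, not TOE itself.

# Loopback analogy for copy avoidance (requires Python 3.8+ for create_server).
# This is not TOE: it only shows data landing directly in a pre-allocated buffer.
import socket
import threading

PAYLOAD = b"y" * 65_536              # illustrative 64 KB transfer
HOST = "127.0.0.1"


def sender(port: int) -> None:
    with socket.create_connection((HOST, port)) as s:
        s.sendall(PAYLOAD)


with socket.create_server((HOST, 0)) as server:      # port 0: OS picks a free port
    port = server.getsockname()[1]
    threading.Thread(target=sender, args=(port,), daemon=True).start()
    conn, _ = server.accept()
    with conn:
        buffer = bytearray(len(PAYLOAD))              # memory the payload lands in
        view, received = memoryview(buffer), 0
        while received < len(PAYLOAD):
            received += conn.recv_into(view[received:])   # no per-chunk bytes objects
    print("received", received, "bytes directly into the pre-allocated buffer")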

Early TOE cards provided optimization only for iSCSI traffic, but TOE adoption has been slow in the iSCSI arena. While TCP/IP offload engine technology is still primarily associated with iSCSI, vendors have broadened TOE support in recent years to encompass all types of LAN traffic, including email, Web (HTML) data, file data, backup data and so on. "Surviving original TOE vendors have retooled and realized that their real core market opportunity will be around regular TCP traffic acceleration -- not iSCSI," said Greg Schulz, founder and senior analyst at StorageIO Group in Stillwater, Minn. It's important for VARs to research the capabilities and specifications of prospective TOE cards from manufacturers like Alacritech Inc. or QLogic Corp. because adapter models can vary in their physical configuration and traffic optimizations.

There are other key features to consider. TCP/IP offload engine cards typically operate at up to full-duplex Gigabit Ethernet (GigE) speeds; provide one or two RJ-45 (copper) or SX-type (optical) multimode fiber ports; and are implemented as PCIe or PCI-X expansion cards that can be added to any corresponding slot in the server. TOE cards often add support for jumbo frames and can lower the latency associated with chatty TCP applications. The TOE card should also include drivers suitable for the server's operating system, such as Windows Server 2003 or a specific flavor of Unix/Linux.
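On the driver side, one quick way to see what a given adapter and driver expose is to query offload features. The sketch below (Linux only, and assuming the standard ethtool utility is installed) lists them; note that mainline Linux reports stateless offloads such as checksum and TCP segmentation offload here, while full TOE support, where available, typically comes through vendor-specific drivers.

# Hedged sketch: list offload-related features a NIC driver reports on Linux.
# Assumes ethtool is installed; the default interface name "eth0" is an assumption.
import subprocess
import sys

iface = sys.argv[1] if len(sys.argv) > 1 else "eth0"
result = subprocess.run(["ethtool", "-k", iface],
                        capture_output=True, text=True, check=True)

for line in result.stdout.splitlines():
    if "offload" in line:                 # e.g. "tcp-segmentation-offload: on"
        print(line.strip())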

What are the biggest issues with TOE technology?

TOE adapters promise the important benefit of improved network performance -- exchanging more network traffic with less CPU overhead, which lets the server spend more time running useful applications or host additional ones. That benefit is particularly critical in a virtual server environment where multiple virtual machines are vying for network I/O. Ultimately, the goal is to reduce the need for more processors, saving the hardware, power and software licensing costs associated with additional processors in the application environment.

One place where TOE is clearly essential is iSCSI SAN boot capability, allowing servers to start up using data on the greater iSCSI storage system rather than relying on local storage within each server. This supports the notion of a "diskless" server environment. It's worth noting, though, that VMware's ESX Server 3i supports iSCSI SAN boot without TOE hardware.

In spite of TOE technology's benefits, however, experts agree that it's not a "must have" feature in every environment. For TCP/IP offload engines to be worthwhile, they must actually address a traffic bottleneck. For example, if the CPU cannot reasonably accommodate additional traffic across a conventional NIC, a TOE card could help.

"The [potential] benefit is there -- to get more work done -- but you can compensate for that by throwing processors and more hardware at it," Schulz said. "And not everyone has those performance needs." For example, moving to a 10 Gigabit Ethernet adapter may be just as effective a solution as deploying a 1 Gbit TOE card.

Some VARs are even more critical of TOE technology. "I honestly don't know of anybody that is using TOE cards for [non-iSCSI] applications. … No one's asked for it, no one's needed it," said Keith Norbie, director of storage and virtualization at Nexus Information Systems in Plymouth, Minn.

He explained that very few applications outside of storage drive the need for TOE. Rather than adding or omitting TOE across the entire environment, Norbie's best advice is to analyze servers and examine the CPU, memory, network and I/O consumption patterns on those systems -- and then selectively add TOE on the systems that will clearly benefit from the technology. "I think you'd be stunned to see how little TOE is needed," Norbie said.
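A hedged sketch of that kind of per-server profiling follows. It relies on the third-party psutil package, and the 10-second sampling window and the 60% CPU / 80 MB-per-second thresholds are illustrative assumptions rather than figures from Norbie or Schulz.

# Hedged profiling sketch: sample CPU, memory and network throughput together.
# Requires the psutil package (pip install psutil); thresholds are illustrative.
import psutil

WINDOW_SECONDS = 10

net_before = psutil.net_io_counters()
cpu_percent = psutil.cpu_percent(interval=WINDOW_SECONDS)   # averaged over the window
net_after = psutil.net_io_counters()

net_mbps = ((net_after.bytes_sent - net_before.bytes_sent) +
            (net_after.bytes_recv - net_before.bytes_recv)) / WINDOW_SECONDS / 1e6
mem_percent = psutil.virtual_memory().percent

print(f"CPU {cpu_percent:.0f}%  memory {mem_percent:.0f}%  network {net_mbps:.1f} MB/s")
if cpu_percent > 60 and net_mbps > 80:
    print("Busy CPU plus heavy LAN traffic: a candidate worth testing with a TOE card")
else:
    print("No obvious network-driven CPU bottleneck on this host")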

Cost is unquestionably another issue that has hindered the adoption of TOE in the general enterprise. Experts agree that the value proposition of TOE technology is being squeezed by total cost of ownership (TCO) concerns. GigE ports are readily available in PCs, and standalone GigE NICs are cheap. By comparison, Ethernet TOE cards can range in price from $500 to $1,500 -- consuming additional server expansion slots and significantly diminishing the cost benefit that is a fundamental tenet of Ethernet (and iSCSI specifically).

"That's why TOEs haven't been anywhere near as successful as they were hyped to be," Schulz said. "Their success is in specialized situations." In practice, some organizations may actually find it more cost-effective to add dual- or quad-core processors and more memory to the server, reaping broader server performance benefits than just network traffic acceleration.

TOE technology does support high-reliability/high-availability configurations, but it comes at a cost. For example, a TOE card with multiple Ethernet ports can typically be trunked for better performance or configured for failover. But two TOE adapters are needed to prevent the TOE card itself from becoming a single point of failure, which doubles the cost and consumes two expansion slots in the server. High-availability TOE architectures therefore force difficult strategic choices on network architects.

What is the future of TOE technology?

As Ethernet continues its advance from 1 Gigabit Ethernet to 10 Gigabit Ethernet, the future of TCP/IP offload engine technology is uncertain. It's important to note that the normal progression of technological integration has not occurred -- there is no indication that TOE features will be included in major "South Bridge," or I/O, chips on popular motherboards. "If [TOE features are] so killer, especially for non-iSCSI implementation, why don't Intel and AMD just add those features in [their chipsets]?" Norbie said.

The implication is that the market simply doesn't need TOE functionality enough to make it worth including. TCP/IP offload engines may not reach acceptance by the mass market but should remain viable solutions in specialized cases -- particularly in iSCSI SAN deployments. Other experts note that enterprises are always faced with more data to move in less time, so the value proposition of TOE should stay unchanged even at higher speeds. But emerging 10 GigE adapters like the X3100 or Xframe E from Neterion Inc. may render current 1 GigE TOE products irrelevant. "For TOEs to survive, they have to go to 10 GigE," Schulz said.


This was first published in April 2008
