In this tip, George Crump takes a look at converged network services -- converging storage and data networks -- from the perspective of a networking service provider. Offering network convergence services is becoming more popular, and networking service providers must be well-versed in the necessary storage protocols to succeed. In this tip, you'll learn best practices for network convergence services, as well as a basic outline of what networking service providers need to know.
The data center infrastructure is converging. The messaging network and the storage network are becoming one. Consequently, network integrators will need to understand which storage protocols are available to work in this new converged environment and what the best practices are for implementing them before offering network convergence services. Most importantly, they'll need to understand what can't be converged and how to navigate through those limitations.
The goal of a converged infrastructure is to simplify management and reduce operational costs. In the initial phases of convergence, this amounts mostly to a reduction in the number of NICs required in a server and the number of cables each needs. This reduction matters because the number of NICs required per server often exceeds the slots available, and the resulting "cable farm" is particularly troublesome in virtualized environments. A typical configuration for a virtual host is two quad-port 1 GbE cards and two 4 Gb Fibre Channel SAN adapters. In many servers, moving to a converged network can reduce the number of cards from four to two and cables from 12 to two. This represents a hard cost savings and often results in lower power consumption because airflow is increased, reducing the load on the cooling system.
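The consolidation arithmetic above can be sketched out for a customer proposal. This is a minimal illustration, not a sizing tool; the port counts are assumptions chosen to match the article's totals (dual-port FC HBAs alongside the quad-port 1 GbE NICs, replaced by single-port CNAs):

```python
# Sketch of the adapter/cable consolidation described above.
# Assumed hardware mix (illustrative, matching the article's totals):
# two quad-port 1 GbE NICs plus two dual-port 4 Gb FC HBAs -> 12 cables,
# replaced by two single-port 10 Gb converged network adapters -> 2 cables.

def count_hardware(adapters):
    """adapters: list of (card_count, ports_per_card) tuples.
    Returns (total cards, total cables), assuming one cable per port."""
    cards = sum(count for count, _ in adapters)
    cables = sum(count * ports for count, ports in adapters)
    return cards, cables

before = [(2, 4), (2, 2)]   # quad-port 1 GbE NICs, dual-port FC HBAs
after = [(2, 1)]            # single-port CNAs

cards_before, cables_before = count_hardware(before)
cards_after, cables_after = count_hardware(after)

print(f"cards: {cards_before} -> {cards_after}")     # cards: 4 -> 2
print(f"cables: {cables_before} -> {cables_after}")  # cables: 12 -> 2
```

Running the same function against a customer's actual adapter inventory gives a quick before/after figure to anchor the cost discussion.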
Converged network services: Storage protocols
The typical assumption is that convergence is synonymous with Fibre Channel over Ethernet (FCoE). While FCoE is going to be one of the more popular converged protocols, it's not the only method. Convergence simply means running traditionally separate functions on a common infrastructure and typically leveraging the same cable and interface cards. For example, legacy iSCSI and NAS don't typically qualify as "converged" since they're usually put on a separate network infrastructure, which can include specialized or dedicated cards in the servers. But clearly, given the right planning, iSCSI and NAS can be converged.
To accommodate multiple traffic types on a single network adapter and cable, most data centers will need to upgrade the overall bandwidth of their infrastructure. In most cases, this means the implementation of a 10 Gb product, of which FCoE is an example. But even traditional Ethernet at 10 Gb speeds may be able to handle normal IP and storage traffic using either NAS, ATA-over-Ethernet or iSCSI.
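Whether a single 10 Gb link can absorb the combined IP and storage traffic comes down to simple capacity math. The sketch below is a rough feasibility check; the peak traffic figures and the 70% headroom target are hypothetical assumptions, and real planning should use measured peaks:

```python
# Rough check: does the combined peak of IP and storage traffic fit on
# one 10 Gb link with headroom? All traffic figures are hypothetical.

LINK_GBPS = 10.0
HEADROOM = 0.7  # assumed target: keep peak utilization under 70%

def fits_on_link(flows_gbps, link_gbps=LINK_GBPS, headroom=HEADROOM):
    """Return (fits, utilization) for the combined peak of all flows."""
    total = sum(flows_gbps.values())
    return total <= link_gbps * headroom, total / link_gbps

# Hypothetical per-class peak rates in Gbps for one server
peaks = {"ip": 1.5, "iscsi": 3.0, "nas": 1.0}
ok, util = fits_on_link(peaks)
print(f"fits: {ok}, utilization: {util:.0%}")  # fits: True, utilization: 55%
```

If the check fails for a given host, that host is a candidate for a second converged link rather than a return to separate networks.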
FCoE is getting much of the attention as the discussion of converged networking heats up. Fibre Channel dominates the shared storage market, and FCoE has the advantage of being a "pure transport" of the Fibre Channel protocol, meaning no protocol conversion is made. The converged network adapter (CNA) leverages the same drivers that standalone Fibre Channel cards do. Minimal changes need to be made to the storage infrastructure; it's simply Fibre Channel storage running across an Ethernet segment. This does not preclude FCoE from carrying IP-based protocols like NAS and iSCSI.
A 10 GbE-only infrastructure may be an acceptable alternative for an integrator to propose to a customer. This would involve using a 10 GbE NIC and then having the majority of the storage traffic be delivered via iSCSI or NAS. iSCSI is a shared storage block protocol similar to Fibre Channel, but it leverages traditional IP infrastructures to move data. One shortcoming of using a traditional 10 GbE NIC instead of an FCoE CNA is that the Fibre Channel protocol is not supported, which could be a problem for existing Fibre Channel environments. A second is that both NAS and iSCSI are IP based and therefore involve processing the TCP/IP "stack," which represents additional overhead compared with Fibre Channel. That processing adds a few microseconds of latency and, more importantly, will consume local processor resources unless a NIC with a TCP offload engine (TOE) is used. In many environments, this is not an issue -- the servers have CPU cycles to spare and top-end performance is not a requirement. But in those environments where server resources are at a premium or consistency is an absolute must, it can be an issue.
Also coming onto the converged network marketplace is ATA-over-Ethernet (AoE), which communicates straight ATA commands across Ethernet cabling. In other words, it doesn't have the IP overhead that iSCSI and NAS do. It does, however, require a special network adapter in the host, but it can be converged into the same switch. In exchange for the extra adapter, you get Fibre Channel-like performance without the resource penalty of traditional IP protocols.
Best practices for offering network convergence services
As you begin to discuss converged networking with your customers, there are a couple of points to make with them. First, it's best to move to a converged network incrementally, basically a rack at a time, but do make that move. If you know that the customer is bringing up a new rack of servers, especially if they are going to be virtualized hosts, this is an ideal time to discuss a converged IP and storage strategy with them. It will help them reduce costs and improve management and will help to elevate you to the trusted adviser status we all seek.
There are two basic components to a converged FCoE network. The first is the switch, which most commonly will be a top-of-rack device that will have converged traffic inbound and then route out to IP or storage as needed. The second component, also needed by a converged IP-only network, is a NIC. In FCoE terminology, that will be a CNA. In both cases, as mentioned earlier, look for 10 Gb minimum bandwidth. Also look for a way to control traffic so that in virtualized environments one VM does not take all the bandwidth from the others.
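The per-VM traffic control mentioned above typically means dividing link bandwidth among traffic classes by weight, the idea behind data center bridging features such as Enhanced Transmission Selection (ETS). The sketch below illustrates only the allocation concept; the class names and shares are hypothetical, not a vendor configuration:

```python
# Conceptual sketch of weighted bandwidth allocation, the idea behind
# DCB Enhanced Transmission Selection (ETS). Names and weights are
# illustrative assumptions, not a real switch or hypervisor config.

def allocate(link_gbps, weights):
    """Split link capacity among traffic classes in proportion to weight."""
    total = sum(weights.values())
    return {name: link_gbps * w / total for name, w in weights.items()}

shares = allocate(10.0, {"vm-storage": 5, "vm-app": 3, "vm-backup": 2})
for name, gbps in shares.items():
    print(f"{name}: {gbps:.1f} Gb/s")
# vm-storage: 5.0 Gb/s, vm-app: 3.0 Gb/s, vm-backup: 2.0 Gb/s
```

In a real deployment, these guarantees are enforced by the switch or CNA hardware; the point of the weights is that a busy VM is held to its share instead of starving the others.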
Network convergence services: What can't be converged
There are two key issues with convergence: one is political reality (people); the other is technology limitations. On the political front, in most cases, two teams are affected by the converged network: the networking team and the storage team. People don't like to be "converged," so expect to have two separate teams for the foreseeable future, and don't force the issue; let attrition take its course over time. On the technology front, only a limited number of tools today can manage both storage and networking infrastructures from a single software interface.
While widespread adoption of converged networks is a year or two off, getting in front of convergence is critical for integrators looking to establish themselves as trusted advisers to their customers. The lessons learned now will give the integrator an advantage as these deployments pick up steam, and it's advisable to start making that educational investment today.
About the author George Crump is president and founder of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. With 25 years of experience designing storage solutions for data centers across the United States, he has seen the birth of such technologies as RAID, NAS and SAN. Prior to founding Storage Switzerland, George was chief technology officer at one of the nation's largest storage integrators, where he was in charge of technology testing, integration and product selection.