Service provider takeaway: Though the best storage networking protocol choice for big businesses is still up in the air, storage service providers can guide their customers toward the safest protocol.
iSCSI is often portrayed as the next logical storage networking protocol for small and medium-sized businesses (SMBs). But the future for data center storage networking connectivity is not yet a foregone conclusion. The major storage networking protocols -- iSCSI, Fibre Channel over Ethernet (FCoE) and InfiniBand -- each have considerable drawbacks that make them less than ideal for large storage networks. For instance, they might require companies to deploy new network cabling, cards and gateways.
Among enterprises, FC storage area networks (SANs) are fairly well established as the preferred means of connecting high-performance servers and storage at the core of data centers, but it is estimated that as many as 80% of LAN-attached corporate servers still use direct-attached storage (DAS) in some form. As a result, your customers have to decide among iSCSI, FCoE and InfiniBand as the protocol for connecting those servers to existing corporate FC SANs. Here's what you need to know to help them make that choice.
iSCSI is emerging as the preferred storage networking protocol for SMBs and isolated departmental SANs and will probably remain so for at least the next decade. iSCSI is readily available, included as an option on most server operating systems (Linux, Unix, Windows), and requires minimal or no configuration on these OSes. It also runs on inexpensive 1 Gigabit Ethernet networks using existing Ethernet NICs and network switches. Finally, it's supported by a large number of target storage devices. Dell (EqualLogic), EMC, LeftHand Networks and NetApp, for example, provide iSCSI-compatible storage systems, while vendors such as Overland Storage and Spectra Logic provide iSCSI-compatible virtual tape library (VTL) targets.
The problem with iSCSI is that it relies upon TCP/IP as its underlying transport. TCP/IP runs over a "lossy" network -- as TCP/IP-based iSCSI packets cross an Ethernet network, packets may be dropped and require retransmission. Retransmitted packets can arrive at the target out of sequence, especially during busy periods. Congestion and out-of-order packet delivery are particularly undesirable for mission-critical applications in enterprise SANs, since those applications need the guaranteed, in-order delivery of packets that the Fibre Channel protocol provides.
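The drop-and-retransmit behavior described above can be sketched in a few lines of Python. This is a conceptual simulation, not real TCP: the five one-byte segments, the drop probability and the random seed are all invented for illustration.

```python
import random

def deliver(segments, drop_prob=0.3, seed=7):
    """Simulate lossy delivery: dropped segments are retransmitted,
    so they arrive after segments that were sent later."""
    rng = random.Random(seed)
    arrived, retransmit = [], []
    for seq, data in segments:
        if rng.random() < drop_prob:
            retransmit.append((seq, data))  # lost; resent at the end
        else:
            arrived.append((seq, data))
    return arrived + retransmit  # retransmissions arrive out of order

def reassemble(arrivals):
    """The receiver buffers segments and releases them in sequence
    order, as TCP must before handing data to the iSCSI layer."""
    return b"".join(data for _, data in sorted(arrivals))

segments = [(i, bytes([65 + i])) for i in range(5)]  # b'A'..b'E'
wire_order = deliver(segments)
print([seq for seq, _ in wire_order])  # arrival order, possibly out of sequence
print(reassemble(wire_order))          # b'ABCDE' -- restored in order
```

Every segment eventually arrives, so the reassembled stream is always intact; the cost is the buffering and delay at the receiver, which is exactly what latency-sensitive enterprise applications want to avoid.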
FCoE is designed to address this problem. Rather than relying on TCP/IP, it behaves like traditional FC, encapsulating FC frames directly in Ethernet frames and depending on an enhanced, lossless Ethernet transport to deliver them. Sending data this way eliminates the TCP/IP overhead on server initiators and storage targets while also facilitating the introduction of FCoE-to-FC gateways.
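At the frame level, the encapsulation amounts to wrapping an FC frame in an Ethernet frame whose EtherType (0x8906, the IEEE-assigned value for FCoE) steers it to the FCoE stack rather than to TCP/IP. Here is a minimal Python sketch; the MAC addresses and FC payload bytes are placeholders, and the real FCoE header also carries a version field, SOF/EOF delimiters and padding, omitted here for brevity.

```python
import struct

FCOE_ETHERTYPE = 0x8906  # IEEE-assigned EtherType for FCoE

def ethernet_header(dst_mac, src_mac, ethertype):
    """14-byte Ethernet II header: dest MAC, source MAC, EtherType."""
    return dst_mac + src_mac + struct.pack("!H", ethertype)

def encapsulate_fc_frame(fc_frame, dst_mac, src_mac):
    """Wrap a raw FC frame in an Ethernet frame (simplified sketch)."""
    return ethernet_header(dst_mac, src_mac, FCOE_ETHERTYPE) + fc_frame

fc_frame = b"\x22" + b"\x00" * 23  # placeholder FC header bytes
frame = encapsulate_fc_frame(
    fc_frame,
    b"\x0e\xfc\x00\x00\x00\x01",   # placeholder destination MAC
    b"\x00\x11\x22\x33\x44\x55",   # placeholder source MAC
)
print(frame[12:14].hex())  # '8906' -> switch hands it to the FCoE stack
```

Because the FC frame rides inside Ethernet unchanged, a gateway can strip the Ethernet header and forward the inner frame onto a native FC SAN, which is what makes FCoE-to-FC gateways straightforward.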
That does not mean VARs should rush out and recommend FCoE to their clients. FCoE requires 10 Gigabit Ethernet (10 GigE) networks, which means companies will need to upgrade some or all of their existing network infrastructure, and 10 GigE switches still cost about $2,000 per port. Only one switch can currently act as an FCoE-to-FC gateway: the Cisco Nexus 5000 Series. Beyond that, companies need converged network adapters (CNAs) from vendors such as Emulex and QLogic; those devices only recently became available, at about $1,000 each. In addition, FCoE support for server virtualization technologies such as VMware and XenServer is still nascent. Finally, there are no FCoE target storage devices (disk or tape systems) yet available, and the FCoE standard, while well along in development, has not yet been fully ratified by the ANSI T11 committee working on it.
So is InfiniBand a viable alternative to FCoE for your enterprise customers? Yes and no. The primary advantages InfiniBand offers over FCoE are that it is more mature and more widely used in high-performance computing environments; it's also more economical and already delivers higher bandwidth (40 Gbps) than Ethernet. But like FCoE, it requires companies to upgrade their network infrastructure, introduce InfiniBand-to-FC and/or InfiniBand-to-Ethernet gateways, and install InfiniBand host channel adapters (HCAs) on host servers. In addition, it has limited or no support for VMware and XenServer, and only a limited number of target storage systems (such as those from DataDirect Networks and LSI) support InfiniBand.
iSCSI shows every indication of remaining the preferred storage networking protocol among SMBs, so VARs that cater to SMBs should have little hesitation about committing resources to the protocol and recommending that their clients adopt it.
However, a tipping point is coming in the next 12 to 24 months among enterprise companies. They will need to make a critical choice about how to provide block-based network storage connectivity to the majority of their LAN-attached servers. The two factors that may ultimately drive the adoption of either FCoE or InfiniBand in enterprise environments are how quickly each offers support for server virtualization software such as VMware and XenServer and how much the connectivity costs.
As many companies virtualize their LAN-attached servers, which often use DAS, they're connecting these virtualized servers to their storage networks. This puts companies in a position where they also need to upgrade their cabling infrastructure to support FCoE or InfiniBand to connect these virtualized servers to the corporate SAN. However, the cabling option they select may depend on where server virtualization technologies like Microsoft Hyper-V, VMware and XenServer are in terms of their support for FCoE or InfiniBand. At this point, no clear long-term winner has emerged, but of the two, InfiniBand is more mature and the one that VARs should choose for now if pressed by their clients to make a choice.
About the author
Jerome M. Wendt is the founder and lead analyst of The Datacenter Infrastructure Group. You can find his blog posts at www.dciginc.com.