Storage network bandwidth planning: How to avoid network latency

Storage network bandwidth must always be taken into account to avoid network latency and maximize your clients' application performance.


Brian Peterson
Long ago we traded in the simple world of small, flat storage networking for a host of sophisticated technologies that solve more complex storage problems. When simple storage area networks (SANs) went away, so did simple SAN bandwidth planning. Wise architects are careful to avoid the common pitfalls found in newer technologies like native SAN extension, Fibre Channel over IP (FCIP) and iSCSI.

Multiswitch fabrics: Avoid bottlenecks where switches interconnect

From the beginning, people needed more ports in a fabric than were available in a single switch. Inter-Switch Links (ISLs) solved that problem and simultaneously created a new one. Multiswitch SAN designs allowed hundreds of servers to access a consolidated pool of storage, but created bottlenecks where the switches interconnect. Most switch management interfaces record average port utilization and high-watermark percentages. Pay close attention to ISLs, looking for saturation. Ensure that they don't regularly exceed an average of 50% or hit peaks of 100% utilization.
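
The check itself is simple arithmetic. Below is a minimal sketch of that comparison, assuming you have already exported each ISL's average and high-watermark utilization from the switch management interface; the ISL names and figures are hypothetical sample data.

```python
# Minimal sketch: flag ISLs whose exported utilization statistics violate the
# 50% average / 100% peak guidance above. The isl_stats dict is hypothetical
# sample data; in practice you would populate it from your switch management
# interface's utilization report.

ISL_AVG_LIMIT = 50.0    # sustained average utilization threshold (percent)
ISL_PEAK_LIMIT = 100.0  # peak (high-watermark) utilization threshold (percent)

isl_stats = {
    "switch1:port12 <-> switch2:port0": {"avg_pct": 38.0, "peak_pct": 74.0},
    "switch1:port13 <-> switch3:port0": {"avg_pct": 61.0, "peak_pct": 100.0},
}

for isl, stats in isl_stats.items():
    saturated_avg = stats["avg_pct"] > ISL_AVG_LIMIT
    saturated_peak = stats["peak_pct"] >= ISL_PEAK_LIMIT
    if saturated_avg or saturated_peak:
        print(f"WARNING: {isl} avg={stats['avg_pct']}% peak={stats['peak_pct']}% "
              "- consider adding ISLs or rebalancing traffic")
```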

High-end SAN switches also used to offer full line-rate, non-blocking ports. This meant that you could plug your server into any port and have full-speed access to any other port on the switch. The downside was that switches built this way have a hard time scaling to large port counts. Fortunately, some manufacturers, such as Cisco, can now cram many more ports into one switch than was previously imaginable. Beware: not all of these ports are the same. For example, on the Cisco MDS 48-port Gen 2 module, there is not enough bandwidth to let every port operate at a full 4 Gbps simultaneously. Individual ports can be forced up or down to the desired speed, so be careful which ports get the bandwidth.
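
A quick way to see the problem is to compare a port group's aggregate demand against the bandwidth it actually shares. The sketch below does that arithmetic; the 12-ports-per-group and 12.8 Gbps figures are illustrative assumptions, so check the data sheet for the exact module you plan to deploy.

```python
# Back-of-the-envelope oversubscription check for a shared-bandwidth port group.
# The figures below (12 ports sharing 12.8 Gbps) are illustrative assumptions;
# consult the vendor data sheet for the actual module you are deploying.

ports_per_group = 12          # ports that share one port group's forwarding capacity
shared_bandwidth_gbps = 12.8  # bandwidth available to the whole port group
port_speed_gbps = 4.0         # negotiated speed of each port

demand_gbps = ports_per_group * port_speed_gbps
oversubscription = demand_gbps / shared_bandwidth_gbps

print(f"Aggregate demand: {demand_gbps:.1f} Gbps")
print(f"Oversubscription ratio: {oversubscription:.1f}:1")
# ~3.8:1 here - fine for lightly loaded hosts, risky for ISLs or busy storage ports.
```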

Extended fabric with native Fibre Channel: Improve link utilization over long distance

Shortly after SANs came along, we wanted to connect distant data centers and haul the packets over longer distances. Long-wave fiber and specialized channel extenders made it all possible. In perfect conditions, it takes 2 ms to send a Fibre Channel (FC) frame 100 km and back. If each I/O waits in line for the previous I/O to complete, the link's bandwidth capability may never be fully utilized, yet performance is still poor. When long-haul FC communication is required, it is necessary to add buffer-to-buffer (B2B) credits to the ports on each side of the link so multiple I/Os can be on the wire at once. This does not reduce the latency of a single packet, but it improves overall link utilization and throughput.
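
Sizing those credits is straightforward arithmetic: keep enough frames in flight to cover one round trip. The sketch below works the numbers, assuming roughly 5 microseconds per kilometer of one-way propagation in fiber and full-size (~2112-byte) FC frames; treat the result as a starting point and confirm it against the switch vendor's extended-fabric guidelines.

```python
import math

# Rough buffer-to-buffer credit sizing for a long-haul FC link.
# Assumptions: ~5 us/km one-way propagation in fiber, full-size ~2112-byte
# FC frames; treat the result as a starting point, not a vendor-validated figure.

def b2b_credits(distance_km: float, link_gbps: float,
                frame_bytes: int = 2112, us_per_km: float = 5.0) -> int:
    round_trip_s = 2 * distance_km * us_per_km / 1_000_000
    frame_time_s = (frame_bytes * 8) / (link_gbps * 1_000_000_000)
    # Credits needed to keep the pipe full: frames in flight over one round trip.
    return math.ceil(round_trip_s / frame_time_s)

print(b2b_credits(distance_km=100, link_gbps=4))   # ~237 credits
```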

Extended fabric with FCIP: Understand IP latency effects on Fibre Channel networks

Storage technology has more recently become available to extend SAN distances beyond traditional FC capability without dedicated network links. Fibre Channel over IP protocols like iFCP and FCIP are found in many high-end switches today. They blend the flexibility and long-haul capability of IP with the simple stability of FC. IP latency is the number one enemy here. Architects must understand the parameters and effects of IP latency on FC networks. Consider this excerpt from IP Storage Networking: Straight to the Core. While not covered directly in the article, it is important to understand that the FCIP protocol has some conversion overhead and may add additional latency at each end. Consult the FC over IP hardware vendors for specifics.
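
The effect of that latency is easy to quantify with a rough model: with a fixed number of I/Os outstanding, throughput can never exceed the data in flight divided by the round-trip time, no matter how fast the tunnel is. The figures in the sketch below (8 KB I/Os, 20 ms RTT, a ~155 Mbps link) are purely illustrative.

```python
# Illustrative estimate of effective throughput across an FCIP tunnel.
# With a fixed number of outstanding I/Os, throughput is bounded by
# outstanding_ios * io_size / round_trip_time, regardless of link speed.

def effective_throughput_mbps(outstanding_ios: int, io_kb: float,
                              rtt_ms: float, link_mbps: float) -> float:
    per_rtt_bits = outstanding_ios * io_kb * 1024 * 8
    latency_bound_mbps = per_rtt_bits / (rtt_ms / 1000) / 1_000_000
    return min(latency_bound_mbps, link_mbps)

# 8 KB I/Os, one outstanding at a time, 20 ms RTT, ~155 Mbps tunnel:
print(effective_throughput_mbps(1, 8, 20, 155))    # ~3.3 Mbps
# Raising the queue depth to 32 recovers most of the link:
print(effective_throughput_mbps(32, 8, 20, 155))   # ~105 Mbps
```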

iSCSI: Beware I/O-intensive workloads

It is no surprise to storage professionals that iSCSI is booming. Many late adopters and small-scale storage network consumers find iSCSI to be a beacon of hope in the complex land of Fibre Channel storage. The simplicity and cost-effectiveness of this technology bring affordable and workable solutions to the masses, but it too has limitations and pitfalls.

When iSCSI hosts access a small number of targets, or have very limited I/O throughput demands, standard NICs do just fine. However, I/O-intensive workloads or multiple iSCSI targets on a standard NIC put a significant burden on the server's CPU and add latency to the hosted application's disk service times. iSCSI HBAs, also known as TCP Offload Engine (TOE) cards, offload the Ethernet and SCSI packet processing from the main CPU, preserving server cycles for the application and keeping disk response times low.
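
To gauge when a TOE card pays off, a rough estimate of the software-iSCSI CPU burden is enough. The cycles-per-I/O figure in the sketch below is a loose assumption for illustration only; measure the customer's actual hosts before making the call.

```python
# Rough sense of the CPU burden software iSCSI places on a standard NIC.
# The cycles-per-I/O figure is a loose illustrative assumption; measure your
# own hosts (e.g. with OS performance counters) before sizing for a customer.

cpu_ghz = 3.0            # single-core clock, in GHz
cycles_per_io = 60_000   # assumed TCP/IP + iSCSI processing cost per I/O
iops = 20_000            # sustained I/O rate of the workload

cpu_fraction = (cycles_per_io * iops) / (cpu_ghz * 1_000_000_000)
print(f"~{cpu_fraction:.0%} of one core spent on iSCSI protocol processing")
# At this load roughly 40% of a core goes to the storage stack alone -
# the point where an iSCSI HBA/TOE card starts to pay for itself.
```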

Even with an iSCSI TOE card installed, high network latency or congestion can increase iSCSI disk service times. Anything beyond light-duty I/O requires dedicated network segments or Quality of Service (QoS) enabled in the IP network. Congestion can ruin your iSCSI day.

Storage network connectivity options are broadening constantly. As the features expand, so too will the hurdles. Storage channel professionals are well served by staying aware of the most common storage network bandwidth pitfalls.

About the author: Brian Peterson is an independent IT Infrastructure Analyst. He has a deep background in enterprise storage and open systems computing platforms. A recognized expert in his field, he held positions of great responsibility on both the supplier and customer sides of IT.


This was first published in May 2007
