Multiswitch fabrics: Avoid bottlenecks where switches interconnect
From the beginning, people needed more ports in a fabric than a single switch could provide. Inter-Switch Links (ISLs) solved that problem and simultaneously created a new one. Multiswitch SAN designs allow hundreds of servers to access a consolidated pool of storage, but create bottlenecks where the switches interconnect. Most switch management interfaces record average port utilization and high-watermark percentages. Pay close attention to ISLs, looking for saturation: make sure they don't regularly exceed an average of 50% utilization or hit peaks of 100%.
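As a sketch, that 50%-average / 100%-peak rule can be checked automatically against exported port statistics. The data layout and ISL names below are assumptions for illustration; real figures would come from your switch's management interface.

```python
# Flag ISLs whose utilization suggests saturation, per the rule of thumb:
# average above 50% or peaks hitting 100%.

def saturated_isls(stats, avg_limit=50.0, peak_limit=100.0):
    """Return names of ISLs breaching either threshold.

    stats: dict mapping ISL name -> (average %, high-watermark %).
    """
    return [name for name, (avg, peak) in stats.items()
            if avg > avg_limit or peak >= peak_limit]

port_stats = {
    "isl_1-2": (35.0, 80.0),   # healthy
    "isl_2-3": (62.0, 95.0),   # average too high
    "isl_3-4": (40.0, 100.0),  # hitting peak saturation
}

print(saturated_isls(port_stats))  # ['isl_2-3', 'isl_3-4']
```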
High-end SAN switches have historically offered full line-rate, non-blocking ports, meaning you could plug a server into any port and get full-speed access to any other port on the switch. The downside is that switches built this way are hard to scale. Some manufacturers, such as Cisco, can now cram far more ports into one switch than was previously imaginable, but beware: not all of those ports are equal. On the Cisco MDS 48-port Gen2 module, for example, there is not enough backplane bandwidth for every port to run at a full 4 Gbps simultaneously. Individual ports can be forced up or down to a desired speed, so be careful which ports get the bandwidth.
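The mismatch between port speed and shared backplane bandwidth is usually expressed as an oversubscription ratio. A minimal sketch, using illustrative numbers (12 ports of 4 Gbps sharing 12 Gbps of backplane bandwidth is an assumption for the example, not the actual MDS Gen2 specification):

```python
# Oversubscription ratio for a shared-bandwidth port group.
# A ratio of 1.0 means fully non-blocking; anything higher means
# the ports can collectively demand more than the backplane delivers.

def oversubscription(ports, port_gbps, shared_gbps):
    """Ratio of aggregate port demand to available shared bandwidth."""
    return (ports * port_gbps) / shared_gbps

ratio = oversubscription(ports=12, port_gbps=4, shared_gbps=12)
print(f"{ratio:.0f}:1 oversubscribed")  # 4:1 oversubscribed
```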
Extended fabric with native Fibre Channel: Improve link utilization over long distance
Shortly after SANs came along, we wanted to connect distant data centers and haul frames over longer distances. Long-wave optics and specialized channel extenders made it possible. Even in perfect conditions, light in fiber travels at roughly 5 microseconds per kilometer, so a frame sent 100 km and back incurs about 1 ms of propagation delay, and a SCSI write, which requires two round trips, about 2 ms. If each I/O waits in line for the previous I/O to complete, the link's bandwidth may never be fully utilized, yet performance is still poor. When long-haul Fibre Channel (FC) communication is required, add buffer-to-buffer (B2B) credits to the ports on each side of the link so multiple frames can be on the wire at once. This does not reduce the latency of a single frame, but it improves overall link utilization and throughput.
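The number of B2B credits needed to keep a long link full can be estimated from propagation delay and frame serialization time. The constants below (roughly 5 microseconds per km in fiber, 2,148-byte full-size FC frames, ~400 MB/s of effective data rate for 4-Gbps FC) are common rules of thumb, not a vendor formula; check your switch documentation for exact sizing.

```python
import math

FIBER_DELAY_S_PER_KM = 5e-6   # ~5 microseconds per km in glass
FC_FRAME_BYTES = 2148         # max FC frame: 2,112-byte payload + overhead

def bb_credits_needed(distance_km, effective_bytes_per_s):
    """Credits required to keep full frames in flight for a round trip."""
    round_trip_s = 2 * distance_km * FIBER_DELAY_S_PER_KM
    serialization_s = FC_FRAME_BYTES / effective_bytes_per_s
    return math.ceil(round_trip_s / serialization_s)

# 100 km at 4-Gbps FC (~400 MB/s of data)
print(bb_credits_needed(100, 400e6))  # 187
```

The result lands close to the familiar "about two credits per kilometer at 4 Gbps" rule of thumb.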
Extended fabric with FCIP: Understand IP latency effects on Fibre Channel networks
Storage technology has more recently made it possible to extend SAN distances beyond traditional FC reach without dedicated network links. Fibre Channel over IP protocols such as iFCP and FCIP are found in many high-end switches today. They blend the flexibility and long-haul capability of IP with the simple stability of FC. IP latency is the number one enemy here, and architects must understand its parameters and its effects on FC networks. Consider this excerpt from IP Storage Networking: Straight to the Core. While not covered directly in the article, it is important to understand that the FCIP protocol has some conversion overhead and may add latency at each end; consult the FC-over-IP hardware vendors for specifics.
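One concrete way to see IP latency's effect on a tunneled FC link is the classic TCP window bound: a single TCP connection cannot sustain more throughput than its window size divided by the round-trip time, regardless of link capacity. The window and RTT values below are illustrative, not measurements from any particular product.

```python
def max_tcp_throughput_mbps(window_bytes, rtt_s):
    """Upper bound on one TCP connection: window / RTT, in megabits/s."""
    return window_bytes * 8 / rtt_s / 1e6

# A default 64 KiB window across a 20 ms WAN round trip caps the
# tunnel at roughly 26 Mbit/s, no matter how fat the pipe is.
print(round(max_tcp_throughput_mbps(64 * 1024, 0.020), 1))  # 26.2
```

This is why FCIP gateways lean on large windows and multiple connections to fill long-haul links.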
iSCSI: Beware I/O-intensive workloads
It is no surprise to storage professionals that iSCSI is booming. Many late adopters and small-scale storage network consumers find iSCSI a beacon of hope in the complex land of Fibre Channel storage. The simplicity and cost-effectiveness of the technology bring affordable, workable solutions to the masses, but it, too, has limitations and pitfalls.
When iSCSI hosts access a small number of targets, or have very limited I/O throughput demands, standard NICs do just fine. However, I/O-intensive workloads or multiple iSCSI targets on a standard NIC put a significant burden on the server's CPU and add latency to the hosted application's disk service times. iSCSI HBAs, which incorporate a TCP Offload Engine (TOE), move the Ethernet and SCSI packet processing off the main CPU, preserving server cycles for the application and keeping disk response times low.
Even with an iSCSI TOE card installed, high network latency or congestion can inflate iSCSI disk service times. Anything beyond light-duty I/O pushed over the IP network calls for dedicated network segments or Quality of Service (QoS) enabled in the IP network. Congestion can ruin your iSCSI day.
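The cost of that congestion can be sketched with simple arithmetic: with one outstanding I/O at a time, each operation must wait out the disk's service time plus the network round trip, so added latency translates directly into lost IOPS. The service-time and RTT figures below are illustrative assumptions.

```python
def serial_iops(disk_service_ms, network_rtt_ms):
    """Max I/Os per second when only one I/O is outstanding at a time."""
    return 1000 / (disk_service_ms + network_rtt_ms)

quiet = serial_iops(disk_service_ms=5.0, network_rtt_ms=0.2)      # quiet LAN
congested = serial_iops(disk_service_ms=5.0, network_rtt_ms=5.0)  # congested LAN
print(round(quiet), round(congested))  # 192 100
```

Nearly half the per-stream throughput disappears before the array or the server does anything wrong, which is why dedicated segments or QoS matter.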
Storage network connectivity options are broadening constantly. As the features expand, so too will the hurdles. Storage channel professionals are well served by staying aware of the most common storage network bandwidth hurdles.
About the author: Brian Peterson is an independent IT Infrastructure Analyst. He has a deep background in enterprise storage and open systems computing platforms. A recognized expert in his field, he held positions of great responsibility on both the supplier and customer sides of IT.