Network troubleshooting to fix storage performance problems

Learn about three storage performance problems -- one involving NAS replication, one an EMC array and one an Exchange server environment -- that were fixed with network troubleshooting.


How many times have you or your customer had a storage performance problem, or had something simply break, and asked the networking people to look into it, only to be told, "It's not a network issue"? Well, they're not always right. Storage systems rely on the network in one way or another. EMC DMX SRDF synchronous data replication, NetApp MetroCluster node-to-node communication, data replication, iSCSI and NDMP backups all depend on the network being stable and functioning. But sometimes it isn't, and network troubleshooting is required to fix the problem.

And even if you know in your gut that the network is the source of a storage performance problem, the network is generally innocent until proven guilty. Because of that, you can spend hours -- or weeks -- of your time trying to prove that a storage performance problem really is caused by the network. At one customer site, I was replicating data from a NAS device in Canada to Midland, Mich., and the data transmission would progress to a certain point but then abort. There was a very intelligent and knowledgeable WAN engineer working with me on the problem. I spent several weeks trying to prove to him that the problem was with the WAN and not the NAS. I was finally proved right, but getting to that point was a big effort involving a significant amount of network troubleshooting.

Here's how things unfolded: In Midland, we had several dozen NAS replication relationships coming into the site without any problems. Yet, transmission from one particular site in Canada consistently aborted before it reached Midland. The WAN engineer pointed the finger (the index finger, not the middle one) at the NAS appliance. I couldn't prove him wrong, but I really doubted that the NAS appliance was causing the problem.

To investigate the problem, the WAN engineer looked at the switch that the NAS appliance was connected to and saw the NAS appliance's MAC address on two switches. The NAS appliance had a merged network card. (A merged network card combines two physical network cards into one virtual interface; the two interfaces can be aggregated for throughput or run in an active/passive configuration. In this case, we had an active/passive configuration.) The WAN engineer concluded that this was the source of the storage performance problem. I'd seen in the past how merged network technology creates a virtual MAC address that floats from one interface to another, which may have explained why the WAN engineer saw the same MAC address on two switches. To fix the issue, he asked me to swap the roles of the network interfaces, making the active card the passive one and vice versa. We did that, but the data replication problem persisted.
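To picture why the same MAC address can legitimately show up on two switches, here is a minimal sketch -- plain Python, purely illustrative; the switch names, ports, MAC address and aging behavior are assumptions, not details from that environment -- of an active/passive team's virtual MAC moving on failover while the old switch still holds a stale entry:

```python
# Illustrative only: why an active/passive "merged" NIC can leave the same
# virtual MAC visible on two switches at once. After a failover the new switch
# learns the MAC immediately, while the old switch keeps a stale entry until
# its MAC address table ages it out.

VIRTUAL_MAC = "02:a0:98:00:00:01"               # hypothetical virtual MAC shared by the team
mac_tables = {"switch_A": {}, "switch_B": {}}   # switch -> {MAC: port}

def frame_seen(switch, port, mac):
    """A switch learns (or refreshes) the source MAC on the ingress port."""
    mac_tables[switch][mac] = port

# The active interface hangs off switch_A; its traffic teaches switch_A the MAC.
frame_seen("switch_A", "gi0/1", VIRTUAL_MAC)

# Failover: the passive interface (cabled to switch_B) takes over the virtual MAC.
frame_seen("switch_B", "gi0/7", VIRTUAL_MAC)

# Until switch_A's entry ages out, both switches report the same MAC address.
for switch, table in mac_tables.items():
    if VIRTUAL_MAC in table:
        print(f"{switch} sees {VIRTUAL_MAC} on port {table[VIRTUAL_MAC]}")
```

A duplicate entry like that looks suspicious, but as the rest of the story shows, it wasn't the cause of the aborted transfers.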

The next step in the network troubleshooting process was to replicate the data locally, to see if the problem appeared with the WAN out of the mix. Since the filer I was working with was a cluster, I replicated one volume from one filer to the other. This worked. I then started the data replication again from Canada to Midland and ran a packet sniff from the NAS appliance. The resulting packet trace file made a handy test payload: copying it from Midland to the Canadian site worked without any problem, but when I tried copying it back, the transfer aborted and never completed. This was very interesting.

I then copied the packet trace within the site in Canada from one server to another, without a problem. However, when I tried copying it again from Canada to the NAS appliance in Midland, it aborted again. I then tried copying the file from Canada to another host in Midland, and that also failed. As a final test, I copied the file from the NAS appliance in Midland to another host in Midland, and I had no problem.
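Those copy tests are really just direction-by-direction throughput probes. If you don't have a convenient file to push around, a small script along these lines does the same job -- this is a hypothetical sketch in Python, not what was run at that customer. Run the receiver at one site and the sender at the other, then swap the roles to test the opposite direction:

```python
# Minimal one-direction throughput probe (hypothetical sketch).
import socket, sys, time

CHUNK = 64 * 1024          # 64 KB per send/recv
TOTAL = 256 * 1024 * 1024  # move 256 MB per test

def receiver(port=5001):
    srv = socket.socket()
    srv.bind(("", port))
    srv.listen(1)
    conn, addr = srv.accept()
    got, start = 0, time.time()
    while got < TOTAL:
        data = conn.recv(CHUNK)
        if not data:
            break            # sender gave up early -- mirrors an aborted copy
        got += len(data)
    secs = time.time() - start
    print(f"received {got / 1e6:.0f} MB from {addr[0]} at {got / 1e6 / secs:.1f} MB/s")

def sender(host, port=5001):
    conn = socket.create_connection((host, port))
    payload = b"\0" * CHUNK
    sent, start = 0, time.time()
    while sent < TOTAL:
        conn.sendall(payload)
        sent += CHUNK
    conn.close()
    secs = time.time() - start
    print(f"sent {sent / 1e6:.0f} MB at {sent / 1e6 / secs:.1f} MB/s")

if __name__ == "__main__":
    # "receiver" on one site, "sender <receiver-ip>" on the other, then swap.
    receiver() if sys.argv[1] == "receiver" else sender(sys.argv[2])
```

A transfer that completes cleanly in one direction and aborts in the other points at the path, not at the endpoints.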

I showed the results to the WAN engineer and he was shocked to see the strange anomaly. From Midland to Canada, we had no issue, yet from Canada to Midland we saw a problem. So the problem was caused by a network issue! For a temporary fix, he rerouted the traffic from Canada to Midland through a different WAN interface and data flowed properly once again.

There are other examples of network issues causing problems with storage performance. For instance, in one large EMC environment, users complained that write operations were experiencing latency. The quick solution was always to split the SRDF link. When a host writes to an EMC array whose device is connected via SRDF to a remote array, both arrays need to acknowledge the write operation before the host gets the acknowledgement. When the write operation gets sent to the remote array, it traverses an extended fabric connected by multiple ISLs through a DWDM switch. Splitting the SRDF link is only a quick fix, though, and while TCP/IP can ride out trouble on a link like that, with Fibre Channel it's not so easy, since Fibre Channel is not tolerant of packet loss the way TCP/IP is.
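A quick back-of-the-envelope calculation shows why splitting the link always made the complaints go away. The millisecond figures below are assumptions for illustration, not measurements from that environment:

```python
# Why synchronous SRDF makes every host write pay for the extended fabric:
# the acknowledgement waits on the local write AND the round trip to the
# remote array. All latency values are assumed, for illustration only.

local_write_ms    = 0.5   # assumed local array service time per write
one_way_fabric_ms = 2.5   # assumed one-way latency across the DWDM-extended fabric
remote_write_ms   = 0.5   # assumed remote array service time per write

sync_write_ms  = local_write_ms + 2 * one_way_fabric_ms + remote_write_ms
split_write_ms = local_write_ms   # after splitting the SRDF link

for label, ms in (("synchronous SRDF", sync_write_ms), ("link split", split_write_ms)):
    print(f"{label:>16}: {ms:.1f} ms per write, ~{1000 / ms:.0f} single-threaded write IOPS")
```

Every synchronous write pays for that round trip, so any extra latency or congestion on the ISLs lands directly on the application's write response time.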

In another situation, I got a call from a friend, a consultant working with Exchange at a customer site. People were complaining to him that the Exchange server was too slow. The Exchange server had LUNs presented to it via iSCSI over a gigabit interface. The storage array also had gigabit interfaces, yet he was only able to write to the array at a rate of 5 MBps -- he should have easily seen 10 to 15 times that amount. However, the customer had the exact same combination of array and Exchange working in a different data center, and everything was working fine. My consultant friend asked the network people to help troubleshoot the problem, but they kept denying it was a network issue. I suggested that he connect the server directly to the array, bypassing all network switches, and try writing to the array (this was a small environment, so he could do that). Sure enough, he got the throughput he was looking for and proved that the storage performance problem truly was caused by a network issue.
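Putting the numbers from that case side by side makes the mismatch hard to ignore (simple arithmetic using the figures above; nothing here is vendor-specific):

```python
# Sanity check: what a gigabit iSCSI path should deliver vs. what the
# Exchange server was actually seeing.

line_rate_MBps = 1_000_000_000 / 8 / 1e6   # 1 Gb/s expressed in MB/s = 125
observed_MBps  = 5                          # what the Exchange host measured
expected_low, expected_high = 50, 75        # 10 to 15 times the observed rate

print(f"gigabit line rate : {line_rate_MBps:.0f} MB/s")
print(f"reasonable target : {expected_low}-{expected_high} MB/s after protocol overhead")
print(f"observed          : {observed_MBps} MB/s "
      f"({observed_MBps / line_rate_MBps:.0%} of line rate)")
```

Writing at roughly 4% of line rate, with the identical setup healthy in another data center, is exactly the kind of gap that a direct-connect test can pin on the switches in between.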

The bottom line is that even in storage, we're at the mercy of the network people. When end users complain that there is latency on a computer host and they think it is a storage problem, it could very well be a network issue. Latency complaints are so difficult to address because there are so many pieces to the puzzle; having a deep understanding of how products work will help you pinpoint the problem. The network is the plumbing to everything, and if the pipe is clogged, even if you have a multimillion-dollar DMX array serving up 10 gazillion IOPS, the array performance can suffer.

About the author

Seiji Shintaku is an independent consultant who has focused on post-sales support and delivery for RBS, Merrill Lynch, Credit Suisse First Boston, IBM and Morgan Stanley. Seiji can be contacted via email or LinkedIn.


 

This was first published in July 2010