NAS server sizing and configuration factors

Find out what questions to ask customers when helping them configure a NAS server, including the number and speed of disk drives the hardware will support, whether SSDs will be used and users' I/O demands.

Within a NAS system, there are four basic parts. The most obvious from a footprint perspective is the physical storage, while the most obvious from a user perspective is the NAS software. Then there are the I/O interface to the network and the often-forgotten NAS server. The NAS server is the engine that makes all the other components work. If it's improperly configured, the performance of the overall system may suffer. Getting the right configuration, then, is key to a NAS system sale.

In helping a customer size a NAS server with the appropriate power and performance, there are a few considerations. First, find out how many disk drives the system will need to support and how fast those drives are. Assuming there is enough continuous storage access to keep all, or at least most, of the devices moving data at their fastest speed, can the NAS server move all that data between the storage and the network? If storage I/O demands are high, or if there are simply too many drives to support, can the server deliver? Inbound I/O requests must be heavy enough to justify the investment in a high-performance network card, which in turn requires a faster processor to drive traffic through the system and onto storage.
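As a rough back-of-the-envelope check, you can compare the aggregate throughput of the drives against what the server's network interfaces can carry. The drive counts, per-drive speeds and NIC efficiency factor below are illustrative assumptions, not vendor specifications:

```python
def aggregate_disk_throughput_mbps(drive_count, per_drive_mbps):
    """Best-case aggregate throughput if every drive streams at full speed."""
    return drive_count * per_drive_mbps

def nic_throughput_mbps(nic_count, nic_gbps, efficiency=0.8):
    """Usable NIC throughput in MB/s, discounting protocol overhead.

    The 0.8 efficiency factor is an assumption, not a measured value.
    """
    return nic_count * nic_gbps * 1000 / 8 * efficiency

# Illustrative example: 24 drives at 150 MB/s vs. two 10 GbE ports.
disks = aggregate_disk_throughput_mbps(24, 150)  # 3600 MB/s
nics = nic_throughput_mbps(2, 10)                # 2000 MB/s
bottleneck = "network" if nics < disks else "storage"
```

With these assumed numbers the network, not the storage, is the limiting factor, which is exactly the kind of imbalance the server configuration has to account for.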

This is a special concern if the storage system is going to leverage solid-state drive (SSD) technology to any degree. SSDs alone can deliver I/O at a pace faster than many servers can generate or process it. Assuming the inbound I/O requirements warrant it, a NAS server with SSDs as part of the configuration will need additional processing power and RAM.
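The gap between hard drives and SSDs is easiest to see in aggregate IOPS. The per-device figures below are rough, commonly cited orders of magnitude used purely for illustration:

```python
def array_iops(device_count, iops_per_device):
    """Best-case aggregate IOPS; real arrays lose some to RAID and
    controller overhead, so treat this as an upper bound."""
    return device_count * iops_per_device

# Illustrative figures only: a 15K RPM hard drive delivers on the order
# of 180 random IOPS, while even a modest SSD can deliver tens of thousands.
hdd_array = array_iops(24, 180)    # 4,320 IOPS from 24 hard drives
ssd_array = array_iops(4, 50_000)  # 200,000 IOPS from just 4 SSDs
```

A server with the CPU and RAM to keep up with the hard drive array would be overwhelmed by the far smaller SSD array, which is why adding SSD usually means upsizing the server as well.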

Several other factors can influence the size and configuration of the system: user I/O demands and the speed of the network and interface cards that connect users to the NAS server. Even a few performance-demanding workstations can generate enough of an I/O burden to warrant higher-speed network I/O and further justify high-performance back-end storage devices such as SSDs. A high I/O burden can also arise without any such systems -- for instance, from hundreds, if not thousands, of users all accessing the same NAS at the same time. All of this demand justifies a higher-speed NIC in the NAS server and requires a more powerful server to process the increasingly fast I/O.
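Both scenarios can produce a similar aggregate load on the NAS server. A minimal sketch, using assumed per-user demand figures rather than measured ones:

```python
def user_io_load_mbps(concurrent_users, avg_mbps_per_user):
    """Aggregate client-side demand in MB/s; per-user figures are assumptions."""
    return concurrent_users * avg_mbps_per_user

# A handful of heavy workstations and a large population of light users
# can impose comparable loads on the NAS server:
workstations = user_io_load_mbps(5, 200)   # 1000 MB/s from 5 heavy clients
office_users = user_io_load_mbps(1000, 1)  # 1000 MB/s from 1,000 light clients
```

Either way, the NIC and processor have to be sized for the aggregate demand, not for the client count.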

The next factor that drives the NAS server configuration is the efficiency of the NAS software. Software efficiency is very hard to determine, but in general, the more specific the software is to a task, the better it is at that task. The most practical way to test NAS software efficiency is to place it on a minimally configured system and measure how it performs: the more efficient the software, the less hardware it needs to deliver a given level of performance.

Next comes the issue of software features: What is the software asking the server to do in addition to basic file serving? Most NAS systems have become so feature-laden that the software services have begun to affect data delivery performance. The very features that made NAS so widely accepted, such as snapshots, RAID protection and replication, may also require additional performance from the NAS server to drive those features while maintaining adequate throughput. In moderate-use environments, today's midrange processors can keep up with the software burden plus the I/O to users. If the software burden increases -- for example, when managing automated tiered storage -- the processor and memory in the NAS server need to grow with the burden being placed on them.

The final set of factors relates to whether you're building a do-it-yourself NAS solution for a customer or recommending a pre-integrated one. If you are configuring your own NAS server with NAS software and your own storage, you have more direct control over which server is used and how it is configured. As the integrator, you control how much processing power, network bandwidth and storage I/O the system is going to have. Since the cost is likely to be lower than with a pre-integrated system, you have more flexibility to overshoot the performance mark a bit and give your customer a little cushion. As always, the best policy is to be up front with your customers and warn them that you are aiming high. Get them to agree that an idle NAS system is better than a saturated one.

If you get an integrated NAS system from a manufacturer, on the other hand, you may have fewer configuration choices but potentially better support. Most NAS providers offer a range of options for consideration; the challenge is matching a system to the customer's needs. Whereas in the DIY scenario you can more easily build a more powerful system than a customer currently needs, because the cost of a DIY system is lower than that of a turnkey one, aiming too high in the turnkey space can be very expensive. Many VARs therefore recommend a more modest system to meet current budgets. But be very careful that the customer is clear on the limitations of the system and on what they are giving up by not buying a higher-end one: Most importantly, they might need to upgrade sooner than they'd like.

About the author

George Crump is president and founder of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. With 25 years of experience designing storage solutions for data centers across the United States, he has seen the birth of such technologies as RAID, NAS and SAN. Prior to founding Storage Switzerland, George was chief technology officer at one of the nation's largest storage integrators, where he was in charge of technology testing, integration and product selection.
