Server virtualization is driving many IT infrastructure decisions, including what kind of shared storage platform to deploy. The SAN vs. NAS storage debate, which raged in the first part of this decade, has resurfaced in the virtual server environment (VSE), for some of the same reasons. Shared resources like storage have always made intuitive sense in a multi-host environment, and even more so when the number of virtual machine (VM) server instances explodes, as is happening in many organizations. For a VAR considering storage and virtualization for a customer, the question becomes which storage platform to recommend when you could offer either a file- or a block-based solution.
In this article, we'll examine some of the characteristics of SAN, or block-level, storage (Fibre Channel and iSCSI) and NAS, or file-level, storage (NFS) systems and some of the advantages that proponents of these technologies claim. We'll also discuss what a storage VAR can do when faced with an opportunity to present a consolidated storage system to a customer with a VSE. In researching this topic, I consulted experts from BlueArc, 3PAR, NetApp and EMC.
Storage requirements for VMs
Virtual server environments generate some unique storage requirements compared with traditional physical servers. Potentially most challenging are the multiple I/O streams that the virtualization host must manage, as each VM on that host supports an application (maybe several), each with its own varying I/O demands. This is sometimes referred to as the "I/O blender."
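The "I/O blender" effect can be illustrated with a small sketch. This is a conceptual simulation, not VMware code: each hypothetical VM issues a sequential stream of block addresses, but the hypervisor interleaves the streams, so the array sees a nearly random access pattern.

```python
import random

random.seed(42)  # make the interleaving reproducible

def vm_stream(vm_id, start, length):
    """One VM's sequential I/O: consecutive block addresses."""
    return [(vm_id, start + i) for i in range(length)]

# Four VMs, each writing 8 consecutive blocks in its own region.
streams = [vm_stream(vm, vm * 10_000, 8) for vm in range(4)]

# Model the hypervisor servicing whichever VM has work pending
# by randomly draining the per-VM queues.
blended = []
while any(streams):
    s = random.choice([s for s in streams if s])
    blended.append(s.pop(0))

# Count how much sequentiality survives the blending.
sequential_pairs = sum(
    1 for a, b in zip(blended, blended[1:]) if b[1] == a[1] + 1
)
print(f"{len(blended)} I/Os issued, "
      f"{sequential_pairs} adjacent pairs still sequential")
```

Even though every VM wrote purely sequentially, only a fraction of adjacent I/Os remain sequential after interleaving, which is why per-VM locality matters less to the array than aggregate random-I/O performance.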
The ease with which new VMs are set up can result in VSEs having a larger number of server instances than would be the case in an all-physical server environment. Along with faster server setup, there's also a need to provision resources, like storage, for VMs quickly as they're created and to allocate those resources effectively to keep up with this more dynamic compute environment. Data protection and the overall management of a virtualized environment can be more complex as well, especially if the backup applications and monitoring tools aren't designed specifically to deal with the virtual server environment.
These characteristics can have an impact on the storage infrastructure chosen to support a VSE. In general, storage must be more scalable to support a certain degree of VM sprawl that usually occurs, and it should have strong provisioning and reclamation features to support the dynamic allocation and decommissioning of VMs. Fast and functional administration tools can also help IT keep up with the simultaneous changes in resource allocation that accompany a large number of virtual servers.
File system requirements for storage and virtualization
VMware (as well as other virtualization vendors) encapsulates all the data files and configuration information for each server instance into a few files or images, which VMware calls VMDK files. This encapsulation helps the physical host server keep its VMs separated and provides the portability that makes server virtualization so flexible. From a storage perspective, this means every VM needs a file system to hold its images, regardless of the applications running on the VM. If a NAS system is used, the host ESX servers simply put these image files into the NFS file system that the NAS implements. If block storage is used, VMware provides its own file system, called VMFS.
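The per-VM file layout can be sketched as follows. The file extensions (.vmx for configuration, .vmdk for virtual disks, .nvram for firmware state) follow VMware's documented conventions; the datastore and VM names and the `vm_files` helper are purely illustrative. The point is that the layout is the same whether the datastore sits on VMFS or NFS, since ESX presents both under `/vmfs/volumes`.

```python
# Hypothetical layout of one VM's files in a datastore directory.
def vm_files(datastore, vm_name, disks=1):
    base = f"/vmfs/volumes/{datastore}/{vm_name}"
    files = [f"{base}/{vm_name}.vmx",      # VM configuration
             f"{base}/{vm_name}.nvram"]    # BIOS/firmware state
    for i in range(disks):
        suffix = "" if i == 0 else f"_{i}"
        files.append(f"{base}/{vm_name}{suffix}.vmdk")  # virtual disk
    return files

# Same directory structure regardless of the backing storage;
# only the datastore type (VMFS or NFS) differs underneath.
for path in vm_files("nfs-datastore01", "web-vm", disks=2):
    print(path)
```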
NAS proponents tout the simplicity of their architecture: there's no need to bother with LUN operations or SAN switches (although most do set up a dedicated LAN to support the NAS) or to maintain a file system on top of block storage. Scaling a NAS system can also be easier, because the file system and the storage are expanded in a single process, whereas a SAN solution requires the underlying block storage to be scaled first and then the file system expanded on top of it. This rationale can be taken further: since companies are looking to put more "production" workloads on VMware, and since IT typically spends more time on production applications, the operational simplicity of running VMware on NFS translates into op-ex savings in these kinds of environments.
While this point has merit, one should also realize that Fibre Channel SAN administration isn't the chore it once was (after all, it's been around for more than 10 years) and iSCSI doesn't use Fibre Channel at all. Also, VMware has improved its file system management with vSphere 4.0, simplifying expansion. It should also be pointed out that how easily a storage system scales is dependent upon the features and functionality of that particular system, not simply whether it's block-based or file-based.
NAS vendors make the point that since they "own" the file system and the storage hardware, they can offer advanced storage features more easily than can SAN solutions. The rationale is that they can put some functions in hardware and some can be done by the file system, whereas the SAN vendor must either put functions in hardware or rely on VMware to support them in VMFS. SAN vendors are quick to point out they do work with VMware APIs to develop advanced features and VMware is continually adding functionality to VMFS as well. More importantly, while there's merit to this claim, NAS products differ widely in the features they offer, in the administration tools and utilities they provide, even in the total capacity they support. In short, not all NFS systems are created equally. And just because a NAS appliance supports NFS doesn't mean it's better suited for a VSE (see VAR recommendations below).
SAN proponents often focus on performance in the storage and virtualization discussion. This claim is similar to claims made during the original NAS vs. SAN storage debate, that Fibre Channel is faster and involves less CPU overhead than the IP protocol does. The response to this is also the same that NAS vendors have voiced for the past decade: "It depends." It depends on the bandwidth of the network, whether protocols are software- or hardware-based, and a host of other details. Like all performance questions in a complex system, it's difficult to accurately determine the bottleneck.
In reality, there are extremely large VSEs running on NAS as well as on SAN, so performance can be had with either, provided you know how to tune the system and you're willing to pay for it. Again, the devil's in the details and mileage will vary, so as a VAR, it's best to dig into those details before making promises to your customers -- standard operating procedure for good VARs anyway.
Aside from performance, SAN proponents claim that Fibre Channel was designed from the beginning to carry storage traffic and is better able to support redundant, multipath storage infrastructures than Ethernet. While bandwidth aggregation and load balancing can be more complex with Ethernet, 10 Gigabit Ethernet (10 GbE) largely resolves these issues, and the developing TRILL and Data Center Bridging standards are addressing Ethernet's remaining concerns with bridging loops and multiple switch links.
In response to the ease-of-use advantage that NAS proponents claim, block storage vendors are reducing the complexity of provisioning storage through virtualization of array spindles into pools of storage, removing the challenges of RAID and LUN management. They're also adding features like thin provisioning to help keep utilization up in the dynamic VMware environment.
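The thin-provisioning idea mentioned above can be shown in a minimal sketch. This is a conceptual model, not any vendor's implementation: a LUN advertises its full logical size to the VM, but physical capacity is allocated only when a block is first written.

```python
# Minimal sketch of thin provisioning: logical size is advertised
# up front, physical blocks are allocated on first write only.
class ThinLUN:
    def __init__(self, logical_gb):
        self.logical_gb = logical_gb
        self.allocated = set()      # block numbers actually backed

    def write(self, block):
        self.allocated.add(block)   # allocate on first write

    @property
    def physical_gb(self):
        # One block = 1 GB here, purely for simplicity.
        return len(self.allocated)

# Three VMs each "see" a 100 GB disk but have written only a little.
luns = [ThinLUN(100) for _ in range(3)]
for lun, used in zip(luns, (12, 30, 7)):
    for block in range(used):
        lun.write(block)

advertised = sum(l.logical_gb for l in luns)
consumed = sum(l.physical_gb for l in luns)
print(f"advertised {advertised} GB, consumed {consumed} GB")
# → advertised 300 GB, consumed 49 GB
```

The gap between advertised and consumed capacity is what keeps utilization up in a dynamic VMware environment, though it also means the array must monitor actual consumption to avoid running out of physical space.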
The SAN folks also make the point that a majority of VSEs run on block storage, meaning that most organizations have chosen SAN over NAS for VMware. While this has some merit, it may be due as much to the fact that block storage, specifically Fibre Channel, was the original format VMware supported, while NFS and iSCSI support has been available for only a few years. Block storage is also, for some, a more "comfortable" choice, being similar to the direct-attached storage that many virtual environments started on. SAN proponents further claim that VMware develops new products for VMFS before supporting NFS. While this has been the case in the past, the delay between releases varies widely, and users need to decide how important it is to get version 1.0 of a new VMware product or feature.
Recommendations for VARs
At the end of the day, all VSEs put data into files and need a file system to support this: for NAS it's NFS, and for a SAN it's VMFS. As mentioned earlier, environments of all sizes run on both SAN and NAS storage, and for most organizations, solutions from either a NAS or a SAN vendor can be implemented to meet their needs. The real differentiator for SAN vs. NAS storage in a virtualization environment may be the features and functionality of the storage system itself -- not so much whether it's block- or file-based -- and what limitations that technology choice may place on the environment down the road.
As a VAR, you'll need to spend some time understanding the customer's environment and what their future requirements may be. As is the case with every storage decision, there are no easy answers. This is actually good news for VARs because it means your customers need you to help them navigate the maze. Here's some advice for how to approach the decision.
- Don't "go cheap": Consolidation, whether it's servers or storage, can increase vulnerability, something that's often lost in the discussion about the benefits of scaling and flexibility, etc. This is especially true in a VSE where VM densities keep increasing and a storage hardware failure can affect even more applications than it did in a physical environment. There's a popular practice of using commodity hardware to reduce storage costs, but care should be taken to make sure it's reliable -- and scalable -- enough for the application.
- Consider scale-out/scale-up: Consolidated storage that is supporting a dynamic virtualized server environment could be providing storage for hundreds of server instances. To scale large enough to handle the expected storage growth, clustered or node storage solutions are available from both the NAS and SAN vendors. Also called "scale-out" or "scale-up," depending upon their specific architecture, these systems can grow to the petabyte range while maintaining performance, something that neither technology could do in the past. In addition to growth, a storage system needs to have the feature set that makes administration easy (again, it could be provisioning hundreds of VMs) and provide appropriate compliance and data protection as well. As mentioned previously, it may be better to look at the capabilities of the storage system to make sure it can support the environment and then see whether it's SAN- or NAS-based.
- Find out what's on the floor: You'll also need to learn the lay of the land. Knowing the incumbent storage manufacturer and which OS platforms are prevalent can make your job a lot easier. If a SAN or NAS system is already in place (and the customer is happy with it), that may be the path of least resistance. Obviously, you need to pick the best solution, but if SAN and NAS would work equally well, the customer will probably prefer to stay with a familiar platform or vendor.
About the author
Eric Slack, a senior analyst for Storage Switzerland, has more than 20 years of experience in high-technology industries holding technical management and marketing/sales positions in the computer storage, instrumentation, digital imaging and test equipment fields. He's spent the past 15 years in the data storage field, with storage hardware manufacturers and as a national storage integrator, designing and implementing open systems storage solutions for companies in the Western United States. Find Storage Switzerland's disclosure statement here.