VMware ESX essentials: Fibre Channel and iSCSI

This section of our chapter excerpt on storage from the book "VMware ESX Essentials in the Virtual Data Center" explains the differences between Fibre Channel and iSCSI as well as the configurations a client is able to use when working with VMware ESX.

Solution provider takeaway: iSCSI is a cost-effective storage networking protocol that has a few significant advantages over Fibre Channel. This section of our chapter excerpt, from VMware ESX Essentials in the Virtual Data Center, provides solution providers with a general overview and comparison of the differences between Fibre Channel and iSCSI.


Fibre Channel SAN

When using Fibre Channel to connect to the back-end storage, VMware ESX requires the use of a Fibre Channel switch. Using more than one allows for redundancy. The Fibre Channel switches form the "fabric" of the Fibre Channel network by connecting multiple nodes together. Disk arrays in storage area networks (SANs) are among the main devices you will see connected in a Fibre Channel network, along with servers and tape drives. Storage processors aggregate physical hard disks into logical volumes, otherwise called LUNs, each with its own LUN number identifier. World Wide Names (WWNs) are assigned by the manufacturer to host bus adapters (HBAs), a concept similar to the MAC addresses burned into network interface cards (NICs). Zoning and pathing are the methods the Fibre Channel switches and SAN storage processors (SPs) use to control host access to the LUNs. The SPs use soft zoning to control LUN visibility per WWN. The Fibre Channel switch uses hard zoning, which controls SP visibility on a per-switch basis, along with LUN masking, which controls LUN visibility on a per-host basis.
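
Conceptually, soft zoning and LUN masking are just two layers of filtering on what a host is allowed to see. The following is a minimal sketch of that idea; the WWNs, host names and LUN numbers are made up for illustration and are not drawn from the book.

```python
# Toy model of LUN visibility: soft zoning filters by initiator WWN,
# LUN masking then hides specific LUNs from specific hosts.
# All identifiers below are hypothetical examples.

# Soft zoning (on the SP/fabric): LUNs presented to each initiator WWN.
SOFT_ZONING = {
    "21:00:00:e0:8b:05:05:04": {0, 1, 2},   # HBA in ESX host A
    "21:00:00:e0:8b:05:05:05": {0, 3},      # HBA in ESX host B
}

# LUN masking: per-host overrides that hide LUNs the host should not use.
LUN_MASKING = {
    "esx-host-a": {2},   # LUN 2 is masked away from host A
}

def visible_luns(wwn: str, hostname: str) -> set:
    """Return the LUNs a host can actually address after zoning and masking."""
    zoned = SOFT_ZONING.get(wwn, set())
    masked = LUN_MASKING.get(hostname, set())
    return zoned - masked

print(visible_luns("21:00:00:e0:8b:05:05:04", "esx-host-a"))   # {0, 1}
```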

The VMkernel will address the LUN using the following example syntax:

vmhba(adapter#):target#:LUN#:partition#, for example vmhba1:0:0:1
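
For illustration, here is a small helper (an assumption for this example, not a VMware utility) that splits such a path into its four components.

```python
from typing import NamedTuple

class VmhbaPath(NamedTuple):
    adapter: str     # e.g. "vmhba1"
    target: int      # SCSI target number
    lun: int         # LUN number
    partition: int   # partition number (0 refers to the entire LUN)

def parse_vmhba_path(path: str) -> VmhbaPath:
    """Split a VMkernel device path such as 'vmhba1:0:0:1' into its parts."""
    adapter, target, lun, partition = path.split(":")
    return VmhbaPath(adapter, int(target), int(lun), int(partition))

print(parse_vmhba_path("vmhba1:0:0:1"))
# VmhbaPath(adapter='vmhba1', target=0, lun=0, partition=1)
```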

So how does a Fibre Channel SAN work? Let's take a look at how the SAN components interact with each other. This is a very general overview of the process, and a simple sketch of the flow follows the list.

1. When a host wants to access the disks or storage device on the SAN, the first thing that must happen is an access request for the storage device. The host sends out a block-based access request to the storage devices.
2. The request is then accepted by the HBA for the host. At the same time, it is first converted from its binary data form to optical form, which is what is required for transmission in the fiber optical cable. Then the request is "packaged" based on the rules of the Fibre Channel protocol.
3. The HBA then transmits the request to the SAN.
4. One of the SAN switches receives the request and determines which storage device the host wants to access. From the host's perspective this appears as a specific disk, but it is really a logical device that corresponds to some physical device on the SAN.
5. The Fibre Channel switch will determine which physical devices have been made available to the host for its targeted logical device.
6. Once the Fibre Channel switch determines the correct physical device, it will pass along the request to that physical device.
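
Here is a simple sketch that strings those steps together. The mapping of a logical LUN to a physical device is invented for the example; in reality it is resolved by the HBA, the fabric switches and the storage processor.

```python
# Hypothetical fabric view: which physical device backs each logical LUN
# that has been presented to this host.
LOGICAL_TO_PHYSICAL = {
    ("vmhba1", 0, 0): "array-A, RAID group 3",
    ("vmhba1", 0, 1): "array-A, RAID group 7",
}

def issue_block_request(adapter: str, target: int, lun: int, lba: int) -> str:
    """Walk steps 1-6: host request -> HBA -> fabric -> physical device."""
    # Steps 1-3: the host builds a block-based request; the HBA frames it per
    # the Fibre Channel protocol (modelled here as a plain dict) and
    # transmits it into the SAN.
    frame = {"adapter": adapter, "target": target, "lun": lun, "lba": lba}
    # Steps 4-5: the switch works out which logical device is being addressed
    # and which physical device has been made available for it.
    key = (frame["adapter"], frame["target"], frame["lun"])
    physical = LOGICAL_TO_PHYSICAL.get(key)
    if physical is None:
        raise LookupError(f"LUN {key} is not presented to this host")
    # Step 6: the request is passed along to that physical device.
    return f"request for LBA {lba} forwarded to {physical}"

print(issue_block_request("vmhba1", 0, 1, lba=2048))
```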

iSCSI

iSCSI is a different approach than that of Fibre Channel. iSCSI is a SCSI transport protocol that enables access to a storage device via standard TCP/IP networking. This process works by mapping SCSI block-oriented storage over TCP/IP. This process is similar to mapping SCSI over Fibre Channel. Initiators like the VMware ESX iSCSI HBA send SCSI commands to "targets" located in the iSCSI storage systems.
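
As a rough illustration of that mapping, the sketch below wraps a SCSI read command into a payload addressed to a target over TCP/IP. It is a toy model only; it does not construct real iSCSI PDUs, and the address and numbers are made up (3260 is the standard iSCSI port).

```python
SCSI_READ_10 = 0x28   # opcode of a 10-byte SCSI READ command

def build_iscsi_style_request(lun: int, lba: int, blocks: int) -> dict:
    """Toy model: a block-oriented SCSI request addressed to an IP-based target."""
    return {
        "target": "192.168.10.50:3260",   # hypothetical target, standard port
        "lun": lun,
        "cdb": {"opcode": SCSI_READ_10, "lba": lba, "length": blocks},
    }

# An initiator such as the VMware ESX iSCSI HBA (or software initiator) would
# serialize something like this and carry it over the existing Ethernet network.
print(build_iscsi_style_request(lun=0, lba=4096, blocks=8))
```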

iSCSI has some distinct advantages over Fibre Channel, primarily with cost. You can use the existing NICs and Ethernet switches that are already in your environment. This brings down the initial cost needed to get started. When looking to grow the environment, Ethernet switches are less expensive than Fibre Channel switches.

iSCSI also has the ability to do long-distance data transfers and can use the Internet for data transport. You can have two data centers that are geographically separated from each other and still run iSCSI between them. Fibre Channel, by contrast, must use a gateway to tunnel through or convert to IP.

Performance with iSCSI is increasing at an accelerated pace. As Ethernet speeds continue to increase (10Gig Ethernet is now available), iSCSI speeds increase as well. Because of the way iSCSI SANs are architected, iSCSI environments continue to gain speed as they are scaled out; iSCSI does this by using parallel connections from the storage processors to the disk arrays.

iSCSI is simpler and less expensive than Fibre Channel. Now that 10Gig Ethernet is available, the adoption of iSCSI into the enterprise looks very promising.

It is important to really know the limitations and maximum configurations you can use when working with VMware ESX and the back-end storage system. Let's take a look at the most important ones; a quick sanity-check sketch follows the list.

1. 256 is the maximum number of LUNs per system; during installation the maximum is 128.
2. 16 is the maximum total number of HBA ports per system.
3. 4 is the maximum number of virtual HBAs per virtual machine.
4. 15 is the maximum number of targets per virtual machine.
5. 60 is the maximum number of virtual disks per Windows and Linux virtual machine.
6. 256 is the maximum number of VMFS file systems per VMware ESX server.
7. 2TB is the maximum size of a VMFS partition.
8. The maximum file size for a VMFS-3 file is based on the block size of the partition. A 1MB block size will allow up to a 256GB file size and a block size of 8MB will allow 2TB.
9. The maximum number of files per VMFS-3 partition is 30,000.
10. 32 is the maximum number of paths per LUN.
11. 1024 is the maximum number of total paths.
12. 15 is the maximum number of targets per HBA.
13. 1.1GB is the smallest VMFS-3 partition you can create.
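
To make those figures easy to check against a design, here is a small sketch that captures a few of them as data; the dictionary and helper are assumptions for this example, not part of any VMware tool.

```python
# A subset of the maximums listed above, for the ESX 3.x releases this
# chapter covers.
ESX_STORAGE_MAX = {
    "luns_per_system": 256,
    "paths_per_lun": 32,
    "total_paths": 1024,
    "vmfs_volumes_per_host": 256,
    "files_per_vmfs3_volume": 30000,
}

def check_multipath_design(num_luns: int, paths_per_lun: int) -> None:
    """Raise if a proposed LUN/path layout exceeds the documented limits."""
    if num_luns > ESX_STORAGE_MAX["luns_per_system"]:
        raise ValueError("more than 256 LUNs per system")
    if paths_per_lun > ESX_STORAGE_MAX["paths_per_lun"]:
        raise ValueError("more than 32 paths to a single LUN")
    if num_luns * paths_per_lun > ESX_STORAGE_MAX["total_paths"]:
        raise ValueError("total path count exceeds 1024")

check_multipath_design(num_luns=100, paths_per_lun=4)   # fine: 400 total paths
```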

So, there you have it, the 13 VMware ESX rules of storage. The block size you set on a partition is the rule you will revisit most often. A general best practice is to create LUN sizes between 250GB and 500GB. Proper initial configuration for the long term is essential. For example, if you wanted to P2V a server that has 300GB of total disk space and had not planned ahead when you created the LUN by using at least a 2MB block size, you would be stuck. Here is the breakdown (a small helper that applies it is sketched after the list):

1. 1MB block size = 256GB max file size
2. 2MB block size = 512GB max file size
3. 4MB block size = 1024GB max file size
4. 8MB block size = 2048GB max file size.

A VMFS-3 volume can also span up to 32 physical storage extents; at an 8MB block size (2TB per extent), that equals the maximum volume size of 64TB.
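
Here is a small planning helper (again an assumption for this example, not a VMware tool) that applies the breakdown above and picks the smallest block size able to hold a given virtual disk.

```python
# VMFS-3 block size (MB) -> maximum file size (GB), per the breakdown above.
BLOCK_SIZE_TO_MAX_FILE_GB = {1: 256, 2: 512, 4: 1024, 8: 2048}

def minimum_block_size_mb(vmdk_size_gb: int) -> int:
    """Return the smallest block size whose maximum file size fits the disk."""
    for block_mb, max_gb in sorted(BLOCK_SIZE_TO_MAX_FILE_GB.items()):
        if vmdk_size_gb <= max_gb:
            return block_mb
    raise ValueError("a single VMFS-3 file cannot exceed 2TB")

# The P2V example from the text: a 300GB disk needs at least a 2MB block size.
print(minimum_block_size_mb(300))   # 2
```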

NOTE
Now would be a very good time to share a proverb that has served me well over my career. "Just because you can do something, does not mean you should." Nothing could be truer than this statement. There really is no justification for creating volumes that are 64TB or anything remotely close to that. As a best practice, I start thinking about using raw device mappings (otherwise known as RDMs) when I need anything over 1TB. I actually have 1TB to 2TB in my range, but if the SAN tools are available to snap a LUN and then send it to Fibre tape, that is a much faster way to back things up. This is definitely something to consider when deciding whether to use VMFS or RDM.

System administrators today do not always have the luxury of doing things the way they should be done. Money and management ultimately make the decisions, and we are then forced to make do with what we have. In a perfect world, we would design tier-level storage for the different applications and virtual machines running in the environment, possibly comprising RAID 5 LUNs and RAID 0+1 LUNs. Always remember the golden rule: "spindles equal speed."

As an example, Microsoft is very specific when it comes to best practices with Exchange and the number of spindles you need on the back end to get the performance that you expect for the scale of the deployment. Different applications are going to have different needs, so depending on the application that you are deploying, the disk configuration can make or break the performance of your deployment.
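
As a rough illustration of the spindle math, the sketch below estimates a spindle count from a workload's IOPS profile. The per-disk IOPS figure and the RAID write penalties are generic planning assumptions, not figures from this book or from Microsoft.

```python
import math

# Common rule-of-thumb write penalties; adjust to your array's guidance.
RAID_WRITE_PENALTY = {"RAID 0+1": 2, "RAID 5": 4}

def spindles_needed(total_iops: int, write_fraction: float,
                    raid_level: str, iops_per_disk: int = 180) -> int:
    """Estimate how many spindles a workload needs at a given RAID level."""
    penalty = RAID_WRITE_PENALTY[raid_level]
    backend_iops = (total_iops * (1 - write_fraction)
                    + total_iops * write_fraction * penalty)
    return math.ceil(backend_iops / iops_per_disk)

# e.g. a 2,000 IOPS mail workload with 40% writes
print(spindles_needed(2000, 0.4, "RAID 5"))     # RAID 5 needs more spindles
print(spindles_needed(2000, 0.4, "RAID 0+1"))
```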

Summary

So we learned that the number of spindles directly affects the speed of the disks. And we also learned the 13 VMware ESX rules for storage and what we needed to know about VMFS. Additionally, we touched on the different storage device options that have been made available to us. Those choices include DAS, iSCSI and Fibre Channel SAN. We also presented a very general overview on how a Fibre Channel SAN works.

Knowing that one of the biggest gotchas is the block size of VMFS partitions and LUNs, and combining that knowledge with the different storage options available, you can now make the best possible decisions when architecting the storage piece of your virtual infrastructure environment. Proper planning upfront is crucial to making sure that you do not have to overcome hurdles later pertaining to storage performance, availability and cost.



About the book
VMware ESX Essentials in the Virtual Data Center details best practices for ESX and ESXi, guides you through performance optimization processes for installation and operation, uses diagrams to illustrate the architecture and background of ESX and covers the two most popular releases, 3.0 and 3.5.
