Host server storage and network performance

This section of our chapter excerpt on performance optimization explains why you should split your disks across multiple channels and how to configure your physical network adapters to achieve optimal performance.

Solution provider takeaway: The host server's storage system is actually the third bottleneck that should be addressed in a virtualized world, and a host server's network is probably the least likely culprit of performance problems. However, knowing how to combat issues in these two areas is vital for optimal performance with VMware ESX Server. This chapter excerpt, from VMware ESX Essentials in the Virtual Data Center, provides information on how to greatly improve the performance of your client's virtual environment with disk controllers and Gigabit Ethernet.

Download the .pdf of the chapter here.

Host Server Storage Performance

When searching for a performance bottleneck in your virtualization system, the CPU and the RAM are usually the first two suspects. But as you increase the density and usage of a physical machine with numerous virtual machines, disk I/O activity across that server's disk subsystem greatly increases. Certain workloads are also very sensitive to the latency of I/O operations. Because of that, the host server's storage system becomes the third bottleneck that needs to be addressed in a virtual world.

Storage performance issues are usually the result of a configuration problem. With ESX, you have the choice of using either local or remote storage. Either choice comes with its own set of configuration decisions that can affect performance, and there are a number of other dependencies as well, such as workload, RAID level, cache size, stripe size, drive speed and more.

At the risk of sounding obvious, using the fastest and highest performing disks and controllers will greatly improve the performance of your virtual environment. If you have decided to go the route of local or direct-attached disk storage, you should use 15K RPM disks to improve I/O performance across all of the virtual machines on that host server.

In addition to the speed of the disk drive, you should also consider the type of disk drive. While SATA drives can now be used with the latest version of ESX, you are better off, for performance reasons, spending the money on either Ultra320 SCSI drives or, better still if your system supports them, SAS disks.

Many disk controllers can support multiple channels on the card, and by splitting your disks across multiple channels, you can achieve a performance improvement. For example, if you have six SCSI disks and a two-channel controller, your hardware might allow you to place three disks on each channel and configure them in a six-disk RAID-5 array. This would allow you to effectively split your I/O across two channels.

You might also be able to install multiple disk controllers and additional disks within your host server. This would allow you to split up your file systems and strategically place your virtual machines according to their I/O needs. In other words, if you have ten virtual machines on a host server and two of them are running disk I/O intensive applications, you have a choice in how to configure your environment for the best possible performance. You can either place the eight low-intensity virtual machines on one controller and the two high-intensity virtual machines on the other, or you can put five virtual machines on each controller so that only one disk I/O intensive virtual machine is allocated to each file system.
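
As a minimal sketch of that placement from the service console, you can create an I/O intensive virtual machine's disk directly on the VMFS volume that sits on the less busy controller. The datastore and directory names below are placeholders for whatever your controllers' file systems are called:

  # List the VMFS volumes visible to this host
  ls /vmfs/volumes/
  # Create the directory and a 20GB virtual disk on the datastore backed by the second controller
  mkdir /vmfs/volumes/datastore2/io-heavy-vm
  vmkfstools -c 20G /vmfs/volumes/datastore2/io-heavy-vm/io-heavy-vm.vmdk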

If your organization is lucky enough to afford one, it is recommended that you use SAN storage rather than locally attached disks, because it delivers much better performance. SAN storage technology allows VMware ESX to shine with its added capabilities. Using a SAN offloads I/O operations from the ESX host server, which leaves more resources available to the virtual machines. To further optimize performance, spread the I/O load across multiple 4Gbps Fibre Channel SAN HBAs where possible. And make sure that heavily used virtual machines aren't accessing the same VMFS volume at the same time; spread them across multiple VMFS volumes so that disk performance won't suffer.
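
If you want to confirm that your LUN paths really are spread across both Fibre Channel HBAs, you can list them from the service console. This is only a quick sanity check; the output depends entirely on your hardware:

  # List every storage path and the HBA it travels through
  esxcfg-mpath -l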

TIP 
Make sure I/O traffic isn't queuing up in the VMkernel. To verify, monitor the number of queued commands with esxtop over a period of time.
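
One way to do this, assuming you are working from the ESX service console, is to run esxtop interactively and switch to the disk view, or capture a batch sample to review later. The interval and sample count below are arbitrary examples:

  # Interactive: press 'd' for the disk view and watch the queued-command counters
  esxtop
  # Batch mode: one sample every 5 seconds, 360 samples (30 minutes), saved for offline analysis
  esxtop -b -d 5 -n 360 > /tmp/esxtop-disk.csv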

VI3 adds new options for you to take advantage of in your remote storage. In addition to Fibre Channel SAN, you can now use iSCSI and NFS to take advantage of cheaper storage solutions that run over your existing IP networking technology. For iSCSI and NFS, it is important to make sure that your configuration and design do not result in an oversubscribed Ethernet link. TCP session control will ensure recovery from packet loss, but frequently recovering from dropped network packets will cause huge performance problems. Virtual machines with I/O intensive applications should not share Ethernet links to a storage device, and they will perform even better if they have multiple connection paths to their storage.
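
As a minimal sketch of bringing these IP-based storage options online from the service console, you can enable the software iSCSI initiator and mount an NFS export as a datastore. The server name, export path and datastore label below are placeholders:

  # Enable the ESX software iSCSI initiator
  esxcfg-swiscsi -e
  # Mount an NFS export as a datastore named nfs-datastore1
  esxcfg-nas -a -o nas01.example.com -s /vol/vmstore nfs-datastore1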

When your virtual machines share access to the ESX host server's I/O subsystem, use the I/O share allocation for each virtual machine to adjust the amount of I/O resources that the virtual machine is given. For virtual machines running applications that aren't very I/O intensive, you can set their resource shares to something low, like 500. And for the more resource intensive virtual machines that require more priority to I/O resources, set their shares to something higher, like 2000.
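
Disk shares are normally assigned per virtual disk in the VI Client (Edit Settings, Resources tab), and the values end up recorded in the virtual machine's .vmx configuration file. If you want to double-check what a machine has been given, you can inspect that file from the service console; the path below is hypothetical:

  # Show any share settings recorded in a VM's configuration file
  grep -i shares /vmfs/volumes/datastore1/io-heavy-vm/io-heavy-vm.vmx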

Host Server Network Performance

Network utilization can also present bottleneck issues in your environment, much like CPU, memory and storage. But in most virtualized environments, you will find that the network is probably the least likely culprit of performance problems. That said, the host server still needs to be supplied with the appropriate amount of network bandwidth and network resources so that the virtual machines don't add any significant network latency into the equation.

If you haven't already done so, upgrade your network environment to Gigabit Ethernet. With 10GigE waiting to take over, Gigabit Ethernet network adapters and switches should be affordable. Using Gigabit network adapters allows more virtual machines to share each physical network adapter and greatly improves the amount of network bandwidth made available to network intensive virtual machines.

When configuring your physical network adapters, the speed and duplex settings on each card must match the speed and duplex settings used on the switch port to which it is connected. The VMkernel network device drivers start with a default speed and duplex setting of auto-negotiate. The auto-negotiate setting is fine and should work correctly with network switches that are also set to auto-negotiate; this is the default and preferred setting for Gigabit connections. When using 100Mbit Fast Ethernet adapters, you should set the network adapter and the switch port to match at 100/Full. Conflicting settings between the network adapter and the switch can cause not only a performance problem but, in some cases, a connectivity issue as well.

TIP 
If you encounter bandwidth issues, check to make sure the NIC auto-negotiated properly. If not, change the speed and duplex settings manually by hard coding them to match. Do so at the switch or the VMkernel networking device using the VI Client.
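
The VI Client is the usual place to make that change, but the same check and fix can also be done from the service console with esxcfg-nics. The adapter name below is an example:

  # List physical NICs with their negotiated speed and duplex
  esxcfg-nics -l
  # Hard code a Fast Ethernet adapter to 100/Full to match the switch port
  esxcfg-nics -s 100 -d full vmnic1
  # Or return the adapter to auto-negotiation
  esxcfg-nics -a vmnic1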

You can also increase the available network bandwidth and improve network fault tolerance by teaming multiple Gigabit network adapters into a bond. This also reduces the number of virtual switches that need to be mapped to physical network adapters. In addition, you can use separate physical network adapters and vSwitches to avoid network contention between the service console, the VMkernel and the virtual machines, or to segment network I/O intensive virtual machines from one another.
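
A team is typically built by adding more physical uplinks to an existing vSwitch, which can be done in the VI Client or from the service console. The vSwitch and vmnic names below are examples:

  # Show current vSwitches and their uplinks
  esxcfg-vswitch -l
  # Add a second Gigabit uplink to vSwitch1 to form a team
  esxcfg-vswitch -L vmnic2 vSwitch1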

You might want to leverage the new networking enhancements that have been integrated into VMware ESX 3.5. Jumbo Frames are now supported. Supporting Ethernet frames of up to 9000 bytes (as opposed to standard Ethernet frames, which have a Maximum Transmission Unit of 1500 bytes) means that guest operating systems using Jumbo Frames need fewer packets to transfer large amounts of data. Not only can they achieve higher throughput, they also use less CPU than they would with standard Ethernet frames.
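
In ESX 3.5, Jumbo Frames are enabled per vSwitch from the service console by raising its MTU; the guest also needs an enhanced vmxnet adapter and a matching MTU on its own interface, and the physical switches in the path must support jumbo frames end to end. The vSwitch name below is an example:

  # Set a 9000-byte MTU on the vSwitch carrying jumbo-frame traffic
  esxcfg-vswitch -m 9000 vSwitch1
  # Verify the new MTU
  esxcfg-vswitch -l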

Another new feature in 3.5 is support for TCP Segmentation Offload (TSO). TSO is widely used and supported by today's network cards. It allows the expensive task of segmenting large TCP packets of up to 64KB to be offloaded from the CPU to the NIC hardware. ESX 3.5 uses this concept to provide virtual NICs with TSO support even when the underlying hardware doesn't have TSO capabilities. Because the guest operating system can now send packets that are larger than the MTU to the ESX server, processing overhead on the transmit path is reduced. TSO improves performance for TCP data coming from a virtual machine and for network traffic sent out from the server.
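
A quick way to confirm that a Linux guest is actually taking advantage of TSO on its virtual NIC is to check the offload settings from inside the guest; the interface name below is an example:

  # Inside the guest: list offload settings and look for "tcp segmentation offload: on"
  ethtool -k eth0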


VMware ESX Server: Performance optimization
  Configuration of the host server 
  Host server processor and memory performance
  Host server storage and network performance
  Configuration of the virtual machine
  Configuration of the guest operating system
 

About the book
VMware ESX Essentials in the Virtual Data Center details best practices for ESX and ESXi, guides you through performance optimization processes for installation and operation, uses diagrams to illustrate the architecture and background of ESX and covers the two most popular releases, 3.0 and 3.5.

This was first published in November 2008
