
Tuning vSphere 4 hardware for optimal performance

If your customers want to implement vSphere 4, this tip explains how to advise them on hardware, how to achieve maximum performance per host and what to recommend for disk I/O.

Any mom and pop consulting shop can go into a company and implement VMware Inc.'s vSphere 4. Implementation is not that difficult, and you can get vSphere up and running in just a few hours. The difference between a mom and pop shop and a true enterprise solutions provider is that the enterprise solutions provider is going to work with its customers to make sure they get what they need. When the enterprise partner is done with a particular project, it leaves a well-tuned, high-performance system. This tip shows you how to tune your customers' vSphere 4 hardware during implementation so that your customers view you as the enterprise partner and not just a fly-by-night operation.

More resources on considering vSphere 4:
Maximizing VMware vSphere 4 performance

Cisco Unified Computing System vs. VMware vSphere 4

VARs see vSphere 4 as opening for managed services bid

VMware launches vSphere virtual infrastructure software

To tune your customer's environment for vSphere, start by looking at how your customer's existing host and guest systems have performed in the last few years. You may even want to look at servers that your customer didn't virtualize and create a new performance model to determine if the servers are now candidates for virtualization. After all, virtualization is about saving money and getting the most out of an investment.
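For example, a rough screening script like the following can turn the performance history you collect into a first-pass list of virtualization candidates. This is a minimal sketch in Python; the utilization thresholds are illustrative assumptions, not VMware guidance, so substitute the budgets you actually plan to offer per guest.

```python
# Minimal sketch of a virtualization-candidate screen. The thresholds are
# illustrative assumptions; replace them with the per-guest budgets you plan
# to offer and the figures from your customer's monitoring tools.

def is_virtualization_candidate(peak_cpu_pct, peak_ram_gb, peak_disk_iops,
                                cpu_threshold=60.0, ram_threshold=16.0,
                                iops_threshold=1500):
    """Flag a physical server as a consolidation candidate when its measured
    peaks stay below the assumed per-guest budgets."""
    return (peak_cpu_pct < cpu_threshold
            and peak_ram_gb < ram_threshold
            and peak_disk_iops < iops_threshold)

# Example: a lightly loaded file server measured over the last year.
print(is_virtualization_candidate(peak_cpu_pct=35, peak_ram_gb=6,
                                  peak_disk_iops=400))   # True
```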

Reviewing hardware for vSphere 4

Let's begin with the hardware choices you may present to your customers, starting with processors. The difference between processors from Intel and those from Advanced Micro Devices (AMD) generally isn't large; when one company introduces a feature, the other isn't far behind. But when it comes to choosing which vendor to go with, select the highest-performing chip and stick with that chip for at least the number of servers your customer will put into a vSphere cluster, since mixing processor families complicates vMotion compatibility. Also make sure you take advantage of the hardware virtualization features on modern servers. VMware has developed vSphere to take advantage of Intel VT-x and AMD-V, and those extensions benefit your customers with better-performing hosts and guests.
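As a quick sanity check, Intel VT-x and AMD-V show up as the vmx and svm CPU flags on a Linux system. The sketch below assumes you can read /proc/cpuinfo on the box in question; on an ESX host you would normally confirm this in the BIOS or with the hardware vendor's tools instead.

```python
# Check a Linux host's CPU flags for hardware virtualization extensions:
# Intel VT-x appears as the "vmx" flag, AMD-V as "svm". A sketch only.

def hardware_virt_flags(cpuinfo_path="/proc/cpuinfo"):
    flags = set()
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
    return {"intel_vt_x": "vmx" in flags, "amd_v": "svm" in flags}

print(hardware_virt_flags())
```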

Lately, RAM for servers has been dirt cheap. From my experience working on various virtual environments, if disk I/O is not the limiting factor for how many guests you can run on a host, the limiting factor ends up being the amount of RAM available on that host. With the cost of RAM declining, you can pick up a server with two quad-core chips and 48 GB of RAM for less than $15,000. That's pretty darn cheap, and for some of your smaller customers, that might be the way to go from a price-to-performance perspective. If you're working with larger enterprise customers and it does make sense to propose a monster server, be mindful of the maximum hardware and cluster configuration limits for vSphere.
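A back-of-the-envelope calculation like the one below shows why RAM tends to set the ceiling. The hypervisor overhead and per-guest sizes are illustrative assumptions; plug in your customer's real guest sizing.

```python
# Rough estimate of how many guests fit in a host's RAM with no overcommit.
# Overhead and per-guest figures are assumptions for illustration only.

def guests_per_host(host_ram_gb, avg_guest_ram_gb, hypervisor_overhead_gb=4):
    usable = host_ram_gb - hypervisor_overhead_gb
    return int(usable // avg_guest_ram_gb)

# A 48 GB host running 4 GB guests:
print(guests_per_host(48, 4))   # 11
```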

One very notable limit for vSphere is the number of guests you can have per host when a cluster grows to nine or more nodes. If you are leveraging VMware High Availability (HA), you can only have up to 40 guests per host in a cluster of that size. There are ways around that, such as not using HA or creating more, smaller clusters.
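The arithmetic is simple enough to sketch. The 40- and 100-guest figures below reflect the vSphere 4 HA limits as I understand them; verify them against the configuration-maximums document for the exact release you're deploying.

```python
# Sketch of the vSphere 4 HA guest-per-host ceiling: clusters of nine or more
# hosts are capped at 40 guests per host, smaller clusters at 100. Assumed
# figures; confirm against the configuration-maximums guide for your build.

def ha_guest_capacity(hosts, planned_guests_per_host):
    per_host_limit = 40 if hosts >= 9 else 100
    usable = min(planned_guests_per_host, per_host_limit)
    return hosts * usable, per_host_limit

# Ten hosts planned at 60 guests each are capped to 40 apiece by HA:
print(ha_guest_capacity(10, 60))   # (400, 40)
```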

When you're looking to squeeze out every ounce of performance per host, avoid memory overcommit. Give each guest only the amount of memory it needs: memory beyond that goes to waste, and a guest that doesn't have enough will start paging, which increases the amount of disk I/O it consumes.
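A simple way to keep an eye on this during design is to compare the memory configured across all planned guests with the host's physical RAM. The guest sizes below are illustrative assumptions.

```python
# Overcommit check: total configured guest RAM versus host physical RAM.
# A ratio above 1.0 means the host is overcommitted. Sizes are examples.

def memory_overcommit_ratio(guest_ram_gb, host_ram_gb):
    return sum(guest_ram_gb) / host_ram_gb

guests = [4, 4, 8, 2, 2, 4, 8, 4, 4, 4]            # configured GB per guest
ratio = memory_overcommit_ratio(guests, host_ram_gb=48)
print(f"overcommit ratio: {ratio:.2f}")             # 0.92, i.e. no overcommit
```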

The amount of I/O each host has available is a major component of ensuring that your customers have a good experience with their new environment. Disk I/O per host helps determine the number of guests you can have per host as well as the type of guests you can place on those hosts. Propose 15K RPM Ultra320 drives or faster, and remember that more spindles are better than a smaller number of large-capacity drives.
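The spindle math behind that advice is straightforward. The roughly 175 IOPS per 15K drive used below is a common rule of thumb rather than a measured figure, and RAID write penalties and controller cache will change the real numbers considerably.

```python
# Rough spindle math: more, smaller drives deliver more aggregate IOPS than
# a few large drives of the same total capacity. ~175 IOPS per 15K spindle
# is an assumed rule of thumb, before RAID penalties and cache effects.

def raw_array_iops(spindles, iops_per_spindle=175):
    return spindles * iops_per_spindle

# Eight small 15K drives vs. two large drives of similar usable capacity:
print(raw_array_iops(8))   # ~1400 IOPS
print(raw_array_iops(2))   # ~350 IOPS
```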

Drives aren't the only part of tuning storage performance and helping your customers get the most out of their investments. If possible, recommend disk controllers with a large amount of write cache, and work with the storage vendor to create an optimized configuration for your customer.

To achieve higher throughput and support fault-tolerant configurations, look at switches that support 4 Gb or 8 Gb Fibre Channel (FC), and use more than one FC card in each server. If you'll be using an iSCSI configuration, make sure you have dedicated switches for your iSCSI network. Also, try to leverage network cards that have a TCP offload engine to decrease the load on your server processors and help increase throughput on the network card.
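To put rough numbers on the FC recommendation: after 8b/10b encoding, usable Fibre Channel throughput works out to roughly 100 MB/s per gigabit of nominal link speed. The sketch below is an approximation, and the aggregate figure assumes your multipathing policy actually drives both HBAs.

```python
# Approximate per-host Fibre Channel bandwidth: ~100 MB/s per nominal Gbit
# after 8b/10b encoding. Aggregate assumes multipathing uses every HBA.

def fc_host_bandwidth_mb_s(link_gbit, ports):
    per_port = link_gbit * 100          # ~100 MB/s per Gbit, approximate
    return per_port * ports

print(fc_host_bandwidth_mb_s(4, 2))     # two 4 Gb HBAs: ~800 MB/s aggregate
print(fc_host_bandwidth_mb_s(8, 2))     # two 8 Gb HBAs: ~1600 MB/s aggregate
```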

This brings us to the networking aspect of tuning vSphere hardware. Almost all of the virtual machines on your hosts will be connected to one or more networks in your customer's environment; the guests may be doing file copies, serving Web pages or running databases. Make sure you have more than one network card for guest systems and leverage network interface card teaming for more bandwidth and fault tolerance. If your customer has multiple networks that guest systems need to connect to and only a finite amount of space in the servers, use network trunking over multiple network cards to handle those networks and the additional bandwidth required. If possible, use gigabit network cards, and be sure the switches have sufficient backplane bandwidth to support the traffic from all of your guest systems. While you're looking at the switch configuration, make sure the port speed and duplex are set correctly for the network cards installed in the servers: gigabit cards won't help performance much if the switch port is set to 10 Mbps half duplex.
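To sanity-check a proposed NIC team, compare the expected guest traffic against the team's usable bandwidth. The per-guest traffic figure and 70% utilization target below are assumptions to replace with measured data; the second call simply shows how badly a 10 Mbps port undercuts the design.

```python
# Does a NIC team have headroom for the guests on a host? Per-guest traffic
# and the utilization target are assumptions; use measured numbers instead.

def team_has_headroom(guest_count, avg_guest_mbps, nic_count,
                      nic_speed_mbps=1000, utilization_target=0.7):
    required = guest_count * avg_guest_mbps
    available = nic_count * nic_speed_mbps * utilization_target
    return required <= available

print(team_has_headroom(30, 40, nic_count=2))    # True: ~1200 of 1400 Mbps
print(team_has_headroom(30, 40, nic_count=2,
                        nic_speed_mbps=10))      # False on 10 Mbps ports
```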

About the expert
Jason Kappel is an infrastructure architect and virtualization expert at Avanade Inc. He specializes in enterprise infrastructure and data center optimization, virtualization and systems management. He has worked with some of the largest companies in the world to implement green data center solutions and has implemented several multinational server and desktop virtualization systems.

This was last published in September 2009
