
Improving Hyper-V performance: Four best practices

Solid Hyper-V performance relies on proper workload balancing and resource utilization. Follow these four best practices to optimize Microsoft Hyper-V R2.

When it comes to optimizing Hyper-V performance, in simple terms, more is better. The more resources solutions providers throw at a Microsoft Hyper-V R2 environment, the less likely it is that virtual machines (VMs) will be vying for a limited supply.


But while this "more is better" approach is great for hardware suppliers (as well as your own hardware purchase quota), it goes against some of the fundamental reasons why your customers are moving to virtualization in the first place. To optimize Hyper-V performance in the correct way, you need to go beyond this approach.

To that end, let's deconstruct the "more is better" mantra for the Microsoft Hyper-V R2 platform into four best practices. These best practices still follow the guideline that VM resource supply must exceed workload demand.

1. Boost Hyper-V performance with more processors, not more powerful processors.

While individual processor power has its place in spec'ing virtual environments, the actual count of processors is often more important. Because each logical processor can service only a single VM's virtual processor at a time, having more processors ensures that fewer VMs sit in a wait state at any given moment. In almost every case, spending money on a greater number of processors is a better idea than buying more powerful ones.
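The scheduling argument above can be sketched with a simple probability model. The sketch below assumes each VM is independently busy (runnable) with some fixed probability, which is a deliberate simplification for illustration only; it estimates how often more VMs are runnable than there are logical processors to run them, which is when VMs must wait.

```python
from math import comb

def wait_probability(vms: int, processors: int, p_runnable: float) -> float:
    """Probability that more VMs are runnable than there are logical
    processors, forcing at least one VM into a wait state.

    Assumes each VM is independently runnable with probability
    p_runnable -- an illustrative simplification, not a real
    scheduler model.
    """
    return sum(
        comb(vms, k) * p_runnable**k * (1 - p_runnable) ** (vms - k)
        for k in range(processors + 1, vms + 1)
    )

# 20 VMs, each busy roughly 30% of the time:
# compare a host with 8 logical processors to one with 16.
few_cores = wait_probability(20, 8, 0.3)
many_cores = wait_probability(20, 16, 0.3)
```

Under this toy model, doubling the processor count drives the chance of a VM waiting down sharply, even if each individual processor is no faster, which is the intuition behind favoring count over clock speed.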

2. Fill Microsoft Hyper-V R2 servers to their maximum RAM capacity.

The quantity of physical RAM installed in the Hyper-V hosts is exceptionally important in achieving the greatest number of virtual machines per host. RAM is also exceptionally important during failover events, when one host in a cluster fails. During these events, VMs must be re-hosted onto surviving cluster members if you intend to keep them running. By ensuring that hosts always have more RAM than your customers currently need, you'll leave those customers well prepared to seamlessly survive a failover event with no effect on their operations or on Hyper-V performance.
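A quick sanity check for that failover headroom can be sketched as follows. The per-host overhead figure is an assumption for illustration (real parent-partition overhead varies); the check asks whether the cluster can still hold all VM memory after losing its largest host.

```python
def survives_host_failure(host_ram_gb: list[int], total_vm_ram_gb: int,
                          host_overhead_gb: int = 4) -> bool:
    """Check whether the cluster can still run all VMs after losing
    its largest host.

    host_overhead_gb is an assumed per-host reservation for the
    parent partition and Hyper-V itself, for illustration only.
    """
    usable = sorted(ram - host_overhead_gb for ram in host_ram_gb)
    # Worst case: lose the host with the most usable RAM.
    surviving_capacity = sum(usable[:-1])
    return surviving_capacity >= total_vm_ram_gb

# Four hosts with 64 GB each; VMs consume 150 GB in total.
# Losing one host leaves 3 x 60 GB = 180 GB, so the cluster survives.
ok = survives_host_failure([64, 64, 64, 64], 150)
```

If the check fails, the fix is exactly what this best practice recommends: fill the hosts to their maximum RAM capacity before adding VMs.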

3. Give your customers room to grow.

Highly available Microsoft Hyper-V R2 environments must always be prepared for host failures. Therefore, every Hyper-V cluster should be minimally architected in an N+1 configuration, where N represents the amount of resources needed to support today's VM needs. If your customers need to support, for example, 40 concurrently running VMs, build out their environment so that it can support 50, leaving headroom for both a failed host and modest growth. Notwithstanding their need for future growth, reserving a percentage of cluster resources for failover events will ensure that they survive their next host failure.
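The N+1 arithmetic above can be made concrete. The VM-per-host density below (10 VMs per host) is an assumed figure chosen so the numbers match the article's 40-to-50 example; real densities depend on workload sizing.

```python
import math

def hosts_needed(total_vms: int, vms_per_host: int, spare_hosts: int = 1) -> int:
    """N+1 sizing: hosts required for today's VM count, plus spare
    hosts so the cluster survives a failure.

    vms_per_host is an assumed density for illustration only.
    """
    return math.ceil(total_vms / vms_per_host) + spare_hosts

# The article's example: 40 VMs at an assumed 10 VMs per host.
n_plus_1 = hosts_needed(40, 10)   # 4 hosts for today's load, +1 spare
capacity = n_plus_1 * 10          # total capacity in VMs
```

With five hosts of ten VMs each, the cluster carries capacity for 50 VMs while running 40, so losing any single host still leaves room for every running VM.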

4. Hyper-V performance management is an absolute requirement.

Virtual environments tend to become exceptionally complex very quickly -- so complex, in fact, that no mere human can monitor real-time resource utilization well enough to ensure optimum workload balancing. For the ongoing health of customers' environments, solutions providers should always recommend virtual environment monitoring software or services that are specifically designed for use with their hypervisor. Use this type of software to ensure that VMs are functioning properly and getting the resources they need.
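The core job such monitoring software performs is continuous threshold checking across many hosts and VMs, which is what makes it impractical to do by hand. The sketch below is a minimal illustration of that idea; the metric names and threshold values are assumptions, not any particular product's defaults.

```python
# Illustrative thresholds only; real monitoring products define their own
# metrics and tunable alert levels.
THRESHOLDS = {"cpu_pct": 80, "memory_pct": 90, "disk_queue": 2}

def check_host(metrics: dict) -> list[str]:
    """Return an alert string for every metric that exceeds its threshold."""
    return [
        f"{name} at {value} exceeds {THRESHOLDS[name]}"
        for name, value in metrics.items()
        if name in THRESHOLDS and value > THRESHOLDS[name]
    ]

# One host sampled once: CPU is over its threshold, the rest are healthy.
alerts = check_host({"cpu_pct": 85, "memory_pct": 70, "disk_queue": 1})
```

Multiply this by dozens of hosts, hundreds of VMs, and samples every few seconds, and the case for purpose-built monitoring tooling makes itself.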

About the expert: Greg Shields, MVP, vExpert, is a partner with Concentrated Technology. Get more of Greg's tips and tricks at
