
Three setbacks when designing Hyper-V R2 High Availability

Deploying Hyper-V R2's High Availability feature requires solutions-provider expertise, and knowing its drawbacks ahead of time will save you trouble during the deployment process.

Microsoft Hyper-V R2 has an exceptionally easy installation process. Even the greenest solutions provider can click the few buttons required -- Next, Next, Finish -- to install it. But like all technologies, the overt ease of installing R2's core services belies the true complexity of creating a workable virtualization infrastructure.


You're likely to become involved with your customers' Hyper-V R2 installations after they've toyed with the technology. Once customers have installed it on a server or two and recognized its value, they very quickly discover that Hyper-V R2's high-availability feature isn't as simple to deploy. As such, delivering expertise in implementing a highly available infrastructure is where solutions providers can add value.

To that end, there are three key setbacks that you must be aware of when designing Hyper-V R2 high availability for your customers.

  1. Eliminating hardware bottlenecks in Hyper-V R2

    Virtual hosts are unlike other servers in a business data center. The hosts don't necessarily run a production workload; rather, they enable other workloads to run through their processing. The consolidation of workloads atop each virtual host requires horsepower with plenty of potential to support additional virtual machines (VMs).

    When spec'ing out hosts for customers, always remember three things. First, Hyper-V R2 hosts tend to become memory bound before any other resource. Because Hyper-V R2 does not support memory overcommit, the virtual memory assigned to your customers' VMs can never surpass the host's physical memory. Thus, maxing out memory in every Hyper-V R2 host is an absolute must.
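The no-overcommit rule above is easy to express as a capacity check. This is a minimal sketch, assuming a hypothetical helper name and an assumed 4 GB held back for the parent partition -- it is not a Hyper-V tool:

```python
def fits_without_overcommit(host_ram_gb, vm_ram_gb, parent_reserve_gb=4):
    """Return True if the VMs' assigned RAM fits in physical RAM.

    Hyper-V R2 has no memory overcommit, so the sum of assigned
    virtual memory (plus RAM kept for the parent partition, an
    assumed 4 GB here) can never exceed the host's physical memory.
    """
    return sum(vm_ram_gb) + parent_reserve_gb <= host_ram_gb

# A 64 GB host: four VMs totaling 48 GB fit, four totaling 64 GB do not.
print(fits_without_overcommit(64, [8, 8, 16, 16]))    # True  (48 + 4 <= 64)
print(fits_without_overcommit(64, [16, 16, 16, 16]))  # False (64 + 4 > 64)
```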

    Second, while physical processors can be, and are, shared among collocated VMs, you should limit the number of assigned virtual processors so it doesn't exceed the number of physical processors. Doing so means assigning each VM as few virtual processors as possible, with a single processor as the standard starting point. It's also a good idea to spec hosts ahead of time for the anticipated number of VMs: if you plan to run 16 single-processor VMs, a 16-way server is a reasonable rule-of-thumb target.
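The 1:1 virtual-to-physical processor guideline can be sketched the same way. The function name below is illustrative, not part of any Microsoft tooling:

```python
def vcpus_within_physical(physical_procs, vcpus_per_vm):
    """Check the rule of thumb that total assigned virtual
    processors should not exceed the host's physical processors."""
    return sum(vcpus_per_vm) <= physical_procs

# 16 single-vCPU VMs sit exactly at the 1:1 limit on a 16-way host;
# giving each VM two vCPUs blows past it.
print(vcpus_within_physical(16, [1] * 16))  # True  (16 vCPUs on 16 procs)
print(vcpus_within_physical(16, [2] * 16))  # False (32 vCPUs on 16 procs)
```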

    The third thing to remember for bottleneck prevention is to include enough network interface cards (NICs) in the Hyper-V R2 host. For environments that use Fibre Channel storage, plan for no fewer than four NICs. Customers that use iSCSI storage should buy no fewer than six. In my own consultations, I recommend 10 as a starting point. This number seems extravagant at first, but the need for teamed production networking (sometimes across multiple connections) as well as teamed storage networking, cluster heartbeat, Live Migration and management connections quickly adds up.
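One plausible per-role tally shows how quickly those connections add up. The role names and counts below are illustrative assumptions for an iSCSI-attached host, not a Microsoft requirement:

```python
# Assumed per-role NIC counts for an iSCSI-attached Hyper-V R2 host.
nic_roles = {
    "production LAN (teamed)": 2,
    "iSCSI storage (multipath pair)": 2,
    "cluster heartbeat": 1,
    "Live Migration": 1,
    "management": 1,
}

total_nics = sum(nic_roles.values())
print(total_nics)  # 7 -- already past the six-NIC iSCSI minimum,
                   # before adding any extra teamed production links
```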

  2. Hyper-V R2 memory

    If Hyper-V R2 had an Achilles' heel, it would be memory. Microsoft's decision not to include memory overcommit features is defensible, but it becomes shortsighted when you consider failure states.

    With Hyper-V R2 clusters, VMs can be failed over to surviving cluster nodes when a host has a problem. This is great for availability, because the loss of a Hyper-V host needn't cause a lengthy outage for its VMs. But because Hyper-V R2 does not support memory overcommit, a Hyper-V cluster must keep unused an amount of RAM at least equivalent to the RAM assigned to the VMs on one host; that reserve is what supports the failure of one cluster node. The RAM needn't all sit on one server -- it can be spread across the other cluster nodes -- but you need the reserve in order to power on the VMs when a host dies.

    Look at the situation like this: in a two-node Hyper-V cluster, 50% of the cluster's total memory must remain unused to fail over VMs. Anything less, and some VMs from the failed host won't be able to power on at the surviving host. A four-node cluster must keep 25% of its memory in reserve for this event, and so on. Therefore, you need not only high-performance servers but also a larger number of them to minimize the RAM reserve.
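The arithmetic above is just N-1 sizing: with equally sized nodes, the idle fraction is one node's share of the cluster. A quick worked sketch (the helper name is mine, not a Microsoft formula):

```python
def reserve_fraction(n_nodes):
    """Fraction of total cluster RAM that must stay unused so the
    VMs of one failed node can restart on the survivors, assuming
    equally sized nodes (N-1 capacity planning)."""
    return 1.0 / n_nodes

print(reserve_fraction(2))  # 0.5  -> 50% of a two-node cluster sits idle
print(reserve_fraction(4))  # 0.25 -> 25% idle in a four-node cluster
print(reserve_fraction(8))  # 0.125 -> larger clusters shrink the reserve
```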

  3. Hyper-V R2 Cluster Shared Volumes lack support

    Microsoft's inclusion of the Cluster Shared Volumes (CSV) feature in Windows Server 2008 R2 is an excellent fit for every Hyper-V R2 installation. CSV enables VMs that are stored on the same logical unit number (LUN) to be failed over independently. Without this feature, the entire LUN would require failing over if a problem occurred.

    There's a relatively unknown support problem once CSV is enabled, specifically in the realm of backups. As of this writing, almost no third-party products support host-based backups of VMs on CSV-enabled volumes. Even Microsoft's Windows Server Backup can't accomplish the task. Microsoft's System Center Data Protection Manager 2010 is slated to include support, but it is currently available only in beta. Products with this support are forthcoming, but be conscious of the limitation before enabling CSV for your customers.

About the expert:
Greg Shields is an independent author, instructor, Microsoft MVP and IT consultant based in Denver. He is a co-founder of Concentrated Technology LLC and has nearly 15 years of experience in IT architecture and enterprise administration. Shields specializes in Microsoft administration, systems management and monitoring, and virtualization. He is the author of several books, including Windows Server 2008: What's New/What's Changed, available from Sapien Press.


This was last published in October 2009
