Failover Clustering and Cluster Shared Volumes
First, it's important to know that building a cluster in the previous version of Hyper-V required two main components: the Failover Clustering service and a Cluster Shared Volume. The Failover Clustering service allowed the cluster nodes to work together as a cluster. The Cluster Shared Volume was a shared disk resource, connected to the cluster nodes over either Fibre Channel or iSCSI, that all nodes could access simultaneously. The virtual machine components resided on the Cluster Shared Volume so that each cluster node had access to the VMs.
Hyper-V 3.0 still uses the Failover Clustering service, but Microsoft has made a lot of changes to the way that clustering works. Some of these changes are related to an organization's ability to scale the Windows Server 2012 Hyper-V cluster. The table below compares cluster capabilities in Hyper-V 2.0 (Windows Server 2008 R2) with those found in Hyper-V 3.0 (Windows Server 2012).
| | Hyper-V 2.0 | Hyper-V 3.0 |
|---|---|---|
| Maximum running VMs per host | 384 | 1,024 |
| Maximum VMs per cluster | 1,000 | 8,000 |
| Maximum hosts per cluster | 16 | 64 |
| Maximum RAM per host | 1 TB | 4 TB |
As you can see, Windows Server 2012 Hyper-V clusters can handle a much larger workload than Hyper-V 2.0 clusters could. However, these improvements are just the tip of the iceberg. Microsoft has also made some major changes to the way that Hyper-V clusters use storage. In fact, the use of a Cluster Shared Volume is no longer required.
If you have customers who were using Hyper-V 2.0 and are considering an upgrade to Hyper-V 3.0, they will likely wonder how this design change will affect them. Although the use of a Cluster Shared Volume is no longer required, it is still supported. In fact, a Cluster Shared Volume is Microsoft's preferred type of storage for Hyper-V 3.0.
Alternatives to a Cluster Shared Volume
So, what happens if a customer doesn't use a Cluster Shared Volume? Microsoft provides two alternatives. One option is to use an SMB 3.0 file share. In other words, if your customer has a file server running Windows Server 2012, VM components can be placed on a share on that file server and used by the Hyper-V cluster. It is important for the customer to understand, however, that the VMs will consume resources on the file server, such as disk space, disk I/O and network bandwidth.
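As a rough illustration, the setup might look like the following PowerShell sketch. The share name, paths, domain and computer account names are all hypothetical, and in practice the share and NTFS permissions must also grant the Hyper-V hosts' computer accounts full control:

```powershell
# On the Windows Server 2012 file server: create an SMB 3.0 share for VM storage.
# "VMStore", "E:\VMStore" and the CONTOSO account names are illustrative only.
New-SmbShare -Name "VMStore" -Path "E:\VMStore" `
    -FullAccess "CONTOSO\HV-Node1$", "CONTOSO\HV-Node2$", "CONTOSO\Domain Admins"

# On a Hyper-V host: create a VM whose configuration and virtual disks
# live on the file server share instead of a Cluster Shared Volume.
New-VM -Name "TestVM" -MemoryStartupBytes 1GB -Path "\\FileServer1\VMStore"
```

This is only a sketch of the idea; a production deployment would also involve dedicated storage networking and careful permission delegation between the hosts and the file server.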
The other option for Hyper-V 3.0 storage is to use local, direct-attached storage within each cluster node. Rather than using shared storage, each cluster node simply uses its own internal storage to host VMs. This method works really well for smaller Hyper-V 3.0 clusters, but can be prohibitively expensive for larger clusters because of the cost of providing each cluster node with its own storage array.
Enhanced live migration
In Hyper-V 2.0, there were a few reasons for building clusters. Clusters provided high availability and fault tolerance for VMs, but they also enabled administrators to live-migrate VMs from one cluster node to another. All these capabilities still exist in Hyper-V 3.0, but Microsoft has enhanced VM migrations.
Hyper-V 3.0 offers a feature called shared-nothing live migration. In essence, virtual machines can be live-migrated between virtually any two Hyper-V 3.0 servers (although the process is much easier if the hosts are in a common domain). You can live-migrate VMs between cluster nodes just as you could before, but you can also live-migrate them between standalone hosts, between a standalone host and a cluster, and even between clusters. This makes it easy to bring a running VM into a Hyper-V 3.0 cluster without taking the VM offline.
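A shared-nothing move can be sketched with the Hyper-V PowerShell cmdlets along these lines. The host and VM names and the destination path are hypothetical, and Kerberos-based migration additionally requires constrained delegation to be configured for the host computer accounts:

```powershell
# On both hosts: allow incoming and outgoing live migrations.
Enable-VMMigration
Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos

# On the source host: move the running VM and all of its storage to the
# destination host's local disk. Names and paths are illustrative only.
Move-VM -Name "TestVM" -DestinationHost "HV-Node2" `
    -IncludeStorage -DestinationStoragePath "D:\VMs\TestVM"
```

Because `-IncludeStorage` moves the virtual disks along with the VM's state, neither host needs access to shared storage, which is what makes the migration "shared-nothing."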
Brien Posey is a freelance technical writer who has received Microsoft's MVP award six times. He has served as CIO for a national chain of hospitals and health care companies and as a network administrator for the U.S. Department of Defense at Fort Knox, Ky.
This was first published in January 2013