Configuring VMotion and SVMotion: Requirements for VARs

Learn what configuration and compatibility requirements are necessary for VMotion and SVMotion, such as having at least a Gigabit Ethernet network adapter.

Solution provider's takeaway: Browse through these lists of requirements to keep in mind when configuring VMotion and SVMotion so that you can avoid compatibility issues in your customer's environment.


VMotion is a powerful feature that allows you to quickly move an entire running VM from one ESX host to another without any downtime or interruption to the VM. This is also known as a "hot" or "live" migration.

How VMotion works
The entire state of a VM is encapsulated and the VMFS filesystem allows both the source and the target ESX host to access the VM files concurrently. The active memory and precise execution state of a VM can then be rapidly transmitted over a high-speed network. The VM retains its network identity and connections, ensuring a seamless migration process as outlined in the following steps.

  1. The migration request is made to move the VM from ESX1 to ESX2.
  2. vCenter Server verifies that the VM is in a stable state on ESX1.
  3. vCenter Server checks the compatibility of ESX2 (CPU/networking/etc.) to ensure that it matches that of ESX1.
  4. The VM is registered on ESX2.
  5. The VM state information (including memory, registers, and network connections) is copied to ESX2; memory pages that change during the copy are tracked in a memory bitmap on ESX1.
  6. The VM is quiesced on ESX1 and the memory bitmap is copied to ESX2.
  7. The VM is started on ESX2 and all requests for the VM are now directed to ESX2.
  8. A final copy of the VM's memory is made from ESX1 to ESX2.
  9. The VM is unregistered from ESX1.
  10. The VM resumes operation on ESX2.
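The iterative pre-copy described in steps 5 through 8 can be sketched as a short simulation. This is purely illustrative; the function and data structures below are invented and do not reflect VMware's actual implementation.

```python
import random

def vmotion_precopy(memory, dirty_prob=0.05, max_rounds=10):
    """Simulate the iterative memory pre-copy used by VMotion (steps 5-8).

    memory maps page numbers to contents on the source host (ESX1).
    Returns the destination host's copy and the number of copy rounds.
    """
    dest = {}
    dirty = set(memory)                 # round 1 must send every page
    rounds = 0
    while dirty and rounds < max_rounds:
        rounds += 1
        to_send, dirty = dirty, set()
        for page in to_send:
            dest[page] = memory[page]   # transmit the page to ESX2
            if random.random() < dirty_prob:
                dirty.add(page)         # guest re-dirtied it; the bitmap records this
    # The VM is quiesced here, so the last dirty pages are copied while paused.
    for page in dirty:
        dest[page] = memory[page]
    return dest, rounds

mem = {page: "data-%d" % page for page in range(1024)}
dest_mem, rounds = vmotion_precopy(mem)
assert dest_mem == mem                  # destination ends with an exact copy
```

The quiesce-and-final-copy step is what keeps the pause imperceptible: each round shrinks the dirty set, so only a small remainder is copied while the VM is stopped.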

Configuring VMotion
VMotion requires shared storage for it to function (Fibre Channel [FC], iSCSI, or NFS), and also has some strict requirements to ensure compatibility of a VM moving from one ESX host to another, as outlined in the following list.

  • Both the source ESX host and the destination ESX host must be able to access the same shared storage on which the VM is located; the shared storage can be either FC, iSCSI, or NFS. VMotion will also work with Raw Device Mappings (RDMs) as long as they are configured to work in virtual compatibility mode.
  • ESX hosts must have a Gigabit Ethernet (or faster) network adapter configured on the VMkernel vSwitch used by VMotion; slower NICs will work, but they are not recommended. For best results, and because VMotion traffic is sent as clear text, it is best to have an isolated network for VMotion traffic.
  • ESX hosts must have processors that are able to execute each other's instructions. Processor clock speeds, cache sizes, and numbers of cores can differ among ESX hosts, but the hosts must have the same processor vendor class (Intel or AMD) and compatible feature sets. It is possible to override these restrictions for CPUs from the same vendor, but doing so can cause a VM to crash if it attempts to use a CPU feature or instruction that the new ESX host does not support.

Here are some additional requirements for VMotion to function properly.

  • vSwitch network labels (port groups) must match exactly (including case) on each ESX host.
  • A VM cannot be using CPU affinity, which pins a VM to run on a specific processor(s) on an ESX host.
  • A VM cannot be connected to an internal-only (no NICs assigned to it) vSwitch.
  • Using jumbo frames is recommended for best performance.
  • The source and destination hosts must be licensed for VMotion.
  • A VM cannot have its virtual CD-ROM and floppy drives mapped to either a host device or a local datastore ISO file.
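Taken together, the two lists above amount to a pre-flight check that is run before a migration is allowed. A rough sketch of such a check, using invented field names and a plain-dict data model, might look like this:

```python
def vmotion_preflight(vm, source, dest):
    """Return a list of reasons a VMotion would be refused (empty = OK).

    vm/source/dest are plain dicts; every field name here is invented.
    """
    errors = []
    if vm["datastore"] not in (source["datastores"] & dest["datastores"]):
        errors.append("VM is not on storage shared by both hosts")
    if source["cpu_vendor"] != dest["cpu_vendor"]:
        errors.append("hosts have different CPU vendor classes")
    if not vm["port_groups"] <= dest["port_groups"]:  # labels are case sensitive
        errors.append("destination lacks a matching network label")
    if vm.get("cpu_affinity"):
        errors.append("VM uses CPU affinity")
    if vm.get("local_media"):
        errors.append("CD-ROM/floppy is mapped to a host device or local ISO")
    return errors

source = {"datastores": {"SAN-LUN1"}, "cpu_vendor": "Intel", "port_groups": {"Production"}}
dest = {"datastores": {"SAN-LUN1"}, "cpu_vendor": "Intel", "port_groups": {"Production"}}
vm = {"datastore": "SAN-LUN1", "port_groups": {"Production"}}
assert vmotion_preflight(vm, source, dest) == []
assert vmotion_preflight(dict(vm, cpu_affinity=[0]), source, dest) == ["VM uses CPU affinity"]
```

Comparing port groups as sets mirrors the real behavior: the destination must carry every label the VM uses, matched exactly.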

Before configuring VMotion on your host servers, you should make sure they meet the requirements for using it. Configuring VMotion is fairly simple; you must first set up the VMkernel networking stack on a vSwitch which is used for VMotion by creating a port group on the vSwitch. You can do this by editing the vSwitch that you want to use for VMotion, clicking the Add button, and selecting VMkernel. You then configure the port group properties and set the IP address for the VMotion interface. You can verify the network connectivity of the VMotion interface by using the vmkping Service Console utility to ping the VMkernel interface of other hosts.

VMotion considerations
Configuring VMotion is easy, but there are requirements and compatibility issues that you need to be aware of. Here are some considerations that you should know about when using and configuring VMotion.

  • In versions prior to ESX 3.5, VMs whose swap file (.vswp file) was not located on shared storage could not be moved with VMotion, because the destination host could not access a .vswp file on the source host's local disk. Beginning with ESX 3.5, VMotion supports VMs with local .vswp files: the .vswp file is re-created on the destination host and its nonzero contents are copied over as part of the VMotion operation. This can make the operation take slightly longer than normal because of the added .vswp copy on top of the normal CPU and memory state copy. Using a local swap file datastore can still be advantageous, as it frees up valuable and expensive shared disk space for other things, such as snapshots and virtual disks.
  • If your VMs have their CD-ROM drives mapped to either a host device or an ISO file on a local datastore, they cannot be VMotioned, as the destination server will not have access to the drive. If the CD-ROM is mapped to a shared ISO datastore, make sure all ESX hosts can see that datastore. Consider using a shared ISO datastore on a VMFS volume or, alternatively, on an NFS or Samba share.
  • Using VMotion with VMs that have snapshots is supported, as long as the VM is being migrated to a new host without moving its configuration file or disks.
  • It's very important to ensure that vSwitch network labels are identical (they are case-sensitive) across all hosts. If they are not, you cannot VMotion a VM between two hosts that do not have the same network labels configured on their vSwitches.
  • CPU compatibility is one of the biggest headaches when dealing with VMotion because VMotion transfers the running architectural state of a VM between host systems. To ensure a successful migration, the processor of the destination host must be able to execute instructions equivalent to those of the source host. Processor speeds, cache sizes, and numbers of cores can vary between the source and destination hosts, but the processors must come from the same vendor (either Intel or AMD) and use compatible feature sets to be compatible with VMotion. When a VM is first powered on, it determines its available CPU feature set based on the host's CPU feature set. It is possible to mask some of the host's CPU features using a CPU compatibility mask in order to allow VMotion between hosts that have slightly dissimilar feature sets. See VMware Knowledge Base articles 1991, 1992, and 1993 for more information on how to set up these masks. Additionally, you can use the Enhanced VMotion Compatibility feature to help deal with CPU incompatibilities between hosts.
  • It is a recommended security practice to put your VMotion network traffic onto its own isolated network so that it is only accessible to the host servers. The reason for this is twofold. First, VMotion traffic is sent as clear text and is not encrypted, so isolating it ensures that sensitive data cannot be sniffed out on the network. Second, it ensures that VMotion traffic experiences minimal latency and is not affected by other network traffic as a VMotion operation is a time-sensitive operation.
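The CPU compatibility mask mentioned above can be pictured as a bitwise AND over CPUID feature flags: hiding a feature from the VM clears its bit. The bit positions below are invented for illustration only.

```python
# Each bit stands for one CPUID feature flag; these positions are invented.
SSE3, SSSE3, SSE41, NX = 1 << 0, 1 << 1, 1 << 2, 1 << 3

host_features = SSE3 | SSSE3 | SSE41 | NX
mask = ~SSE41                       # compatibility mask that hides SSE4.1

vm_features = host_features & mask  # what the VM is allowed to see
assert vm_features & SSE41 == 0     # SSE4.1 is hidden from the VM
assert vm_features & SSE3           # the remaining features still show through
```

This also shows why masking is risky: if the guest probes the CPU directly and uses a hidden instruction anyway, the mask cannot protect it.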

Enhanced VMotion Compatibility (EVC)
Enhanced VMotion Compatibility (EVC) is designed to further ensure compatibility between ESX hosts. EVC leverages Intel FlexMigration technology and AMD-V Extended Migration technology to present the feature set of a baseline processor. EVC ensures that all hosts in a cluster present the same CPU feature set to every VM, even if the actual CPUs on the hosts differ. This feature still will not allow you to migrate VMs from an Intel CPU host to an AMD CPU host; therefore, you should create clusters only with ESX hosts of the same processor family, or choose a processor vendor and stick with it. Before you enable EVC, make sure your hosts meet the following requirements.

  • All hosts in the cluster must have CPUs from the same vendor (either Intel or AMD).
  • All VMs in the cluster must be powered off or migrated out of the cluster when EVC is being enabled.
  • All hosts in the cluster must either have hardware live migration support (Intel FlexMigration or AMD-V Extended Migration), or have the CPU whose baseline feature set you intend to enable for the cluster. See VMware Knowledge Base article 1003212 for a list of supported processors.
  • Host servers must have the following enabled in their BIOS settings: For AMD systems, enable AMD-V and No Execute (NX); for Intel systems, enable Intel VT and Execute Disable (XD).
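Conceptually, the EVC baseline is the intersection of the CPU feature sets of all hosts in the cluster, so every VM sees only features that every host can supply. A minimal sketch (the feature names are illustrative, not real CPUID flags):

```python
from functools import reduce

def evc_baseline(host_feature_sets):
    """The EVC baseline is the intersection of every host's CPU feature set."""
    return reduce(lambda a, b: a & b, host_feature_sets)

hosts = [
    {"sse3", "ssse3", "sse4.1", "aes"},  # host with a newer CPU
    {"sse3", "ssse3"},                   # host with an older CPU
]
baseline = evc_baseline(hosts)
assert baseline == {"sse3", "ssse3"}     # every VM sees only the common features
```

This is also why adding an older host to an EVC cluster can lower the baseline for everyone: the intersection can only shrink.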

Once you are sure your hosts meet the requirements, you are ready to enable EVC by editing the cluster settings. There are two methods that you can use for doing this, as EVC cannot be enabled on existing clusters unless all VMs are shut down. The first method is to create a new cluster that is enabled for EVC, and then to move your ESX hosts into the cluster. The second method is to shut down all the VMs in your current cluster or migrate them out of the cluster to enable it.

The first method tends to be easier, as it does not require any VM downtime. If you choose it, simply create a new cluster and then move your hosts into it one by one: put each host into maintenance mode so that its VMs are migrated to other hosts, move the host to the new cluster, and then VMotion VMs from the hosts remaining in the old cluster onto it. The downside to this method is that you have to set up your HA and DRS settings again on the new cluster, which means you'll lose your cluster performance and migration history.

Storage VMotion
Storage VMotion (SVMotion) allows you to migrate a running VM's disk files from one datastore to another on the same ESX host. Whereas VMotion moves a VM from one ESX host to another while its storage location stays the same, SVMotion changes the storage location of a running VM while the VM stays on the same host. The VM's data files can be moved to any datastore accessible to the ESX host, which includes both local and shared storage.

How SVMotion works
The SVMotion process is as follows.

  1. A new VM directory is created on the target datastore, and the VM's configuration and data files are copied to the target directory.
  2. The ESX host does a "self" VMotion to the target directory.
  3. The Changed Block Tracking (CBT) feature keeps track of blocks that change during the copy process.
  4. VM disk files are copied to the target directory.
  5. Disk blocks that changed before the copy completed are copied to the target disk file.
  6. The source disk files and directory are deleted.
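Steps 3 through 5 amount to a two-pass copy: a bulk copy of every disk block, followed by a re-copy of only the blocks that CBT recorded as changed. A simplified simulation (all names are invented):

```python
def svmotion_copy(source_disk, guest_writes):
    """Simulate SVMotion's changed-block copy (steps 3-5); illustrative only.

    source_disk maps block numbers to data; guest_writes are the writes that
    land while the bulk copy is running, as (block, data) pairs.
    """
    target = dict(source_disk)        # pass 1: bulk copy of every block
    changed = set()
    for block, data in guest_writes:  # the guest keeps writing during pass 1...
        source_disk[block] = data
        changed.add(block)            # ...and CBT records each touched block
    for block in changed:             # pass 2: re-copy only the changed blocks
        target[block] = source_disk[block]
    return target

disk = {0: "a", 1: "b", 2: "c"}
result = svmotion_copy(disk, [(1, "b2")])
assert result == {0: "a", 1: "b2", 2: "c"}
```

Because pass 2 touches only the changed blocks, the VM keeps running through almost the entire copy.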

SVMotion does more than just copy disk files from one datastore to another; it can also convert thick disks to thin disks, and vice versa, as part of the copy process. SVMotion can also be used to shrink a thin disk after it has grown and data has been deleted from it. Typically when you perform an SVMotion, you are moving the VM location to another storage device; however, you can also leave the VM on its current storage device when performing a disk conversion. SVMotion can be an invaluable tool when performing storage maintenance, as VMs can be easily moved to other storage devices while they are running.
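The thick-to-thin conversion mentioned above boils down to not copying blocks that contain only zeros, since a thin disk allocates space only for blocks that hold data. A toy sketch of the idea:

```python
ZERO = b"\x00" * 512                     # one zeroed 512-byte block

def thick_to_thin(blocks):
    """Keep only nonzero blocks: a thin disk allocates space only for data."""
    return {i: blk for i, blk in enumerate(blocks) if blk != ZERO}

thick = [b"boot".ljust(512, b"\x00"), ZERO, ZERO, b"data".ljust(512, b"\x00")]
thin = thick_to_thin(thick)
assert sorted(thin) == [0, 3]            # only two of four blocks are allocated
```

The same picture explains shrinking a grown thin disk: once the guest has deleted data and the blocks are zeroed, a conversion pass simply stops carrying them.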

Configuring SVMotion
You should be aware of the following requirements for using SVMotion.

  • VM disks must be in persistent mode or be an RDM that is in virtual compatibility mode. For virtual compatibility mode RDMs, you can migrate the mapping file or convert them to thick-provisioned or thin-provisioned disks during migration, as long as the destination is not an NFS datastore. For physical compatibility mode RDMs, you can migrate the mapping file only.
  • The VM must have no snapshots. If it does, it cannot be migrated.
  • ESX/ESXi 3.5 hosts must be licensed and configured for VMotion. ESX/ESXi 4.0 and later hosts do not require VMotion configuration in order to perform migration with SVMotion. ESX/ESXi 4.0 hosts must be licensed for SVMotion (Enterprise and Enterprise Plus only).
  • The host that the VM is running on must have access to the source and target datastores and must have enough resources available to support two instances of the VM running at the same time.
  • A single host can be involved in up to two migrations with VMotion or SVMotion at one time.

In vSphere, SVMotion is no longer tied to VMotion; it is licensed separately and does not require that VMotion be configured in order to use it. No extra configuration is required for SVMotion, and it can be used right away as long as you meet the requirements outlined in the preceding list. In VI3, you needed to use a remote command-line utility to perform an SVMotion; in vSphere, this is integrated into the vSphere Client. To perform an SVMotion, you select a VM and choose the Migrate option; you can also still perform an SVMotion using the vSphere CLI. When the Migration Wizard loads, you have the following three options from which to choose.

  • Change Host -- This performs a VMotion.
  • Change Datastore -- This performs an SVMotion.
  • Change Host and Datastore -- This performs a cold migration for which the VM must be powered off.

Eric Siebert is a 25-year IT veteran whose primary focus is VMware virtualization and Windows server administration. He is one of the 300 vExperts named by VMware Inc. for 2009. He is the author of the book VI3 Implementation and Administration and a frequent TechTarget contributor. In addition, he maintains a VMware information site.

Printed with permission from Pearson Publishing. Copyright 2010. Maximum vSphere: Tips, How-Tos, and Best Practices for Working with VMware vSphere 4 by Eric Siebert.
