Solutions provider takeaway: A vSphere virtualized infrastructure design must take into account a customer's hardware and its limitations, as well as vSphere features, editions and memory.
If designing a physical server is similar to designing a house, designing a virtualized infrastructure is almost like designing a small city. There are lots of interrelated components, and you have to make many critical design decisions to ensure that all of the residents' needs are met properly.
If you don't properly account for water, gas and electric needs, for example, your houses won't have the resources they need for basic services and peak loads. Similarly, when designing a virtualized infrastructure for customers, solutions providers need to size the storage, network, CPU and memory resources correctly, or the virtual machines (VMs) will not have the resources they need to run applications.
Besides hardware resources, you have to make other decisions when designing a vSphere virtualized infrastructure, many of which will dictate your hardware requirements. The vSphere features you'll need are often tied to the type of server hardware you use. If you do not make the correct hardware decisions when designing your customer's virtual environment, you may find that you cannot use some of vSphere's features. Therefore, it's important to understand vSphere's requirements and limitations early on in your design phase.
Choosing the vSphere features you want is pretty straightforward. Certain features are only included in specific vSphere editions. For example, VMotion is only included in the Advanced, Enterprise and Enterprise Plus editions, and certain editions have restrictions on the size of the hardware you can use. Newer CPUs are starting to support more than six cores each, so if you plan on using servers with these CPUs, you must purchase either the Advanced or Enterprise Plus licenses, which support up to 12 cores per processor. All of the other vSphere editions only support six cores per processor. If you use a processor with more than six cores with those editions, you will have to disable some of the extra cores in the server BIOS to use it with vSphere.
When it comes to memory, all editions except the Enterprise Plus edition support 256 GB of memory in a host. If you need more memory support, then you will have to use Enterprise Plus, which supports unlimited memory. If you need to assign more than four vCPUs to a VM, you will also need Enterprise Plus, which supports up to eight vCPUs per VM. The vSphere features that your customers want to use, as well as the scalability they need, will dictate the edition your customers should choose.
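The edition limits above (cores per processor, host memory and vCPUs per VM, as described for vSphere 4-era licensing) can be captured in a simple pre-sales sanity check. This is an illustrative sketch with my own function and table names, not a VMware API:

```python
# Edition limits as described in the text (vSphere 4-era licensing).
# Table shape and names are this sketch's own, not a VMware API.
EDITION_LIMITS = {
    # edition: (max cores per processor, max host memory in GB, max vCPUs per VM)
    "Standard":        (6,  256,  4),
    "Advanced":        (12, 256,  4),
    "Enterprise":      (6,  256,  4),
    "Enterprise Plus": (12, None, 8),   # None = unlimited host memory
}

def check_host(edition, cores_per_cpu, host_memory_gb, max_vm_vcpus):
    """Return a list of constraint violations for a proposed host design."""
    max_cores, max_mem, max_vcpus = EDITION_LIMITS[edition]
    problems = []
    if cores_per_cpu > max_cores:
        problems.append(f"{edition} supports {max_cores} cores/CPU; host has "
                        f"{cores_per_cpu} (disable the extras in the BIOS)")
    if max_mem is not None and host_memory_gb > max_mem:
        problems.append(f"{edition} supports {max_mem} GB/host; "
                        f"host has {host_memory_gb} GB")
    if max_vm_vcpus > max_vcpus:
        problems.append(f"{edition} supports {max_vcpus} vCPUs/VM; "
                        f"design calls for {max_vm_vcpus}")
    return problems

# A 2-socket, 8-core-per-socket host with 384 GB RAM under the Advanced edition:
print(check_host("Advanced", 8, 384, 4))
```

Running the same design through "Enterprise Plus" returns an empty list, which is exactly the point of the article: the edition, not just the hardware, bounds the design.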
When designing a virtualized infrastructure, in addition to the hardware limitations for vSphere editions, solutions providers should also be aware of which vSphere features work with specific hardware. The list below outlines a few examples:
- Hardware iSCSI -- Hardware initiators help offload the CPU overhead of the iSCSI protocol from the server CPU by providing a TCP/IP Offload Engine (TOE) on the network adapter used to communicate with iSCSI targets. While this can help give the server an extra performance edge, there are very few hardware iSCSI initiators supported in vSphere. Most of the supported ones are based on QLogic adapters. Be sure to check the VMware Hardware Compatibility Guide before using a hardware iSCSI network adapter, because, if it's not listed, it's usually seen as a network adapter instead of a storage adapter, which will make the TOE feature unusable.
- Fault tolerance -- Fault tolerance (FT) is a great feature that provides continuous availability to VMs to avoid downtime caused by host failures. However, this feature requires very specific CPU families from Intel and AMD. Some server models let you choose from different CPU models, so be absolutely certain that the server you are ordering has a CPU supported by FT. There are additional requirements as well, such as a dedicated physical network interface card (NIC) for the FT logging traffic between hosts.
Fault tolerance has many other limitations that solutions providers should be aware of. For example, it only supports VMs with a single vCPU, and VM snapshots are not supported. For the full list of requirements and limitations, check out the vSphere Availability Guide. Make sure you're OK with the limitations imposed on the VMs protected by FT and also verify that your customer's hosts meet the CPU (and any additional) requirements.
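The two FT restrictions called out above (single vCPU only, no snapshots) lend themselves to a simple pre-flight check before you promise FT protection for a given VM. The dictionary shape below is an assumption for this sketch, not a vSphere object model:

```python
# Illustrative pre-flight check for the FT limitations mentioned above.
# The VM dict shape is this sketch's own convention, not a vSphere API.
def ft_blockers(vm):
    """Return the reasons a VM cannot be protected by Fault Tolerance."""
    reasons = []
    if vm["vcpus"] > 1:
        reasons.append("FT supports only single-vCPU VMs")
    if vm["snapshots"]:
        reasons.append("VM snapshots are not supported with FT")
    return reasons

print(ft_blockers({"vcpus": 2, "snapshots": ["pre-upgrade"]}))
```

A real check would also cover the CPU family and FT logging NIC requirements from the vSphere Availability Guide; this only models the per-VM limits named in the text.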
- VMDirectPath -- This feature allows a VM to directly access host adapters, bypassing the virtualization layer to achieve better throughput and lower CPU utilization. VMDirectPath is available for specific network and storage adapter models; currently, however, only network adapters are fully supported in vSphere. Storage adapters have only experimental support and are not ready for production use. VMDirectPath also requires chipset support for Intel VT-d or AMD IOMMU. Intel VT-d has been available for some time, but AMD was slow to release IOMMU, which is said to be included in the HP ProLiant G7 family of servers. Solutions providers should check with the server vendor to make sure the hardware supports VMDirectPath. As an alternative, you can use a new vSphere feature -- the paravirtualized adapter -- that gives VMs more direct access to host storage and network adapters. With paravirtualization, however, you cannot dedicate an adapter to a VM; the adapter is shared by all the VMs on the host.
- VMotion -- When dealing with VMotion, CPU compatibility is one of the biggest headaches, because VMotion transfers the running architectural state of a VM between host systems. To ensure a successful migration, the processor of the destination host must be able to execute the equivalent instructions of those from the source host. Processor speeds, cache sizes and the number of cores can vary between the source and destination hosts, but the processors must come from the same vendor (either Intel or AMD) and use compatible feature sets to be compatible with VMotion.
When a VM is first powered on, it determines its available CPU feature set based on the host's CPU feature set. It is possible to mask some of the host's CPU features using a CPU compatibility mask, which allows VMotions between hosts that have slightly dissimilar feature sets. See VMware knowledge base article numbers 1991, 1992 and 1993 for more information on how to set up these masks. Additionally, you can use the Enhanced VMotion Compatibility feature to help deal with CPU incompatibilities between hosts.
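The logic described above -- same vendor, and the destination must offer every feature the VM saw at power-on, with a mask able to hide features from the VM -- can be modeled as set operations. The feature names and mask mechanics here are deliberately simplified illustrations, not how ESX represents CPUID bits internally:

```python
# Toy model of the VMotion CPU compatibility checks described above.
# Feature names and mask semantics are simplified for illustration.
def vm_feature_set(host_features, mask=frozenset()):
    """Features the VM sees at power-on: the host's features minus masked ones."""
    return frozenset(host_features) - frozenset(mask)

def can_vmotion(vendor_src, vendor_dst, vm_features, dst_features):
    """Same CPU vendor, and the destination covers the VM's feature set."""
    return vendor_src == vendor_dst and vm_features <= frozenset(dst_features)

src = {"sse3", "ssse3", "sse4_1"}
vm = vm_feature_set(src, mask={"sse4_1"})      # mask hides SSE4.1 from the VM
# True: the masked bit is no longer required on the destination.
print(can_vmotion("Intel", "Intel", vm, {"sse3", "ssse3"}))
```

This is why masking helps: it shrinks the set the destination must satisfy, at the cost of hiding that feature from the guest.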
- Enhanced VMotion Compatibility -- Enhanced VMotion Compatibility (EVC) is designed to further ensure compatibility between ESX hosts. EVC uses Intel's FlexMigration technology and AMD's AMD-V Extended Migration technology to present the same feature set as the baseline processors. EVC ensures that all hosts in a cluster present the same CPU feature set to every virtual machine, even if the actual CPUs differ on the host servers. This feature still will not allow you to migrate VMs from an Intel CPU host to an AMD host. Solutions providers should become familiar with VMware's list of EVC supported processors.
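One way to picture what EVC does is as the common subset of features across a cluster's hosts. In reality EVC applies fixed, per-generation baselines rather than a free-form intersection, so treat this as a conceptual sketch only:

```python
# Conceptual model of an EVC baseline: the feature set every host in the
# cluster can present. Real EVC uses fixed per-generation baselines.
from functools import reduce

def evc_baseline(host_feature_sets):
    """Intersect the hosts' CPU feature sets to get a common baseline."""
    return reduce(lambda a, b: a & b, map(frozenset, host_feature_sets))

hosts = [
    {"sse3", "ssse3", "sse4_1"},
    {"sse3", "ssse3"},                      # the oldest host sets the floor
    {"sse3", "ssse3", "sse4_1", "sse4_2"},
]
print(sorted(evc_baseline(hosts)))  # ['sse3', 'ssse3']
```

The takeaway matches the text: one older host drags the whole cluster's presented feature set down to its level, which is why the supported-processor list matters when buying hardware.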
- Dynamic Voltage and Frequency Scaling (DVFS) -- DVFS is a new CPU power management feature that relies on the Enhanced Intel SpeedStep and AMD PowerNow CPU technologies. They allow a server to conserve power by dynamically switching CPU frequencies and voltages based on workload demands. In turn, processors draw less power and create less heat, which also allows the fans to spin slower. The capability is possible because of an interface that allows a system to change the performance state (P-state) of a CPU. To use DVFS, your customer's server must be able to support changing P-states. DVFS is typically enabled in a server's BIOS and is usually referred to as a Power Regulator. Most newer, big brand servers -- i.e., Hewlett-Packard (HP), IBM and Dell -- should support technology like SpeedStep and PowerNow, but you should confirm that with the server vendor.
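The savings behind DVFS can be illustrated with the standard approximation that dynamic CPU power scales with capacitance times voltage squared times frequency, which is why dropping voltage and frequency together (a lower P-state) saves far more than frequency alone. The numbers below are illustrative, not vendor specifications:

```python
# Dynamic power roughly follows P ~ C * V^2 * f, so a lower P-state
# (reduced voltage AND frequency) cuts power super-linearly.
# Baseline voltage/frequency values here are illustrative only.
def relative_power(voltage, freq, base_voltage=1.2, base_freq=3.0):
    """Power at (voltage, freq) relative to the full-speed P-state."""
    return (voltage / base_voltage) ** 2 * (freq / base_freq)

# Dropping from 1.2 V / 3.0 GHz to 0.9 V / 2.0 GHz:
print(round(relative_power(0.9, 2.0), 3))  # 0.375 of full power
```

A one-third frequency cut paired with a modest voltage cut leaves the chip drawing roughly a third of its full-speed dynamic power, which is where the heat and fan-speed benefits in the text come from.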
- Distributed Power Management (DPM) -- With DPM, VM workloads are redistributed so that host servers can shut down during periods of inactivity, which saves power. DPM relies on several technologies to power hosts back on after they are powered off, and these include: Wake on LAN (WOL), Intelligent Platform Management Interface (IPMI) and HP's Integrated Lights-Out (iLO). WOL is a feature included in some network adapters, IPMI is an industry standard for server hardware interfaces for out-of-band management, and iLO is HP's proprietary management controller. IPMI and iLO are the preferred methods to use with DPM, as they tend to be more reliable than WOL. To use DPM, you need NICs that support WOL, a server that supports IPMI or an HP server with iLO technology. Most newer name-brand servers (i.e., Dell and IBM) use IPMI, but it's best to verify this if you plan on using DPM. All HP servers come with iLO, and your customers only need the basic iLO functionality that comes with the server; they do not have to purchase the iLO Advanced license.
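The WOL option above works by sending a "magic packet": six bytes of 0xFF followed by the target NIC's MAC address repeated 16 times, typically broadcast over UDP. Building one takes only the standard library; the port and broadcast address defaults below are common conventions, not vSphere settings:

```python
# Build and (optionally) send a Wake on LAN magic packet:
# 6 bytes of 0xFF, then the NIC's 6-byte MAC repeated 16 times.
import socket

def magic_packet(mac):
    """Build the 102-byte WOL payload for a MAC like 'aa:bb:cc:dd:ee:ff'."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac, broadcast="255.255.255.255", port=9):
    """Broadcast the magic packet over UDP (port 9 is a common choice)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

print(len(magic_packet("aa:bb:cc:dd:ee:ff")))  # 102
```

The packet's simplicity is also its weakness: it is fire-and-forget with no acknowledgment, which is why the article recommends IPMI or iLO over WOL where available.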
Some vSphere features can improve performance, save money and improve availability, but solutions providers must plan a customer's hardware and virtualized infrastructure properly to be able to take advantage of them. When purchasing servers for customers, always do your homework and be sure to consult VMware's Hardware Compatibility Guide. If you fail to do research before servers are purchased for virtual hosts, it can end up costing you and your customers time and money when designing their virtualized infrastructure.
About the expert
Eric Siebert is a 25-year IT veteran whose primary focus is VMware virtualization and Windows server administration. He is one of the 300 vExperts named by VMware for 2009. He is the author of the book VI3 Implementation and Administration and a frequent TechTarget contributor. In addition, he maintains vSphere-land.com, a VMware information site.