I have spent a lot of time looking at the best way to optimize server resources in a virtual environment -- elements such as CPUs, memory and network bandwidth. In a recent research study on "Best Practices in Virtual Systems Management (VSM)," I looked specifically at how these resources are optimized in virtual infrastructures. The study surveyed more than 150 organizations on how server resources and management tools are being used. Solutions providers should use this information to ensure that they're providing the best management services for their customers' virtual infrastructures.
The most impressive gains in server resource optimization can be found in network bandwidth utilization, measured by the average utilization on a physical network interface card (NIC) or controller. The research study showed that the average enterprise was running around 35% NIC utilization, whereas the best practice is to achieve around 80%. Eighty percent is more than double the norm, yet it still maintains some headroom for peak loads. Raising NIC utilization from 35% toward 80% means more than twice as many I/O-bound workloads can run on existing servers. This usage rate also allows for the virtualization of I/O-intensive workloads, such as databases, transaction servers or email servers.
Almost as impressive as the I/O benefits is the difference in average physical CPU utilization. In the average organization, we found the rate was around 45%. However, the best practice for average CPU utilization is between 70% and 90%, which signals an extremely efficient use of available resources. Following best practices for virtualization and management allows servers to handle more processor-intensive workloads and also reduces CPU wait times, especially for applications that take advantage of multicore or multiprocessor systems.
Finally, physical memory utilization is another metric that shows a substantial improvement when resources are optimized. The top organizations for this metric were able to achieve a memory utilization of 80% or higher, with most of the top performers able to maintain some headroom for peak load requirements. In contrast, the below-average performers in this category could not push average use above 60%, and in many cases ran even lower. While not as significant as CPU or NIC utilization improvements, even these incremental memory benefits could mean the difference between stacking another workload on an existing server or facing the pain of getting authorization to buy, rack, stack, cable and fire up another physical server.
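The arithmetic behind these three metrics is the same back-of-the-envelope calculation: the ratio of the best-practice utilization target to today's average gives a rough ceiling on how many more workloads a host could carry. A minimal sketch, using the survey figures cited above (the multiplier is an illustrative estimate, not a sizing formula):

```python
# Rough consolidation-headroom estimate from the survey figures above.
# All values are average utilization expressed as fractions.

def consolidation_multiplier(current_util: float, target_util: float) -> float:
    """How many times today's workload could fit if average utilization
    were raised from current_util to target_util."""
    if not 0 < current_util <= 1 or not 0 < target_util <= 1:
        raise ValueError("utilization must be a fraction in (0, 1]")
    return target_util / current_util

# Survey averages vs. best-practice targets from the study:
metrics = {
    "NIC":    (0.35, 0.80),  # average 35% vs. ~80% best practice
    "CPU":    (0.45, 0.70),  # average 45% vs. 70-90% (low end) best practice
    "memory": (0.60, 0.80),  # below-average <=60% vs. 80%+ top performers
}

for name, (current, target) in metrics.items():
    print(f"{name}: ~{consolidation_multiplier(current, target):.2f}x headroom")
```

For NIC utilization this yields roughly 2.3x, which is why 80% is "more than double the norm" while still leaving peak-load headroom.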
Comparing the virtual systems management software used by organizations with the best overall results against the software used by below-average performers reveals some specific tools that solutions providers should look at to make the most of their customers' existing resources. Solutions providers should be sure to look at management tools for:
- Change and configuration management -- Tools like Tripwire Enterprise or EMC Ionix Server Configuration Manager can help ensure that changes are properly applied to a known set of resources and configurations. Making all changes correctly reduces accidental overloads and the downtime that follows.
- Event management/console automation -- Management tools such as HP Operations Manager or IBM Tivoli Enterprise Console provide real-time visibility into aggregated or segmented customer environments. These tools immediately detect and diagnose the negative impacts of resource exhaustion and can even trigger workload balancing as an automated remediation.
- Automated backup and recovery -- Tools such as Vizioncore vRanger Pro or Veeam Backup and Replication perform critical backups of individual VMs while accommodating each VM's unique configuration. They can also minimize physical resource utilization by offloading backup processing to remote servers and even to storage devices.
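The event management category above follows a common pattern: sample a resource metric, compare it against a threshold, and trigger remediation (an alert, a ticket, or automated workload balancing) on a breach. A minimal, tool-agnostic sketch of that pattern -- the metric names and remediation hook here are hypothetical placeholders, not any vendor's API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Threshold:
    metric: str                            # e.g. "nic_util", "cpu_util"
    limit: float                           # utilization fraction that triggers action
    action: Callable[[str, float], None]   # remediation hook (alert, rebalance, ...)

def evaluate(samples: dict, thresholds: list) -> list:
    """Return the metrics that breached their limits, invoking each
    breached threshold's remediation action."""
    breached = []
    for t in thresholds:
        value = samples.get(t.metric)
        if value is not None and value > t.limit:
            t.action(t.metric, value)
            breached.append(t.metric)
    return breached

# Example: alert when NIC utilization exceeds the ~80% best-practice ceiling.
alerts = []
rules = [Threshold("nic_util", 0.80, lambda m, v: alerts.append((m, v)))]
evaluate({"nic_util": 0.91, "cpu_util": 0.55}, rules)
```

Real consoles such as HP Operations Manager or IBM Tivoli Enterprise Console implement far richer versions of this loop, but the structure -- metric, limit, automated action -- is the part worth evaluating when choosing a tool.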
For solutions providers, many of these tools can be used across a variety of customers, as they can be run ad hoc on individual servers or even off-site.
Certainly, for solutions providers whose customers' entire IT infrastructure is hosted centrally and accessed remotely, these solutions are no-brainers. These tools can be installed and activated locally and can deliver significant server optimization results.
The tools can be just as useful for systems integrators that support many customer sites. Local agents reporting to centralized consoles over secured connections give solutions providers full remote management of their customers' virtual server environments.
All of these virtual systems management tools have unique benefits in optimizing resource use for solutions providers, value-added resellers and systems integrators. Customers rely on you, their trusted adviser, for the technical expertise to drive down the cost of computing and improve core capabilities for their businesses. By using best practices for server resource management, you can help ensure that your customers receive faster response times, quicker server deployment and provisioning, more rapid recovery, faster system and application upgrades and more.
About the expert
Andi Mann is vice president of research with the IT analyst firm Enterprise Management Associates (EMA). Mann has more than 20 years of IT experience in both technical and management roles, working with enterprise systems and software on mainframes, midrange, servers and desktops. Mann leads the EMA Systems Management research practice and has a personal focus on data center automation and virtualization. For more information, visit EMA's website.