
How to generate new business from server virtualization projects

Learn how to generate new business from clients after they have already implemented a server virtualization project.

What do you do when you find out that one of your customers or prospects has already implemented a server virtualization project? While this is disappointing news, there are still plenty of opportunities for you to add value to the project. In fact, after the initial phase, many customers get stuck on how to extend the project. That is where you come in.

Resource utilization projects add value to server virtualization 

The goal of many server virtualization projects is to make better use of the compute resources available to the data center; most standalone servers sit nearly idle for the majority of the day. The problem is that after the first phase of server virtualization, total CPU utilization is often lower than it was before the project began. That's because most customers, as they start a virtualization project, purchase new multiprocessor servers to house the upcoming virtual machines. They soon find that virtualization is more efficient than they planned and that they virtualized only very basic workloads. The result is that they actually end up with more idle compute capacity than before.

It is at the start of phase 2 when a VAR armed with the right tools can bring the most value to an existing virtualization project. In this phase, the VAR must work with the customer to move workloads that are more demanding into the virtual infrastructure. These workloads also have a higher degree of risk associated with them, because they are likely to be more critical to the business. Careful planning is essential.

Create a capacity plan for server virtualization projects

The first step in phase 2 is to develop a capacity plan for the current virtualization infrastructure. This plan needs to assess the available compute capacity of the virtualization hosts as well as the compute capacity impact of the servers to be virtualized. In addition, the plan must ensure that there is enough capacity for any live migration jobs that may occur. You have to know in advance what hosts will be ideal targets in the event of a live migration. Furthermore, this planning needs to cover the disaster recovery site. If you're replicating critical workloads, the impact of activating the workloads needs to be planned for in advance.
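The arithmetic behind such a plan can be illustrated with a rough sketch. All host names and figures below are hypothetical; the point is simply that candidate workloads must fit within the hosts' capacity after reserving headroom for live-migration targets:

```python
# Hypothetical capacity-plan sketch: do the candidate workloads fit,
# with headroom held back so hosts remain viable live-migration targets?
HEADROOM = 0.25  # reserve 25% of each host for migration/failover (assumed figure)

hosts = [  # current virtualization hosts: (name, total CPU GHz, CPU GHz in use)
    ("host01", 24.0, 9.0),
    ("host02", 24.0, 7.5),
]
candidates = [  # standalone servers to virtualize: (name, peak CPU GHz observed)
    ("erp-db", 6.0),
    ("mail", 4.0),
]

usable = sum(total * (1 - HEADROOM) - used for _, total, used in hosts)
demand = sum(peak for _, peak in candidates)
print(f"usable capacity: {usable:.1f} GHz, new demand: {demand:.1f} GHz")
print("fits" if demand <= usable else "needs more hosts")
```

The same accounting would be repeated for memory and, for the disaster recovery site, with the replicated workloads assumed to be active.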

While this data could be assembled manually into a spreadsheet, the dynamic nature of the virtual environment lends itself well to tools that collect the data for you. Companies such as Tek-Tools Software and VKernel Corp. offer software that automatically captures the details of both virtualized and nonvirtualized environments and, in some cases, can simulate the impact of moving a standalone server into the virtual environment.

Bring workload migration into virtualization projects

The next step is the actual migration of these workloads. While some vendors' virtualization platforms have basic migration capabilities built in, tools from companies like Vizioncore Inc. allow for a smoother conversion to a virtual machine. Conversion tools replicate the physical server into the virtual environment, let you test the resulting virtual machine and then apply a final incremental update before decommissioning the physical server. Some tools even allow a virtual-to-physical conversion in case something goes wrong in the virtual environment and you need to move back to a physical server for support purposes.
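The replicate, test, then final-delta flow these conversion tools follow can be sketched at the block level. This is a simplified, hypothetical model (real tools track changed disk blocks), but it shows why the final incremental update matters:

```python
def block_delta(baseline, current):
    """Blocks that changed on the source since the baseline copy was taken."""
    return {addr: data for addr, data in current.items()
            if baseline.get(addr) != data}

# 1. Full replication while the physical server stays in production.
baseline = {0: b"boot", 1: b"app", 2: b"logs-v1"}
vm_disk = dict(baseline)

# 2. The VM is tested off the production network; meanwhile the
#    physical source keeps changing.
current = {0: b"boot", 1: b"app", 2: b"logs-v2", 3: b"new-data"}

# 3. Final incremental update just before decommissioning the source:
#    only the changed blocks are copied, keeping the cutover window short.
vm_disk.update(block_delta(baseline, current))
assert vm_disk == current  # the VM is now an exact copy; safe to cut over
```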

One of the concerns in virtualizing more critical workloads is ensuring that those workloads deliver a consistent level of performance to their users. Noncritical workloads sharing the same physical host can interfere with critical ones, and as the virtual machine count grows, storage and network I/O can become saturated.

Having the ability to isolate and prioritize workloads is critical. Today, companies such as Brocade Communications Systems, Neterion and SolarFlare Communications have that ability and allow you to give certain virtual workloads I/O priority. With I/O prioritization, you can virtualize even more mission-critical workloads by ensuring a certain level of I/O performance to those workloads. Your customers will be able to add to the virtual machine count on the host but won't have to worry about affecting the baseline performance of critical virtual machines.
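A common way to express this kind of prioritization is proportional shares: each virtual machine's slice of contended I/O is its share count divided by the total. The vendors named above each have their own mechanisms; the sketch below just illustrates the proportional-share idea with made-up numbers:

```python
def allocate_iops(total_iops, shares):
    """Split contended IOPS among VMs in proportion to their share counts."""
    total_shares = sum(shares.values())
    return {vm: total_iops * s / total_shares for vm, s in shares.items()}

# The mission-critical database keeps two-thirds of the available I/O
# under contention; adding more low-share VMs barely dents its slice.
alloc = allocate_iops(10_000, {"erp-db": 2000, "web": 500, "test": 500})
```

This is the property the article describes: the VM count on the host can grow without eroding the baseline I/O performance of the critical virtual machines.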

Add value with storage protocol options

Finally, you can add value to the storage infrastructure by helping your customers navigate through all of the storage protocol options available and by providing advice on storage capacity management. Virtual machines are often created with a default template. In that template is a default disk image size, and sometimes the capacity is not nearly enough, or, more often, it is far too much. The result is wasted disk capacity, and this can be significant when it repeats across 100 virtual machines.

Tools like those from Vizioncore and Tek-Tools allow for a direct mapping of capacity utilization from either a virtual cluster viewpoint or a more granular virtual machine viewpoint. Once this information is captured, the capacity used per virtual machine can be easily resized for more efficient use of space. Some tools even allow you to monitor the performance aspects of the virtual machine partitions for more balanced storage I/O performance throughout the storage array.
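The scale of the waste is easy to see with some back-of-the-envelope arithmetic. The figures below are hypothetical (a 100 GB template disk, 100 VMs each using 18 GB), as are the resizing rules:

```python
# Hypothetical: 100 VMs cloned from a 100 GB template, each using only 18 GB.
vms = [(f"vm{i:02d}", 100, 18) for i in range(100)]  # (name, provisioned GB, used GB)
GROWTH_BUFFER = 1.5   # assumed policy: keep 50% headroom over current usage
MIN_DISK_GB = 20      # assumed policy: never shrink below a sensible floor

provisioned = sum(p for _, p, _ in vms)
wasted = sum(p - u for _, p, u in vms)
rightsized = sum(max(u * GROWTH_BUFFER, MIN_DISK_GB) for _, _, u in vms)
reclaimed = provisioned - rightsized
print(f"wasted today: {wasted} GB, reclaimable after resize: {reclaimed:.0f} GB")
```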

From planning the next move to optimizing the storage that a virtual infrastructure relies upon, there are countless opportunities for a reseller to add value to a server virtualization project. In fact, after customers have completed the initial rollout, they actually need even more help. Resellers that become involved at this stage can help their customers see the full potential of their virtualization investment.

About the author
George Crump is president and founder of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. With 25 years of experience designing storage solutions for data centers across the United States, he has seen the birth of such technologies as RAID, NAS and SAN. Prior to founding Storage Switzerland, George was chief technology officer at one of the nation's largest storage integrators, where he was in charge of technology testing, integration and product selection.
