
Managing storage for virtual servers: Customer best practices

Find out how to help customers develop best practices and policies for managing storage for virtual servers, to prevent capacity shortages and storage performance problems.

While server virtualization provides your customers with increased flexibility to respond to business needs, it also introduces some problems, especially in the storage arena. Those problems can reduce the scope of the project or delay future ones. By assisting customers to develop best practices for managing storage for virtual servers, you can help keep server virtualization projects from being reined in too quickly and keep future projects in the pipeline.

So, what storage problems are customers likely to confront in a server virtualization project?

There are several; foremost on the list is the I/O monster created when dozens of servers with modest I/O demands are consolidated onto a single physical server with limited capacity to process all of that I/O.

Second, most server virtualization projects require shared storage to enable key features such as VM migration, but shared storage for virtual servers introduces problems around capacity management, I/O bandwidth management and storage mechanics performance. That's because shared storage brings a layer of complexity in itself, and the servers accessing the shared storage add to the capacity and I/O demand burden. Virtual servers also access shared storage differently than physical servers did. Now we want multiple servers to be able to get to the exact same data -- the VM image -- in some cases at the same time. Legacy shared storage was designed to do just the opposite: to make sure that no two servers could access each other's data.

Third, server virtualization projects are often initially very successful. That success exacerbates the above problems by causing VM sprawl, which leads to more capacity problems. Why? Virtual machines are created almost for free in a virtualized infrastructure, since most IT organizations have a site license in place and almost always have extra CPU capacity to handle virtual machine growth. The weak link, again, is the storage infrastructure: Virtual machines create I/O and capacity overhead even when they're idle.

The knee-jerk reaction to these problems is to sell the customer a quick-fix storage solution like solid-state disk -- and that may very well be what's required. Before you do that, though, take a step back and provide your clients with a storage best practices assessment. Doing so will ultimately save your customers money and maintain your status as a trusted advisor.

Managing storage capacity

Most server virtualization projects start out with the right amount of capacity, at least to meet initial demands. The problem comes when the project becomes successful, and sprawl starts. Your customers need ways to manage their capacity and make sure that all data growth is necessary. The first step is to develop a tagging method on virtual machines (within the configuration of the VM or via a third-party tool) to know when they were put into service and when they're supposed to be removed from service. Then on a weekly basis, the customer can look ahead to determine whether the virtual machines that are due for retirement are still needed. If not, those VMs could be decommissioned and archived. It's common in some environments to find that 20% to 30% of virtual machines are inactive, representing a significant amount of capacity that can be recovered.
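To give a feel for how that weekly review might work, here is a minimal sketch in Python. It assumes the in-service and retirement dates have already been recorded as tags on each VM; the field names and the inventory records are hypothetical, and a real deployment would pull them from the hypervisor's API or a third-party tagging tool.

```python
from datetime import date

# Hypothetical inventory records; in practice these would come from the
# hypervisor API or a third-party tagging tool.
vm_inventory = [
    {"name": "web-01",  "in_service": date(2010, 1, 15), "retire_by": date(2010, 9, 1)},
    {"name": "test-07", "in_service": date(2010, 3, 2),  "retire_by": date(2010, 6, 30)},
    {"name": "db-02",   "in_service": date(2009, 11, 9), "retire_by": date(2011, 1, 1)},
]

def vms_due_for_retirement(inventory, as_of=None):
    """Return the VMs whose scheduled retirement date has passed."""
    as_of = as_of or date.today()
    return [vm for vm in inventory if vm["retire_by"] <= as_of]

# Run weekly: flag candidates for decommissioning and archiving.
for vm in vms_due_for_retirement(vm_inventory):
    print(f"{vm['name']} passed its retirement date ({vm['retire_by']}); "
          "confirm with the owner, then decommission and archive.")
```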

You should also ensure that customers have a way to verify that the virtual machines in production are actually being used -- in other words, that the request for resources and the expiration date were legitimate. There are software tools that can visually show the resource usage of a virtual machine over time. You should create a policy, enforced manually or automatically by such tools, to archive virtual machines when they're not being used.
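A minimal sketch of such a policy check follows, assuming a monitoring tool already exports average per-VM usage figures. The metric names, the sample data and the 30-day idle threshold are all illustrative, not prescriptive.

```python
# Illustrative usage records; a real monitoring tool would supply these.
usage_stats = {
    "web-01":  {"avg_cpu_pct": 22.0, "avg_iops": 140, "days_idle": 0},
    "test-07": {"avg_cpu_pct": 0.3,  "avg_iops": 1,   "days_idle": 45},
}

IDLE_DAYS_THRESHOLD = 30   # policy knob: how long a VM may sit unused
IDLE_CPU_PCT = 1.0         # below this average CPU, treat the VM as idle

def archive_candidates(stats):
    """Return the VMs idle long enough that the policy says archive them."""
    return [name for name, s in stats.items()
            if s["days_idle"] >= IDLE_DAYS_THRESHOLD
            and s["avg_cpu_pct"] < IDLE_CPU_PCT]

print(archive_candidates(usage_stats))  # -> ['test-07']
```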

Managing storage performance

Many customers haven't had to deal with performance issues until server virtualization and its accompanying consolidation of servers. As is the case with storage capacity, most environments provide adequate performance at the start of the virtualization project, but as it becomes successful, performance starts to suffer unless measures to prevent that are put in place. Unlike capacity, though, performance is a more variable metric; the best practice needs to be more dynamic. And it's not just physical storage performance that needs to be monitored; storage network performance and demands of the connecting hosts also need to be examined.

To address these issues, you should help customers design policies that detail how many VM image files can reside on the same LUN within an array, how many virtual machines will share a storage interface card to get to that storage and how many virtual machines will be assigned on a per-host basis. The numbers associated with these parameters will vary depending on the capabilities of the storage system, the infrastructure and the physical host.
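One way to make those limits concrete is to encode them as data and validate the current layout against them. The sketch below does exactly that; the limits, the placement inventory and all names are made up purely to show the shape of such a policy.

```python
from collections import Counter

# Policy limits; the right numbers depend on the array, fabric and hosts.
policy = {"max_vms_per_lun": 12, "max_vms_per_hba": 20, "max_vms_per_host": 25}

# Hypothetical placement inventory: vm -> (lun, storage interface, host).
placements = {
    "web-01": ("lun-a", "hba-1", "esx-01"),
    "web-02": ("lun-a", "hba-1", "esx-01"),
    "db-02":  ("lun-b", "hba-2", "esx-02"),
}

def policy_violations(placements, policy):
    """Count VMs per LUN, per interface and per host, and flag overages."""
    luns, hbas, hosts = Counter(), Counter(), Counter()
    for lun, hba, host in placements.values():
        luns[lun] += 1
        hbas[hba] += 1
        hosts[host] += 1
    issues = []
    issues += [f"LUN {l} holds {n} VM images (limit {policy['max_vms_per_lun']})"
               for l, n in luns.items() if n > policy["max_vms_per_lun"]]
    issues += [f"Interface {h} carries {n} VMs (limit {policy['max_vms_per_hba']})"
               for h, n in hbas.items() if n > policy["max_vms_per_hba"]]
    issues += [f"Host {h} runs {n} VMs (limit {policy['max_vms_per_host']})"
               for h, n in hosts.items() if n > policy["max_vms_per_host"]]
    return issues

for issue in policy_violations(placements, policy):
    print(issue)
```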

To enforce these policies, the customer needs a tool that monitors storage performance end to end -- from the virtual machine down through to the storage system and back up again. It should provide historical data as well as immediate alerts of critical problems.
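To give a feel for what "historical data plus immediate alerts" means in practice, here is a toy alerting loop over per-VM latency samples. The metric, the thresholds and the data source are assumptions for illustration, not a reference to any particular product.

```python
import statistics

# Hypothetical end-to-end latency samples (ms), VM through to array.
latency_history = {
    "web-01": [4.1, 4.3, 4.0, 4.2, 18.7],   # last sample spikes
    "db-02":  [6.0, 6.2, 5.9, 6.1, 6.0],
}

ALERT_MS = 15.0  # immediate-alert threshold; tune per environment

for vm, samples in latency_history.items():
    baseline = statistics.mean(samples[:-1])   # the historical trend
    latest = samples[-1]
    if latest > ALERT_MS:
        print(f"ALERT: {vm} latency {latest} ms (baseline {baseline:.1f} ms)")
```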

With the policies in place, problems with capacity and performance are less likely to manifest as emergencies and more likely to arrive as foreseen events. The policies won't solve all the problems related to storage for virtual servers, though; customers will still need archive systems or space optimization solutions when capacity demands get out of control, and solid-state disk or increased network bandwidth when performance walls are reached. But with policies in place, you'll be better able to predict how effective those solutions will be upon installation.

About the author

George Crump is president of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. With 25 years of experience designing storage solutions for data centers across the United States, he has seen the birth of such technologies as RAID, NAS and SAN. Prior to founding Storage Switzerland, George was chief technology officer at one of the nation's largest storage integrators, where he was in charge of technology testing, integration and product selection. Find Storage Switzerland's disclosure statement here.

This was last published in October 2010
