
How to address data storage management problems caused by virtual desktop software

Find out why virtual desktop software systems cause data storage management problems -- including capacity surprises and boot storms -- and learn techniques to resolve them.

Your customers are beginning to look at virtual desktop software -- also referred to as desktop virtualization or virtual desktop infrastructure (VDI) -- as a way to help manage the challenges of user desktops and laptops. Desktop virtualization typically fills one of two roles: driving down endpoint hardware costs for organizations with a large number of shift workers, or lowering endpoint management costs for companies whose users are equipped with their own laptops and desktops. In either case, the load that desktop virtualization puts on data storage management -- which manifests in capacity problems and boot storms -- is unprecedented, even when compared with server virtualization.

While it is true that most traditional desktops have far less capacity than physical servers, there are generally far more desktops than servers, so when those desktops are virtualized, there's still a significant capacity impact on the data center. And while most virtual desktop software products have the capability to negate initial capacity consumption through the use of techniques such as golden masters, in which one core copy of the desktop environment feeds many users, the net changes that users make to those systems are stored separately. Clearly, golden masters help rein in capacity demands, but hundreds or thousands of users adding data to their virtual desktops (as well as making changes to their preferences) will of course increase the overall data storage management burden.
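The capacity math behind golden masters is easy to sketch. The figures below are illustrative assumptions, not product-specific numbers, but they show why linked clones help and why per-user deltas still add up:

```python
# Rough capacity model for a golden-master (linked-clone) VDI deployment.
# All sizes are illustrative assumptions.

def vdi_capacity_gb(master_image_gb, num_users, avg_delta_gb):
    """Total storage: one shared master image plus a per-user
    differencing area holding each user's changes."""
    return master_image_gb + num_users * avg_delta_gb

def full_clone_capacity_gb(master_image_gb, num_users):
    """Full clones store a complete copy of the image per user."""
    return master_image_gb * num_users

shared = vdi_capacity_gb(40, 1000, 5)    # 40 GB master, 1,000 users, 5 GB deltas
full = full_clone_capacity_gb(40, 1000)

print(f"Linked clones: {shared:,} GB")   # 5,040 GB
print(f"Full clones:   {full:,} GB")     # 40,000 GB
```

Even with a shared master, the per-user delta term dominates as the user count grows, which is why capacity reduction remains a concern.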

To help customers address their capacity issues, you should look for supporting storage systems that can provide deduplication and/or compression.

As you'd expect, deduplication's role with desktop virtualization is to identify and eliminate redundancies that are introduced after imaging. The classic example is the same document being emailed to many users and then stored on each desktop or laptop. In the past, these documents all lived on standalone, individual systems, so no capacity gains could be made. With a virtual desktop software infrastructure, those documents are saved on the same storage system, and deduplication can eliminate the redundancy. Another, less obvious gain from deduplication relates to operating system updates. While updates can be applied to the golden master, not all of them are; some are applied to the resulting clones individually. As a result, duplication can work its way into the system files, and deduplication can help address this problem.
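The core mechanism most dedup engines use can be sketched in a few lines: split data into fixed-size blocks, hash each block, and store each unique block only once. This is a toy illustration of the principle, not any vendor's implementation:

```python
import hashlib

def dedupe_blocks(data: bytes, block_size: int = 4096):
    """Fixed-block deduplication sketch: store one copy per unique
    block, keyed by its hash; keep a per-file recipe of hashes so
    the original data can be reassembled."""
    store = {}    # hash -> block contents (stored once)
    recipe = []   # ordered hashes needed to rebuild the data
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)
        recipe.append(digest)
    return store, recipe

# Two "desktops" saving the same emailed document: the stored
# blocks don't double, only the recipes do.
doc = b"quarterly report" * 1024
store, recipe = dedupe_blocks(doc + doc)
print(len(recipe), "blocks referenced,", len(store), "stored")
```

Real systems add variable-length chunking and collision handling, but the capacity effect is the same: identical content written by many users is stored once.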

The role of compression with VDI, on the other hand, is to help your customers further reduce the data footprint by compressing the entire data set. Together, deduplication and compression can significantly shrink the capacity footprint of the virtual desktop software environment -- in some cases by as much as 95%.
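The two reductions multiply rather than add, because compression operates on whatever data survives deduplication. A quick calculation (with assumed ratios) shows how a figure like 95% is reached:

```python
def combined_reduction(dedupe_ratio, compression_ratio):
    """Reductions compound: compression applies to the data
    remaining after deduplication."""
    remaining = (1 - dedupe_ratio) * (1 - compression_ratio)
    return 1 - remaining

# e.g. 90% dedup savings followed by 50% compression of the remainder
print(f"{combined_reduction(0.90, 0.50):.0%}")  # 95%
```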

The second data storage management issue stemming from desktop virtualization is the boot storm, or login storm, which can occur at the beginning of the workday or during a shift change, when hundreds of users log in to the system at about the same time. The storage system becomes overwhelmed by the number of requests, and your customer's users can experience incredibly long boot times. Quickly, they start screaming for their old desktops back.

To resolve the boot storm issue, the first step is to reduce overall capacity requirements as discussed above. With the capacity reduced, more of the boot images can be loaded into the storage system's cache, and the storage I/O generated by logins can be served from the faster RAM that makes up that cache. In a virtual desktop software environment, it is advisable to add as much memory to the storage system's cache as possible.
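A rough sizing model shows why capacity reduction makes caching feasible. With shared masters and deduplication, the boot-time working set can shrink to something a large controller cache can actually hold (the numbers below are assumptions for illustration):

```python
def boot_working_set_gb(master_image_gb, num_masters, dedupe_ratio):
    """Boot-time working set: the shared master images that every
    login reads, further shrunk by deduplication across them."""
    return master_image_gb * num_masters * (1 - dedupe_ratio)

# Four desktop pools, each with a 40 GB master, 50% dedup across them
ws = boot_working_set_gb(40, 4, 0.5)
print(f"{ws} GB to cache")  # 80.0 GB to cache
```

Without golden masters, the working set would instead scale with the number of users, which no realistic cache can absorb.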

But even if your customer can afford to fully populate the storage system's cache, the system may not support enough cache to fix the problem. In that case, traditional storage media will need to serve up the initial image loads. The faster the media, the better; solid-state drives (SSDs) are ideal. The problem is cost-justifying the most expensive storage available for a once-a-day event.

The best approach here is to recommend a solution that can automatically tier images to and from a faster SSD platform during the boot process and then move those images back to slower mechanical drives after the boot-up is complete. After the initial boot storm has passed, automated tiered storage could move performance-starved business application data to the faster tier for the rest of the day until the next morning's login.
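The policy described above is essentially schedule-driven tiering. This hypothetical sketch (the time window and volume labels are assumptions, not any vendor's feature) shows the decision logic: boot images live on SSD only during the login window, and business application data takes over the fast tier for the rest of the day:

```python
from datetime import time

# Assumed morning login window for a single-shift organization.
BOOT_WINDOW = (time(7, 30), time(9, 30))

def tier_for(volume_kind: str, now: time) -> str:
    """Pick a storage tier for a volume based on the time of day:
    promote boot images to SSD during the login window, then hand
    the SSD tier over to hot application data."""
    in_boot_window = BOOT_WINDOW[0] <= now <= BOOT_WINDOW[1]
    if volume_kind == "boot_image":
        return "ssd" if in_boot_window else "hdd"
    if volume_kind == "app_data":
        return "hdd" if in_boot_window else "ssd"
    return "hdd"

print(tier_for("boot_image", time(8, 0)))   # ssd
print(tier_for("app_data", time(14, 0)))    # ssd
```

Production auto-tiering systems make this decision from observed access heat rather than a fixed clock, but the economics are the same: the expensive tier is shared across workloads instead of dedicated to one daily event.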

If your customer can't cost-justify an SSD tier, another, slower option is to use a small group of 15,000 rpm drives (Fibre Channel or SAS, depending on the storage system) to auto-tier in conjunction with the normal primary storage tier, which typically consists of 10,000 rpm drives.

In our next article, we'll discuss the data protection challenges that virtual desktops bring.

About the author

George Crump is president of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. With 25 years of experience designing storage solutions for data centers across the United States, he has seen the birth of such technologies as RAID, NAS and SAN. Prior to founding Storage Switzerland, George was chief technology officer at one of the nation's largest storage integrators, where he was in charge of technology testing, integration and product selection.
