Tip

VDI sizing and disk array controller requirements

Bill Kleyman and Steve Gilmore, Contributors

For customers considering a virtual desktop infrastructure (VDI), one of the key decision points is finding the right disk array controller. It’s a decision that needs to be carefully considered, since a bad choice of controller can be a costly mistake to fix. The controller has to have the right features, specifications and number of disks, both for present and future needs. You can’t simply advise your customers to purchase a storage controller with a large set of disks; you must also plan for future growth.

So, in considering VDI sizing, how do you help them estimate their disk array controller needs? Use this tip to help guide those choices.

Table of contents

Key criteria
Database sizing techniques
Understanding IOPS
Best practices and areas of caution
 

Key criteria

The most important thing to remember is that every environment is unique, so there is no single right answer for every customer. Focus on your customer’s specific needs and long-term business goals.

With that in mind, there are several key criteria to consider:

Capacity. You need to know how much capacity the controller will need. We explain below how to make that estimation in a VDI sizing exercise.

Performance. To determine performance requirements for a controller to be used in a VDI project, you need to consider the type of databases to be used, the type of applications stored and how many users will be accessing the environment concurrently. Large databases with high transaction rates will require more IOPS because more users will be accessing the workload. The same goes for application workloads: VMs running heavily utilized application sets will require more IOPS to support the users trying to access the virtual environment. As user count increases, so does the performance impact on the SAN. We explain below how to estimate IOPS needs in a VDI sizing exercise.

Scalability. Data agility and the ability for the environment to evolve with the needs of the business are very important storage design considerations. Working with vendors capable of seamless scalability, where workloads can be migrated between controllers, is important not only for minimal downtime but for infrastructure growth as well.

Availability and reliability. In determining availability and reliability needs, ask customers: “How mission-critical are your applications?” Once you have the answer to that question, to properly size the controller for a VDI project, you need to know how well a given device can support high availability (HA). Also, know how well the device works with specific applications. Good sizing techniques will include research into an environment’s application set to see how well it performs with a specific controller.

Data protection. Carefully researched storage solutions will have thorough considerations for the protection of vital data sets. This means looking for features that help data move between disks, as well as backup considerations and even site-to-site recovery strategies. The way a controller stores and manages data within the environment is also very important; that means intelligent data deduplication, compression and snapshots.

IT staff and resources. Planning for and purchasing a controller must be one of the first steps, but you and your customers also need to understand how the controller works. In many situations, it makes sense for you to propose training the employees on how to best use the controller.

Budget concerns. Although budgeting is always a concern, saving money on storage means efficiently choosing the right components for the current environment -- and the future one as well. If considerations are not made for future virtualization or expansion, IT managers may find themselves paying more in the long term to resolve performance or scalability issues.

Database sizing techniques

Proper configuration of I/O subsystems is critical to the optimal performance and operation of a SAN-stored database. When working with VDI, administrators must create user databases that will house specific application data, user information and sometimes master images. Database sizing is one of the many important components of working with a controller. It’s important to note how large a database will be, how many users will be accessing it and how it will behave on a given system. And many major storage vendors already have tools that provide accurate system sizing recommendations based upon data collected directly from the environment.

EMC, for instance, uses a sizing spreadsheet, which helps IT organizations establish a baseline understanding of what’s needed to properly deliver their core workloads and databases. NetApp, for its part, provides a great database sizing tool. Information such as database size, transaction log size, number of archive logs kept online, database read/write rate and block size is used to produce solid configuration recommendations. Use the database sizing tool for any database-related opportunity involving vendor products it supports, regardless of the size of the deployment.
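As a rough illustration of how those inputs roll up into a capacity figure, here is a minimal Python sketch. The growth rate, planning horizon and overhead factor are assumptions chosen for the example, not figures from EMC’s or NetApp’s tools, which apply far more detailed models.

    # Rough, illustrative capacity roll-up from typical sizing-tool inputs.
    # Growth and overhead factors are placeholder assumptions, not vendor guidance.
    def estimate_db_capacity_gb(db_size_gb, tx_log_gb, archive_logs_online,
                                archive_log_gb, annual_growth=0.20, years=3,
                                overhead=0.25):
        base = db_size_gb + tx_log_gb + (archive_logs_online * archive_log_gb)
        grown = base * ((1 + annual_growth) ** years)   # project future growth
        return grown * (1 + overhead)                   # headroom for snapshots, logs, etc.

    # Example: 500 GB database, 50 GB transaction log, 30 online archive logs of 2 GB each
    print(round(estimate_db_capacity_gb(500, 50, 30, 2)))   # ~1318 GB

A real sizing exercise would replace these placeholder factors with the customer’s measured growth and the vendor’s own recommendations.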

The most important thing to remember here is that each environment will have its own set of requirements. You need to take the time to see how many of your customer’s users will be accessing the information and how often. VDI can become resource-intensive, especially if numerous users are accessing the data all at once. This is where proper sizing techniques can alleviate problems with internal hardware performance bottlenecks, capacity and boot storms.

Understanding IOPS

The right controller will be able to deliver workloads efficiently even when under load. This is where IOPS come into play. You can gauge IOPS with tools such as Iometer; the open source IOzone Filesystem Benchmark; and FIO, an I/O tool meant for both benchmarking and stress/hardware verification. IOPS can vary greatly from one storage system to another.

To simplify information gathering, administrators can find statistics in Perfmon for Windows environments and iostat for Linux/Unix machines.

Depending on the environment, requirements for the type of drive will vary. SSD technology has taken IOPS considerations into a new field. For instance, the Violin Memory Violin 3200 Flash Memory Array is capable of 250,000 or more random read/write IOPS, while SATA and PCIe devices range from 5,000 to more than 100,000 IOPS.

There are two types of IOPS that engineers need to consider when sizing for VDI:

  • Front-end IOPS is the total number of read and write operations per second generated by an application or applications. These read and write operations are communicated to the disk(s) presented to the application or applications.
  • Back-end IOPS is the total number of read and write operations per second that a RAID/storage controller sends to the physical disks. Back-end IOPS is usually higher than front-end IOPS because every RAID level carries some write overhead. This overhead is called the write penalty.

When working with RAID technologies, write penalties must be considered. According to EMC training material, this is how to calculate the number of physical disks required:

((IOPS x %R) + WP x (IOPS x %W)) / physical disk speed

(IOPS equals users times IOPS per user; %R equals read percentage; %W equals write percentage; WP equals write penalty [two for RAID 1/0 and RAID 1, four for RAID 5]; and physical disk speed equals the IOPS a single drive can deliver.)

For VDI environments, you’ll need to determine how many desktops will be supported. IOPS for individual users can be calculated based on the VDI workload size supported by the system: 8 IOPS for typical medium-workload Windows XP desktops, 12 IOPS for typical high-workload Windows 7 desktops and 16 IOPS for very high workloads.
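As an illustration, the following Python sketch applies the formula above to a hypothetical deployment; the desktop count, read/write split, RAID level and per-disk IOPS rating are assumptions chosen for the example, not recommendations.

    # Back-end IOPS and spindle estimate using the write-penalty formula above.
    # All input values are illustrative assumptions.
    RAID_WRITE_PENALTY = {"RAID1": 2, "RAID10": 2, "RAID5": 4}

    def disks_required(users, iops_per_user, read_pct, write_pct, raid_level, disk_iops):
        total_iops = users * iops_per_user                       # front-end IOPS
        backend = (total_iops * read_pct) + \
                  RAID_WRITE_PENALTY[raid_level] * (total_iops * write_pct)
        return backend, backend / disk_iops

    # 500 medium-workload desktops at 8 IOPS each, assuming a write-heavy 20/80
    # read/write split in steady state, RAID 5, and drives rated at ~180 IOPS apiece
    backend, spindles = disks_required(500, 8, 0.20, 0.80, "RAID5", 180)
    print(backend, spindles)   # 13600.0 back-end IOPS, roughly 76 disks

Round the spindle count up, and remember that boot and login storms can push the read share and the peak IOPS well above these steady-state figures.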

Best practices and areas of caution

Sizing a controller will always be environment-specific. However, there are special areas of concern and some best practices to follow.

Assessment and design. This may add a week or more to a project -- but it can save a lot of money in the long run. Knowing what an environment is doing now and what it is capable of doing in the future can help IT managers make the right buying decisions. Take the time to study applications and workloads. Application requirements can be broadly divided into four categories: performance, availability, data loss and security. These four categories cover the majority of customer concerns about their applications.

I/O and performance. Knowing and understanding an environment will help you quickly determine what core I/O you will require to make the environment run efficiently.

To be successful in designing and deploying storage for an organization’s VDI initiative, administrators need an understanding of the workload’s I/O characteristics and of the given environment’s I/O patterns. Windows Perfmon is a great place to capture this information for an existing workload. Look for answers to the following questions:

  • What is the read vs. write ratio of the application?
  • What are the typical I/O rates (IOPS, MB per second and size of the I/Os)?

Perfmon produces a lot of data about system performance. Here’s what to pay attention to when preparing for a VDI implementation:

  • Average read bytes per second and average write bytes per second
  • Reads per second and writes per second
  • Disk read bytes per second and disk write bytes per second
  • Average disk seconds per read and average disk seconds per write
  • Average disk queue depth
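If the capture has been exported to CSV, a short script can reduce it to the figures that matter for sizing. This Python sketch assumes a hypothetical file name and that the export includes the PhysicalDisk counters listed above; exact column headers vary with host and instance names, so it matches on counter-name fragments.

    # Summarize a Perfmon CSV export (hypothetical file name) into average IOPS,
    # read/write ratio and average I/O size. Assumes the capture included the
    # PhysicalDisk counters listed above.
    import csv

    def column_average(rows, name_fragment):
        cols = [c for c in rows[0] if name_fragment.lower() in c.lower()]
        vals = [float(r[c]) for r in rows for c in cols if r[c] and r[c].strip()]
        return sum(vals) / len(vals) if vals else 0.0

    with open("vdi_host_perfmon.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    reads = column_average(rows, "Disk Reads/sec")
    writes = column_average(rows, "Disk Writes/sec")
    read_bytes = column_average(rows, "Disk Read Bytes/sec")
    write_bytes = column_average(rows, "Disk Write Bytes/sec")

    total_iops = reads + writes
    print("Average IOPS: %d" % round(total_iops))
    print("Read/write ratio: %.0f%% / %.0f%%" % (100 * reads / total_iops,
                                                 100 * writes / total_iops))
    print("Average I/O size: %.1f KB" % ((read_bytes + write_bytes) / total_iops / 1024))

The averages alone are not enough for sizing; keep the raw capture so you can also see peak values and how the profile changes at login and boot times.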

Solid-state technology. Flash memory and SSD technology are making their way into enterprise controllers, reducing or eliminating boot storms. For example, EMC’s Clariion places powerful and fast SSDs onboard its controller to remove moving parts and improve read/write speeds. And NetApp uses a device called FlashCache, which provides onboard, deduplication-aware caching. In that scenario, administrators are able to move entire workloads from spinning disks onto flash memory.

Backup and recovery. The ability to quickly and efficiently back up entire VDI workloads is crucial for organizations that need advanced data protection. Backup times for databases and applications can take hours and, in some cases, days. It’s not uncommon for restore time with conventional backup tools to take twice as long as the backup process. This is not acceptable, and it becomes critical once the customer actually experiences a data-loss event. Updating the test/development environment from backup data is a long process that could delay key project releases. In these situations, having a good DR and backup plan is necessary. Knowing and testing recovery strategies will help your customers be prepared for any storage-related event.

Bill Kleyman is the virtualization architect at MTM Technologies, a Connecticut-based IT solution provider. Steve Gilmore is a storage architect at MTM Technologies.

This was first published in February 2012
