Implementing SSD in a cache appliance can improve performance across multiple storage systems

Learn about the benefits of using solid-state storage in a cache appliance, how the appliance compares with other SSD implementations and why storage VARs should pay attention.

With the rise of server virtualization and the general trend toward more data, at one time or another most companies need more storage performance, particularly IOPS. At the device level, solid-state storage seems to be the technology to provide it. But the devil's in the details, which in this case means implementation. How solid-state storage is put into a storage infrastructure can determine how effectively its performance is delivered to applications, which in the end is what really matters.

Implementation of solid-state storage can take a number of different physical forms, like drive form-factor solid-state drives (SSDs) that replace hard disk drives in a server or storage array, or flash PCI Express (PCIe) boards that install into a server. Another alternative is a dedicated flash storage array or appliance installed on the storage network. Implementation can take different logical forms as well, like creating a new "Tier 0" high-performance storage area into which performance-critical application data is moved during periods of highest activity. Or it can be a cache appliance that holds a copy of this data, which is still maintained on the existing storage and updated when the cache "session" is terminated.

This article will focus on the latter of these, the caching appliance implementation, in which an independent storage device is installed in the environment and shared by one or more servers or storage systems -- either block or NAS. We’ll detail the advantages of caching appliances and discuss some implications for VARs selling these solutions. 

Why a cache appliance makes sense

Simply replacing hard disk drives in a server or existing storage array with SSDs can be the easiest solution. But it often means the SSDs can't be used to their full capability, because existing hard drive controller architectures typically don't provide the IOPS or connectivity SSDs require. Also, the lower density and higher cost per gigabyte of these implementations can force users to settle for less capacity than they need (or pay for more than they need), resulting in efficiency trade-offs and lower performance. Dedicating SSDs to specific servers or storage systems also reduces opportunities to share this high-priced resource, which results in fewer applications receiving a performance boost, fewer systems being included in the cost justification and increased management overhead.

A caching appliance isn’t a storage array but an independent high-speed device that is purpose-built for SSDs and can be shared by multiple back-end storage systems. These standalone systems can address a number of issues that the industry has had implementing solid-state storage devices in its quest to improve application performance:

Shared performance and utilization benefits. IOPS requirements of storage devices are constantly changing depending on the workloads of the servers they’re supporting. While installing SSDs into a specific NAS or block storage array can improve performance, it often results in periods of low utilization, when the servers using that individual storage system are less active. An independent caching appliance, on the other hand, can be shared across multiple storage systems, enabling higher asset utilization and improved application performance for more servers. It can also provide better ROI justification for an SSD upgrade project as the costs are spread across more applications. This can make even more specialized devices, like DRAM, cost-effective, further improving performance for the appliance. In some use cases, a caching appliance can also turn one or more midrange disk systems into a “performance” solution for less money than a comparable high-end system.
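To make the sharing mechanics concrete, here is a minimal sketch, in Python, of a read cache whose keys combine a back-end identifier with a block address, so one pool of fast storage can serve several attached arrays. The class and method names are invented for illustration; real appliances implement this in firmware, not application code.

```python
from collections import OrderedDict

class Backend:
    """Stand-in for a NAS or block array sitting behind the appliance."""
    def __init__(self, name):
        self.name = name

    def read(self, block_addr):
        # Simulated slow read from spinning disk.
        return f"{self.name}:{block_addr}"

class SharedReadCache:
    """Sketch of a cache appliance shared by multiple back-end storage
    systems. Keys are (backend, block address) pairs, so the same pool of
    solid-state capacity serves every attached array (LRU eviction)."""
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()   # (backend name, block addr) -> data
        self.hits = self.misses = 0

    def read(self, backend, block_addr):
        key = (backend.name, block_addr)
        if key in self.blocks:
            self.hits += 1
            self.blocks.move_to_end(key)      # refresh LRU position
            return self.blocks[key]
        self.misses += 1
        data = backend.read(block_addr)       # fall through to the array
        self.blocks[key] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)   # evict least recently used
        return data
```

Because the cache is keyed per back end, a quiet array naturally gives up its share of the fast storage to a busy one, which is the utilization argument in a nutshell.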

Capacity benefits. A shared cache appliance can also provide enough capacity to pin an entire data set into solid-state storage. This can result in better performance with fewer cache misses and better efficiency, as data movement between solid-state and disk storage is greatly reduced. And, the effective capacity of the cache can be extended by combining multiple storage types, like SSD and high-speed disk, into the same appliance.
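Pinning simply means certain blocks are exempt from eviction, so a whole working set stays resident in solid-state storage. A rough sketch of that policy (names invented for illustration):

```python
from collections import OrderedDict

class PinnableCache:
    """Sketch of cache pinning: blocks marked as pinned are never evicted,
    so an entire data set can be held in the solid-state tier."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()   # addr -> data
        self.pinned = set()

    def pin(self, addr, data):
        """Preload and pin an address; pinned blocks count against capacity."""
        if addr not in self.blocks and len(self.blocks) >= self.capacity:
            raise MemoryError("not enough cache capacity to pin data set")
        self.blocks[addr] = data
        self.pinned.add(addr)

    def put(self, addr, data):
        self.blocks[addr] = data
        self.blocks.move_to_end(addr)
        while len(self.blocks) > self.capacity:
            # Evict the oldest *unpinned* block.
            for victim in list(self.blocks):
                if victim not in self.pinned:
                    del self.blocks[victim]
                    break
            else:
                raise MemoryError("cache is full of pinned blocks")
```

With enough shared capacity, the pinned set never generates a miss, which is exactly the behavior described above.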

Lower processing overhead than tiered storage. Compared with a Tier 0 implementation of solid-state storage, this appliance is a true cache, which means it takes a copy of the most active or most performance-critical data sets. Tiered storage solutions that typically reside on the storage controller move data into and out of the high-speed storage space, generating processing overhead and reducing efficiency. And, these automated tiering systems require a warm-up period in which usage information about new data sets is accumulated before they can move data, sometimes taking hours or days.
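The copy-versus-move distinction can be sketched in a few lines. The threshold and class names below are invented for the illustration; real tiering engines use far more elaborate heat maps, but the shape is the same: tiering moves data after a warm-up period, while a cache copies it on first touch and leaves the primary copy in place.

```python
class TieredVolume:
    """Sketch of automated tiering: data is *moved* to fast storage only
    after an access counter passes a threshold (the warm-up period)."""
    PROMOTE_AFTER = 3   # hypothetical warm-up threshold

    def __init__(self):
        self.slow = {}    # primary disk tier
        self.fast = {}    # Tier 0 solid-state
        self.counts = {}

    def read(self, addr):
        if addr in self.fast:
            return self.fast[addr]
        self.counts[addr] = self.counts.get(addr, 0) + 1
        if self.counts[addr] >= self.PROMOTE_AFTER:
            self.fast[addr] = self.slow.pop(addr)   # move, not copy
            return self.fast[addr]
        return self.slow[addr]

class CachedVolume:
    """Sketch of a true cache: copy on first read; the primary keeps the data."""
    def __init__(self):
        self.slow = {}    # primary storage, unchanged by caching
        self.cache = {}

    def read(self, addr):
        if addr not in self.cache:
            self.cache[addr] = self.slow[addr]      # copy; primary retains it
        return self.cache[addr]
```

Note that the tiered volume serves its first few reads from slow storage while the counter warms up, and its promotion removes the data from the primary tier, which is also why tiering complicates data protection in a way caching does not.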

No impact on data protection. Since the data set is maintained on the primary storage system, data protection is not affected by the caching appliance. And, storage services -- like snapshots, replication and deduplication -- can be kept on the existing back-end storage systems and not added to the cache CPU, helping to maintain performance.

Nondisruptive implementation. Finally, implementation of the caching appliance is less disruptive, since it involves only copying data sets, not moving them from existing storage.

Sometimes referred to as a “memory array,” as opposed to a storage array, caching appliances are designed from the ground up to support solid-state storage. This means their architectures provide the IOPS required to “feed” many more solid-state devices than can a traditional storage array. This in turn produces better storage density and higher capacity points, with the benefits mentioned above. It also eliminates the potential situation of legacy disk array shelves running nearly empty because they can support only a handful of SSDs. Besides density, this results in better efficiency as more flash cells can be made available in the memory array for overhead processes like garbage collection.

Bottom line for VARs

For organizations that need better application performance, solid-state storage technologies are certainly a viable option. But given the number of SSD products available and the fact that they aren’t a straight “plug replacement” upgrade for spinning disk drives, many VARs’ customers may need some help designing a solid-state solution. This should mean opportunity for storage integrators.

Caching appliances can supply VARs with a strong solution candidate when it comes to a solid-state storage performance upgrade. These systems can be used to spread the performance of SSDs across multiple storage systems, enabling better ROI than putting SSDs into individual storage arrays or servers. They can also provide the density and capacity to support larger data sets, improving efficiency and lowering overall costs. From an implementation perspective, a caching appliance can also be less disruptive than adding an SSD Tier 0 to an existing storage infrastructure and can complement the storage services and data protection already in place. While not the only solid-state storage alternative available to storage VARs, caching appliances should certainly be on the line card.

This was first published in April 2011
