LSI Corp. today updated its MegaRAID CacheCade software to support both read and write solid state drive (SSD) caching through a controller, providing faster access to frequently used data.
LSI MegaRAID CacheCade Pro 2.0 speeds application I/O performance on hard drives by using SSDs to cache frequently accessed data. The software is designed for high-I/O, transaction-based applications such as Web 2.0, email messaging, high-performance computing and financials. The caching software works on LSI's MegaRAID 9260, 9261 and 9280 series of 6 Gb/s SATA and SAS controller cards.
LSI delivered the first version of CacheCade software about a year ago with read-only SSD caching. MegaRAID CacheCade Pro 2.0 is priced at $270 and is available to distributors, system integrators and value added resellers. LSI’s CacheCade partners include Dell, which in May began selling the software with Dell PowerEdge RAID Controller (PERC) H700 and H800 cards.
“What we want to do is close the gap between powerful host processors and relatively slow hard disk drives,” said Scott Cleland, LSI’s product marketing manager for the channel. “Hosts can take I/O really fast, but the problem is traditional hard disk drives can’t keep up.”
LSI claims the software is the industry’s first SSD technology to offer both read and write caching on SSDs via a controller.
LSI lets users upgrade a server or array by plugging in a controller card with CacheCade. Cleland said users can place hot-swappable SSDs into server drive slots and use LSI's MegaRAID Storage Manager to create CacheCade pools. The software automatically moves more frequently accessed data into the cache pools.
“In traditional SSD cache and HDD [hard disk drive] configurations, the HDDs and SSDs are exposed to the host,” Cleland said. “You have to have knowledge of the operating system, file system and application. With CacheCade, the SSDs are not exposed to the host. The controller is doing the caching on the SSDs. All the I/O traffic is going to the controller.”
SSD analyst Jim Handy of Objective Analysis said it took time for LSI to build in the write caching capability because “write cache is phenomenally complicated.”
With a read-only cache, writes go directly to the hard drive, and any copy of that data already held in cache becomes stale. "If the processor wants to update the copy, then the copy in cache is invalid. It needs to get the updated version from the hard disk drive," Handy said of read-only cache.
With a write cache, new data lands in the cache first and the hard drive is updated later; the controller must make sure the slower original has been brought up to date before the copy is deleted from cache.
LSI also offers a MirrorCache feature, which protects against the loss of data that has been written to cache but not yet flushed to the hard drive.
Handy said read and write caching is faster than read-only caching.
“Some applications won’t benefit from [read and write caching],” Handy said. “They won’t notice it so much because they do way more reads than writes. For instance, software downloads are exclusively reads. Other applications, like OLTP [online transaction processing], use a 50-50 balance of read-writes. In these applications, read-write is really important.”