Solution provider takeaway: For solution providers with customers interested in solid-state disk, there are two...
basic technology choices -- flash-based solid-state disk and DRAM-based solid-state disk -- and a number of vendor-related considerations.
The storage industry is all abuzz about solid-state disk (SSD), with manufacturers like EMC and Sun announcing plans to enter the market, drawn by memory cost reductions and the advent of flash-based memory. But the SSD market is not new; companies like Solid Data and Texas Memory have been shipping SSDs for more than a decade. Now customers interested in SSD have a difficult choice to make: purchase it from a traditional storage supplier or from one of the standalone vendors that have specialized in SSD for years.
Competition in the solid-state disk market shows similarities to the early days of the iSCSI market. When products based on iSCSI technology first hit the market, traditional storage suppliers could create fear, uncertainty and doubt about investing in offerings from the newer players, like EqualLogic (now Dell), LeftHand Networks and Compellent. With SSD, however, EMC, Sun and others are entering a market already populated by offerings from well-run organizations, companies that have proven they can survive economic downturns while selling very expensive technology. Can you imagine convincing a customer to buy a few gigabytes of SSD back in 2001?
With company viability not a competitive factor in the SSD market, the first -- and probably most important -- point to consider when guiding customers toward a choice of solid-state disk vendor is that traditional storage manufacturers only offer flash-based solid-state disk, while most of the standalone providers focus on DRAM-based solid-state disk. (Here's some background on DRAM vs. flash.)
There are important speed differences between the two technologies: Flash SSDs are much faster than spinning disks, but they are not as fast as DRAM-based solid-state disk when handling write I/Os. While the performance metrics are rapidly evolving in this market, today a flash SSD completes a write in approximately 2 milliseconds and can sustain up to 25,000 random write I/Os per second. A DRAM SSD, by comparison, completes writes in about 15 microseconds (0.015 milliseconds) while sustaining up to 400,000 write I/Os per second. (Mechanical hard disks achieve about 4- to 5-millisecond reads and writes and can sustain about 150 to 300 random I/Os per second.)
Numbers are sometimes hard to grasp, so to put it in perspective, DRAM-based solid-state disk could copy an entire DVD movie (assuming you had a DVD device fast enough to feed it the data) in a few seconds. A flash-based SSD, on the other hand, would take almost a minute to write the same amount of data. For most applications, the extra speed of DRAM -- and even of flash-based solid-state disk, for that matter -- may be overkill. However, applications like high-transaction databases that gain a performance enhancement (not of the disgraced athlete variety) on flash SSD will realize an even greater advantage on DRAM. Needless to say, high-transaction database environments are typically the core revenue generator for many organizations. In fact, some companies can put a dollar figure on every second they shave off application response time. Most companies that are considering flash-based solid-state disk should also explore DRAM-based solid-state disk; they may find it more cost-effective per I/O.
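The arithmetic behind those copy-time estimates can be sanity-checked from the quoted IOPS figures. This is a back-of-envelope sketch, not a benchmark: it assumes a 4 KB I/O size and a single-layer 4.7 GB DVD, neither of which is stated above, and it applies the random-write IOPS to a copy that a real drive would handle sequentially and therefore faster.

```python
# Rough copy-time check using the article's sustained write-IOPS figures.
# Assumptions (mine, not the article's): 4 KB per I/O, 4.7 GB of data.
DVD_BYTES = 4.7e9       # single-layer DVD capacity
IO_SIZE = 4 * 1024      # assumed I/O size in bytes

def copy_seconds(iops):
    """Seconds to write DVD_BYTES at the given sustained write IOPS."""
    throughput = iops * IO_SIZE      # bytes per second
    return DVD_BYTES / throughput

flash = copy_seconds(25_000)    # flash SSD: up to 25,000 write IOPS
dram = copy_seconds(400_000)    # DRAM SSD: up to 400,000 write IOPS
hdd = copy_seconds(300)         # mechanical disk: ~300 random IOPS

print(f"flash: {flash:.0f} s, DRAM: {dram:.0f} s, HDD: {hdd/60:.0f} min")
```

Under these assumptions the flash copy works out to roughly 46 seconds and the DRAM copy to about 3 seconds, consistent with the "almost a minute" versus "a few seconds" comparison above, with the mechanical disk trailing at over an hour.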
Traditional storage suppliers have done an excellent job of selling "tiered storage in a box" to their client base. From a customer perspective, such an approach brings the perceived benefits of management simplicity, an efficient footprint and potentially lower costs. Since flash OEMs are creating modules in a hard disk drive (HDD) form factor, integration into existing drive shelves would seem to make tiering SSD (Tier 0) into a monolithic frame a seamless transition.
Upon closer inspection, however, there are serious drawbacks to integrating flash solid-state disks into supplier frames. For example, the speed of these SSDs exposes the performance weaknesses of the rest of the storage system. Latency becomes a key issue. There is also the assumption that integration of the flash SSD with the spinning hard disk drive will make things simpler; there's some simplicity in that the same software that you use to create disk-based LUNs can be used to create SSD-based LUNs, but for now that's where the integration ends.
Storage system manufacturers are able to enter the SSD market quickly because flash OEMs package the modules in a hard disk drive form factor, making integration into existing drive shelves substantially easier. The problem is that those systems are optimized for spinning disks, not SSD technology. While storage manufacturers didn't design their shelves to be slow, there was no need or pressure to "design out" latency beyond what the drives could produce.
A great example of latency inherent in existing storage systems is cache. All storage systems have it, but solid-state disks don't need it and in fact their performance is hindered by having to perform a cache search. In addition to the cache, the front-end and back-end ports add latency, as does all the additional software overhead associated with enterprise storage systems. In essence, to take full advantage of the speed of flash-based solid-state disk, manufacturers need to scrutinize every component in their storage systems. Anything slower than the flash-based volume adds latency and detracts from the system's performance.
Unfortunately, latency creates a bit of a challenge for designers of storage systems. Addressing these system bottlenecks by lowering the latency of the controllers -- creating a technique to bypass cache on flash-based LUNs -- will add a significant cost to the system. Also remember that for the foreseeable future, most systems will be 95% HDD-based; any improvements to the latency for flash-based solid-state disk will be realized by only 5% of the storage. And let's not forget that non-RAM-based storage is purchased on a dollar-per-capacity model more often than a dollar-per-performance model, so the cost to address the latency problem for flash SSD won't bring a lot of value in the sales process.
Another integration-related issue to be worked out -- even if the latency issues are addressed -- is how to identify and move data to solid-state disk. In my experience, IT organizations that are investing in SSD know exactly what subset of data belongs on the SSD. It's usually just certain hot files and, more often, files that require high write I/O (for example, Oracle Undo logs); an entire database is very rarely placed on SSD. Some storage manufacturers have the ability right now to understand at a block level if those blocks are being accessed and how frequently. They are using this information to migrate blocks of data from Tier 1 to Tier 2 disks. If additional logic were added to these controllers to analyze access patterns and whether the data could benefit by being in flash, they could facilitate a "move up" to Tier 0. But at this point, none of these suppliers have even announced plans to implement a strategy like this, citing the other latency-related issues and minimal return on investment.
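The block-level "move up" logic described above can be sketched in a few lines. This is a hypothetical illustration of the general technique -- counting accesses per block and promoting the hottest blocks to a Tier 0 set -- not any vendor's actual controller logic; the class name, threshold, and promotion rule are all invented for the example.

```python
from collections import Counter

class TierPromoter:
    """Illustrative hot-block tracker: promote frequently accessed
    blocks to a Tier 0 (SSD) set once they cross a threshold."""

    def __init__(self, hot_threshold=100):
        self.access_counts = Counter()   # block address -> access count
        self.tier0 = set()               # blocks currently placed on SSD
        self.hot_threshold = hot_threshold

    def record_access(self, block):
        self.access_counts[block] += 1
        # Promote once a block's access count crosses the threshold.
        if (self.access_counts[block] >= self.hot_threshold
                and block not in self.tier0):
            self.tier0.add(block)        # a real array would migrate the block

promoter = TierPromoter(hot_threshold=3)
for block in [7, 7, 42, 7, 9]:
    promoter.record_access(block)
print(promoter.tier0)   # block 7 was accessed 3 times, so it is promoted
```

A production controller would also need the reverse path (demotion when a block cools off) and, as noted above, would have to weigh the migration cost against the latency issues elsewhere in the system.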
The reality is that for the foreseeable future, the data on the SSD will start out there rather than be migrated to it based on access behavior. Flash-based solid-state disk can be used for high read files, and DRAM-based solid-state disk can be used for both high read and high write files. Regardless, implementing SSD is going to take some expertise, and integration with an existing storage system won't make that any less difficult. This is an ideal opportunity for storage resellers to add real value to their relationship with a customer.
At some point in the future, we'll be in a world of no spinning disks, and our children's children will look at hard drive technology much the way we look at the 5.25-inch floppy disk. During this transition there will be a point at which all active storage will be on some sort of solid-state disk technology and older data will be on some sort of lower-tier disk archive that can feed the SSD as needed. These systems will take more than a decade to design, and they will most likely be designed from the ground up for SSD, or there may be a special zone of the array system designed for SSD so the latency issues can be minimized. Until that time, SSDs are coming down in price and they'll be used for a specific set of files that are causing performance problems. The best way to optimize such an SSD investment will be through a purpose-built, standalone SSD system.
About the author
George Crump is president and founder of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. With 25 years of experience designing storage solutions for data centers across the United States, he has seen the birth of such technologies as RAID, NAS and SAN. Prior to founding Storage Switzerland, George was chief technology officer at one of the nation's largest storage integrators, where he was in charge of technology testing, integration and product selection.