The magnetic hard disk drive (HDD) has come a long way since the introduction of the 20 MB PC model.
SATA drives are soon to reach 3 TB of capacity and future generations are expected to keep growing. However, the role of enterprise hard drives is changing, due to increasing performance requirements and drive technology developments.
Currently, enterprise hard drives come with three connectivity variations: Fibre Channel (FC), SAS and SATA. For the purposes of this discussion, we'll assume SCSI drives will be replaced by one of these three types and that IDE drives are consumer products. FC and SAS are mostly used for Tier 1 and Tier 2 primary-storage applications -- essentially anything with an appreciable performance and reliability requirement. SATA drives are used for Tier 2 and Tier 3 applications, mostly those that are more capacity- than performance-oriented, with a slightly higher tolerance for drive failure (not data loss). For high-IOPS use cases, like Web applications and OLTP databases, even the highest-performing SAS or FC drives aren't always enough, and other techniques are being used to increase performance.
HDDs need help for high-IOPS applications
As discussed in the "Visual SSD Readiness Guide," increasing the drive count of a RAID array can provide more simultaneous I/Os and give substantially higher performance. But efficiently wide-striping data across large numbers of disk drives requires a sophisticated array controller and room for a lot of spindles. Short-stroking, or formatting a drive so that data is written on only the outermost tracks (or cylinders), can also improve performance, since the linear speed of the disk media traveling under the read/write heads is greater the farther that track is from the center of the disk. But this technique, coupled with a high physical drive count, results in significant waste, as large portions of each HDD go unused. Also, these drives are typically the fastest (15,000 rpm), most expensive drives available. Said another way, this method uses only a fraction of the most expensive drive available, and lots of them, to meet a performance requirement -- not very cost-effective. In addition, the total cost per gigabyte of storage for the highest-performance applications, using these disk aggregation methods, is further increased by the use of more power, cooling and floor space, as well as by overall complexity.
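To make the waste concrete, here's a rough sketch of the short-stroking/wide-striping math. All the numbers (drive size, usable fraction, per-drive IOPS) are illustrative assumptions, not figures from the article, but they show why meeting an IOPS target this way strands so much capacity:

```python
# Illustrative estimate (assumed numbers): how many short-stroked
# 15K rpm drives it takes to hit a random-IOPS target, and how much
# raw capacity goes unused when only the outer tracks are formatted.

def drives_for_iops(target_iops, iops_per_drive=300):
    """Drives needed if each 15K HDD delivers ~300 random IOPS."""
    return -(-target_iops // iops_per_drive)  # ceiling division

def wasted_capacity_tb(drive_count, drive_tb=0.6, used_fraction=0.25):
    """Raw TB left unused when only ~25% of each drive is formatted."""
    return drive_count * drive_tb * (1 - used_fraction)

n = drives_for_iops(30_000)       # a hypothetical 30,000-IOPS workload
print(n)                          # 100 drives
print(wasted_capacity_tb(n))      # 45.0 TB of raw capacity unused
```

One hundred of the most expensive drives on the market, three-quarters of each sitting idle, is the cost structure that makes solid-state attractive.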
Solid-state storage in the form of NAND flash has become an effective replacement for disk drives for many of these performance applications. IT shops are realizing that higher IOPS at a higher price can be justified in certain use cases. On a purely cost-per-I/O basis, most solid-state drives (SSDs) are less expensive than hard disk drives. For example, a high-performance enterprise hard drive typically delivers 150 to 300 IOPS, while flash SSDs can deliver on the order of 100,000 IOPS -- roughly 330 to 670 times better performance on an I/O basis. Although enterprise storage performance comparisons should be made at the array level, and cost per GB is certainly pertinent to the decision, this difference is telling.
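The cost-per-I/O point can be checked with back-of-the-envelope arithmetic. The IOPS figures come from the article; the drive prices below are assumed, illustrative values only:

```python
# Cost-per-IOPS comparison using the article's performance figures
# and assumed (illustrative) drive prices -- not real street prices.

def cost_per_iops(price_usd, iops):
    """Dollars per unit of random I/O performance."""
    return price_usd / iops

hdd = cost_per_iops(price_usd=400, iops=300)       # fast enterprise HDD
ssd = cost_per_iops(price_usd=2000, iops=100_000)  # enterprise flash SSD
print(f"HDD: ${hdd:.3f}/IOPS, SSD: ${ssd:.3f}/IOPS")
```

Even with the SSD priced several times higher per unit, its cost per I/O comes out far lower, which is the article's point: on performance-bound workloads, the comparison flips in flash's favor.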
HDDs in the future
What does the future hold for magnetic disk drive technology? Efforts to increase areal density, or the number of bits that can be recorded in a given area of media, are propelling new drive technologies that work around "superparamagnetism" and its effects on bit error rates. Increased bit density improves capacity but only slightly increases performance, which is largely tied to the mechanical aspects of disk drives. For high-transaction use cases, access time, or the time it takes for the heads to be positioned over the right place on the disk platter, is the best determinant of performance. Access time is also affected by bit density, but more so by rotational speed. This is where there are probably bigger long-term issues, like the power and engineering complexity needed to spin disks faster. These mechanical challenges are tied to two fundamentals of physics: distance and rotational speed.
Distance refers to how far a read/write head must travel in order to access the terabyte-plus capacities common on hard drives. Although bit densities are increasing, the heads must still wait for the right sector to rotate beneath them (average latency) and be moved between tracks (random seek time) in order to accomplish a read operation. The sum of these latency and seek times is access time, and it's a major limitation on disk drive random I/O performance. Increasing rotational speed would shorten latency and improve raw throughput, but 15,000 rpm seems to be something of a barrier, evidenced by the fact that maximum disk speeds haven't increased in more than 10 years. The costs associated with spinning disks faster (bigger motors, power, cooling, vibration, etc.) also limit its feasibility.
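The access-time arithmetic above can be worked through directly. Average rotational latency is the time for half a revolution, so it follows straight from the spindle speed; the average seek time used below is an assumed, typical value for a 15K drive:

```python
# Why rotational speed caps random I/O: access time is average seek
# plus average rotational latency (half a revolution), and IOPS is
# roughly the reciprocal of access time. Seek time here is an
# assumed typical value, not a measured spec.

def avg_latency_ms(rpm):
    """Average rotational latency: half a revolution, in milliseconds."""
    return (60_000 / rpm) / 2

def random_iops(rpm, avg_seek_ms):
    """Approximate random IOPS from one spindle."""
    access_ms = avg_seek_ms + avg_latency_ms(rpm)
    return 1000 / access_ms

print(avg_latency_ms(15_000))           # 2.0 ms
print(round(random_iops(15_000, 3.4)))  # ~185 IOPS per spindle
```

Note how the result lands inside the 150-to-300-IOPS range cited earlier, and how even doubling the spindle speed would only shave 1 ms off the latency term -- which is why the mechanical ceiling matters so much.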
Enterprise hard drives seem to be evolving into capacity-centric storage devices, as capacity increases faster than access times improve. Another reason for this shift is the alternative of solid-state storage. As costs for SSDs get closer to those of enterprise hard drives on a pure capacity basis, the motivation to stay with magnetic disk drives diminishes.
One likely scenario has the highest-transaction applications moving to SSD at the top data tiers and the lowest tiers moving to cloud storage and tape archives (see articles on LTO-5 and LTFS). This leaves the middle tier of data on large SATA drives, which incidentally could be front-ended by SSD as well to improve their overall performance. With drive sizes expected to keep increasing, and with efficiency-oriented technologies like deduplication, compression and MAID available, spinning disks look like a good fit for this high-capacity middle tier. Again, FC and SAS drives aren't expected to disappear overnight, but a shift to high-performance SSD and high-capacity SATA drives seems logical.
What does this mean for VARs?
- You need to get familiar with solid-state storage. The handwriting is on the wall. Primary-storage deals in the future will most certainly include SSDs. According to a recent SearchStorage.com end user survey, 21% said they would be using SSD by the end of 2010 and another 40% said they would evaluate SSD this year. The specific solutions will vary, but spinning disk won't be the only storage medium used. This is actually good news for VARs for a couple of reasons. First, technology shifts like this open the door for new solutions, and independent storage resellers are in the best position to offer them. While most traditional array vendors are putting SSDs into their products, they're not always motivated to move an existing HDD customer away from that technology. Fully leveraging solid-state storage performance often requires new controller architectures, not simply replacing HDDs with SSDs in an existing array. Also, there are some very interesting third-party SSD solutions available, like caching appliances, which VARs can offer. Plus, implementing SSDs within an existing environment usually requires some real integration since, as integrators have always known, simply dropping a higher-performance component into a system rarely delivers the expected improvements.
- You need to get familiar with tiering solutions. As companies look to SSD and start moving applications off HDDs, both up and down the storage stack, automated tiering and other similar technologies will become more common. Data movement between storage tiers never really materialized as expected on the low end -- think ILM and moving data down the stack -- since storage prices kept eroding its economic justification. But the cost delta between SSD and midtier disk, especially in capacity-centric use cases, should make this practice viable for the foreseeable future. Implementations can take the form of automated tiering within the storage controller, a separate storage tiering appliance, a file virtualization appliance, a cloud storage solution or a combination of these technologies. With hard drive technology essentially unable to compete with solid-state on a performance level, and with its cost advantages diminishing, it's pretty clear that SSD is no longer just for the "lunatic performance fringe," or the applications with an insatiable performance appetite. A shift in storage practices is occurring, from magnetic disk drives to solid-state storage, beginning with the highest-performance applications. How quickly this happens and how far down the storage tiering stack it reaches will depend on costs and other factors. But this shift can be a significant opportunity for VARs who understand the technologies involved and get out in front of it.
About the author
Eric Slack, a senior analyst for Storage Switzerland, has more than 20 years of experience in high-technology industries holding technical management and marketing/sales positions in the computer storage, instrumentation, digital imaging and test equipment fields. He's spent the past 15 years in the data storage field, with storage hardware manufacturers and as a national storage integrator, designing and implementing open systems storage solutions for companies in the Western United States. Find Storage Switzerland's disclosure statement here.