Startup SolidFire came out of stealth today with an all-solid-state drive (SSD) storage system aimed at cloud storage providers who want to build public clouds for customers.
The SolidFire SF3010 is a 1U system with 10 SSDs and native 10 Gigabit Ethernet iSCSI connectivity. The SF3010 uses mostly multi-level cell (MLC) drives to keep the price down, with a small number of higher-performing single-level cell (SLC) drives to handle selected writes.
SolidFire claims the system can handle 50,000 IOPS per node and scales to 100 nodes for 5 million IOPS. Each system has 3 TB of raw capacity. Adding a node scales performance and capacity.
SolidFire isn’t the first vendor to hit the market with an all-SSD system. Nimbus Data began selling all-SSD systems more than a year ago, and EMC has said it will deliver a Symmetrix with all SSDs this summer. Texas Memory Systems, Avere, Violin Memory and Alacritech also sell all-SSD systems to speed SAN and NAS performance, and more vendors are expected to follow.
Most storage vendors now give customers the option of installing some SSD alongside hard drives in their systems.
SolidFire CEO Dave Wright said the startup developed a system built from the ground up to run SSDs instead of taking the approach of mainstream storage vendors of putting SSDs into arrays designed for hard drives.
Wright said the SF3010 is built for primary storage for cloud providers, particularly block-based applications such as databases and email. SolidFire’s REST-based API handles storage management, automation and multi-tenant provisioning.
“We’re not a cloud on-ramp product; we’re not pushing data to the cloud,” he said. “We’re about being high-performance primary storage in the cloud. Enterprise storage arrays are not well suited to that challenge.”
SSD analyst Jim Handy of Objective Analysis said having a system developed specifically for SSDs is important because hard drive controllers and adapters aren’t equipped to handle the amount of IOPS that SSDs can send through.
“Instead of taking an approach designed for hard drives and adapting it to solid-state drives, they started out with SSDs,” Handy said. “That’s different from the norm. You can take any array and stuff it full of SSDs. Would that array perform as quickly as it could if it were designed for SSDs first? Hard drive systems are not fine-tuned for speed,” he said.
Handy said one of SolidFire’s advantages is that it uses the speed of flash to spread data around instead of relying on RAID for data protection. “That approach will have a lot of appeal,” he said. “There’s basically no rebuild time. If you take a drive out of the array, the rest of the array automatically builds up replicas to make up for it. That’s something that conventional RAID can’t do.”
SolidFire’s Wright said service providers don’t want to manage storage through the same tools as enterprises. “They don’t want to hire storage administrators to manage storage,” he said. “They want automated control through API-type interfaces. We design a system with the mindset that you’ll have different customers, tenants, and applications that need to be isolated. You have to have quality of service and things like security and reporting.”
He said SolidFire handles those capabilities by managing arrays on a volume-by-volume basis, so providers can place volumes under specific accounts. “We can have large volumes with small performance capabilities, or small volumes with large performance capabilities, and adjust in real-time to make the volume faster or slower [for guaranteed performance levels],” he said.
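Wright's description of per-volume, per-tenant performance controls can be sketched as a simple data model. This is an illustrative assumption of how such an API-driven system might represent a volume, not SolidFire's actual API; the field names (min_iops, max_iops, the JSON payload shape) are hypothetical.

```python
import json

class Volume:
    """Hypothetical model of a tenant volume with adjustable QoS.
    Field names and payload shape are illustrative, not SolidFire's API."""

    def __init__(self, name, account, size_gb, min_iops, max_iops):
        self.name = name
        self.account = account      # the tenant this volume is isolated under
        self.size_gb = size_gb
        self.min_iops = min_iops    # guaranteed performance floor
        self.max_iops = max_iops    # throttle ceiling

    def set_qos(self, min_iops, max_iops):
        # Adjust performance independently of capacity, without resizing
        if min_iops > max_iops:
            raise ValueError("min_iops cannot exceed max_iops")
        self.min_iops, self.max_iops = min_iops, max_iops

    def to_json(self):
        # Shape of a REST payload a provider's automation might send
        return json.dumps({
            "volume": self.name,
            "account": self.account,
            "sizeGB": self.size_gb,
            "qos": {"minIOPS": self.min_iops, "maxIOPS": self.max_iops},
        })

# A small volume with large performance, later dialed down in real time
vol = Volume("db-01", "tenant-a", size_gb=50, min_iops=5000, max_iops=15000)
vol.set_qos(500, 1000)  # make the volume slower without touching its size
```

The point of the sketch is that capacity and performance are independent knobs per volume, which is what lets a provider sell "small volumes with large performance capabilities" or the reverse.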
The SF3010 will be available in an early access program Aug. 1, and Wright said he expects it to be generally available by the end of the year. No pricing is set yet, but SolidFire claims it will match hard drive storage arrays in “usable gigabytes per dollar.” That takes into account portions of hard drives that are not accessed because of short-stroking or are inefficiently used because of I/O limitations.