Data backup and recovery vendors dig into deduplication technology, aim for cloud backup
Data deduplication technology and cloud backup were hot topics among data backup and recovery vendors this week as they reported their earnings and discussed future product releases.
Symantec Corp., CommVault and FalconStor Software all spoke of initiatives in both those areas when discussing results and their 2010 roadmaps.
Symantec's storage and server management group reported an 8% revenue decline last quarter, but CEO Enrique Salem said data backup demand is strong as customers are still moving their backups from tape to disk.
"We're continuing to see demand from customers based on the migration from tape to disk-based backup, and some of the new capabilities that we are going to be delivering," he said.
Those capabilities largely center on data deduplication. Symantec added deduplication technology for VMware and Oracle in its NetBackup PureDisk 6.6, which started shipping this month, and Salem said the next version of NetBackup will allow deduplication directly on the media server. The next upgrade to Backup Exec will also have integrated deduplication.
Read about two potential inroads to cloud storage services.
Arkeia takes aim at EMC Avamar with Kadena Systems data deduplication IP buy
Arkeia Software has acquired intellectual property and engineering resources from Santa Clara, Calif.-based Kadena Systems, which will add source-based data deduplication to Arkeia's Network Backup software.
Kadena, founded in 2003, had previously sold its Delta Backup data deduplication software to OEMs -- most notably SanDisk -- for use with flash drives. It also sold a version directly to end users called Pocket-Cache. Arkeia CEO Bill Evans said his company will no longer develop those products, and will soon reveal plans for offering support through a third party. Instead, Kadena's IP will be integrated directly into Arkeia's software.
Evans emphasized that Kadena has a different approach to block-based data deduplication than most of the products currently on the market, which use either fixed-block or variable-block approaches to identifying duplicate data. Kadena uses a sliding-window approach, which creates a different-sized "window" depending on the application being deduplicated.
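To make the distinction concrete, the sketch below shows the general idea behind variable-block, content-defined chunking, where a hash computed over a sliding window of recent bytes decides where chunk boundaries fall, so an edit early in a file shifts only nearby boundaries rather than every block after it. This is a generic illustration with made-up constants and helper names, not Kadena's proprietary algorithm or any shipping product's implementation.

```python
import hashlib

MASK = 0x1FFF                     # boundary when (h & MASK) == MASK; average chunk ~8 KB
MIN_CHUNK, MAX_CHUNK = 2048, 65536  # keep chunk sizes within sane bounds

def chunks(data: bytes):
    """Yield variable-sized chunks whose boundaries depend on content.

    The hash uses shift-and-add over a 32-bit window, so bytes older than
    roughly 32 positions shift out -- a crude stand-in for a true rolling
    (Rabin-style) fingerprint.
    """
    start = 0
    h = 0
    for i, byte in enumerate(data):
        h = ((h << 1) + byte) & 0xFFFFFFFF
        size = i + 1 - start
        if (size >= MIN_CHUNK and (h & MASK) == MASK) or size >= MAX_CHUNK:
            yield data[start:i + 1]
            start = i + 1
            h = 0
    if start < len(data):
        yield data[start:]          # final partial chunk

def dedupe(streams):
    """Store each unique chunk once, keyed by its SHA-256 fingerprint.

    Returns the chunk store and, per stream, the list of fingerprints
    ("recipe") needed to reassemble it.
    """
    store, recipes = {}, []
    for data in streams:
        recipe = []
        for c in chunks(data):
            fp = hashlib.sha256(c).hexdigest()
            store.setdefault(fp, c)   # only the first copy is kept
            recipe.append(fp)
        recipes.append(recipe)
    return store, recipes
```

Because boundaries are chosen by content rather than fixed offsets, two backups of a file that differ by a small insertion still share most of their chunks, which is what gives variable-block approaches their space savings over fixed-block schemes.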
Look here for our data deduplication cheat sheet.
Data archiving reduces data backup workload prior to data deduplication
In enterprise data storage, the theme of the year for storage managers is "do more with less," and some users are staying on top of data growth without breaking the bank by archiving their inactive data before it ever enters the data backup cycle.
Robert Stevenson, managing director of storage at TheInfoPro (TIP), said that so far in TIP's Wave 13 storage study being conducted this fall, there has been a "dramatic" shift in the way organizations address application and email archiving. "It's gone from 60% of respondents a year ago saying the storage team handles archiving policy, to about 20%," Stevenson said. Application teams, server teams and business units are taking over those processes.
Stevenson said the rate of growth on the archive tier of data is expected to exceed that of tier 1 and tier 2 by the end of the year, according to 144 survey respondents to the study so far. Respondents anticipate a 38% rate of growth in archive repositories compared with 31% for tier 2 and 26% for tier 1. This relates to the more aggressive tiered storage plans users have also been putting in place to cope with data growth on static budgets.
While archiving requires management from the producers of data and those responsible for regulatory compliance within an organization, storage managers at organizations with data archiving in place have found additional benefits, particularly in data backup, another place where data growth has challenged budgets and infrastructures this year.
SAN sales boosted by need for storage efficiency
Compellent Technologies Inc. joined other storage vendors in saying that customer budgets thawed a bit last quarter.
In Compellent's case, storage efficiency features in its StorageCenter SAN provided some of the heat leading to the thaw. Those efficiency features included Compellent's automated tiered storage. However, industry analysts are waiting for Compellent to scale out and add more enterprise features.
Compellent reported revenue of $32.2 million, up 31% from the same quarter last year and 12% from the second quarter. Net income was $2.3 million, up from $1.5 million in the second quarter.
"Spending was significantly better than two quarters ago, but users are still being careful with their budgets," Compellent CEO Phil Soran said on Wednesday's earnings call. "The good news for us is that they can't quit storing data."
Compellent added 136 new customers in the quarter, up from 115 in the previous quarter.
Read the rest of this story on Compellent's SAN sales.
Symantec releases Linux version of Backup Exec System Recovery
Symantec Corp. released a new version of its bare-metal restore software that adds support for Linux servers and tightens integration for centralized management of server backups.
Backup Exec System Recovery (BESR) 2010 will ship this week, but the new support for backing up and restoring entire server images -- including the operating system -- or individual files from Red Hat or SUSE Linux servers won't be generally available until December.
Once that happens, it will be the first non-Windows operating system support available for BESR, said senior product marketing manager Susie Spencer. According to Symantec's support forums, workarounds were previously possible to make Backup Exec System Recovery work with some Linux file systems. While the new support would appear at first to overlap with the company's Linux-capable Veritas NetBackup product, Spencer said the BESR support is intended for smaller businesses with a few Linux machines that they'd like to add to an existing BESR deployment for Windows.
Read the full story on Backup Exec System Recovery 2010.
Hot sites and warm sites: Choosing the right option for your disaster recovery plan
Disaster recovery (DR) professionals often have a difficult time making a clear distinction between hot and warm disaster recovery sites. A hot site ensures critical business systems will remain available during an outage. Or at the very least, the hot site is capable of recovering critical systems within minutes of an outage. A hot site typically includes everything a company needs to quickly resume operations: computer hardware, key applications, telecommunications, peripherals, utilities and workspace for employees. A hot site is, in effect, a duplicate of your primary site.
Aside from hardware, vendors that provide hot site facilities typically offer other services as well. Those services might include provisioning or reconfiguring of software, electronic data vaulting of tapes, offsite data replication, and network monitoring and management.
By contrast, warm sites may take 24 to 48 hours to recover data, making them better suited for backing up nonessential systems. Although usually partially equipped, a warm site may not provide all services. For example, users may be required to provision their applications or purchase additional servers when declaring a disaster.
Focus on data recovery needs
Although these distinctions are noteworthy, getting bogged down in buzzwords isn't helpful. The proper question to ask isn't whether you should use a hot site vs. a warm site, said Bill Hughes, director of consulting services for disaster recovery with SunGard Availability Services in Wayne, Pa.
"It's more important to know which data you need to focus on, how quickly you'll need to recover it and knowing what your options are," Hughes said. "The reality is you have to analyze this at the systems level."
Caltrol refreshes data storage infrastructure with Pillar Data Systems iSCSI SAN
Industrial process-automation equipment manufacturer Caltrol Inc. chose Pillar Data Systems' Axiom iSCSI storage area network (SAN) over an upgrade to a new Hewlett-Packard (HP) Co. LeftHand Networks SAN to consolidate its newly virtualized server environment, saying Pillar offered more data storage capacity at the same price and performance.
Las Vegas-based Caltrol was a LeftHand customer from the days before HP acquired the iSCSI vendor. When director of IT Steve Murphy joined the company a year ago, Caltrol had a nearly five-year-old LeftHand iSCSI SAN with a terabyte of capacity spread over four 250 GB Serial ATA (SATA) drives, as well as two newer LeftHand iSCSI SAN deployments totaling 1.8 TB. The rest of Caltrol's approximately 4.5 TB of data was direct-attached to old servers, including two servers dating to 1998.
Murphy set out to consolidate, centralize and monitor Caltrol's IT infrastructure. "Our original goal was to expand the LeftHand Networks deployment," he said.
See our iSCSI SAN topics page.
Additional storage news
Check out last week's storage channel news roundup here.