Parallel NFS suffers delays due to Linux client work, 'monster' spec
UCLA's Institute for Digital Research and Education once hoped storage systems supporting the long-promised parallel Network File System technology would be the answer to its bandwidth woes. But, in April 2011, the Institute gave up the wait and purchased a proprietary system from Panasas Inc. to give its distributed scientific applications running on clustered servers the direct, parallel access to storage they needed.
More than a year later, Scott Friedman, chief technologist at IDRE, said he still hasn't found a parallel NFS (pNFS)-based storage product that he would consider using, although a limited number do exist. Panasas postponed delivery of its pNFS support, and is now targeting early next year. IDRE's other main storage vendor, BlueArc, announced prior to its acquisition by Hitachi Data Systems (HDS) that it would make pNFS available this year. Instead, an HDS product marketing manager said the specification still needed work. More recently, HDS said only that pNFS is on its roadmap.
Read the full story on the delays of pNFS.
NetBackup 5220 keeps Lotus F1 racing team data safe
The British race team has used NetBackup since 1997, when the software was sold by OpenVision, and backs up to tape and a virtual tape library (VTL) in its Enstone, U.K.-based data center. Lotus began complementing that protection this year, first with a Symantec Backup Exec 3600 appliance at the track, and then with two NetBackup 5220 appliances in the data center.
Auto racing is a data-intensive business. According to CIO Graeme Hackland, Lotus F1 generates about 25 GB to 50 GB at each of its 20 races per year. It also must protect data generated at the factory during tests and simulations. For example, one high-resolution camera inside a car's gearbox tracks gear changes and generates 12 GB for six seconds of footage. The team also stores the back-office data needed to run its business.
Read the full story on the Lotus F1 racing team's experience with NetBackup 5220.
Disaster recovery plans prove valuable in dealing with Hurricane Sandy
Hurricane Sandy, which ravaged parts of the East Coast last month, proved that companies with a good disaster recovery plan don't have to feel powerless, even when they lose power in their offices or data centers.
Prepared firms used a variety of business continuity best practices, with a particular focus on technology and personnel, to keep IT services running despite one of the most damaging hurricanes in U.S. history.
24 Seven Talent, an international staffing company for creative industries, lost power at its downtown Manhattan headquarters on the evening of Oct. 29, and did not regain it until Nov. 3. Doug Feltman, 24 Seven's director of systems and applications, said the office had closed Monday, Oct. 29 as a precaution, but he needed to perform the company's main IT job -- payroll -- the following day.
Nimble Storage: More than half of mission-critical data is virtualized
Seeking cost savings and greater flexibility, more than half of the IT professionals who responded to a survey by hybrid storage vendor Nimble Storage Inc. are virtualizing business-critical applications.
The survey showed that 55% of the 476 respondents -- contacted as members of Nimble's marketing mailing list -- have virtualized more than half of their mission-critical data, and 66% expect to do so within the next 12 months. Nearly two-thirds of the responses came from U.S. businesses with fewer than 1,000 employees. According to the results, databases are the most frequently virtualized workload, followed by messaging systems, collaboration tools and business applications, in that order.
Read the full story on Nimble's survey about virtualization of mission-critical data.