IT channel takeaway: Although this is a vendor Q&A, it offers several tips to help you work with customers on making smart disaster recovery investments.
With Sean Derrington, director of storage management & CDP/R for Symantec Corporation.
Question: Disaster recovery has been getting a lot of media play lately, but studies show that most enterprises are still woefully unprepared. What is the quickest and easiest way for IT executives to get started on a fully functional system?
Derrington: IT organizations should begin by prioritizing the applications throughout the data center according to their relative importance to the business. They should negotiate with the business the tolerance for data loss (recovery point objective, or RPO) and the time to recover (recovery time objective, or RTO) in the event of a failure. This negotiation should cover both planned and unplanned downtime — anything that interrupts the business application. By aligning applications with business priorities, the IT organization can then implement the infrastructure needed to protect those applications not only from local failures (high availability) but also from site failures (disaster recovery). Moreover, IT organizations need to consider not only resuming application services on another server (possibly over any distance), but also recovering the data (possibly in a secondary location). Lastly, and perhaps most importantly, organizations must regularly test the high availability/disaster recovery infrastructure and processes to ensure that both applications and data can be recovered in the event of a disaster.
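The prioritization step Derrington describes can be sketched in a few lines of code. This is a minimal illustration with entirely hypothetical application names and objectives, not a Symantec tool or formula: each application carries the RPO and RTO the business negotiated, and sorting on those targets yields a recovery priority list (tighter objectives mean higher criticality).

```python
from dataclasses import dataclass

@dataclass
class App:
    """One data-center application with its negotiated recovery objectives."""
    name: str
    rpo_minutes: int  # recovery point objective: tolerated data loss
    rto_minutes: int  # recovery time objective: tolerated downtime

# Hypothetical inventory for illustration only.
apps = [
    App("reporting", rpo_minutes=1440, rto_minutes=2880),
    App("order-entry", rpo_minutes=0, rto_minutes=15),
    App("email", rpo_minutes=60, rto_minutes=240),
]

# Sort ascending on (RTO, RPO): the applications the business can least
# afford to lose come first, which is where DR investment should go first.
by_priority = sorted(apps, key=lambda a: (a.rto_minutes, a.rpo_minutes))

for rank, app in enumerate(by_priority, start=1):
    print(f"{rank}. {app.name}: RPO {app.rpo_minutes} min, "
          f"RTO {app.rto_minutes} min")
```

In practice the sort key would weigh more than two numbers (revenue impact, regulatory exposure, dependencies), but even this simple ranking forces the RPO/RTO negotiation with the business that the answer recommends.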
Question: What is the best way to ensure that the technology investments made today will form an adequate foundation for the future?
Derrington: Organizations should focus on advanced, market-proven solutions that are hardware-independent. Application clustering and data replication services should not be tied to one particular operating system or one particular storage array controller. IT infrastructure is complex and heterogeneous, and if one thing is certain, it will continue to change over time. High availability and disaster recovery solutions that support heterogeneous storage, application, database and server (UNIX, Linux, and Windows) platforms will ensure the robust solutions selected today can support the new business initiatives of tomorrow. Clustering solutions that support local and wide-area clusters, and data replication solutions that support heterogeneous synchronous and asynchronous replication, will ensure that the software infrastructure can adapt as business requirements change.
Question: How does the new Storage Foundation 5.0 fit into a long-term disaster recovery strategy?
Derrington: Veritas Storage Foundation is a software infrastructure that supports heterogeneous server (both physical and virtual), storage, application and database environments. Veritas Storage Foundation and Veritas Cluster Server enable organizations to provide the appropriate level of protection for any application over any distance by coordinating application clustering with data replication services. Additionally, because it is a software infrastructure, it lets organizations maximize not only current IT assets (e.g., N+1 clusters, heterogeneous storage replication) but also operational resources.
This 3 Questions originally appeared in a weekly report from IT Business Edge.