If you do on-site storage work for your customers, from time to time you might be called on to script backup procedures.
In many environments, Perl scripting works well. I've scripted backup jobs for Exchange and on an EMC Celerra using Perl and have a few tips to share.
For the Exchange job, my financial services client was using SnapManager for Exchange on a NetApp LUN. Although the Exchange server is obviously a Windows server, I chose Perl because the customer's Symantec NetBackup server ran Solaris, and scripts written for Windows don't port easily to Solaris. Since Perl is pervasive on Solaris, and Linux is heavily used at financial institutions, the choice made sense.
In my client's backup of Exchange, we configured SnapManager for Exchange to take several important steps:
- Quiesce the database (halt I/O for several seconds)
- Flush the data to the LUN
- Take the snapshot
- Replicate data to secondary array based on the snapshot
- Truncate the log files
- Verify the database (this can be done in a separate job)
Most financial services companies choose to offload the backup job to a secondary array, allowing them to run backups during the day without interrupting service levels for the business users. The key to a successful backup is ensuring that the secondary array has a good point-in-time copy of the data. In the Exchange scenario, NetApp uses SnapMirror to replicate the data to the secondary array, so to determine when the secondary array was up to date, I looked at the lag time between the replicated volumes. When the hour field of the lag time read "00" for every Exchange volume, that was my cue to kick off the backup. I set a counter in my Perl script to count the total number of replicated volumes and verified that every volume's hour field was "00." If those conditions were met, a "touch" file would be created.
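The lag-time check can be sketched in Perl along these lines. This is a minimal reconstruction, not the original script: the `snapmirror status` output format (a whitespace-separated line per relationship with an hh:mm:ss lag field) and the function names are assumptions for illustration.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Decide whether every replicated volume is current. Expects one status
# line per SnapMirror relationship, with a lag field shaped "hh:mm:ss".
# $expected is the total number of Exchange volumes being replicated;
# header lines and anything else that doesn't match are skipped.
sub all_volumes_current {
    my ($expected, @lines) = @_;
    my $count = 0;
    for my $line (@lines) {
        my @fields = split ' ', $line;
        next unless @fields >= 4 && $fields[3] =~ /^(\d+):\d+:\d+$/;
        return 0 unless $1 eq '00';    # hour field must read "00"
        $count++;
    }
    return $count == $expected;        # every volume seen and current
}

# Create the touch file that cues the backup server.
sub create_touch_file {
    my ($path) = @_;
    open my $fh, '>', $path or die "cannot create $path: $!";
    close $fh;
}
```

In production the status lines would come from running `snapmirror status` against the filer (for example over rsh or ssh); isolating the parsing in its own function keeps that part testable without a live array.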
I used the same methodology for a Celerra backup to tape. The file systems were replicated via Symmetrix Remote Data Facility (SRDF) on an EMC Symmetrix DMX array, and a point-in-time copy, called a Business Continuance Volume (BCV), of the file system was created off of the secondary array. The Perl script would sit in a loop until the BCV was synchronized with the second copy of the file system; the script then split the BCV for a point-in-time copy. Once the volumes were split off, the script created a touch file.
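The wait-and-split loop might look like the sketch below. The `symmir` invocations follow EMC Solutions Enabler conventions, but the exact flags, the query output format, and the device group name are assumptions here, not details from the original script.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Return true once every BCV pair in the query output reports
# "Synchronized"; returns false while any pair is still syncing
# or before any pair state has been reported at all.
sub bcvs_synchronized {
    my (@lines) = @_;
    my @states = grep { /\b(Synchronized|SyncInProg|Split)\b/ } @lines;
    return 0 unless @states;                           # nothing reported yet
    return scalar(grep { !/Synchronized/ } @states) == 0;
}

# Poll until the BCVs are synchronized, then split them off for a
# point-in-time copy and drop the touch file that cues the backup.
sub wait_and_split {
    my ($device_group, $touch_path) = @_;
    until ( bcvs_synchronized(`symmir -g $device_group query`) ) {
        sleep 60;                                      # poll once a minute
    }
    system('symmir', '-g', $device_group, 'split', '-noprompt') == 0
        or die "symmir split failed: $?";
    open my $fh, '>', $touch_path or die "cannot create $touch_path: $!";
    close $fh;
}
```

Keeping the state parsing in `bcvs_synchronized` means the loop logic can be exercised against canned query output, without touching a Symmetrix.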
In the case of both the Exchange backup and the Celerra backup, the touch file was the cue for the backup server to initiate the backup job of the volumes. The batch processing server would have a job running for hours each night looking for this touch file. Once it saw the touch file, the batch processing server would kick off the backup of the application or file system. (If the touch file never appeared during the nightly backup window, the job would exit with a failed exit code and send a notification to the backup operators to let them know that the process had failed.)
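The batch server's watcher job reduces to a polling loop with a deadline. A minimal sketch follows; the touch-file path, the backup command, and the timeout values are illustrative assumptions, and the real job would notify operators through whatever alerting the shop uses.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Wait up to $timeout seconds for the touch file, polling every
# $interval seconds. Returns 1 if the file appeared, 0 if the
# backup window expired first.
sub wait_for_touch_file {
    my ($path, $timeout, $interval) = @_;
    my $deadline = time + $timeout;
    while (time < $deadline) {
        return 1 if -e $path;
        sleep $interval;
    }
    return 0;
}

# Hypothetical nightly window: six hours, polling once a minute.
if (wait_for_touch_file('/backup/flags/exchange.ready', 6 * 3600, 60)) {
    system('/opt/backup/bin/start_backup.sh') == 0
        or die "backup job failed: $?";
} else {
    warn "touch file never appeared; notifying backup operators\n";
    exit 1;   # failed exit code fails the batch job
}
```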
About the author
Seiji Shintaku is a principal consultant for RTP Technology. Before joining RTP Technology, he was global NetApp engineer for Lehman Brothers, Celerra and DMX engineer for Credit Suisse First Boston, principal consultant for IBM, and global Windows engineer for Morgan Stanley. RTP Technology is a VAR for storage-related products and professional services for NetApp, EMC, F5, Quantum, VMware and Brocade. He can be reached at email@example.com.