Today's crop of x86 processors from Intel and AMD delivers incredible performance and scalability at a great price, making it difficult for traditional big-iron RISC server platforms like Sun SPARC and IBM PowerPC to keep up. Customers around the world are moving toward a platform change, as hardware virtualization tools like VMware running on x86 enable them to squeeze even more utility out of these new servers.
Cheaper, more powerful and better-utilized servers mean that customers will make fewer and smaller hardware purchases. With margins for commodity hardware already razor thin, solution providers like you must do more than source and install hardware.
During a platform change from legacy Unix to x86 platforms, you will probably help rack the new servers and install the operating systems. The real work begins after that point -- moving applications to the new servers -- and that is where many more business opportunities are to be found.
Inter-OS migrations occur at many levels. There are five fundamental ways to migrate data, as James Damoulakis points out in the September 2005 Storage magazine article, "Does host-based replication still make sense?" In this tip, I will focus on practical solutions for one-time, OS-independent data migrations.
Application layer migrations
Some applications, like BEA WebLogic clusters, have data replication services built right in. Others, like SAP, are thin, parallel application servers connecting to a centralized data source. Sometimes content management tools are already in place to keep Web server farm content in sync. In all of these situations, the applications are reinstalled on the new servers and brought into the farm. Once the new platforms are tested, the old servers can be removed.
Database layer migrations
Most modern relational database management systems (RDBMS) contain a replication mechanism that is cross-platform. Oracle, for example, can create a standby database on a new server of any supported platform. While waiting for the cutover, the standby database is kept current with transaction logs shipped from the source server. At the cutover point, confirm that the replicated database is in sync, shut down the old database and start up the new one.
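For an Oracle physical standby, the cutover itself boils down to a few SQL statements run on the new server. The following is only a sketch -- exact steps vary by Oracle version and standby configuration, and activation may require a restart of the instance before it can be opened:

```sql
-- On the standby (new platform), after the last archived logs are applied:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;  -- stop managed recovery
ALTER DATABASE ACTIVATE STANDBY DATABASE;                -- promote standby to primary
-- Depending on version, restart the instance, then open it for users:
ALTER DATABASE OPEN;
```

Once the new primary is open, point the application connection strings at the new server and shut the legacy database down.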
Host-based migrations
Host-based replication tools work well when the content to be moved is relatively small and file based. A very common and reliable way to move large numbers of files with permissions and directory structures intact is rsync, an open source tool that leverages secure network protocols like SSH to copy files from a source to a destination server. It is a one-way migration tool that supports differential updates. Set up a script that runs regularly leading up to the cutover point. The first run will take the longest because it makes a full copy of the data; subsequent runs should be much faster because only the changes are transferred. At the cutover point, shut down the application services on the source host and run the rsync script one more time. Then start the application on the destination server and have a party: the migration is done.
Sometimes the migration can't happen all at once. There may be a requirement to keep the source and destination servers loosely in sync for some period; enter Unison. This open source tool, like rsync, can securely move data between Windows and almost all Unix variants. Its killer feature is that it allows updates on both sides of the replication, enabling a broader cutover period.
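A minimal Unison profile for such a two-way sync might look like the following. The hostname and paths are hypothetical; consult the Unison manual for the full option list:

```
# ~/.unison/migrate.prf -- example profile; names are hypothetical
root = /data/app
root = ssh://newhost//data/app
batch = true      # run unattended; conflicts are skipped for manual review
log = true        # record what was propagated on each run
```

Invoking `unison migrate` then reconciles changes made on either server until you are ready to cut over for good.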
The final solution we will discuss for host-based replication is tape backup and restore. No, wait -- everybody hates that solution; it is clunky and slow. Avoid it if at all possible.
Network-based migrations (LAN and SAN)
We are storage professionals; why wouldn't we consider traditional storage technology when it comes to data migrations? If the customer's data is already on the SAN, traditional array- and SAN-based replication services are available. Storage professionals already know about snapshot and remote replication services like EMC's TimeFinder and SRDF. SAN virtualization may be an attractive option as well.
If the customer's data is on a SAN and hosted by a Sun server, it is likely running on Symantec's Veritas Storage Foundation File System and Volume Manager. Built into the base Storage Foundation product, portable data containers (PDCs) allow existing disk groups to be exported from the legacy host and imported to a new host, data intact. Even better, Symantec licenses are no longer node-locked, which means the Sun Veritas license can be transferred to the Linux or Windows server.
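At a high level, the PDC move is a file system conversion and deport on one side and an import on the other. The commands below are illustrative only -- the device and disk group names are made up, the target OS argument depends on your platform, and fscdsconv requires the cross-platform data sharing feature of Storage Foundation:

```shell
# On the legacy Sun host: unmount, convert the file system for the
# target OS, then deport the disk group.
umount /data/app
fscdsconv -f /var/tmp/fscdsconv.rec -t target_os /dev/vx/dsk/appdg/appvol
vxdg deport appdg

# On the new host: import the disk group and mount the volume.
vxdg import appdg
mount -t vxfs /dev/vx/dsk/appdg/appvol /data/app
```

Because the data never leaves the array, the cutover window is limited to the unmount, conversion and import steps rather than a full data copy.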
If, during the discovery process, you learn that the customer's important data is stored on local direct-attached storage (DAS), it may be time to make a NAS or iSCSI play. Moving the data to a storage array not only provides better protection from failure, it also enables easier data mobility down the road.
About the author: Brian Peterson is an independent IT Infrastructure Analyst. He has a deep background in enterprise storage and open systems computing platforms. A recognized expert in his field, he held positions of great responsibility on both the supplier and customer sides of IT.