With the down economy putting pressure on technology refreshes, data migration services and tools are becoming more important to customers. Their data continues to grow, but during a refresh they may need to buy a lesser class of storage hardware than they have installed today. Add the fact that there's simply less tolerance for downtime than in the past -- and a much higher price tag attached to lengthy data migrations -- and the need for sophisticated data migration services becomes clear. Learn more about these dynamics, find out where data migration projects most often go wrong, and see why storage virtualization hasn't solved the data migration problem in this Project FAQ with expert Brian Peterson, storage architect for a large reseller. Read the FAQ below.
Why are customers more interested in data migration services now than in the past? And what kind of company is typically interested -- are you looking for a certain size company or a certain vertical that would be more inclined to need data migration services?
It's interesting that you ask the question that way -- data migration now as opposed to data migration in the past. I think data migration has always been an issue or something that our customers have been concerned with. I think there's more turnover in storage now based on the changing economy. For example, three years ago, people may have bought larger machines or been able to afford storage arrays that were more expensive. Now in tighter times, people are interested in doing technology refreshes, maybe trying to right-size the technology type. That means that they need to move larger and larger quantities of [data] because the growth is still occurring. They need to move it to, sometimes, dissimilar storage array types -- it may not be an EMC Symmetrix to an EMC Symmetrix. They may want to go from an EMC Symmetrix to a NetApp FAS device. The mechanisms to move the data [are] really very challenging and important to the customer. Additionally, data migration is becoming a bigger issue because people obviously have less tolerance for downtime. So people want to move the data as "online" as possible. … So the question really becomes: How can I move this data from one array type to another -- larger and larger amounts of data with less and less outage? That's part of what's driving all this interest in data migration tools.
My customers are more and more interested in finding a way to do more with less. Moore's Law says storage is going to get cheaper over time, obviously, for the same amount of capacity, but I think growth rates may be outpacing Moore's Law in many cases. So that may mean they need to go with more cost-effective storage platforms than they would have bought previously -- potentially even beating Moore's Law on cost.
I don't think I could put it to a certain vertical. What I see is that customers who have large amounts of structured content are looking for more and more non-disruptive ways to move their data. Most of our time is spent with non-disruptive [techniques].
First of all, we're always looking for ways to speed up the data migration process, because storage sitting on the floor of the data center is costing money -- whether that's power and cooling, lease payments, maintenance payments or depreciation. Any time a migration is occurring, you're paying for the old storage array plus the new one. That overlap can, generally speaking, cost twice as much per month as having just one storage array on the floor.
It's like when you buy a new house before you've sold your own house and you're paying two mortgages at the same time?
Perfect example. And so the faster you can buy the new house, get your stuff moved into there and sell the old house, the better off you'll be.
One of the problems people have with data migrations, especially ones that require a lot of labor and effort, is that [they] may take [a long time]. I've seen scenarios where it's taken people as long as a year to move data off a large storage array of, say, 50 or 100-plus TB. That's probably on the long side; they move one application at a time, things like that. People who are doing it extremely well can move -- let's talk in round numbers -- a 100 TB array in less than a month, maybe with no outages, using host-level migration tools, for example.
The number of people probably depends on the amount of time and the amount of data that you want to move. People who are paying for that year overlap might be doing it with one person just as free time allows. People who are taking this overlap very, very seriously could put two or three or four people on a migration, dedicated to that process for a month, and just knock it out.
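The overlap-cost and migration-speed tradeoff Peterson describes can be put into rough numbers. Here is a back-of-the-envelope sketch in Python; the monthly array costs are hypothetical round numbers chosen for illustration, not figures from the interview:

```python
# Back-of-the-envelope migration math. All dollar figures are hypothetical
# round numbers; only the "100 TB in about a month" scenario comes from the text.

TB = 1024 ** 4  # bytes per terabyte (binary)

MONTHLY_COST_OLD = 10_000  # lease/maintenance/depreciation on the old array, per month
MONTHLY_COST_NEW = 10_000  # same for the new array

def overlap_cost(months: float) -> float:
    """While both arrays are on the floor, you pay for both."""
    return months * (MONTHLY_COST_OLD + MONTHLY_COST_NEW)

def required_throughput_mbps(size_tb: float, days: float) -> float:
    """Sustained MB/s needed to move size_tb terabytes in the given number of days."""
    return (size_tb * TB) / (days * 86_400) / 1e6

# A one-month migration vs. a one-year migration of the same array:
print(f"1-month overlap cost:  ${overlap_cost(1):,.0f}")
print(f"12-month overlap cost: ${overlap_cost(12):,.0f}")
print(f"100 TB in 30 days needs ~{required_throughput_mbps(100, 30):.0f} MB/s sustained")
```

The striking part is the last line: moving 100 TB in a month requires only about 42 MB/s of sustained throughput, well within the reach of ordinary hardware. The year-long migrations Peterson describes aren't bandwidth-bound; the bottleneck is labor, planning and outage windows.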
So in terms of cost, it's better to do it faster?
Yes. Another analogy: these migrations are painful, and a lot of people just want to rip the band-aid off quickly. But without tools, planning and experience, it's very tough to get it done quickly.
We underestimate the complexity of a migration -- the number of hosts involved, the number of different host types, compatibility between storage arrays. For example, a customer may have a host connected to a Hitachi storage array and want to migrate to a NetApp [array]: Are the driver and parameter settings compatible? Can you connect to both arrays at one time?
We underestimate the business's tolerance for outages and the business's tolerance for change -- those are other areas where things fall apart.
One of the last areas that's so important is planning these migrations in terms of laying out the steps and being realistic about the time required so that you can add resources to shorten the window. I see it all the time. Customers, especially when they do it themselves, expect to make a migration happen in a month, and they want to do it grass-roots. They'll just start moving the data, and it quickly bleeds into months and months or a year.
Do you get called in, in that kind of situation, to fix something? Say a customer wanted to do it on their own and ended up with a disaster.
I can't say that we get called in, but oftentimes we'll check up on the process after we've sold new storage and check up on the success of it. Oftentimes, we run into management frustrated that the migrations aren't complete, and we get involved [then]. I've found that people don't admit defeat very easily; they're not ready to throw in the [towel]. In our case, we have to offer a better, quicker way -- or demonstrate the cost of not getting this done.
In my mind, storage virtualization's big promise -- the very thing it was supposed to solve -- was non-disruptive, easy data migration from one storage pool to another, whether for ILM tiering or for technology refreshes, like replacing a storage array. In all honesty, I think storage virtualization has failed to deliver on that promise. The first reason storage virtualization hasn't really come through for us is that we may put it in place so we don't have to do array replacements every six months or every year, but every three or four years the storage virtualization technology itself is probably going to need to be refreshed or replaced. In those cases, you're still left doing some other migration outside the storage virtualization landscape. The second [reason] is that it adds a lot of expense and complexity to the infrastructure that people have a hard time getting a return on. If I'm going to virtualize 1 TB of storage just so I don't have to migrate it every three or four years, that's hard to justify. The last [reason] is reliability, maintainability and visibility. Virtualization is designed to obscure the back-end storage from the applications. Through that obscurity, we kind of lose track of what's where, and SRM tools and things like that can't see through the virtualization. I don't know of very many people who've successfully deployed SAN-based virtualization and are sticking with it.
So do you have people who are backing out of it, who have implemented it and decided to abandon it?
In some cases, yes.
That's not easy.
Ironically, if you move out of a virtualized infrastructure onto more traditional storage arrays, it's exactly like moving from Storage Array A to Storage Array B. The entire virtualized infrastructure is the source, and the standalone arrays are the destination, instead of old to new.
SearchStorage.com did a Spring 2009 Purchasing Intentions Survey, in which 70% of respondents said they have no plans to buy storage virtualization in 2009, and 52% said they have not virtualized any of their storage.
To a certain extent, we've had storage virtualization for a long time. And I think commonly we refer to storage virtualization as virtualization that exists in the storage network -- some layer between the hosts and the storage arrays. If you think about it closely, we've had virtualization within storage arrays for 15 years: the idea that I could have multiple tiers in a single storage array, and I may even be able to move data non-disruptively within that storage array. And that storage array obscures the fact that there are physical disk drives behind it; it presents logical disk drives -- LUNs -- to the host. At the host, we've also had virtualization with volume managers: you could present a bunch of LUNs to the host, and it would slice and chop and RAID and protect those things below the file system, and the file system didn't have to know anything about the back end. Those two types of virtualization are still alive and very, very helpful. In my mind, it's the storage network virtualization that hasn't paid off yet.
I can probably divide them into three groups, with five tools. The first group is host-based migration tools, the second is array-based migration tools, and the third is storage network-based tools. At the host level, we have two types: you can move the data at the file level -- individual files -- or at the block level, at the logical volume layer, for example.

File-level migrations are probably the oldest. Host-level migrations have been around forever, starting with [the copy command]. In Unix infrastructures -- and even, to some extent, Windows infrastructures -- the open-source tool rsync is extraordinarily powerful and useful for data migrations. It allows migrations within a single host or to an external host and supports a myriad of connection methods, including encrypted tunnels with incremental updates. It's an extremely powerful host-based, file-level replication tool.

On the block side, for host-based migrations, I find that the logical volume managers on the host make amazing data migration tools. Oftentimes, you can move data non-disruptively by mirroring the new storage to the old storage at the logical volume layer, without an outage. Almost all operating systems now have a logical volume manager that can do a non-disruptive migration. Windows is probably the most notable exception -- Microsoft doesn't make that easy -- but there are add-on products like Symantec's Foundation Suite, and I've seen customers standardize on it so they can do non-disruptive, host-based migrations in Windows.
So if a solution provider decides to offer data migration services, it's not the kind of thing they can casually walk in and offer. They need to really understand what type of data the customer has and what tools are available.
Absolutely. You really need to spend a lot of time assessing the practice you're going to build around data migration and deciding what tools you want to be competent in. [And] really take some time to understand the customer's environment, obviously, to select the best method for their particular data type, their tolerance for outages and their tolerance for host manipulation. One of the most common complaints I hear from people who want to do host-level migrations is that the storage team doesn't have access to the operating systems, or that the application group is unwilling to let any change occur at the operating system level. It really comes down to these political, organizational boundaries, which make host-level migration the most difficult, in my experience.
About the expert
Brian Peterson is a storage architect for a value-added reseller, with a background in enterprise storage and open systems computing platforms. A recognized expert in his field, Brian has held positions of responsibility on both the supplier and customer sides of IT.