Data center consolidations present solution providers with an opportunity to improve clients' operational efficiency, save money and reduce energy consumption. Erin Sanders, data center services expert at Acumen Solutions Inc., discusses the value of offering consolidation management services, why consolidation is applicable today, techniques for application migrations and even ways to avoid consolidation.
Table of contents
• Given the current economic situation, should solution providers still consider consolidation?
• How can solution providers begin consolidating clients' data centers?
• How can solution providers decide where applications should be relocated?
• How should solution providers migrate applications?
• How can virtualization help solution providers to avoid consolidations?
• About the expert
Erin Sanders: Yes, absolutely! In the past, consolidating data centers was as much an exercise in operational improvement as it was in cost reduction. These days, given most companies' limited access to budget dollars, operational efficiency alone is not sufficient justification for consolidation. For companies that have a significant number of data centers (often due to numerous mergers and acquisitions with little to no integration), though, the cost savings can be substantial. As a very general rule, consider consolidation if your organization has more than 15 medium to large data centers, unless there is a clear reason why you need that many.
Sanders: Whether or not a client forms an official, enterprise-wide project or simply designates a percentage of the application portfolio to move each year, solution providers should always start with forming a data center strategy. A strategy tells you what your environment looks like, what you want it to look like going forward and how to get there. A client's current state should include information about the data center, servers and other infrastructure, and, most important, applications. Analysis of this data should indicate which locations are to be maintained or expanded and which to decommission.
Remember, a data center strategy is primarily a financial activity and therefore the data center information should include operational costs. Retain facilities that are newer, larger and/or can be expanded in a way that is cost effective. Keep in mind that the No. 1 cost for most data centers is energy, so consider local energy rates as part of your analysis. Also, while you're gathering the infrastructure's profile data, make sure you include purchase or lease information that you will use later to determine which systems can be repurposed and replaced.
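Since the strategy is primarily a financial exercise, it can help to put rough numbers behind it. The sketch below compares hypothetical facilities by estimated annual operating cost, weighting energy by local rates as suggested above; every name and figure is an invented illustration, not real data.

```python
# Hypothetical sketch: ranking candidate data centers by annual operating
# cost, with energy (the No. 1 cost) priced at local utility rates.
# All facility names and figures below are illustrative assumptions.

def annual_cost(avg_load_kw, rate_per_kwh, pue, fixed_costs):
    """Estimate yearly operating cost for one facility.

    avg_load_kw  -- average IT load in kilowatts
    rate_per_kwh -- local electricity rate in dollars
    pue          -- power usage effectiveness (total power / IT power)
    fixed_costs  -- lease, staffing, maintenance, etc. per year
    """
    hours_per_year = 24 * 365
    energy = avg_load_kw * pue * hours_per_year * rate_per_kwh
    return energy + fixed_costs

facilities = {
    "DC-East":   annual_cost(400, 0.12, 1.8, 1_200_000),
    "DC-West":   annual_cost(250, 0.07, 1.5,   900_000),
    "DC-Legacy": annual_cost(150, 0.15, 2.2,   700_000),
}

# The cheapest facilities are the strongest candidates to retain or expand.
for name, cost in sorted(facilities.items(), key=lambda kv: kv[1]):
    print(f"{name}: ${cost:,.0f}/year")
```

Note how a smaller facility with a cheap local energy rate and a good PUE can beat a larger one outright; that is why energy rates belong in the analysis alongside lease and staffing costs.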
Sanders: There are two considerations for making this decision.
- The first is a facilities analysis. Where does the client have space and power? Alternatively, do they wish to obtain new data center space, either by building a data center or by obtaining leased or colocation space? These questions should have already been answered as part of the data center strategy.
- The second thing to consider is application analysis. Once a client has designated its future data centers, almost all subsequent decisions should be based on each application's requirements and characteristics. Some applications are highly data-intensive and must be close to their user base, although technologies such as WAN accelerators and local data caching can help minimize these requirements. Some data cannot reside in certain locations, or be hosted outside of specific locations, because of laws or other restrictions. For example, in general, you cannot host HR data on EU employees outside the EU. For companies in defense- or military-related fields, data associated with clients or classified products and services typically cannot leave the country. Also, don't forget that major applications, such as ERPs and CRMs, often have several interfaces, which generally requires all dependent applications to be hosted (and moved) together, or else performance can suffer.
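The interface-dependency point lends itself to a simple automation: treat applications as nodes, interfaces as edges, and compute connected components to get "move groups" that should travel together. This is a minimal sketch; the interface pairs are invented examples, and a real portfolio map would come from the application inventory gathered during the strategy phase.

```python
# Hypothetical sketch: grouping applications into migration "move groups"
# so that tightly interfaced apps (e.g., an ERP and its dependents) are
# relocated together. The interface map below is an invented example.

from collections import defaultdict

interfaces = [            # (app, app) pairs that exchange data directly
    ("ERP", "Warehouse"),
    ("ERP", "Billing"),
    ("CRM", "Billing"),
    ("Intranet", "Wiki"),
]

# Union-find over the interface graph yields the connected components.
parent = {}

def find(a):
    parent.setdefault(a, a)
    while parent[a] != a:
        parent[a] = parent[parent[a]]   # path halving keeps trees shallow
        a = parent[a]
    return a

def union(a, b):
    parent[find(a)] = find(b)

for a, b in interfaces:
    union(a, b)

groups = defaultdict(set)
for app in parent:
    groups[find(app)].add(app)

for members in groups.values():
    print("move together:", sorted(members))
```

Here the ERP, Warehouse, Billing and CRM systems form one move group, while the Intranet and Wiki form another that can be scheduled independently.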
Sanders: In general, there are three primary migration approaches: forklift, swing and virtualization (either physical to virtual or virtual to virtual).

1. Forklifts are physical moves of existing, as-is servers. It's risky to put infrastructure on trucks and drive it to its new home, particularly if it's a production server that has no counterparts. Thus, this should only be done when you are moving non-production servers or when you cannot use any other method. The most common reason to forklift is that legacy applications aren't certified for virtualization and have no install media or other way of logically migrating the application code and data. Two things to consider regarding forklift moves:
- Hard-coded IP addresses. If the application is very old, it's possible that the IP address is embedded in the application code, which means a developer must modify the IP within the code, or you must consider network-based solutions, such as VLANs.
- Replacement hardware. It's quite rare that a catastrophic event occurs during a forklift move, but hardware failures are fairly common, particularly hard drives. Before you forklift any production servers, consider arranging a quickship agreement with the hardware manufacturer(s) or buying a few strategic replacement parts. A shipping company will have insurance, but it doesn't solve the immediate outage issue. It only repays you after the event occurs.
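The hard-coded IP problem can be partially screened for ahead of a forklift move by scanning source and configuration files for IPv4 literals. This is a minimal sketch under stated assumptions: the sample string is invented, and a real sweep would also need to check databases and binary configuration stores that a text scan cannot reach.

```python
# Hypothetical sketch: flagging hard-coded IPv4 addresses in application
# source/config text before a forklift move. The sample line below is
# illustrative, not from a real application.

import re

IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def find_hardcoded_ips(text):
    """Return IPv4-looking literals whose octets are all in 0-255."""
    hits = []
    for match in IPV4.findall(text):
        octets = match.split(".")
        if all(0 <= int(o) <= 255 for o in octets):
            hits.append(match)
    return hits

sample = 'db = connect("10.1.2.33:5432")  # legacy: host is hard-coded'
print(find_hardcoded_ips(sample))   # flags "10.1.2.33" for remediation
```

Anything the scan flags is a candidate for either a code fix or a network-based workaround such as carrying the old subnet to the new site on a VLAN.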
For applications that currently exist on their own physical hardware, the virtualization process is called P2V (physical to virtual). A file is created that contains an image of the server (or an operating system and, in some cases, application code and data). Once you have the virtual server image, you can move it with little difficulty from one virtual host to another.
There are some things to consider when virtualizing applications, particularly for a data center consolidation/migration.
First, the applications must be certified by their manufacturers in order to be supported. Most corporate applications are certified, and even those that aren't are likely to work anyway, but the lack of certification means they won't be supported in a virtualized environment. Do the research and find out before finalizing your plan.
Second, remember that virtualization software licenses aren't cheap. Some clients virtualize every application even when only a few, or even one, will be hosted on a given server. Virtualization does present a lot of operational flexibility, and it may be worth deploying -- even if the numbers don't add up.
If, however, you have a lot of relatively new servers and available space and power, it might not make financial sense. Virtualization consolidates applications onto fewer physical servers, reducing the number of machines you need to own and operate. It also saves on power, which is the No. 1 ongoing cost of data center operations in most companies, and it can eliminate the need to upgrade facilities that are currently at electrical capacity.
Third, identifying virtualization candidates involves monitoring system components (e.g., CPU, RAM) and selecting candidates whose combined usage of those components does not exceed the host server's capabilities. Most companies set a standard of about 20 virtual clients to a single host, but we've found that large organizations can easily get 50 to 80 clients on a single host. Start small. You can always add clients to hosts later, once you've baselined performance and use.
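The candidate-selection step above can be sketched as a simple first-fit placement: take each candidate's measured peak CPU and RAM, and pack it onto the first host with enough headroom left. All capacities and measured peaks below are invented illustrations; real figures would come from the monitoring baseline the text describes.

```python
# Hypothetical sketch: first-fit placement of virtualization candidates
# onto hosts, keeping each host's combined CPU and RAM demand under a
# headroom threshold. Capacities and measured peaks are invented numbers.

HOST_CPU_GHZ = 64.0     # total CPU capacity per host
HOST_RAM_GB  = 512.0    # total RAM per host
HEADROOM     = 0.80     # plan to at most 80% utilization ("start small")

candidates = [           # (name, peak CPU GHz, peak RAM GB) from monitoring
    ("web-01", 4.0, 16), ("web-02", 4.0, 16), ("app-01", 10.0, 64),
    ("db-01", 22.0, 128), ("batch-01", 8.0, 48), ("file-01", 2.0, 24),
]

hosts = []               # each host: {"cpu": used, "ram": used, "vms": [...]}
for name, cpu, ram in candidates:
    for host in hosts:
        if (host["cpu"] + cpu <= HOST_CPU_GHZ * HEADROOM
                and host["ram"] + ram <= HOST_RAM_GB * HEADROOM):
            host["cpu"] += cpu
            host["ram"] += ram
            host["vms"].append(name)
            break
    else:                # no existing host has room; start a new one
        hosts.append({"cpu": cpu, "ram": ram, "vms": [name]})

for i, host in enumerate(hosts, 1):
    print(f"host {i}: {host['vms']} cpu={host['cpu']} ram={host['ram']}")
```

Keeping the headroom conservative at first mirrors the advice to start small: once performance is baselined, the threshold can be raised and more clients added per host.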
Finally, separate the task of virtualizing a server from the migration. With virtualization, you'll know relatively quickly whether the application works. But determining performance may take longer. Try to virtualize a month or so before the migration so that if something goes wrong, you know where to look.
You may already have an inventory of virtual servers. In this case, all you'll have to do is set up a new virtual host in the new environment and copy the images of the virtual clients over to the new host (i.e., V2V). It's really pretty simple.
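Even though a V2V copy is simple, it is worth verifying that the image arrived intact before decommissioning the source. One common technique, sketched here as an assumption rather than any particular vendor's procedure, is to hash the image file before and after the transfer; the file path in the comment is illustrative.

```python
# Hypothetical sketch: verifying a virtual disk image after copying it to
# the new host (V2V). Hashing before and after the transfer catches
# silent corruption. File paths in the comments are illustrative.

import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Hash the file in 1 MB chunks so multi-GB images don't exhaust RAM."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# After copying, e.g., a source app01 image to the new datastore:
# assert sha256_of(source_path) == sha256_of(copied_path), "image corrupted"
```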
Regardless of how you choose to migrate applications, testing is an important component. In organizations that have formal application development processes, there is a tendency to test more than necessary. Full regression testing and unit testing, for example, are usually unnecessary, because the application code itself isn't changing. Carefully consider what is actually changing in the environment to determine how much testing you must do. In general, focus the testing on basic application functionality, all interfaces and a fairly thorough validation of the data.
Sanders: It's true that virtualization not only helps with consolidation, it can also help you avoid it, in two ways. First, most companies do not consider consolidating data centers, particularly now, unless they absolutely have to -- usually because they have run out of power in several facilities. If capital outlay is already required, it may be an opportune time for a client to build a bigger data center and consolidate.
If you deploy virtualization in existing data centers, it's quite possible that the reductions in electrical (and physical) consumption could help significantly extend the use of existing facilities. What's more, when you use virtualization long-term, power growth in the facilities may be much lower, meaning you can avoid consolidation for a long, long time. Consider, too, that servers and storage, the two most demanding electrical devices in a data center, are getting more efficient. Two years ago, most companies were planning major data center projects because of a severe shortage of power. Now, many have shrinking electrical requirements because of the large-scale adoption of virtualization.
Erin Sanders is the data center services practice leader for Acumen Solutions Inc. She has more than 10 years of IT infrastructure experience and helps Fortune 500 companies make the most of their technology investments. Before becoming a consultant, Sanders led numerous projects sustaining IT operations within organizations. As a consultant, she has conducted several large-scale data center projects, including data center consolidations and migrations, data center and IT strategies, IT architecture and engineering, and IT infrastructure integration.