A common pain point for many information technology organizations is that, over time, legacy hardware and software must be maintained. The term legacy commonly refers to computing systems that are outdated and possibly unsupported, or to systems composed of components from a previous version. Replacing legacy systems with new systems is always a challenge, both technically and in business terms. Legacy systems persist for many reasons. Completely replacing a system may carry a very high cost because of a large investment in capital expenditure, training, and customization. Legacy systems may also be sensitive, mission-critical resources that cannot easily be replaced. Sometimes a legacy system must be maintained because it cannot be replaced: it was purchased from a third-party company that is no longer in business or that no longer supports the system, or in-house expertise in the system's internals is lacking. In some cases it costs less to maintain a legacy system than to replace it, but over time legacy systems typically become less usable because cost or technology limitations prevent them from scaling up.
For example, an organization may have invested in a custom software application designed to run under the Microsoft Windows NT 3.51 Server operating system many years ago. The application in this example was written in such a way that it will not work with any other version of the Windows operating system. The application is becoming a productivity bottleneck because its utilization has grown over the years while it has continued to run on the same hardware. The organization wishes to upgrade the server hardware on which the application and operating system are installed, but cannot, because the newer hardware available today does not properly support the older operating system. There are no chipset and storage drivers that allow the older operating system to use top-of-the-line processors, memory, and disks, which would address the legacy application's inability to scale up. Additionally, because the application was custom-written, no newer version of the application is available, and switching to a different application that provides like functionality is cost prohibitive.
Server virtualization can solve legacy server and application support issues. Migrating the legacy servers to virtual machines inherently abstracts the physical hardware from the legacy software, including the operating system and the applications, allowing the legacy hardware to be discarded or reused elsewhere as needed. Because the legacy software now uses virtualized hardware, it can be moved to any host server as necessary, making the legacy server portable. Aside from its newly gained portability, the legacy server migrated to a virtual machine can be hosted on and use any hardware that is supported by the virtualization platform and the host server.
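This abstraction is visible in the virtual machine's configuration file: the guest's hardware is defined entirely in a small, portable text descriptor rather than being tied to the host's physical devices. The sketch below uses VMware-style .vmx syntax to illustrate the idea; the file names, display name, and specific values are illustrative assumptions, not details taken from this chapter's example.

```
# legacy-nt.vmx -- illustrative virtual machine descriptor (values are assumptions)
config.version = "8"
virtualHW.version = "4"              # virtual hardware generation, not the host's chipset
displayName = "Legacy NT 3.51 Application Server"
guestOS = "winnt"                    # guest operating system hint for the hypervisor
memsize = "256"                      # virtual RAM in MB, independent of the host's DIMMs
scsi0.present = "TRUE"               # emulated SCSI controller the old OS has drivers for
scsi0:0.present = "TRUE"
scsi0:0.fileName = "legacy-nt.vmdk"  # virtual disk file; copies cleanly between hosts
ethernet0.present = "TRUE"           # emulated NIC, mapped to whatever host NIC exists
```

Because the guest sees only these emulated devices, the descriptor and its virtual disk file can be moved to a newer host without touching the legacy operating system's drivers.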
Continuing the previous example, the organization decides to migrate the legacy server to a virtual machine. It chooses a virtualization platform that allows it to host the migrated server on modern, high-end server equipment, which addresses the bottleneck issues while adding the portability needed to move the server to better hardware in the future, should the need arise. Additional hosting savings can be realized by migrating legacy servers to virtual servers in order to decommission older, inefficient hardware and to increase overall server utilization.
Use the following table of contents to navigate to chapter excerpts.
Advanced Server Virtualization
Home: Business cases for server virtualization: Introduction
1: Server Consolidation
2: Legacy server and application support
3: Disaster recovery
4: High availability
5: Adaptive computing
6: On-demand computing
7: Limitations of server virtualization
ABOUT THE BOOK:
Advanced Server Virtualization focuses on the core knowledge needed to evaluate, implement, and maintain an environment that uses server virtualization. It emphasizes the design, implementation, and management of server virtualization from both a technical and a consultative point of view. It provides practical guides and examples, demonstrating how to properly size and evaluate virtualization technologies. This volume is based not upon theory but on real-world experience in the implementation and management of large-scale projects and environments. Currently, there are few experts in this relatively new field, making this book a valuable resource.
ABOUT THE AUTHORS:
David Marshall is currently employed as a software engineer for Surgient Inc., a software company based in Austin, Texas, that provides software solutions leveraging x86 server virtualization technologies. He holds a B.S. in finance and an Information Technology Certification from the University of New Orleans, and he has been working with virtualization software for the past six years. Dave McCrory works as chief scientist for Surgient Inc. He has filed several patents around server virtualization and the management of virtual machines, and he has worked with virtualization technology for more than five years. Wade A. Reynolds is employed as a senior consultant by Surgient Inc. He has been designing and implementing enterprise solutions based on virtualization technology on a daily basis for more than three years, including VMware ESX Server and Microsoft Virtual Server, the latter since its pre-beta release.