Solutions provider takeaway: This section of the chapter excerpt from "The Green and Virtual Data Center" by Greg Schulz teaches solutions providers short- and long-term techniques for transforming a client's data center to an energy-efficient model. Learn how to meet data center power, cooling, floor space and environmental objectives by using virtualization, maximizing current power usage and modifying application performance.
Given specific goals, requirements, or objectives, shifting to an energy-efficient model can either reduce costs or enable new IT resources to be installed within an existing power, cooling, floor space, and environmental (PCFE) footprint. Cost reductions can take the form of fewer new servers and lower associated power and cooling costs. An example of enabling growth and productivity is increasing performance and capacity, that is, the ability to do more work faster and store more information in the same PCFE footprint. Depending on current or anticipated power and/or cooling challenges, several approaches can be used to maximize what is currently in place for short-term or possibly even long-term relief. Three general approaches are usually applied to meet data center power, cooling, floor space, and environmental objectives:
- Improve power usage via energy efficiency or power avoidance.
- Maximize the use of current power -- do more with already available resources.
- Add additional power, build new facilities, and shift application workload.
Other approaches can also be used or combined with short-term solutions to enable longer-term relief, including:
- Establish new facilities or obtain additional power and cooling capacity.
- Apply technology refresh and automated provisioning tools.
- Use virtualization to consolidate servers and storage, including thin provisioning.
- Assess and enhance HVAC, cooling, and general facility requirements.
- Reduce your data footprint using archiving, real-time compression, and deduplication.
- Follow best practices for storage and data management, including reducing data sprawl.
- Leverage intelligent power management such as MAID 2.0-enabled data storage.
- Use servers with adaptive power management and 80 PLUS certified efficient power supplies.
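To make the consolidation approach above concrete, the following sketch estimates the annual power and cooling savings from replacing underutilized servers with fewer, more efficient hosts. This example is not from the book; all wattages, server counts, the cooling-overhead factor, and the electricity rate are hypothetical placeholders to be replaced with measured values from the client's environment.

```python
# Illustrative sketch (hypothetical figures, not vendor data): estimating the
# annual energy and cost impact of consolidating underutilized servers.

def annual_energy_kwh(watts: float, count: int, hours: float = 8760.0) -> float:
    """Annual energy draw in kWh for `count` devices at `watts` each,
    running the full year (8,760 hours)."""
    return watts * count * hours / 1000.0

def consolidation_savings(old_watts, old_count, new_watts, new_count,
                          cooling_overhead=0.5, cost_per_kwh=0.10):
    """Compare before/after annual energy. Cooling is modeled as a
    fractional overhead on IT load (a hypothetical 0.5, roughly a
    PUE of 1.5); cost_per_kwh is a placeholder utility rate."""
    before = annual_energy_kwh(old_watts, old_count)
    after = annual_energy_kwh(new_watts, new_count)
    saved_kwh = (before - after) * (1.0 + cooling_overhead)
    return saved_kwh, saved_kwh * cost_per_kwh

# Hypothetical example: 20 older 400 W servers consolidated onto
# 4 blade-class hosts drawing 600 W each.
kwh, dollars = consolidation_savings(400, 20, 600, 4)
print(f"Estimated savings: {kwh:,.0f} kWh/year (~${dollars:,.0f}/year)")
```

A back-of-the-envelope model like this is only a starting point; real assessments should use measured server draw under typical load rather than nameplate ratings, and the facility's actual cooling efficiency.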
Virtualization is a popular means of consolidating and eliminating underutilized servers and storage to reduce cost, electricity consumption, and cooling requirements. In their place, power-efficient and enhanced-performance servers and storage, including blade centers, are being deployed to support consolidated workloads; this is similar to what has historically been done in enterprise environments with IBM mainframe systems. However, for a variety of reasons, not all servers, storage, or networking devices lend themselves to being consolidated.
Some servers, storage, and network devices need to be kept separate to isolate different clients or customers, different applications or types of data, development and test from production, or online customer-facing systems from back-office systems, or for political and financial reasons. For example, if a particular group or department purchased an application and its associated hardware, that ownership may prevent those items from being consolidated. Department turf wars can also preclude servers and storage from being consolidated.
Two other factors that can impede consolidation are security and performance. Security concerns can be tied to the examples given previously, while application performance and size requirements can conflict with those of other applications and servers being consolidated. Typically, servers with applications that do not fully utilize them are candidates for consolidation. However, applications that are growing beyond the limits of a single dual-, quad-, or multi-core processor, or even a cluster of servers, do not lend themselves to consolidation. Instead, this latter category of servers and applications needs to scale up and out to support growth.
Industry estimates and consensus vary from as low as 15% to over 85% for actual typical storage space allocation and usage on open systems (non-mainframe) storage, depending on the environment, applications, storage systems, and customer service-level requirements. Low storage space capacity usage typically results from one or more factors, including the need to maintain a given level of performance and avoid bottlenecks, over-allocation to support dynamic data growth, and sparse data placement driven by the need to isolate applications, users, or customers from one another on the same storage device. Limited or no insight into where and how storage is being used, not knowing where orphaned or unallocated storage is stranded, and buying storage based on lowest cost per capacity also contribute to low storage space capacity utilization.
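The low-utilization patterns described above are easy to surface once allocated and used capacity are reported per volume. The following sketch, using entirely hypothetical volume names and capacities, flags volumes below a utilization threshold as candidates for reclamation; the threshold and data would come from the client's storage reporting tools.

```python
# Illustrative sketch (hypothetical data): per-volume storage capacity
# utilization, flagging over-allocated or orphaned capacity of the kind
# that drives low overall storage utilization.

volumes = [
    # (name, allocated_gb, used_gb) -- placeholder values
    ("db_prod", 2000, 1700),
    ("file_share", 4000, 600),
    ("orphaned_lun", 1000, 0),   # allocated but no longer mapped to a host
]

def utilization_report(vols, low_threshold=0.25):
    """Return (name, percent_used, flag) per volume; volumes below
    `low_threshold` utilization are flagged for possible reclamation."""
    report = []
    for name, allocated, used in vols:
        pct = used / allocated if allocated else 0.0
        flag = "RECLAIM?" if pct < low_threshold else "OK"
        report.append((name, round(pct * 100, 1), flag))
    return report

for name, pct, flag in utilization_report(volumes):
    print(f"{name:12s} {pct:5.1f}%  {flag}")
```

In practice this kind of report is one input among several: a volume may be deliberately under-filled to preserve performance headroom, so flagged capacity should be reviewed against service-level requirements before being reclaimed.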
The next phase of server virtualization will be to enhance productivity and application agility in order to scale on a massive basis. Combined with clustering and other technologies, server virtualization is evolving to support scaling beyond the limits of a single server -- the opposite of the server consolidation value proposition. Similarly, server virtualization is also extending to the desktop to facilitate productivity and ease of management. In both of these latter cases, transparency, emulation, and abstraction for improved management and productivity are the benefits of virtualization.
Greg Schulz is the founder of The StorageIO Group, an IT consulting firm. He has worked as a programmer, systems administrator, disaster recovery consultant and capacity planner for various IT organizations and also worked for several vendors before joining an analyst firm and later forming StorageIO. In addition, Schulz is a prolific writer, blogger and speaker who regularly appears at conferences and other events around the world. He is the author of "The Green and Virtual Data Center" and "Resilient Storage Networks".