Suggesting a data center cost containment project to your customers is a no-brainer -- everyone is interested in saving money. But before starting a project, solution providers should make sure they understand all facets of data center cost containment, including energy efficiency, virtualization and power usage effectiveness (PUE). In this Project FAQ, a team of experts from PlanNet Consulting, an independent Los Angeles-based consulting firm specializing in data centers and critical IT infrastructure, guides you through the process of a data center cost containment project and answers frequently asked questions. Be sure to listen to the supplemental podcast, in which our experts discuss marketing such a project and the best way to sell a green data center.
• What methods and best practices are being deployed to design the "green" data center?
• What is power usage effectiveness (PUE) and how are clients driving down costs by obtaining better PUE?
• What are the best ways to lower operational expenses in a data center?
• What are the most energy-efficient methods to cool a data center?
• What are the cost benefits of building or upgrading an in-house data center vs. moving to a colocation or hosted facility?
• How can virtualization/consolidation help in data center cost containment?
What methods and best practices are being deployed to design the "green" data center?
By their very nature, data centers can hardly be considered "green" (how many industries leave their equipment running 24/7 whether or not it's being used?). However, striving for efficiency and sustainability in data center design is born as much of the need to reduce costs as of environmental consciousness. Picking a location is the first and probably most important step when bringing a new data center online. Power typically accounts for more than a third of a data center's operational costs, so many enterprises are building data centers in locales with access to cheaper, cleaner energy, such as hydroelectric power, and in cooler climates, where cool outside air can lower energy costs.
Changing the location is not always an option, but there are green approaches that can reduce power usage, such as reducing demand loads by consolidating/virtualizing servers and deploying energy-efficient hardware, using close-coupled cooling and hot- and/or cold-aisle containment designs, and generating prime power with microturbines or fuel cells, which significantly reduce greenhouse gases.
What is power usage effectiveness (PUE) and how are clients driving down costs by obtaining better PUE?
PUE is a metric devised by The Green Grid (a vendor-driven consortium focused on improving energy efficiency in data centers), expressed as the average total facility power divided by the IT equipment power. Total facility power is everything required to support the load under normal conditions, such as UPS, switchgear, HVAC, lighting, etc. For example, a data center that consumes 1,000 watts of total facility power to support 500 watts of IT equipment power would have a PUE of 2 (1,000/500 = 2). The closer the PUE approaches 1, where 100% of the power is consumed by the IT equipment, the more energy-efficient the data center, which in turn translates to lower total cost of ownership (TCO). Whereas reducing power load through strategies like virtualization can lower costs, PUE is all about proper and efficient support systems design, equipment and operation.
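The arithmetic above is simple enough to capture in a few lines. As an illustration (the function name and validation are our own, not part of The Green Grid's definition):

```python
def pue(total_facility_watts: float, it_equipment_watts: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power.

    A PUE of exactly 1.0 would mean every watt drawn goes to IT equipment;
    real facilities are always above 1.0 because of UPS losses, cooling,
    lighting and other support loads.
    """
    if it_equipment_watts <= 0:
        raise ValueError("IT equipment power must be positive")
    return total_facility_watts / it_equipment_watts

# The example from the text: 1,000 W total facility power
# supporting a 500 W IT load.
print(pue(1000, 500))  # -> 2.0
```

Note that both measurements should be averages taken over the same period; a single instantaneous reading can understate or overstate efficiency as cooling loads vary with the seasons.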
Short of maximizing outside-air free cooling, there are a number of ways to provide more efficient cooling and power. In general, the closer cooling equipment can be located to the heat-producing equipment, the better. This requires less energy to push air or cooling fluids by reducing the distance of heat transfer. Reducing the mixing of hot and cold air around IT equipment forces HVAC systems to operate closer to their design capacity, which is significantly more efficient. Variable speed fans, pumps, etc., allow for better targeting of cooling when and where it is needed. Line-interactive UPS systems, often with flywheels instead of batteries, offer improvements, as do more complex redundant strategies for cooling and power systems.
What are the best ways to lower operational expenses in a data center?
To start, effective management of computing capacity demand can pay huge dividends in lowering operational expenses. In our experience, it is not uncommon to observe 20% "dead" servers and 6% average server utilization in undermanaged environments. The process involves identifying and decommissioning unused systems and consolidating underutilized systems, which can significantly reduce utility and space costs.
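A back-of-the-envelope estimate shows how quickly those figures compound. The sketch below uses the 20% dead-server and 6% utilization numbers from the text; the 60% target utilization for consolidated hosts is our own assumption, and the model naively treats load as additive across servers:

```python
import math

def consolidated_server_count(physical_servers: int,
                              avg_utilization: float,
                              target_utilization: float = 0.6,
                              dead_fraction: float = 0.0) -> int:
    """Rough estimate of hosts needed after decommissioning dead
    servers and consolidating the remainder onto fewer machines."""
    live = physical_servers * (1 - dead_fraction)      # drop "dead" servers
    total_load = live * avg_utilization                # aggregate demand
    return max(1, math.ceil(total_load / target_utilization))

# 100 servers, 20% dead, 6% average utilization (figures from the text),
# consolidated onto hosts run at an assumed 60% target utilization:
print(consolidated_server_count(100, 0.06, 0.6, 0.20))  # -> 8
```

Under these assumptions, 100 physical servers collapse to 8 hosts, illustrating why even modest consolidation programs can cut utility and space costs so sharply. Real sizing must also account for peak loads, memory footprints and failover headroom, which this sketch ignores.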
Other computing management strategies include retiring inefficient hardware, utilizing power-save features of hardware, and avoiding the use of controlled floor space for noncomputing activities such as NOC, equipment staging and storage. From a facilities standpoint, simply raising the operating temperature of the data center can result in immediate payback with no investment. New ASHRAE guidelines encourage operation as high as 77°F, up from the status quo of 68°F. Not only are cooling costs reduced but humidity control costs are also reduced. Additional facility operational efficiencies can be gained by optimizing airflow, reducing hot- and cold-air short-circuiting, and right-sizing UPS and cooling equipment.
What are the most energy-efficient methods to cool a data center?
The traditional model of forcing cold air great distances overhead or under shallow or obstructed raised floors is an inefficient and problematic way to provide cooling to IT equipment. Liquid cooling is much more efficient; however, until such time that servers routinely ship with liquid cooling supply and return couplings, there are ways to design more efficient methods using technology available today.
The two most widely deployed best practices today are close-coupled cooling (in-row or overhead) and hot- and/or cold-aisle containment. Close-coupled cooling brings the source of the cold air closer to the equipment to supplement or replace perimeter room cooling. This involves piping chilled water or refrigerant to multiple small air handlers located in close proximity to equipment. Hot- and/or cold-aisle containment is a design approach whereby the hot and/or cold aisles are fully separated, using hot- and/or cold-air ducting, so that hot air exhausted by IT equipment does not mix with cold air supplied by air handlers. By implementing energy-efficient cooling best practices, an 11% energy savings can be achieved in a 5,000-square-foot model data center, according to Emerson Network Power's (Liebert) "Energy Logic" whitepaper.
What are the cost benefits of building or upgrading an in-house data center vs. moving to a colocation or hosted facility?
This is a question most enterprises grapple with at one time or another. An organization may decide to outsource the facility to a hosted or colocation provider for a variety of reasons that may have nothing to do with cost, such as outages, lack of data center facility management competency or speed to market. From a cost perspective, although there is no cut-and-dried rule, small and medium-sized data centers (under 5,000 square feet) may be able to achieve lower TCO by moving their data center to a managed facility, while the opposite is usually true for medium to large data centers (greater than 5,000 square feet).
A key to the savings is the duration. The TCO will often be better over the short term in a hosted facility, especially by avoiding facility capital expense costs, whereas over the long term, an owner-operated data center of significant size will often provide a better TCO. An organization must properly assess the costs, risks and benefits of building/expanding a data center versus colocation. Organizations may also consider using targeted colocation to off-load specific server applications such as non-mission-critical or high-density workloads. This strategy can significantly extend the life of an existing facility, lowering overall operating costs.
How can virtualization/consolidation help in data center cost containment?
Server virtualization/consolidation reduces costs by lowering computing and facilities capacity and utility needs. Typical server utilizations range from 5% to 30%, which speaks to the opportunity to consolidate systems onto fewer physical servers. Although the consolidated systems operate at a higher utilization, the result is a significant reduction in facilities, operations, hardware and software costs. A 5-to-1 consolidation ratio is reasonably attainable, although we have seen ratios as high as 10 or 20 to 1, particularly in subproduction environments. Virtualization also has the potential to lower disaster recovery costs. Virtualization technologies such as VMotion can enable an efficient disaster recovery capability by creating an environment that automates systems recovery; reduces standby hardware, engineering and operating costs; and simplifies the entire recovery process.
About the authors Gary Davis is a senior consultant for PlanNet Consulting, an independent IT infrastructure and data center consulting services company, and has more than 20 years of management and technical experience in IT. He has an extensive background in the development and management of IT organizations and technology infrastructure.
Michael Fluegeman is a senior consultant for PlanNet Consulting. He is a registered professional engineer with 25 years of power experience. His extensive experience includes property assessments, conceptual design, project management, construction support, business development, commissioning, service-level agreements and operations assistance of electrical as well as integrated critical support systems.
Steve Miano is a co-founder and managing principal with PlanNet Consulting. Steve has consulted with global corporations and institutions for nearly 20 years, providing expertise in strategic planning, solutions architecture and program management for large-scale data center and IT infrastructure initiatives. Prior to establishing PlanNet in 2001, he held management positions with national and regional IT consultancies.
GlassHouse Technologies, an IT infrastructure and services firm based in Framingham, Mass., also collaborated on this Project FAQ.