Drive Data Center Energy Efficiency with Focus on Core IT Systems
Data center energy efficiency emerged as a serious issue five years ago this August, when the US EPA issued a report highlighting the industry’s dramatic growth – and its rising power demands – in support of society’s growing appetite for information. The report estimated that data center energy consumption had doubled from 2000 to 2006, raising concerns about greenhouse gas emissions and rising energy costs for business and government.
Over the past five years, the industry has made significant strides – though data center managers continue to juggle competing demands to improve energy efficiency while increasing computing power, managing costs and ensuring continuous availability. To support them in that challenge, we recently took a fresh look at Energy Logic, the holistic and vendor-neutral approach to data center efficiency introduced in 2007.
In updating to Energy Logic 2.0, we took into account how technologies have evolved and the opportunities and challenges those advances have created. Servers deliver more processing power with greater efficiency, but still don’t take advantage of high-efficiency components and waste too much power in idle mode. Virtualization, often accompanied by server consolidation, has been adopted more rapidly than we foresaw in 2007, but has also introduced new management challenges. New UPS control modes and cooling technologies allow even more savings to be squeezed out of those systems. Finally, data center infrastructure management (DCIM) has emerged to provide the visibility and control required to optimize data center performance.
Despite all of these changes, the core principles of Energy Logic remain as valid today as they were in 2007:
- The greatest savings are achieved by focusing on core IT systems.
- Data centers operate efficiently when energy consumption changes with demands, so systems must operate efficiently at less-than-peak times.
- You can significantly reduce energy use without resorting to untried designs or technologies that may jeopardize performance.
The result is a set of ten strategies that serve as a roadmap for achieving significant savings in data center energy consumption. In our analysis, we saw savings of more than 70 percent. You can get a general sense of what you can save using the Energy Logic Cascading Savings Calculator.
Keys to Energy Logic
Energy Logic is designed as a roadmap for driving dramatic reductions in energy consumption without risk to performance. While not every organization can adopt every Energy Logic 2.0 strategy, they can benefit using these four lessons as a guide:
- Leverage the cascade effect. Support systems, such as cooling, may seem like the place to start looking for energy savings because they account for a relatively high percentage of data center energy consumption without directly contributing to data center output. However, the greatest savings are achieved by focusing on the core IT systems that drive data center power consumption. Savings at the IT level cascade through the facility by reducing demand on the support systems, so every efficiency improvement in IT is amplified downstream. Conversely, inefficient, non-productive parts of the IT system waste not only the power they consume but also the energy used by the cooling and power systems required to support them.
- Don’t compromise availability and flexibility for efficiency. Data center energy consumption has created a problem for data center-dependent organizations—and an opportunity for companies seeking to market solutions to that problem. Unfortunately, many of these “solutions” put efficiency above availability and flexibility, which is both dangerous and unnecessary. Huge reductions in data center energy consumption are possible using proven technologies that do not impact the data center’s ability to deliver services.
- Higher density equals better efficiency. While many data center managers have spent the majority of their careers managing facilities with densities well below 5 kW per rack, today’s servers and support systems are not only capable of being deployed at much higher densities, they are designed for them. A density of 20 kW per rack may make some data center managers nervous, but today’s data center can be designed to safely and effectively support that density—with room to grow.
- Capacity is the flip side of efficiency. Despite rising costs, electricity is still relatively cheap in some areas, which has kept some organizations from aggressively optimizing efficiency. However, improving efficiency is more than a response to rising energy costs; it is a solution to the ever-increasing demand for compute capacity. By removing constraints to growth, better energy efficiency can eliminate the need for expensive build-outs or new facilities as demand for compute and storage capacity continues to grow.
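The cascade effect described above can be illustrated with a rough back-of-the-envelope calculation: every watt saved at the IT level also avoids the power and cooling overhead required to support it, which can be approximated using the facility’s power usage effectiveness (PUE). This is a simplified sketch for illustration only—the PUE value and wattage below are hypothetical, not figures from the Energy Logic analysis:

```python
# Rough illustration of the cascade effect: a watt saved at the IT
# level also avoids the supporting power/cooling overhead, which can
# be approximated with the facility's PUE (total facility power
# divided by IT power). Values here are hypothetical.

def facility_savings_w(it_savings_w: float, pue: float) -> float:
    """Estimate total facility-level savings from an IT-level reduction."""
    return it_savings_w * pue

# Hypothetical example: decommissioning idle servers frees 1,000 W of
# IT load in a facility operating at a PUE of 1.9.
total = facility_savings_w(1000.0, 1.9)
print(f"Total facility savings: {total:.0f} W")  # -> 1900 W
```

The point of the sketch is simply that IT-level savings are amplified by the facility overhead factor, which is why Energy Logic starts with the core IT systems rather than with cooling or power distribution.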
Energy efficiency remains a priority, and a new generation of management technologies that provide greater visibility and control of data center systems has arrived. The time is now for the industry to begin making large strides in reducing the overall energy consumption and environmental impact of data centers.
What is your organization doing to optimize data center energy efficiency?