10 Key Data Center Energy Management Trends for 2013
The ways data centers consume power will continue to undergo both subtle and substantive transformations in 2013. The first three trends identified here started in 2012, when organizations began struggling with increased power demands in the face of constrained capacity caused by both inefficient equipment and stranded power. These trends will continue into 2013 and be joined by seven new ones for the New Year.
- The move to virtualized environments and private clouds as a means to optimize resource utilization is occurring faster than most industry analysts had predicted, resulting in these architectures becoming the new norm in enterprise and service provider data centers. Organizations will continue to capitalize on these virtual server architectures in an attempt to realize their full potential to accommodate growth in a more cost-effective and energy-efficient manner.
- With virtualized architectures making it easier to manage loads within and across data centers, more organizations will take advantage of this trend to operate multiple, geographically-dispersed data centers to enhance disaster recovery preparedness and achieve other operational benefits. For example, because electricity rates are at their lowest at night, when demand is low and baseload generating capacity is under-utilized, shifting the current workload to “follow the moon” can result in considerable savings. These savings are compounded by the fact that outside ambient air temperatures are also at their lowest at night, which can substantially cut cooling costs.
- The power consumed by servers, storage and networking equipment has become an increasingly important consideration in the data center, and this has resulted in more routine use of standards like ENERGY STAR from the U.S. Environmental Protection Agency and Power Measurement (UL2640) from Underwriters Laboratories. While ENERGY STAR emphasizes the efficiency of power supplies, UL2640 focuses on the number of transactions per second per Watt (TPS/Watt) delivered by servers—a much more meaningful energy efficiency metric for capacity planning.
- To eliminate the stranded power that exists in virtually every data center, capacity planning efforts will also begin to include power distribution and actual consumption as critical design factors. Server nameplate ratings are inevitably conservative, so planning against them effectively guarantees underutilizing the available power. By instead using actual energy consumption under peak load (another metric provided by UL2640), organizations can potentially increase usable transactional server capacity by up to 50 percent.
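The arithmetic behind that 50 percent figure is simple. A minimal sketch with hypothetical numbers (the rack budget, nameplate rating, and measured peak draw below are illustrative, not taken from any UL2640 test report):

```python
# Hypothetical figures: a rack with a fixed usable power budget, and servers
# whose vendor nameplate rating far exceeds their measured peak consumption.
RACK_BUDGET_W = 10_000    # usable power available to the rack
NAMEPLATE_W = 500         # conservative vendor nameplate rating per server
MEASURED_PEAK_W = 330     # actual consumption measured under peak load

servers_by_nameplate = RACK_BUDGET_W // NAMEPLATE_W       # plan against the label
servers_by_measurement = RACK_BUDGET_W // MEASURED_PEAK_W  # plan against reality

gain = (servers_by_measurement - servers_by_nameplate) / servers_by_nameplate
print(f"{servers_by_nameplate} vs {servers_by_measurement} servers "
      f"({gain:.0%} more usable capacity in the same power budget)")
```

The unused headroom between the nameplate figure and the measured peak is exactly the "stranded power" the trend refers to.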
- Having been proven in practice by early adopters and with the increased focus on power consumption, Data Center Infrastructure Management (DCIM) systems will become a mainstream and even a “must have” tool. Organizations will utilize DCIM systems for both capacity planning and real-time management of application workloads and environmental conditions. For example, by dynamically matching available server capacity in a virtualized cluster to the actual application load, active servers can achieve a utilization rate as high as 70 or 80 percent, and total server power consumption can be reduced by up to 50 percent.
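One way to picture the dynamic matching a DCIM system performs: given the current transaction load and a per-server capacity, keep just enough servers active to hold utilization near a target band. A simplified sketch, with hypothetical cluster sizes and TPS figures:

```python
import math

def servers_needed(load_tps: float, per_server_tps: float,
                   target_utilization: float = 0.75) -> int:
    """Smallest number of active servers that keeps average
    utilization at or below the target (here, 75 percent)."""
    return math.ceil(load_tps / (per_server_tps * target_utilization))

# Hypothetical cluster: 20 servers, each handling 1,000 TPS at full load.
# Overnight the aggregate load drops to 6,000 TPS.
active = servers_needed(6_000, 1_000)   # servers kept active at ~75% utilization
idle = 20 - active                      # servers that can be powered down
print(f"{active} active, {idle} powered down")
```

With the load spread over only the active servers, utilization stays in the 70–80 percent band the article cites, and the idle majority stops drawing power.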
- To eliminate the considerable overlap between the DCIM and other management systems used by the IT department and the Building Management System (BMS) used by the Facility department, organizations will begin migrating to DCIM as the primary platform for managing data centers, and will integrate other systems with it. Being purpose-built for the special needs of the data center, DCIM systems address building management considerations (e.g. environment conditions and power consumption) in the more appropriate context of the IT infrastructure, while also addressing the capacity, performance and availability needs of the servers in the data center.
- With Software-as-a-Service (SaaS) and other cloud-based applications becoming more pervasive, DCIM functionality will also become more popular as a service for monitoring and managing a data center’s environmental conditions and power consumption. Initially the arrangement is expected to be a hybrid one, with routine and real-time operations being performed on-site, while capacity planning analytics (e.g. aggregate utilization and forecasting) is performed in the cloud.
- With the advanced capabilities provided by server virtualization and load-balancing, DCIM, and other sophisticated management systems, data center automation will begin to make a comeback for the very same reason it was once deemed desirable: to mitigate the risks of human error in managing critical functions in the data center. For example, runbooks will be used to automate the dynamic capacity management steps involved in resizing clusters and/or de-/re-activating servers, whether on a predetermined schedule or dynamically in response to changing loads.
- Increased automation will finally make it possible for data centers to participate in lucrative energy markets, making them a direct source of revenue for the organization while mitigating risk from outside events. Through their Demand Response (DR) programs, electric utilities provide substantial cash payments to organizations that are willing and able to reduce power consumption, or to provide other grid stabilization assistance from local power sources or through active management of the computing load, during periods of peak electricity demand. At a minimum, the DCIM system should be able to power-cap less critical servers, and power-down some others that are not needed to satisfy the current workload. Depending on the event’s duration, the DCIM might also be able to temporarily adjust the thermostat or reduce the cooling power consumption using pre-cooled chilled water, taking action as necessary to prevent “hot spots” from forming.
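The "power-cap less critical servers first" response could be sketched as a greedy plan: shed the least critical capacity until the utility's requested curtailment is met. The fleet, wattages, and criticality scores below are hypothetical, and real DCIM products implement this in their own ways:

```python
def plan_curtailment(servers, target_reduction_w):
    """Greedy sketch: cap or power down the least critical servers
    first until the requested demand-response reduction is reached.
    Each server is (name, current_draw_w, capped_draw_w, criticality);
    lower criticality is curtailed first."""
    plan, saved = [], 0
    for name, draw, capped, _crit in sorted(servers, key=lambda s: s[3]):
        if saved >= target_reduction_w:
            break
        plan.append((name, capped))   # new power cap (0 = power down)
        saved += draw - capped
    return plan, saved

# Hypothetical fleet: batch nodes can be shut off, the web tier is capped
# lightly, and the database is touched only as a last resort.
fleet = [
    ("batch-1", 400, 0,   1),
    ("batch-2", 400, 0,   1),
    ("web-1",   350, 250, 3),
    ("db-1",    500, 450, 5),
]
plan, saved = plan_curtailment(fleet, target_reduction_w=900)
print(plan, saved)
```

Here the plan powers down both batch nodes and trims the web tier, meeting the 900 W reduction without ever touching the database server.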
- Thomas Edison will be vindicated for his preference for direct vs. alternating current as DC power becomes more common in data centers owing to its greater efficiency. This trend will apply mostly to new data centers, where according to research conducted by Lawrence Berkeley National Laboratory, the efficiency gains of using DC have both a direct and indirect benefit owing to fewer power conversions and less generation of heat, respectively. Test results reveal about a 10 percent savings in energy for the entire data center compared to even the most efficient AC configurations.
Clemens Pfeiffer is the CTO of Power Assure and is a 25-year veteran of the software industry, where he has held leadership roles in process modeling and automation, software architecture and database design, and data center management and optimization technologies.