How to Contain Cooling Costs with Containment Strategies

June 4, 2013 By Joe Capes


It’s no secret that one of the biggest energy and cost drains in data center and IT environments is the precision cooling system that keeps everything operating smoothly at the right temperature and humidity. One of the best tactics for saving energy while also improving cooling system performance is implementing an air containment strategy. Containment systems come in two flavors: cold aisle containment and hot aisle containment. Implementing either type can help data centers realize up to a 30 percent improvement in cooling system performance and efficiency, depending on the application, the type of cooling systems used and a variety of ambient and site-specific conditions. In addition to containment strategies, data center managers can improve cooling system performance, reduce energy consumption and cut costs by right-sizing cooling capacity, using cooling units with electronically commutated (EC) fans, and taking advantage of economizer modes of operation.


Containment Strategies

In the data center, computing equipment generates hot exhaust air; when uncontained, that exhaust mixes with the cold air supplied by the cooling systems. Containment matters because mixing hot and cold air streams degrades both the efficiency and the capacity of a data center’s cooling system.

A cold-aisle containment system (CACS) encloses the cold aisle to optimize supply air delivery, allowing the rest of the data center to become a large hot-air return plenum. A hot-aisle containment system (HACS) encloses the hot aisle, collecting the hot exhaust air from IT equipment for its return to, and treatment by, precision cooling units. For individual IT racks, or small IT rack ‘pods’, the hot air may be returned to a computer room air handler (CRAH) or large remote air conditioning unit using a ducted vent (a.k.a. “chimney”) attached to the top of each individual rack.

Data centers can realize improved performance with either a hot aisle or cold aisle containment system, as long as some system keeps the two air streams from mixing. The exact savings any given site will see depends on a number of factors; however, the typical benefit is in the range of up to a 15 percent efficiency gain and up to a 30 percent performance gain.

With containment in place, precision cooling systems can be set to a higher supply air temperature, saving energy while still delivering conditioned air at safe operating temperatures. Containment also helps eliminate hot spots by allowing the cooling unit supply air to reach the front of the IT equipment with minimal stratification; without mixing of hot and cold air, the supply air temperature can be raised without risk of hot spots. Data centers implementing a containment strategy can also realize increased economizer efficiency: when the outdoor temperature is lower than the indoor temperature, the cooling system compressors can be cycled down or turned off completely, which saves a great deal of energy. Compressors are among the biggest energy hogs in data center cooling units, along with humidification and dehumidification. Because the higher supply air temperature lets the cooling system operate above the dew point, no moisture is removed from the air, and if no moisture is removed, none has to be added back into the supply air stream, saving both energy and water.
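
As a rough illustration of that dew-point logic, the sketch below uses the Magnus approximation to estimate a return-air dew point and check whether a proposed supply air set point stays above it. The constants, conditions and function names are illustrative assumptions, not figures from this article.

```python
import math

def dew_point_c(temp_c: float, rel_humidity_pct: float) -> float:
    """Approximate dew point (deg C) using the Magnus formula."""
    a, b = 17.27, 237.7  # Magnus constants, valid roughly 0-60 deg C
    alpha = (a * temp_c) / (b + temp_c) + math.log(rel_humidity_pct / 100.0)
    return (b * alpha) / (a - alpha)

# Illustrative conditions: 35 C / 30% RH return air, 24 C supply air set point
return_dew_point = dew_point_c(35.0, 30.0)
supply_set_point = 24.0

if supply_set_point > return_dew_point:
    print(f"Supply at {supply_set_point} C stays above the {return_dew_point:.1f} C dew point: "
          "no latent (dehumidification) load, so no re-humidification energy needed.")
else:
    print("Supply air would condense moisture; expect added dehumidification/humidification energy.")
```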

Using containment reduces the need to over-cool the IT space to account for redundancy and capacity planning, which in turn allows for better overall physical infrastructure utilization. This enables right-sizing of the cooling infrastructure and results in equipment running at higher efficiency. But when should hot aisle containment be implemented over cold aisle, or vice versa?

Hot aisle containment is recommended in new-build and data center expansion applications. It is very effective where a data center uses in-row, close-coupled cooling, or where a mixture of traditional perimeter cooling and in-row cooling is deployed. These “hybrid” applications are typical where a mix of low and medium/high-density IT equipment is in place. An analysis conducted in 2011 shows hot aisle containment can provide about 40 percent more savings than cold aisle containment, though again results depend on the unique aspects of each site, the application and the approach used. When adding containment to an existing data center with perimeter cooling units, cold aisle containment is typically more flexible and less disruptive to deploy.

Containment systems also mesh well with economizer technologies, which use outside air to help cool a data center. There are two primary benefits. First, containment allows you to raise the supply air set points of your precision cooling units. In nominal case studies I’ve conducted, this approach can extend the available partial or full free-cooling hours by the equivalent of several months per year.

The second benefit is that hot aisle containment dramatically increases the return air temperature to the cooling system, which yields a higher delta T for both in-room and in-row cooling units. The higher the delta T (the higher the return air and return water temperatures), the better the performance and efficiency of the entire cooling loop.
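
To see why a higher delta T matters, a quick back-of-the-envelope calculation using the standard sensible-heat relation shows how much more heat the same airflow removes as the return-to-supply temperature difference widens. The airflow figure is an assumed illustration, not a value from the article.

```python
# Sensible heat removed by an air stream: Q = rho * cp * airflow * delta_T
RHO_AIR = 1.2     # kg/m^3, approximate density of air
CP_AIR = 1005.0   # J/(kg*K), specific heat of air

def sensible_kw(airflow_m3_s: float, delta_t_k: float) -> float:
    """Heat removed (kW) by a given airflow and return-to-supply delta T."""
    return RHO_AIR * CP_AIR * airflow_m3_s * delta_t_k / 1000.0

airflow = 10.0  # m^3/s, illustrative cooling-unit airflow
for delta_t in (8, 12, 16):  # K; wider delta T from better containment
    print(f"delta T {delta_t:2d} K -> {sensible_kw(airflow, delta_t):.0f} kW removed")
```

Equivalently, a fixed IT load can be handled with less airflow, and because fan power scales with the cube of fan speed, that reduced airflow translates into a disproportionately large fan energy saving.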

How much money can containment actually save a data center facility? Using a HACS in a 1,200 kW data center equates to a 255 metric ton reduction in carbon emissions per year and a $15,878 saving in electricity per year. That financial saving is the equivalent of more than seven years of property tax bills (as of 2010), or nearly two years of owning and operating a car in the US.
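
The sketch below shows the general shape of such an estimate. Every input here is a placeholder assumption for illustration only; they are not the values behind the study figures quoted above, and the result is very sensitive to the utility rate and grid emission factor chosen.

```python
# Rough template for an annual containment-savings estimate (placeholder inputs)
it_load_kw = 1200.0               # IT load, matching the example above
cooling_kw_per_it_kw = 0.3        # assumed cooling power per kW of IT load
cooling_energy_reduction = 0.05   # assumed fraction of cooling energy saved by containment
electricity_rate = 0.10           # $/kWh, placeholder utility rate
emission_factor = 0.6             # kg CO2 per kWh, placeholder grid average

kwh_saved = it_load_kw * cooling_kw_per_it_kw * cooling_energy_reduction * 8760
print(f"Energy saved: {kwh_saved:,.0f} kWh/yr")
print(f"Cost saved:   ${kwh_saved * electricity_rate:,.0f}/yr")
print(f"CO2 avoided:  {kwh_saved * emission_factor / 1000:,.1f} t/yr")
```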


Economizer Modes of Operation

Compressorized mechanical cooling systems consume a great deal of energy when running, so it makes sense to turn them off whenever possible and instead run in an economizer mode as the primary mode of cooling for a data center. Many precision air conditioners and chillers offer an “economizer mode” option, an energy-efficient feature that cycles the mechanical compressors down or off. Depending on the ambient climate and the specific geographic location of the facility, pre-treated and filtered outside air can be used to lower energy consumption and costs, an approach often referred to as free air cooling. However, bringing outside air into the conditioned data center space risks introducing airborne particulates and contaminants (salt is common in coastal areas, for example) that can harm sensitive IT equipment, leading to system failures or shutdowns.

The introduction of outside air also complicates maintaining proper data center humidity. The need for additional humidification or dehumidification can be costly from an OPEX perspective, as both processes consume large amounts of energy. Other economizer modes that entail less risk, with equivalent or better performance, include water-side economizers and indirect evaporative heat exchange.

Operating in economizer mode saves energy by leveraging outdoor air for heat rejection, or as supply air during the colder months of the year, allowing refrigerant-based cooling components like chillers and compressors to be shut off or operated at reduced capacity. By implementing economization, data centers can realize PUE improvements of up to 50 percent and potential energy savings of 4 to 25 percent. In some climates, cooling systems can operate primarily in economizer mode, with direct expansion (mechanical) cooling serving mainly as a backup measure. Other experts indicate that cooling systems can cut annual energy costs by more than 70 percent by operating in economizer mode, corresponding to a reduction of more than 15 percent in annualized power usage effectiveness (PUE).
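
Since PUE is simply total facility power divided by IT power, a short calculation with assumed numbers shows how trimming cooling energy moves the ratio. The power figures below are illustrative only.

```python
def pue(it_kw: float, cooling_kw: float, other_kw: float) -> float:
    """Power Usage Effectiveness = total facility power / IT power."""
    return (it_kw + cooling_kw + other_kw) / it_kw

it_kw, other_kw = 1000.0, 100.0  # assumed IT load and non-cooling overhead

baseline = pue(it_kw, cooling_kw=600.0, other_kw=other_kw)    # mechanical cooling year-round
economized = pue(it_kw, cooling_kw=250.0, other_kw=other_kw)  # heavy use of economizer hours

print(f"Baseline PUE:   {baseline:.2f}")
print(f"Economized PUE: {economized:.2f}")
print(f"Non-IT power reduced by {(baseline - economized) / (baseline - 1):.0%}")
```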

The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) publishes guidelines that define a baseline data center cooling system to establish the minimum performance requirement, typically a chilled water system with a chiller bypass via a fluid heat exchanger as the economizer mode. While ASHRAE 90.1 does not stipulate that this exact system be used, whichever economizer mode is implemented must meet or exceed the performance of this baseline system. Every system has its pros and cons. For instance, because of the fouling effects of water flowing through piping, open cooling systems that use water to transport heat energy generally have a shorter life expectancy than those that use air-to-air heat exchange. Cooling systems that use evaporative assist, meanwhile, are limited by the surface area of the heat exchange medium exposed to the water. Overall, keep in mind that the life expectancy of any cooling system is significantly affected by the amount of maintenance performed over its life.

The footprint of different cooling systems can be normalized to the maximum IT load the data center can support, with a direct fresh air economizer solution generally having the smallest footprint. The footprint of an air-to-air heat exchange economizer solution is only slightly larger due to the addition of the air-to-air heat exchange medium. An economizer solution using a fresh air heat wheel has the largest overall footprint of all the air-based economizer solutions and is nearly as large as a chilled water plant with a cooling tower. Heat wheels also generally require a purpose-built data center structure to accommodate them, often incorporating a dedicated sub-floor to act as the air supply plenum.

“Economizer via direct fresh air” mode has no heat exchanger, since the outdoor air is supplied directly into the data center. This approach is particularly attractive in cool and moderate climates, but it does require a partially rated back-up or supplemental mechanical cooling system for the days or hours when ambient conditions are too hot or humid to “free cool.” The “economizer via air-to-air heat exchange” mode can operate with or without evaporative assist, providing two modes of economization. It avoids the air quality and humidity control issues of direct fresh air while still reducing capital and operational expense compared to a fully rated mechanical system.

In cloud computing environments with off-site disaster recovery and fail-over capability for IT applications, it is realistic to expect that some data centers may operate entirely in economizer mode with no mechanical cooling backup at all. IT equipment inlet temperature thresholds continue to rise, making full-time economizer operation even more probable under the right environmental conditions.

Nominally, simply raising the data center temperature by one degree Fahrenheit allows an economizer cooling system to take advantage of up to six percent more full-economization hours and deliver roughly a three percent reduction in total power consumption. According to ASHRAE TC 9.9, the lowest recommended supply air temperature to IT equipment is 64.4 degrees Fahrenheit and the highest recommended is 80.6 degrees Fahrenheit. Moving the set point from 64.4 degrees F to 80.6 degrees F yields an 82 percent increase in full economization hours and a 37 percent reduction in total power consumption.
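
One simple way to see how a higher set point widens the free-cooling window is to take a year of hourly outdoor temperatures and count the hours at or below the set point. The toy sinusoidal temperature model below is purely illustrative; real gains depend on the local climate and the economizer's approach temperature, which this sketch ignores.

```python
import math

# Toy model: hourly outdoor dry-bulb temps over a year (seasonal plus daily swing)
outdoor_f = [55 + 25 * math.sin(2 * math.pi * h / 8760)      # seasonal swing
             + 10 * math.sin(2 * math.pi * (h % 24) / 24)     # daily swing
             for h in range(8760)]

def free_cooling_hours(set_point_f: float) -> int:
    """Hours when outdoor air alone could meet the supply set point (approach ignored)."""
    return sum(1 for t in outdoor_f if t <= set_point_f)

for sp in (64.4, 70.0, 75.0, 80.6):  # endpoints of the ASHRAE recommended band plus two steps
    print(f"Set point {sp:>5.1f} F -> {free_cooling_hours(sp):>5d} economizer hours")
```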

Electronically Commutated (EC) Fans

Legacy perimeter cooling systems use a highly mechanized design comprising a shaft, belts, shims and forward-curved centrifugal blowers with brushed DC motors. These brushed motors have an armature that is supplied with current through stationary brushes in contact with the revolving commutator. The brushes, typically made of carbon, eventually wear out or break through contact with the commutator and must be replaced; in extreme cases they can emit carbon dust into the supply air stream. Legacy perimeter units using forward-curved centrifugal fans must run their blowers at 100 percent fan speed unless the blower motors are fitted with after-market variable frequency drives (VFDs). Since fan power is a function of the cube of fan speed, running these blowers flat out results in very high fan power consumption. In general, cooling systems using this architecture require monthly or quarterly maintenance and consume vastly more energy than cooling systems that use EC fans.

EC fans, also referred to as plug fans, are brushless motors typically built as backward-curved motorized impellers. EC fans are DC motors that can be powered by AC current; a rectifier internal to the motor converts the AC power to DC, which is then supplied to the motor drive. This architecture allows the EC fan speed to be varied from zero to 100 percent, so the cooling unit can match its fan speed to a variety of input parameters. In practice, cooling units with EC fans run at the lowest fan speed necessary to maintain the desired set points. Besides being compact, highly reliable and requiring little to no maintenance, cooling units with EC fans use approximately 70 percent less power than comparable units with forward-curved centrifugal fans. In the context of data center cooling, this energy saving can have an enormous impact on cost.
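
The fan affinity laws explain most of that figure: power scales with the cube of speed, so even a modest speed reduction yields a large power reduction. A minimal sketch follows; the rated power and speed steps are illustrative assumptions.

```python
def fan_power_kw(rated_kw: float, speed_fraction: float) -> float:
    """Fan affinity law: power varies with the cube of the speed ratio."""
    return rated_kw * speed_fraction ** 3

rated_kw = 7.5  # kW, illustrative fan power at 100% speed
for pct in (100, 80, 67, 50):
    kw = fan_power_kw(rated_kw, pct / 100)
    print(f"{pct:3d}% speed -> {kw:4.1f} kW ({kw / rated_kw:.0%} of full-speed power)")
```

Running at roughly two-thirds speed, for example, draws only about 30 percent of full-speed power, which is where savings on the order of 70 percent come from when the airflow demand allows it.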

Right-Sizing Cooling Architecture

Incorrectly sized cooling equipment and under-loading are among the key contributors to inefficiency in data centers and mission critical facilities, especially because traditional facilities devote over half of their infrastructure power to fixed losses that do not vary with the IT load. As the IT load declines against a fixed amount of cooling capacity, efficiency declines with it. It is therefore essential that data centers deploy the right amount of cooling capacity for the IT load at a given point in time.

When data centers prioritize right-sizing cooling equipment with a modular and scalable cooling architecture, they can scale capacity up and down as demand cycles change. This “right-sizing” design can achieve energy savings of 10 to 30 percent, with greater savings when it is applied to redundant systems.
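
A quick way to see why under-loading hurts is to compare a plant with a large fixed overhead against a modular plant whose overhead scales with installed capacity. All of the numbers below are assumed for illustration.

```python
def pue_at_load(it_kw: float, fixed_overhead_kw: float, variable_overhead_per_kw: float) -> float:
    """PUE when part of the cooling/infrastructure power is fixed regardless of IT load."""
    return (it_kw + fixed_overhead_kw + variable_overhead_per_kw * it_kw) / it_kw

# Monolithic plant sized for 1,000 kW vs. a modular plant scaled to the actual 400 kW load
monolithic = pue_at_load(400, fixed_overhead_kw=300, variable_overhead_per_kw=0.2)
modular = pue_at_load(400, fixed_overhead_kw=120, variable_overhead_per_kw=0.2)

print(f"Monolithic plant at 40% load: PUE {monolithic:.2f}")
print(f"Modular plant at 40% load:    PUE {modular:.2f}")
```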

Implementing containment strategies in a data center facility is a great way to keep cooling systems running at optimized performance levels and as efficiently as possible, while also cutting costs. A “cool” data center that operates within the upper limits of the recommended ASHRAE guidelines is also a safe data center. Focusing on efficient cooling strategies, whether that means implementing a containment system, using economizers or right-sizing the cooling architecture with systems based on EC fans, helps facility managers mitigate the risks associated with sensitive IT equipment. With the right tools and the right cooling system, data center owners and operators can bring sustainability to their facilities: sustainable ‘greening’, sustainable gains in cooling performance, and sustainable reductions in energy use.

Joe Capes is Business Development Director, Cooling Line of Business – Americas for Schneider Electric.


