Relying on direct current (DC) instead of switching back and forth between it and alternating current (AC) can save money on equipment and enable a datacenter to operate far more efficiently.
While the industry is not there yet, DC power is a hot topic. Utilities deliver AC to their customers, including datacenters. End-user equipment in these facilities – PCs, servers and other gear – runs on DC. Several conversions are often necessary as electricity is distributed through the building and ultimately used by the equipment. Each conversion wastes electricity.
It is a process that can be changed. Dale Sartor, who heads the U.S. Department of Energy’s (DoE) Data Center Energy Efficiency Center of Expertise at Lawrence Berkeley National Laboratory, described a common scenario to Energy Manager Today. “In a typical scenario, high voltage comes in AC and goes through the UPS [uninterruptible power supply] and is converted to DC,” he said. “It leaves the UPS and is converted to 480 volts AC. At the PDU [power distribution unit] it is stepped down to 120 or 240 volts AC. From there it goes into the server. In the server, it is converted to 380 volts DC. It then is stepped down to lower voltages, such as 12 volts or 3 volts, depending on the components on board the equipment being used.”
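The cost of the chain Sartor describes compounds at every stage. A quick sketch makes the point; the per-stage efficiency figures below are illustrative assumptions for typical equipment, not measured values from the article:

```python
# Illustrative model of the multi-stage conversion chain described above.
# Each stage's efficiency is an assumed, typical value -- not measured data.
stages = {
    "UPS rectifier (AC -> DC)": 0.96,
    "UPS inverter (DC -> 480 V AC)": 0.96,
    "PDU step-down (-> 120/240 V AC)": 0.98,
    "Server power supply (AC -> 380 V DC)": 0.94,
    "On-board regulators (-> 12 V / 3 V DC)": 0.90,
}

overall = 1.0
for stage, efficiency in stages.items():
    overall *= efficiency

print(f"Overall efficiency: {overall:.1%}")
print(f"Lost in conversions: {1 - overall:.1%}")
```

Under these assumptions, nearly a quarter of the incoming power never reaches the chips – which is the waste a DC-centric distribution scheme aims to cut.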
The battle between AC and DC goes back to Nikola Tesla, Thomas Alva Edison and the dawn of harnessed electrical energy. More efficient use of DC electricity is now gaining renewed interest. Sartor said it is one of the areas covered in an eSeries introduced last week by the DoE’s Federal Energy Management Program. The online series is aimed at datacenter managers and operators.
Sartor, who helped put the course together, said that finding ways to reduce datacenter energy consumption by minimizing AC/DC conversions is a hot topic. One organization working in this area is The Open Compute Project. Open Compute was founded in 2011 in an effort to move from a landscape in which computing systems are proprietary to one in which systems use open-source principles. This will allow datacenters and other facilities to more easily use equipment from multiple vendors and otherwise simplify operations.
Peter Panfil, the Vice President of Global Power for Emerson Network Power, told Data Center Frontier in September that Open Compute has made it possible for data centers to deploy platforms that use DC-based rectifiers for their main power supplies. These, he told the site, replace AC/DC power supplies “traditionally embedded in an AC-powered server.” In other words, Open Compute is succeeding in its mandate of more deeply embedding DC in data centers.
Generating efficiencies in datacenter electrical infrastructure also is being researched at the University of Arkansas. Earlier this year, the school’s Center for Grid-Connected Advanced Power Electronics Systems (GRAPES) received $300,000 as part of a grant from the National Science Foundation to explore these issues.
The story at the school’s website said that 91 billion kilowatt-hours of electricity are used annually in datacenters in the United States. Much of this is delivered as AC. Converting it to DC wastes 10 percent to 20 percent. Avoiding this waste requires a good deal of new equipment:
But the new system requires the development of high-efficiency DC-to-DC power converters, high-voltage switching devices, solid-state circuit breakers, and efficient power-distribution systems, as well as the ability to eventually integrate micro-grid power from renewable and/or locally generated power sources.
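The scale of that 10-to-20-percent loss is easy to check against the 91 billion kWh figure cited above. Both inputs come from the article; only the arithmetic is added here:

```python
# Back-of-the-envelope check on the conversion-loss figures cited above.
annual_use_kwh = 91e9          # 91 billion kWh/year in U.S. datacenters
loss_low, loss_high = 0.10, 0.20  # 10-20% wasted in AC-to-DC conversion

waste_low_kwh = annual_use_kwh * loss_low
waste_high_kwh = annual_use_kwh * loss_high

print(f"Annual conversion losses: "
      f"{waste_low_kwh / 1e9:.1f} to {waste_high_kwh / 1e9:.1f} billion kWh")
```

That puts the annual loss between roughly 9 and 18 billion kWh – a figure large enough to justify the equipment investments the researchers describe.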
There is a parallel driver of research into the use of DC in datacenters. Datacenters are among the biggest consumers of solar energy, which generates DC electricity. Limiting the number of DC/AC/DC conversions – from the solar platform (DC) to the building distribution system (AC) and back to data center equipment (various DC voltages) – saves money and drives efficiency.
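The solar case can be sketched the same way, comparing the DC-to-AC-to-DC route against a direct DC feed. The efficiency values are again illustrative assumptions, not figures from the article:

```python
# Sketch comparing the two solar-to-server paths described above.
# All efficiency values are illustrative assumptions.
inverter_eff = 0.96    # solar DC -> building AC distribution
server_psu_eff = 0.94  # building AC -> server DC
dc_dc_eff = 0.97       # direct solar DC -> server DC bus (hypothetical)

via_ac = inverter_eff * server_psu_eff  # DC -> AC -> DC round trip
direct_dc = dc_dc_eff                   # single DC -> DC conversion

print(f"Via AC distribution: {via_ac:.1%}")
print(f"Direct DC feed:      {direct_dc:.1%}")
print(f"Difference per kWh:  {direct_dc - via_ac:.1%}")
```

Even with modest assumed efficiencies, skipping the AC round trip recovers several percent of every solar kilowatt-hour, which is the saving the article points to.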