The (Software-Defined) Data Center of the Future

To keep pace with the relentless growth in demand for online applications, the data center of the future will need to be more power conscious and more energy efficient.  And to fulfill both of these requirements, the data center itself will need to become software-defined.

“Software-Defined Data Center” (SDDC) is a new term for an old trend: the virtualization of physical resources.  Virtually all IT resources have already been virtualized with a layer of abstraction, including the servers, storage and network.  Very little such abstraction exists, however, in the data center facility itself.  Even in facilities with a Building Management System (BMS) or Data Center Infrastructure Management (DCIM) system, the extent to which power and cooling have been abstracted is often insufficient to achieve the full benefits possible with the Software-Defined Data Center of the future.

The ability to define and, therefore, control something in software requires creating a layer of abstraction for the physical resources.  Hypervisor software, for example, creates multiple Virtual Machines (VMs) that share a server’s physical CPU, memory and disk input/output resources.

The virtual layer of abstraction needed for a data center’s power and cooling infrastructures can be created in a number of ways, including industry standard protocols, management applications for the power distribution units and computer room air conditioners (CRACs), or a BMS or DCIM system.  The problem is: nearly all SDDC solutions today focus exclusively on the IT resources and ignore the facility itself.  Fulfilling the promise of the Software-Defined Data Center of the future will, therefore, require the addition of Software-Defined Power.

Software-Defined Power

The primary purpose of Software-Defined Power is the same as that of the Software-Defined Data Center:  to improve application availability.  More than half of all application downtime today is caused by power problems, and that percentage is likely to increase as the electric grid struggles to meet growing demand on an aging infrastructure.  Of course, the main reason power is now the primary cause of application downtime is that the virtualization of IT resources has minimized or even eliminated single points of failure in the servers, storage and network.

Software-Defined Power provides immunity from most problems on the electric grid by making it possible to shift the application workload to the data center currently experiencing the best availability, dependability and quality of electricity.  This requires multiple, geographically dispersed data centers, which most organizations already have to support their business continuity and disaster recovery needs.  It also requires the solution to interact across these multiple locations with the systems actually managing the workload on the virtual and/or physical servers—a capability lacking in most data centers today.
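How a solution might compare sites is easy to sketch: score each data center on its current power conditions and shift the workload to the winner.  The metric names and weights below are purely illustrative assumptions, not part of any actual product:

```python
def best_site(sites):
    """Pick the site with the best current power conditions.

    sites: list of dicts with 'name' plus 'availability', 'dependability'
    and 'quality' scores in [0, 1].  The weights are hypothetical.
    """
    def score(s):
        return 0.5 * s["availability"] + 0.3 * s["dependability"] + 0.2 * s["quality"]
    return max(sites, key=score)["name"]

sites = [
    {"name": "us-east", "availability": 0.99, "dependability": 0.95, "quality": 0.90},
    {"name": "eu-west", "availability": 0.97, "dependability": 0.99, "quality": 0.98},
]
print(best_site(sites))  # → eu-west
```

In practice the scores would come from power-quality meters and utility feeds at each site, and the scheduler would also weigh the cost of moving the workload; the point here is only that the decision reduces to a comparison once the facility is abstracted into software.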

Including power and cooling (along with servers, storage and networks) as software-defined elements in the application environment makes it possible to abstract applications fully from all physical resources within any individual data center.  And it is this abstraction that enables the application workload to be shifted more intelligently between or among data centers.

While the cost savings that result from avoiding application downtime are real and substantial, they are difficult to quantify.  For this reason, IT departments find it necessary to justify the investment in Software-Defined Power in other ways.  Fortunately, the energy savings alone make for a very compelling case.

The reason for the savings is that shifting a load to a distant data center makes it possible to shed that load locally.  Powering down the local servers until they are needed again can reduce energy consumption by up to 50 percent, including the energy used for cooling.  Additional savings can be achieved at the “active” data center during non-peak periods by dynamically matching server capacity to the application workload.
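A back-of-the-envelope sketch (all figures hypothetical) shows why the savings include cooling: the facility overhead, expressed as PUE, multiplies every kilowatt of IT load that is shut down:

```python
# Rough estimate of energy saved when a site's load is shifted away
# and its now-idle servers are powered down.  All numbers illustrative.
it_load_kw = 400     # IT load that can be shut down after the shift
pue = 1.6            # facility overhead multiplier (cooling, power losses)
hours = 8            # length of the powered-down window

saved_kwh = it_load_kw * pue * hours
print(saved_kwh)  # → 5120.0
```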

Yet another way to save (or, more accurately, make) money is to participate in lucrative Demand Response (DR) programs, whereby electric utilities provide substantial cash payments to organizations willing and able to reduce power consumption during periods of peak demand.  During a DR event, the Software-Defined Power solution could, for example, power-cap less critical servers and/or power down others that are not needed to satisfy the current workload.
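A DR event handler along these lines might triage servers by priority and current load.  The sketch below is illustrative only, with made-up server names and a simple two-tier priority scheme:

```python
def dr_plan(servers, cap_watts):
    """Build a demand-response action plan.

    servers: list of (name, priority, busy) tuples, where priority 1 means
    critical and busy indicates the server carries current workload.
    Returns (to_cap, to_shutdown).
    """
    to_cap, to_shutdown = [], []
    for name, priority, busy in servers:
        if priority == 1:
            continue                          # leave critical servers untouched
        if busy:
            to_cap.append((name, cap_watts))  # cap, but keep serving the load
        else:
            to_shutdown.append(name)          # not needed for current workload
    return to_cap, to_shutdown

caps, downs = dr_plan(
    [("web-1", 1, True), ("batch-1", 2, True), ("batch-2", 2, False)],
    cap_watts=250,
)
print(caps, downs)  # → [('batch-1', 250)] ['batch-2']
```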

The final area of cost savings derives from the fact that when power is the most available and dependable, it is also the most affordable.  Because this typically occurs at night, shifting the application workload to “follow the moon” means the organization will be paying the lowest rate for electricity, and will often be using less of it based on the ability to cool the facility—in whole or in part—with outside air.
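A “follow the moon” scheduler needs little more than a clock per site.  This sketch (with hypothetical sites, hypothetical UTC offsets, and a crude definition of night) shows the idea:

```python
from datetime import datetime, timedelta, timezone

def night_sites(sites, now_utc):
    """Return the names of sites where it is currently night.

    sites: list of (name, utc_offset_hours) tuples.  "Night" is crudely
    defined as 22:00 to 06:00 local time.
    """
    out = []
    for name, utc_offset in sites:
        local_hour = (now_utc + timedelta(hours=utc_offset)).hour
        if local_hour >= 22 or local_hour < 6:
            out.append(name)
    return out

now = datetime(2024, 1, 1, 14, 0, tzinfo=timezone.utc)
print(night_sites([("tokyo", 9), ("frankfurt", 1)], now))  # → ['tokyo']
```

A real scheduler would use proper time-zone data (and live tariff feeds) rather than fixed offsets, but the selection logic stays this simple.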

Waste Not, Want Not

Because the amount of power available in any data center is finite, accommodating growth will also require greater energy efficiency in the data center of the future.  The reason is that as servers “shrink” (with more cores in the CPUs, and higher densities in the memory and storage), every rack of equipment consumes more power.  This will require paying close attention to two factors that are often taken for granted today.  One is the work performed per watt, a metric now available for servers through the Power Measurement standard from Underwriters Laboratories (UL2640).

The other factor is stranded power, which results from provisioning against the notoriously conservative nameplate ratings on equipment, especially servers.  These ratings specify the maximum possible power consumption, a level that rarely, if ever, occurs in practice, even under peak load.  Exclusive use of nameplate ratings can strand as much as 50 percent of the power being distributed to server racks.  The UL2640 standard can also help here by providing accurate measurements of the actual power servers consume under peak load.
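The arithmetic behind stranded power is simple.  With illustrative numbers, a rack provisioned by nameplate rating holds half as many servers as one provisioned by measured peak draw:

```python
# Stranded-power worked example.  The wattages are illustrative; measured
# peak draw is the kind of figure UL2640-style testing would provide.
nameplate_w = 500          # per-server nameplate rating
measured_peak_w = 250      # measured draw under peak load
rack_budget_w = 10_000     # power provisioned per rack

by_nameplate = rack_budget_w // nameplate_w        # servers/rack by nameplate
by_measurement = rack_budget_w // measured_peak_w  # servers/rack by measurement
stranded_fraction = 1 - measured_peak_w / nameplate_w

print(by_nameplate, by_measurement, stranded_fraction)  # → 20 40 0.5
```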

There are other ways to improve energy efficiency, of course, but all of these changes lead to an inescapable conclusion: power and energy will be of paramount importance in the (Software-Defined) Data Center of the future.

Clemens Pfeiffer is the CTO of Power Assure and is a 25-year veteran of the software industry, where he has held leadership roles in process modeling and automation, software architecture and database design, and data center management and optimization technologies. 
