How digital power technologies are helping data centres beat the heat

November 05, 2015 // By EDN
Martin Hägerdal, President, Ericsson Power Modules
As we head towards the end of 2015, there’s time for reflection. Much of central Europe endured a record-breaking heat wave during the summer, with temperatures consistently above 35°C and even exceeding 40°C in some places. The sizzling conditions gave data centre managers a problem. Keeping hard-working servers cool is energy intensive at the best of times, but cooling systems will have worked even harder over this unusually hot summer, leading to higher-than-usual utility bills.

Operators know their cooling systems are expensive to run, and have pursued a variety of approaches, ranging from smarter controls to building new data centres in naturally cold climates. Northern Scandinavia and Iceland are prime locations. Places such as these also have the advantage of a plentiful supply of energy from renewable sources, which can help cut costs and boost environmental credentials. The nature of ‘the cloud’ allows the flexibility to pick an optimal geographical location. Telecom and mobile network operators, on the other hand, have less choice over where to position their major infrastructure hubs.

In any case, cooling is just one part of the equation. The industry is also tackling the cause of the problem by ensuring equipment produces less heat. This, of course, means improving efficiency throughout the installation, and particularly the computing systems and power architecture.

As far as the computing angle is concerned, tomorrow’s processors and SoCs are likely to be fabricated using advanced finFETs: dual-gate devices that could take over from conventional planar transistors at geometries below 20nm. FinFETs have a number of performance advantages, including low on-resistance, low leakage current, and switching frequencies beyond 3GHz. On the other hand, dynamic power consumption can be higher at fast clock speeds, leading to higher but short-lived peaks in current demand. In addition, increased logic density will allow designers to pack many more high-performance processors and SoCs onto their boards. The result? The peak power of blades and switch cards could rise from around 1kW today to as much as 5kW in the future.
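The link between clock speed and those short-lived current peaks follows from the standard CMOS dynamic-power relation, P = α·C·V²·f. The sketch below illustrates the scaling only; the activity factor, capacitance, and supply voltage are hypothetical figures, not data from the article.

```python
# Illustrative sketch of CMOS dynamic (switching) power: P = alpha * C * V^2 * f.
# All parameter values below are hypothetical, chosen only to show the scaling.

def dynamic_power(alpha, c_switched, v_dd, f_clk):
    """Dynamic power in watts.

    alpha      -- activity factor (fraction of capacitance switching per cycle)
    c_switched -- total switched capacitance in farads
    v_dd       -- supply voltage in volts
    f_clk      -- clock frequency in hertz
    """
    return alpha * c_switched * v_dd ** 2 * f_clk

# Doubling the clock doubles dynamic power, and hence the peak current
# the power architecture must deliver at a given supply voltage.
p_1ghz = dynamic_power(alpha=0.2, c_switched=5e-9, v_dd=0.9, f_clk=1e9)
p_2ghz = dynamic_power(alpha=0.2, c_switched=5e-9, v_dd=0.9, f_clk=2e9)
print(p_2ghz / p_1ghz)  # -> 2.0
```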

The advent of multi-kilowatt blades will have a significant impact on power architectures, as system designers seek to minimise distribution losses. Since these losses scale with the square of the current and with the length of the distribution path, it makes sense to distribute power at a higher voltage than the typical 12V intermediate bus of today. Digital power allows the bus voltage to be dynamically adjusted for optimum performance according to the load conditions, and this fine-tuning can have a noticeable effect on efficiency at today’s power levels. However, the multi-kilowatt blades we expect to see in the future would draw well over 100 amps from a 12V bus. The industry is therefore looking at moving to a significantly higher intermediate bus voltage in order to minimise the losses due to current over distance.
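The case for a higher bus voltage is simple I²R arithmetic. A minimal sketch, assuming a hypothetical 10mΩ distribution-path resistance (an illustrative figure, not one from the article), compares a 5kW load fed from a 12V and a 48V bus:

```python
# Sketch of distribution (I^2 * R) loss for a 5 kW load at two bus voltages.
# The 10 milliohm path resistance is a hypothetical figure for illustration.

def distribution_loss(power_w, bus_v, path_ohms):
    """I^2 * R loss in the distribution path, in watts."""
    current = power_w / bus_v        # load current drawn from the bus
    return current ** 2 * path_ohms

loss_12v = distribution_loss(5000, 12, 0.010)   # ~417 A through the path
loss_48v = distribution_loss(5000, 48, 0.010)   # ~104 A through the path
print(loss_12v / loss_48v)  # -> 16.0
```

Quadrupling the bus voltage cuts the current by a factor of four and the conduction loss by a factor of sixteen, which is why 48V looks attractive as power levels climb.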

48V has a lot to offer as a suitable nominal intermediate bus voltage. It is within the IEC’s Safety Extra-Low Voltage (SELV) limit of 60V for user-accessible subsystems, and gives a good compromise between distribution loss and step-down conversion efficiency under the load conditions we expect to see. Compared to a system based on a 12V intermediate bus, the current distributed to the boards within a system can be reduced by a factor of four. In the case of a card with 5kW peak power, the current delivered will remain substantial but the number of DC/DC converters that need to be used in parallel can be reduced. In terms of efficiency, the higher intermediate bus voltage would provide