Data centre cooling, air management and upgrading to EC technology

By James Cooper, Product Manager, ebm-papst (UK) Ltd – 29th September 2017

Introduction

Data centre cooling, air management and upgrading to EC technology: a look at the impact of inefficient cooling and airflow management in legacy data centres, and how simple steps, including upgrading the fan technology to EC fans, can make a system more efficient and quieter, and give old equipment a new lease of life.

Legacy systems

Cooling computer rooms and data centres has been a hot topic since the mid-1950s, when there was a need to control the temperature and humidity for punched cards and magnetic tape heads. Even though technology has progressed at an incredible rate, many older data centres and computer rooms still adhere to the same cooling methods and guidelines, keeping return air temperatures in the low 20s °C because that was deemed necessary and safe for the equipment and servers within the room.

When looking at the efficiency of computer rooms and data centres, a standard metric that has been used for many years is PUE (Power Usage Effectiveness). This compares the total facility power requirement against the power needed for the IT equipment. Ideally this should be measured as energy in kWh rather than power, despite what the name suggests. The metric is not really comparable from one installation to another, but it can be used to track improvement within a single data centre.

Power Usage Effectiveness (PUE) = Total facility energy / IT equipment energy

An average legacy data centre in the UK has a PUE of 2.5, which means that only 40% of the energy used is available for the IT load.

Another way of expressing this is the metric DCiE (Data Centre infrastructure Efficiency), which is simply the inverse of PUE: a data centre with a PUE of 2.5 is 40% efficient.
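
As a minimal sketch of the two metrics (the figures below are illustrative, not measured data):

    # Illustrative PUE / DCiE calculation - example figures, not measured data
    it_energy_kwh = 1_000_000        # annual energy delivered to the IT equipment
    facility_energy_kwh = 2_500_000  # annual energy drawn by the whole facility

    pue = facility_energy_kwh / it_energy_kwh         # 2.5
    dcie = 100 * it_energy_kwh / facility_energy_kwh  # 40%

    print(f"PUE = {pue:.2f}, DCiE = {dcie:.0f}%")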

Most of the energy, unless it is a modern data centre, goes on mechanical cooling, which in the eyes of most IT people is a necessary evil. It doesn't have to be that way, as cooling methods and technology have also moved on significantly and there are many ways to make a data centre more efficient. Modern data centres designed to use ambient air, directly or indirectly, are becoming increasingly popular and show much more efficient results, coming in at a PUE of 1.2 or better.

If you look at the plethora of cooling options in data centres, it is no wonder that people struggle to decide what is best for them. Traditional CRAC units (Computer Room Air Conditioners), which tended to sit against the walls of the room or in a corridor blowing under the floor, have seen a more recent influx of aisle or rack units taking the cooling, with DX or water, closer to the server and offering higher density cooling capabilities. Direct and indirect fresh-air cooling is also seen as a viable option for the UK, since the ambient temperature here is below 12 °C for more than 60% of the year. Adiabatic cooling has also seen a revival recently, even though it is most efficient in hotter climates.

They say that nothing is new, just reinvented, and this is true. Raised-floor cooling was used by the Romans, and adiabatic cooling can be seen in ancient Egyptian frescos illustrating slaves fanning large water-filled clay jars, and in Persia, where wind-catcher towers were constructed. Even Leonardo da Vinci had a stab at it.

The problem is that, certainly in legacy data centres, there are limited options for modifying the structure of the building to take advantage of new ideas. It is also the case that most data centres run at partial load and never get anywhere near their original design capacity. Although high-density racks capable of 60 kW or more are available, in the past few years the average rack density has barely gone above 4 kW per cabinet (less than 2 kW/m²).

There are many views on how to improve cooling systems and save energy, and a lot of the talk has been about raising the temperature of the air going into the servers. Certainly this has some merit. A lot of data centres measure return air temperature, which can be a mix of hot and cold air in the room. Temperatures at the intake to the servers are generally at the low end of ASHRAE best practice, at around 18 °C, with return air, if you are lucky, at around 24 °C. Increasing the air-on temperature to the racks means the upstream cooling can become more efficient. Increasing the delta T of the air going back to the cooling unit, perhaps by segregating the air paths, will also increase the cooling capacity of the system. This type of strategy has obvious advantages but also raises concerns for the IT manager, who does not want to risk equipment overheating and failing. With modern blade servers there is also a health and safety risk: with a high delta T across the server, it is possible to get air-off temperatures at the back of a rack of 50 °C.
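
As a rough illustration of why delta T matters: sensible cooling capacity scales with the air-side temperature difference (Q ≈ mass flow × specific heat × ΔT), so at the same airflow a return ΔT of 12 K lets a cooling unit remove roughly twice the heat of a 6 K ΔT.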

Basic Steps

So what is a good strategy? Starting with the low-hanging fruit is always a good idea, and being realistic about what your infrastructure can support will help narrow the options. The first thing to bear in mind is that two critical components within the cooling system should be the focus: the compressors and the fans. If you can improve the efficiency of the cooling circuit so that the compressors run for less time, this will lead to a huge energy saving. If you can use the latest EC fan technology and reduce the airflow when it is not required, this will also lead to further large energy savings.

Air is lazy! If air can find an easy route, it will. One of the biggest, and most easily fixed, wastes of energy is poor air management. If the air can escape and bypass a server it will, and that air is wasted. Plugging gaps and forcing the air to go only to the front of the racks is an easy step towards improving efficiency. The Uptime Institute indicated that simple airflow management and best practice could improve PUEs from 2.5 to 1.6.
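
To put that in context, using illustrative figures only: for a 100 kW IT load, a PUE of 2.5 implies roughly 250 kW of total facility power, whereas a PUE of 1.6 implies around 160 kW, a saving in the region of 90 kW from airflow management and best practice alone.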

Aisle containment is one method of restricting and segregating air paths, and it does not have to be that expensive, even if your racks resemble the Manhattan skyline. If possible, try to stop warm air from one rack blowing into the air intake of another, whether because they face each other or because of recirculation.

Aisle separation can reduce temperature differential from the bottom to the top of a rack significantly.

But it is also key not to flood these areas with airflow that is not needed. The server fans can only draw a certain volume of air, so the aim is to supply just enough air into the aisle by controlling the airflow from the CRACs. Too much can lead to air escaping where it should not, and too little means the server fans will be starved. Even with aisle containment solutions in place, if you do not follow best practice and seal around the cabinets, air can escape or be drawn into the aisle from the hot side.
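
As a back-of-envelope sketch of "just enough air", assuming sensible cooling of standard air and using illustrative figures, the airflow the CRACs need to deliver for a given IT load and air-side delta T can be estimated as follows:

    # Rough sizing sketch: airflow needed to remove a given IT heat load
    # Illustrative figures; assumes sensible cooling only and standard air
    rho = 1.2           # air density, kg/m^3
    cp = 1.005          # specific heat of air, kJ/(kg.K)

    it_load_kw = 100.0  # heat to remove, kW (example)
    delta_t = 10.0      # air temperature rise across the servers, K

    airflow_m3s = it_load_kw / (rho * cp * delta_t)  # ~8.3 m^3/s
    print(f"Required airflow: {airflow_m3s:.1f} m^3/s "
          f"({airflow_m3s * 3600:.0f} m^3/h)")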

More and more companies are offering airflow and temperature simulations, which can be very good at showing what is really happening within the data centre; but, as with any simulation, they are only as good as the accuracy of the data going in.

EC fan upgrades

Fans are critical to the movement of air around the data centre. Legacy units may contain old, inefficient AC blowers with belt drives that break regularly and shed belt dust throughout the room. They are usually patched up and kept going, because changing a complete CRAC unit can be costly and sometimes physically impossible. Typically the fans run at a single speed, so, with most data centres requiring only partial load, airflow is managed by shutting off air vents or switching units off completely.

Upgrading to EC fans is one way to bring an immediate saving. With modern EC fan technology there is no need for belts and pulleys, and motor efficiencies are significantly higher, in excess of 90%. The main benefit is that the fans can be easily, and cost-effectively, speed controlled, allowing a partially loaded data centre to turn the airflow down to only what is needed.

The amount of turn-down depends on the capabilities and type of unit, and whether it is DX or chilled water. Significant noise reductions can also be achieved by slowing the fans down, as the graphs show.

A 50% reduction in airflow can mean an EC fan consuming just 1/8 of the power, with a potential noise reduction of 15 dB(A). Added to cooler running, maintenance-free operation and longer lifetimes, this offers a simple and cost-effective improvement to any data centre.
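
That 1/8 figure follows from the fan affinity laws: airflow varies roughly in proportion to speed, while absorbed power varies roughly with the cube of speed. A minimal sketch with illustrative numbers, not data for any particular fan:

    # Fan affinity laws (idealised): flow ~ speed, power ~ speed^3
    # Illustrative figures only; real fans and systems will deviate somewhat
    full_speed_power_w = 2000.0  # example absorbed power at 100% speed

    speed_fraction = 0.5         # run the fan at 50% speed, giving ~50% airflow
    power_w = full_speed_power_w * speed_fraction ** 3  # ~250 W, i.e. 1/8

    print(f"Power at {speed_fraction:.0%} airflow: {power_w:.0f} W "
          f"({power_w / full_speed_power_w:.1%} of full power)")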

EC fans are a plug-and-play solution, without the need to set up and commission separate drives. The fans can be controlled with a simple 0-10 V or PWM low-current signal from a sensor or manual dial, and can also interface with other systems over Modbus, allowing even more control and monitoring possibilities.
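
As a purely hypothetical sketch of how simple that control can be, the snippet below maps a measured supply-air temperature to a 0-10 V demand signal; the setpoint, gain and example values are placeholders, not settings for any particular installation:

    # Hypothetical proportional fan-speed demand from supply-air temperature
    SETPOINT_C = 22.0          # target supply-air temperature (example)
    MIN_V, MAX_V = 2.0, 10.0   # keep a minimum airflow, allow full speed

    def demand_volts(measured_c: float, gain: float = 2.0) -> float:
        """Map temperature error to a 0-10 V fan demand signal."""
        volts = MIN_V + gain * (measured_c - SETPOINT_C)
        return max(MIN_V, min(MAX_V, volts))

    # Example: 25 degC measured gives 2 + 2 x (25 - 22) = 8 V demand
    print(demand_volts(25.0))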

Upgrading the fans in a system is more cost-effective than you might think. Typical paybacks of 18 to 24 months can be achieved, and can be as good as 12 months, depending on the operating hours of the equipment and how efficient the existing system is.

Payback is the time it takes for the energy saving to completely cover the cost of the upgrade installation, including materials and labour.
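
A minimal payback sketch, with all figures purely illustrative rather than quoted costs:

    # Illustrative payback calculation - example figures, not quoted costs
    old_fan_power_kw = 3.0     # absorbed power of the existing AC blower
    new_fan_power_kw = 1.2     # EC replacement at the reduced duty point
    hours_per_year = 8760      # continuous operation
    tariff_per_kwh = 0.12      # electricity price, GBP/kWh (example)
    upgrade_cost_gbp = 2500.0  # materials and labour (example)

    annual_saving_kwh = (old_fan_power_kw - new_fan_power_kw) * hours_per_year
    annual_saving_gbp = annual_saving_kwh * tariff_per_kwh
    payback_months = 12 * upgrade_cost_gbp / annual_saving_gbp

    print(f"{annual_saving_kwh:.0f} kWh/yr saved, payback ~{payback_months:.0f} months")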

The knock-on effect of saving energy is a reduction in CO2 and carbon emissions. This is a direct calculation using DEFRA guidelines, which give a kgCO2 per kWh conversion factor to apply to the kWh saved.
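
Continuing the illustrative figures above, and using a placeholder grid factor rather than the actual DEFRA value for any given year:

    # Illustrative CO2 saving - substitute the current DEFRA conversion factor
    grid_factor_kgco2_per_kwh = 0.3  # placeholder value, check DEFRA guidance
    annual_saving_kwh = 15_768       # from the payback sketch above

    co2_saving_kg = annual_saving_kwh * grid_factor_kgco2_per_kwh
    print(f"~{co2_saving_kg / 1000:.1f} tonnes CO2 saved per year")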

Improving airflow within the data centre means that upstream systems (chillers, condensers and so on) can relax and be turned down, so even more energy can be saved. Don't forget to consider the external equipment as well when upgrading fans and control strategies to make the overall system more efficient.

Conclusion

As technology advances at an exponential rate, with equipment becoming smaller and faster, the future of cooling is secure. Whatever the choice of medium, there will be a need to keep equipment cool, and to do so efficiently.

Data centres will continue to expand as social and business demands for digital content and services grow, and they will reinforce their position as the hubs of the internet.

IT loads will become more computational/energy efficient and, as a result, far more dynamic.

Thermal management will improve and PUEs will fall to 1.1-1.2 in Europe.

Air will take over from chilled water as the most economic cooling solution, and variable-speed EC fans will become mandatory for either pressure or capacity control within the modern data centre.