With the rise of big data analytics, machine learning, and AI over the last few years, there has been an increasing demand for high-performance computing (HPC) within data centres. These platforms usually consist of large numbers of GPU and CPU cores working in parallel, allowing complex workloads such as AI model training, machine learning for medical diagnostics, or large computational fluid dynamics (CFD) models to execute many more instructions than they could on traditional server infrastructure.

What is High-Performance Computing?

HPC is moving out of the realm of dedicated ‘Supercomputers’, although these are still in high demand; if you’re interested, the Top 500 list of Supercomputers can be found here.

There is a growing demand for GPU- and CPU-based compute at scale among traditional enterprises as well as start-ups looking to exploit machine learning. Both high core count CPUs and GPUs are needed to build an HPC cluster, and demand for GPU-based processing is increasing rapidly because GPUs excel at the floating-point calculations these workloads rely on. These applications require racks full of high-end AMD or NVIDIA GPUs, such as NVIDIA’s Titan X or Tesla cards (currently NVIDIA leads the field with its CUDA programming platform and cores).
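To illustrate the kind of floating-point work that gets offloaded to GPUs, here is a minimal Python sketch of a single-precision matrix multiply running on a CUDA GPU. It assumes the CuPy library and a CUDA-capable card are available; the matrix size is purely an illustrative choice.

```python
# A minimal sketch of the floating-point work GPUs accelerate,
# assuming a CUDA-capable GPU and the CuPy library are available.
import numpy as np
import cupy as cp

n = 2048  # illustrative size; a dense matmul is roughly 2 * n^3 floating-point operations

# Generate single-precision matrices on the CPU.
a_cpu = np.random.rand(n, n).astype(np.float32)
b_cpu = np.random.rand(n, n).astype(np.float32)

# Copy them to GPU memory and run the same multiply there.
a_gpu = cp.asarray(a_cpu)
b_gpu = cp.asarray(b_cpu)
c_gpu = a_gpu @ b_gpu
cp.cuda.Stream.null.synchronize()  # wait for the GPU kernel to finish

# Bring the result back to host memory and check it against the CPU result.
c_cpu = a_cpu @ b_cpu
print(np.allclose(c_cpu, cp.asnumpy(c_gpu), rtol=1e-3))
```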

What impact is this having on the data centre?

With more CPU and GPU cores being packed into an individual rack footprint, power demands are increasing significantly. Traditional data centre deployments of medium-density compute and storage typically draw 3 – 7 kW per rack, which can easily be cooled using traditional air-cooling methods.

New high-density HPC deployments are pushing these power requirements to 20 – 40 kW per rack footprint. With increased power loading comes an increased heat load, both of which need to be dealt with by the data centre operator.
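To put those figures in context, here is a rough back-of-the-envelope sketch of the airflow needed to carry a rack’s heat load away with air alone. The air properties are standard approximations, and the 12 °C inlet-to-exhaust temperature rise is an assumed example rather than a measured figure from any particular facility.

```python
# Rough airflow needed to carry away a rack's heat load with air alone.
# Q = m_dot * cp * deltaT  =>  volumetric flow = P / (rho * cp * deltaT)
RHO_AIR = 1.2   # kg/m^3, approximate density of air
CP_AIR = 1005   # J/(kg*K), specific heat of air
DELTA_T = 12    # K, assumed rise from inlet to exhaust ('air off') temperature

def airflow_m3_per_s(rack_power_kw: float) -> float:
    """Volumetric airflow (m^3/s) needed to absorb rack_power_kw of heat."""
    return (rack_power_kw * 1000) / (RHO_AIR * CP_AIR * DELTA_T)

for kw in (5, 20, 40):
    m3s = airflow_m3_per_s(kw)
    cfm = m3s * 2118.88  # 1 m^3/s is roughly 2,118.88 cubic feet per minute
    print(f"{kw:>2} kW rack: ~{m3s:.2f} m^3/s (~{cfm:,.0f} CFM)")
```

Under these assumptions a 40 kW rack needs roughly eight times the airflow of a 5 kW rack, which is why simply pushing more cold air at the problem quickly stops being practical.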

At 4D Data Centres, in our Gatwick facility, we’ve solved this with 3-phase power delivery to the rack, allowing us to supply a resilient 20 to 40 kW in a single rack footprint. The increased heat load is handled through ‘rear door chilling’, essentially a cooling loop on the back of the rack connected directly into the same chilled water system that feeds the main data floor CRAC (Computer Room Air Conditioning) units. This takes the high heat load directly off the back of the equipment (the ‘air off temperature’) and removes enough heat to bring temperatures down to the level we would typically see from a 3 – 7 kW rack footprint. By reducing the ‘air off temperature’ in this way, we’re able to supply high-density rack footprints for HPC while still using our standard data centre CRACs, chilled water system, and highly efficient cooling towers.
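As a rough illustration of why a water loop copes with these loads so much more comfortably than air, the sketch below estimates the chilled water flow a rear door cooler would need for a given rack load. The 6 °C water temperature rise is an assumed example rather than a figure from our chilled water system.

```python
# Chilled-water flow needed for a rear door cooler to absorb a rack's heat load.
# P = m_dot * cp_water * deltaT_water  =>  m_dot = P / (cp_water * deltaT_water)
CP_WATER = 4186    # J/(kg*K), specific heat of water
DELTA_T_WATER = 6  # K, assumed temperature rise across the rear door coil

def water_flow_l_per_s(rack_power_kw: float) -> float:
    """Water mass flow (kg/s, roughly litres/s) needed to absorb rack_power_kw."""
    return (rack_power_kw * 1000) / (CP_WATER * DELTA_T_WATER)

for kw in (20, 40):
    print(f"{kw} kW rack: ~{water_flow_l_per_s(kw):.1f} litres/s of chilled water")
```

On these assumptions, even a 40 kW rack needs well under two litres of chilled water per second, a modest flow compared with the thousands of cubic feet of air per minute the same load would demand.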

What other changes might we see in the data centre?

As power and heat loads within the data centre continue to climb, more sophisticated cooling, such as immersion or direct-to-chip, will likely be required to meet those demands.

At 4D we’re already exploring the deployment of immersion cooling for our high-density and HPC customers. Immersion cooling involves a tank filled with a fluid, such as mineral oil, which transfers heat very effectively, plus a radiator that links to the main data centre cooling systems. Because the equipment is completely submerged in the oil, heat is removed directly from the chips far more efficiently than is possible using air alone; with the correct design, this allows for potentially up to 100 kW per immersion cooling unit. It isn’t without its drawbacks, but for specialist requirements with very high power densities, it presents an exciting option.
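To give a feel for why submerging hardware in fluid is so much more effective than blowing air over it, the short sketch below compares the volumetric heat capacity of air with typical values quoted for mineral oil coolants; the oil figures are representative approximations rather than the properties of any specific product.

```python
# Compare how much heat a given volume of coolant can absorb per degree of temperature rise.
# Volumetric heat capacity = density * specific heat.
AIR = {"rho": 1.2, "cp": 1005}          # kg/m^3, J/(kg*K), standard approximations
MINERAL_OIL = {"rho": 850, "cp": 1900}  # kg/m^3, J/(kg*K), typical and product-dependent

def volumetric_heat_capacity(fluid: dict) -> float:
    """Heat absorbed per m^3 of fluid per kelvin of temperature rise, in J/(m^3*K)."""
    return fluid["rho"] * fluid["cp"]

air_vhc = volumetric_heat_capacity(AIR)
oil_vhc = volumetric_heat_capacity(MINERAL_OIL)
print(f"Air:         ~{air_vhc / 1000:.1f} kJ/(m^3*K)")
print(f"Mineral oil: ~{oil_vhc / 1000:.0f} kJ/(m^3*K)")
print(f"Ratio:       ~{oil_vhc / air_vhc:,.0f}x more heat per unit volume")
```

On these representative figures, a given volume of mineral oil absorbs on the order of a thousand times more heat than the same volume of air for each degree of temperature rise, which is what makes very high densities per tank plausible.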

For more information on how we can help you with HPC or to get a same-day quote, please get in touch.