
Join the hyperscalers: Vertiv’s adaptable solutions raise the bar on next-gen cooling

Strategies for next-generation data centres have never been more critical – and at GITEX 2025, Veronica Martin spoke to Sam Bainborough, Vice President Thermal Business for Vertiv in EMEA, about the company’s liquid cooling solutions, efficiency-driven innovations, and strategies to address ever-increasing rack densities.

What are the key liquid cooling solutions Vertiv is highlighting at GITEX this year, and why are they so crucial for AI workloads? 

We have the coolant distribution unit (CDU) product range, which includes our Vertiv™ CoolChip CDU products and all of our liquid cooling technologies. On the stand, we’re showcasing a roughly 600-kilowatt unit, but our offerings span from about 100 kilowatts with rack-mounted units – these are particularly popular among enterprise clients – to a 2300-kilowatt unit. We also offer hybrid air-to-liquid solutions like the Vertiv™ CoolPhase Flex, which features liquid cooling on one side and air heat rejection into the room on the other. We’re seeing strong traction with this configuration, with several hyperscalers already adopting it.

This product line is particularly important for existing facilities, as it allows you to deploy one-megawatt or even five-megawatt pods and integrate an air-to-liquid (or liquid-to-air) unit that effectively rejects heat into the room. 

We’re also seeing significant demand from large hyperscale and neo-cloud clients for full liquid-to-liquid solutions. In addition to the 600-kilowatt unit, we offer 1.35-megawatt and 2.3-megawatt units, and we’re developing modular designs that can scale even further. In the US, for example, there’s a growing interest in 5-10 megawatt CDU deployments. 
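To put those capacities in context, here is a rough editorial back-of-envelope sketch (not Vertiv sizing data) of the secondary-loop flow a CDU would need to circulate at the power levels mentioned above. It assumes plain water and a 10°C temperature rise across the loop; real deployments often use water/glycol mixes and different design delta-Ts, so treat the numbers as indicative only.

```python
# Illustrative sketch (not Vertiv data): estimate the secondary-loop flow
# a CDU must circulate for a given heat load, using Q = m_dot * cp * dT.
# Assumed fluid properties approximate plain water; real loops often run a
# water/glycol mix with a lower specific heat, so the figures are rough.

CP_WATER = 4186.0      # specific heat of water, J/(kg*K)
RHO_WATER = 997.0      # density of water, kg/m^3

def required_flow_lpm(heat_load_kw: float, delta_t_k: float) -> float:
    """Litres per minute needed to absorb heat_load_kw at a given delta-T."""
    mass_flow = heat_load_kw * 1000.0 / (CP_WATER * delta_t_k)   # kg/s
    volumetric_flow = mass_flow / RHO_WATER                      # m^3/s
    return volumetric_flow * 1000.0 * 60.0                       # L/min

# CDU sizes mentioned in the interview, assuming a 10 K rise across the loop.
for kw in (100, 600, 1350, 2300):
    print(f"{kw:>5} kW -> ~{required_flow_lpm(kw, 10.0):,.0f} L/min")
```

Under these assumptions a 600-kilowatt unit works out to roughly 860 litres per minute, which is one reason the piping between server and CDU becomes such a central part of these deployments.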

A critical part of this ecosystem is the secondary fluid network, which is the connection between the server and the CDU. We’ve invested heavily in standardising these networks, manufacturing them offsite, and deploying them on site. This ensures system cleanliness and reliability, and we’ve been collaborating closely with several hyperscalers to implement these at scale.

How does liquid cooling enable AI and high-performance GPU clusters to operate efficiently and reliably?

There are a few key aspects to this. First, when you use a cold plate rather than air cooling over a server, heat transfer is significantly more efficient. One of the main efficiency gains comes from delivering the fluid directly to the chip. 

Another critical factor is water temperature. We now offer solutions that operate across a wider temperature range. For example, the Vertiv™ CoolLoop Trim Cooler can supply water anywhere from 15-20°C up to 40°C, enabling wider use of free cooling. Operating at higher temperatures increases overall efficiency, reduces mechanical cooling needs, and extends the system’s lifecycle. We’ve been developing these solutions to support high-performance computing and stay multiple generations ahead of NVIDIA’s rapid GPU evolution.
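As an illustration of why a higher allowable water temperature matters, the following editorial sketch (with an assumed heat-exchanger approach temperature, not a Vertiv model) checks whether ambient air alone can produce facility water at a given target temperature. The wider the acceptable supply range, the more of the year free cooling can cover before any mechanical trim is needed.

```python
# Rough editorial sketch: can a dry/trim cooler hit a target facility water
# temperature without mechanical (compressor) cooling? "approach_k" is the
# assumed temperature difference the heat exchanger needs between ambient
# air and the supply water; 5 K is a placeholder, not a product spec.

def free_cooling_possible(ambient_c: float, target_supply_c: float,
                          approach_k: float = 5.0) -> bool:
    """True if ambient air alone can produce water at target_supply_c."""
    return ambient_c + approach_k <= target_supply_c

# Higher allowable water temperatures widen the free-cooling window.
for ambient in (10, 25, 34, 42):
    for target in (20, 30, 40):
        mode = "free cooling" if free_cooling_possible(ambient, target) else "trim/mechanical"
        print(f"ambient {ambient:>2}°C, target {target}°C -> {mode}")
```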

The Vertiv™ CoolLoop Trim Cooler essentially combines dry cooler efficiency with mechanical cooling backup to effortlessly integrate liquid and air cooling, equipping customers’ data centres with the right solution for today and tomorrow. The goal is to maximise water temperature for free cooling, saving energy across the facility. Additionally, we’ve developed flexible products like the Vertiv™ CoolPhase Flex, co-designed with Compass, a colocation customer in the US, which can flexibly switch between liquid and air cooling. This provides seamless adaptability for future changes in operational requirements.

Being as close to the chip as possible is another efficiency driver. Standard liquid-to-chip designs achieve around 75-80% efficiency, leaving 15-20% of the heat to be handled by air cooling. All of these solutions align with NVIDIA’s standard reference designs and demonstrate a strong focus on efficiency, adaptability, and operational practicality, while avoiding some of the deployment challenges associated with immersion cooling.
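For a sense of what that split means in practice, this small editorial sketch (using a hypothetical rack size, not measured data) shows how much heat still lands on the room’s air cooling when cold plates capture 75-80% of a rack’s load.

```python
# Simple illustration of the liquid/air heat split described above
# (assumed numbers, not measured data): if cold plates capture ~75-80%
# of rack heat, the remainder still has to be removed by room air cooling.

def heat_split(rack_kw: float, liquid_capture: float) -> tuple[float, float]:
    """Return (kW to the liquid loop, kW left for air cooling)."""
    to_liquid = rack_kw * liquid_capture
    return to_liquid, rack_kw - to_liquid

for capture in (0.75, 0.80):
    liquid_kw, air_kw = heat_split(120.0, capture)   # hypothetical 120 kW rack
    print(f"capture {capture:.0%}: {liquid_kw:.0f} kW to liquid, {air_kw:.0f} kW to air")
```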

How quickly are enterprises and hyperscalers in EMEA adopting liquid cooling, and what’s driving this adoption? 

We’ve observed significant trends in utilisation and available space within MTDC (multi-tenant data centre) colocation facilities. Many colocation clients are currently selling a large portion of their pre-built capacity, but we’re also seeing a growing shift: existing facilities are being retrofitted for liquid cooling to meet evolving customer demands.

We’re seeing substantial interest from our customers regarding future designs. The key challenge is designing for the next two to three years while enabling solutions to remain adaptable to changing server conditions. 

Looking at NVIDIA’s roadmap, future servers and GPUs may push power densities to 400-600 kW per rack, with some discussions even around 1 MW per rack. Currently, existing spaces are being deployed for 200-300 kW configurations, but greenfield builds will be necessary to accommodate higher-density deployments.

The critical focus for MTDC operators is planning ahead, so that new facilities coming online in two to three years can accommodate the demands of evolving hardware, while providing scalable and marketable capacity for customers well into the future. 

Are there specific considerations or challenges in the Middle East and EMEA when deploying liquid cooling solutions? 

In the Middle East, higher ambient temperatures mean that traditionally, free cooling isn’t as viable as in other regions. However, the latest innovations in chilled water systems now enable free cooling even in harsh climates like the Middle East. We’re also seeing a strong focus on the total power consumption of units, which is just as important as efficiency. Given the region’s relatively abundant power availability, many deployments in the Middle East are being designed around available energy capacity, particularly for large-scale operations. 

In Europe, we’ve seen significant traction in the Nordics, where low temperatures and access to renewable energy make high-efficiency data centre deployments ideal. Deployments vary by use case: smaller node setups of 10-20 kW are being built in conventional data centre areas, while large AI factories are being planned in both the Nordics and other regions, including recent announcements in the UK. 

Overall, the Middle East is emerging as a prime location for large-scale AI and high-power deployments, driven by power availability and market demand. Meanwhile, the Nordics remain a top choice for energy-efficient, high-performance data centres, particularly for workloads with significant cooling demands and carbon footprint considerations.

How do you see liquid cooling evolving over the next three to five years, particularly as AI and high-density compute continue to grow? 

We’re closely following the roadmap for chip density increases, and it’s not just about NVIDIA. Other chip manufacturers are driving similar trends. As rack power densities rise, our solutions need to keep pace and stay multiple compute generations ahead. A common question we receive concerns one-megawatt CDUs, which act as sidecars to a one-megawatt rack. While marketing teams often talk about this concept, in practice, we already have these units available. We’re deploying one-megawatt CDUs, and even 2.3-megawatt units, which can sit very close to the IT infrastructure, moving from the grey space outside the room into the white space itself. 

We also recently announced our collaboration with NVIDIA on an 800 VDC power architecture, which, like cooling, needs to be positioned very close to the rack. As we move from 124 kW to 200-300 kW racks, the infrastructure mostly stays outside the white space. But at 400-500 kW and beyond, sidecar deployment becomes necessary: power on one side, cooling on the other, directly adjacent to the rack. 
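A quick editorial back-of-envelope calculation (with assumed voltages and rack sizes, not Vertiv or NVIDIA figures) shows why distribution voltage has to rise alongside rack power: the current a busbar must carry scales as I = P / V, and conductor sizing and losses scale with that current.

```python
# Back-of-envelope sketch (assumed figures) of why busbar voltage matters
# as rack power climbs: current scales as I = P / V. The 48 V figure is a
# placeholder for a legacy low-voltage busbar, used purely for comparison.

def rack_current_a(rack_kw: float, bus_voltage_v: float) -> float:
    """DC current drawn by a rack at a given distribution voltage."""
    return rack_kw * 1000.0 / bus_voltage_v

for rack_kw in (124, 300, 500, 1000):
    print(f"{rack_kw:>4} kW rack: {rack_current_a(rack_kw, 48):>6.0f} A at 48 V, "
          f"{rack_current_a(rack_kw, 800):>5.0f} A at 800 V DC")
```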

Cooling strategies are also evolving. Historically, lower chilled water temperatures have increased chip efficiency. As racks scale to 500 kW and eventually one megawatt, there’s an open question for GPU designers: should future designs optimise for high chilled water temperatures or maintain lower temperatures for maximum efficiency? This is a key consideration for long-term planning and one of the critical topics we’re addressing as we plan for the next generations of high-density infrastructure. 

Image Credit: Vertiv

