
Samsung Electronics recently acquired FlaktGroup, Europe’s largest heating, ventilation and air conditioning (HVAC) company. The move has been seen as a preparatory measure, as data centre upgrades and new site construction will require more advanced cooling systems to keep up with the power consumption of current AI chips.
Counterpoint Research associate director Gareth Owen told Mobile World Live “it’s a logical acquisition”, with Samsung seeing the HVAC market, particularly for data centres, as a major opportunity. He noted the deal enables it to acquire specialist engineering and product capability in precision cooling.
The acquisition includes three FlaktGroup subsidiaries: Woods Air Movement, a provider of ventilation and fire safety systems; Semco, a specialist in air handling and flow solutions; and SE-Elektronic, which delivers tailored advanced automation systems.
FlaktGroup has also set up a North America data centre team to handle growing demand in the US.
Owen pointed to additional synergies with Samsung’s existing IoT and AI building platform, which will allow the company to offer a full range of HVAC services, from residential and commercial to large-scale industrial and precision cooling for AI data centres.
Growth prospects
Samsung is no doubt tapping into a lucrative market. Just this week, Meta Platforms committed $600 billion in new investment in data centres and AI infrastructure in the US, Microsoft earmarked $10 billion for AI facilities in Portugal and Google allocated €5.5 billion towards an expansion in Germany.
South Korea-based tech analyst Jukan said in a post on X the deployment of new AI compute capacity will “inevitably depend on large-scale upgrades to existing facilities and a massive wave” of new data centre construction worldwide.
Jukan cited a 2024 survey by Uptime Institute that found only 5 per cent of the world’s data centres can support an average rack power density exceeding 30kW, meaning the vast majority can’t even handle a full rack of Nvidia’s previous-generation Hopper chips.
A Bank of America Global Research note stated a rack of standard Nvidia Hopper H200 chips consumes 35kW, with a rack of the company’s new Blackwell B200 chips requiring 120kW. Its next-generation Rubin chips (due in H2 2026) will use 600kW per rack.
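As a rough illustration of the mismatch, the rack figures above can be lined up against the 30kW density threshold from the Uptime Institute survey. The values below are simply those reported in the article, not official spec sheets:

```python
# Per-rack power draw as reported (BofA Global Research figures);
# the Rubin value is a projection, not a shipping product spec.
RACK_POWER_KW = {
    "Hopper H200": 35,
    "Blackwell B200": 120,
    "Rubin (projected)": 600,
}

# Maximum average rack density most existing data centres support,
# per the cited Uptime Institute survey.
FACILITY_DENSITY_KW = 30

for rack, draw in RACK_POWER_KW.items():
    fits = draw <= FACILITY_DENSITY_KW
    print(f"{rack}: {draw}kW per rack -> "
          f"{'fits' if fits else 'exceeds the 30kW limit'}")
```

Even the oldest of the three generations overshoots the density most facilities can sustain, which is the gap the survey highlights.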
New chip releases from rivals AMD and Intel are following a similar trajectory.
New builds
Owen explained it is possible to retrofit existing data centres to some extent to handle more powerful chips, but he believes most facilities deploying rack-scale systems will be new builds. The racks are so heavy, he added, that the floors of existing centres would need to be reinforced.
A rack-scale system is a data centre architecture that treats an entire rack of hardware as a single, unified computing system rather than a collection of individual servers. Such systems require liquid cooling.
Owen noted rack-scale systems are a new paradigm in AI computing infrastructure but currently only represent a small proportion of total AI compute shipments.
He estimates tens of thousands of Nvidia’s NVL72 systems have been deployed since the design launched in Q4 2024. The NVL72 is a rack-scale AI supercomputer combining 72 Blackwell GPUs and 36 Grace CPUs into a single liquid-cooled system.
Nvidia and its partners initially encountered problems with integration and cooling (leaks), but these were resolved, he said, adding that the systems’ complexity can slow deployments. Other types of AI computing, such as smaller node-based servers with four to eight GPUs each, are air-cooled and don’t face such issues.
While rack-scale systems are very expensive, Owen argued they will likely only be used for the most demanding training and inference tasks, where the high cost of the compute is justified.
Soaring energy costs
Jukan cited data from Morgan Stanley in another X post which estimated the cost of the cooling system for a single Nvidia GB300 NVL72 cabinet at about $50,000, rising to nearly $56,000 for the Vera Rubin NVL144 due to denser GPU configurations and higher thermal loads.
In addition to more costly cooling components, the cost of running such systems is climbing. Chips powering generative AI models such as ChatGPT can use up to six times more electricity than older data centre chips.
A Bank of America Institute report projects power consumption of the data centre sector in the US to more than double over the next ten years, with annual growth outpacing all other major sectors.
The authors stated: “Already big users of electricity, data centres will guzzle even more energy going forward”, noting the facilities currently consume 1 per cent to 2 per cent of global electricity production.
Different organisations forecast worldwide energy use by data centres to grow between 11 per cent and 20 per cent annually until 2030.
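Compounding that forecast range over the six years to 2030 gives a feel for the scale. This is illustrative arithmetic only, not a figure from any of the cited forecasts:

```python
# Compound the 11-20 per cent annual growth range from 2024 to 2030
# (six years) to see the cumulative multiplier on energy use.
YEARS = 6

for rate in (0.11, 0.20):
    factor = (1 + rate) ** YEARS
    print(f"{rate:.0%} a year for {YEARS} years -> {factor:.1f}x energy use")

# At 11 per cent the total roughly doubles (~1.9x);
# at 20 per cent it nearly triples (~3.0x).
```

Even the low end of the range is consistent with the Bank of America projection of consumption more than doubling over a decade.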
More GPUs in play
The bank’s Global Research unit, meanwhile, expects AI-related GPU shipments to jump from around 9 million in 2024 to some 25 million in 2030. While the number of GPUs per AI server can vary greatly, power consumption is tied more to the number of active GPUs than to the server count.
The International Energy Agency estimates power usage of AI servers will climb from 63TWh in 2024 to 300TWh by 2030, accounting for one-third of total data centre electricity consumption.
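The compound annual growth rates implied by those two figures can be sketched as follows. This is back-of-the-envelope arithmetic on the reported numbers, not part of either the BofA or IEA forecast:

```python
def cagr(start, end, years):
    """Compound annual growth rate between two values over a period."""
    return (end / start) ** (1 / years) - 1

# BofA figures: AI-related GPU shipments, millions of units, 2024 -> 2030.
print(f"GPU shipments 9M -> 25M: {cagr(9, 25, 6):.1%} a year")

# IEA figures: AI server power use, TWh, 2024 -> 2030.
print(f"AI server power 63TWh -> 300TWh: {cagr(63, 300, 6):.1%} a year")
```

Shipments imply roughly 19 per cent annual growth, while power use implies roughly 30 per cent, reflecting the steep rise in per-GPU consumption described above.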
Rising power requirements, coupled with long lead times in some regions to increase power capacity, are creating challenges for data centre operators.
Two recently completed 48MW data centres in Santa Clara, California, now sit idle because the local utility provider is unable to supply sufficient power.
With trillions of dollars committed to building new AI infrastructure in the US alone, pressure on power grids will only increase, boosting demand for HVAC gear along the way.
Samsung’s move into the HVAC space certainly appears well timed. The new unit could emerge as a fresh cash cow, perhaps giving the conglomerate a steadier revenue stream than its mercurial semiconductor business.
Source: Mobile World Live
