The data centre industry is talking about liquid cooling like it is the answer to everything. Direct-to-chip cooling, immersion tanks, rear-door heat exchangers. The message from many vendors is clear: air cooling is old technology, and liquid is the future.
There is truth in that trend. AI and GPU workloads generate heat densities that traditional air cooling alone cannot handle. But the conversation has skipped over a critical detail. Even in facilities that deploy liquid cooling, air still moves through the racks. Hot air still needs to go somewhere. And blanking panels still matter.
This post explains why the “liquid cooling replaces everything” narrative is incomplete, and where airflow management remains essential even in the most advanced cooling architectures.
The Liquid Cooling Pitch (and What It Gets Right)
Liquid cooling removes heat at the source. Instead of pushing cold air through a server and hoping it absorbs enough heat before exiting, liquid cooling brings a coolant (water, a dielectric fluid, or a two-phase refrigerant) into direct thermal contact with the hottest components, either through cold plates or by submerging the hardware.
For high-density GPU clusters running AI training workloads, this approach makes sense. A single NVIDIA H100 GPU can draw up to 700 watts, so an eight-GPU server, with its CPUs, memory, and fans, pulls roughly 10 kW. A rack holding several of those servers generates heat loads that no amount of cold air from a raised floor or overhead duct can manage efficiently.
Hyperscale AI companies like Nebius have adopted liquid cooling precisely because GPU heat density has outgrown what air alone can remove. The physics is straightforward: water carries roughly 3,500 times more heat per unit volume than air. For extreme thermal loads, liquid wins.
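As a quick sanity check on that ratio, the sketch below multiplies textbook property values for water and air at roughly room temperature. The exact figures shift a little with temperature and pressure, so treat the result as an order-of-magnitude illustration rather than a design number.

```python
# Back-of-envelope check of the water-vs-air heat capacity claim.
# Property values are standard textbook figures near room temperature.

water_density = 997.0         # kg/m^3, liquid water near 25 C
water_specific_heat = 4186.0  # J/(kg*K)

air_density = 1.2             # kg/m^3, sea-level air near 20 C
air_specific_heat = 1005.0    # J/(kg*K)

# Volumetric heat capacity: energy absorbed per cubic metre per kelvin.
water_volumetric = water_density * water_specific_heat  # ~4.2e6 J/(m^3*K)
air_volumetric = air_density * air_specific_heat        # ~1.2e3 J/(m^3*K)

print(f"Water: {water_volumetric:.2e} J/(m^3*K)")
print(f"Air:   {air_volumetric:.2e} J/(m^3*K)")
print(f"Ratio: {water_volumetric / air_volumetric:,.0f}x")  # roughly 3,500x
```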
What the Liquid Cooling Pitch Misses
Here is where the conversation gets incomplete.
Most Servers Are Not GPUs
AI workloads are growing fast, but they represent a fraction of the total compute installed globally. The vast majority of data centre racks still run general-purpose servers, storage arrays, and networking equipment. These systems generate moderate heat loads (5 to 15 kW per rack) that air cooling handles effectively when the airflow is managed properly.
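To put those densities in context, here is a rough sketch of how much supply air a rack at that load actually needs, using the sensible-heat relation Q = m_dot * cp * delta_T. The 11 C (about 20 F) temperature rise across the equipment is an illustrative assumption, not a measured value.

```python
# Rough supply-air requirement for an air-cooled rack, from the
# sensible-heat equation Q = m_dot * cp * delta_T.

air_density = 1.2             # kg/m^3
air_specific_heat = 1005.0    # J/(kg*K)
delta_t = 11.0                # K rise across the servers (illustrative)

def required_airflow_cfm(heat_load_kw: float) -> float:
    """Volume of supply air needed to carry away the rack's heat load."""
    mass_flow = (heat_load_kw * 1000.0) / (air_specific_heat * delta_t)  # kg/s
    volume_flow = mass_flow / air_density                                # m^3/s
    return volume_flow * 2118.88                                         # m^3/s -> CFM

for kw in (5, 10, 15):
    print(f"{kw:>2} kW rack -> ~{required_airflow_cfm(kw):,.0f} CFM of supply air")
```

Those volumes (roughly 800 to 2,400 CFM per rack) are well within what a conventional raised-floor or in-row layout can deliver, provided the air actually goes through the servers rather than around them.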
Replacing functional air-cooled infrastructure with liquid cooling for these workloads does not make financial sense. The capital cost of liquid cooling (piping, CDUs, manifolds, leak detection) is significant. For racks that run well within the thermal envelope of a properly sealed and contained air-cooled aisle, the investment has no payback.
Hybrid Environments Are the Reality
Most data centres that adopt liquid cooling do not convert their entire facility. They deploy liquid cooling for specific high-density zones (GPU clusters, HPC racks) while the rest of the floor continues running on air.
This creates a hybrid environment. And hybrid environments have a problem that gets very little attention: the interaction between liquid-cooled zones and air-cooled zones.
Liquid-cooled racks still have fans. Those fans pull air from the front and push it out the back. The rack still generates airflow, even if the primary heat removal happens through the liquid loop. If the open rack units around those systems are not sealed with blanking panels, bypass airflow occurs. Hot exhaust air recirculates back to the cold aisle. It mixes with the supply air feeding adjacent air-cooled racks.
The result: liquid cooling works perfectly for the GPU rack, but the neighbouring racks overheat because the airflow environment around them has been disrupted.
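A minimal mixing model shows how quickly this happens. Assume the cold aisle supplies air at 22 C and the liquid-cooled rack's fans exhaust at 45 C (both illustrative numbers); the inlet temperature of a neighbouring rack is simply the weighted average of supply air and recirculated exhaust.

```python
# Minimal mixing model: neighbouring-rack inlet temperature when a fraction
# of hot exhaust recirculates into its supply air. Temperatures and
# recirculation fractions are illustrative assumptions.

supply_temp = 22.0    # C, cold-aisle supply air
exhaust_temp = 45.0   # C, exhaust leaving the liquid-cooled rack's fans

def inlet_temp(recirculation_fraction: float) -> float:
    """Weighted mix of supply air and recirculated exhaust at the rack inlet."""
    return (1 - recirculation_fraction) * supply_temp + recirculation_fraction * exhaust_temp

for fraction in (0.0, 0.10, 0.20, 0.30):
    print(f"{fraction:.0%} recirculation -> inlet at {inlet_temp(fraction):.1f} C")
```

Even 20 to 30 percent recirculation pushes the inlet near or past the 27 C upper end of the ASHRAE-recommended envelope. That is the hot spot blanking panels exist to prevent.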
Rear-Door Heat Exchangers Still Need Front-to-Back Airflow
Rear-door heat exchangers (RDHx) are one of the most popular liquid cooling retrofits. A water-cooled coil mounts on the back of the rack and intercepts the hot exhaust air before it enters the room.
This is a smart solution. But it depends entirely on proper front-to-back airflow through the rack. If open rack units allow cold air to bypass the servers and exit through the sides or top of the rack, the RDHx does not receive the full heat load. Its efficiency drops.
Blanking panels seal those open rack units and force all supply air through the active equipment. Without them, the rear-door heat exchanger works harder and removes less heat.
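The sketch below makes that concrete under one simplifying assumption: heat that escapes through open rack units back into the room has to be removed by room-level cooling instead of the rear-door coil. The 30 kW rack load and the leakage fractions are illustrative numbers, not measurements.

```python
# Sketch of rear-door heat exchanger (RDHx) capture vs. rack leakage.
# Assumption: heat that bypasses the rear door through open rack units
# becomes room load. Rack load and leakage fractions are illustrative.

rack_heat_load_kw = 30.0  # rack served by an RDHx

def heat_split(leakage_fraction: float):
    """Return (kW captured by the RDHx, kW spilled into the room)."""
    spilled = rack_heat_load_kw * leakage_fraction
    captured = rack_heat_load_kw - spilled
    return captured, spilled

for leak in (0.0, 0.10, 0.25):
    captured, spilled = heat_split(leak)
    print(f"{leak:.0%} leakage: RDHx removes {captured:.1f} kW, "
          f"room cooling absorbs {spilled:.1f} kW")
```

Every percentage point of leakage is load the rear door was supposed to handle, handed back to the CRAC units instead.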
Immersion Cooling Has Its Own Boundaries
Full immersion cooling (submerging servers in dielectric fluid) does eliminate the need for blanking panels inside the immersion tank. That is true. But immersion tanks exist within a larger facility that still has air-cooled infrastructure around them.
The tanks themselves generate heat that must be rejected. The pumps, CDUs, and heat rejection equipment associated with immersion systems still operate in the same room as air-cooled racks. The facility-level airflow dynamics still matter. The room-level cooling architecture still needs to work.
No data centre today is 100% immersion-cooled. And the infrastructure that supports the immersion systems (networking, storage, management servers) typically runs on air.
The AI Factor: More GPUs Means More Airflow Complexity
As AI workloads scale, facilities are adding GPU racks alongside existing compute. This increases the total heat load in the room, which puts more pressure on the cooling system, which makes airflow management more important, not less.
Consider a facility that adds a row of liquid-cooled GPU racks to an existing air-cooled floor. The GPU racks draw significant power. The liquid cooling system rejects that heat, but some of it dissipates into the room. The ambient temperature rises. The CRAC units work harder to compensate. Supply air temperatures climb.
In this scenario, every open rack unit in the air-cooled aisles becomes a bigger problem than it was before. The margin for error shrinks. Bypass airflow that was tolerable at lower ambient temperatures becomes a hot spot trigger when the room is already running warm.
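As a rough illustration of that effect, the sketch below assumes a row of ten 80 kW liquid-cooled racks in which about 5 percent of the heat escapes the liquid loop and ends up in the room air. All three numbers are assumptions chosen for the example; real values depend on the cooling design.

```python
# Illustrative estimate of the stray air-side load a liquid-cooled GPU row
# adds to an existing room. Rack count, rack power, and leakage fraction
# are assumptions for the example.

gpu_racks = 10
power_per_rack_kw = 80.0   # high-density GPU rack
air_side_leakage = 0.05    # ~5% of heat not captured by the liquid loop

stray_heat_kw = gpu_racks * power_per_rack_kw * air_side_leakage
equivalent_air_racks = stray_heat_kw / 10.0  # vs. a typical 10 kW air-cooled rack

print(f"Stray heat into the room: {stray_heat_kw:.0f} kW")
print(f"Roughly equal to adding {equivalent_air_racks:.0f} more 10 kW air-cooled racks")
```

Forty kilowatts of stray heat is about the same air-side burden as four additional 10 kW racks, landing on a cooling system that was never sized for it.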
Where Blanking Panels Remain Non-Negotiable
To summarize the environments where blanking panels remain essential, even when liquid cooling is present:
Hybrid facilities with both air-cooled and liquid-cooled zones. The air-cooled racks need proper sealing to prevent thermal interference from adjacent liquid-cooled equipment.
Rear-door heat exchanger deployments. The RDHx depends on controlled front-to-back airflow. Unsealed rack units undermine its performance.
Transition periods. Most facilities adopting liquid cooling do so gradually. During the transition, the facility runs a mix of cooling types. Airflow management during this phase is more important than in a single-modality environment.
Supporting infrastructure around immersion systems. The networking, storage, and management equipment that supports immersion-cooled compute still runs on air.
Any rack with fans. If the server has fans, it creates airflow. If it creates airflow, open rack units create bypass paths. The cooling modality upstream (air, liquid, or hybrid) does not change this physics.
The Bottom Line
Liquid cooling is a necessary technology for high-density workloads. It is not a replacement for airflow management. The two work together.
The facilities that will perform best over the next decade are the ones that deploy liquid cooling where it is needed and maintain disciplined airflow management everywhere else. That means blanking panels in every rack with open units, containment where it makes sense, and monitoring to verify that the thermal environment is behaving as designed.
The worst outcome is assuming that liquid cooling eliminates the need to think about airflow. It does not. It changes the airflow equation, and in many cases, it makes the equation more complex.
Contact EziBlank to discuss airflow management for hybrid and AI-ready facilities.




