I recently stumbled across a manufacturer of reasonably priced subminiature fluid pumps, and it got me thinking about ultra-low-flow compact cooling loops for SFF.
Current SFF systems are generally limited to pump-on-block AIO coolers (fine for a single CPU, no good if you want to WC a CPU + GPU + other components), or face great difficulty shoehorning a dedicated pump into the case. Watercooling as much of the system as possible allows very compact component arrangements, since all component cooling is centralised into one radiator and hot components can be placed very close together without clearance for airflow.
But that big pump is a big problem, and I think current pumps (the D5, DDC, etc.) are vastly overspecced for the job. Coolant temperature can also be raised above the 'norm' of chasing ambient temperatures.
One experimental verification of this is IBM's Aquasar test system: a cluster of BladeCenter systems (33x QS22s and 9x HS22s) that puts out just under 10kW of heat at full load (the QS22 is a ~250W blade; the HS22 can take up to 2x 130W CPUs, but I'm assuming standard 95W parts were used). And it does this with 30L/min of water flow at a 60°C loop water temperature (75°C CPU core temperature).
Put another way, that's about 3mL/min per watt of heat. For a pretty high-end ITX system (130W Haswell-E CPU, 250W 980 Ti, plus another 20W thrown in for chipset and RAM, for 400W total) that's a mere 1.2L/min of flow, easily doable with the linked micropumps.
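As a sanity check on those figures, the coolant temperature rise follows from Q = ṁ·cp·ΔT (heat carried equals mass flow times specific heat times temperature rise). A quick sketch, using standard properties of water; the inputs are the illustrative numbers above, not measured data:

```python
# Sanity check: coolant temperature rise across the loop from
# Q = m_dot * c_p * dT. Inputs are the illustrative figures from
# the text (Aquasar ~10 kW at 30 L/min; a 400 W ITX build at the
# same 3 mL/min per watt ratio).

C_P_WATER = 4186.0  # J/(kg*K), specific heat of water
RHO_WATER = 1.0     # kg/L, density of water (close enough here)

def delta_t(power_w, flow_l_per_min):
    """Coolant temperature rise (K) across the loop for a given heat load."""
    mass_flow_kg_s = flow_l_per_min * RHO_WATER / 60.0
    return power_w / (mass_flow_kg_s * C_P_WATER)

print(delta_t(10_000, 30))  # Aquasar: ~4.8 K rise across the loop
print(delta_t(400, 1.2))    # 400 W ITX at 1.2 L/min: same ~4.8 K, by construction
```

A ~5°C rise across the loop is tiny compared to the 60°C loop / 75°C die figures, which is why such low flow rates are workable.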
Now there's loop fluid temperature. 75°C is a completely comfortable operating temperature for any electronic component, but you may want to run cooler (e.g. if you're overclocking and running into power dissipation issues). The achievable loop temperature is almost entirely determined by radiator surface area: a higher loop temperature allows a proportionally smaller radiator, because the greater temperature differential to ambient makes heat transfer more effective.
The required radiator size would need to be experimentally verified, but I suspect a 120x120mm radiator would be surprisingly effective. With the volume freed up by shuffling components together, more exotic radiator geometries could also be used (e.g. a narrow, deep radiator).
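The inverse relationship between loop-to-ambient differential and required radiator area can be sketched by treating heat rejection as proportional to area times temperature differential (Q = h·A·ΔT with a fixed effective coefficient). The coefficient below is a made-up ballpark, so only the relative scaling is meaningful:

```python
# Rough radiator sizing: heat rejected modelled as
#   Q = h * A * (T_loop - T_ambient)
# with a fixed effective air-side coefficient h. H_EFF is an assumed
# ballpark value, not a measured one -- the point is the scaling of
# required area with temperature differential, not absolute numbers.

H_EFF = 40.0  # W/(m^2*K), assumed effective heat transfer coefficient

def required_area_m2(power_w, t_loop_c, t_ambient_c):
    """Radiator area needed to reject power_w at the given temperatures."""
    dt = t_loop_c - t_ambient_c
    if dt <= 0:
        raise ValueError("loop must run hotter than ambient")
    return power_w / (H_EFF * dt)

# 400 W load, 25 C ambient:
print(required_area_m2(400, 60, 25))  # hot 60 C loop: smaller radiator
print(required_area_m2(400, 35, 25))  # ambient-chasing 35 C loop: ~3.5x larger
```

Running the loop at 60°C instead of 35°C gives a 35K differential instead of 10K, cutting the required radiator area by a factor of 3.5, which is what makes a single small radiator plausible for a 400W system.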
Finally, you come to the waterblocks. Normal high-flow designs would be unsuitable, so we can again look to IBM and ETH Zurich's research. At very low flow rates, parallel microchannel blocks drop to about the same effectiveness as branching microchannel designs, and the branching designs have the added benefit of greatly reducing the impact of CPU hotspots thanks to even coolant distribution. Aquasar uses these branching-channel blocks. They aren't available on the open market, but the design is nicely layered, so it could be produced by etching/stamping copper sheets and stacking them (which also allows the very fine base microchannels to be formed). This would be the trickiest part of validating this setup for ITX systems. However, very compact builds would require bespoke heatsinks anyway, so for larger production runs this is more a constraint on waterblock construction methods than a blocker.
One wrinkle is the power supply. No off-the-shelf compact watercooled PSUs exist, so this would mean either modifying an SFX/TFX/FlexATX PSU for watercooling, or housing the PSU in its own compartment with its own airflow. The latter wastes volume, but may be necessary given the safety issues of water in proximity to mains voltage.
tl;dr ITX systems could be effectively cooled with very low flow rates and very small pumps, using low-flow waterblock designs. Compact whole-system watercooling could allow reductions in total system volume by centralising system cooling to a single radiator.