Cooling Ultra-low flow watercooling

EdZ

Virtual Realist
Original poster
Gold Supporter
May 11, 2015
1,578
2,107
I recently stumbled across a manufacturer of reasonably priced subminiature fluid pumps, and it got me thinking about ultra-low-flow compact cooling loops for SFF.
Current SFF systems are generally limited to pump-on-block AIO coolers (fine for a single CPU, no good if you want to watercool a CPU + GPU + other stuff), or must go to great lengths to shoehorn a dedicated pump into the case. Watercooling as much as possible allows very compact component arrangements, by centralising all component aircooling into one radiator and letting hot components sit very close together without clearance for airflow.
But that big pump is a big problem, and I think current pumps (the D5, DDC, etc.) are vastly overspecced. Coolant temperature can also be raised above the 'norm' of chasing ambient temperatures.

One experimental verification of this is IBM's Aquasar test system. This is a cluster of BladeCenter systems (33x QS22s and 9x HS22s) that pumps out just under 10kW of heat at full load (the QS22 is a 250W blade; the HS22 can take up to 2x 130W CPUs, but I'm assuming standard 95W parts were used). And it does this with 30L/min of water flow at a 60°C loop water temperature (75°C CPU core temperature).
Put another way, that's about 3mL/min per Watt of heat. For a pretty high-end ITX system (130W Haswell-E, 250W 980 Ti, plus another 20W thrown in for chipset and RAM, for 400W total) that's a mere 1.2L/min of flow, easily doable with the linked micropumps.
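For anyone who wants to play with the numbers, here's that scaling as a quick Python back-of-envelope (the ITX wattages are the hypothetical build above, not measurements):

```python
# Back-of-envelope flow-rate scaling from the Aquasar figures above.
aquasar_heat_w = 10_000        # total heat load, Watts
aquasar_flow_ml_min = 30_000   # 30 L/min expressed in mL/min

ml_per_min_per_watt = aquasar_flow_ml_min / aquasar_heat_w  # = 3.0

# Hypothetical high-end ITX build: 130 W CPU + 250 W GPU + 20 W misc
itx_heat_w = 130 + 250 + 20
itx_flow_l_min = ml_per_min_per_watt * itx_heat_w / 1000

print(f"{ml_per_min_per_watt:.1f} mL/min per Watt")
print(f"ITX flow needed: {itx_flow_l_min:.1f} L/min")  # 1.2 L/min
```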

Now there's loop fluid temperature. 75°C is a completely comfortable operating temperature for any electronic component, but if you want lower component temperatures (e.g. if you're overclocking and running into power dissipation issues) you can run the loop cooler. The achievable loop temperature is almost entirely determined by radiator surface area, with a higher loop temperature allowing a proportionally smaller surface area (the greater temperature differential to ambient makes heat transfer more efficient).
The required radiator size would need to be experimentally verified, but I suspect that a 120x120 would be surprisingly effective. With the volume freed up by shuffling components together, more exotic radiator geometries could also be used (e.g. a narrow, deep radiator).
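As a very rough sketch of that area/temperature trade, assuming rejected heat scales simply with area times the coolant-to-air delta (a big simplification that ignores airflow, fin efficiency, etc., and the reference radiator is made up):

```python
# Rough radiator-sizing trade-off: for a fixed airflow, rejected heat
# scales roughly with area * (coolant - ambient) temperature delta.
# Illustration only -- ignores fin efficiency, airflow, radiation, etc.
def relative_area(heat_w, coolant_c, ambient_c, ref_heat_w, ref_delta_c):
    """Radiator area needed, relative to a reference radiator that
    rejects ref_heat_w at a ref_delta_c coolant-to-air delta."""
    delta = coolant_c - ambient_c
    return (heat_w / ref_heat_w) * (ref_delta_c / delta)

# Hypothetical reference: a radiator that sheds 400 W at a 20 C delta.
# Running the loop at 60 C in a 25 C room (35 C delta) needs less area:
print(relative_area(400, 60, 25, ref_heat_w=400, ref_delta_c=20))  # ~0.57
```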

Finally, you come to the waterblocks. Normal high-flow designs would be unsuitable, so we can again look to IBM and ETH Zurich's research. At very low flow rates, parallel microchannel blocks drop to roughly the same effectiveness as branching microchannel designs, and the branching designs have the added benefit of greatly reducing the impact of CPU hotspots thanks to even distribution of coolant. Aquasar uses these branching channel blocks. They aren't available on the open market, but the design is nicely layered, so it could be created by etching/stamping copper sheets and stacking them (which also allows the very fine base microchannels to be formed). This would be the trickiest part in validating this setup for ITX systems. However, very compact setups would require bespoke heatsinks anyway, so for larger production runs this is more of a limit on waterblock construction methods.

One wrinkle is the power supply. No off-the-shelf compact watercooled PSUs exist, so this would require either modifying an SFX/TFX/FlexATX PSU for watercooling, or putting the PSU in its own compartment with its own airflow. The latter wastes volume, but may be necessary due to the safety issues of having water and mains voltages in close proximity.


tl;dr ITX systems could be effectively cooled with very low flow rates and very small pumps, using low-flow waterblock designs. Compact whole-system watercooling could allow reductions in total system volume by centralising system cooling to a single radiator.
 

QinX

Master of Cramming
kees
Mar 2, 2015
541
371
I would like to respond with an equally long reply, but I'm a bit busy, so you'll have to forgive me ;)

Basically, what you describe is what I've already done to a certain extent with H2O-Micro.
Yes, it does work, but the extra system design you'd need to do to actually get a smaller system doesn't outweigh the costs. One issue I ran into (which can be resolved) is that a centralised cooling system can cause problems if not everything is loaded equally; this can be solved by using water-temperature-based fan control instead of CPU-temperature-based fan control.
 
  • Like
Reactions: EdZ

EdZ

Virtual Realist
Original poster
Gold Supporter
May 11, 2015
1,578
2,107
I was definitely thinking of how the H2O-Micro could be shrunk further!
 

Phuncz

Lord of the Boards
Editorial Staff
Moderator
Gold Supporter
May 9, 2015
5,178
4,516
Nice write-up and thought-provoking. One issue someone already mentioned in another thread is that not all tubing can withstand those high temperatures, so make sure it's specced accordingly.

Most server environments are air-conditioned and often set to ambient temperatures well below 21°C, sometimes even below 16°C. Many operators are only now waking up to the idea that 21-25°C is just as good but saves a lot of power, and I can bet technicians love it too.
 

QinX

Master of Cramming
kees
Mar 2, 2015
541
371
I was definitely thinking of how the H2O-Micro could be shrunk further!

Shrinking H2O-Micro any further would require custom PCBs and even more custom waterloop parts. In the new design I had a slim dual 92mm radiator, which was the minimum size needed to keep fan noise down; of the case's 3 liters, I think maybe 0.5 liters was air.

The biggest thermal issues are the motherboard VRMs and the CPU itself. I've found that somewhere between 40°C and 50°C is the water temperature limit; any higher and you run into CPU throttling issues.

One of the main benefits of Thin ITX boards, besides the lower profile, is that they have a fairly standardised layout. I wish GPUs would do that more; it would make waterblock design so much more viable for this kind of system.

Edit:
Here is some of my test data with my dual slim 80mm radiator. It performs at between 80% and 90% of what the Hardwarelabs GTX M160 manages, but at a much slimmer 28.6mm versus the GTX's 54mm!

You can see that fairly high fan speeds are required to run a given heat load through it; this is with a 20°C and a 30°C delta to air.

So the bottom-left results show that you can run a 4790K + GTX 970/R9 Nano at full load with 2000 to 2500 RPM fan speeds. That is loud, I can confirm ;)

Also, yes my testing chamber was chilly to say the least.
 
Last edited:
  • Like
Reactions: EdZ

EdZ

Virtual Realist
Original poster
Gold Supporter
May 11, 2015
1,578
2,107
The biggest thermal issues are the motherboard VRMs and the CPU itself. I've found that somewhere between 40°C and 50°C is the water temperature limit; any higher and you run into CPU throttling issues.
That's the main reason to use the more complex branching microchannel design rather than a more traditional parallel microchannel or bed-of-pins waterblock design: by ensuring inlet water is distributed evenly rather than flowing across the die, the effects hotspots have on overall die temperature can be greatly reduced.
One of the main benefits to Thin ITX boards besides being more low profile is that they have a fairly standardized layout, I wish GPUs would do that more, that would make waterblock design so much more viable for this kind of system.
With any luck, the move to HBM from both GPU vendors will greatly simplify board design down to a single core block plus very nearby VRMs.
One wrinkle may be GPU energy efficiency, though: the Fury X gains its efficiency from a low chip operating temperature (the purpose of its overspecced cooler) reducing internal resistance. This may mean that high loop temperatures work better with modified air-cooled cards than with cards designed around watercooling.


That data is very useful. It confirms the ~5°C coolant temperature drop through the radiator seen in the Aquasar system is achievable with a compact, (nearly) COTS radiator. Aquasar used completely passive cooling (the hot water fed the building's heating through passive radiators), so that's a low bar to clear with a forced-air radiator, but experimental confirmation is always good. Hopefully the increased coolant temperature will make a (small) improvement to radiator thermal efficiency, allowing lower fan speeds, but sacrificing silence under load for a reduction in system volume is acceptable.
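Incidentally, that ~5°C drop falls straight out of Q = ṁ·c·ΔT, and with a fixed 3mL/min-per-Watt flow ratio it's the same at any load. A quick sketch (using room-temperature water properties, close enough for this purpose):

```python
# Sanity check on the ~5 degC radiator drop using Q = m_dot * c_p * dT.
C_P = 4186.0   # specific heat of water, J/(kg*K)
RHO = 1000.0   # density of water, kg/m^3 (approximate near 60 C)

def coolant_delta_t(heat_w, flow_l_min):
    """Steady-state coolant temperature drop across the radiator."""
    m_dot = flow_l_min / 60 / 1000 * RHO   # L/min -> kg/s
    return heat_w / (m_dot * C_P)

print(coolant_delta_t(10_000, 30))   # Aquasar figures: ~4.8 C
print(coolant_delta_t(400, 1.2))     # hypothetical ITX loop: also ~4.8 C
```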
 

QinX

Master of Cramming
kees
Mar 2, 2015
541
371
That's the main reason to use the more complex branching microchannel design rather than a more traditional parallel microchannel or bed-of-pins waterblock design: by ensuring inlet water is distributed evenly rather than flowing across the die, the effects hotspots have on overall die temperature can be greatly reduced.
True, but I've also done some research into the Aquasar system: it requires an extremely complex waterblock design. They went as far as etching the waterblock the way TSVs (through-silicon vias) are made.
Another thing I noticed: cooling a 4C 4790K with ~100W power consumption is harder than a 160W 18C Xeon, because the Xeon's heat is more spread out, so each core has a lower heat output.
4C @ 25W each
18C @ 9W each

With any luck, the move to HBM from both GPU vendors will greatly simplify board design down to a single core block plus very nearby VRMs.
One wrinkle may be GPU energy efficiency, though: the Fury X gains its efficiency from a low chip operating temperature (the purpose of its overspecced cooler) reducing internal resistance. This may mean that high loop temperatures work better with modified air-cooled cards than with cards designed around watercooling.
Cooling GPUs in terms of temperature management is easy; they respond extremely well. Aqua Computer mentions only an 8°C rise in GPU temperature over water temperature, so with 60°C water the GPU would be only 68°C.

It's all the "slightly" different PCBs and component layouts that are the main issue; as said, motherboards have the same problem, with only Thin ITX being standardised.

http://techreport.com/news/29023/aqua-computer-water-block-makes-r9-nano-a-chilly-one-slot-card
German accessory manufacturer Aqua Computer is showing off its latest graphics card water block, which can turn a Radeon R9 Nano into a single-slot solution. The company claims the water block can chill the Fiji GPU down to 35°C while running Furmark (with the coolant at 27°C), which isn't really impressive, merely typical German efficiency.

That data is very useful. It confirms the ~5°C coolant temperature drop through the radiator seen in the Aquasar system is achievable with a compact, (nearly) COTS radiator. Aquasar used completely passive cooling (the hot water fed the building's heating through passive radiators), so that's a low bar to clear with a forced-air radiator, but experimental confirmation is always good. Hopefully the increased coolant temperature will make a (small) improvement to radiator thermal efficiency, allowing lower fan speeds, but sacrificing silence under load for a reduction in system volume is acceptable.

I'd want to experience that trade-off first hand before making it. The best thing you can do is use a low-TDP part for the CPU: a Core i7-4785T has similar performance to a Core i5-4670, but the core temperatures stay a lot lower, giving you more thermal headroom and, in return, a quieter or smaller system.
 
  • Like
Reactions: Phuncz and EdZ

EdZ

Virtual Realist
Original poster
Gold Supporter
May 11, 2015
1,578
2,107
True, but I've also done some research into the Aquasar system: it requires an extremely complex waterblock design. They went as far as etching the waterblock the way TSVs (through-silicon vias) are made.
It's definitely more complex than a monolithic milled waterblock, but not that complicated! The branching channel paths can be cut into copper shims that are stacked inside the waterblock (think of a copper 'bucket'), with the bottom of the bucket either thin and flat, or etched with the lowermost microchannel array. The stack is compressed to keep the shims in good contact, then a cap with threaded ports and gaskets is applied and sealed on for hooking it up.
If you really wanted to overachieve you could etch the final microchannel pattern into the IHS itself and seal the branching distribution array to that, but that's a bit extreme!

For anyone else interested, ETH's publications page has lots of information on the research that led up to the Aquasar design.
Another thing I noticed: cooling a 4C 4790K with ~100W power consumption is harder than a 160W 18C Xeon, because the Xeon's heat is more spread out, so each core has a lower heat output.
4C @ 25W each
18C @ 9W each
True, but the hotspot-minimising branching channel design should be even more effective (compared to parallel flow) with more concentrated hotspots. If thermal maps of specific cores were available, the channel design could be tailored to a specific core layout and workload, but that's not really suitable for a general-use home PC.

For GPUs using HBM on an interposer, the GPU and RAM both sit under a flat IHS, so a universal block can cool any GPU design. The remaining heat-producing components on the board are the MOSFETs, which are more temperature tolerant (~150°C to 175°C before failure, with some tolerating temperatures hot enough to melt their own solder!), so even a simple copper pipe plumbed into the loop (or heatpipes routed to the primary waterblock) should be sufficient despite their high thermal output. The more widely spaced layout of a RAM-on-PCB design makes this less practical, instead requiring a full-coverage block.
It's not a truly standardised design, but PCIe power on the far edge of the card and display connectors on the near edge mean that power-VRM-GPU-connectors is a fairly obvious component arrangement.
 
  • Like
Reactions: Phuncz

EdZ

Virtual Realist
Original poster
Gold Supporter
May 11, 2015
1,578
2,107
Remember this article? http://www.tomshardware.com/news/aquacomputer-r9-fury-x-waterblock,29569.html

The HBM stacks were noted as slightly taller than the core itself. I'm not sure if this would throw off certain universal coolers.
Probably not. The heatsink base on both the R9 Nano and R9 Fury X is flat. If future HBM2 stacks end up enough taller than current HBM stacks that the die height difference does become an issue, I suspect GPU manufacturers are more likely to use an IHS with internal shims (or stamped or milled height differences) than to add very fine (sub-mm) machining steps to the heatsink or waterblock base.
Images of Aquacomputer's R9 Fury and Nano waterblock also appear to show a flat contact surface, so I think their claims of "precision machining" may be mere marketing fluff.