Log Thunderbolt Mini #1 - 5liter server with 48 cores

rokabeka

network packet manipulator
Original poster
Jul 9, 2016
248
268
Hello,

please let me show you my latest build: Thunderbolt Mini #1.
List of components:
Mobo: AsrockRack ROMED4ID-2T
CPU: AMD EPYC 7642 @190W
Heatsink: Dynatron T17
DRAM: 4x Micron 16G 2666MHz
Case: LZmod A24-V5
PSU: HDPlex 250W GaN
NIC: Mellanox ConnectX-5 (2x100G)
SSD: Samsung 970 Evo Plus

As with my previous builds, I mostly need a strong CPU, enough memory and beefy network connections. I typically log in remotely and frequently use the server in the lab, where noise level is not really important. So I did not even try to make this little beast silent. There is a #2 under construction that fits the ID-Cooling SE-207-TRX well, just sticking out of the case a teeny tiny bit :cool:
Not sure whether it could be a worthy opponent in the performance-per-liter threads :) it should not be too bad at regular CPU compute.

This motherboard is a 'deep mini-ITX' one, so it is longer than the usual 170mm. But trading away the space a Flex ATX PSU would take sounded like a good idea.
As the HDPlex 250W is a perfect fit size-wise, I did not go with the 500W version; instead I chose to reduce the power limit (cTDP) of the EPYC from the default 225W to 190W, so the fully loaded box draws ~242W from the wall. On the other hand, with the 500W PSU the cTDP could be raised to the 240W maximum, which would mean a little boost in clock frequency with all cores loaded.
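To see why the 190W cTDP keeps the 250W PSU comfortable, here is a back-of-the-envelope sketch. The ~52W platform overhead is inferred from the quoted wall draw, and the PSU efficiency figure is an assumption, not a measurement:

```python
# Rough power budget for the 250W HDPlex build.
# Numbers from the post: cTDP 190W, measured full-load wall draw ~242W.

WALL_DRAW_W = 242          # full-load draw at the wall (from the post)
CTDP_W = 190               # EPYC 7642 configured TDP
PSU_RATING_W = 250         # HDPlex 250W GaN
ASSUMED_EFFICIENCY = 0.93  # plausible figure for a GaN PSU at high load (assumption)

# Everything that is not the CPU package: NIC, DRAM, SSD, fans, conversion loss.
platform_overhead_w = WALL_DRAW_W - CTDP_W

# Approximate DC-side load the PSU actually has to deliver.
dc_load_w = WALL_DRAW_W * ASSUMED_EFFICIENCY

print(platform_overhead_w)                # 52
print(round(dc_load_w, 1))                # 225.1
print(dc_load_w < PSU_RATING_W)           # True, but with little headroom
```

This also shows why the default 225W cTDP would be uncomfortable here: add the same ~52W of overhead and the wall draw lands right at the PSU's rating.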

Still a work in progress: cable management (yeah, sure ;)) and drilling holes for the HDPlex on the front panel will come, but I needed the box up and running, so it is already in use (I applied a thin layer of insulation to the bottom of the PSU so as not to accidentally short-circuit the motherboard). Between the front panel and the HDPlex there will be a 0.5mm thermal pad, mostly to avoid scratching the case; I hope the extra cooling surface will help the HDPlex live long enough without the fan mod at constant loads above 200W. The I/O plate is not missing but was sacrificed to thermal management.
And I have not yet decided whether I want to stand the box upright on those feet or lay it down using them. Due to the bolts of the feet I cannot use the two regular fan positions on the bottom and need to go with the single one in the middle instead. But it is pretty lucky that, by using a single-slot PCIe card, I have the luxury of utilizing regular 80x25mm fans, not only the slim ones.
Maybe it will just be a vertical stand, so I can keep two 80x25 fans on the bottom. And maybe a handle on the top. I love handles.

 

aromachi

Cable-Tie Ninja
Dec 18, 2019
150
137
Likin this. I'd go for lower profile feet, though. Maybe smaller rubber feet like you see on laptops. link They'd still keep the bottom off the desk and let some air through. I need a pc this size for my TV upstairs. Pretty cool!
 
  • Like
Reactions: rokabeka

rokabeka

network packet manipulator
Original poster
Jul 9, 2016
248
268
Likin this. I'd go for lower profile feet, though. Maybe smaller rubber feet like you see on laptops. link They'd still keep the bottom off the desk and let some air through. I need a pc this size for my TV upstairs. Pretty cool!
thank you. I am also hesitant to keep those feet, as the box with them looks like the love child of a Jawa sandcrawler and a Minion :D
I have lower rubber feet here at home; I just wanted to try these ones.

for a TV you can go even smaller than this, especially if an APU is enough.
 

rokabeka

network packet manipulator
Original poster
Jul 9, 2016
248
268
So essentially the performance of nearly a 13900k in a rather small form factor. Works :)

cu, w0lf.
it really depends on the workload. for a single process - or up to 16 threads, I guess - the 13900K is the king. but for my purposes, 48 pretty performant cores are much better than the 8 beasts the 13900K has. also, my workload significantly benefits from the huge L3 cache.
 

hrh_ginsterbusch

Master of Cramming
Nov 18, 2021
441
169
wp-devil.com
it really depends on the workload. for a single process - or up to 16 threads, I guess - the 13900K is the king. but for my purposes, 48 pretty performant cores are much better than the 8 beasts the 13900K has. also, my workload significantly benefits from the huge L3 cache.
Isn't the 13900K a 16-core / 32-thread system? Albeit it probably works differently under Windows (which I don't use), so YMMV.

Cost-wise, I'd probably go with the 7742 instead, because at least here in Germany it's about half the price of the 7642, while offering 16 more CPU cores (64 cores / 128 threads).

On that topic: What IS its purpose?

cu, w0lf.
 

hrh_ginsterbusch

Master of Cramming
Nov 18, 2021
441
169
wp-devil.com
Semi-OT: While I was looking at the hardware specs, I tried putting together a list on Geizhals called "Insanity Check", because what I need most for work is CPU power, not GPU power.

Turns out: water cooling options look quite nice, including the Alphacool 80mm quad, which I had already pondered using in the S400 (10L).

But what looks the most enticing to me is the Alphacool Eiswand 360, i.e. an external radiator / AIO kit. There was a post by someone doing a mixed version of this on, I think, r/sffpc: having both a built-in rad and the option to attach an external rad via quick disconnects (could've been a Mo-Ra, but also just a 420 with a res/pump combo, I don't remember exactly), which feels like a very practical approach.

Sorry for the infodump, I just got a bit excited :D

cu, w0lf.
 
  • Like
Reactions: rokabeka

rokabeka

network packet manipulator
Original poster
Jul 9, 2016
248
268
Isn't the 13900K a 16-core / 32-thread system? Albeit it probably works differently under Windows (which I don't use), so YMMV.
to me the 13900K looks like an 8P+16E CPU where only the 8 P cores have HT; that is why it has a total of 32 threads :)
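the thread count works out like this (a quick sanity check, assuming 2-way Hyper-Threading on the P cores only, which matches Intel's published spec for the 13900K):

```python
# Raptor Lake i9-13900K topology: hybrid P/E core design.
P_CORES = 8    # performance cores, 2-way Hyper-Threading
E_CORES = 16   # efficiency cores, no HT

threads = P_CORES * 2 + E_CORES
print(threads)  # 32
```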

this is a multi-purpose build: profiling and optimizing a high core-count firewall on it, both in native and virtualized mode, or booting a regular Linux and then using it as a traffic generator (t-rex, flent, iperf, wrk, etc.). that is why the 2x100G NIC is needed.
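for the traffic-generator role, a typical iperf3 smoke test on one of the 100G ports might look like this (a minimal sketch: the peer address 10.0.0.1 and the NUMA node number are made-up placeholders; the flags shown are standard iperf3/numactl options):

```shell
# On the device under test (the box receiving traffic), start a server:
iperf3 -s

# On the Thunderbolt Mini, drive the link with several parallel TCP
# streams for 30 seconds; a single stream usually cannot saturate 100GbE.
iperf3 -c 10.0.0.1 -P 8 -t 30

# Optionally pin the client to the NUMA node closest to the NIC
# (node 0 here is an assumption; check the real topology with lstopo
# or "cat /sys/class/net/<iface>/device/numa_node"):
numactl --cpunodebind=0 --membind=0 iperf3 -c 10.0.0.1 -P 8 -t 30
```

the NUMA pinning matters on a big EPYC: crossing the I/O die from a far CCD costs measurable throughput at 100G line rates.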

water cooling is nice and exciting indeed, I just would absolutely be freaked out putting a home-built water-cooled system on top of a server rack in a lab :D but please do not worry about sharing your excitement about your plans here. I guess you are not referring to Cooler Master's S400 but rather the KXRORS S400 or its clones. I love the idea of placing a large radiator on the GPU side. EDIT: your AIO options with an EPYC CPU are pretty limited; for desktop CPUs it is much easier to find something to your taste.
for my purposes I am trying to avoid a PCIe riser this time, as bandwidth is crucial.

the Thunderbolt Mini #2 is a 64-core EPYC :cool: as it uses a Dell 330W brick and an HDPlex 400W DC-ATX, there is no need to hold back the horses; it can run @240W