NYTN: Not Your Typical NAS, or 20TB of SSDs in an SFF case.

TLN

Finally decided to rebuild my personal storage and configure it the proper way. I planned this project a long time ago and have acquired a few components over the past year or two. It might sound a bit ambitious, but I hope it will work out the way I planned.

Plan: NAS, HTPC, and application server. Reliable and future-proof. All in one: a bit faster and a bit more complicated than your typical NAS or typical HTPC, in a sleek case.

1. NAS: I've been backing up my data recently and calculated that I have 6-8TB of data. Half of it is important, the other half less so. It can easily grow to 10TB or more if I upgrade my camera to something better, but 10TB sounds like a good start.
2. Application server: Some NAS boxes can run Docker containers, but fall short if you want to run anything beyond a Docker container: low-end CPUs, not enough memory, and limited OS support. I need to support all major operating systems and be able to run them all at once.
3. HTPC: While you can do lots of stuff with a Chromecast or Nvidia Shield, sometimes you wish you had a normal PC for certain things: casual games with Xbox controllers, or specific applications.
4. Future-proof: a 1Gbps network is so 2019. I need something faster than that in 2020.
5. Reliable: the box will be running 24/7, so enterprise equipment over consumer stuff. You'll see a few cool parts below.

Hypervisor: In order to run multiple operating systems we'll need a hypervisor. I'm going with VMware ESXi: I know it a bit better and have been using it for years. It's backed by a big company and well documented. It lets you run multiple OSes at the same time, and I'm looking at 5-10 virtual machines.
Network: I'm looking at a 10Gbps network. It's not very common yet, but it's the way to go. My home switch already supports 10Gbps.
Data: If we want to run the NAS at 10Gbps we need a bunch of hard drives in a RAID array, or some SSDs in RAID. I decided to go crazy and ended up with a Seagate Nytro 3330 in 15.36TB. It's designed for data centers and rated for 1 DWPD (drive write per day). With a 5-year warranty that gives us roughly 15TB x 365 days x 5 years = 27PB of endurance. A consumer drive (Samsung 950 PRO) gets you 400TB of endurance, for example. The drive is pretty fast, rated at 2100/1200 MB/s sequential read/write, faster than a 10Gbps network. Two cons: unlike SATA or NVMe drives, SAS requires an additional controller; and the price.
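Just to sanity-check the endurance math above (capacity and TBW numbers are the ones quoted in this post):

```python
# Rough endurance estimate for a 1 DWPD drive over its 5-year warranty.
capacity_tb = 15.36   # Nytro 3330 capacity
dwpd = 1              # rated drive writes per day
years = 5

endurance_pb = capacity_tb * dwpd * 365 * years / 1000
print(f"~{endurance_pb:.0f} PB written over the warranty period")               # ~28 PB

# Compare against a consumer drive's TBW rating (950 PRO: 400 TB).
print(f"~{endurance_pb * 1000 / 400:.0f}x the endurance of a 400 TB TBW drive")  # ~70x
```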
VM storage: To store the virtual machines I'm going with an enterprise NVMe drive: Intel P3600 series or Samsung PM1725 in the so-called AIC (add-in card) format. The motherboard I'm going with doesn't support M.2 NVMe drives. The slower Intel P3600 is rated at 2700/2100 MB/s, while the Samsung does around 6000/4000 MB/s. Normally these drives are used for high-performance databases. The Samsung is rated for 55PB of data written.
Motherboard: There are some motherboards with integrated 10Gbps networking, and also a few boards with integrated SAS controllers. My board comes with both: the ASRock Rack EPC612D4U-2T8R. It's based on the C612 chipset and designed to work with Intel Xeon E5 v3 and E5 v4 processors. There's no M.2 port on the board, but with three PCIe slots I can live without one. Another issue is the narrow-ILM cooler mount, which limits the available CPU coolers: Noctua makes some air coolers for it, and Asetek offers a narrow-ILM mounting ring for their AIOs.
CPU and memory: That's the easy part: there are lots of Xeon processors available, from 4-core CPUs under $50 to powerful 18-core chips. Intel Xeon E5-2600 processors are designed for dual-CPU systems, while the E5-4600 series was designed for 4-CPU systems; both work perfectly fine in single-CPU boards. The motherboard supports up to 128GB of DDR4 memory, which I might max out. I have a few 16GB sticks around.
CPU cooler: Currently I have an Asetek 550 AIO with the narrow-ILM ring. No issues, and this is what I'm going to start with. I'm not so sure about an AIO for a 24/7 system, though, and will likely end up with a Noctua cooler; a narrow-ILM mounting kit from Noctua is on its way. Either the L12S for a low-power CPU and better compatibility, or the C14S for maximum performance and low noise. I'm looking at top-down CPU coolers so they also cool the 10GbE chip that sits next to the CPU.
Videocard: Most likely a Radeon; no idea which one yet. Consumer Nvidia cards don't work with passthrough out of the box, and the existing workarounds are not that stable. I'm not going after high fps or 4K here. I'd like to have a 5700 XT, but there are some issues, see below.
Case: Trying to fit all of that into a Sliger Cerberus.
PSU: Corsair SF450/600/750. I tested the system with my existing HX850i PSU on the bench: with an E5-2683v3 (120W, 12 of 14 cores assigned) and the Intel P3600 I was able to max it out at 205W. Not a very scientific test, I just wanted initial numbers. I'm fairly sure I'll be okay with the 2618Lv3 and an SF450.
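For what it's worth, here is the rough power-budget math behind that guess. The TDP/TBP numbers are assumed nameplate values, not measurements from this build:

```python
# Crude PSU headroom estimate; everything except the 205 W bench reading is an assumption.
measured_no_gpu_w = 205   # bench reading with the 2683v3 + P3600, no GPU
cpu_2683v3_tdp = 120      # CPU used during the bench test
cpu_2618Lv3_tdp = 75      # assumed TDP of the planned low-power CPU
gpu_tbp = 130             # assumed board power of a 5500 XT-class card
psu_w = 450               # Corsair SF450

estimate_w = measured_no_gpu_w - cpu_2683v3_tdp + cpu_2618Lv3_tdp + gpu_tbp
print(f"~{estimate_w} W peak vs a {psu_w} W PSU ({100 * estimate_w / psu_w:.0f}% load)")
# ~290 W peak, roughly 64% of the SF450
```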

Below are two projected builds: one is a reasonable build and the other is a maxed-out version.

Components:

Small build | Big build
CPU: Intel Xeon E5-2618Lv3 (8C/16T, 3.0GHz) | Intel Xeon E5-2683v3 (14C/28T, 2.0GHz), E5-4650v3 (12C/24T, 2.1GHz), or E5-4669v3 (18C/36T, 2.1GHz)
Cooler: Noctua L12S or Asetek 550LC (single 120mm) | Noctua C14S or Asetek 650LC (dual 120mm)
Memory: 4x16GB DDR4-2133 ECC Reg | 4x32GB DDR4-2400 ECC Reg
Storage: Nytro 3330 15.36TB SAS SSD, Intel P3605 1.6TB NVMe, Samsung 850 PRO SATA SSD | Nytro 3330 15.36TB SAS SSD, Samsung PM1725 6.4TB NVMe SSD, Samsung 850 PRO SATA SSD
Videocard: Radeon 5500 XT | Radeon 5700 XT
PSU: Corsair SF450 | Corsair SF600 or SF750
Case: Sliger Cerberus | Sliger Cerberus
 

TLN

Problems:
1. Drive partitioning. While I can install the hypervisor on the NVMe drive, it's usually a good idea to keep the hypervisor and the VMs separate. Most likely I'll end up with an extra SATA SSD just for the ESXi installation; I might use that drive for Windows and Linux as well. ESXi can be installed on a USB stick, and that's how it's often done in servers, but...
2. USB controller. The board works a bit differently compared to my existing workstation (Asus Z10PE-D16 WS). I had to disable the Intel USB 3.0 controller to pass both controllers through to a virtual machine, and it seems that both types of ports (2.0 and 3.0) are connected to the same controller. With my Asus board I could pass the USB 3.0 ports to the NAS VM and the USB 2.0 ports to the Windows VM. I can always get an additional USB card, but I'd prefer not to.
3. Videocard options. If the videocard is installed in the top slot it blocks the 2nd PCIe slot. So far it looks like I can get away with that, but I'd prefer to have all three slots available. If a long videocard is installed in the bottom slot (the most logical option IMHO) it blocks the onboard SAS ports. I'd have to use short (ITX-length) cards or mod a PCIe extender in. I haven't decided yet, but I will likely start with the HP OEM RX 580 4GB: it's the only ITX RX 580 that I'm aware of. Other options include the RX 570 ITX, Vega 56 Nano, and the PowerColor 5700 ITX that's only available in Japan.
4. Cooling. I'm not worried about CPU cooling: I have a few options there. Boards like this are usually designed around the strong front-to-back airflow you get in datacenter chassis; I've seen my LSI SAS controller at over 90C in my workstation. I'm going with a top-down CPU cooler to also cool the 10Gb NIC, and I'm considering copper heatsinks for the C612 and SAS chips. Not a top priority, but it's on my list.

Configuration:

Photos:

Benchmarks:
 

TLN



Xeon 8/16, 4x32 DDR4 ECC, HP Samsung 512 GB M.2, Intel 905 480 GB M.2, Supermicro 128 GB SSD.
I bought but did not install the 1030 video card. Micro PSU 250 W, power supply brick 150 W.
That looks like a Supermicro board with a Xeon D-1521 or D-1541 processor.
In fact, I was thinking about an Intel NUC or something similar, but there's no expansion and it gets pretty expensive very fast.
I'm also aware there are Supermicro/ASRock boards with the Intel Xeon D-1541 (8 cores/16 threads), integrated 10G, and a SAS controller, but those run $400-500.
 

TLN

Currently the build looks as follows:


I ran two VMs (Desktop and Storage) and did a quick test.
1st drive: system drive (a VMDK stored on the Intel P3600 NVMe).
2nd drive: the same drive over the network (another VMDK attached to the NAS VM and shared back).
3rd drive: the Nytro attached to the NAS VM, over the network of course.

Quite interesting results. I expected it to be capped at 10,000Mbps (10Gbps), but it was better than that; presumably VM-to-VM traffic on the same host never actually leaves the virtual switch, so it isn't limited by the physical 10Gbps NIC. I intentionally used 16GB test files in CrystalDiskMark.

This is how it looked from VMware:

I got cables that should support dual-port operation, but I can't confirm whether I'm actually using dual-port SAS or not.
 

TLN

Decided to go with the VisionTek 5500 XT videocard over the old OEM RX 580. It's the only "short" Navi card besides that unicorn 5700 from PowerColor, and I was able to get one for a reasonable price from Dell. Performance-wise it's about the same, but it's newer and comes with a warranty. It turned out to be a good choice: the card just overlaps the PCH heatsink, and any card longer than that would block the SAS ports.

Ordered the Noctua C14S cooler. Still waiting for the Noctua mounts.
 

TLN

It turned out there's such a thing as the "AMD Navi reset bug". It affects the 5700 XT, the 5500 XT, and everything in between: the host cannot reset the videocard when you reboot the virtual machine, so you have to reboot the host itself. I'm not going to reboot all of my storage every now and then, right?
Decided to give Nvidia a try. Nvidia cards require some workarounds for passthrough, but I knew it could be done. After some attempts it's finally working now.


It's the shortest RTX 2070 besides the Gigabyte Mini ITX 2070 and the MSI Aero ITX 2070, and both of those cards are $520+.
The Asus is a bit longer than the VisionTek 5500 XT or either of the 2070 ITX versions, at 197mm. The card covers one SAS port, but I can live with that for now and fix it later if needed.

I was running Assassin's Creed Odyssey with the 5500 XT and CPU usage was around 30% (2683v3, gaming in a VM). With the RTX 2070 I'm running ultra-high graphics and the CPU is utilized much more, at 50-60%.

5500XT:

2070RTX:

I'm using ESXi monitoring, which doesn't show GPU temps, but after 3 hours of gaming the system temps are below 55C. The hottest components are the memory sticks at 53-54C and the PCH at 50C.
 

smitty2k1

That's crazy, love it! Need more SFF NAS around here.
That SAS drive sure is something else!
 

Sligerjack

I have no clue how I didn't see this thread earlier, but good god man, this thing is beautiful!

My personal Cerberus serves as my main system and my NAS, but this thing just put mine to shame! All of the kudos!
 

hsolo505

This looks great! How do you handle (or plan to handle) data/storage redundancy in case your SSDs kick the can?

I'm super curious about the Nvidia GPU passthrough on ESXi. The last time I attempted it was with a 750 Ti. Do you mind linking any resources you used to make it work with the 2070?
 

catawalks

Decided to go with the VisionTek 5500 XT videocard over the old OEM RX 580. It's the only "short" Navi card besides that unicorn 5700 from PowerColor, and I was able to get one for a reasonable price from Dell. Performance-wise it's about the same, but it's newer and comes with a warranty. It turned out to be a good choice: the card just overlaps the PCH heatsink, and any card longer than that would block the SAS ports.

Ordered the Noctua C14S cooler. Still waiting for the Noctua mounts.

Just a thought, have you tested moving the GPU up one slot to the x8 middle slot? You should lose 0 performance, maintain cooling from the fans below, and gain your other SAS port back. The only drawback might be the proximity of the PCIe NVMe SSD to the GPU back plate, but that would most likely not be an issue with the airflow provided by all the fans and the air path configuration.
 

TLN

Just a thought, have you tested moving the GPU up one slot to the x8 middle slot? You should lose 0 performance, maintain cooling from the fans below, and gain your other SAS port back. The only drawback might be the proximity of the PCIe NVMe SSD to the GPU back plate, but that would most likely not be an issue with the airflow provided by all the fans and the air path configuration.

I thought about it, but I don't see a big reason to. It works pretty well as is, and it runs cool and quiet. I'm playing console-like games (Assassin's Creed) on this machine.
There's no need for an extra SAS drive, unless I get another deal of course. I'm considering a backup solution, but it will be a bunch of spinning drives only, maybe even offsite.
 

TLN

Quick update on the build.

1. I'm anticipating new videocards coming soon, so I've sold my card for a good price. I guess the fact that it's the only card that fits in an Intel NUC adds up.
2. Got a nice CPU instead: an Intel Xeon E5-2698v3. Basically doubled down: 2x the cores and threads, 2x the power and cache. I've been asked a few times about performance in a VM, so I decided to run some benchmarks. For some reason I was unable to boot into Windows bare metal with the old CPU, so I gave up quickly and dropped in the new one. Bare metal isn't going to break any records; it's only 2-5% better than in a VM (quick check below the table).

CPU | Config | R15 | R20 | R23 (single core) | R23 (multi core)
2618Lv3 | 8C/16T, in VM | 937 | 2121 | 652 | 4474
2618Lv3 | 8C/16T, bare metal | - | - | - | -
2698v3 | 8C/16T, in VM | 1709 | 3561 | 652 | 8849
2698v3 | 16C/32T, in VM | 2058 | 4357 | 623 | 11178
2698v3 | 16C/32T, bare metal | 2048 | 4467 | 614 | 11502

I'm happy to see the highest temperature during the R23 multi-core run was 57C. It will be higher with a GPU installed, but still, I might swap to a smaller CPU cooler.
 

TLN

Haven't posted updates in the past few months, but I've sold my 2070 Dual Mini card and purchased a 3060 Ti from EVGA, the only 3000-series card under 21cm at the moment. I paid about as much as I sold my 2070 for, so I'm very happy about it. Still using my 450W PSU, even though I have a 750W Corsair on hand.

The system drive was updated to the same PM1725 but in the 6.4TB version. The new drive doesn't support NVMe namespaces, so I have to repurpose my Samsung drive for the hypervisor. Not a big deal to me, but still.

It's pretty much maxed out: there's no reason to go after a v4 CPU or try to install a bigger card vertically. I might add a USB controller (for a WiFi adapter) in the available PCIe slot, though.
 

hsolo505

Very significant flash upgrade! Surprised you are even able to passthrough the new GPU as well on ESXi. What version are you running?
 

TLN

Very significant flash upgrade! Surprised you are even able to passthrough the new GPU as well on ESXi. What version are you running?
I had that flash in my main PC for a pretty long time, but I consolidated everything down to this little server and it works perfectly.
I'm running 6.7.0 U3. I was able to pass through the RTX 2000 series, but it required more effort: you need to add some Nvidia entries in /etc/passthrough and such. The RTX 3000 series works great out of the box. The only card that didn't work for me was the Radeon 5500 XT, which I returned to Dell.
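For anyone wanting to reproduce the GeForce passthrough on 6.7: the usual recipe is hiding the hypervisor from the guest in the VM's .vmx file, and the PCI passthrough quirks on ESXi live in /etc/vmware/passthru.map. A minimal sketch of the commonly cited settings, not necessarily the exact entries I ended up with:

```
# VM's .vmx file: hide the hypervisor so the GeForce driver doesn't
# refuse to load (the classic Code 43 issue).
hypervisor.cpuid.v0 = "FALSE"

# /etc/vmware/passthru.map: format is <vendor-id> <device-id> <reset-method> <fptShareable>.
# Stock ESXi already ships a catch-all NVIDIA line similar to this:
10de  ffff  bridge   false
```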