Help - PC + NAS + eGPU all-in-one build

Wahaha360

a.k.a W360
Original poster
SFFLAB
NCASE
SSUPD
Feb 23, 2015
2,131
10,697
I would like the following in one box for my upcoming trips:
- Synology NAS motherboard + 4x 2.5" Drives
- Gigabyte Aorus eGPU motherboard
- Intel CPU + PC motherboard
- Nvidia Geforce GPU
- Nvidia Quadro GPU
- Video Capture Card
- SFX or custom power supply


I have a MacBook, and I use it as a monitor when I need my PC for CAD and gaming, both of which are much better on Windows. Sometimes I need an eGPU to accelerate things on OS X too. Ideally, the NAS helps me manage and transfer files between my MacBook and PC. In small team meetings, it lets me share data with other computers.

Here is how I use my MacBook + PC, but now I need the PC to also function as a NAS and eGPU occasionally.


Here is what I have figured out so far:
- I have to use PCIe flex risers for both GPUs, which lets me move each GPU between the PC mobo and the eGPU mobo (the GPUs are held in place by brackets, so it's easier to move a flexible riser around).
- I need a PC motherboard with 10Gb Ethernet and Thunderbolt 3, so it can connect to the NAS directly or over the network; the current best candidate is the ASRock Fatal1ty Z370 Gaming-ITX/ac.
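To see why the 10GbE/Thunderbolt requirement matters for a NAS workflow, here's a quick back-of-envelope transfer-time sketch. The usable Thunderbolt 3 bandwidth and the flat 80% efficiency factor are assumptions for illustration, not measurements:

```python
# Rough back-of-envelope for moving files between the MacBook, PC and NAS.
# Link speeds and the flat 80% efficiency factor are assumptions
# (protocol + disk overhead), not measurements.

LINKS_GBPS = {
    "1GbE": 1.0,
    "10GbE": 10.0,
    "Thunderbolt 3": 22.0,  # assume ~22 Gbps usable of the 40 Gbps link
}

def transfer_seconds(size_gb: float, link_gbps: float, efficiency: float = 0.8) -> float:
    """Seconds to move size_gb gigabytes over a link, with a flat efficiency factor."""
    size_gbit = size_gb * 8  # gigabytes -> gigabits
    return size_gbit / (link_gbps * efficiency)

for name, gbps in LINKS_GBPS.items():
    t = transfer_seconds(100, gbps)  # e.g. a 100 GB CAD/project folder
    print(f"{name:>14}: {t / 60:5.1f} min")
```

Even with generous assumptions, 1GbE is the bottleneck for big project folders, which is why the board choice leans on 10GbE or a direct Thunderbolt link.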

I don't know anything about NAS, the heat it will generate, or how to make a DIY NAS, so my questions are:
1) which Synology NAS motherboard should I get?
2) should I do a DIY NAS? If yes, what kind of cooling do I need?
 
Last edited:
  • Like
Reactions: Soul_Est and Phuncz

VegetableStu

Shrink Ray Wielder
Aug 18, 2016
1,949
2,619
paging @Soul_Est, he's the one who gave me RAID insights previously o_o

EDIT: just to ask: would you mind using the main PC as the NAS system? (like you have to turn on the entire thing just to access files)
 
  • Like
Reactions: Soul_Est

EdZ

Virtual Realist
May 11, 2015
1,578
2,107
Crazy whack option: use a hypervisor, and run the user OS and NAS OS on the same physical hardware as close-to-the-metal VMs.

Pros:
- PCIe passthrough to VMs & dedicated CPU cores so minimal performance impact
- No need for 10GigE hardware for internal run, potentially crazy bandwidth between NAS VM and gaming VM. NAS VM can still expose itself externally too.
- Choice of NAS OS (e.g. NAS4Free, FreeNAS, Windows Server, UnRAID, etc)
- Capture card could be hosted by a separate VM to compartmentalise capture & broadcast from gaming VM in case of a freeze/crash
- Potentially cheaper and more compact

Cons:
- Hardware passthrough devices need to be dedicated to a VM. For example, if you want to run two gaming VMs with a GPU each, you need two GPUs, they can't share. Or if you want to operate the NAS VM directly with a USB keyboard and mouse (rather than 'remoting in' from the gaming VM) you need to dedicate a USB controller to that VM exclusively, which may be a problem if the motherboard has a limited number of USB host controllers
- That also counts for the network hardware, but you can 'network internally' so the NAS also acts as an internal router/switch and owns the hardware ethernet port, with the gaming VM connecting via that. If you have a motherboard with two ethernet ports, you can give one to each VM.
- Choice of CPU hardware leans towards more cores (cores dedicated to VMs), which for ITX leans towards the X99/X299 platform (you might get away with Coffee Lake: 4 cores for gaming, one for the NAS VM, one for the hypervisor, but it's a stretch). This eats into the cost savings from avoiding dedicated hardware.
- Running a 'software NAS' means you lose out on the 'plug and play' usability of a NAS box, though gaining a lot more configurability
- Some power management oddities. VMs can be 'slept' and resumed from power off by the hypervisor, but from cold boot you now have an additional delay while the hypervisor boots before all other VMs can boot. Single/multi-core turbo affected by several cores always being live due to the hypervisor and other VMs running even when idling.
- The setup is nonstandard and not hugely popular, so there isn't a large knowledge base on how to set it up.

LinusTechTips' "X gamers 1 CPU" builds ([1] [2] [3] [4] [5]) are examples with multiple gaming VMs, but you can use the same setup with a single VM and one NAS VM. They use UnRAID as the hypervisor, which means it acts as the NAS as well as hosting other VMs.
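The core-budget point in the cons above can be sketched as a simple pinning plan. `allocate_cores` is a hypothetical helper for illustration; actual pinning would be done through the hypervisor itself (e.g. libvirt's `cputune`/`vcpupin` settings, or UnRAID's CPU isolation options):

```python
# Sketch of carving a host CPU up between the hypervisor and passthrough VMs,
# per the Coffee Lake "stretch" scenario above. Core counts are illustrative.

def allocate_cores(total: int, reservations: dict) -> dict:
    """Assign disjoint core IDs to each role; raise if oversubscribed."""
    needed = sum(reservations.values())
    if needed > total:
        raise ValueError(f"need {needed} cores, only {total} available")
    plan, next_core = {}, 0
    for role, count in reservations.items():
        plan[role] = list(range(next_core, next_core + count))
        next_core += count
    return plan

# 6-core Coffee Lake: 1 core for the hypervisor, 1 for the NAS VM, 4 for gaming
plan = allocate_cores(6, {"hypervisor": 1, "nas_vm": 1, "gaming_vm": 4})
print(plan)  # {'hypervisor': [0], 'nas_vm': [1], 'gaming_vm': [2, 3, 4, 5]}
```

The takeaway: on a 6-core part every core is already spoken for, so any extra VM (capture card host, second gamer) pushes you towards the higher-core-count platforms.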
 
Last edited:

jtd871

SFF Guru
Jun 22, 2015
1,166
851
@EdZ said it better than I could. IIRC, Puget Systems has done builds along the 1 CPU, X gamers VM theme. I don't know how responsive they'd be if you didn't purchase at least some hardware from them, though.
 
  • Like
Reactions: VegetableStu

Wahaha360

a.k.a W360
Original poster
SFFLAB
NCASE
SSUPD
Feb 23, 2015
2,131
10,697
would you mind using the main PC as the NAS system? (like you have to turn on the entire thing just to access files)

I would prefer to have the NAS separate from the PC. I like to have the NAS on all of the time, and the PC some of the time.
 
  • Like
Reactions: VegetableStu

Wahaha360

a.k.a W360
Original poster
SFFLAB
NCASE
SSUPD
Feb 23, 2015
2,131
10,697
Crazy whack option: use a hypervisor, and run the user OS and NAS OS on the same physical hardware as close-to-the-metal VMs.

Pros:
- PCIe passthrough to VMs & dedicated CPU cores so minimal performance impact
- No need for 10GigE hardware for internal run, potentially crazy bandwidth between NAS VM and gaming VM. NAS VM can still expose itself externally too.
- Choice of NAS OS (e.g. NAS4Free, FreeNAS, Windows Server, UnRAID, etc)
- Capture card could be hosted by a separate VM to compartmentalise capture & broadcast from gaming VM in case of a freeze/crash
- Potentially cheaper and more compact

Cons:
- Hardware passthrough devices need to be dedicated to a VM. For example, if you want to run two gaming VMs with a GPU each, you need two GPUs, they can't share. Or if you want to operate the NAS VM directly with a USB keyboard and mouse (rather than 'remoting in' from the gaming VM) you need to dedicate a USB controller to that VM exclusively, which may be a problem if the motherboard has a limited number of USB host controllers
- That also counts for the network hardware, but you can 'network internally' so the NAS also acts as an internal router/switch and owns the hardware ethernet port, with the gaming VM connecting via that. If you have a motherboard with two ethernet ports, you can give one to each VM.
- Choice of CPU hardware leans towards more cores (cores dedicated to VMs), which for ITX leans towards the X99/X299 platform (you might get away with Coffee Lake: 4 cores for gaming, one for the NAS VM, one for the hypervisor, but it's a stretch). This eats into the cost savings from avoiding dedicated hardware.
- Running a 'software NAS' means you lose out on the 'plug and play' usability of a NAS box, though gaining a lot more configurability
- Some power management oddities. VMs can be 'slept' and resumed from power off by the hypervisor, but from cold boot you now have an additional delay while the hypervisor boots before all other VMs can boot. Single/multi-core turbo affected by several cores always being live due to the hypervisor and other VMs running even when idling.
- The setup is nonstandard and not hugely popular, so there isn't a large knowledge base on how to set it up.

LinusTechTips' "X gamers 1 CPU" builds ([1] [2] [3] [4] [5]) are examples with multiple gaming VMs, but you can use the same setup with a single VM and one NAS VM. They use UnRAID as the hypervisor, which means it acts as the NAS as well as hosting other VMs.

I'm a noob, so I need to learn a little more about the VM implementation, but at a high level, given my ability to break most of my gadgets, I don't trust myself to implement any sophisticated solutions. There is also the concern of managing multiple drivers, stability, etc.

I'm going to try the hardware route first, b/c at minimum I need the NAS and PC to be independent.
 

Wahaha360

a.k.a W360
Original poster
SFFLAB
NCASE
SSUPD
Feb 23, 2015
2,131
10,697
This is one possible configuration for Gear 1: a 280mm radiator (EK SE 280), an adapter, and 120mm fans (particularly Noctua Sterrox).

3lfk1ng has the prototype, so I can't really test (and I'm not good at it tbh). Has anyone ever done this?

Curious how much is sacrificed in terms of performance.

 
Last edited:

Soul_Est

SFF Guru
SFFn Staff
Feb 12, 2016
1,536
1,928
paging @Soul_Est, he's the one who gave me RAID insights previously o_o

EDIT: just to ask: would you mind using the main PC as the NAS system? (like you have to turn on the entire thing just to access files)
I have done so but it is annoying. This was when my battlestation hosted three machines.

Crazy whack option: use a hypervisor, and run the user OS and NAS OS on the same physical hardware as close-to-the-metal VMs.

Pros:
- PCIe passthrough to VMs & dedicated CPU cores so minimal performance impact
- No need for 10GigE hardware for internal run, potentially crazy bandwidth between NAS VM and gaming VM. NAS VM can still expose itself externally too.
- Choice of NAS OS (e.g. NAS4Free, FreeNAS, Windows Server, UnRAID, etc)
- Capture card could be hosted by a separate VM to compartmentalise capture & broadcast from gaming VM in case of a freeze/crash
- Potentially cheaper and more compact

Cons:
- Hardware passthrough devices need to be dedicated to a VM. For example, if you want to run two gaming VMs with a GPU each, you need two GPUs, they can't share. Or if you want to operate the NAS VM directly with a USB keyboard and mouse (rather than 'remoting in' from the gaming VM) you need to dedicate a USB controller to that VM exclusively, which may be a problem if the motherboard has a limited number of USB host controllers
- That also counts for the network hardware, but you can 'network internally' so the NAS also acts as an internal router/switch and owns the hardware ethernet port, with the gaming VM connecting via that. If you have a motherboard with two ethernet ports, you can give one to each VM.
- Choice of CPU hardware leans towards more cores (cores dedicated to VMs), which for ITX leans towards the X99/X299 platform (you might get away with Coffee Lake: 4 cores for gaming, one for the NAS VM, one for the hypervisor, but it's a stretch). This eats into the cost savings from avoiding dedicated hardware.
- Running a 'software NAS' means you lose out on the 'plug and play' usability of a NAS box, though gaining a lot more configurability
- Some power management oddities. VMs can be 'slept' and resumed from power off by the hypervisor, but from cold boot you now have an additional delay while the hypervisor boots before all other VMs can boot. Single/multi-core turbo affected by several cores always being live due to the hypervisor and other VMs running even when idling.
- The setup is nonstandard and not hugely popular, so there isn't a large knowledge base on how to set it up.

LinusTechTips' "X gamers 1 CPU" builds ([1] [2] [3] [4] [5]) are examples with multiple gaming VMs, but you can use the same setup with a single VM and one NAS VM. They use UnRAID as the hypervisor, which means it acts as the NAS as well as hosting other VMs.
Well put. *tips his hat*
 
  • Like
Reactions: VegetableStu

jtd871

SFF Guru
Jun 22, 2015
1,166
851
Looks like the Synology DS416slim will be the most compact 4-bay Syno you can get - 5.6 x 4.2 x 4.7 inches finished dimensions, 2x 1Gb Ethernet with link aggregation. TweakTown review here: https://www.tweaktown.com/reviews/7899/synology-ds416slim-four-bay-2-5-nas-review/index2.html
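One caveat on the 2x 1GbE aggregation: typical link aggregation (LACP) hashes each flow onto a single physical link, so one big file copy still tops out at roughly 1 Gbps; only multiple concurrent streams or clients see the combined bandwidth. A toy model of that behaviour (real distribution depends on the switch and its hash policy):

```python
# Toy model of LACP-style aggregation: each flow is hashed onto one link,
# so single-stream throughput is capped at one link's speed.

def aggregate_throughput_gbps(n_links: int, link_gbps: float, n_streams: int) -> float:
    """Best-case throughput assuming streams spread evenly across links."""
    usable_links = min(n_links, n_streams)
    return usable_links * link_gbps

print(aggregate_throughput_gbps(2, 1.0, 1))  # one big file copy -> 1.0 Gbps
print(aggregate_throughput_gbps(2, 1.0, 4))  # several clients   -> 2.0 Gbps
```

So for the single-MacBook-to-NAS transfers described in the OP, the aggregation mostly helps when the PC and MacBook hit the NAS at the same time, not for one large copy.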

So for your other needs (2 video cards + capture card - expect that you could set up two Windows 10 installations + macOS in multiboot), you could maybe use a mATX board (ASRock Z370M Pro?). The following link talks about setting up two Win10 boots - each with the appropriate video card disabled and the appropriate video card driver. http://sourcedaddy.com/windows-10/configuring-multiboot-system.html

Probably easiest to put the GPUs on the same loop, since only one will be drawing power above idle at a time.
 
  • Like
Reactions: Soul_Est