Component (and OS) recommendations for DIY NAS?

Valantar

Shrink Way Wielder
Original poster
Jan 20, 2018
1,683
1,571
As is my habit, this post turned into a bit of a novella, so I'll put the tl;dr version first: my current NAS build seems to be dying, and I need a relatively cheap replacement motherboard + CPU + RAM, preferably on an AMD platform. Needs: stability, 4 SATA ports, decent performance, longevity, Windows file sharing, a PCIe slot for an HBA AIC. Wants: ECC support, nGbE networking, decent CPU performance. Also looking for OS recommendations.


For the more patient among you:
I've been getting intermittent resets from my trusty old NAS PC recently, with the event log ominously reporting a fatal hardware error from the WHEA-logger service, but no details about which hardware has failed or how. This makes it a bit hard to troubleshoot, obviously, though I'm planning to test different RAM at the very least (I have some lying around). I was planning to upgrade this PC soon anyway, so I guess I might as well start figuring things out. Having your backup PC crash regularly is hardly an ideal situation.

The PC is currently running an AMD A8-7600, an ASRock FM2A88-ITX+ motherboard, 16GB of DDR3 (1600 IIRC, some no-name AliExpress brand), no GPU, three HDDs (2x4TB WD Red mirrored for backups, a 6TB Seagate for media storage) and a SATA boot SSD. It's powered by a Silverstone SX500LG, and it all lives in a Fractal Node 304. There's room for another three HDDs. I want to keep the PSU, SSD (possibly for caching?) and HDDs.

The future use of the PC is pretty simple: network storage and backups, running headless, stuffed in a closet, touched only when necessary, and controlled over the network as much as possible. Storage duties include backups to my cloud storage service, which has a command line tool compatible with Linux, BSD, and so on. Useful folders from the network share are mapped as network drives on the various (Windows) PCs around the house, so that obviously still needs to be possible. Some CPU power for transcoding would probably be useful down the line, if for no other reason than space savings (turning all those H264 videos into H265 or similar), though that's not a big priority. Still, the A8 has been struggling noticeably lately even just logging into Windows and running various background tasks. I don't want a (d)GPU - it won't have any use, and the PCIe slot has other potential uses. I was planning to move my Ryzen 5 1600X + Biostar X370GTN to this PC when I upgrade my main PC, but given these errors I might need to do that earlier. ECC memory support would definitely be nice to have given the use case, likely with some used server DDR4 off Ebay.
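For reference, the cloud backup part is just the provider's CLI tool run on a schedule. A sketch of what that job looks like, with rclone standing in for the actual tool (whose syntax differs); paths and the remote name are made up:

Code:
# Hypothetical nightly cloud sync - rclone as a stand-in for my provider's
# actual CLI tool; paths and remote name are placeholders.
# crontab entry: 0 3 * * * /usr/local/bin/cloud-backup.sh
rclone sync /tank/backups remote:nas-backups --transfers 4 --log-file /var/log/cloud-backup.log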

The uses for the PCIe slot are worth mentioning: I'll probably want to add an HBA card at some point to go beyond the 4 SATA ports of most motherboards these days, and I want some form of nGbE networking, though 2.5GbE is probably more than enough for our needs + budget (10GbE switches aren't likely to be available to mortals in the next few years).

I saw a good deal on the Asus B550i Strix the other day, and for a second considered getting that for this build (even if it's utter overkill for this use), mainly because of the built-in 2.5GbE (despite it being the faulty Intel type - I'd just need to be careful about which switch I get). But then I realized it's not compatible with 3000-series APUs, meaning I couldn't use an Athlon 3000G like I was thinking, and would instead need to get something like a Ryzen 3 3100 at twice the price. There's also the question of that NIC's compatibility with non-Windows OSes, though given that it's an Intel NIC I'd expect it to at least work. Still, that doesn't seem like the best solution, but anything below B550 rules out integrated nGbE, forcing me to either get a USB NIC (not really suited for long-term use IMO) or use a bifurcation riser to fit both a NIC and an HBA (which definitely wouldn't be cheap, and would require a motherboard with bifurcation support). Current motherboards having just four SATA ports underlines the importance of the HBA, as the current drive layout already uses four ports, and we'll be needing more capacity long before these drives need replacing. So I'm in a bit of a bind. What would you recommend? I don't have a fixed budget for this, but I'd like to keep it as cheap as possible.


I'm also looking for OS recommendations - so far this PC has been running Windows 10, but that's far from ideal for this use case. I'm not interested in overly complex systems, VMs, etc., but I don't want a completely closed-off system either, and as I said I need to be able to run my cloud provider's command line tool.
 

elvendawn

Average Stuffer
Nov 12, 2020
60
24
Hi Valantar,

I'm in a similar situation. There really isn't a best-of-all-worlds unicorn that I've found so far that meets my budget expectations. I've been considering something like the SuperMicro X10SDV-4C-TLN2F SoC motherboard. It has a few more SATA ports and dual 10G NICs, and could help you avoid an HBA if 6 SATA + 1 M.2 is sufficient for you. It also has IPMI for headless management if you're having issues connecting to it remotely. You could then also consider something like a GTX 1650 Super for Turing NVENC encoding support, which would significantly improve your transcoding times; however, it might contend for space with the last set of disk trays in the Node 304 (I can't really tell, but I don't think that disk tray + a single-fan 1650S will fit at the same time - I currently use a Node 804, so I have a bit more room). Budget would definitely be a concern for these options though.

As far as the OS goes, if you're comfortable with Linux, Ubuntu Server is a decent option: there's lots of support in the user communities if you get stuck or run into issues, and you have a host of filesystem and disk management options. Handbrake works great, but Samba has a bit more of a learning curve than W10 for some advanced features, and getting all your Windows ACLs set up right is a bit more of a chore.
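To give an idea of that learning curve: a basic share is only a few lines of smb.conf, and it's the Windows-ACL side that takes some reading. A minimal sketch (share name, path and user are just examples):

Code:
[backups]
    path = /tank/backups
    valid users = valantar
    read only = no
    # the Windows-style ACL support is where the learning curve is:
    vfs objects = acl_xattr
    map acl inherit = yes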

FreeNAS (based on FreeBSD) is another good option, with support for Docker-style containers for anything you'd really need. It definitely simplifies the web management and NAS feature set too, and it's plenty stable.
 

Valantar

Shrink Way Wielder
Original poster
Jan 20, 2018
1,683
1,571
Thanks for the response, but I think something like that is way overkill for my use. I would love IPMI support, but it's not worth paying that kind of premium for given that I'd probably need it 1-2 times a year, and the other features of boards like that are ... well, better suited for other uses. Especially since the 10GbE ports sadly don't support 2.5GbE or 5GbE (all too common on any but the newest 10GbE controllers), a board like that would essentially lock me into 1GbE until I could afford to spend >$500 on a switch, which ... well, isn't happening. I've spent some time drooling over some of these server ITX boards, but they're mostly way out of my budget range. And to be honest, I mostly don't need what they're offering, given that $150 B550 boards now offer ECC support and 2.5GbE.

As for Ubuntu Server, again I'd likely need something made more specifically for my use case. There's no doubt it's configurable into a really good setup given enough time and effort, but that'd quite simply be too much work given my (lack of) familiarity with Linux overall. I've considered FreeNAS, Unraid, Amahi, Open Media Vault, Xpenology and a bunch of others; what I'm really looking for is input from anyone with experience actually using these.
 

Phuncz

Lord of the Boards
Editorial Staff
Moderator
Gold Supporter
May 9, 2015
5,187
4,520
Would 10Gb SFP+ be sufficient? You can find plenty of 24-48 port switches with 2x 10Gb SFP+ ports second-hand for around 100€/$, as well as a few small 10Gb SFP+-only switches like the MikroTik CRS305. 10Gb SFP+ is common on server hardware, and used parts (cards and switches) are cheap, especially with just a few ports and Direct Attach Cables (DAC). RJ45 10GbE is expensive and will likely remain so for a while.

I can also recommend FreeNAS (now named TrueNAS), as well as CentOS and ESXi, to get you started. All of these allow virtualisation in some form or another to use resources efficiently. I use ESXi as a hypervisor (the base OS) and run virtual machines with a NAS OS, PiHole and CentOS on top of that, on an 8-core Ryzen with 32GB RAM and SSD and HDD storage tiers. Link to my build.

If you need to buy all-new parts and are interested in Ryzen 3000 or 5000 series CPUs, consider the ASRockRack X570D4I-2T; it seems to fit most of your demands. It has RJ45 connectors for its 10GbE outputs though, not SFP+ sockets.
You'll want to consider a server board with IPMI or out-of-band management, as most Ryzen CPUs don't have an integrated GPU for display output. This is also an option: ASRockRack EPYC3521D4I-2T.

But I would start with the motherboard, as that will be the central point of the build. Determine the CPU (cores, performance, power efficiency) and RAM requirements first, as they'll depend greatly on what you need and plan to do.
With 10G networking and SATA/SAS on the board, you can keep the PCIe slot free for other potential uses (a quad M.2 PCIe card?) or for upgrading the network. Dual 40Gb QSFP+ is really cheap (but requires cooling!), though the switches aren't. 25Gb SFP28 is also worth considering for more future-proof use, with QSFP28 (100Gb) breaking out to 4x SFP28 (25Gb).

Some good jumping off points for motherboards:
ASRockRack mITX boards
SuperMicro mITX boards
Gigabyte Server mITX boards
 

Valantar

Shrink Way Wielder
Original poster
Jan 20, 2018
1,683
1,571
Thanks for the info! A lot of stuff to look into there, for sure. I remember reading your build log a while back, and I'll definitely be going back to it for reference - it's pretty close to the kind of setup I want.

I've considered going SFP+, but running that kind of cabling through the apartment doesn't really seem feasible - with concrete walls everything needs to be run in plain sight, which doesn't quite work with those thick cables, and the inability to terminate cables myself is a major drawback - I don't have a feasible place to store coils of excess wiring next to the router/switch or connected PCs. Not to mention the horrendously noisy fans of most enterprise switches, of course. That Mikrotik switch looks pretty nice (and it's even passively cooled!), but if I understand the product page correctly the Ethernet jack is only for management, meaning I'd need another Ethernet switch with an SFP+ jack to actually get this connected to the internet - and now we're looking at stuffing a lot of hardware next to my fiber modem. If I lived in a house with lots of room for a server closet (and didn't have the damn fiber modem mounted in the middle of a hallway, with connected devices in all directions) I'd likely be going for an SFP+ setup, but sadly it's not feasible here. And given the usage I'm not likely to see any benefit from faster SFP+ - even 10GbE is a bit overkill to be honest, but nice to have when editing photos off the NAS.

That ASRock Rack motherboard is pretty much my dream NAS motherboard, with the possible exception of unclear nGbE support on the 10G ports (Intel's X550-AT2 product page says "nGbE only on Linux", while ASRock's product page only lists 10GbE). Still: if I were able to hook it up to a 2.5GbE switch, that would save me getting a NIC; 8 SATA ports through OCuLink mean I could skip the HBA; there's room for an NVMe caching drive should I want one; there's IPMI support; and the board just seems excellent in most ways. It even supports Picasso APUs (though there's no mention of the Athlon 3000G). So yeah, it's pretty much perfect - it's just too bad that the board alone costs more than I'm likely to spend on this entire build. If I could reasonably afford it I would definitely go that direction, but it's just too expensive for now.

As for performance, as I said my needs are pretty simple - I might do some video transcoding down the line, but mostly this box will be serving files, acting as a file history location for the W10 PCs in the house + a backup target, and not much else. It might run some torrents if I find a convenient way to do that remotely, though that's not very high on the list of priorities. For now I honestly think an Athlon 3000G would work perfectly for my needs, and it would of course leave the door open for future upgrades. 16GB of RAM is the baseline (from what I understand ZFS essentially requires 1GB of RAM/TB of storage, meaning I currently need at least 14GB), though I might go for 32 just for the sake of longevity - those used server ECC UDIMMs are quite cheap after all.

What has your experience running ECC memory on a B350 board been like, by the way? Any issues?
 
  • Like
Reactions: Phuncz

vlad1966

Caliper Novice
Sep 21, 2017
28
14
I highly recommend Open Media Vault - it's been rock-solid for me and requires few resources (I run it with 8GB RAM and a Pentium G5900), though all I use it for is storing files & playing movies off my OMV NAS over my 1GbE network to my 4K TV - no issues, works great.

And the MikroTik switch - I used to have one & may get another. They're great little units & very affordable. I had the RJ45 port connected to my $20 unmanaged GbE switch, which was connected to my router, which was connected to my cable modem, so every PC connected to the MikroTik over SFP+ had internet access.
 
  • Like
Reactions: Valantar

Phuncz

Lord of the Boards
Editorial Staff
Moderator
Gold Supporter
May 9, 2015
5,187
4,520
What has your experience running ECC memory on a B350 board been like, by the way? Any issues?
No issues, though ECC is not enabled (just supported) on consumer Ryzen. I was just able to get Samsung B-die RAM for a good price, which is important for Ryzen 1st gen because of limited RAM overclock support.
 

eedev

SFF Lingo Aficionado
Apr 23, 2020
123
225
Hello!

For the OS, I know it won't fit most people's use cases, so it's not really a recommendation, but here's my experience after using Ubuntu Server, FreeNAS Corral, and some others along the way...

I use Proxmox (without any kind of subscription) for the virtual machines I sometimes need; for example, my girlfriend games on a Windows virtual machine with GPU passthrough on that server (Proxmox makes this process very simple).
Sometimes I try stuff like Steam Proton (on Pop!_OS) or Lutris, Android VMs, Mac VMs, a VM in a VM for double VPN encapsulation, ...

I use ZFS on Linux for my drives. I have 2 pools: one for Nextcloud stuff (my photos, various PDFs, my audio files, ...) with 1 drive backed by 2 other drives, and the other pool for my media.
ZFS lets you share a dataset over NFS and/or SMB (Windows shares) with a single command.
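For example (pool/dataset names are made up, and on Ubuntu the SMB side relies on Samba's usershare feature being enabled):

Code:
# create a dataset and export it, one command each:
zfs create tank/photos
zfs set sharesmb=on tank/photos
# NFS works the same way:
zfs set sharenfs=on tank/media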

I also use Docker (via a docker-compose.yml) directly on the host for better performance. I use it for Nextcloud, Plex, Wallabag, FreshRSS, Lychee, Gitlab, ...
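The compose file itself is nothing special, just one service block per app. Trimmed to a single service as an example (the image name is the official one on Docker Hub; ports and paths are my own choices):

Code:
# docker-compose.yml (excerpt)
version: "3"
services:
  freshrss:
    image: freshrss/freshrss
    ports:
      - "8080:80"              # web UI on host port 8080
    volumes:
      - ./freshrss-data:/var/www/FreshRSS/data
    restart: unless-stopped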
 

Valantar

Shrink Way Wielder
Original poster
Jan 20, 2018
1,683
1,571
Thanks for the input :)

I've heard of Proxmox, but kind of dismissed it for my use as it seems like overkill - managing a bunch of separate VMs seems clunky for what will be 99% a file server. I've considered setting up Pi-hole for the home network, so I guess that's a possible second use case, but beyond that, as I said, the system will essentially just be hosting files for the various HDD-less PCs in the house. If there are reasons to go this route that I haven't thought of I'm all ears, of course.

I've pretty much settled on ZFS for the storage, as it seems practical and flexible, even if it's not the most intuitive system. My current thinking is two pools: one consisting of a single media storage drive, and the other with mirrored drives for backups + an SSD for caching (for photo editing, mainly). When you say ZFS on Linux, does that mean you're running some Linux VM with OpenZFS? What made you go for that setup rather than something like FreeNAS? My familiarity with Linux is ... I won't say nonexistent, but close enough to not make a difference, so I definitely want GUIs where I can get them.
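If I've understood the ZFS docs correctly, my planned layout should boil down to something like this (device names are made up, and I'd presumably be doing this through a GUI anyway):

Code:
# single-drive pool for media (no redundancy - it's backed up to the cloud):
zpool create media /dev/ada1
# mirrored pool for backups/photos, with the old SATA SSD as an L2ARC read cache:
zpool create backup mirror /dev/ada2 /dev/ada3 cache /dev/ada4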
 

elvendawn

Average Stuffer
Nov 12, 2020
60
24
I use OpenZFS with Ubuntu Server, and I have Docker set up for a few other things like Pi-hole; I also use Handbrake, a Deluge server, Gimp, etc. on that system. As I mentioned, Samba takes a bit of a learning curve if you want anything above and beyond a basic CIFS share, but it wasn't too bad.

If you really aren't feeling comfortable with Linux, TrueNAS (FreeNAS) uses the ZFS filesystem on FreeBSD and is all managed through a web GUI that helps you through setting everything up. It also supports Docker-style containers for things like Pi-hole, Handbrake, and many others.


Both options have a lot of information and tutorials on the web if you go looking.
 
  • Like
Reactions: Valantar

Valantar

Shrink Way Wielder
Original poster
Jan 20, 2018
1,683
1,571
Does TrueNAS/FreeNAS have Docker support? I didn't know that. Definitely makes that sound like the way to go, if I can run a base NAS OS with the option for additional functionality down the line. Thanks for pointing that out!
 

eedev

SFF Lingo Aficionado
Apr 23, 2020
123
225
I've heard of Proxmox, but kind of dismissed it for my use as it seems like overkill - managing a bunch of separate VMs seems clunky for what will be 99% a file server. I've considered setting up Pi-hole for the home network, so I guess that's a possible second use case, but beyond that, as I said, the system will essentially just be hosting files for the various HDD-less PCs in the house. If there are reasons to go this route that I haven't thought of I'm all ears, of course.

If you don't see any use case for it, Debian/Ubuntu Server would be more than enough.

When you say ZFS on Linux, does that mean you're running some Linux VM with OpenZFS? What made you go for that setup rather than something like FreeNAS? My familiarity with Linux is ... I won't say nonexistent, but close enough to not make a difference, so I definitely want GUIs where I can get them.

My bad, I had to install OpenZFS on Ubuntu Server a couple of years ago, but ZFS comes with Proxmox out of the box ;)

I don't use GUIs often; Proxmox is the exception because it makes single-GPU passthrough so much easier.

I tried FreeNAS a couple of years ago, back when FreeNAS Corral was current, and then they discontinued it. I didn't like that lol.
At that time the regular FreeNAS was ugly and overcomplicated; Corral looked good.

If you want GUIs I think FreeNAS/TrueNAS or Unraid is the way to go...
 
  • Like
Reactions: Valantar

Valantar

Shrink Way Wielder
Original poster
Jan 20, 2018
1,683
1,571
Guess I'm bringing this thread back to life! I haven't really had time to deal with the NAS over the past few months, but it's become unstable enough to force my hand the past few days. It's been crashing enough that I've taken it entirely offline for the moment, which is ... not all that comfortable. Guess we'll be relying solely on cloud backups for a while. Thankfully I've upgraded my main system by now, so the NAS can inherit my Ryzen 5 1600X + Biostar X370GTN. I really wish I had a better motherboard (literally any other X370 or X470 board would be better, as Biostar's BIOS is utter crap), but it is what it is, and replacing a fully functional motherboard isn't worth it. I'll be adding an HBA for storage, likely something LSI 2008-based, as well as a 2.5GbE NIC when I get around to wiring up the apartment and buying a switch. The NIC will be connected either through one of C_Payne's ingenious x8+x8 risers or an M.2 riser. The HBA is getting ordered today, probably alongside 2x16GB of used server ECC UDIMMs.

Software-wise I've landed on TrueNAS, mostly because it seems like the easiest to set up and use. I've got a test setup going on the motherboard and RAM, and while I can barely make sense of it for now, I think I've figured out enough to make it work. The trickiest part will be getting data onto it: I'm reusing my old drives, which will need to be wiped for TrueNAS to use them, but thankfully I had just enough spare storage to make a copy of everything first. I don't quite trust one of the spare drives though, so I'm very happy I have online backups as well. It seems like the only workable solution for getting data off the spare drives and onto the array once it's set up is to connect them to another PC and transfer everything across the network, which is mind-bogglingly dumb, but I guess I have no choice.
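The plan for that transfer is basically one robocopy job per top-level folder, run from whichever PC the spare drives end up in (paths here are placeholders):

Code:
:: mirror a folder from the spare drive to the new SMB share, with a couple
:: of retries per file and a log to check afterwards:
robocopy D:\media \\nas\media /E /COPY:DAT /R:2 /W:5 /LOG:copy-media.log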
 
  • Like
Reactions: lozza_c and Phuncz

Valantar

Shrink Way Wielder
Original poster
Jan 20, 2018
1,683
1,571
Ordered an HBA yesterday. I found a pretty good Ebay store called The Art of Server that had a great selection + great explainer videos on the differences between the various models (which I definitely wouldn't have figured out by myself - there are so damn many!). Reportedly excellent support too, and I like that they also repair and sell damaged parts - actively working to reduce e-waste is a great thing in my book.

Anyhow, I ordered an IBM M1115 (LSI 9210-8i) HBA plus two SFF-8087-to-4x SATA cables. They even had pretty short cables, and I absolutely love how thin they are compared to most SATA cables; it should be a marked improvement in cable management. The case only supports six HDDs, but at least this way I can plan on keeping the HBA if/when I get/build a better NAS case (I'm really seeing the appeal of hot-swappable drives!). Thanks to the store, I also know not to run my boot/cache SSDs off the HBA, as it apparently doesn't support TRIM. The motherboard's SATA ports should handle those fine though.
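(If anyone wants to check this on their own setup: on TrueNAS/FreeBSD you can see whether a drive reports TRIM support through a given controller with camcontrol; the device name here is just an example.)

Code:
# does the SSD advertise TRIM through this controller? (FreeBSD/TrueNAS)
camcontrol identify ada0 | grep -i trim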

Also ordered an ADT-Link R42MR M.2-to-PCIe x4 riser for when I add 2.5GbE to the PC. I'm slightly worried about fitment, but it should work. If not, it's just $22+VAT, so not the end of the world; much cheaper than C_Payne's (admittedly far superior) x8+x8 riser at €80+shipping.
 
  • Like
Reactions: Phuncz and eedev