Part Component (and OS) recommendations for DIY NAS?

Valantar

Shrink Ray Wielder
Original poster
Jan 20, 2018
2,201
2,225
As is my habit, this post turned into a bit of a novella, so I'll put the TL;DR version first: my current NAS build seems to be dying, and I need a relatively cheap replacement motherboard + CPU + RAM, preferably on an AMD platform. Needs: stability, 4 SATA ports, decent performance, longevity, Windows file sharing, and a PCIe slot for an HBA AIC. Wants: ECC support, nGbE networking, decent CPU performance. Also looking for OS recommendations.


For the more patient among you:
I've been getting intermittent resets from my trusty old NAS PC recently, with the event log ominously reporting a fatal hardware error from the WHEA-logger service, but no details about which hardware has failed or how. This makes it a bit hard to troubleshoot, obviously, though I'm planning to test different RAM at the very least (I have some lying around). I was planning to upgrade this PC soon anyway, so I guess I might as well start figuring things out. Having your backup PC crash regularly is hardly an ideal situation.

The PC is currently running an AMD A8-7600, an ASRock FM2A88-ITX+ motherboard, 16GB of DDR3 (1600 IIRC, some no-name AliExpress brand), no GPU, three HDDs (2x4TB WD Red mirrored for backups, plus a 6TB Seagate for media storage) and a SATA boot SSD. It's powered by a Silverstone SX500-LG, and it all lives in a Fractal Node 304. There's room for another three HDDs. I want to keep the PSU, SSD (possibly for caching?) and HDDs.

The future use of the PC is pretty simple: network storage and backups, running headless, stuffed in a closet, with minimal interaction and controlled over the network as much as possible. Storage duties include backups to my cloud storage service, which has a command line tool compatible with Linux, BSD, and so on. Useful folders from the network share are mapped as network drives on the various (Windows) PCs around the house, so that obviously still needs to be possible. Some CPU power for transcoding would probably be useful down the line, if for no other reason than space savings (converting all those H.264 videos to H.265 or similar), though that's not a big priority. Still, the A8 has been struggling noticeably lately even just logging into Windows and running various background tasks. I don't want a (d)GPU - it won't have any use, and the PCIe slot has other potential uses. I was planning to move my Ryzen 5 1600X + Biostar X370GTN to this PC when I upgrade my main PC, but given these errors I might need to do that earlier than planned. ECC memory support would definitely be nice to have given the use case, likely with some used server DDR4 off eBay.

The uses for the PCIe slot are worth mentioning: I'll probably want to add an HBA card at some point to go beyond the 4 SATA ports of most motherboards these days, and I want some form of nGbE networking, though 2.5GbE is probably more than enough for our needs + budget (10GbE switches aren't likely to be available to mortals in the next few years).

I saw a good deal on the Asus B550-I Strix the other day, and for a second considered getting that for this build (even if it's utter overkill for this use), mainly because of the built-in 2.5GbE (despite it being the faulty Intel type - I'd just need to be careful about which switch I get). But then I realized it's not compatible with 3000-series APUs, meaning I couldn't use an Athlon 3000G like I was thinking, and instead would need something like a Ryzen 3 3100 at twice the price. There's also the question of that NIC's compatibility with non-Windows OSes, though given that it's an Intel NIC I'd expect it to at least work. Still, that doesn't seem like the best solution, but anything below B550 rules out integrated nGbE, forcing me either to get a USB NIC (not really suited for long-term use IMO) or to use a bifurcation riser to fit both a NIC and an HBA (which definitely wouldn't be cheap, and would require a motherboard with bifurcation support). Current motherboards having just four SATA ports stresses the importance of the HBA, as the current drive layout already uses four ports, and we'll be needing more capacity long before these drives need replacing. So I'm in a bit of a bind. What would you recommend? I don't have a fixed budget for this, but I'd like to keep it as cheap as possible.


I'm also looking for OS recommendations - so far this PC has been running Windows 10, but that's far from ideal for this use case. I'm not interested in overly complex systems, VMs, etc., but I don't want a completely closed-off system either, and as I said I need to be able to run my cloud provider's command line tool.
 

elvendawn

Average Stuffer
Nov 12, 2020
60
27
Hi Valantar,

I have a similar situation. There really isn't a best-of-all-worlds unicorn that I've found so far that meets my budget expectations. I've been considering something like the Supermicro X10SDV-4C-TLN2F SoC motherboard. It has a few more SATA ports and dual 10G NICs, which could help you avoid an HBA if 6 SATA + 1 M.2 is sufficient for you. It also has IPMI for headless management if you're having issues remotely connecting to it. You could also consider something like a GTX 1650 Super for Turing NVENC encoding support, which will significantly improve your transcoding times; however, it might contend for space with the last disk tray in the Node 304 (I can't really tell, but I don't think that disk tray and a single-fan 1650 Super will fit at the same time - I currently use a Node 804, so I have a bit more room). Budget would definitely be a concern for these options though.
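To give an idea of the transcoding side: with NVENC, the kind of H.264-to-H.265 conversion mentioned above is roughly a one-liner (filenames are obviously just placeholders):

    # re-encode the video track to H.265 on the GPU, copy the audio untouched
    ffmpeg -i input.mkv -c:v hevc_nvenc -preset slow -c:a copy output.mkv

Quality per bitrate won't quite match a slow software x265 encode, but it's many times faster.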

As far as OS, if you are comfortable with Linux, Ubuntu Server is a decent option - lots of support in the user communities if you get stuck or run into issues, and you have a host of filesystem and disk management options. Handbrake works great, but Samba has a bit more of a learning curve for some advanced features than W10, and getting all your Windows ACLs set up right is a bit more of a chore.
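For reference, the share itself is only a handful of lines in smb.conf (share name, path and group here are just made-up examples) - it's the ACL-related options on top of that which take some reading:

    [backups]
        path = /tank/backups
        valid users = @family
        read only = no
        # these two are roughly what's needed for proper Windows ACL handling
        vfs objects = acl_xattr
        map acl inherit = yes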

FreeNAS (based on FreeBSD) is another good option, with Docker support for pretty much anything you would really need; it definitely simplifies the web management and NAS feature set too, and it's plenty stable.
 

Valantar

Shrink Ray Wielder
Original poster
Jan 20, 2018
2,201
2,225
Thanks for the response, but I think something like that is way overkill for my use - I would love IPMI support, but it's not worth paying that kind of premium for something I'd probably need 1-2 times a year, and the other features of boards like that are ... well, better suited for other uses. Especially since the 10GbE ports sadly don't support 2.5GbE or 5GbE (all too common on anything but the newest 10GbE controllers), that board would essentially lock me into 1GbE until I could afford to spend >$500 on a switch, which ... well, isn't happening. I've spent some time drooling over some of these server ITX boards, but they're mostly way out of my budget range. And to be honest, I mostly don't need what they're offering, given that $150 B550 boards now offer ECC support and 2.5GbE.

As for Ubuntu Server, again I'd likely prefer something made more specifically for my use case. There's no doubt it's configurable into a really good setup given enough time and effort, but that'd quite simply be too much work given my (lack of) familiarity with Linux overall. I've considered FreeNAS, Unraid, Amahi, OpenMediaVault, Xpenology and a bunch of others - I'm really mostly looking for input from anyone with experience using these.
 

Phuncz

Lord of the Boards
SFFn Staff
May 9, 2015
5,924
4,949
Would 10Gb SFP+ be sufficient? You can find plenty of 24-48 port switches with 2x 10Gb SFP+ ports second hand for around 100€/$, as well as a few small 10Gb SFP+-only switches like the Mikrotik CRS305. 10Gb SFP+ is common on server hardware and used parts (cards and switches) are cheap, especially with just a few ports and using Direct Attach Cables (DAC). RJ45 10GbE is expensive and will likely stay that way for a while.

I can also recommend FreeNAS (now named TrueNAS), as well as CentOS and ESXi to get you started. All of these allow virtualisation in some form or another to use resources efficiently. I use ESXi as the hypervisor (the base OS) with virtual machines of a NAS OS, Pi-hole and CentOS on top of that, on an 8-core Ryzen with 32GB RAM and SSD and HDD storage tiers. Link to my build.

If you need to buy all new parts and are interested in Ryzen 3000 or 5000 series CPUs, consider the ASRock Rack X570D4I-2T; it seems to fit most of your demands. It has RJ45 connectors for the 10GbE ports though, not SFP+ cages.
You'll want to consider a server board with IPMI or other out-of-band management, as most Ryzen CPUs don't have an integrated GPU for display output. This is also an option: the ASRock Rack EPYC3251D4I-2T.

But I would start with the motherboard, as that will be the central point of the build. Determine the CPU (cores, performance, power efficiency) and RAM requirements, as those will depend greatly on what you plan to do.
With 10G networking and SATA/SAS on the board, you can keep the PCIe slot free for other potential uses (a quad M.2 PCIe card?) or a network upgrade. Dual 40Gb QSFP+ cards are really cheap (they require cooling!) but the switches aren't. 25Gb SFP28 is also worth considering as a more future-proof option, with QSFP28 (100Gb) breaking out to 4x SFP28 (25Gb).

Some good jumping off points for motherboards:
ASRockRack mITX boards
SuperMicro mITX boards
Gigabyte Server mITX boards
 

Valantar

Shrink Ray Wielder
Original poster
Jan 20, 2018
2,201
2,225
Thanks for the info! A lot of stuff to look into there, for sure. I remember reading your build log a while back, and I'll definitely be going back to it for reference - it's pretty close to the kind of setup I want.

I've considered going SFP+, but running that kind of cabling through the apartment doesn't really seem feasible - with concrete walls everything needs to be run in plain sight, which doesn't quite work with those thick cables, and the inability to terminate cables myself is a major drawback - I don't have a feasible place to store coils of excess wiring next to the router/switch or connected PCs. Not to mention the horrendously noisy fans of most enterprise switches, of course. That Mikrotik switch looks pretty nice (and it's even passively cooled!), but if I understand the product page correctly the Ethernet jack is only for management, meaning I'd need another Ethernet switch with an SFP+ jack to actually get this connected to the internet - and now we're looking at stuffing a lot of hardware next to my fiber modem. If I lived in a house with lots of room for a server closet (and didn't have the damn fiber modem mounted in the middle of a hallway, with connected devices in all directions) I'd likely be going for an SFP+ setup, but sadly it's not feasible here. And given the usage I'm not likely to see any benefit from faster SFP+ - even 10GbE is a bit overkill to be honest, but nice to have when editing photos off the NAS.

That ASRock Rack motherboard is pretty much my dream NAS motherboard - with the possible exception of unclear nGbE support on the 10G ports (Intel's X550-AT2 product page says "nGbE only on Linux", while ASRock's product page only lists 10GbE). Still, if I were able to hook it up to a 2.5GbE switch it would save me getting a NIC; 8 SATA ports through OCuLink mean I could skip the HBA; there's room for an NVMe caching drive should I want one; it has IPMI support; and the board just seems excellent in most ways. It even supports Picasso APUs (though there's no mention of the Athlon 3000G). So yeah, it's pretty much perfect - it's just too bad that the board alone costs more than I'm likely to spend on this entire build. If I could reasonably afford it I would definitely go that direction, but it's just too expensive for now.

As for performance, as I said my needs are pretty simple - I might do some video transcoding down the line, but mostly this box will be serving files, acting as a file history location for the W10 PCs in the house + a backup target, and not much else. It might run some torrents if I find a convenient way to do that remotely, though that's not very high on the list of priorities. For now I honestly think an Athlon 3000G would work perfectly for my needs, and it would of course leave the door open for future upgrades. 16GB of RAM is the baseline (from what I understand ZFS essentially requires 1GB of RAM/TB of storage, meaning I currently need at least 14GB), though I might go for 32 just for the sake of longevity - those used server ECC UDIMMs are quite cheap after all.

What has your experience running ECC memory on a B350 board been like, by the way? Any issues?
 
  • Like
Reactions: Phuncz

vlad1966

Average Stuffer
Sep 21, 2017
58
31
I highly recommend OpenMediaVault - it's been rock-solid for me and needs few resources (I run it with 8GB RAM and a Pentium 5900), though all I use it for is storing files & playing movies off my OMV NAS over my 1GbE network to my 4K TV - no issues, works great.

And the Mikrotik switch - I used to have one & may get another. They're great little units & very affordable. I had the RJ45 port connected to my $20 unmanaged GbE switch, which was connected to my router, which was connected to my cable modem, so each PC connected to the Mikrotik switch over SFP+ still had internet access.
 
  • Like
Reactions: Valantar

Phuncz

Lord of the Boards
SFFn Staff
May 9, 2015
5,924
4,949
What has your experience running ECC memory on a B350 board been like, by the way? Any issues?
No issues, though ECC is not enabled (just supported) on consumer Ryzen. I was just able to get Samsung B-die RAM for a good price, which is important for Ryzen 1st gen because of limited RAM overclock support.
 

eedev

Cable-Tie Ninja
Apr 23, 2020
156
292
Hello!

For the OS, I know it won't fit most people's use cases so it's not really a recommendation, but here's my experience after using Ubuntu Server, FreeNAS Corral, and some others along the way...

I use Proxmox (without any kind of subscription) for virtual machines that I sometimes need - for example, my girlfriend games on a Windows virtual machine with GPU passthrough on that server (Proxmox makes this process very simple).
Sometimes I try stuff like Steam Proton (on Pop!_OS) or Lutris, Android VMs, Mac VMs, a VM in a VM for double VPN encapsulation, ...

I use ZFS on Linux for my drives. I have 2 pools: one for Nextcloud stuff (my photos, various PDFs, my audio files, ...) with 1 drive backed up by 2 other drives, and the other pool for my media.
ZFS allows you to share a dataset over NFS and/or SMB (Windows shares) with a simple command.
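For example, something along these lines (pool/dataset name is just a placeholder):

    # turn on SMB and NFS sharing for one dataset
    zfs set sharesmb=on tank/photos
    zfs set sharenfs=on tank/photos

ZFS then takes care of registering the share, assuming Samba and/or the NFS server are installed and configured to allow it.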

I also use Docker (via a docker-compose.yml) directly on the host for better performance. I use it for Nextcloud, Plex, Wallabag, FreshRSS, Lychee, Gitlab, ...
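Each service is just a few lines in the compose file, something like this (image and ports here are only an illustration, not my exact config):

    version: "3"
    services:
      freshrss:
        image: freshrss/freshrss
        ports:
          - "8080:80"                               # web UI exposed on host port 8080
        volumes:
          - ./freshrss-data:/var/www/FreshRSS/data  # keep data outside the container
        restart: unless-stopped

and a single docker-compose up -d brings everything up.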
 

Valantar

Shrink Ray Wielder
Original poster
Jan 20, 2018
2,201
2,225
Thanks for the input :)

I've heard of Proxmox, but kind of dismissed it for my use as it seems like overkill - managing a bunch of separate VMs seems clunky for what will be 99% a file server. I've considered setting up Pi-hole for the home network, so I guess that's a possible second use case, but beyond that, as I said, the system will essentially just be hosting files for the various HDD-less PCs in the house. If there are reasons to go this route that I haven't thought of, I'm all ears, of course.

I've pretty much settled on ZFS for the storage, as it seems practical and flexible, even if it's not the most intuitive system. My current thinking is two pools, one consisting of a single media storage drive, and the other with mirrored drives for backups + an SSD for caching (for photo editing, mainly). When you say ZFS on Linux, does that mean you're running some Linux VM with OpenZFS? What made you go for that setup rather than something like FreeNAS? My familiarity with Linux is ... I won't say nonexistent, but close enough to not make a difference, so I definitely want GUIs where I can get them.
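To make my plan a bit more concrete, my understanding is that it would look roughly like this on the command line (drive names are made up) - though I'd much rather click this together in a GUI:

    zpool create media /dev/ada1                    # single-drive pool for bulk media storage
    zpool create backup mirror /dev/ada2 /dev/ada3  # mirrored pool for backups
    zpool add backup cache /dev/ada4                # SSD as a read cache (L2ARC), mainly for photo editing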
 

elvendawn

Average Stuffer
Nov 12, 2020
60
27
I use OpenZFS with Ubuntu Server, and I have Docker set up for a few other things like Pi-hole; I also use Handbrake, a Deluge server, GIMP, etc. on that system. As I mentioned, Samba takes a bit of a learning curve if you want anything above and beyond a basic CIFS share, but it wasn't too bad.

If you really aren't feeling comfortable with Linux, TrueNAS (FreeNAS) uses the ZFS filesystem on FreeBSD and is all managed through a web GUI that helps you through setting everything up, and it supports Docker for things like Pi-hole, Handbrake, and many others.


Both options have a lot of information and tutorials on the web if you go looking.
 
  • Like
Reactions: Valantar

Valantar

Shrink Ray Wielder
Original poster
Jan 20, 2018
2,201
2,225
Does TrueNAS/FreeNAS have Docker support? I didn't know that. Definitely makes that sound like the way to go, if I can run a base NAS OS with the option for additional functionality down the line. Thanks for pointing that out!
 

eedev

Cable-Tie Ninja
Apr 23, 2020
156
292
I've heard of Proxmox, but kind of dismissed it for my use as it seems like overkill - managing a bunch of separate VMs seems clunky for what will be 99% a file server. I've considered setting up Pi-hole for the home network, so I guess that's a possible second use case, but beyond that, as I said, the system will essentially just be hosting files for the various HDD-less PCs in the house. If there are reasons to go this route that I haven't thought of, I'm all ears, of course.

If you don't see any use case for it, Debian/Ubuntu Server would be more than enough.

When you say ZFS on Linux, does that mean you're running some Linux VM with OpenZFS? What made you go for that setup rather than something like FreeNAS? My familiarity with Linux is ... I won't say nonexistent, but close enough to not make a difference, so I definitely want GUIs where I can get them.

My bad - I had to install OpenZFS on Ubuntu Server a couple of years ago, but ZFS comes with Proxmox out of the box ;)

I don't use GUIs often; Proxmox is the exception because it makes single-GPU passthrough so much easier.

I tried using FreeNAS a couple of years ago - it was FreeNAS Corral, and they discontinued it. I didn't like that lol.
At that time the regular FreeNAS was ugly and overcomplicated. Corral was looking good.

If you want GUIs I think FreeNAS/TrueNAS or Unraid is the way to go...
 
  • Like
Reactions: Valantar

Valantar

Shrink Ray Wielder
Original poster
Jan 20, 2018
2,201
2,225
Guess I'm bringing this thread back to life! I haven't really had time to deal with the NAS over the past few months, but it's become unstable enough to force my hand over the past few days. It's been crashing enough that I've taken it entirely offline for the moment, which is ... not all that comfortable. Guess we'll be relying solely on cloud backups for a while. Thankfully I've upgraded my main system by now, so the NAS can inherit my Ryzen 5 1600X + Biostar X370GTN. I really wish I had a better motherboard (literally any other X370 or X470 board would be better, as Biostar's BIOS is utter crap), but it is what it is, and replacing a fully functional motherboard isn't worth it. I'll be adding an HBA for storage, likely something LSI SAS2008-based, as well as a 2.5GbE NIC once I get around to wiring up the apartment and buying a switch. The NIC will be connected either through one of C_Payne's ingenious x8+x8 risers or an M.2 riser. The HBA is getting ordered today, probably alongside 2x16GB of used server ECC UDIMMs.

Software-wise I've landed on TrueNAS, mostly because it seems like the easiest to set up and use. I've got a test setup going on the motherboard and RAM, and while I can barely make sense of it for now, I think I've figured out enough to make it work. The trickiest part will be getting data onto it - I'm reusing my old drives, but thankfully I had just enough spare storage to make a copy of everything. I don't quite trust one of the spare drives though, so I'm very happy I have online backups as well. The drives will need to be wiped for TrueNAS to use them, after all. It seems like the only workable solution for getting data off the spare drives and onto the array once it's set up is to connect the drives to another PC and transfer it all across the network, which is mind-bogglingly dumb, but I guess I have no choice.
 
  • Like
Reactions: lozza_c and Phuncz

Valantar

Shrink Ray Wielder
Original poster
Jan 20, 2018
2,201
2,225
Ordered an HBA yesterday - found a pretty good eBay store called The Art of Server with a great selection + helpful explainer videos on the differences between the various models (which I definitely wouldn't have figured out by myself - there are so damn many!). Reportedly excellent support too, and I like that they also repair and sell damaged parts - actively working to reduce e-waste is a great thing in my book.

Anyhow, I ordered an IBM M1115 (LSI 9210-8i) HBA plus two SFF-8087 to 4x SATA cables. They even had pretty short cables, and I absolutely love how thin they are compared to most SATA cables. Should be a marked improvement in cable management. The case only supports six HDDs, but at least this way I can plan to keep the HBA if/when I get/build a better NAS case (I'm really seeing the appeal in hot-swappable drives!). Thanks to the store, I also know not to run my boot/cache SSDs off the HBA, as apparently it doesn't support TRIM. The motherboard's SATA ports should handle those fine though.

Also ordered an ADT-Link R42MR M.2 to PCIe x4 riser for when I add 2.5GbE to the PC. Slightly worried about the fitment for this, but it should work. If not, it's just $22+VAT, so not the end of the world. Much cheaper than C_Payne's (admittedly far superior) x8+x8 riser at €80+shipping.
 

findingmyfeet

Trash Compacter
Bronze Supporter
Feb 23, 2021
48
16
@Valantar I looked at building a FreeNAS a couple of years back. One thing I do remember was that they were very restrictive on what was considered a supported build or not. It was a bit of a "you're on your own if it's not like this" situation.

They were very much Xeon, very specific Supermicro or ASRock server boards, Samsung or a few other flash drives, and WD Red NAS drives as preferred hardware. It also had to be ECC RAM, or there was no point building, since you'd be introducing instability and uncertainty into the very data you're using FreeNAS and ZFS pools to verify and write back.

Also, from memory, you needed to pick your HDD size quite carefully: if you went up in size halfway through building a ZFS pool, the drive capacities need to match, so you could only use part of the new HDD's capacity. E.g. if you start with 4TB WD Red NAS drives and then buy a larger HDD, the ZFS pool will still only let you use 4TB of it unless you sequentially replace/resize all the drives in the pool.

Last thing was that they recommended the Node 804 as the smallest recommended enclosure. From what I recall, even slight increases in HDD temperature greatly degrade its service life.

This was a while back so things might have changed a bit since then. Hope that helps as you've certainly helped me out loads with my current build.
 

Valantar

Shrink Ray Wielder
Original poster
Jan 20, 2018
2,201
2,225
Thanks for the input :) I finally migrated the system over to TrueNAS about a month ago now, and so far it's been a decently smooth ride. Definitely a bit of a learning curve, but ... well, that's to be expected. The basics - installation, setting up pools and shares, etc. - are incredibly simple and easy to do once you understand some key terms. There are some basics that I wish were configured out of the box (SMART tests and pool scrubs, for example), but it's mostly been pretty smooth.
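For reference, the manual one-off versions of those look roughly like this (pool/device names are just examples) - the point is really to get them running on a recurring schedule rather than by hand:

    zpool scrub backup            # walk the whole pool and verify/repair checksums
    smartctl -t long /dev/ada1    # kick off an extended SMART self-test on one drive
    smartctl -a /dev/ada1         # check the drive's SMART attributes and test results afterwards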

I managed to keep the old configuration limping along long enough to survive until my main PC got upgraded, so the NAS inherited my old Biostar X370GTN as originally planned, plus the 1600X, 16GB of DDR4, and so on.

After looking around a bit I ended up getting an HBA AIC for adding drives, and thanks to excellent advice from eBay store The Art of Server I went with an IBM M1115, which is an OEM rebrand of the LSI 9210-8i. 8 ports, IT (initiator target) mode - in other words, it just passes drives through to the system. Not suited for SSDs (no TRIM support), but it performs excellently with SATA or SAS HDDs and is reasonably affordable. I also get the benefit of those ultra-thin server-grade SATA cables. My boot SSD, as well as another 500GB SSD for fast NAS storage, are connected to the motherboard's SATA ports.

What iXsystems considers officially supported hardware, the accepted best practices within the DIY TrueNAS community, and what can actually work decently well are three pretty distinct things. iXsystems cares about their business customers, so they focus pretty exclusively on server hardware. The community seems to be driven largely by IT professionals (and to a large degree US-based ones), so they are particularly partial to used server hardware too, but with a wider range and more flexibility of configurations. What can work, on the other hand, is nearly anything. For example, Ryzen is still mentioned in the most commonly used hardware guides like this: "Although AMD's Ryzen architecture is very interesting, experience is limited and low-end server products are few and far between." Which of course ignores that all Ryzens support ECC (though motherboard support and ECC actually being active is another question), and is rather baffling considering that they also recommend non-Xeon CPUs (i.e. without ECC support at all) in the same guide. In general, the community seems rather conservative and slow to adopt new tech (unless it's first been tested for a decade in servers).

The main reason for this Intel-only approach seems to be IPMI support, which is definitely a huge boon, but also means using very expensive (or used and old) server hardware. They are of course correct that there isn't yet much used Ryzen server hardware out there - AMD hasn't had much market penetration at all for low-end EPYC (similar to Xeon W), and higher end EPYC servers tend to be huge and IO-focused, i.e. unsuited for home NAS use. There are parts to be had, and I would love IPMI support in my NAS, but I'm not paying >$400 for a new motherboard to get it. So I'm perfectly content with using a consumer-grade motherboard until further notice.

But the main takeaway is this: 1st gen Ryzen seems to work very well for TrueNAS. I've had zero real issues, and even the (notoriously unsupported in FreeBSD) Realtek NIC on the motherboard works fine. I have changed some power settings in BIOS to avoid some idle shutdown issues that this platform can have, but I had those issues long before the TrueNAS setup, and haven't seen any unexpected shutdowns since setting up the NAS. I've also disabled boost simply because I don't need the performance and avoiding the extra heat seems like a good idea.

As for my pools, as mentioned above for now I'm keeping my two 4TB drives. I won't (likely ever) be running any many-drive pools with advanced parity - I simply don't have those kinds of storage needs. RAIDZ1, single-drive parity, is also widely discouraged these days and viewed as risky, meaning the recommended minimum RAIDZ configuration is effectively a five-drive setup. So yeah, that's not happening. I'm running a mirrored 2-drive array for the main backup storage on the NAS, and I'm fine with that. I at first considered adding capacity by adding a second vdev to my pool, but after learning that vdevs are inextricably bound to pools that's a no-go - adding a second mirrored vdev would necessitate keeping (and periodically replacing to avoid failures) my original drives as well as the new ones for the lifetime of the system. So in order to expand my capacity I'll simply be replacing one 4TB drive at a time with a higher-capacity one, which will increase capacity once both are replaced.
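If I've understood the docs correctly, that expansion path boils down to something like this in zpool terms (device names are made up, and I'd be doing it through the GUI anyway):

    zpool set autoexpand=on backup     # let the pool grow once every drive in the vdev is bigger
    zpool replace backup ada1 ada3     # swap in the first larger drive, wait for the resilver to finish
    zpool replace backup ada2 ada4     # then the second - capacity grows once both are done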

The Node 304 is mentioned in the hardware guide I linked above, but beyond that I've been running it as a Windows HTPC/NAS for something like six years, so I see no reason not to continue using it. My drives have been fine up until now, and an OS change won't affect that :) My drive temperatures never really exceed 40°C, which is perfectly fine - the case has its intake fans blowing directly at the drives, after all. I'm also going to improve intake airflow by drilling a bunch of holes in the front cover and covering it with speaker cloth for dust filtration; I just haven't gotten around to it.

By the way, I've also bought 32GB of ECC RAM for the NAS, which arrived a few days ago. I'll be installing it as soon as I have the time, and hopefully ECC will work without issues. I've also ordered a 2.5GbE NIC, as they finally added RTL8125 support in the latest release - figuring out how to get anything above 1GbE into the system without either going SFP+ or using wildly expensive Intel server NICs has been quite the journey, and one met with quite some pushback from server-gear diehards on the TrueNAS forums. Luckily the addition of RTL8125 support should have solved that entirely, allowing for cheap and widely available nGbE support. (I have absolutely zero need for 10G, so 2.5 should be ideal.) The only thing I'm missing so far is getting my cloud backup working - it only has command line support in FreeBSD, and I've gotten it set up and kind of working, but it has some issues that prevent me from really using it, so I'm thinking I'll install an Ubuntu VM to run it through instead. That should allow me to install Pi-hole as well, which would be great.

All in all, this build has been a bit of a mixed experience, but I'd definitely recommend TrueNAS to anyone looking to build a DIY NAS. The basics are dead simple and it's incredibly easy to use once you get past an initial step in the learning curve (learning a bit of terminology and understanding pools/vdevs/shares etc.). After that, things can get as complicated or stay as simple as you want them, which is great.
 

findingmyfeet

Trash Compacter
Bronze Supporter
Feb 23, 2021
48
16
It's great to hear you've got it all up and running, and very interesting to hear about the Ryzen front. At the time, I decided I really didn't want to go with something like a Xeon / Supermicro 5-bay setup, as it was a significant uplift from what I had before with a single-disk backup solution.

It would be good to see how it's all faring some time in the future, as it is something I'll come back to at some point to re-evaluate.
 
  • Like
Reactions: Valantar

Valantar

Shrink Ray Wielder
Original poster
Jan 20, 2018
2,201
2,225
A minor update here: I've installed the 32GB of ECC RAM, which worked like a charm - registered at 2666MT/s straight away, and ECC options appeared in BIOS. Seems to work as intended. Not bad for a crappy Biostar consumer motherboard.
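For what it's worth, one sanity check I've seen suggested is looking at what the DMI tables report from the OS - I'm not treating it as proof that corrections actually happen, but it at least shows the modules and board agree on ECC (the grep pattern is just an example):

    dmidecode -t memory | grep -E "Width|Error Correction"
    # ECC UDIMMs should show "Total Width: 72 bits" against "Data Width: 64 bits",
    # and the memory array should report "Error Correction Type: Single-bit ECC"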

I also got a cheap Realtek 2.5G NIC, which I've installed but not put to use yet, as I haven't bought a matching switch. Getting it to work was problem-free after updating the system to 12.0-U4, as RTL8125 support was added in that release. Adding two tunables (if_re_load = YES and if_re_name = /boot/modules/if_re.ko, both of the "loader" type) enabled the new driver and made the NIC appear immediately.
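For anyone else going this route: those two tunables are just the equivalent of these /boot/loader.conf entries (I added them as "loader" tunables through the web UI rather than editing the file by hand):

    if_re_load="YES"                        # load the newer Realtek re(4) driver at boot
    if_re_name="/boot/modules/if_re.ko"     # point it at the bundled kernel module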

I'm now at 25 days of uptime, and the system stays below 60°C reported CPU temperature (that includes the ... what is it, 25°C offset of the 1600X?), which isn't bad for something sitting in a closet with ambient temps being really high these days, even if it's mostly idle.

Oh, right, I also replaced the rear fan with an Arctic P14 PWM CO (the version with dual ball bearings, rated for continuous operation, with a 10-year warranty), made an improved duct between it and the cooler, and drilled a bunch of holes in the front cover to aid airflow. Put some speaker cloth over it to pretty it up a bit, and it looks passable for a quick and dirty job. I wouldn't have it sitting on my desk, but it works well enough, and for sticking down very elastic speaker cloth with double-sided tape it really isn't terrible :p


 
  • Like
Reactions: eedev