Hello! I've been lurking for a while, and created an account to join the fun. Long post ahead in the interest of sharing information.
I had an Asustor 3204T which just died. I'm sending it off to them for RMA; they might get unhappy that all the 'warranty void' stickers are removed, but I'm done with their product anyway. My requirements are pretty low: file service is via sshfs on the clients, plus the usual suspects running in Docker. Additionally, a kodi-standalone instance for viewing pleasure.
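In case it's useful context, the clients mount the share from /etc/fstab so it survives reboots; the hostname and paths below are placeholders, not my actual setup:

```
# /etc/fstab on a client -- host and paths are placeholders
user@nas:/srv/media  /mnt/media  fuse.sshfs  defaults,_netdev,reconnect,ServerAliveInterval=15  0  0
```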
My goal is to move to 2.5" HDDs. Currently in the 2.5" form factor I have five assorted-size SSDs, two brand-new 5TB Seagate HDDs, and about four other 2.5" HDDs of various sizes (0.5TB to 1TB). Plus a boatload of 3.5" drives: 1TB, 2TB, and a single 8TB. Right now I need to store about 2TB of data, pooled with mergerfs and using snapraid for parity.
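To give an idea of the layout I'm carrying forward, here's a minimal sketch of the pooling setup; device names and mount points are placeholders, not my real config:

```
# /etc/snapraid.conf -- placeholder paths
parity /mnt/disk4/snapraid.parity
content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content
data d1 /mnt/disk1
data d2 /mnt/disk2
data d3 /mnt/disk3

# /etc/fstab entry pooling the data disks with mergerfs
/mnt/disk* /mnt/pool fuse.mergerfs cache.files=off,category.create=mfs,dropcacheonclose=true 0 0
```

The parity disk stays outside the pool, and a nightly `snapraid sync` keeps parity current.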
I'm torn between my options, and have always preferred SFF over big footprints; the Asustor 3204T was about as big as I could tolerate. I have a pretty good knowledge of DC power and its implementation. What I'm not familiar with is how to get an SFF board (mini-STX / NUC / SBC flavor) to handle eight drives. I have a RockPro64 (the one with the PCIe x4 slot) coming in the mail. The initial plan is a SAS controller with an SFF-8087 or SFF-8643 breakout to start the array with 4 drives (3 data + 1 parity), then move to a dual-port SAS controller when I want to go to 8 drives. I'm not convinced the RockPro will pan out, though.
I've also been eyeballing NUC and DeskMini systems. I've seen a lot of information on here about M.2-to-PCIe x4 adapters, but have also read plenty of horror stories about the quality and reliability of those components. One thing I haven't been able to figure out: if I use one of those adapters for a SAS controller, do I still need to provide supplemental power? The other factor is the price of those components. Pros and cons of each system:
NUC: Thunderbolt 3, go straight into an Akitio Thunder3 Quad Mini. Spend $$ and not have to do much work. Unsure whether the drives can be commanded to spin down over the Thunderbolt interface. They can't be powered over Thunderbolt, which means I'll want to tap into my DC power supply. Also unsure whether the drives can be commanded to stagger their spin-up at boot. No expandability on the NUC; the chip is soldered to the board. Possibly an M.2-to-PCIe adapter for a SAS controller to work its magic.
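For reference, on a plain SATA/SAS path I'd test spindown with hdparm along these lines; whether these commands pass through a Thunderbolt enclosure is exactly what I don't know. The /dev/sdX names are placeholders:

```shell
#!/bin/sh
# Sketch: set an idle spindown timeout, force standby, then poll the
# power state. Device names are placeholders -- adjust for your system.
for dev in /dev/sd[b-e]; do
    [ -b "$dev" ] || continue       # skip names that don't exist
    hdparm -S 120 "$dev"            # spin down after 10 min idle (120 * 5 s)
    hdparm -y "$dev"                # force standby now
    hdparm -C "$dev"                # report: active/idle or standby
done
```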
DeskMini (STX board): ability to upgrade. Is that a huge bonus? Not really, since the weaker Celeron in my Asustor was doing everything well enough; a light i3/i5 or Ryzen will hang in there for at least 3-5 years. Still stuck with M.2-to-PCIe instead of a true PCIe x4 slot onboard. Has anyone seen an STX board with PCIe x4? I haven't used an AMD chip since 2002, so I'm unsure about things like iGPU passthrough and Linux compatibility.
Thin Mini-ITX: pros are a real PCIe slot, a DC input so I can use my DC power supply, and 260-pin SODIMM slots (I have 4x16GB sitting around). Theoretically I could mount it inside my cabinet without a case.
Overall, I'd appreciate any advice or opinions on the points above. My biggest concern with committing to M.2-to-PCIe adapters is my impression that they're flaky and could damage hardware.
Thanks!