
Server + NAS in Silverstone CS01-HS case

Phuncz

Lord of the Boards
Original poster
Editorial Staff
Moderator
Gold Supporter
May 9, 2015
4,521
4,065
My old NAS in a PowerMac G5 case and my NUC home server are growing old, and I also want to upgrade beyond 1Gbit networking. So as I usually do, I start from the case I want for this project. It needs to be well below 20L in volume, it needs serious performance, and I want to reuse as many leftover parts as I can.


Case: Silverstone CS01-HS (link)
This case looks amazing, is small and has 6 easy-remove 2,5" bays that support disks up to 15mm thick. The outer shell is one piece of 5mm anodized aluminium, with two easily removed side panels. Its design reminds me of the PowerMac G5 that it is going to replace. At 14 liters it'll easily fit somewhere. But as usual with Silverstone's storage-oriented cases, it's not easy to install powerful components without caveats. The main one being that it only supports one low-profile PCIe card and its CPU cooler height is limited to 68mm.
In the famous words of Barney Stinson: Challenge accepted !

CPU: AMD Ryzen 7 1800X (link)
I didn't have this one available, but I wanted to upgrade to a newer Ryzen last year anyway. With the release of the Ryzen 3000 series I found a 2700X for cheap, upgraded my own PC and freed up the 1800X for this project. 8 cores at a 3.6 GHz base clock is still reasonably nice for home server usage.

CPU Cooler: Cooler Master MasterLiquid 120 (link)
Maybe the biggest issue of this case is the CPU cooler solution. Ideally I wanted a Silverstone TD03-LITE but those are hard to come by, at least in my region. The next best solution with an integrated pump is the MasterLiquid 120. Why ? Because a 25-27mm max radiator thickness is important: there is not much more than 40mm of space between the only fan mount and the motherboard. It also ticks the most important boxes, namely a long lifetime and a lack of RGB.

Motherboard: ASRock AB350-GAMING ITX/ac (link)
This is another component I had "lying" around. I prefer ASRock boards for my kind of stupid projects because they support PCIe bifurcation out of the box. When conceptualizing this build, I wasn't sure whether I would go with bifurcation or use the M.2 to PCIe adapter. In the end, the bifurcation route would have cost me precious space I didn't have, even though it would have been ideal PCIe-lane-wise.

RAM: 2x 16GB Samsung "B-die" DDR4-2400 ECC memory (link)
AMD's Ryzen CPUs support ECC memory and Samsung B-die was (at that time) the best fit for a first-generation Ryzen. I found these sticks for a reasonable price at the time B-die was on its way out.

Storage Controller: LSI SAS9207-8i based HBA card (link)
Like the popular IBM M1015 HBA card, this connects up to 8 SAS/SATA drives through a PCIe 3.0 x8 link. There are many OEM versions of this solution and they are cheap to find. Considering I'm only going to be running HDDs from it, being limited to 4 PCIe lanes shouldn't be a problem. It requires two "Mini-SAS (SFF-8087) to 4x SATA" breakout cables.

Storage Tier 1: 2x Samsung 2,5" SSD 830 SATA (link)
I had one of these and was able to buy a second one used, for not much more. This allows cheap but fast SATA SSD storage. It'll do for now.

Storage Tier 2: 6x Toshiba 2,5" 3TB 5400rpm SATA (link)
This was the toughest part. Recently it has become known that a lot of HDDs are SMR-based, which you don't want for storage that involves random writes. I'm going to be using ZFS and SMR is discouraged for that use. The largest readily available 2,5" HDDs are 2TB drives, which later turned out to be SMR-based as well. The only 1TB+ drives that aren't SMR and also aren't enterprise 10.000rpm models (which need extra cooling and make lots of noise) are the Toshiba MQ03ABB300, which are hard to come by. But they were available in my region and still are, for about 100€ a piece. Much cheaper than SSDs and not much more expensive than SMR drives.
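
Since these six will end up in ZFS anyway, this is roughly the shape of the pool they're destined for; just a sketch with placeholder names, not the final commands (the pool name and the daX device nodes will depend on how the HBA ends up enumerating the drives):
Code:
# Rough sketch: a 6-disk RAID-Z2 pool, so any two drives can fail.
# "tank" and da1..da6 are placeholders, not the final names.
zpool create tank raidz2 da1 da2 da3 da4 da5 da6

# Sane defaults for bulk storage, then check the layout and usable size.
zfs set compression=lz4 tank
zpool status tank
zpool list tank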

Network Controller: Mellanox ConnectX-3 MCX354A-FCBT dual 40GbE card (link)
These are interesting beasts. They are cheap and offer two 40GbE ports through QSFP+ connectors. Direct-Attach Cables are easily found; basically it's about 100-120€ for two cards and a DAC, giving you a 4GB/s link between two devices. Switches are more difficult though. But these cards are also interesting for 10GbE SFP+, with QSFP+ to SFP+ adapters readily available too. Oh and did I say it has two of those ports ?
Even though 25/50/100GbE is becoming the norm, these cards are dirt-cheap because a lot of companies are migrating to that aforementioned better upgrade path.

M.2 Adapter: ADT Link R43MR M.2 M-key to PCIe 3.0 x4 adapter (link)
I bought these before the build was underway and luckily I did: bifurcating the PCIe x16 slot is not easy with the common x8/x8 adapters in that tight of a space, and those components also heat up considerably without active cooling. In the end this is the better solution for my build, but with it I am limited to PCIe 3.0 x4.

 

Phuncz

Lord of the Boards
Original poster
Editorial Staff
Moderator
Gold Supporter
May 9, 2015
4,521
4,065
I'm still struggling with the SAS card and the storage cage. I've already established that the SAS/SATA backplane is defective, as I can't get it to power the HDDs when they are attached. But all 6 drives are detected when I install the SAS card in my PC and power the drives separately via SATA.

The strange thing is that ESXi, the underlying hypervisor OS on the server, does see the card but doesn't recognize the disks. Even when I enable PCIe passthrough, the disks aren't visible in the VM that has the controller allocated. Maybe it's an issue with the 4 PCIe lanes instead of 8, but I've seen a few confirmations that PCIe 3.0 x4 shouldn't be a problem, so I'm going to try different cables first, going from nameless cables to Molex ones.
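
For reference, this is roughly how I'm checking whether the host itself sees the HBA and anything behind it, from the ESXi shell; a sketch of the obvious commands, not my exact session:
Code:
# Does the host see the LSI controller on the PCIe bus ?
lspci | grep -i lsi

# Is it registered as a storage adapter, and are any disks visible behind it ?
esxcli storage core adapter list
esxcli storage core device list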
 
Last edited:

Phuncz

Lord of the Boards
Original poster
Editorial Staff
Moderator
Gold Supporter
May 9, 2015
4,521
4,065
This week I received new Molex SFF-8087 to SATA cables (P/N 79576-3003) through eBay and these work ! All six disks are now in a ZFS RAID-Z2 array, currently chugging along at 20-30MB/sec over Gbit LAN through rsync, because I couldn't figure out how to copy the data with zfs send when the dataset compression differs on both ends.
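
For what it's worth, I think the zfs send route would have looked roughly like this; an untested sketch with placeholder pool/dataset names, and it assumes a zfs receive new enough to support -o:
Code:
# Sketch only: replicate a dataset and let the destination use its own compression.
zfs snapshot oldpool/data@migrate

# -o compression=lz4 overrides the received property, so the data is
# recompressed on the new pool instead of inheriting the old setting.
zfs send oldpool/data@migrate | \
    ssh newnas zfs receive -o compression=lz4 newpool/data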
 

NG-Fidel

Efficiency Noob
New User
Feb 5, 2020
6
3
I am watching this closely. I have a server/NAS build in a normal mid tower case and would love to slim it down. Curious to see how you do.
 
  • Like
Reactions: Phuncz

Phuncz

Lord of the Boards
Original poster
Editorial Staff
Moderator
Gold Supporter
May 9, 2015
4,521
4,065
I was hoping to find an mITX AM4 board that has an 8-port SATA/SAS controller, but the only one is the new ASRock Rack X570 board, which doesn't support 1000 or 2000-series Ryzen CPUs. For me it wasn't an option when I started this project.

By the way, the rig is still copying data from my old NAS to the new one. This is a screenshot of two screens while copying, but for some reason performance at the moment is low, while it had been copying at 20-30MB/s (CPU limit with encryption on the old NAS) for 4 days straight.


For OCD people like me: da1-da6 should be aligned to "3TB #1 to #6", but the ports seem to be mapped randomly.
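
If anyone wants to untangle the same mapping: on a FreeBSD-based system like XigmaNAS you can match each daX node to a drive's serial number and from there to the bay labels. A rough sketch, not my exact session:
Code:
# List the CAM devices first to see which daX nodes exist.
camcontrol devlist

# Then read the serial number off each disk and match it to the bay stickers.
for d in da1 da2 da3 da4 da5 da6; do
  echo "== /dev/$d =="
  smartctl -i /dev/$d | grep -i serial
done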
 

Phuncz

Lord of the Boards
Original poster
Editorial Staff
Moderator
Gold Supporter
May 9, 2015
4,521
4,065
This has been an excellent read. I then went on to read up on the subject and found this: https://www.servethehome.com/surreptitiously-swapping-smr-into-hard-drives-must-end/ . It definitely informed my decisions and I thank you for starting me on that path.
Thanks for the appreciation ! ServeTheHome has been a very good resource; they also did a build in the same case that is more appropriate for small server use, as it has IPMI and supports Registered RAM, but I'm probably ahead on CPU power.

I'll soon be able to stress-test the storage as the initial sync has been completed.
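
For the stress test I'm thinking of something along these lines with fio; just a sketch for now, with the mount point and sizes as placeholders:
Code:
# Sequential throughput across the RAID-Z2 (mount point is a placeholder).
fio --name=seq-write --directory=/mnt/tank --rw=write \
    --bs=1M --size=8g --ioengine=psync --numjobs=1 --group_reporting

# Random 4k mixed read/write, the kind of workload SMR drives choke on.
fio --name=rand-rw --directory=/mnt/tank --rw=randrw \
    --bs=4k --size=2g --runtime=60 --time_based --ioengine=psync --group_reporting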
 
  • Like
Reactions: Soul_Est

Phuncz

Lord of the Boards
Original poster
Editorial Staff
Moderator
Gold Supporter
May 9, 2015
4,521
4,065
Now that I've switched from my old NAS to my new NAS after confirming it is working as expected, I set out to enable 10G networking through the ConnectX-3 Pro adapters in both the NAS/server and my workstation. Both use a QSFP+ to SFP+ adapter and a 10G SFP+ Direct Attach Cable.

This is the first result copying a 4GB file from the 6 disk RAID-Z2 to my desktop (NVMe SSD):





The first graph is my 1G local network; the second graph is my 10G point-to-point network.
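
To rule the disks out of the equation, an iperf3 run between the two machines shows what the link itself can do; roughly like this, with a placeholder address:
Code:
# On the NAS/server end:
iperf3 -s

# On the workstation, over the 10G point-to-point link (placeholder IP),
# with 4 parallel streams to saturate it:
iperf3 -c 10.0.0.1 -P 4 -t 30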
 
  • Like
Reactions: Soul_Est

Phuncz

Lord of the Boards
Original poster
Editorial Staff
Moderator
Gold Supporter
May 9, 2015
4,521
4,065
Today I tried getting my UPS configured in XigmaNAS. It wasn't much of a problem on my old XigmaNAS build, but apparently it's not that easy when running on ESXi.
I was struggling to find the cause of the communication errors it kept giving. Apparently ESXi can cause this kind of problem with USB devices that are shared to a virtual machine.

I'm contemplating the option of getting a USB PCIe card so I can pass through the entire controller, making it dedicated to the VM. This should solve the problem, but it also involves somehow getting a third PCIe card onto an mITX board that already has two PCIe cards attached. It so happens the board also has an M.2 key A+E PCIe WiFi card, but I can't find a card with an A+E key and a USB controller. It would also mean I'd need to strip the entire case, because there's a screw holding it in place on the bottom of the board...
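
For context, the UPS support underneath XigmaNAS is NUT, and a bare-bones setup for a USB UPS looks roughly like this; the UPS name, driver and paths are assumptions from a generic NUT install, not XigmaNAS' own config:
Code:
# Sketch of a minimal NUT setup for a USB UPS.
# ups.conf (on FreeBSD typically /usr/local/etc/nut/ups.conf):
#
#   [myups]
#       driver = usbhid-ups
#       port = auto
#
# Start the driver and poll it; the communication errors show up at this step.
upsdrvctl start
upsc myups@localhost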
 

TLN

Caliper Novice
Mar 9, 2020
30
11
Are you passing through a USB device or the full USB controller? If you're passing through just a USB device it might not work as expected. Try using the controller instead. You might need to disable AHCI/XHCI modes in the BIOS if you don't see your device in the VM. There are usually two controllers; you can start by passing through both of them.
TL;DR: You don't need an additional PCIe card, you can pass through the onboard USB controllers.
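
Roughly, from the ESXi shell you can find the onboard controllers like this, then toggle passthrough for them under Host > Manage > Hardware > PCI Devices; device names will differ per board:
Code:
# Pick the USB host controllers (EHCI/XHCI) out of the PCI device list.
lspci | grep -i usb

# Full details (PCI addresses, vendor/device IDs) if needed:
esxcli hardware pci list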
 

ermac318

Master of Cramming
Mar 10, 2019
516
414
You can also pass through a serial port - does your UPS have a DB9 serial connection? A USB->Serial->UPS connection where you pass through the COM port might work better.