Server + NAS in Silverstone CS01-HS case

Phuncz

Lord of the Boards
Original poster
SFFn Staff
May 9, 2015
5,937
4,951
My old NAS in a PowerMac G5 case and my NUC home server are growing old, and I also want to upgrade beyond 1Gbit networking. So, as I usually do, I start from the case I want for this project. It needs to be well below 20L in volume. It also needs serious performance, and I want to reuse as many leftover parts as I can.


Case: Silverstone CS01-HS (link)
This case looks amazing, is small and has 6 easy-remove 2,5" bays that support disks up to 15mm thick. The outer shell is one piece of 5mm anodized aluminium, with two easily removed side panels. Its design reminds me of the PowerMac G5 that it is going to replace. At 14 liters it'll easily fit somewhere. But as is usual with Silverstone's storage-oriented cases, it's not easy to install powerful components without caveats. The main ones: it only supports one low-profile PCIe card, and its CPU cooler height is limited to 68mm.
In the famous words of Barney Stinson: Challenge accepted!

CPU: AMD Ryzen 7 1800X (link)
I didn't have this one available but I wanted to upgrade to a newer Ryzen last year anyway. With the release of Ryzen 3000-series I found a 2700X for cheap, upgraded my own PC and had the 1800X available for this project. 8 cores at 3.6 GHz base clock is still reasonably nice for home server usage.

CPU Cooler: Cooler Master MasterLiquid 120 (link)
Maybe the biggest issue of this case is the CPU cooler solution. Ideally I wanted a Silverstone TD03-LITE but those are hard to come by, at least in my region. The next best solution with an integrated pump is the MasterLiquid 120. Why? Because a 25-27mm max radiator thickness is important: there is not much more than 40mm of space between the only fan mount and the motherboard. It also ticks the most important boxes, namely long lifetime and a lack of RGB.

Motherboard: ASRock AB350-GAMING ITX/ac (link)
This is another component I had "lying" around. I prefer ASRock boards for my kind of stupid projects because they support PCIe bifurcation out of the box. When I was planning this build, I wasn't sure whether I would go with bifurcation or use the M.2 to PCIe adapter. In the end, the bifurcation route would cost me precious space I didn't have. But it would have been ideal, PCIe lane-wise.

RAM: 2x 16GB Samsung "B-die" DDR4-2400 ECC memory (link)
AMD's Ryzen CPUs support ECC memory and Samsung B-die was (at that time) the best fit for a first-generation Ryzen. I found these sticks for a reasonable price at the time B-die was on its way out.

Storage Controller: LSI SAS9207-8i based HBA card (link)
Like the popular IBM M1015 HBA card, this offers up to 8 SAS/SATA drives through a PCIe 3.0 x8 link. There are many OEMs with this solution and they are cheap to find. Considering I'm only going to be running HDDs from it, being limited to 4 PCIe lanes shouldn't be a problem. It requires two "Mini-SAS (SFF-8087) to 4x SATA" cables.
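As a sanity check, some back-of-the-envelope arithmetic (using my own assumed figure of ~140MB/s sequential per 2,5" HDD) shows how much headroom an x4 link leaves:

```shell
# PCIe 3.0 carries ~985 MB/s per lane after 128b/130b encoding overhead.
lanes=4
per_lane_mb=985
link_mb=$((lanes * per_lane_mb))    # ~3940 MB/s for an x4 link
# Assume ~140 MB/s sequential per 2,5" 5400rpm HDD (a generous estimate).
disks=8
per_disk_mb=140
demand_mb=$((disks * per_disk_mb))  # ~1120 MB/s with all 8 drives streaming
echo "link: ${link_mb} MB/s, worst-case HDD demand: ${demand_mb} MB/s"
```

Even with all eight ports populated, spinning disks use less than a third of the x4 link.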

Storage Tier 1: 2x Samsung 2,5" SSD 830 SATA (link)
I had one of these and was able to buy a second one used, for not much more. This allows cheap but fast SATA SSD storage. It'll do for now.

Storage Tier 2: 6x Toshiba 2,5" 3TB 5400rpm SATA (link)
This was the toughest part. Recently it has become known that a lot of HDDs are SMR-based, which you don't want for storage that involves random writes. I'm going to be using ZFS, and SMR is discouraged for that use. The largest 2,5" HDDs I could readily find were 2TB drives, which later turned out to be SMR-based as well. The only 1TB+ drives that aren't SMR and also aren't enterprise 10.000rpm models (needing cooling, making lots of noise) are the Toshiba MQ03ABB300, which are hard to come by. But they were available in my region, and still are, for about 100€ a piece. Much cheaper than SSDs and not much more expensive than SMR drives.

Network Controller: Mellanox ConnectX-3 MCX354A-FCBT dual 40GbE card (link)
These are interesting beasts. They are cheap and offer two 40GbE ports through QSFP+ connectors. Direct-Attach Cables are easily found; basically, for about 100-120€ you get two cards and a DAC, giving a 4GB/s link between two devices. Switches are more difficult though. But these cards are also interesting for 10GbE SFP+, with QSFP+ to SFP+ adapters also readily available. Oh, and did I say it has two of those ports?
Even though 25/50/100GbE is becoming the norm, these cards are dirt-cheap because a lot of companies are migrating to that better upgrade path.

M.2 Adapter: ADT Link R43MR M.2 M-key to PCIe 3.0 x4 adapter (link)
I bought these before the build was underway, and luckily I did: bifurcating the PCIe x16 slot is not easy with the common x8/x8 adapters in that tight a space. Those components also heat up considerably without active cooling. In the end this is the better solution for my build, but with it I am limited to PCIe 3.0 x4.

 

Phuncz

Lord of the Boards
Original poster
SFFn Staff
May 9, 2015
5,937
4,951
I'm still struggling with the SAS card and the storage cage. I've already established that the SAS/SATA backplane is defective, as I can't get it to power the HDDs when they are attached. But all 6 drives are detected when I insert the SAS card into my PC and power the drives separately via SATA.

The strange thing is that ESXi, the underlying VM hypervisor OS on the server, does see the card, but doesn't recognize the disks. Even when I enable PCIe passthrough, the disks aren't visible in the VM which has the controller allocated. Maybe it's an issue with the 4 PCIe lanes instead of 8, but I see a few confirmations that PCIe 3.0 x4 shouldn't be a problem. So I'm going to try different cables first, going from nameless cables to Molex ones.
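For what it's worth, a couple of sanity checks from inside the FreeBSD-based XigmaNAS VM can tell whether the passed-through HBA even attaches; the SAS9207-8i should use the mps(4) driver (device names are whatever FreeBSD assigns):

```shell
# Did the LSI SAS2308-based HBA attach to its driver inside the VM?
dmesg | grep -i mps
# Are any disks enumerated behind it? Drives on the HBA show up as da*.
camcontrol devlist
```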
 
Last edited:

Phuncz

Lord of the Boards
Original poster
SFFn Staff
May 9, 2015
5,937
4,951
This week I received new Molex SFF-8087 to SATA cables (P/N 79576-3003) through eBay and these work! All six disks are now in a ZFS RAID-Z2 array, currently chugging along at 20-30MB/s over Gbit LAN through rsync, because I couldn't figure out how to copy the data with zfs send while the datasets use different compression on each end.
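For reference, OpenZFS can override properties on the receiving side with zfs recv -o, so a sketch like this (hypothetical pool and dataset names, assuming SSH between the two boxes) should let each end keep its own compression:

```shell
# Snapshot the source dataset, then send it to the new pool.
# -o compression=lz4 sets the property on the received dataset,
# regardless of what compression the sender uses.
zfs snapshot oldpool/data@migrate
zfs send oldpool/data@migrate | ssh newnas zfs recv -o compression=lz4 tank/data
```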
 

NG-Fidel

Efficiency Noob
Feb 5, 2020
6
3
I am watching this closely. I have a server/NAS build in a normal mid tower case and would love to slim it down. Curious to see how you do.
 
  • Like
Reactions: Phuncz

Phuncz

Lord of the Boards
Original poster
SFFn Staff
May 9, 2015
5,937
4,951
I was hoping to find an mITX AM4 board that has an 8-port SATA/SAS controller, but the only one is the new ASRock Rack X570 board, which doesn't support 1000 or 2000-series Ryzen CPUs. For me it wasn't an option when I started this project.

By the way, the rig is still copying data from my old NAS to the new one. This is a screenshot of two screens during the copy; for some reason performance at the moment is low, while it has been copying at 20-30MB/s (CPU limit with encryption on the old NAS) for 4 days straight.


For OCD people like me: da1-da6 should be aligned with "3TB #1 to #6", but the ports seem to be mapped randomly.
 
  • Like
Reactions: owliwar

Phuncz

Lord of the Boards
Original poster
SFFn Staff
May 9, 2015
5,937
4,951
This has been an excellent read. I then went on to read up on the topic and found this: https://www.servethehome.com/surreptitiously-swapping-smr-into-hard-drives-must-end/ . It definitely informed my decisions and I thank you for starting me on that path.
Thanks for the appreciation! ServeTheHome has been a very good resource; they also did a build in the same case that is more appropriate for small server use, as it has IPMI and supports Registered RAM. But I'm probably ahead on CPU power.

I'll soon be able to stress-test the storage as the initial sync has been completed.
 
  • Like
Reactions: Soul_Est

Phuncz

Lord of the Boards
Original poster
SFFn Staff
May 9, 2015
5,937
4,951
Now that I've switched from my old NAS to my new NAS after confirming it is working as expected, I set out to enable 10G networking through the ConnectX-3 Pro adapters in both the NAS/server and my workstation. Both use a QSFP+ to SFP+ adapter and a 10G SFP+ Direct Attach Cable.

This is the first result copying a 4GB file from the 6-disk RAID-Z2 to my desktop (NVMe SSD):





The first graph is my 1G local network, the second graph is my 10G point-to-point network.
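Since a file copy mixes disk and network performance, isolating the raw link with iperf3 is the usual cross-check (the hostname is a placeholder):

```shell
# On the NAS/server, start the listener:
iperf3 -s
# On the desktop, run a 10-second throughput test against it:
iperf3 -c nas.local -t 10
```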
 
  • Like
Reactions: Soul_Est

Phuncz

Lord of the Boards
Original poster
SFFn Staff
May 9, 2015
5,937
4,951
Today I tried getting my UPS configured in XigmaNAS. It wasn't much of a problem on my old XigmaNAS build, but apparently it is not that easy when running on ESXi.
I struggled to find out why it kept giving communication errors. Apparently ESXi can cause this for this type of communication over USB devices that are shared with the virtual machine.

I'm contemplating getting a USB PCIe card so I can pass through the entire controller, making it dedicated to the VM. This should solve the problem, but it also involves somehow getting another PCIe card onto an mITX board that already has two PCIe cards attached. The board does have an M.2 A+E-keyed PCIe WiFi card, but I can't find an A+E-keyed card with a USB controller. It would also mean I need to strip the entire case, because there's a screw holding it in place on the bottom of the board...
 

TLN

Average Stuffer
Mar 9, 2020
57
32
Are you passing through a USB device or the full USB controller? If you're passing through just the USB device, it might not work as expected. Try passing through the controller instead. You might need to disable AHCI/XHCI modes in the BIOS if you don't see your device in the VM. There are usually two controllers; you can start by passing through both of them.
TL;DR: You don't need an additional PCIe card, you can pass through the onboard USB controllers.
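For reference, on ESXi 7.0 and later I believe the toggle is also exposed on the shell, roughly like this (the PCI address below is a placeholder; on 6.x this lives in the host UI instead):

```shell
# Find the onboard USB (XHCI) controllers and their PCI addresses:
lspci | grep -i usb
# Enable passthrough for one controller by address, then reboot the
# host and add it to the VM as a PCI device:
esxcli hardware pci pcipassthru set -d 0000:02:00.0 -e true
```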
 
  • Like
Reactions: Phuncz

ermac318

King of Cable Management
Mar 10, 2019
655
510
You can also pass through a serial port - does your UPS have a DB9 serial connection? A USB->Serial->UPS connection where you pass through the COM port might work better.
 
  • Like
Reactions: Phuncz

PIRATAS

Chassis Packer
May 26, 2020
18
3
Hello there!
Is this project still ongoing?
I don't see any pictures, and if there aren't any yet, I can't wait to see everything placed in that case, as I'm thinking of using one of those SilverStone cases for a mini-ITX system ;)
 

Phuncz

Lord of the Boards
Original poster
SFFn Staff
May 9, 2015
5,937
4,951
Are you passing through a USB device or the full USB controller? If you're passing through just the USB device, it might not work as expected. Try passing through the controller instead. You might need to disable AHCI/XHCI modes in the BIOS if you don't see your device in the VM. There are usually two controllers; you can start by passing through both of them.
TL;DR: You don't need an additional PCIe card, you can pass through the onboard USB controllers.
That's the problem: I only see two controllers, of which one is used for the USB stick that holds the ESXi software; the other is attached to devices that seem risky to share. Because Ryzen's non-G series lacks a built-in GPU, I don't have a way to actually display anything, so to get into the BIOS I need to remove some components and attach a GPU. It's quite the chore and one of the only issues with this build.

You can also pass through a serial port - does your UPS have a DB9 serial connection? A USB->Serial->UPS connection where you pass through the COM port might work better.
Alas, the Eaton Ellipse ECO doesn't have that. A good idea though. The main issue is that I want "enterprise" features from a "home use" product. I should have gone for the Eaton 5P series, as it has both USB and serial, with a slot for a proprietary network card. But that costs about 3-4 times as much, and the network card costs almost as much again. Maybe when I have the rack space I'll look into finding one used.

Hello there!
Is this project still ongoing?
I don't see any pictures, and if there aren't any yet, I can't wait to see everything placed in that case, as I'm thinking of using one of those SilverStone cases for a mini-ITX system ;)
Yes for sure, but I'm not taking any (new) pictures since there isn't much to report. In the first post there is an IMGUR album with multiple photos.
 
  • Like
Reactions: PIRATAS

TLN

Average Stuffer
Mar 9, 2020
57
32
That's the problem: I only see two controllers, of which one is used for the USB stick that holds the ESXi software; the other is attached to devices that seem risky to share. Because Ryzen's non-G series lacks a built-in GPU, I don't have a way to actually display anything, so to get into the BIOS I need to remove some components and attach a GPU. It's quite the chore and one of the only issues with this build.
Can you move the "risky" devices to the same controller you use for ESXi and share the other controller?
Alternatively, you can install ESXi on its own drive and pass through one controller. You can use a SATA drive, or M.2 if you have an extra slot. If you need a small M.2 drive I have some left, just cover shipping.
Or you can create a new namespace on the existing NVMe drive and "split it" into two drives: use the first small part for the ESXi installation and the 2nd part for the datastore. That way you can re-install ESXi and preserve the datastore. I did that with my build and it works great. I wish I could see all namespaces in EFI and boot from the 2nd or 3rd namespace, but oh well. You will need a drive that supports multiple namespaces though.
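The namespace split described above can be done with nvme-cli on a drive that supports multiple namespaces; a rough sketch (the sizes and controller ID are made-up examples, and this destroys all data on the drive):

```shell
# Remove the default namespace, then create a small one for ESXi.
# Sizes are in blocks: 33554432 * 512B = 16 GiB.
nvme delete-ns /dev/nvme0 -n 1
nvme create-ns /dev/nvme0 --nsze=33554432 --ncap=33554432 --block-size=512
# Attach the new namespace to controller 0 (check nvme id-ctrl for yours):
nvme attach-ns /dev/nvme0 -n 1 -c 0
```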
 
  • Like
Reactions: Phuncz

PIRATAS

Chassis Packer
May 26, 2020
18
3
....

Yes for sure, but I'm not making any (new) pictures since there isn't much to report. In the first post there is an IMGUR album with multiple photos.
Ah hahaah!! Just realized that the pic was a link to IMGUR :D

It looks like nice work. Is the system running yet?
 

Phuncz

Lord of the Boards
Original poster
SFFn Staff
May 9, 2015
5,937
4,951
Can you move the "risky" devices to the same controller you use for ESXi and share the other controller?
It's coupled to the motherboard's SATA controller so that's quite a commitment.

Thanks for the possible solutions, but most of them involve tetrising everything out and back in to get it configured. A board with IPMI would have been ideal, but one wasn't available back when I started the build. But I'll make the UPS work somehow, no worries.

It looks like nice work. Is the system running yet?
Thanks, it has indeed been running for a few weeks; images 2, 3 and 4 are of the system running. It's running XigmaNAS for storage/NAS functionality, and CentOS with Pi-hole and Unbound as DNS. Other functionality will end up there later.
 
  • Like
Reactions: PIRATAS

Phuncz

Lord of the Boards
Original poster
SFFn Staff
May 9, 2015
5,937
4,951
I'm fighting an issue on my XigmaNAS VM that I can't seem to solve.

The disks have a fast-increasing LOAD_CYCLE_COUNT, to the tune of about 25 times per hour; I'm at almost 30.000 for each drive after about 1.5 months. That means that after about a year I'll be nearing 300.000 head load cycles, which for some drives is their rated amount before failure. Whatever I try, I can't get it to stop cycling the heads every few minutes. I've tried setting all drives to Always On, setting Power Management to disabled (255?), but also tried levels 128 and 254. I've tried reducing and increasing the SMART check interval, from every minute to every hour. I can't use the WDIDLE3 tool as that is specific to WD drives; these are Toshiba and there is no equivalent tool to be found.
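For anyone hitting the same thing: on FreeBSD-based systems smartctl can try to disable APM directly via ATA SET FEATURES; a sketch of what I'd attempt (da0 is an assumed device name, and some drives ignore or reset the setting):

```shell
# Attempt to turn APM off entirely on one drive:
smartctl -s apm,off /dev/da0
# Then watch attribute 193 to see whether the count keeps climbing:
smartctl -A /dev/da0 | grep -i load_cycle
```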
 

ermac318

King of Cable Management
Mar 10, 2019
655
510
I'm fighting an issue on my XigmaNAS VM that I can't seem to solve.

The disks have a fast-increasing LOAD_CYCLE_COUNT, to the tune of about 25 times per hour; I'm at almost 30.000 for each drive after about 1.5 months. That means that after about a year I'll be nearing 300.000 head load cycles, which for some drives is their rated amount before failure. Whatever I try, I can't get it to stop cycling the heads every few minutes. I've tried setting all drives to Always On, setting Power Management to disabled (255?), but also tried levels 128 and 254. I've tried reducing and increasing the SMART check interval, from every minute to every hour. I can't use the WDIDLE3 tool as that is specific to WD drives; these are Toshiba and there is no equivalent tool to be found.
If you are looking at the raw SMART data in decimal, you may not be getting the whole story. Most HGST/WD drives give you real numbers here. Seagate doesn't: they give you a packed value that encodes more information and is only fully readable with their tool. I don't know if Toshiba does the same thing, but you can read up on Seagate's SMART data here:

If you are testing your drives, use whatever SMART tools you have available to run Short and Long tests on them, and if those tests pass, your drives are fine. This is how to do those tests on FreeNAS, which is FreeBSD-based like XigmaNAS:
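On a FreeBSD-based system like XigmaNAS, those tests boil down to smartctl invocations along these lines (device name assumed):

```shell
# Kick off a short self-test (minutes), then a long one (hours):
smartctl -t short /dev/da0
smartctl -t long /dev/da0
# Afterwards, inspect the self-test log and overall health verdict:
smartctl -l selftest /dev/da0
smartctl -H /dev/da0
```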
 
  • Like
Reactions: Soul_Est and Phuncz

Phuncz

Lord of the Boards
Original poster
SFFn Staff
May 9, 2015
5,937
4,951
Thanks, but I don't think the issue is the way the decimal value is interpreted, since it's just a counter for the load cycles of the head, meaning one cycle is one increment. All the other values are correct in decimal, like Power_On_Hours, Temperature, etc.

This issue is common for non-enterprise drives; WD Green drives also exhibit it this severely unless it's disabled with the WDIDLE3 tool.



The problem is I can't find any source on how to check whether the above is correct for Toshiba drives. Too bad, as these are the only 3TB or larger 2,5" drives that aren't SMR. The only other option is 10.000rpm 2.4TB drives, but I'm having a hard time finding those for less than 200€ a piece, while these 3TB drives are 80€ a piece. I know why they are more expensive, but at 200€ for 2.4TB I can just as well buy SSDs. Still, I want to wait to jump to SSDs until their cost is closer to 50€/TB.
 
  • Like
Reactions: Soul_Est