Haven't built a PC since 2003, (long winded) looking for advice thread!

override

Cable Smoosher
Original poster
Dec 6, 2019
10
2
First off, hello :) I'm new here, this is my first post, and I'll definitely have some redundant/noob questions. As the title says, I haven't built a desktop since 2003, but I have been and always will be a tech-head tinkerer. I would like to avoid as much trial and error as possible to cut wasted expense during this build, and that, my friends, is why I'm coming to this forum. I've been watching as much YouTube as I can (Optimum Tech is a great channel for SFF) and researching components; believe it or not, I narrowed down the list below in only a few days of research (remember, I haven't been in the computer scene since about '04).

Key Factors:

- Reasonable portability without sacrificing performance and OC ability (i.e. no larger than an Ncase M1)
- Stay within a $2k budget
- Workstation and Gaming all-in-one (Fusion 360, Cura, Video Editing to accompany AAA titles along with a mix of Battle Royale style games)

Case:
- Sliger SM560

PSU:
- Corsair SF750 80+ Platinum, fully modular

Motherboard:
- Asus ROG Strix X570-I Gaming

CPU:
- AMD Ryzen 7 3700X

CPU Cooler:
- Noctua NH-L9a-AM4

GPU:
- Asus ROG Strix RTX 2080 Super Advanced OC

Memory:
- G.Skill Trident Z Neo 2x16 DDR4-3600 (F4-3600C16D-32GTZN)

Storage (NVMe):
- Sabrent Rocket 1TB PCIe 4.0

Monitor:
- Asus ROG PG279Q 1440p @ 144 Hz

As you can see from the above, I am not 100% set on how to approach cooling in an SFF build, and I'm not 100% set on the components either, due to cooling uncertainty or thermals-vs-performance trade-offs. Now would be a good time to explain why I haven't been able to settle on the core components for this build!

AIO vs Custom Loop vs Air Cooled:

I have done enough research on this to swing between really wanting a 120mm AIO for each of the CPU and GPU and wondering if it's even worth dealing with. There are so many on-the-fence reviews that all basically state the same thing: the pumps are noisy junk and in many cases don't offer much in terms of cooling. Then there are also plenty of positive reviews that say the cooling is good but list the pump noise as a con. So my question here is this: would, say, a 120mm Corsair H75 or H80i (one for the CPU, one for the GPU), mounted in the large top hat of the S1 with 120x25mm fans, be sufficient to cool a 3700X or 3800X and a 2080 Super, both overclocked?
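
Rough napkin math on the heat load, for what it's worth. The TDP figures, the OC multiplier, and the watts-per-120mm rule of thumb are all assumptions on my part, not measurements:

```python
# Very rough heat-load arithmetic for the dual-120mm-AIO idea.
# All numbers below are assumptions, not measurements -- actual
# overclocked power draw varies a lot between samples.

CPU_TDP_W = 65     # Ryzen 7 3700X stock TDP
GPU_TDP_W = 250    # RTX 2080 Super, approximate board power
OC_FACTOR = 1.3    # assume ~30% extra heat when overclocked
RAD_120MM_W = 150  # assumed comfortable load for a single 120mm rad

def fits_one_120mm(tdp_w):
    """True if the overclocked heat load fits one 120mm radiator."""
    return tdp_w * OC_FACTOR <= RAD_120MM_W

print(fits_one_120mm(CPU_TDP_W))  # CPU loop: comfortable
print(fits_one_120mm(GPU_TDP_W))  # GPU loop: the questionable one
```

By that rough math the CPU on a single 120mm is fine; it's the GPU side where a single 120mm looks marginal.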

In terms of a custom loop, say using the Alphacool Eisbaer with a 240mm rad to cool both the CPU and GPU: I know this would cool efficiently, but how much of a hassle are custom loops?! Every 6 months I'd need to drain, rinse, and refill, and if algae starts growing I'd need to disassemble and clean everything. Is there any foolproof method of reducing that upkeep? Drain and refill every 3 months instead of every 6, or something?

As for air cooling: I still need to research which coolers this case can take and how well a stock-cooled 2080 Super can handle an overclock. I would consider the 2080 Super Founders Edition, as I understand its cooler is designed well enough to keep temps within 75-80°C even with a light OC.

Storage:

As far as I can tell, both the Corsair and Sabrent 1TB PCIe 4.0 drives aren't much more expensive than other premium PCIe 3.0 drives (typically $50-80 more), so going this route seems a no-brainer: slightly higher cost, much higher speeds. Keep in mind that in my younger days AGP was the standard, and PCIe and SLI were just coming about when I got out; despite my cousin explaining this to me, it's still hard to wrap my head around for some reason. So my question here is this: I want to run 2TB total via 2x 1TB NVMe in RAID 0, which the motherboards I'm considering allow. The only thing I'm not certain of is whether this will limit my GPU's PCIe lanes to x8, and whether that drastically affects GPU performance. I would also want to run a 2TB SSD as a backup drive in the event I ever lost an NVMe stick along with my data, if that's even possible while using the NVMe slots.
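
For the lane question, the napkin math on link bandwidth looks like this (the per-lane figures are the usual approximate post-encoding numbers, so treat them as ballpark):

```python
# Rough PCIe link-bandwidth comparison, GB/s per lane after
# encoding overhead (approximate figures).
GBPS_PER_LANE = {"3.0": 0.985, "4.0": 1.969}

def link_bw_gb_s(gen, lanes):
    """Approximate one-direction link bandwidth in GB/s."""
    return GBPS_PER_LANE[gen] * lanes

x16_gen4 = link_bw_gb_s("4.0", 16)  # full-width PCIe 4.0 slot
x8_gen4 = link_bw_gb_s("4.0", 8)    # GPU dropped to x8
x16_gen3 = link_bw_gb_s("3.0", 16)  # last-gen full-width slot

# Even at x8, a PCIe 4.0 link has about the same bandwidth as
# PCIe 3.0 x16, which current GPUs rarely saturate anyway.
print(x16_gen4, x8_gen4, x16_gen3)
```

So even in the worst case where the GPU drops to x8, a 4.0 x8 link is roughly equal to 3.0 x16, which is part of why the real-world FPS hit tends to be small.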

* Please feel free to criticize; as long as it's constructive criticism I don't mind. I know a lot of this is basic knowledge to many of you long-time builders, so a knowledge bump in the right direction is much appreciated! Thank you in advance to any and all of you here; I hope to hang around for a good while and maybe even bring some of my own knowledge to the table some day.
 

override

Cable Smoosher
Original poster
Dec 6, 2019
10
2
Well, despite no replies I chugged along with my research and have come up with the final build list above. The plan is to start off air cooled, and when I upgrade to a better CPU (say a 3900X) I will switch over to the Ghost S1 or Nouvolo Steck and do a custom loop. I really felt confident at least one person would chime in, but I guess I came on too strong with my noob questions.

I may do a build log down the road when I put this together.
 

Wolfe1

Trash Compacter
Dec 12, 2019
41
58
New here so take everything with a grain of salt from me.

Anyway, I'm planning on building in the smaller SM550 with some similar parts (same motherboard and CPU), but I was going to go with a 92mm AIO for the CPU. I still need to do some more research on thermals for that AIO vs. air, though.


The build looks sane to me. A 750W power supply is probably overkill here if you plug your components into something like PCPartPicker; depends on your future upgrades, I suppose.
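
Quick sanity check on that, PCPartPicker-style. Every wattage here is a rough guess for this parts list, not a measured figure:

```python
# Quick PSU sizing estimate. All wattages are rough assumptions
# for this particular parts list, not measurements.
draws_w = {
    "Ryzen 7 3700X (OC)": 105,
    "RTX 2080 Super (OC)": 290,
    "Board, RAM, NVMe": 60,
    "Fans and peripherals": 30,
}

total_w = sum(draws_w.values())  # estimated peak system load
recommended_w = total_w * 1.25   # keep the PSU under ~80% at peak

print(total_w, recommended_w)
```

That lands around 485W of estimated load, so ~600W with headroom. The SF750 isn't wrong, it's just more than the estimate strictly calls for.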
 

override

Cable Smoosher
Original poster
Dec 6, 2019
10
2

Hey, thanks for the reply! I think the AIO would be sufficient as long as your GPU will still fit with it. From what I can tell, the Noctua L9a and C7 Cu both keep the 3700X plenty cool with vented sides and a negative-pressure setup. That's the downside to SFF: everything is a trade-off one way or the other. In the 560 I can't use an AIO with the GPU I want. In regard to the PSU, yeah, I want room for upgrades in the future, since I do plan to move to a 9-series CPU and do a custom loop. Another factor is that my GPU calls for a minimum 650W PSU.
 

DemLep

Caliper Novice
Nov 8, 2019
22
2
Hey there,

Sorry for seeing this so late. Here is what I can say at this time.

The build you came up with seems like it is really well put together.
For cooling, the Noctua should be efficient at cooling your CPU, so good choice there.
For storage, the drive you went with looks good. If you haven't bought the parts already, you might be able to drop down to a 500GB M.2 drive and add a 2TB SSD to fit your storage needs.
Also, unless you're editing video or other large files you shouldn't need 32GB of RAM and might not see much higher performance than with 16GB. Either way, just make sure you are using both slots.

Also, please do post the build log.
 

override

Cable Smoosher
Original poster
Dec 6, 2019
10
2

Hey, and thank you for the reply :) I have been trying to figure out my storage situation. I'd like to do RAID 0 with dual NVMe and then have a 2TB SATA drive for backups. As I understand it, you can run NVMe RAID 0 alongside a SATA data drive but you must leave the BIOS in AHCI mode, so I was thinking of going for RAID 10, as that mobo supports it. I don't know yet; I've got to research all of that more.

I'm sure I'll do a build log; I enjoy doing them. :)
 

DemLep

Caliper Novice
Nov 8, 2019
22
2
I would not run RAID on the system drive. You can, but most of the time it's not necessary. You could do RAID 0 on two 1TB drives for extra speed with basically no loss of space. RAID 10 would require 4 drives: 2 striped, and 2 that are a duplicate of the stripe.

Honestly, I would probably go with RAID 0 and buy an external drive that you can back up to on a regular basis.
 

override

Cable Smoosher
Original poster
Dec 6, 2019
10
2

So, in regard to this: I have never owned a machine with RAID set up and therefore don't understand the inner workings, and to be quite frank, RAID has always been "scary" to me. The biggest question I have contemplated, and don't know how to search for, is this: can you run 2x NVMe in RAID 0 AND also run 2x SATA SSDs on the motherboard (not as external USB drives) without sacrificing speed or causing conflicts?

An example (RAID 0): I would run 2TB total NVMe in RAID 0 as my system drive for all of my basic programs, pictures, videos, CAD, etc., and then have the two SATA SSDs: one strictly set up for a daily backup, and the other as a drive for data such as games and all the random stuff I have for RetroPie, emulation, and retro gaming in general.

An example (RAID 10): I would run 4TB total NVMe in RAID 10 with the 2x SATA SSDs as the duplicate drives. Then, for excess storage such as movies, pics, emulation, etc., I'd have an external SSD.

Correct me if I'm wrong, but I feel like with the major increase in speeds with PCIe 3.0/4.0, RAID isn't really a necessity anymore. There doesn't seem to be much you could possibly gain, as I feel you'd be limited on the motherboard side of things, especially in real-world applications. That being said, what is the point in running an NVMe drive for your system and a SATA SSD for all your data? That, to me, seems to defeat the advancement in technology and the associated speeds. UGH, all of this is confusing to me ☠
 

override

Cable Smoosher
Original poster
Dec 6, 2019
10
2
I did quite a bit of research into memory in an effort to find something better and cheaper, and the G.Skill Trident Z Neo seems to be the best bang for the buck:

A) Samsung B-die vs. Corsair's unknown ICs
B) 16-16-16-36 vs. Corsair's 16-18-18-36

The price difference is only $4, but considering the Corsair isn't verified to use B-die chips and the G.Skill has the tighter timings, it seems to be the better value.

Verified with B-Die Finder and Hardwareluxx
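
For anyone curious, converting those timings to actual nanoseconds is just the standard cycles-to-clock conversion (both kits are CL16 at 3600; it's the secondary timings, 16 vs. 18 cycles, where they differ):

```python
# Convert memory timing cycles to nanoseconds. DDR transfers twice
# per clock, so the memory clock is half the MT/s rating.
def cycles_to_ns(transfer_rate_mt_s, cycles):
    clock_mhz = transfer_rate_mt_s / 2
    return cycles / clock_mhz * 1000  # cycles / MHz -> nanoseconds

# CAS latency, identical on both kits (CL16 @ 3600 MT/s):
print(round(cycles_to_ns(3600, 16), 2))  # ~8.89 ns
# The G.Skill's tighter tRCD/tRP (16 vs. 18 cycles) saves:
print(round(cycles_to_ns(3600, 18) - cycles_to_ns(3600, 16), 2))
```

Small numbers either way, but at a $4 difference the tighter kit is the easy pick.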
 

DemLep

Caliper Novice
Nov 8, 2019
22
2
Good find on the RAM.

RAID isn't about a speed increase; it is a method of fault tolerance. Though you may be able to put NVMe drives and SATA SSDs in the same RAID, you don't want to do this. Because of how RAID works, it is advised to use drives of all the same type and size for the RAID setup. The one exception is a parity drive, which can be smaller.

RAID 0 - Striping - takes a file and splits it between the drives in the array. There is no redundancy: if one of the drives dies, the whole array is lost. In this setup you don't lose space, so if you have two 1TB drives, you get (mostly) 2TB of space.

RAID 1 - Mirroring - is a full copy of the other drive(s). If a drive goes down you have an exact copy that you can keep running from. Because of this you get half the available space.

RAID 10 vs. RAID 0+1 - The difference is how you stack the two types of RAID. In RAID 10 the drives are mirrored and then striped; in RAID 0+1 the drives are striped and then mirrored. RAID 10 tends to be a bit better.

The point of separating your system drive from your data drives is convenience and fault tolerance. One, if you need to move your data for any reason, it is decoupled from the system, making it easier to move. Two, if the system drive becomes corrupt or broken, it is easy to wipe it and load a fresh OS onto it without having to worry about your data. This is why the system drive is usually not included in the RAID; it's one less place where you need to buy extra drives. The exception is if you run a server that you want to make sure doesn't go down; then you want that extra fault tolerance on the OS as well.

I hope all of that helps. If you have more questions or need me to explain something different just let me know.
 

override

Cable Smoosher
Original poster
Dec 6, 2019
10
2
I've decided to just stick with NVMe for now and will explore RAID options down the road. I do appreciate your help and advice!

In other news: I just ordered the Sliger SM560, and I picked up the Sabrent 1TB NVMe a bit ago when it was on sale! I will continue picking up the puzzle pieces as I can afford them. In keeping up with the latest and greatest, I almost feel pressured to wait and see where Asus goes with their new monitor tech, where AMD goes with their next-gen CPUs and GPUs, and whether memory prices drop in the new year - especially B-die stuff, since several kits have been rolled out exclusively for AMD CPUs.
 

Choidebu

"Banned"
Aug 16, 2017
1,196
1,204
RAID isn't about a speed increase; it is a method of fault tolerance.

Not to undermine the rest of your very good and accurate reply, but RAID striping is about speed. Theoretically, striping across 2 drives doubles throughput, across 3 triples it, and so on, until you hit the controller's bandwidth bottleneck. In reality it never scales quite that well; striping across two HDDs would probably net you about a 70% increase in sequential read throughput.
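
As napkin math, with an efficiency factor standing in for that real-world loss (the 150 MB/s single-drive figure is an assumed typical desktop HDD, and 0.7 echoes the ~70% gain above):

```python
# Ideal vs. realistic striped sequential-read scaling.
# efficiency=1.0 is the theoretical case; ~0.7 is closer to what
# two striped HDDs actually deliver (rough assumption, not a bench).
def striped_read_mb_s(single_mb_s, n_drives, efficiency=1.0):
    return single_mb_s * (1 + (n_drives - 1) * efficiency)

hdd_mb_s = 150  # assumed typical desktop HDD sequential read
print(striped_read_mb_s(hdd_mb_s, 2))       # ideal: double
print(striped_read_mb_s(hdd_mb_s, 2, 0.7))  # ~70% gain in practice
```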