Going Threadripper, anyone else? (strategies)

VisualStim

Master of Cramming
Original poster
Bronze Supporter
Mar 6, 2017
431
211
Biggest problem I've bumped into is that you can't run SLI / Crossfire with a dual 8-pin Threadripper motherboard.

Some boards even have a PCIe power port on top of that.
 

Phuncz

Lord of the Boards
Editorial Staff
Moderator
Gold Supporter
May 9, 2015
4,442
3,982
Why wouldn't you be able to run Crossfire or SLI on ThreadRipper with dual 8-pin EPS ? That's not really an issue of ThreadRipper but of the PSU I'd think.

Personally I have no need or desire for ThreadRipper (except for that juicy socket mechanism), whereas I feel content with Ryzen's 8 cores.
 

VisualStim

Master of Cramming
Original poster
Bronze Supporter
Mar 6, 2017
431
211
Sorry, I forgot to say:

SFX PSUs only have three 8-pin connectors.

So with TR you have two for the motherboard and one left over for a GPU.

The case is a Cerberus X.

I just saw that the ASRock boards have a single 8-pin connector, but I'm not sure how well the single 8-pin boards would fare for overclocking.
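To make the connector arithmetic above concrete, here's a quick sketch; the function and the counts are illustrative (three 8-pin leads on the PSU, two EPS for the board, as discussed in this thread), not any vendor's spec:

```python
# Quick sanity check of the 8-pin connector budget described above.
# All numbers are this thread's example, not a PSU spec sheet.

def eight_pin_spare(psu_leads, board_eps, gpu_leads):
    """8-pin leads left over after the board and GPU(s) are fed.

    Negative means you are short that many connectors.
    """
    return psu_leads - board_eps - gpu_leads

# Typical SFX PSU with three 8-pin leads, dual 8-pin EPS board:
print(eight_pin_spare(3, 2, gpu_leads=1))  # 0 -> one single-8-pin GPU fits
print(eight_pin_spare(3, 2, gpu_leads=2))  # -1 -> a dual-8-pin GPU comes up short
```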
 

Kmpkt

Innovation through Miniaturization
KMPKT
Bronze Supporter
Feb 1, 2016
3,350
5,817
Yeah, it's very common. There is a really good video from JayzTwoCents showing that power delivery may suffer slightly with this approach, but you don't really have another decent option right now.


The alternative, if size isn't a big consideration, is to go with a dual-PSU setup using one of these:



http://www.add2psu.com
 

WadeAK78

Master of Cramming
Jun 7, 2016
381
429
Sorry, I forgot to say:

SFX PSUs only have three 8-pin connectors.

So with TR you have two for the motherboard and one left over for a GPU.

The case is a Cerberus X.

I just saw that the ASRock boards have a single 8-pin connector, but I'm not sure how well the single 8-pin boards would fare for overclocking.
There are definitely SFX PSUs with more 8 pins than that, such as the:

Silverstone SX650-G
Connectors
1 x 24 / 20-Pin motherboard connector (300mm)
1 x 8 / 4-Pin EPS / ATX 12V connector (400mm)
2 x 8 / 6-Pin PCIE connector (400mm / 150mm)
2 x 8 / 6-Pin PCIE connector (550mm / 150mm)
6 x SATA connector ("300mm / 220mm / 100mm" x 2)
3 x 4-Pin Peripheral connector (300mm / 200mm / 200mm)
1 x 4-Pin Floppy connector (100mm)
 

IntoxicatedPuma

Customizer of Titles
Editorial Staff
Moderator
Silver Supporter
Feb 26, 2016
939
1,181
robspc.tech
SLI or Crossfire? Vega doesn't even support it... and who knows about the next Nvidia cards.

I'd be interested in Threadripper if I were rich, but Ryzen is enough. Unfortunately it's expensive here too, so I'm waiting to see what Coffee Lake brings.
 

Phuncz

Lord of the Boards
Editorial Staff
Moderator
Gold Supporter
May 9, 2015
4,442
3,982
Indeed, maybe both Nvidia and AMD are gearing towards DX12 multi-GPU support, which would explain the declining support for SLI and Crossfire respectively. Another theory is that one or both are developing a new hardware interconnect (AMD's Infinity Fabric or Nvidia's NVLink), but @EdZ might be able to shed light on this much more accurately than me.

BTW: Quad SLI is supported on Titan cards.
 

Aibohphobia

aka James
Feb 22, 2015
4,955
4,685
I think DX12 gets way more credit than it's due, because using DX12 for a game does not magically mean it'll have amazing multi-GPU support. It'll still require the developers to spend time making sure the game takes advantage of the hardware and they've historically been bad at it even when Nvidia/AMD was doing much of the work with their drivers.
 

EdZ

Virtual Realist
Gold Supporter
May 11, 2015
1,578
2,106
@Aibohphobia has hit the nail on the head: DX12 and Vulkan move all the optimisation work that was previously done by the GPU vendor in their driver and say to the developer "here, you deal with this now". If you're a massive engine developer like Epic or Unity, then you can afford to hire, train and pay experts on each architecture (not just each vendor, because everyone has multiple different architectures) to tweak your engine at a low level in the same way the vendors previously did. If you're an indie developer, you pretty much have three choices:
- Use DX12/Vulkan, and leave performance on the table or accept an extremely long and arduous development cycle optimising for each architecture
- Use vendor-provided pre-optimised libraries and have the internet accuse you of killing puppies
- Use DX11/OpenGL as before


--------------------------


On multi-GPU; there are broadly three multi-adapter addressing methods:
- Implicit Multiadapter: this is similar to DX11/OpenGL, where the application pretends there is just one GPU and the driver has to handle splitting the workload across multiple GPUs. In practice this will end up performing worse, as DX12/Vulkan encourages low-level noodling with the dispatch process, and inevitably overoptimising for single-adapter at the expense of multi-adapter.
- Explicit multiadapter with discrete GPUs: Each GPU is exposed independently, and the application determines what jobs get dispatched where and when. Maximum flexibility, but it means doing the optimisation yourself for each architecture and re-doing that work for single-GPU, 2-GPU, 3-GPU, etc. In theory you can mix and match GPUs from different architectures or even vendors, but again, MORE WORK FOR YOU.
- Explicit multiadapter with linked GPUs: The GPUs are exposed as a 'composite' GPU with shared resources but multiple 'nodes'. This leaves the driver still dealing with some parts of the setup, but the application also needs to be aware of dispatching jobs to multiple nodes. In theory this can be a heterogeneous setup, but for now it's assumed that you will only see linked adapters with the same performance.

As far as I am aware, there aren't any games actually using linked GPUs at the moment. Most are using Implicit Multiadapter (if they even support multi-GPU at all on a low-level API), but a handful have implemented Explicit Multiadapter (Ashes of the Singularity, Rise of the Tomb Raider, Deus Ex: Mankind Divided, and Hitman 2016 are the only ones I can recall off the top of my head).
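As a toy illustration of the explicit-multiadapter idea above, where the application rather than the driver decides which GPU gets each job, here is a round-robin (AFR-style) frame scheduler; the GPU names and the scheduling policy are hypothetical placeholders, not DX12/Vulkan API calls:

```python
# Toy model of explicit multi-adapter dispatch: the application owns the
# decision of which GPU renders which frame (alternate-frame rendering).
# "gpu0"/"gpu1" are placeholder names, not real device handles.

def dispatch_afr(frames, gpus):
    """Assign each frame to a GPU round-robin; returns (frame, gpu) pairs."""
    return [(frame, gpus[i % len(gpus)]) for i, frame in enumerate(frames)]

schedule = dispatch_afr(range(4), ["gpu0", "gpu1"])
print(schedule)  # [(0, 'gpu0'), (1, 'gpu1'), (2, 'gpu0'), (3, 'gpu1')]
```

Under implicit multiadapter a loop like this lives inside the driver; under explicit multiadapter the application writes it, and has to re-tune it per GPU count and architecture, which is exactly the extra work described above.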


---------------------------


On NVLink and Infinity Fabric: we might see these as 'supplementary connectors' between GPUs, but probably not as the primary way to connect a GPU to a CPU. Even if CPU vendors could be persuaded to add NVLink or Infinity Fabric (and give up more die area), you'd then end up having to make GPUs with a new PHY interface, have motherboard vendors produce yet more variants (that need to conform to different signal-routing standards), etc.
And when it all boils down to it, these new interfaces offer greater bandwidth but do not otherwise affect the software side of multi-GPU. Neither are close to the bandwidth or latency required to do 'transparent' multi-die-single-chip linking, so you still have to deal with the same issues of multi-device dispatch that you do at the moment.
 

Nanook

King of Cable Management
May 23, 2016
797
780
I'm interested in the additional core/threads too! Mostly for rendering purposes. Multiple PCIe lanes are not as important for me, and will likely be under-utilized for my purposes.
 

Phuncz

Lord of the Boards
Editorial Staff
Moderator
Gold Supporter
May 9, 2015
4,442
3,982
Thanks @EdZ for the interesting insights!

I think DX12 gets way more credit than it's due, because using DX12 for a game does not magically mean it'll have amazing multi-GPU support. It'll still require the developers to spend time making sure the game takes advantage of the hardware and they've historically been bad at it even when Nvidia/AMD was doing much of the work with their drivers.
I was aware that DX12's multi-GPU feature is much more involved than DX11's, but the latter has had difficult SLI/Crossfire support to begin with: many games needed more than a few driver updates just to get it working properly, and almost all are still a far cry from a good performance boost. It seems to me that current-generation devs either don't have the time/budget or it's a low priority. I'd rather see good and effective multi-GPU implementations than the current level of support.
 

Tilltech

Caliper Novice
Jun 30, 2017
33
9
In the worst case scenario you could get a Define C - it's almost an mATX case, but supports ATX boards.
So no compromises there.

I think Define C and Corsair 400Q are the only exceptionally small ATX cases.
 

Phuncz

Lord of the Boards
Editorial Staff
Moderator
Gold Supporter
May 9, 2015
4,442
3,982
I wouldn't call 37 and 42L cases "the only exceptionally small ATX cases", there are easily a hundred ATX cases that fall within that range. Even more if you consider HTPC cases.
20-25L is exceptionally small for ATX.

On-topic: most ThreadRipper boards shown thus far are ATX, but the Asus Zenith is E-ATX. Most ATX cases don't support it officially, but it might fit in the Thermaltake Core G3 without the front panel fans.
 

3lfk1ng

King of Cable Management
Editorial Staff
Gold Supporter
Jun 3, 2016
865
1,649
www.reihengaming.com
He's not wrong. There are a lot of cases in that range. I did a ton of research and averaged them somewhere around 46L.

They aren't all what I would consider quality cases, nor are they new, but they do exist.

Edit: 5:29pm
Now that I am at my desk, here are a few from the list I made on August 1st:
ATX case sizes in liters:

Corsair Tempered Glass Crystal Series 460X RGB - 44.9 L
In Win 303 - 50.7 L
In Win 805 - 44.1 L
In Win 101 - 47.2 L
In Win 904 - 46.4 L
Corsair Carbide 270R - 49.1 L
Riotoro ATX - 53.6 L
Fractal Design Define S - 57.6 L
Fractal Define R5 - 55.8 L
Fractal Define C - 39.3 L
Jonsbo UMX4 - 38.0 L
Lian-Li PC-V720 - 30.6 L
Thermaltake Core G3 - 23.6 L
Phanteks Eclipse 400 - 46.0 L
NZXT 340 - 41.6 L
Define Mini C (mATX) - 33.4 L

Based on my personal criteria, I narrowed it down to these four:

In Win 805 - 44.1 L
NZXT S340 / S340 Elite - 41.6 L
Corsair 460X - 44.9 L
Fractal Define C TG - 40.6 L

Still far too big for SFFn but enough to hold someone over until the Cerberus-X drops.
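For reference, litre figures like the ones in the list above are just external width x height x depth converted from cubic millimetres; the dimensions below are round placeholder numbers, not a specific case's spec sheet:

```python
# Case volume in litres from external dimensions in millimetres.
# 1 litre = 1,000,000 mm^3, so divide the product by one million.

def case_liters(width_mm, height_mm, depth_mm):
    return width_mm * height_mm * depth_mm / 1_000_000

# A hypothetical 210 x 450 x 420 mm mid-tower:
print(case_liters(210, 450, 420))  # 39.69
```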
 