Other PCIe re-drivers, re-timers, muxes, oh my...

W4RR10R

Cable-Tie Ninja
Original poster
Jan 29, 2019
211
211
So this is an entirely hypothetical idea at this point.
Sometimes I sit and ponder "crazy" and/or "useless" things, and I recently had one that I really like but didn't know exactly how to go about implementing it.

Bear with me.

  • 3rd gen Ryzen is coming out in the (near) future and has been confirmed to support PCIe 4.0
  • PCIe 4.0 has ~2× the per-lane bandwidth of PCIe 3.0 ---> PCIe 4.0 x8 ≈ PCIe 3.0 x16 bandwidth (quick napkin math below)
  • IIRC PCIe 3.0 devices won't scale up to PCIe 4.0 speeds; a 3.0 card will work in a 4.0 slot, but it won't run any better than it would in a 3.0 slot
  • Currently the X370/X470 chipsets support PCIe bifurcation (x16 or x8/x8) out of the CPU, and some B-series boards support it too (it's a BIOS thing)
  • A PCIe 3.0 x16 GPU in a PCIe 4.0 x8 slot will perform exactly as if it was in a 3.0 x8 slot, because there are still only 8 physical lanes connected and the GPU cannot utilize the higher per lane bandwidth.
I know that some combination of one or more of the components in the title can (currently) be used to adapt a PCIe 3.0 x8 connection into a physical PCIe 2.0 x16 connection, thus allowing a PCIe 2.0 x16 card to use its maximum bandwidth.
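
To put rough numbers on both of those equivalences, here's a quick back-of-the-envelope sketch in Python (theoretical link bandwidth only, ignoring protocol overhead; the helper name is just for illustration):

```python
# Back-of-the-envelope PCIe link bandwidth (theoretical, before protocol overhead).
# Gens 1-2 use 8b/10b line coding (80% efficient); gens 3-4 use 128b/130b (~98.5%).

GT_PER_S = {1: 2.5, 2: 5.0, 3: 8.0, 4: 16.0}    # transfer rate per lane, GT/s
ENCODING = {1: 8 / 10, 2: 8 / 10, 3: 128 / 130, 4: 128 / 130}

def link_gb_s(gen: int, lanes: int) -> float:
    """Usable GB/s for a link: GT/s * coding efficiency / 8 bits per byte * lanes."""
    return GT_PER_S[gen] * ENCODING[gen] / 8 * lanes

for gen, lanes in [(2, 16), (3, 8), (3, 16), (4, 8)]:
    print(f"PCIe {gen}.0 x{lanes:<2}: {link_gb_s(gen, lanes):5.2f} GB/s")

# PCIe 2.0 x16:  8.00 GB/s   <- ~matched by PCIe 3.0 x8 (7.88 GB/s)
# PCIe 3.0 x16: 15.75 GB/s   <-  matched by PCIe 4.0 x8 (15.75 GB/s)
```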

What I would like to do is the same thing, but from PCIe 4.0 to 3.0. Just think about it: if the (IMO likely) parts of the Zen 2 rumors are true, you could have an mITX board with two PCIe 3.0 x16 GPUs connected at ~full bandwidth, plus a rumored 16c/32t processor.

You could do a ridiculous water-cooled workstation build with the rumored R9 3850X (16c/32t, 4.3/5.1 GHz) and two 2080 Tis (or Navi, if it comes out swinging) in something like JayzTwoCents' Louqe Ghost build (the GPUs would be single-slot because of the water cooling). I don't know how two 240mm rads would cope with that, though. Perf/L would be crazy.

Circuit design is not my strong suit, I'd love some input on the idea.
 

BonfireOfDreams

Average Stuffer
Mar 14, 2019
68
32
Given that the right parts are used (including the riser) and the stars align, I don't see why it wouldn't work. I can't imagine ever doing that kind of build myself, but I'm always interested in seeing what people can hack together.

Edit: Assuming PCIe gen 4 (or 5) comes to M.2, I'd like to see a mini-STX board that supports two M.2 drives so that a similar setup to the one you're describing can be done on an even smaller scale, without any bifurcation required.
 

Kmpkt

Innovation through Miniaturization
KMPKT
Feb 1, 2016
3,382
5,935
With PCIe bifurcation enabled, I find myself wondering if you could get an M.2 board made that would let you stack multiple M.2 drives vertically, since there's normally a big gap of nothing above them anyway (think of the way M.2 NVMe stacks above M.2 WiFi, just on an expander card).
 

Valantar

Shrink Ray Wielder
Jan 20, 2018
2,201
2,225
Kmpkt said:
With PCIe bifurcation enabled, I find myself wondering if you could get an M.2 board made that would let you stack multiple M.2 drives vertically, since there's normally a big gap of nothing above them anyway (think of the way M.2 NVMe stacks above M.2 WiFi, just on an expander card).
Heat might be an issue doing this (the bottom SSD would get near-zero airflow, after all, and M.2 WiFi cards produce near-zero heat), but it'd probably be doable.

More on topic, this should very likely be possible - heck, for the consumer space I see converting PCIe 4.0 into 2x the lanes of 3.0 as the most useful application. Twice the number of high-end NVMe SSDs on an ITX board, without significantly increased board/trace complexity (or possibly even lower board complexity, as you'd need half the number of lanes from the CPU per slot)? Yes please. Of course you'd need a chip of some kind to ... translate (no idea of the right terminology here) the signals, but given the prevalence of these kinds of chips going from 3.0 to 2.0 (including, but not limited to, Intel's previous and AMD's current chipsets), this shouldn't be much of an issue, even if it would of course increase board costs somewhat.

Given sufficient bifurcation (or, uh, quinquefurcation?) support, an x16 riser card splitting PCIe 4.0 x16 into dual 3.0 x16 could then house a GPU plus one of those quad-NVMe adapters. Or even dual x8 GPUs plus four SSDs. *drool*
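
For fun, here's the same napkin math applied to that riser scenario (again theoretical link bandwidth only, and the link_gb_s helper is just for illustration; a real gen-translating switch would eat a little of this in overhead):

```python
# Sanity check: a PCIe 4.0 x16 uplink vs. dual PCIe 3.0 x16's worth of devices.

def link_gb_s(gen: int, lanes: int) -> float:
    eff = 8 / 10 if gen <= 2 else 128 / 130       # line-coding efficiency
    return {2: 5.0, 3: 8.0, 4: 16.0}[gen] * eff / 8 * lanes

uplink = link_gb_s(4, 16)                               # PCIe 4.0 x16 from the CPU
gpu_and_ssds = link_gb_s(3, 16) + 4 * link_gb_s(3, 4)   # x16 GPU + quad-NVMe card

print(f"uplink     : {uplink:.2f} GB/s")        # 31.51 GB/s
print(f"downstream : {gpu_and_ssds:.2f} GB/s")  # 31.51 GB/s
# The budget works out exactly: 32 gen-3 lanes of devices behind 16 gen-4 lanes,
# and the uplink only saturates if every device bursts at full speed at once.
```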