ADT x16 Risers — can anyone confirm whether they work at PCIe 4.0?

dc443

Average Stuffer
Original poster
Jun 4, 2020
64
17
I’m gearing up for an ITX sandwich-case build targeting a 16-core Zen 3 CPU and either Navi 21 or Ampere. I have completed the design of the build, and I think I will need to source the riser for this case (Sunmilo T03 Pro).

Now, all of these ADT risers state “PCIe 3.0 x16” on the seller pages, which is reasonable, but I’m surprised they still say that, because for a while now we have had PCIe 4.0 capability in ITX systems via X570 and Navi 10. Since the riser is little more than a ribbon cable, and although I haven’t seen this explicitly confirmed, I’d be very surprised if they somehow failed to work at 4.0 transfer rates.
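For context, the bandwidth at stake is easy to estimate: PCIe 4.0 doubles the per-lane transfer rate over 3.0 (8 GT/s → 16 GT/s), with both generations using 128b/130b encoding. A back-of-envelope sketch, ignoring TLP/DLLP protocol overhead:

```python
# Rough theoretical throughput per PCIe generation for an x16 link.
# Assumes Gen3 = 8 GT/s and Gen4 = 16 GT/s per lane, 128b/130b encoding;
# real-world usable bandwidth is a bit lower due to protocol overhead.
def pcie_x16_gbps(gt_per_s: float, lanes: int = 16) -> float:
    """Theoretical GB/s for a link, before packet/protocol overhead."""
    return gt_per_s * (128 / 130) * lanes / 8  # GT/s -> GB/s per lane, summed

print(f"PCIe 3.0 x16: ~{pcie_x16_gbps(8):.1f} GB/s")   # ~15.8 GB/s
print(f"PCIe 4.0 x16: ~{pcie_x16_gbps(16):.1f} GB/s")  # ~31.5 GB/s
```

So the riser only has to pass 4.0 signaling cleanly; the doubling itself comes entirely from the faster per-lane rate, not from extra lanes.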

So... should I just go ahead and order one now or should I wait?
 

chx

Master of Cramming
May 18, 2016
547
281
PCIe 4.0 is known for insanely tight signal timings, so I'd be surprised if a riser not designed for it somehow worked. There are PCIe 4.0 risers, but they are not cheap. Nor, as Till mentioned, is it necessary.
 

REVOCCASES

Shrink Ray Wielder
REVOCCASES
Silver Supporter
Apr 2, 2020
2,057
3,331
www.revoccases.com
chx said:
PCIe 4.0 is known for insanely tight signal timings, so I'd be surprised if a riser not designed for it somehow worked. There are PCIe 4.0 risers, but they are not cheap. Nor, as Till mentioned, is it necessary.

Agreed. I think PCIe 4.0 will become really interesting in two or three years, but for now you should be fine with PCIe 3.0.

It was similar when PCIe 3.0 first came out.
 

dc443

Average Stuffer
Original poster
Jun 4, 2020
64
17
In the long term, I am definitely not OK with halving the bandwidth, because I will be experimenting with GPGPU/ML workloads on this build. Bandwidth may not be a big factor for gaming, and although the machine will be suitable for gaming, realistically I spend maybe 10% of my time actually gaming. For algorithms in general, memory bandwidth is much more often the limiting factor; it comes down to things like the arithmetic intensity of the bottlenecking algorithm.
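The arithmetic-intensity point can be made concrete with a tiny roofline-model sketch. The peak compute and bandwidth figures below are illustrative placeholders, not any particular GPU's specs:

```python
# Roofline model: a kernel's attainable throughput is capped either by
# peak compute or by (memory bandwidth x arithmetic intensity),
# whichever is lower. Peaks below are illustrative, not real hardware.
def attainable_gflops(flop_per_byte: float,
                      peak_gflops: float,
                      peak_bw_gbps: float) -> float:
    """min(compute roof, bandwidth roof scaled by intensity)."""
    return min(peak_gflops, peak_bw_gbps * flop_per_byte)

PEAK_GFLOPS = 20_000.0  # hypothetical compute peak
PEAK_BW = 900.0         # hypothetical memory bandwidth, GB/s

# SAXPY-like kernel: 2 FLOPs per 12 bytes moved -> intensity ~0.17,
# so it runs far below the compute peak: bandwidth-bound.
print(attainable_gflops(2 / 12, PEAK_GFLOPS, PEAK_BW))  # 150.0
```

Any kernel whose intensity sits left of the roofline's knee is paced entirely by how fast bytes move, which is why bus and memory bandwidth dominate for unoptimized general-purpose code.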

This build is a return to SFF as my daily driver, and this time it must meet a stricter set of requirements than any build I have done before:

- Near-silent operation under 100% load on both CPU and GPU. Well-thought-out SFF sandwich layouts make this easier by placing heat-dissipating components where the fresh air is. I am enhancing the case design with 120mm case fans mounted at variable depth to reach what I intend to be a deshrouded aftermarket cooler on the GPU. A Ryzen 9 4950X/5950X will be manageable with pretty much any 240mm AIO.
- Under 15L, and under 150mm in its shortest dimension, for travel portability. I particularly like this specific case because it is extremely visually clean, extremely robust with 4mm panel thickness, and allows for aggressive ventilation within that clean design, all of which elevates it above contenders like the FormD T1 (price, and not as well ventilated) and the Ncase M1 (smallest dimension a bit thick).
- Uncompromised performance. Being stuck on PCIe 3.0 signaling would stick out like a sore thumb. For some time I was almost dead-set on the Ncase M1, which would render the PCIe 4.0 riser a non-issue, but I knew what I wanted the instant I discovered the T03, and I will go to quite some lengths to get what I want now.

I did find this: so it looks like I may need to upgrade this PCIe riser at some point if it turns out that I cannot source a suitable unit in the short term. I'll need a 180-degree slot on the female side of the riser, either 90 or 180 degrees on the male side, and the riser also needs to be a specific length...

I will likely assemble the machine when the new Ryzen line drops, run a 1080 Ti in it, and take some time to let the dust settle on the GPU side.
 
Last edited:
  • Like
Reactions: khanate

dc443

Average Stuffer
Original poster
Jun 4, 2020
64
17
So yeah, in conclusion, PCIe 3.0 x16 may just have to do for now. I may even need to drop $100 on this extender! I don't like that, but portability isn't cheap.

I don't actually do ML work yet (I just plan to). The GPGPU code I'm already writing these days gobbles bandwidth far more than gaming or ML workloads do. The point is that for more general applications of algorithms (GPU or otherwise), especially prior to heavy optimization, moving data around can easily be the dominant cost in both time and energy. The point of a no-compromise build like this is to throw caution to the wind on energy; but on time, for anything bandwidth-bound, the arrival of a new bus standard is the only opportunity to land an improvement.

I'm definitely in the minority of users who can appreciate a faster bus beyond imperceptibly shorter game level loading times and the less imperceptible but equally pointless pleasure of seeing a higher theoretical number in a hardware stats program. It would halve the latency of all host-to-device and device-to-host memory transfers in GPU algorithms, which is certainly nothing to shake a stick at.
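To put rough numbers on that halving, here is a quick estimate of host/device copy time for a 1 GiB buffer over an x16 link at the theoretical link rates (protocol overhead and driver latency ignored, so real copies will be somewhat slower in both cases):

```python
# Estimated one-way copy time for a buffer over an x16 link at the
# theoretical link rate of each PCIe generation. Overhead is ignored,
# so these are lower bounds; the 2x ratio is the point.
def copy_ms(num_bytes: int, link_gbps: float) -> float:
    """Milliseconds to move num_bytes at link_gbps GB/s."""
    return num_bytes / (link_gbps * 1e9) * 1e3

BUF = 1 << 30  # 1 GiB
for gen, gbps in [("3.0", 15.75), ("4.0", 31.5)]:
    print(f"PCIe {gen} x16: ~{copy_ms(BUF, gbps):.1f} ms per 1 GiB copy")
# PCIe 3.0 x16: ~68.2 ms per 1 GiB copy
# PCIe 4.0 x16: ~34.1 ms per 1 GiB copy
```

For a pipeline that streams working sets over the bus every iteration, that per-transfer saving compounds directly into wall-clock time.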
 
Last edited:

REVOCCASES

Shrink Ray Wielder
REVOCCASES
Silver Supporter
Apr 2, 2020
2,057
3,331
www.revoccases.com
dc443 said:
So yeah, in conclusion, PCIe 3.0 x16 may just have to do for now. I may even need to drop $100 on this extender! I don't like that, but portability isn't cheap.

I don't actually do ML work yet (I just plan to). The GPGPU code I'm already writing these days gobbles bandwidth far more than gaming or ML workloads do. The point is that for more general applications of algorithms (GPU or otherwise), especially prior to heavy optimization, moving data around can easily be the dominant cost in both time and energy. The point of a no-compromise build like this is to throw caution to the wind on energy; but on time, for anything bandwidth-bound, the arrival of a new bus standard is the only opportunity to land an improvement.

I'm definitely in the minority of users who can appreciate a faster bus beyond imperceptibly shorter game level loading times and the less imperceptible but equally pointless pleasure of seeing a higher theoretical number in a hardware stats program. It would halve the latency of all host-to-device and device-to-host memory transfers in GPU algorithms, which is certainly nothing to shake a stick at.

It would be interesting to see whether there is actually a measurable difference between 3.0 and 4.0 for your use case.

Maybe stick with 3.0 for now and upgrade once 4.0 riser availability improves.
 

chx

Master of Cramming
May 18, 2016
547
281
Let me let you in on a secret: unless you'd buy a powerful GPU for gaming anyway, there's absolutely no way you can beat Google's TPUs on price/performance. Casual ML is a great fit for the cloud model: you need computing resources rarely, and even then only for a short time, but then you need a lot of them.
 

dc443

Average Stuffer
Original poster
Jun 4, 2020
64
17
You're right about that, of course. Even as progress continues to show that it makes more sense to do it in the cloud, there is satisfaction to be had in having the hardware in-house.

Well, anyway, to close the loop on this: I got the LINKUP PCIe 4.0 riser for my ZX-1 build, and the 3080 runs great at PCIe 4.0.

I also ordered a Velka 7 the other day, and it will be bundled with the nice new loose-wire PCIe 4.0 cable, which is great because the ungodly fold you had to do in the old riser (which was only PCIe 3.0) put me off the entire concept of the case earlier.

Not sure why I didn't think of using the LOUQE PCIe 4.0 cable for the Velka 7 at the time, though...