Motherboard PCIe 4.0 in 2017?

Kmpkt

Innovation through Miniaturization
Original poster
Feb 1, 2016
Came across this article today on Tom's Hardware and it gives a couple of new tidbits about PCIe 4.0, which is apparently due for spec finalization sometime in 2017. The most interesting part is that the PCIe connector won't only double the GT/s but will also at least QUADRUPLE the power delivery spec at the slot, from 75W to 300W. What does this mean? NO MORE PEG CONNECTORS on GPUs. If this comes to fruition, it could be huge for SFF builds.

http://www.tomshardware.com/news/pcie-4.0-power-speed-express,32525.html

The other part I found interesting is that the maximum length for simple riser-like products will likely drop to a spec of 7" (~18cm). This is obviously going to have some degree of impact on build layouts and GPU placement.
 

Kmpkt

Innovation through Miniaturization
Original poster
Feb 1, 2016
300W per slot will mean more cables to the motherboard... or... maybe... a 12V-only motherboard? Please, please, please.

Considering how little space it seems to take to convert 12V to 5V and 3.3V (i.e. a Pico PSU), I'm hoping that in addition to having this functionality on the motherboard, for the motherboard itself, perhaps they'll also roll power into whatever the successors of today's SATA and Molex connectors end up being.
 

BirdofPrey

Standards Guru
Sep 3, 2015
Yeah, I keep hoping Intel or someone will push for a 12V-only PSU spec and then have the motherboard convert the other voltages.
I doubt that has anything to do with the PCIe spec, though.

I would like to see what happens with OCuLink.
 

Hahutzy

Airflow Optimizer
Sep 9, 2015
The other part I found interesting is that the maximum length for simple riser-like products will likely drop to a spec of 7" (~18cm). This is obviously going to have some degree of impact on build layouts and GPU placement.

After reading @hardcore_gamer's post here about how manufacturers like Adexelec calculate recommended riser lengths, I tried my hand at calculating the length for PCIe 4.0.

And it comes out to around 94mm, or 3.7 inches.

So the recommended length has halved again compared to previous generations, and the 7" max spec you mentioned is already nearly double it.

That's... a bit alarming.
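Out of curiosity, here's a back-of-envelope sketch that lands on roughly the same figure. This is not the Adexelec method from the referenced post; it just assumes the recommended unequalized length scales inversely with the Nyquist frequency on FR4, and the wavelength-count budget below is chosen only so the output lines up with the numbers quoted in this thread:

```python
# Rough riser-length ballpark: recommended length taken as a fixed number of
# wavelengths at the Nyquist frequency on FR4. All constants are assumptions
# for illustration, not values from the referenced calculation.

C = 3.0e8                 # speed of light in vacuum, m/s
ER_EFF = 4.0              # assumed effective dielectric constant of FR4
N_WAVELENGTHS = 5         # assumed length budget in wavelengths

def recommended_length_mm(transfer_rate_gt_s: float) -> float:
    nyquist_hz = transfer_rate_gt_s / 2 * 1e9   # NRZ: fundamental is half the transfer rate
    velocity = C / ER_EFF ** 0.5                # ~1.5e8 m/s on FR4
    wavelength_m = velocity / nyquist_hz
    return N_WAVELENGTHS * wavelength_m * 1e3

for gen, rate in [("PCIe 3.0", 8.0), ("PCIe 4.0", 16.0)]:
    print(f"{gen} ({rate:.0f} GT/s): ~{recommended_length_mm(rate):.0f} mm")
# PCIe 3.0: ~188 mm (~7.4 in), PCIe 4.0: ~94 mm (~3.7 in)
```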
 

EdZ

Virtual Realist
May 11, 2015
I wonder how they're going to handle the physical interconnect for pushing 300W+. Either they add a supplementary connector (IIRC there is a 'guided insertion' style Molex Mini-Fit connector, but I'm not sure it comes in Mini-Fit Jr sizes), or they somehow distribute power across the entire card-edge connector. Fixed-DC offset for every transmission pair (like Power Over Ethernet in gigabit mode) would distribute load nicely, but I'm not sure how well injecting noisy power onto a new ultra-high-bandwidth trace will work without either a load of extra power conditioning on the motherboard, or more stringent PSU and internal power cabling/routing standards.
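To put a number on how far outside today's connector 300W would be, a quick sketch; the pin count and per-pin figures are assumptions based on the current x16 CEM connector, not anything from the 4.0 draft:

```python
# How much current per +12V pin would 300 W through the existing card-edge
# connector imply? (Pin count and today's per-pin limit are assumptions
# based on the current 75 W x16 slot, not the unreleased 4.0 spec.)

SLOT_POWER_W = 300.0
RAIL_V = 12.0
N_12V_PINS = 5            # assumed number of +12V pins on today's x16 edge connector
TODAY_PIN_LIMIT_A = 1.1   # assumed: ~5.5 A total of +12V spread over those pins

total_current = SLOT_POWER_W / RAIL_V          # 25 A
per_pin = total_current / N_12V_PINS           # 5 A per pin
print(f"Total: {total_current:.1f} A at 12 V, per pin: {per_pin:.1f} A "
      f"(vs ~{TODAY_PIN_LIMIT_A} A per pin today)")
```

That's roughly 4-5x the per-pin current of today's slot, which is presumably why a supplementary connector or spreading the load across the whole card edge is even being discussed.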
 

hardcore_gamer

electronbender
Aug 10, 2016
@Hahutzy @EdZ @iFreilicht

Here are a few salient features/limitations of PCIe 4.0 I've noticed so far:
(Note that the spec hasn't been finalized and existing hardware is limited to FPGA implementations. Also, unlike with PCIe 3.0, I haven't been involved in the design of any actual silicon/boards with PCIe 4.0, so this info could change or I could be outright wrong.)

1. Unlike other communication technologies that hit a wall with increasing signalling speeds on copper (cough..Ethernet..cough), there's no plan to use PAM (pulse amplitude modulation). Therefore we can conclude that:
a. PCIe 4.0 still uses Manchester encoded differential signalling
b. Speed is doubled by simply doubling the baseband frequency (16 GT/s simply means 16 GHz). The upper cutoff frequency is 32 GHz. The channel cannot be modelled as linear between the baseband and upper cutoff frequencies, so the best-practice guidelines used in @Hahutzy's post no longer apply.
c. Minor attenuation in the channel (i.e. risers, extenders) should still be okay, albeit with length restrictions. With PAM, attenuation would directly affect the decoded bits, as information is also carried in the amplitude; Manchester encoding just relies on the timing of zero crossings.

2. PCIe 4.0 uses a different connector. Motherboards with PCIe 4.0 should accept PCIe 3.0 cards. However, PCIe 4.0 cards won't work on motherboards with PCIe 3.0. Same applies to extenders and risers.

3. One worrying thing is that the backplane length is reduced to 7 inches (this may be increased later, but 7" is the design goal at the moment). If this is true, the 7 inches includes the motherboard trace length plus the extender cable length.

My speculation is that we can kiss long extender cables goodbye, as these would require re-timers (expensive circuitry). Simple risers like those used in the Zaber Sentry / NFC S4 Mini should still be possible.
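To make the 7" budget concrete, here's a small sketch; the motherboard trace lengths are made-up examples for illustration, not measurements from any real board:

```python
# If the 7" (~178 mm) channel budget covers motherboard trace + extender,
# what's left for the extender? (Trace lengths below are hypothetical.)

BUDGET_MM = 7 * 25.4   # ~177.8 mm total channel length

for label, mobo_trace_mm in [("short route to the slot", 50),
                             ("typical mITX route", 90),
                             ("long route on a big board", 140)]:
    remaining = BUDGET_MM - mobo_trace_mm
    print(f"{label}: ~{remaining:.0f} mm (~{remaining / 25.4:.1f} in) left for a riser")
```

Which lines up with the speculation above: short, simple risers survive, anything longer needs re-timers.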
 

iFreilicht

FlexATX Authority
Feb 28, 2015
freilite.com
Even more reason for right-angle PCIe connectors on the top of the board to become standardised. Though to be fair, if PCIe 4.0 is finalised in 2017, I suspect it will take at least until 2018 for the first GPUs to use it, and 2020 until no newly released GPUs will have PCIe 3.0 anymore.

What I deem reassuring is that OCuLink will become part of the standard if I understood correctly. Each cable will provide four lanes at PCIe 3.0 speeds, so it wouldn't surprise me if we see some OCuLink-powered risers that don't rely on ribbons anymore. Those should be extremely stable compared to what we have now and they would also reduce cost for low-volume production, as varying lengths of OCuLink cables will hopefully be readily available.
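In rough numbers, four lanes at PCIe 3.0 speeds per cable works out to (line-coding overhead only, per direction):

```python
# Per-cable throughput of a 4-lane link at PCIe 3.0 rates (8 GT/s, 128b/130b).
lanes, gt_s, eff = 4, 8.0, 128 / 130
print(f"~{lanes * gt_s * eff / 8:.2f} GB/s per direction")   # ~3.94 GB/s
```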

Interestingly, 3M already claimed that their ribbon risers would be ready for PCIe 4.0, so I wonder whether that's actually the case or not.
 

hardcore_gamer

electronbender
Aug 10, 2016
Even more reason for right-angle PCIe connectors on the top of the board to become standardised. Though to be fair, if PCIe 4.0 is finalised in 2017, I suspect it will take at least until 2018 for the first GPUs to use it, and 2020 until no newly released GPUs will have PCIe 3.0 anymore.

It's safe to say that it won't see widespread adoption before 2020. Motherboards need to have PCIe 4.0 support first, before the GPUs switch over, as PCIe 4.0 add-on cards aren't compatible with PCIe 3.0 motherboards. Besides, current GPUs don't even saturate PCIe 2.0, let alone PCIe 3.0, so GPUs won't gain much by switching to PCIe 4.0.
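For reference, the raw per-direction x16 numbers behind that (transfer rate times line-coding efficiency only; packet/protocol overhead not included):

```python
# Per-direction x16 throughput by generation.
GENS = {
    "PCIe 2.0": (5.0, 8 / 10),     # 5 GT/s, 8b/10b encoding
    "PCIe 3.0": (8.0, 128 / 130),  # 8 GT/s, 128b/130b encoding
    "PCIe 4.0": (16.0, 128 / 130), # 16 GT/s, 128b/130b assumed unchanged in 4.0
}
LANES = 16

for gen, (gt_s, eff) in GENS.items():
    gb_s = gt_s * eff / 8 * LANES   # GT/s -> GB/s per lane, times 16 lanes
    print(f"{gen} x16: ~{gb_s:.1f} GB/s per direction")
# ~8.0, ~15.8, ~31.5 GB/s
```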




What I deem reassuring is that OCuLink will become part of the standard if I understood correctly. Each cable will provide four lanes at PCIe 3.0 speeds, so it wouldn't surprise me if we see some OCuLink-powered risers that don't rely on ribbons anymore. Those should be extremely stable compared to what we have now and they would also reduce cost for low-volume production, as varying lengths of OCuLink cables will hopefully be readily available.

Optical fiber cables can be very expensive (active cables with optoelectronic components, micrometer-level tolerance requirements for the optical interface, and the fibers themselves are expensive). This is one of the reasons why 100GbE isn't taking off as fast as previous generations of Ethernet did. Moreover, OCuLink uses Molex's NanoPitch connectors, another new standard that isn't compatible with existing peripheral connectors. I'm not sure this will become economically viable for mainstream PC gaming in the near future.


Edit: @Hahutzy @iFreilicht Please don't stop working on your awesome projects, guys. These new standards could take a while to come. By that time, you'd be working on Hutzy / Brevis 5.0 :)
 

iFreilicht

FlexATX Authority
Feb 28, 2015
freilite.com
Oh, I didn't know they used fiber optics and NanoPitch; that makes cables extremely expensive. NanoPitch is like $40 a connector, last time I checked.
 

EdZ

Virtual Realist
May 11, 2015
OCuLink can be either Fibre or Copper, using the same connector (same as first-gen Thunderbolt). IIRC OCuLink-2 is needed for PCIe 4.0 (and 5.0), though OCuLink has seen little uptake for PCIe 3.0 (partially because uptake is limited by Mini-SAS HD doing pretty much the same job and having existing market penetration).
 

BirdofPrey

Standards Guru
Sep 3, 2015
OCuLink is designed to be able to use copper or optical, hence the "OCu": Optical, Cuprum (copper).
IIRC, the first version of the spec only has single 4-link cables; I don't remember any mention of connection ganging for x8 and x16, though that doesn't mean nobody will make out-of-spec risers that use multiple cables.


edit: gah ninjaed
OCuLink can be either Fibre or Copper, using the same connector (same as first-gen Thunderbolt). IIRC OCuLink-2 is needed for PCIe 4.0 (and 5.0), though OCuLink has seen little uptake for PCIe 3.0 (partially because uptake is limited by Mini-SAS HD doing pretty much the same job and having existing market penetration).
I am pretty sure it's a new connector. First and second gen Thunderbolt used a mini-DP connector; third gen uses USB-C. I do agree with the reasons for not using it, though. There's no reason to at the moment, though I do think they want to push its use both for internal storage and for external devices, like what Thunderbolt is used for now. Of course, for external stuff, I think Thunderbolt does a better job since it's designed to link multiple devices, whereas OCuLink is strictly point-to-point.

This also isn't the first time they've made an external PCIe specification either.
 

jeshikat

Jessica. Wayward SFF.n Founder
Silver Supporter
Feb 22, 2015
Bit late but article posted here. I'll just copy/paste part of it because it's basically my comment for this thread:

Tom's Hardware had the opportunity to chat with Richard Solomon, VP of PCI-SIG (the group responsible for the PCI Express standard), about the upcoming PCI Express 4.0 standard. The new standard is still a work in progress but they're expecting to finalize it by the end of the year.

Check out the article for all the details, but the two main takeaways are that the bandwidth doubles, from 16GB/s with 3.0 x16 to 32GB/s with 4.0 x16, and the power available from the PCIe slot increases from 75W to a whopping 300W! And that's the minimum; the spec isn't finalized and it could even end up as high as 400-500W!! This is great news for SFF because one constant headache for really small cases is the tendency of GPU makers to put their PEG power connectors on the top edge of the card, increasing the necessary width of the case. The AMD Radeon Nano bucked that trend and put the power connector on the front edge of a flagship card, but then AMD regressed with the RX 480, the reference version of which has the power connector back at the top.

With the motherboard delivering at least 300W from the slot, that should negate the need for power connectors on the card for all but the highest-end GPUs. The downside, though, is that the power has to come from somewhere, and the existing 24-pin motherboard connector cannot handle that amperage. Unless Intel has a modernization of the ATX spec planned that they haven't told anyone about, the only solution is to move the 6- and 8-pin PCIe power connectors from the video card to the motherboard. So the same amount of cabling will be needed, but at least SFF cases will no longer need to account for anything other than the width of the video card's PCB/shroud itself.
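Putting rough numbers on that cabling argument (the PEG connector wattages are the usual spec limits; the 24-pin pin count and per-pin rating are assumptions about typical Mini-Fit Jr terminals):

```python
# Why the 24-pin alone can't feed a 300 W slot, and what has to move to the
# board instead. Per-pin rating is an assumed typical Mini-Fit Jr value.

SLOT_W = 300.0
V = 12.0
ATX24_12V_PINS = 2        # +12V pins on the 24-pin connector
PIN_RATING_A = 6.0        # assumed per-terminal current rating

need_a = SLOT_W / V                                   # 25 A
atx24_cap_w = ATX24_12V_PINS * PIN_RATING_A * V       # ~144 W of +12V headroom
print(f"300 W slot needs ~{need_a:.0f} A at 12 V; "
      f"the 24-pin's +12V pins top out around {atx24_cap_w:.0f} W")

# So the familiar PEG connectors (6-pin = 75 W, 8-pin = 150 W) would plug
# into the motherboard instead of the card; 8-pin + 6-pin adds 225 W on top
# of whatever the 24-pin can supply.
print(f"8-pin + 6-pin on the board adds {150 + 75} W")
```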

One worry I have about this development though is that flagship Mini-ITX motherboards are crowded enough as it is. Having to make room for possibly two extra power connectors plus the traces and circuitry to handle 300W of power to the slot could be difficult and my concern is that motherboard manufacturers will put more of their effort into ATX since there's much more room to work with there. It could be a boon to the microATX form factor though which seems neglected as of late.
 

BirdofPrey

Standards Guru
Sep 3, 2015
One worry I have about this development though is that flagship Mini-ITX motherboards are crowded enough as it is. Having to make room for possibly two extra power connectors plus the traces and circuitry to handle 300W of power to the slot could be difficult and my concern is that motherboard manufacturers will put more of their effort into ATX since there's much more room to work with there. It could be a boon to the microATX form factor though which seems neglected as of late.
All the more reason to revise the ATX spec to allow dropping the 3.3V and 5V lines and have the motherboard provide those voltages. Of course, this also means adding power connectors back in to power hard drives, so that eats up some of the savings.

Of course, as I've mentioned before, I'd love it if board manufacturers would just switch to SODIMMs and mini-SAS connectors. SODIMMs are common enough now that they aren't expensive anymore and have pretty much the same capabilities as DIMMs at this point. Mini-SAS HD is ubiquitous in the server market and has started showing up on consumer boards for U.2 connections to drives. A mini-SAS HD connector would let them double up on interfaces (either 4 SATA drives off a splitter, or 4 lanes of PCIe to a U.2 connector on the drive; also potentially 1-2 SATA Express, even if nothing uses it), and you can fit 2-3 of them in the space currently used for a pair of SATA Express ports (plus the couple of other random SATA ports, and sometimes an actual U.2 port, that boards usually add).
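For the mini-SAS HD dual-use idea, a rough per-connector bandwidth comparison (line-coding overhead only, per direction):

```python
# What one 4-lane mini-SAS HD connector could carry in each mode.
sata_per_port_gb_s = 6.0 * (8 / 10) / 8        # SATA 6 Gb/s with 8b/10b -> 0.6 GB/s
pcie3_per_lane_gb_s = 8.0 * (128 / 130) / 8    # PCIe 3.0 lane -> ~0.985 GB/s

print(f"4x SATA via splitter : ~{4 * sata_per_port_gb_s:.1f} GB/s aggregate")
print(f"PCIe 3.0 x4 to U.2   : ~{4 * pcie3_per_lane_gb_s:.1f} GB/s")
```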

On a side note, this might allow for some cabling efficiency if you have multiple cards that don't use the entire capacity on their PEG connectors (if it's less than half, they could share a cable).
 

hardcore_gamer

electronbender
Aug 10, 2016
OCuLink can be either Fibre or Copper

OCuLink is designed to be able to use copper or optical, hence the "OCu": Optical, Cuprum (copper).

Copper may be used for x1 or x2 connections (for storage devices or other relatively "slow" :) external peripherals), but x8 and x16 seem to be a bit too much for copper to handle. Direct signalling through an external copper cable is out of the question at 8 GHz, as the backplane itself is limited to 7 inches. x1 or x2 may be possible with additional modulation (for example, PAM-5) of a lower-frequency baseband; implementing x4 would be really pushing it. This, together with the additional lane requirements involved in x8 and x16, makes the optical option favorable.
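The modulation trade-off in rough numbers (a generic sketch of PAM signalling, not anything from the OCuLink or PCIe 4.0 specs):

```python
import math

# PAM-N carries log2(N) bits per symbol, so the symbol rate (and hence the
# analog bandwidth needed on the cable) drops for the same bit rate.

def symbol_rate_gbd(bit_rate_gbit_s: float, pam_levels: int) -> float:
    return bit_rate_gbit_s / math.log2(pam_levels)

BIT_RATE = 16.0   # Gbit/s per lane, the PCIe 4.0 raw rate
for levels in (2, 4, 5):   # NRZ, PAM-4, PAM-5
    name = "NRZ (PAM-2)" if levels == 2 else f"PAM-{levels}"
    print(f"{name:>11}: {symbol_rate_gbd(BIT_RATE, levels):.2f} GBd")
# 16.00, 8.00, 6.89 GBd -- lower symbol rate, but amplitude now carries
# information, so attenuation hurts more (the point made earlier in the thread)
```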

I should be getting an FPGA accelerator card in Q4 (thanks to company projects). I'll keep you guys posted on the backplane and connector details. However, I won't be able to get hold of any OCuLink hardware without spending money from my pocket :)
 

EdZ

Virtual Realist
May 11, 2015
I am pretty sure it's a new connector.
Sorry, that was poor phrasing on my part. I meant the capability to use optical or copper cables on the same connector was the same as Thunderbolt, not that the connector itself was the same as Thunderbolt.
 

BirdofPrey

Standards Guru
Sep 3, 2015
Copper may be used for x1 or x2 connections (for storage devices or other relatively "slow" :) external peripherals), but x8 and x16 seem to be a bit too much for copper to handle. Direct signalling through an external copper cable is out of the question at 8 GHz, as the backplane itself is limited to 7 inches. x1 or x2 may be possible with additional modulation (for example, PAM-5) of a lower-frequency baseband; implementing x4 would be really pushing it. This, together with the additional lane requirements involved in x8 and x16, makes the optical option favorable.

I should be getting an FPGA accelerator card in Q4 (thanks to company projects). I'll keep you guys posted on the backplane and connector details. However, I won't be able to get hold of any OCuLink hardware without spending money from my pocket :)
Well, there are a number of factors here. First of all, each link is separate from the others, so signal integrity has no effect on the number of links, just their speed; poor cabling will still be able to carry 4 links, they just won't be as fast, whether that's due to downclocking or retransmission losses.

Second, the backplane (be that the motherboard, a riser card, or whatever) is made of unshielded traces that tend to run more or less parallel, while cables will be shielded and likely twisted pairs, which reduces noise and crosstalk, so cables can be longer. It is true that copper cables will be shorter, but they should still allow for a meter or two at reasonable speeds.

Lastly, as far as I am aware, each cable under the spec only carries up to 4 PCIe links.
 

hardcore_gamer

electronbender
Aug 10, 2016
Well, there are a number of factors here. First of all, each link is separate from the others, so signal integrity has no effect on the number of links, just their speed; poor cabling will still be able to carry 4 links, they just won't be as fast, whether that's due to downclocking or retransmission losses.

The major problem is not the interference between adjacent differential pairs. Electricity behaves differently as the frequency increases. 16 GHz direct baseband signalling creates a spectrum that carries information up to 32 GHz in harmonics; frequencies beyond this are less important for signal reconstruction at the receiver. At these multi-GHz frequencies, a shielded twisted pair is a poor transmission line: it acts as a low-pass filter with a falling gain-frequency characteristic and non-linear phase delay, it suffers heavily from skin effect, it radiates power (n·λ/4 antenna segments), and discontinuities like bends cause signal reflections and standing waves. Electricity prefers to travel as electromagnetic waves at these frequencies; this is the domain of waveguides. On PCBs, waveguides take the form of microstrip lines. Metallic waveguides, although well suited for communications and radar, cannot easily be manufactured into the flexible cables used to connect peripherals to PCs. Dielectric waveguides are flexible, but they only become practical in the multi-THz range. Dielectric waveguides made with a core, cladding and protective jacket, carrying signals of hundreds of THz (light) modulated with information, are called optical fibers. This is the logical evolution from twisted pairs and coaxial cables to optical fibers as signalling rates increase beyond tens of GHz.
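To give a feel for the n·λ/4 point above, a quick sketch; the velocity factor is an assumed typical value for copper cable:

```python
# At these frequencies, very short cable discontinuities become efficient
# radiators. Quarter-wavelength in a copper cable, assuming a velocity
# factor of ~0.7 (an assumed typical value).

C = 3.0e8          # m/s
VELOCITY_FACTOR = 0.7

for f_ghz in (8, 16, 32):
    wavelength_mm = C * VELOCITY_FACTOR / (f_ghz * 1e9) * 1e3
    print(f"{f_ghz} GHz: lambda ~{wavelength_mm:.1f} mm, "
          f"lambda/4 ~{wavelength_mm / 4:.1f} mm")
# 8 GHz: ~26 / 6.6 mm; 16 GHz: ~13 / 3.3 mm; 32 GHz: ~6.6 / 1.6 mm
```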

(Sorry for all the bad grammar and spelling :( )
 

artimaeus

Apprentice
Apr 13, 2016
One more thought to pop the bubble on PCIe 4.0: it of course requires CPU support, and no CPU has been announced that supports it (neither Cannon Lake, Kaby Lake, nor Zen).