So, my understanding of this is limited and I could be wrong, but here's my take.
From what I can understand by myself after plenty of hours of research, is it because Asus motherboards use a second M.2 slot for NVMe?
AsRock can use all 24 lanes at once: PCI-E 3.0 x16 (16 lanes), USB 3.1 Gen2 (4 lanes), and M.2 (4 lanes).
Asus has one more M.2 NVMe/SATA port, so that's 4 extra lanes in use. When you have 2 M.2 NVMe SSDs connected, your device in the x16 slot works in x8 mode (if I understand my native language correctly).
So, if the x16 slot works in x8 mode when you connect 2 NVMe drives, is that not bifurcation? Correct me if I'm wrong. Can't we bifurcate x8 into 2 x4? And if we install 1 NVMe drive or none at all, couldn't we bifurcate x16 into 2 x8? It's a silly move, Asus...
(sorry if this is stupid, but I'm a newbie when it comes to bifurcation)
PCI-E works in groupings of lanes: x1, x2, x4, x8, and x16. (Maybe x32 and beyond is technically a thing, but it doesn't really matter here.)
When a motherboard manufacturer is designing a board/working with a chipset, it's up to them to route the lanes as they see fit. The board's design dictates where the lanes physically go. However, PCI-E as a protocol can only work in the lane increments above.
So, if a PCI-E slot had 24 lanes physically wired to it, it would still only run at x16 at most--you wouldn't see an engineer design a board this way, because it'd be spending lots of lanes for literally no gain.
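If it helps to see that "round down to a valid width" rule spelled out, here's a tiny Python sketch of the idea (just an illustration of the rule, not how link training is actually implemented; the `negotiated_width` function is made up for this example):

```python
# Illustration only: PCI-E links run at fixed widths, so wired lanes
# beyond the next valid width don't buy you anything.
VALID_WIDTHS = (16, 8, 4, 2, 1)

def negotiated_width(physical_lanes: int) -> int:
    """Widest valid PCI-E link width that fits in the available lanes."""
    for width in VALID_WIDTHS:
        if physical_lanes >= width:
            return width
    raise ValueError("a link needs at least 1 lane")

print(negotiated_width(24))  # 16 -> 24 lanes wired to one slot still only gives x16
print(negotiated_width(15))  # 8  -> with 15 lanes free, a device can only train at x8
```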
In a more realistic use-case, let's say we have a PCI-E slot and an M.2 slot, both going through the chipset. Our PCI-E slot physically has 16 lanes, and our M.2 slot has 4 physical lanes. In the case of X370, we can use 24 lanes at any given moment. This means our GPU can run at x16 and our M.2 drive can run at x4, and we're still only using 20 of our 24 PCI-E lanes.
Of course, things like our SATA ports or USB 3.1 Gen2, like you mentioned, are going to take up PCI-E lanes on our chipset, too. Maybe the board has a built-in M.2 Wifi card (typical on Mini-ITX boards), whatever. These devices are generally going to take up between 1 and 4 PCI-E lanes each, which can lead to a weird situation.
Let's say we have an NVMe SSD running at PCI-E x4 in one of our M.2 slots, and our motherboard's USB 3.1 Gen2 controller is built into the chipset using PCI-E x4. We've also got a Wifi card built into the motherboard using PCI-E x1. We're using a total of 9 of our 24 PCI-E lanes--there are only 15 left.
It's not possible to run a device at PCI-E x15, and we don't have 16 lanes free anymore--if we run a GPU, it's going to run at PCI-E x8 or lower.
Long story short: our devices are sharing all of our available PCI-E lanes.
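Here's the same arithmetic from that example in one small Python sketch, in case it's easier to follow that way (the device list and lane counts are just the hypothetical ones above, not a real board's datasheet):

```python
# Quick sketch of the lane budget in the example above.
TOTAL_LANES = 24
VALID_WIDTHS = (16, 8, 4, 2, 1)

devices = {
    "NVMe SSD in M.2 slot": 4,
    "USB 3.1 Gen2 controller": 4,
    "Built-in Wifi card": 1,
}

used = sum(devices.values())                           # 4 + 4 + 1 = 9
free = TOTAL_LANES - used                              # 15
gpu_width = max(w for w in VALID_WIDTHS if w <= free)  # round down to a valid width

print(f"lanes used: {used}, lanes free: {free}")   # lanes used: 9, lanes free: 15
print(f"GPU link width: x{gpu_width}")             # GPU link width: x8
```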
To answer your question "Is it not a bifurcation?"... I actually don't know the technical answer. But when we refer to PCI-E bifurcation, we're specifically referring to running two (or more) devices off of one physical slot. Whether or not a motherboard supports this depends mostly on how the logic/software of the board is designed. My understanding is that it's basically as simple as the manufacturer "turning the feature on," but I have no perspective on what the development work to enable that actually looks like.
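To make "running two (or more) devices off of one physical slot" a bit more concrete, here's a small sketch of the split patterns an x16 slot is typically offered in when a board does expose bifurcation; which of these, if any, show up in a given BIOS is exactly that "turning the feature on" part:

```python
# What bifurcation looks like on paper: one physical x16 slot carved into
# smaller independent links. These are commonly seen split patterns; a
# specific board/BIOS may offer some, all, or none of them.
COMMON_X16_SPLITS = {
    "x16": [16],                   # no bifurcation, one device
    "x8/x8": [8, 8],               # e.g. two x8 cards on a riser
    "x8/x4/x4": [8, 4, 4],
    "x4/x4/x4/x4": [4, 4, 4, 4],   # e.g. a quad-NVMe carrier card
}

for name, links in COMMON_X16_SPLITS.items():
    assert sum(links) == 16        # splitting never creates extra lanes
    print(f"{name:>12} -> {len(links)} link(s): {links}")
```

The key point is that a split never adds lanes; it just divides the slot's existing 16 lanes into smaller independent links.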
The basic problem we run into as consumers is that the hardware we're normally using is not designed to suit every possible use-case. Manufacturers and engineers spend a lot of time and effort trying to make their motherboards (etc.) simple, plug-and-play, and "fool proof." This is usually a good thing, but it can sometimes mean that supporting something "niche" like PCI bifurcation never enters the conversation.