Concept Questions About Using PCI Bifurcation For an x99 ITX Server

TristanDuboisOLG

Average Stuffer
Original poster
May 10, 2018
Hello all! This is my first post here and I'm excited to talk with you! From all the projects that I've seen come from here, I can tell that you all are probably one of the most inventive communities out there.


I'm currently trying to downsize my server. To be honest, E-ATX is HUGE and I really don't want it taking up room in my living room. So, I've settled on an X99E-ITX server inside of a GEEEK A30. The reason I've chosen X99 is the price/performance of the used Xeons from the LGA 2011-3 / E5 v3 line.



I want to go diskless, and have for a while now, and with 1 TB M.2 SSDs and NVMe drives I can actually do it. But given that there is only one PCIe 3.0 x16 slot, there is a problem: Xeons don't tend to have integrated graphics, which means I would have to run a GPU in that slot. That's a problem because I want to pick up a PCIe card to run more M.2 drives.

So, I started looking into something I first saw here: bifurcation. I first saw it when someone was running a Galax Katana and an Elgato inside a Saber Sentry. Pretty cool concept, and to be honest, when I build stuff I go out of my way to either try or learn something new. So I thought I'd try it.

I've turned this into a game of numbers. The X99 v3 Xeons have 40 PCIe lanes (so no problem there), a PCIe 3.0 x16 slot obviously has 16 lanes, and from what I've heard NVMe M.2 drives only take up x4 lanes each.
What I want to do is create a RAID array of about two M.2/NVMe drives and use bifurcation to also run a single-slot GPU for a display. But I have a few questions.
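
To sanity-check the plan, here's the napkin lane math I'm working from (plain arithmetic; the x8/x8 split and x4 per drive are assumptions on my part, not anything I've measured):

# Rough PCIe lane budget for the plan above (assumed figures, simple arithmetic).
cpu_lanes = 40        # E5 v3 Xeons expose 40 PCIe 3.0 lanes
slot_lanes = 16       # the single PCIe 3.0 x16 slot
gpu_lanes = 8         # GPU on one half of an assumed x8/x8 bifurcation
lanes_per_drive = 4   # a single NVMe M.2 drive uses an x4 link
drives = 2

drive_lanes = lanes_per_drive * drives   # 8 lanes for the two-drive array
used_in_slot = gpu_lanes + drive_lanes   # 16 -> exactly fills the slot

print(f"lanes used in the x16 slot: {used_in_slot}/{slot_lanes}")
print(f"CPU lanes left for everything else: {cpu_lanes - slot_lanes}")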



1. Bifurcation seems to be a bit... shady. Not in the "I'm out there to take your money" sort of way, but more in the way that I can't seem to find much, if any, information about it. During my searches I've found that ASRock supports it and my motherboard also seems to be able to do it, but I haven't been able to find any documentation on it.

Does anyone know where to go to find out more about this? Does anyone perhaps even know of a video tutorial?

2. Does anyone know where to buy a decent bifurcation riser? Most PCIe risers that I find through searching are either sold out or they're mining risers.

3. Power consumption is somewhat of a concern for me. Will bifurcation split the 75 W from the original PCIe slot into 37.5 W for each of the bifurcated slots?

4. Some x16 M.2 RAID cards claim to support bifurcation, though I'm assuming that a bifurcated PCIe port will act like any other normal interface for a device. I've found a chart that suggests that, for at least this card, and this card, the M.2 drives are being read as separate devices by the PCIe slot itself. A lot of the "cheaper" M.2 and NVMe expansion cards rely on a SATA connection in order to get the data out.





Does anyone know of a good NVMe expansion card? I'm looking for something that can support at least 2 NVMe or M.2 SSDs. (A bifurcated x16 slot will leave one x8 link for a GPU and one x8 link for an M.2/NVMe card, and the drives only use x4 lanes each for a normal transfer rate.) I was hoping not to need a SATA cable, but if it needs one, I'll use one.
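
As a quick sanity check on why x4 per drive should be plenty, here's the raw PCIe 3.0 bandwidth math (pure arithmetic, nothing vendor-specific):

# PCIe 3.0 runs 8 GT/s per lane with 128b/130b encoding -> usable bytes per second.
per_lane_mb_s = 8e9 * (128 / 130) / 8 / 1e6   # ~984.6 MB/s per lane

for lanes in (4, 8, 16):
    print(f"x{lanes}: ~{per_lane_mb_s * lanes / 1000:.1f} GB/s")

# x4 (~3.9 GB/s) is already more than a typical M.2 NVMe drive can sustain,
# which is why each drive only needs four lanes of the bifurcated slot.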



Anyways, sorry about the word vomit. This has actually been quite the satisfying project and I've learned a lot in the past few days because of it. I look forward to hearing from you all! Feel free to message any suggestions you may have!
 

el01

King of Cable Management
Jun 4, 2018
I think what you should do is manage storage smarter rather than going for raw performance. This will be the first part of this post.

Essentially, ask yourself: do you need ALL NVMe drives? There is already an M.2 slot on the board, which you could use for a decent-sized cache drive, and then just run SATA or SAS (if you're feeling fancy) SSDs.

If you really need all the NVMe, then here:
https://www.amazon.com/dp/B006DJ2YU6/?tag=theminutiae-20
Pick up a used server riser and hope it works, then just find an x8 to 2x M.2 adapter.
 

TristanDuboisOLG

Average Stuffer
Original poster
May 10, 2018
el01 said:
I think what you should do is manage storage smarter rather than going for raw performance. This will be the first part of this post.

Essentially, ask yourself: do you need ALL NVMe drives? There is already an M.2 slot on the board, which you could use for a decent-sized cache drive, and then just run SATA or SAS (if you're feeling fancy) SSDs.

If you really need all the NVMe, then here:
https://www.amazon.com/dp/B006DJ2YU6/?tag=theminutiae-20
Pick up a used server riser and hope it works, then just find an x8 to 2x M.2 adapter.

I just want it to be diskless. I'm not planning on loading this with Samsung 970 Pros. The point is that the storage is decent in size, fast, and durable. There are some 2 TB NVMe drives going on sale on Black Friday for about $200, so that's mainly why I want them.

As for the 2x adapter, most of those aren't true PCIe adapters and actually cheat in order to run two drives: the second drive is actually connected via SATA.
 

Choidebu

"Banned"
Aug 16, 2017
There's a thing called carrier boards.

Basically x16 PCIe cards with 2 or 4 M.2 slots on them.

I do not have experience with them, so I can't speak to the software/driver side of things, unfortunately.

For example..
http://amfeltec.com/products/pci-express-gen-3-carrier-board-for-4-m-2-ssd-modules/

Edit: I read your 4th point. Well, that makes my reply pointless then. Sorry ^^

Regarding bifurcation, it needs proper motherboard support. Power-wise it won't 'fairly' distribute; the slot will just provide power as needed, up to 75 W.
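
If you want a rough sense of the headroom, here's a back-of-the-envelope check (the GPU and per-drive draw figures are just assumptions on my part, not measurements):

# Back-of-the-envelope slot power check (all draw figures are assumptions).
slot_budget_w = 75   # max power a full-size PCIe slot delivers per the spec
gpu_draw_w = 50      # assumption: a low-power single-slot GPU
drive_draw_w = 8     # assumption: one NVMe drive under load (check the spec sheet)
drives = 2

total_draw_w = gpu_draw_w + drives * drive_draw_w
print(f"estimated draw: {total_draw_w} W of {slot_budget_w} W available")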

I doubt that the above carrier boards work with bifurcation at all, since AFAIK each M.2 is pretty much physically routed to its own x4 group of lanes, while bifurcation splits a PCIe slot into two links, which means that if it works at all, two of the M.2 slots would be left unusable. This is all postulation though, so take it with a grain of salt.
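
To make the postulation a bit more concrete, here's a toy model of which hard-wired x4 M.2 groups would end up with their own link under different splits. Purely conceptual and under my own assumptions, not how any particular carrier board actually behaves:

# Toy model: each M.2 slot is assumed hard-wired to a fixed group of 4 lanes,
# and a drive is only reachable if a bifurcated link starts at its first lane.
def link_starts(mode):
    """Starting lane of each link for a bifurcation mode like (8, 8)."""
    starts, lane = [], 0
    for width in mode:
        starts.append(lane)
        lane += width
    return starts

m2_slots = {0: "M.2 #1", 4: "M.2 #2", 8: "M.2 #3", 12: "M.2 #4"}  # first lane -> slot

for mode in [(16,), (8, 8), (4, 4, 4, 4)]:
    usable = [name for lane, name in m2_slots.items() if lane in link_starts(mode)]
    print(f"x16 split as {mode}: reachable slots in this model -> {usable}")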
 