Hi guys,
For quite some time I've been thinking about making a custom TR4 mobo for a workstation leasing business. The boards out there are overkill in my opinion: besides the chipset itself, they carry a sound card, PCIe/SATA expansion chips, WiFi, extra USB hubs, tons of "management" silicon and so on.
Lately, regular AM4 Ryzen performance has gotten so high that I'm thinking of dropping the TR4 idea altogether. AM4 CPUs are nearly autonomous, requiring only an SPI flash for the BIOS and memory in order to boot. Not only do I get cheaper CPUs, I also get a lot of board space and power budget to spare. If I can get below $1k per system, I can consider the consumer market too, which is great money-wise.
And since we have PCIe 4.0 now, just 4 lanes carry the bandwidth of a full PCIe 2.0 x16 link. That is enough for a gaming-grade GPU, and plenty of other things. Moreover, we can use SFF-8643 connectors and existing cables for riser boards.
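A quick back-of-envelope check on that bandwidth claim, using the per-lane link rates and line encodings from the PCIe specs:

```python
# Usable per-lane bandwidth in GB/s (per direction) after line coding.
def lane_bw_gbps(gt_per_s, payload_bits, line_bits):
    return gt_per_s * payload_bits / line_bits / 8

pcie2 = lane_bw_gbps(5, 8, 10)      # PCIe 2.0: 5 GT/s, 8b/10b  -> 0.5 GB/s
pcie4 = lane_bw_gbps(16, 128, 130)  # PCIe 4.0: 16 GT/s, 128b/130b -> ~1.97 GB/s

print(f"PCIe 2.0 x16: {16 * pcie2:.2f} GB/s")  # 8.00 GB/s
print(f"PCIe 4.0 x4:  {4 * pcie4:.2f} GB/s")   # 7.88 GB/s
```

So a gen4 x4 link lands within about 2% of gen2 x16, per direction.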
Since the board can now work without so many extra chips, I'm starting to think about putting a DC-DC PSU on board. Nearly all chips other than the memory and CPU are fine with either 3.3V or 1.8V these days, and PCIe (with the GPU externally powered) takes 12V. This looks quite doable to me. I'm thinking of a 48V external input, with every power rail fed either by a direct 48-to-x buck, or 48-to-12-to-n for the 3.3V and 1.8V rails for easier part sourcing.
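To get a feel for what the two-stage option costs, here's a rough efficiency comparison. The efficiency figures below are illustrative guesses for a wide-ratio buck vs. two easier conversions, not datasheet numbers:

```python
# Overall efficiency of a chain of converters is the product of the stages.
def cascade(*effs):
    total = 1.0
    for e in effs:
        total *= e
    return total

direct_48_to_3v3 = cascade(0.90)   # single wide-ratio 48V->3.3V buck (guess)
two_stage = cascade(0.95, 0.92)    # 48V->12V, then 12V->3.3V (guesses)

print(f"direct:    {direct_48_to_3v3:.1%}")
print(f"two-stage: {two_stage:.1%}")
```

With numbers in this ballpark the two-stage route gives up only a couple of percent on the low-voltage rails, which seems a fair trade for cheaper, easier-to-source 12V-input parts.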
Now back to the high-frequency lanes: PCIe and DDR4. Those would be the only high-speed traces on the board, which is great for cost. Since a single DIMM can pack a lot of DDR4, and Ryzen won't gain any extra performance from more than 2 DIMMs (it's a dual-channel platform), I'm thinking of having only 2 slots. The small premium for high-capacity DIMMs should be worth it. For PCIe, I'm thinking of doing away with regular PCIe slots entirely, since most use cases will use risers anyway. 3rd-gen Ryzen has more than enough lanes to work without a chipset. I'm thinking of simply wiring all the lanes to SFF-8643 connectors, other than the ones needed for the on-board network and M.2 slots. In general, having fewer peripherals should make routing easier, and hopefully cut down on PCIe signal-quality problems.
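A quick sanity check on the lane budget, assuming the commonly cited 24 usable PCIe 4.0 lanes on 3rd-gen Ryzen (normally split 16 GPU + 4 NVMe + 4 chipset link, all free to repurpose here); the single-lane NIC and one M.2 slot are my assumptions:

```python
# Hypothetical lane budget for a chipset-less 3rd-gen Ryzen board.
total_lanes = 24   # usable CPU lanes on Matisse
m2_slot     = 4    # one on-board M.2 slot (assumption)
nic         = 1    # single-lane NIC, e.g. a 2.5GbE controller (assumption)

remaining = total_lanes - m2_slot - nic
sff8643_ports = remaining // 4        # one x4 link per SFF-8643 connector
spare = remaining % 4

print(f"{sff8643_ports} x4 connectors, {spare} lanes spare")  # 4 x4 connectors, 3 lanes spare
```

So even with an M.2 slot and a NIC carved out, four x4 riser connectors fit comfortably.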
What do you think?