Rumor: 28-core ITX from ASRock Rack, the EPC621D4I-2M

rokabeka

network packet manipulator
Jul 9, 2016
248
268
oh my!
I have an engineering sample Skylake (or two... :D). This must go into an S4M-C, maybe with a single-slot GPU and a NIC. Based on screenshots of the BIOS, it should support PCIe bifurcation, as expected from ASRock.
 

Elerek

Cable-Tie Ninja
Jul 17, 2017
228
165
I would imagine their (flawed, IMO) thinking with only 1GbE was that the PCIe slot would most likely be used for a network card.
 

VegetableStu

Shrink Ray Wielder
Aug 18, 2016
1,949
2,619
While we're on it, how much performance impact does it have on the CPU when it has to deal with NVMe and 10GbE? o_o There's only the DMI and an x16 connection between the CPU and the other parts of the system...
 

Syr

Cable Smoosher
Feb 17, 2018
10
8
How is the Dynatron for noise?
Personally I find its noise easily bearable, but I don't really prioritize silence and I've got fans and other machines running around me. On its own though (i.e. alone, in a room with no other fans running or other noise sources), it is certainly audible when cooling a fully loaded 165W+ CPU.
 

QuantumBraced

Master of Cramming
Mar 9, 2017
507
358
As I was staring at that mini-SAS connector, I had this thought: instead of compromises or risers like on the X299 board, why don't they just use a connector like mini-SAS or U.2 and have a sleeved cable that terminates in several headers/connectors, like USB 3.0, multiple SATA, fan headers, power button/LEDs, even M.2 boards that you can tape somewhere in your case? They could have 2-3 U.2 connectors (used only for the wiring) that effectively provide all of the connectivity for the board and free up a lot of room on the actual board. It wouldn't be very clean and would require more cable management, but you'd be able to fit a lot more on the board.

And with the freed-up room on the back, they could put a lot of the chips there, even the chipset, if they used a low-profile long heatsink with a heatpipe. That would then free up even more room on the front for additional I/O, perhaps even 2 more power phases. To go along with that, they could design a new SODIMM slot that allows the SODIMMs to sit closer together. On the hexa-channel version of this board they have 3 slots on each side of the socket with the soldering points very close, but they have to put one on the back because the slots are too bulky. They could have them all on the front, maybe angled a little.

And yeah, all of that would require more R&D and make the board even more expensive, but this is a board for $10K CPUs; it could easily cost $700 and not be a big deal.
 
  • Like
Reactions: Phuncz

rokabeka

network packet manipulator
Jul 9, 2016
248
268
While we're on it, how much performance impact does it have on the CPU when it has to deal with NVMe and 10GbE? o_o There's only the DMI and an x16 connection between the CPU and the other parts of the system...

Short answer, if I get the question right: you need a really special case to see performance degradation.

Long and boring explanation:
DMI v3 x4 goes to the PCH (plus there is an x1 PCIe uplink), so the theoretical bandwidth is something like 3.93 GB/s + 925 MB/s between the CPU and everything the PCH handles (including the 2 NVMe x4 drives in the case of the 4-DIMM version of the mobo). So yes, theoretically you can create a bottleneck if you plan to use extremely fast M.2 drives like two 970 EVOs, and/or use up the total bandwidth of the 2 USB 3.0 interfaces too. In an I/O-intensive case like that, where you frequently need 4 GB/s (gigabytes per second) of data through the PCH, you might want to go with PCIe storage instead, installed directly in the x16 slot (with or without bifurcation). This won't happen with the 6-DIMM version though, where there are no M.2 slots.
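A quick back-of-the-envelope sketch of those numbers (my own figures, assuming PCIe 3.0 at 8 GT/s per lane with 128b/130b encoding, and ~3.5 GB/s per drive as a typical 970 EVO-class value rather than anything from a spec sheet):

```python
# Back-of-the-envelope DMI math: PCIe 3.0 runs at 8 GT/s per lane with
# 128b/130b encoding, so each lane carries just under 1 GB/s in each direction.
def pcie3_gbytes_per_s(lanes: int) -> float:
    return lanes * 8e9 * (128 / 130) / 8 / 1e9

dmi_x4 = pcie3_gbytes_per_s(4)   # DMI 3.0 is electrically a Gen3 x4 link
two_nvme = 2 * 3.5               # two ~3.5 GB/s drives (970 EVO-class, assumed)

print(f"DMI 3.0 x4 uplink : {dmi_x4:.2f} GB/s")   # ~3.94 GB/s
print(f"two fast NVMe SSDs: {two_nvme:.1f} GB/s -> enough to saturate the uplink")
```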
If you use PCIe bifurcation appropriately (e.g. x4 for the 10G NIC, another x4 for storage, and x8 for a GPU), you most likely will not see any performance impact on any device.

ASRock followed Intel's recommendation for connecting the CPU to the PCH, because the C621 has neither QAT (Quick Assist Technology, dedicated hardware to offload e.g. encryption from the CPU) nor a 4x10G NIC embedded.
Using the PCIe lanes of the CPU typically does not create a bottleneck; I have never seen a case where using all the PCIe lanes of a CPU slowed them down. How you distribute your resources across the physical cores, however, might be important (CPU pinning, CPU isolation, etc.).
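(For the pinning part, a minimal Linux-only sketch, assuming Python 3.3+ where os.sched_setaffinity is available; the core numbers are purely illustrative:)

```python
import os

# Restrict the current process (pid 0 = self) to cores 0-3, leaving the
# remaining cores free for other work such as NIC interrupt handling.
os.sched_setaffinity(0, {0, 1, 2, 3})
print("now allowed on CPUs:", sorted(os.sched_getaffinity(0)))
```

(The same thing can be done from the shell with taskset, and isolcpus / cgroups cover the isolation side, but that goes beyond this thread.)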

A typical 2x10G NIC does not even eat up a PCIe x4 link (the theoretical bandwidth of x4 is ~32 Gbps, gigabits per second); for that you need a 40G NIC. To use up all 16 lanes your best choice is a 100G NIC (e.g. Mellanox ConnectX-5), and there is still significant overhead remaining.
And modern systems (at least since Sandy Bridge) have DMA between the NIC and the CPU, so actually making the packet reach the CPU's L3 cache is almost 'effortless': it is done purely in hardware and does not use processing power on the actual CPU cores (apart from handling interrupts if you are in interrupt mode, but at high packet rates even Linux can switch from interrupt mode to poll mode, making packet processing more efficient; that is a completely different topic again :) ).
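For reference, a rough comparison of theoretical PCIe 3.0 slot bandwidth against NIC line rate (my own numbers, one direction only; real cards lose a bit more to TLP/protocol overhead):

```python
# Theoretical one-direction PCIe 3.0 bandwidth (8 GT/s, 128b/130b) vs. NIC ports.
def pcie3_gbits_per_s(lanes: int) -> float:
    return lanes * 8 * (128 / 130)

for lanes, nic in [(4, "2 x 10G = 20G"), (8, "1 x 40G = 40G"), (16, "1 x 100G = 100G")]:
    print(f"PCIe 3.0 x{lanes:<2}: {pcie3_gbits_per_s(lanes):6.1f} Gbit/s   vs   {nic}")
```

That works out to roughly 31.5 / 63 / 126 Gbit/s for x4 / x8 / x16, which is where the ~32 Gbps figure above comes from.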

So if you choose an underpowered CPU for your workload, like a W-2125 (4 cores / 8 threads), and run high-resolution virtual reality with streaming, then the CPU itself will not be enough to serve your storage/GPU/NIC.
 

Revenant

Christopher Moine - Senior Editor SFF.N
Revenant Tech
SFFn Staff
Apr 21, 2017
1,674
2,708
Insert token “will my L9i be able to cool it?” comment.

Answer: No. No it will not.
 
  • Like
Reactions: Solo

VegetableStu

Shrink Ray Wielder
Aug 18, 2016
1,949
2,619
Using the PCIe lanes of the CPU typically does not create a bottleneck; I have never seen a case where using all the PCIe lanes of a CPU slowed them down. How you distribute your resources across the physical cores, however, might be important (CPU pinning, CPU isolation, etc.).
If you choose an underpowered CPU for your workload, like a W-2125 (4 cores / 8 threads), and run high-resolution virtual reality with streaming, then the CPU itself will not be enough to serve your storage/GPU/NIC.
yeah that's what I'm wondering about ._. thanks!
 

Analogue Blacksheep

King of Cable Management
Dec 2, 2018
833
689
Just a thought: the name here is for a server board, so by that logic I reckon there is still a workstation board yet to appear. Hope it has more USB ports!

@ASRock System Is there any chance of this happening?
 
Last edited:

Supercluster

Average Stuffer
Original poster
Feb 24, 2016
87
127
I really hope we see an ASRock Threadripper 3 mITX X599 motherboard in the future...!
I've had some discussion about this over the past few days.
My concern is that the TR4 socket seems to be slightly larger, as well as having more pins. I know little about CPU trace layout, but these two small and simple facts might just make it highly unprofitable, and therefore unlikely to happen.
I always welcome professional insights and opinions, so please, anyone with knowledge, educate us.
 

SystmSix

Minimal Tinkerer
New User
Jun 5, 2019
3
0
Now that I have built 2 Windows Server 2016 Essentials servers on its predecessor board, I think it's time to build a new server with this board. This time I'll start with Server 2019 Essentials... Once done, I will post pictures here... My other servers run 24/7 in businesses, flawlessly, on 128GB RAM and RAID 10...
 

Windfall

Shrink Ray Wielder
SFFn Staff
Nov 14, 2017
2,117
1,583
It's not needed, they're making two variants of that. The other one will have 6 SODIMM sockets and no M.2 sockets below it o_o
(see comment above)

Going back to this, would it be possible to run a riser off the M.2 slots on the model that has rear RAM mounting? Then you could have the best of both worlds!