This morning we posted about the amazing ASRock DeskMini GTX and DeskMini RX, and now we have shots of the new Micro-STX motherboard that makes this all possible: the ASRock Z270M-STX MXM!
Read more here.
Yeah, I'm down, is your plan for this mostly to sell direct to OEMs for SFF prebuilts like the MSI Trident 3?
With the multi-threaded performance of the new AMD CPUs and their awesome pricing, I have to admit that my interest is slowly shifting toward a Ryzen mini-ITX build, especially with the Ryzen 5 lineup just around the corner.
Maybe consider making an STX board with an AMD chipset?
Mini STX supports up to 32gb. Would a single 32gb ram card (in order to get 2 x 32gb RAM) work with this motherboard? I've heard of 32gb SoDIMM cards, but can't find any on the net right now. But before I scour the net, is there any reason they wouldn't work on this motherboard?
Ok, no need for over 640kB ram anyhow.
Should just put a 180mm radiator and fan on the back of the motherboard.
You would need some kind of heatsink that wasn't electrically conductive, right?
I've thought about something like this. You could put a plastic sheet between the two to act as an insulator. I was told a thin plastic sheet would have a negligible effect on thermal conduction.
I believe, though I'm not certain, that the limit for non-ECC DDR4 SODIMMs is 16GB per stick. I've also seen 32GB SODIMMs, but they were ECC and wouldn't be supported by this board or the CPUs that go in it.
... Huh actually. There's Kaby Lake Xeons now that go in socket 1151 and support ECC. I wonder if this board does... Would be pricey for sure.
Basically, the way the I/O on your system works is that certain devices (the GPU among them) get PCIe lanes directly from the CPU, while RAM talks to the CPU's integrated memory controller. Because there aren't enough PCIe lanes direct from the CPU to meet the needs of the whole system, everything else has to go through the PCH, which is an I/O controller hub. While this significantly increases the number of PCIe lanes your system has available, the lanes from the PCH become less effective as demand on the chipset goes up. What this means for a 3-drive NVMe array is that when you're drawing from multiple drives at once, the PCH may become something of a bottleneck. That being said, in terms of real-world or perceivable performance you may or may not notice an issue.

I do know that the PCH is supposed to be the limiting factor in NVMe RAID 0, as it has a max throughput of something like 4GB/s. So if you have two 960 Pros running in RAID 0, each with a theoretical bandwidth of 3500 MB/s, you're only going to see about a 14% performance boost from adding the second drive, since the entire array runs into the ~4000 MB/s "cap". Obviously using slower/cheaper drives would reduce this effect, since they'll hit the bottleneck later.
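The back-of-envelope math above can be sketched in a few lines of Python. Note the ~4000 MB/s PCH cap and the 3500 MB/s per-drive figure are the rough estimates from the comment, not measured numbers:

```python
# Rough estimates from the comment above (not measured values):
PCH_CAP_MBPS = 4000   # approximate DMI/PCH throughput ceiling
DRIVE_MBPS = 3500     # theoretical sequential bandwidth of one 960 Pro

def effective_throughput(n_drives, drive_mbps=DRIVE_MBPS, cap=PCH_CAP_MBPS):
    """Combined sequential throughput of n drives behind the PCH cap."""
    return min(n_drives * drive_mbps, cap)

single = effective_throughput(1)        # one drive: 3500 MB/s
raid0 = effective_throughput(2)         # two drives: capped at 4000 MB/s
boost = (raid0 - single) / single       # fractional gain from the second drive
print(f"RAID 0 gain: {boost:.0%}")      # roughly 14%
```

With slower drives (say 1800 MB/s each) the pair stays under the cap, so RAID 0 scales closer to the full 2x until total demand reaches the PCH's limit.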