SFF.Network ASRock Z270M-STX MXM Micro-STX Motherboard Pictured

LocoMoto

DEVOURER OF BAKED POTATOES
Jul 19, 2015
287
335
Yeah, I'm down. Is your plan for this mostly to sell direct to OEMs for SFF prebuilts, like the MSI Trident 3?

Sort of - they are planning to use it with their own barebones system, unless I've been out of the loop for too long.

With the multi-threaded performance of the new AMD CPUs and their awesome pricing, I have to admit that my interest is slowly shifting to a Ryzen mini-ITX build, especially with the Ryzen 5 lineup just around the corner.

Maybe consider an STX board with an AMD chipset?

I could see that. It might be a massive undertaking to develop the µSTX form factor on a new platform, though, not to mention that the lack of an iGPU on the Zen cores could complicate the barebones system (despite a dGPU likely being present).
 

Phryq

Cable-Tie Ninja
Nov 13, 2016
217
71
www.AlbertMcKay.com
Would a single 32GB RAM module (in order to get 2 x 32GB of RAM) work with this motherboard? I've heard of 32GB SO-DIMMs, but can't find any on the net right now. Before I scour the net, is there any reason they wouldn't work on this motherboard?
 

GentlemanShark

Asus RMA sucks
Marsupial Computing
Dec 22, 2016
358
148
Would a single 32GB RAM module (in order to get 2 x 32GB of RAM) work with this motherboard? I've heard of 32GB SO-DIMMs, but can't find any on the net right now. Before I scour the net, is there any reason they wouldn't work on this motherboard?
Mini-STX supports up to 32GB.
 

vluft

programmer-at-arms
Jun 19, 2016
159
140
Would a single 32GB RAM module (in order to get 2 x 32GB of RAM) work with this motherboard? I've heard of 32GB SO-DIMMs, but can't find any on the net right now. Before I scour the net, is there any reason they wouldn't work on this motherboard?

I believe, though I'm not certain, that the limit for non-ECC DDR4 SO-DIMMs is 16GB per module - I've also seen 32GB SO-DIMMs, but they were ECC and would not be supported by this board or the CPUs that go in it.
 

Phryq

Cable-Tie Ninja
Nov 13, 2016
217
71
www.AlbertMcKay.com
OK, no need for over 32GB of RAM anyhow.

Is there any cheapish, not-too-hot GPU that can go with this? Something slightly better than the integrated graphics on a 7700K.
 

Phryq

Cable-Tie Ninja
Nov 13, 2016
217
71
www.AlbertMcKay.com
Right, I just meant my current needs. Actually, I *could* benefit from more, but I can use fast-read NVMe disks as a workaround.

Heat is my big issue, which is why I'm wondering about cool-running GPUs.


I wonder, would it be possible to just put the entire motherboard inside this?

https://www.amazon.com/dp/B00RVAEI1E/?tag=theminutiae-20

And use that as a case? I guess the question would be RAM clearance. It's a $50 passive cooling case.

Maybe HDPlex could make a custom-fit passive cooling case.
 

thewizzard1

Airflow Optimizer
Jan 27, 2017
334
251
I think STX is larger than that (5x5 IIRC, plus the MXM area) and that cooler is more like 4"x8"; plus it has no heat plate for the MXM area, and I doubt it'd mount cleanly on either the CPU or the MXM. There are 'gigantic'-scale CPU coolers that might work better.

This was my baby, and it kept my i7-950 cool through its lifespan at almost 4GHz: https://www.google.com/search?rlz=1C1CHZL_enUS708US708&espv=2&biw=1600&bih=1110&tbm=isch&sa=1&q=cooler+master+gemini+ii&oq=cooler+master+gemini+ii&gs_l=img.3..0i30k1j0i10i24k1l2.83304.85692.0.85891.23.19.0.2.2.0.156.1671.12j5.17.0....0...1c.1.64.img..5.18.1566...0j35i39k1j0i67k1.wWv68HGTVH0
 

Phryq

Cable-Tie Ninja
Nov 13, 2016
217
71
www.AlbertMcKay.com
Ya, I've thought about gigantic coolers that could have a case built around them, like this



So the cooler would be the lid of the case, more or less. But for the ASRock, same problem - it needs GPU cooling (unless there's some very simple / lightweight GPU, like a 'U' version of a GPU. I know nothing of GPUs).
 

Kmpkt

Innovation through Miniaturization
KMPKT
Feb 1, 2016
3,382
5,936
Should just put a 180mm radiator and fan on the back of the motherboard.
 

vluft

programmer-at-arms
Jun 19, 2016
159
140
I believe, though I'm not certain, that the limit for non-ECC DDR4 SO-DIMMs is 16GB per module - I've also seen 32GB SO-DIMMs, but they were ECC and would not be supported by this board or the CPUs that go in it.

... Huh, actually, there are Kaby Lake Xeons now that go in socket 1151 and support ECC. I wonder if this board does... Would be pricey for sure.
 

Phryq

Cable-Tie Ninja
Nov 13, 2016
217
71
www.AlbertMcKay.com
This shows that you won't get full throughput from all the 2280 slots at the same time? So will drive read times be bottlenecked by the motherboard?




... Huh, actually, there are Kaby Lake Xeons now that go in socket 1151 and support ECC. I wonder if this board does... Would be pricey for sure.

Sorry for derailing, but I guess you mean something like an Intel Xeon E3-1275?
 

Kmpkt

Innovation through Miniaturization
KMPKT
Feb 1, 2016
3,382
5,936
Basically, the way the I/O on your system works is that certain devices connect directly to the CPU - GPUs get PCIe lanes straight from it, and RAM talks to the CPU's integrated memory controller. Because there aren't enough PCIe lanes coming directly from the CPU to meet the needs of the whole system, everything else goes through the PCH, which is an I/O controller hub. While this significantly increases the number of PCIe lanes your system has available, the lanes from the PCH become less effective as demand on the chipset goes up. What this means for a 3-drive NVMe array is that when you're reading from multiple drives at once, the PCH may become something of a bottleneck.

That being said, in terms of real-world or perceivable performance you may or may not notice an issue. I do know that the PCH is supposed to be the limiting factor in running NVMe RAID 0, as it has a max throughput of something like 4 GB/s, so if you have two 960 Pros in RAID 0, each with a theoretical bandwidth of 3500 MB/s, you're only going to see about a 14% performance boost from adding the second drive, since the whole array is subject to a roughly 4000 MB/s "cap". Obviously, using slower/cheaper drives would reduce this effect, since they will hit the bottleneck later.
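
A quick back-of-the-envelope sketch of that scaling math, using the assumed numbers above (a roughly 4000 MB/s PCH/DMI cap and 3500 MB/s per drive; actual figures vary by platform and workload):

# Rough model of sequential-read scaling for NVMe RAID 0 behind the PCH.
# Numbers are the assumptions from the post above, not measured values.
PCH_CAP_MBPS = 4000   # assumed DMI/PCH ceiling
DRIVE_MBPS = 3500     # assumed per-drive sequential read (e.g. a 960 Pro)

def array_throughput(num_drives: int) -> float:
    """Ideal striped throughput, clipped by the PCH/DMI ceiling."""
    return min(num_drives * DRIVE_MBPS, PCH_CAP_MBPS)

one_drive = array_throughput(1)    # 3500 MB/s
two_drives = array_throughput(2)   # capped at 4000 MB/s
gain = 100 * (two_drives - one_drive) / one_drive
print(f"1 drive: {one_drive} MB/s, 2 drives: {two_drives} MB/s, gain: {gain:.0f}%")
# -> gain of roughly 14%, matching the figure quoted above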
 

Phryq

Cable-Tie Ninja
Nov 13, 2016
217
71
www.AlbertMcKay.com
Basically, the way the I/O on your system works is that certain devices connect directly to the CPU - GPUs get PCIe lanes straight from it, and RAM talks to the CPU's integrated memory controller. Because there aren't enough PCIe lanes coming directly from the CPU to meet the needs of the whole system, everything else goes through the PCH, which is an I/O controller hub. While this significantly increases the number of PCIe lanes your system has available, the lanes from the PCH become less effective as demand on the chipset goes up. What this means for a 3-drive NVMe array is that when you're reading from multiple drives at once, the PCH may become something of a bottleneck.

That being said, in terms of real-world or perceivable performance you may or may not notice an issue. I do know that the PCH is supposed to be the limiting factor in running NVMe RAID 0, as it has a max throughput of something like 4 GB/s, so if you have two 960 Pros in RAID 0, each with a theoretical bandwidth of 3500 MB/s, you're only going to see about a 14% performance boost from adding the second drive, since the whole array is subject to a roughly 4000 MB/s "cap". Obviously, using slower/cheaper drives would reduce this effect, since they will hit the bottleneck later.

OK, so does this mean RAM is subject to the same bottleneck? Is the CPU's I/O basically capped at 4000 MB/s, and can only upgrading the CPU increase that?
 

Kmpkt

Innovation through Miniaturization
KMPKT
Feb 1, 2016
3,382
5,936
This should help explain things. RAM falls outside of the PCH bottleneck as far as I know - it talks to the memory controller on the CPU rather than going through the chipset.
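
For a rough sense of scale, here's a back-of-the-envelope comparison (assuming dual-channel DDR4-2400; the exact figure depends on the kit and CPU):

# Theoretical DDR4 bandwidth vs. the ~4 GB/s PCH/DMI ceiling discussed above.
# Assumes dual-channel DDR4-2400; actual numbers depend on the kit and CPU.
CHANNELS = 2
BYTES_PER_TRANSFER = 8      # each 64-bit channel moves 8 bytes per transfer
TRANSFERS_PER_SEC = 2400e6  # DDR4-2400 = 2400 million transfers per second

ram_gb_per_s = CHANNELS * BYTES_PER_TRANSFER * TRANSFERS_PER_SEC / 1e9
print(f"Theoretical RAM bandwidth: {ram_gb_per_s:.1f} GB/s")  # ~38.4 GB/s
print("PCH/DMI ceiling: ~4 GB/s")  # RAM never touches this path, so it isn't capped by it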