SFF.Network ASRock Z270M-STX MXM Micro-STX Motherboard Pictured

vluft

programmer-at-arms
Jun 19, 2016
159
140
Sorry for derailing, but I guess you mean something like an Intel Xeon E3-1275?

Yup, though for this you might as well go with the 1280 so you're not wasting transistors on onboard graphics. (Though for non-GPU workstation needs (which is what, basically software developers? :D), the probably-upcoming non-MXM version of this board plus a 1275 would make a pretty nice compact machine: NVMe SSDs and 64 GB of ECC RAM...)
 

Kmpkt

Innovation through Miniaturization
KMPKT
Feb 1, 2016
3,382
5,935
Pretty much the only way you're going to bottleneck the PCH is if you're moving data from two or more top-end NVMe drives simultaneously for sustained periods. I know from our PMs that you're doing music composition and moving large files back and forth. I also believe you said you're not planning on running RAID with your drives. As such, I'd look at your read/write patterns while you work and see whether you actually do this.
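If you want to actually watch those patterns, a rough Python sketch like this (assuming psutil is installed; the one-second sampling window and the plain console output are just placeholders) will print per-drive read/write throughput while you work:

```python
# Rough sketch: sample per-disk throughput while working, to see how close
# the drives ever get to the DMI 3.0 ceiling (~3.9 GB/s each way).
# The 1-second interval is an arbitrary assumption.
import time
import psutil

def sample_throughput(interval_s=1.0):
    before = psutil.disk_io_counters(perdisk=True)
    time.sleep(interval_s)
    after = psutil.disk_io_counters(perdisk=True)
    for disk, b in before.items():
        a = after.get(disk, b)
        read_mb = (a.read_bytes - b.read_bytes) / interval_s / 1e6
        write_mb = (a.write_bytes - b.write_bytes) / interval_s / 1e6
        print(f"{disk}: read {read_mb:.0f} MB/s, write {write_mb:.0f} MB/s")

while True:  # Ctrl+C to stop
    sample_throughput()
```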
 

Phryq

Cable-Tie Ninja
Nov 13, 2016
217
71
www.AlbertMcKay.com
vluft, musicians need SSDs with fast random and sequential reads, tons of RAM, and no GPU, as well as fast single-core speed (and more cores help too, depending). A GPU can help us a tiny bit... so a really low-end GPU can be of use, but it's not worth it if it's expensive.

Yes, even though I'm not streaming, I will be doing continuous reads from all 3 NVMe drives. I'm not sure how to test it on my current system, as right now I load my samples into RAM (though there is disk streaming, in a sense). An instrument can have e.g. 8,000 samples; let's say it's an 8 GB instrument to keep things simple. If I keep the first half of each sample buffered in RAM, then triggering the sample means the CPU reads the first half from RAM and the second half from disk. The faster I can read from disk, the less of each sample I need loaded into RAM.

A regular SATA SSD can read about 500 MB/s, so even with the ~4,000 MB/s DMI cap (already saturated by one NVMe drive and two SATA SSDs), that's 8 times a normal SSD's speed. The math I would need to do is: how little can I load into RAM and still be able to stream without dropouts?
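Roughly, the preload per sample has to cover the worst-case time between triggering a note and the drive delivering the rest of that sample, at the sample's playback data rate, and the total streaming demand is just voices × data rate. A toy Python sketch of that arithmetic (every figure in it is an assumption, not a measurement):

```python
# Back-of-the-envelope sketch of the preload-vs-streaming trade-off.
# Every number here is an assumption for illustration only.
sample_rate_bytes = 48_000 * 2 * 3   # one voice: 48 kHz, stereo, 24-bit
worst_case_fetch  = 0.010            # seconds until the drive delivers the tail
                                     # (includes queueing when many voices
                                     #  trigger at once; assumed)
safety            = 4                # headroom multiplier (assumed)

preload_per_sample = sample_rate_bytes * worst_case_fetch * safety
loaded_samples     = 8_000           # e.g. one large instrument
playing_voices     = 400             # simultaneously streaming voices (assumed)

print(f"preload per sample : {preload_per_sample / 1024:.0f} KiB")
print(f"RAM for preloads   : {loaded_samples * preload_per_sample / 1e6:.0f} MB")
print(f"streaming demand   : {playing_voices * sample_rate_bytes / 1e6:.0f} MB/s")
```

On those made-up numbers the streaming side has plenty of headroom even on one NVMe drive; the real unknown is the worst-case fetch time when lots of voices trigger together.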

Will there be a non-MXM version of this board that supports a Xeon 1275? That would give me some IGP performance and a clock similar to a 7700K's? Maybe I'm still better off with a 7700K... it supports 64 GB too, but I doubt the motherboard will have four RAM slots or support ECC.

Xeon 1275 base clock = 3.4 GHz
7700K base clock = 4.2 GHz
 
Reactions: iFreilicht

vluft

programmer-at-arms
Jun 19, 2016
159
140
Will there be a non-MXM version of this board that supports a Xeon 1275?

Looking around a bit, unfortunately it looks like the Kaby Lake Xeons are supported only on C232/C236 chipsets, not on any of the XX70 ones, so even if there is a non-MXM board it won't support it.
 

Kmpkt

Innovation through Miniaturization
KMPKT
Feb 1, 2016
3,382
5,935
Just don't use an MXM card in the board? Should work totally fine without it.
 

LocoMoto

DEVOURER OF BAKED POTATOES
Jul 19, 2015
287
335
Ya, I'm just assuming the non-MXM version would be smaller / cheaper / might have some other features.

I can see how you would argue that this board could suit your needs, but really... you'd be much better suited with an ATX or mATX board, for instance, and then use the PCI-E slots for your storage, either with PCI-E-to-M.2 adapters or with PCI-E storage drives. That would give you the RAM capacity, CPU lanes straight to your storage, and price effectiveness.
 
Reactions: Phryq

Phryq

Cable-Tie Ninja
Nov 13, 2016
217
71
www.AlbertMcKay.com
I've been thinking about that. PCI-E can sometimes increase system interrupts, which lowers realtime CPU performance... it might be OK; it's one more thing I'm figuring out.
 

Boil

SFF Guru
Nov 11, 2015
1,253
1,094
Ya, I'm just assuming the non-MXM version would be smaller / cheaper / might have some other features.
Keep in mind, the space where the M.2 drives are is on the backside of where the MXM GPU mounts…

If they make a smaller board, then there would not be room for the plethora of M.2 drives…

Now…! If they made the same size board, but dedicated all of the GPU PCIe lanes to M.2s, then maybe three on front & three on back, for a total of SIX M.2 drives…!?!

Doubtful, but fun to think about…!
 

Phryq

Cable-Tie Ninja
Nov 13, 2016
217
71
www.AlbertMcKay.com
Oh, good point. So actually, I'm thinking this board is right for me. It's still very small, I need the M.2, and anyhow, not having a GPU means more bandwidth on the South Bridge for my M.2s, right?
 

Phryq

Cable-Tie Ninja
Nov 13, 2016
217
71
www.AlbertMcKay.com
Does anyone know... are the M.2 slots "behind the chipset", meaning they'd be bottlenecked by the South Bridge?

And is there any use for the MXM slot other than a GPU? Could I put a PCIe SSD in there, for example?
 

LocoMoto

DEVOURER OF BAKED POTATOES
Jul 19, 2015
287
335
Does anyone know... are the M.2 slots "behind the chipset", meaning they'd be bottlenecked by the South Bridge?

I haven't seen a diagram of how the PCI-E lanes are distributed on this board, but at most I'd guess two x4 M.2 slots are connected directly to the CPU, assuming 8 of the 16 CPU lanes go to the MXM slot.
 
Reactions: Phryq

EdZ

Virtual Realist
May 11, 2015
1,578
2,107
Skylake and Kaby Lake Xeon E3s, unlike previous generations, cannot be used with consumer chipsets (e.g. Z170, H170, etc.), only with C2xx-series chipsets. Putting a Xeon on a board like this would require a separate variant with a different chipset.

The only time you will 'bottleneck' M.2 SSDs connected to the chipset is if you are reading from two at the same time (or writing to two at the same time), and even then only if a single SSD can actually saturate half of an x4 PCIe 3.0 link in the first place. If you are, for example, reading at full speed from one drive and writing at full speed to another, there will be no bottleneck (the DMI 3.0 link between CPU and chipset is full duplex). Also, if you are just copying unmodified data between drives, DMA is in effect and that data never leaves the chipset, so it never needs to cross the DMI 3.0 link at all.
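As rough arithmetic (the DMI figure is the nominal x4 PCIe 3.0-equivalent rate; the drive speeds are assumptions, not measurements of any particular SSD):

```python
# When does the DMI 3.0 link between CPU and chipset actually limit NVMe drives?
dmi3_per_direction = 3.94   # GB/s usable each way over the x4 DMI 3.0 link
nvme_read          = 3.2    # GB/s sequential read of a fast NVMe drive (assumed)
nvme_write         = 1.8    # GB/s sequential write of the same drive (assumed)

# Two simultaneous full-speed reads travel in the same direction (drives -> CPU):
print("two reads :", 2 * nvme_read, "GB/s vs", dmi3_per_direction, "GB/s -> bottleneck")

# A full-speed read plus a full-speed write use opposite directions of the
# full-duplex link, so neither direction is oversubscribed:
print("read+write:", max(nvme_read, nvme_write), "GB/s per direction -> no bottleneck")
```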
 
Reactions: Phryq

Kmpkt

Innovation through Miniaturization
KMPKT
Feb 1, 2016
3,382
5,935
Just so I understand: the only time a bottleneck should occur is if you're using really high-performance M.2 drives (Samsung 960 series) in RAID 0?
 
Reactions: Phryq

|||

King of Cable Management
Sep 26, 2015
775
759
You could also saturate the link in JBOD if the system is functioning as a server and multiple users are making requests to it. I'm not too familiar with the details of how video editing software operates, but I could see the potential when multiple video streams are run together (side-by-side or overlaid) and the processor requests large, highly sequential files from multiple drives at once.
 
Reactions: Phryq

Kmpkt

Innovation through Miniaturization
KMPKT
Feb 1, 2016
3,382
5,935
This would be one of the advantages of a Ryzen version of this board. Ryzen is designed to take one NVMe drive over a dedicated x4 link to the CPU, so hypothetically only two of the three NVMe drives would sit behind the chipset. I'm not sure whether you can RAID a chipset-attached SSD with a CPU-direct one, but at the very least you would have two separate high-performance drives with virtually no real-world performance deficit.
 
Reactions: Phryq

Phryq

Cable-Tie Ninja
Nov 13, 2016
217
71
www.AlbertMcKay.com
Maybe 'bottleneck' isn't the right word. I need to read from all disks at the same time with minimal latency, and being behind the chipset means slightly more latency than going direct to the CPU. RAID 0 won't help me, as I need random-read speed and the lowest possible latency.
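One way I could put numbers on that latency is something like this rough Python sketch (the file path is a placeholder, and the OS page cache will flatter the results unless the test file is much larger than RAM or caching is bypassed):

```python
# Rough sketch: time random 4 KiB reads from a large file on the drive under test
# to get a feel for per-request latency.
import os, random, time

PATH  = "sample_library.bin"   # placeholder: any multi-GB file on the drive
BLOCK = 4096
file_size = os.path.getsize(PATH)
latencies = []

with open(PATH, "rb", buffering=0) as f:
    for _ in range(1000):
        offset = random.randrange(0, file_size - BLOCK)
        t0 = time.perf_counter()
        f.seek(offset)
        f.read(BLOCK)
        latencies.append(time.perf_counter() - t0)

latencies.sort()
print(f"median: {latencies[len(latencies) // 2] * 1e6:.0f} µs, "
      f"p99: {latencies[int(len(latencies) * 0.99)] * 1e6:.0f} µs")
```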

I think ASRock's other STX board has its M.2 wired directly to the CPU, so I'm hoping it's the same here...