News ASRock Unveils the X299E-ITX/ac: Mini ITX + X299 + Quad-channel Memory

Mod edit:

Detailed overview of what we know about the X299E-ITX/ac thus far: https://smallformfactor.net/news/asrock-x299e-itxac-little-monster-detailed

Original:

ASRock did it! Finally, there's an Intel HEDT platform motherboard with full quad-channel DDR4 memory. The new X299E-ITX/ac is for those who need up to 18 CPU cores and up to 64 GB of quad-channel DDR4 memory in their SFF machines, for reasons. The board manages its limited PCB real estate by going vertical: it features two riser cards, one carrying a few onboard controllers and a pair of 32 Gb/s M.2 slots, and the other carrying SATA 6 Gb/s ports, a third M.2 slot, and headers such as USB 3.1. The board draws power from 24-pin ATX and 8-pin EPS connectors, conditioning it for the LGA2066 CPU with a 7-phase VRM. The lone expansion slot is a PCI-Express 3.0 x16; memory is handled by four DDR4 SO-DIMM slots. Connectivity includes two Intel I219-V-driven gigabit Ethernet interfaces, 802.11ac WLAN, and Bluetooth 4.1.



Source
 
Last edited by a moderator:

Phryq

Cable-Tie Ninja
Nov 13, 2016
217
71
www.AlbertMcKay.com
Wow, amazing community here!

"TDP ratings are based on the base clock, which no one cares about really" So the lowest (non-'turbo', etc.) clock speed of a single core? Or base clock × cores?

Anyhow... does this mean TDP is really useless? Is there any other measure to understand how much heat the chip creates?
 

QuantumBraced

Master of Cramming
Mar 9, 2017
507
358
Wow, amazing community here!

"TDP ratings are based on the base clock, which no one cares about really" So the lowest (non-'turbo', etc.) clock speed of a single core? Or base clock × cores?

Anyhow... does this mean TDP is really useless? Is there any other measure to understand how much heat the chip creates?

The all-core base clock, so for the 8700K it's 3.7 GHz. So when all 6 cores are operating at 3.7 GHz, the chip supposedly consumes 95 W at stock voltage. You can undervolt, of course, and improve that. In reality, it also depends on the load. Just because the cores are running at max frequency doesn't mean the CPU is under 100% load. And even at 100% load, it depends on what the load is. Prime95 is like the ultimate torture test, so it can cause higher power consumption. If you run ASUS RealBench (which is what I used to determine my optimal overclock), that's a more realistic power consumption figure under normal load conditions. So the TDP is more of an estimate.

Now, in the past boost clocks were not too far off of base clocks, so it didn't make that noticeable a difference from the rated TDP. The 6700K had a base clock of 4 GHz and a boost clock of 4.2 GHz on one core and 4 GHz on all cores, so the 4-core boost clock was the same as the base clock. Anything higher was overclocking territory where you had to expect higher TDP. Hence, you could cool a stock 6700K with an L9i; you may have even been able to give it a very small all-core overclock if you got a good chip.

With the 8700K, it basically comes pre-overclocked to 4.7 GHz on one core and 4.3 GHz across all 6. And they brought the base clock down to 3.7 GHz so it fit the TDP rating, but in reality no one expects their CPU to run at 3.7 GHz. Everyone is used to expecting the boost clocks at around the stated TDP, hence the confusion we all experienced when the L9i couldn't handle it. At the boost clocks the CPU is consuming a lot more than 95 W, probably closer to 115 W or so. It's just a way for Intel to give whatever TDP rating they want. The 8700 has a base clock of 3.2 GHz, so it has a rating of 65 W, but the same all-core boost clocks as the 8700K, so at full boost it consumes almost double its rated TDP. So TDP is more or less arbitrary at this point unless you want to run your CPU at base clocks, which is quite a sacrifice.
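To put a rough number on that, dynamic CPU power scales roughly with frequency times voltage squared, so you can sketch the all-core boost draw from the TDP rated at base clock. A back-of-envelope estimate (the voltages below are hypothetical assumptions, not measurements):

```python
# Back-of-envelope dynamic power scaling: P is roughly proportional to f * V^2.
# Everything below the function is an illustrative assumption, not a measurement.

def scaled_power(rated_w, base_ghz, boost_ghz, base_v, boost_v):
    """Estimate all-core boost power from the TDP rated at the base clock."""
    return rated_w * (boost_ghz / base_ghz) * (boost_v / base_v) ** 2

# i7-8700: rated 65 W at its 3.2 GHz base clock; all-core boost is 4.3 GHz.
# Hypothetical voltages: 1.00 V at base, 1.15 V at boost.
estimate = scaled_power(65, 3.2, 4.3, 1.00, 1.15)
print(f"Estimated all-core boost draw: {estimate:.0f} W")  # roughly 116 W
```

The exact figure depends on real voltages and leakage, but it shows how a 65 W rating can plausibly turn into ~115 W at the 8700's all-core boost.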

It makes sense; you can't magically add 2 more cores at the same frequency and expect the same TDP. Of course this hardly matters to the vast majority of consumers, who have coolers that can easily handle 120 W or more, but for the SFF community it's an important consideration. What I found most surprising is that Intel bundles the 8700 with a cooler that flat out cannot handle that CPU's boost clocks. I think that's the first time they've done that.

Boy, I typed a lot. Sorry for the digression from topic.
 
Last edited:

confusis

John Morrison. Founder and Team Leader of SFF.N
SFF Network
SFF Workshop
SFFn Staff
Jun 19, 2015
4,324
7,425
sff.network
So, for the LOLs, I decided to retest the HDPlex 400W DC-DC power supply on the board. It booted fine.

I don't know what I did wrong in initial testing but it seems that the config works fine. I have amended the article to remove this issue.
 

Phryq

Cable-Tie Ninja
Nov 13, 2016
217
71
www.AlbertMcKay.com
The all-core base clock, so for the 8700K it's 3.7 GHz. So when all 6 cores are operating at 3.7 GHz, the chip supposedly consumes 95 W at stock voltage. You can undervolt, of course, and improve that. In reality, it also depends on the load. Just because the cores are running at max frequency doesn't mean the CPU is under 100% load. And even at 100% load, it depends on what the load is. Prime95 is like the ultimate torture test, so it can cause higher power consumption. If you run ASUS RealBench (which is what I used to determine my optimal overclock), that's a more realistic power consumption figure under normal load conditions. So the TDP is more of an estimate.

Now, in the past boost clocks were not too far off of base clocks, so it didn't make that noticeable a difference from the rated TDP. The 6700K had a base clock of 4 GHz and a boost clock of 4.2 GHz on one core and 4 GHz on all cores, so the 4-core boost clock was the same as the base clock. Anything higher was overclocking territory where you had to expect higher TDP. Hence, you could cool a stock 6700K with an L9i; you may have even been able to give it a very small all-core overclock if you got a good chip.

With the 8700K, it basically comes pre-overclocked to 4.7 GHz on one core and 4.3 GHz across all 6. And they brought the base clock down to 3.7 GHz so it fit the TDP rating, but in reality no one expects their CPU to run at 3.7 GHz. Everyone is used to expecting the boost clocks at around the stated TDP, hence the confusion we all experienced when the L9i couldn't handle it. At the boost clocks the CPU is consuming a lot more than 95 W, probably closer to 115 W or so. It's just a way for Intel to give whatever TDP rating they want. The 8700 has a base clock of 3.2 GHz, so it has a rating of 65 W, but the same all-core boost clocks as the 8700K, so at full boost it consumes almost double its rated TDP. So TDP is more or less arbitrary at this point unless you want to run your CPU at base clocks, which is quite a sacrifice.

It makes sense; you can't magically add 2 more cores at the same frequency and expect the same TDP. Of course this hardly matters to the vast majority of consumers, who have coolers that can easily handle 120 W or more, but for the SFF community it's an important consideration. What I found most surprising is that Intel bundles the 8700 with a cooler that flat out cannot handle that CPU's boost clocks. I think that's the first time they've done that.

Boy, I typed a lot. Sorry for the digression from topic.

Thanks. So my understanding here is that an 8700 and 8700K basically have the same maximum performance? If the all-core boost clocks are the same, how is an 8700K any more powerful? And how can they have a lower TDP than a 7800X? The 7800X has a base clock of 3.5 GHz, so shouldn't it actually have a lower TDP?
 

Biowarejak

Maker of Awesome | User 1615
Platinum Supporter
Mar 6, 2017
1,744
2,262
Thanks. So my understanding here is that an 8700 and 8700K basically have the same maximum performance? If the all-core boost clocks are the same, how is an 8700K any more powerful? And how can they have a lower TDP than a 7800X? The 7800X has a base clock of 3.5 GHz, so shouldn't it actually have a lower TDP?
You can overclock the -K SKU, and they raised the expected TDP to encourage users to use a beefier cooler. Just a guess. Also, the Skylake-X architecture is less efficient than Coffee Lake. Both seem to routinely exceed their TDPs due to some motherboard settings (multi-core enhancement).
 
  • Like
Reactions: Phryq

h2plus

Chassis Packer
Oct 5, 2017
13
34
I added a PCIe capture card via an M.2-to-PCIe bridge using the front M.2 slot - worked right away on boot with no additional configuration! Very happy. So, even if that front slot is wired through the chipset, it will still work as a PCIe expansion.
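For anyone wanting to double-check that a bridged card actually enumerates, on Linux you can search the `lspci` listing. A minimal sketch - the device string below is a hypothetical example, and you'd pass no second argument to scan the live system:

```python
import subprocess

def find_pci_device(keyword, lspci_output=None):
    """Return lspci lines matching keyword (case-insensitive).

    Pass lspci_output for testing; otherwise the system's lspci is run.
    """
    if lspci_output is None:
        lspci_output = subprocess.run(
            ["lspci"], capture_output=True, text=True, check=True
        ).stdout
    return [line for line in lspci_output.splitlines()
            if keyword.lower() in line.lower()]

# Hypothetical example line; a bridged capture card shows up like any
# other PCIe device once the chipset link enumerates it.
sample = "02:00.0 Multimedia video controller: Blackmagic Design DeckLink Mini Recorder"
print(find_pci_device("blackmagic", sample))
```

If the card appears in the listing, the M.2-to-PCIe bridge is working at the bus level regardless of whether the slot hangs off the CPU or the chipset.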
 

AleksandarK

/dev/null
May 14, 2017
703
774
I added a PCIe capture card via an M.2-to-PCIe bridge using the front M.2 slot - worked right away on boot with no additional configuration! Very happy. So, even if that front slot is wired through the chipset, it will still work as a PCIe expansion.
Very interesting.

Got a pic of that?
 

QuantumBraced

Master of Cramming
Mar 9, 2017
507
358
I added a PCI-e capture card via an m.2 > pcie bridge using the front m.2 slot - worked right away on boot with no additional configuration! Very happy. So, even if that front slot is wired through the chipset, it will still work as a pcie expansion.

I did the same with my X99E-ITX/ac. Which adapter did you use? I used the DeLock and it was plug and play, no driver needed, nothing unusual in Windows, just works as a regular PCIe slot. You need a riser of course. I was concerned that on the X299 board, there might not be enough clearance to use an adapter/riser with the front M.2 if you're using an AIO. A pic would also be much appreciated!
 
  • Like
Reactions: Biowarejak

h2plus

Chassis Packer
Oct 5, 2017
13
34
I am using a Noctua cooler, so my space around the CPU is limited, and I went with this because it seems like an elegant all-in-one item with good cabling:

http://www.bplus.com.tw/ExtenderBoard/R4 Series.html

ebay link

It's very hard to take photos inside such a tiny case! Basically the cable comes off the M.2 slot, does a loop, and just barely clears the RAM. The round holes designed for liquid-cooling pass-through become my rear output for the card. A bit of luck is involved.


 

QuantumBraced

Master of Cramming
Mar 9, 2017
507
358
I am using a Noctua cooler, so my space around the CPU is limited, and I went with this because it seems like an elegant all-in-one item with good cabling:

http://www.bplus.com.tw/ExtenderBoard/R4 Series.html

ebay link

It's very hard to take photos inside such a tiny case! Basically the cable comes off the M.2 slot, does a loop, and just barely clears the RAM. The round holes designed for liquid-cooling pass-through become my rear output for the card. A bit of luck is involved.

Very cool! I used the watercooling holes on my M1 too, except I velcroed the card to the top of my GPU (may not be ideal for cooling, but it's a very low-power controller card) and I ran the cables through the watercooling holes haha. Another option is to use the third slot of course, but I'm guessing you have something there or the cable doesn't reach.

That bplus riser looks great. Good job finding it on eBay, they are pretty hard to find. I figured you'd used something like this otherwise a regular M.2-to-PCIe adapter + riser wouldn't fit in the front M.2 of the X299 board with an air cooler. I wonder if it'll fit with an AIO, probably not.

Is that an NH-U9S? What CPU are you using it with and what overclock are you running, just curious.

 
  • Like
Reactions: Phuncz

h2plus

Chassis Packer
Oct 5, 2017
13
34
Very nice! I think if my fit wasn't as naturally close to the edge as it turned out, I'd be mounting the card further in and running cables out like you did as well. I wanted to use the 3rd slot originally but I could not figure out how to run the cable past my GPU. That being said, I'm now considering running an additional PCIe riser from the rear M.2 slot with a much longer cable - I'd have to bend it backwards on itself, but I think there's juuust enough room between the back of the motherboard and the case door for that bend.

The bplus adapter is definitely hard to find! And it took a few weeks to ship to me from China, but well worth the wait. I haven't seen any that ship from the US.

I am running an i9-7920X with a U9S. It's not delidded, and it's not overclocked. From everything I read, power consumption and heat go up so fast with the i9s that I am just happy with it being "fast enough." My thermals seem to sit in the very comfortable high 60s for a lot of the work I do.

This was my build log thread, has some more pictures!
 

Phuncz

Lord of the Boards
SFFn Staff
May 9, 2015
5,947
4,952
Yes, sorry about that. I thought it was implied, but that's not very clear, good point!
 

QuantumBraced

Master of Cramming
Mar 9, 2017
507
358
G.skill cannot even provide 3800 kit, just pointless marketing

What happened to their 3200MHz 64GB kit? I can't find it anywhere, including on their website. They have a 2800MHz 64GB kit listed, but that seems to be near impossible to find; the few retailers that carry it are sold out.

The Ripjaws look awful; I wish another manufacturer would make higher-clocked memory. There are lots of options for 64GB at 2400MHz... Speed doesn't matter for 95% of applications, so that may not be a bad option if you need a lot of memory.
 
Last edited: