
10Gb/sec in ITX w/ GPU

zey

Cable Smoosher
Oct 18, 2017
8
7
Unfortunately haven't had time to test the functionality yet, but here is a simple fitment test to see if you can use a spare m.2 slot to mount a 2nd card next to a single slot GPU. Looks like you can, if you have 11mm clearance between the motherboard and backplate. That's a big if though.

Did you ever get around to testing this? I've been looking for about a year now for an mITX board with 10GbE and the ability to still use the x16 slot for graphics.
 
  • Like
Reactions: Biowarejak

hittheroadrunning

Case Bender
Aug 24, 2018
2
13
I joined just to share - I was able to add an EVGA 1050 Ti WITH an Intel X550-T1 (10GbE PCIe adapter) in a Fractal Node 202 mITX case, using a Gigabyte Z270N-WIFI (which supports bifurcation) and an Ameri-Rack ARC1-PELY423-CxV3 riser (https://www.amazon.com/gp/product/B06Y5LBY8G/?tag=theminutiae-20) - so I got two x8 PCIe slots out of one x16.

It actually fit pretty well - I had to cut the case in a few spots (the riser insert, below the insert in the case bottom, and the bottom plastic shroud).
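For anyone copying this, a quick way to sanity-check that the GPU and NIC actually trained at x8 after enabling bifurcation - a minimal sketch, assuming a Linux boot with pciutils and Python 3 (on Windows, tools like HWiNFO report the same link-width info):

```python
# Minimal sketch: list each PCIe device's negotiated link width via lspci,
# so you can confirm the GPU and the X550 both came up at x8 after enabling
# bifurcation. Assumes Linux with pciutils; run as root if the link fields
# come back empty.
import re
import subprocess

out = subprocess.run(["lspci", "-vv"], capture_output=True, text=True).stdout

current = None
for line in out.splitlines():
    if line and not line[0].isspace():
        current = line  # device header, e.g. "01:00.0 VGA compatible controller: ..."
    elif "LnkSta:" in line and current:
        m = re.search(r"Width (x\d+)", line)
        if m:
            print(f"{m.group(1):>4}  {current}")
            current = None  # one result per device
```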



 

AcquaCow

SFF Lingo Aficionado
Jul 14, 2017
113
84
I worry about that x550, especially if it runs as hot as my x540-T2. Thing overheats on full speed transfers w/o good airflow on it. Whole device shuts down.

I'm experimenting with the new Asus/Aquantia XG-C100C and hoping it'll not run as hot.
 
  • Like
Reactions: Biowarejak

rokabeka

network packet manipulator
Jul 9, 2016
248
268
I worry about that x550, especially if it runs as hot as my x540-T2. Thing overheats on full speed transfers w/o good airflow on it. Whole device shuts down.

The X550-T1 is a good choice because it has lower power consumption and better power management:
the X550-T1 needs less than 8W max, while the X540-T2's _typical_ consumption is ~17W and the X540-T1's is ~11W.
The reason I would still create some airflow there is the GPU.
In my setup I preferred having the NIC in the front, making the GPU cooling worse, because for me the NIC was loaded more frequently than the GPU (XFX RX 460 4G single slot + Intel X520).
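To put those figures side by side (wattages as quoted above, not datasheet-verified - a trivial sketch):

```python
# The NIC power figures quoted above, side by side (watts, ballpark).
nics = {"X550-T1 (max)": 8, "X540-T1 (typical)": 11, "X540-T2 (typical)": 17}

baseline = nics["X550-T1 (max)"]
for name, watts in sorted(nics.items(), key=lambda kv: kv[1]):
    print(f"{name}: ~{watts} W ({watts - baseline:+d} W vs the X550-T1)")
```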
 

hittheroadrunning

Case Bender
Aug 24, 2018
2
13
I worry about that x550, especially if it runs as hot as my x540-T2. Thing overheats on full speed transfers w/o good airflow on it. Whole device shuts down.

I'm experimenting with the new Asus/Aquantia XG-C100C and hoping it'll not run as hot.

Running these in desktops - so no huge long-term transfers. Thanks for the heads-up - I'll keep an eye on them.

FWIW I run 560SFP+ cards in my servers and haven't had any dropouts.
 
  • Like
Reactions: Biowarejak

rokabeka

network packet manipulator
Jul 9, 2016
248
268

(offtopic: more info here. and offtopic because it was 80G :D)
 
Last edited:

AcquaCow

SFF Lingo Aficionado
Jul 14, 2017
113
84
I bought one of the Dell versions of my X540 10Gig NIC that has a fan on the heatsink. The thing is WAY too loud for desktop use...

I swapped on the heatsink from my failed X540 that didn't have the fan and used it that way =P
 

aquelito

King of Cable Management
Piccolo PC
Feb 16, 2016
952
1,124
I joined just to share - I was able to add an EVGA 1050 Ti WITH an Intel X550-T1 (10GbE PCIe adapter) in a Fractal Node 202 mITX case, using a Gigabyte Z270N-WIFI (which supports bifurcation) and an Ameri-Rack ARC1-PELY423-CxV3 riser (https://www.amazon.com/gp/product/B06Y5LBY8G/?tag=theminutiae-20) - so I got two x8 PCIe slots out of one x16.

It actually fit pretty well - I had to cut the case in a few spots (the riser insert, below the insert in the case bottom, and the bottom plastic shroud).

Nice to see another bifurcation user :)

I was never able to get a non-PEG GPU to be recognized by Windows, though. I wonder how you managed!

Great job.
 
  • Like
Reactions: Biowarejak

jaagdijot

Trash Compacter
Jan 29, 2018
52
10
Dang, I wish more ITX cases had a third PCIe slot!

I'm hoping manufacturers start putting 10 GbE NICs on the upcoming premium ITX boards.
 

Valantar

Shrink Ray Wielder
Jan 20, 2018
2,201
2,225
I'm hoping manufacturers start putting 10 GbE NICs on the upcoming premium ITX boards.
Agreed, but it's unlikely given the size and power draw (and thus cooling requirements) of the controller chips needed. They're too hot to be on the back of the board, which means they'll be competing with everything else for space on the front. Shouldn't be a problem for boards fancy enough to use daughterboards, though, like the Asrock X299 ITX. Asus isn't a stranger to daughterboards on their ITX solutions either.
 

jØrd

S̳C̳S̳I̳ ̳f̳o̳r̳ ̳l̳i̳f̳e̳
sudocide.dev
SFFn Staff
Gold Supporter
LOSIAS
Jul 19, 2015
818
1,359
Agreed, but it's unlikely given the size and power draw (and thus cooling requirements)
It's not an engineering problem at this point and hasn't been for years. You can go out now and buy mITX boards w/ dual 10G SFP+ and dual 10G RJ45, w/ an additional 1G for the BMC (an example build here). They just come in at a price point above even top-tier consumer boards and don't always have the features that consumers might care about (although there are enough on the market at this point that you have choices).
 
Last edited:

AcquaCow

SFF Lingo Aficionado
Jul 14, 2017
113
84
USB 3.0 has a max power spec of 4.5W
USB 3.2 up to 7.5W

Those $100 Aquantia chipsets use 4W at 5Gbit and 6W at 10Gbit.

So you could certainly do 2.5-5Gbit in a USB 3.0 dongle...and 10gig on a newer port.
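Rough math on that, as a back-of-envelope sketch - the wattages are the figures above, and a real dongle's USB bridge overhead isn't counted, so the headroom is optimistic:

```python
# Back-of-envelope check: do the quoted Aquantia draws fit the USB power specs?
# All wattages are the figures from the post above; a real dongle's USB bridge
# and PHY overhead are not counted, so the headroom shown is optimistic.
PORT_BUDGET_W = {"USB 3.0": 4.5, "USB 3.2": 7.5}
NIC_DRAW_W = {"5 Gbit": 4.0, "10 Gbit": 6.0}

for port, budget in PORT_BUDGET_W.items():
    for mode, draw in NIC_DRAW_W.items():
        verdict = "fits" if draw <= budget else "over budget"
        print(f"{mode} on {port}: {draw:.1f} W vs {budget:.1f} W spec "
              f"-> {verdict} ({budget - draw:+.1f} W headroom)")
```

The tightest case is 5 Gbit on USB 3.0, with only ~0.5 W to spare for the rest of the dongle.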
 
  • Like
Reactions: Biowarejak

Valantar

Shrink Ray Wielder
Jan 20, 2018
2,201
2,225
USB 3.0 has a max power spec of 4.5W
USB 3.2 up to 7.5W

Those $100 Aquantia chipsets use 4W at 5Gbit and 6W at 10Gbit.

So you could certainly do 2.5-5Gbit in a USB 3.0 dongle...and 10gig on a newer port.
->


It's not an engineering problem at this point and hasn't been for years. You can go out now and buy mITX boards w/ dual 10G SFP+ and dual 10G RJ45, w/ an additional 1G for the BMC (an example build here). They just come in at a price point above even top-tier consumer boards and don't always have the features that consumers might care about (although there are enough on the market at this point that you have choices).
(emphasis added)
The board in the linked article is a perfect example: its I/O is unacceptable for consumer use, not to mention that its layout is entirely unrealistic for consumer boards, for a few reasons:
  • A socketed CPU would need dramatically more space than the CPU on that board. That CPU has a package size of 34x28mm - which is tiny. The socket alone for an AM4 or 115X platform is close to the area of the heatsink shown there, and then there's the keep-out zone around it and the cooler mounting holes. I'd estimate the socket area for AM4 or 115X (in which you can't place other components, as they'd interfere with the CPU cooler) to be 3-4x what you see on that board.
  • That's a 31W TDP Atom CPU. As such, there are barely any VRM components on the board at all - even a single high-quality power phase could handle that. I guess you could make a socketed ITX board that only supported ~35W chips, but that wouldn't be acceptable to the vast majority of people. As such, the VRM (and VRM cooling) would need significantly more space than on that Supermicro board. Particularly if you want OC options - which most people seem to: if it's a 115X board, you need it to safely provide ~200W to the CPU to sustain a 5GHz 8700K (rough numbers on the resulting VRM heat are sketched at the end of this post). That takes some space.
  • That board is also designed for server usage, in other words, an environment with heavy forced-air cooling. Hence it has zero VRM heatsinks (also due to the low power requirements, of course) and a tiny CPU heatsink, even for a 31W CPU. For consumer usage, that heatsink would likely be insufficient, and for anything more heavy-duty you'd need to make accommodations for cooling that don't involve 5000RPM fans running constantly.
  • That CPU has integrated 10GbE. In other words, it doesn't need a controller or a heatsink for that controller - it's all handled by the CPU and its heatsink.
In other words: of course this is an engineering problem. Your example to the contrary is an extremely specialized motherboard and CPU with a heap of advantages (in this regard) compared to consumer boards - none of which are transferable, regardless of cost. You can't shrink the keep-out zone or the area needed for proper VRM cooling even if you have loads of money. Nor can you shrink the CPU, or magically integrate a 10GbE controller into a CPU or chipset that doesn't have one. And so on, and so forth. This is definitely an engineering problem. Does this board look like it has much room to spare? Or this? Or this? No.
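To put rough numbers on the VRM point above - the ~200W is my estimate from earlier in this post, and the efficiencies are ballpark assumptions for a decent consumer VRM, not measurements:

```python
# Rough numbers for the VRM point above: at a given conversion efficiency,
# how much heat must the VRM itself shed while feeding the CPU ~200 W?
# The 200 W figure is the estimate from this post; the efficiency values
# are ballpark assumptions, not measured data.
CPU_POWER_W = 200.0

for efficiency in (0.95, 0.90, 0.85):
    input_power = CPU_POWER_W / efficiency   # power drawn from the 12 V input
    vrm_heat = input_power - CPU_POWER_W     # dissipated in the VRM as heat
    print(f"{efficiency:.0%} efficient VRM: ~{vrm_heat:.0f} W of heat "
          f"in the VRM area alone")
```

Even at an optimistic 95% efficiency, that's ~10W of heat concentrated in the VRM area - hence the space and the heatsinks.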
 

jØrd

S̳C̳S̳I̳ ̳f̳o̳r̳ ̳l̳i̳f̳e̳
sudocide.dev
SFFn Staff
Gold Supporter
LOSIAS
Jul 19, 2015
818
1,359
its I/O is unacceptable for consumer use
For your consumer uses, perhaps.
A socketed CPU would need dramatically more space than the CPU on that board.
Sure, if you need a socketed CPU.
~35W chips, but that wouldn't be acceptable to the vast majority of people.
Ask around here - there are plenty of people who are totes fine w/ doing exactly that.
Particularly if you want OC options
Sure, but now you're not a typical consumer, you're a niche - and an SFF overclocker is a niche of a niche.
That board is also designed for server usage, in other words, an environment with heavy forced-air cooling
Actually it's being targeted towards edge/branch use, where there is a substantially lower expectation of forced air.
You know what that sounds like? A solved engineering problem.

But don't worry about it, you've got choices:
https://b2b.gigabyte.com/Server-Motherboard/MB10-DS3-rev-13#ov
http://www.supermicro.com/products/motherboard/xeon/d/x10sdv-tln4f.cfm
http://www.asrockrack.com/general/productdetail.asp?Model=D1541D4I-2L2T#Specifications
https://b2b.gigabyte.com/Server-Motherboard/MB10-DS4-rev-13#ov
https://www.newegg.com/global/au/Product/Product.aspx?Item=N82E16813182964

The problem isn't physics. The problem is that most consumers think WiFi is fine, and vendors don't see it as a competitive offering at the cost to integrate it.

Mod Edit: Post edited to comply with community standards.
 
Last edited by a moderator:
  • Like
Reactions: Biowarejak

Valantar

Shrink Ray Wielder
Jan 20, 2018
2,201
2,225
Are you proposing that someone integrate 10GbE into consumer CPUs? While I would love that, it's not happening. They haven't bothered to do that with regular GbE, so ... Yeah. 99.9% of consumers wouldn't want to pay the $100+ extra for that - and making it into specialized SKUs would require a new socket with enough pins to make this possible, which, again, doesn't make sense for platform designers or OEMs. Integrating it into a high-end chipset would make more sense, but would make that chipset hot-running, large, and expensive, and could potentially cause bottlenecks with the CPU uplink.

Secondly: Yes, I want a socketed CPU. As does the vast majority of other people, even here on this forum. Partly to be able to choose my own CPU (rather than the 1-3 SKUs an OEM is willing to produce with soldered chips), partly to be able to replace/upgrade components if they fail or underperform, and partly to reduce costs (as soldered solutions are always more expensive). Not to mention cooler compatibility and all the other inherent strengths of standardized modular systems.

Thirdly: Being happy with a 35W CPU for a particular build is not the same as buying a motherboard that only works with 35W CPUs. Most people I've seen here re-use their hardware across quite a few configurations before it's taken out of circulation. Buying a socketed motherboard that's firmware-locked to only accept 35W CPUs would be ... impractical. Not to mention that no OEM would be willing to produce and sell this, given the obvious and major problems this would lead to (mainly heaps of RMAs from people who can't read spec sheets or buy it by mistake). And all of this is before touching on whether OEMs would at all be allowed to sell these boards at retail, as they wouldn't be compliant with the minimum requirements of their platforms.

Fourthly: I haven't said that we "can't make physics work to put 10GbE in an ITX board". Nice straw man you've got there - is it fun arguing against it? I just said that it's incredibly difficult to squeeze this into already feature-packed consumer boards - which you haven't presented a single argument against. Given what ASRock fit into their X299 ITX, there's no doubt it can be done with a daughterboard or two. But other than that? Not likely, at least until they start making the chips on smaller process nodes.


But again, I don't see your problem here. As you've shown aplenty, if you're happy with the (severe) limitations of server/edge platforms for your use case, those often have 10GbE integrated. Go for it. Buy one, use it, be happy. The thing is, if the people discussing this topic in this thread were satisfied with that, the thread wouldn't exist. Most of us don't want the 2-minute POST times and 3-4-5x cost of these boards, nor do we want/need the use-case-specific features built into them - but we do want better I/O, PCIe x16 for GPUs, standardized cooler compatibility, quality VRM setups, and so on. Most people want modern UEFIs and consumer-grade usability, too. Your arguments - in general - aren't really applicable. You say these are solved problems, yet all your solutions come with massive limitations. Oh, and all of them have CPUs with integrated 10GbE - not a single one of those boards has an on-board 10GbE controller. So, unless you know how to integrate one of those into consumer-priced CPUs (for reference, the Xeon D-1541 has a 1000-unit tray price of $581) or chipset, then no, this isn't a solved engineering problem.


Mod Edit: Post edited to comply with community standards.
 
Last edited by a moderator:
  • Like
Reactions: Biowarejak

jØrd

S̳C̳S̳I̳ ̳f̳o̳r̳ ̳l̳i̳f̳e̳
sudocide.dev
SFFn Staff
Gold Supporter
LOSIAS
Jul 19, 2015
818
1,359
Fine, let's analyse.


Are you actually proposing that someone integrate 10GbE into consumer CPUs?
Sure - I'm not convinced we will see it, but it is a solution. It wouldn't be all that difficult for Intel to do this; there are enough reserved pins in the existing sockets to just rev the socket. 2011-v3, much? No, I don't think we will see it happen, but it is one solution.
Yeah. 99.9% of consumers wouldn't want to pay the $100+ extra for that
99% of consumers don't care about 10G Ethernet.
and making it into specialized SKUs would require a new socket with enough pins to make this possible
As I said, not necessarily - but even if it did, that's not a huge problem; Intel rev sockets every couple of years. If they thought there was a competitive market advantage, then releasing a new socket to support new functionality isn't exactly ground-breaking stuff.
doesn't make sense for platform designers or OEMs
Why not? There is already a large amount of other stuff integrated into the CPU; this doesn't seem like an impossible engineering challenge to overcome. In fact, it seems like an already-solved problem for a whole host of high-speed I/O. It's also a solved problem on other architectures that do put 10G on-die (modern SPARC comes to mind) and on a number of Intel's own SKUs, as you so righteously pointed out over and over and over again.

As does the vast majority of other people
Who care substantially more about 802.11ac than they ever will about 10GBase-T.
to be able to replace/upgrade components if they fail or underperform
You mean like building w/ an mATX board and a 10G PCIe card? Do you want an integrated solution as part of your motherboard, or do you want a replaceable solution that you can dump if/when it fails?
Most people I've seen here re-use their hardware across quite a few configurations before it's taken out of circulation
So if it doesn't have a socket, you can't reuse it later. Excuse me whilst I go pull a bunch of servers out of circulation and send them to the e-waste place - they were used a few times before in other systems and have BGA CPU solutions on them. How did I not know this rule?
which you haven't presented a single argument against
My argument against it was links to a multitude of boards that have done exactly this. I didn't say it was easy; I said it was a mostly solved problem from an engineering angle, as evidenced by the fuckton of boards that have already solved it. Building the ISS also wasn't easy, and is also a mostly solved engineering problem.

And all of this is before touching on whether OEMs would at all be allowed to sell these boards at retail
Not that this is the argument I was making, but just to be clear: that's not an engineering problem, it's a market one.

The problem isn't one of engineering; it's one of addressable market and cost. I was never suggesting anyone should buy any of those boards I linked and use them as their daily driver. I was suggesting that those boards demonstrate that 10G on mITX is entirely feasible. For sure they make other compromises, but at the end of the day they have solved the engineering problem of putting 10G Ethernet on an mITX platform. We can put cold-cathode lighting and individually addressable LED arrays into sticks of RAM - an engineering problem that was almost universally considered unsolvable right up until vendors decided it was a thing consumers would want. At the end of the day, if board makers believed there was a competitive advantage to bolting 10G NICs onto their mITX consumer boards, they would do it tomorrow. If Intel thought there was a competitive advantage to doing the same w/ their consumer CPU SKUs, they would enable the functionality tomorrow. The problem isn't engineering; it's addressable market, and the cost to consumers of functionality most of them don't care about.

Also, whilst I didn't address this directly above, it occurs to me that a 10G PHY can't be as hot to implement on a chip as you claim: Intel are bolting them onto 16-core CPUs and keeping them at 35W. Sure, older 10G chipsets ran hotter than the surface of the sun - also an engineering problem that's been solved. As the tech matures and the processes get smaller, power draw goes down. It was only a few years ago that no one could put 10GBase-T into an SFP+ module because the power draw was too high; then it got lower, and now you can pick those modules up for cheap.

Mod Edit: Post edited to comply with community standards.
 
Last edited by a moderator:
  • Like
Reactions: Biowarejak

Duality92

Airflow Optimizer
Apr 12, 2018
307
330
I think the easiest way to get 10 GbE on ITX in a case is with a little modding: use an M.2 x4 PCIe to regular x4 PCIe adapter and add a PCIe 10 GbE card via the rear M.2 slot on the motherboard, to keep the aesthetic clean.
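If anyone tries that route, here's a minimal sketch for checking whether the M.2 slot actually gives the card all four lanes (Linux with pciutils and Python 3 assumed; the device address is a placeholder for your own NIC) - some boards wire a secondary M.2 slot as x2:

```python
# Sketch: compare a card's maximum link width (LnkCap) with what it actually
# negotiated (LnkSta) - handy for spotting an M.2 slot wired x2 instead of x4.
# Assumes Linux + pciutils. The default address below is a placeholder; pass
# your NIC's address (from plain `lspci`) as the first argument.
import re
import subprocess
import sys

addr = sys.argv[1] if len(sys.argv) > 1 else "02:00.0"  # placeholder address
out = subprocess.run(["lspci", "-s", addr, "-vv"],
                     capture_output=True, text=True).stdout

cap = re.search(r"LnkCap:.*?Width (x\d+)", out)
sta = re.search(r"LnkSta:.*?Width (x\d+)", out)
if cap and sta:
    print(f"{addr}: capable of {cap.group(1)}, running at {sta.group(1)}")
else:
    print("Link fields not found - try again as root, or check the address.")
```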