News [EKWB] New EK-FC Terminals are now available!

EK Water Blocks, the Ljubljana-based premium liquid cooling gear manufacturer, is expanding its EK-FC Terminal lineup! By popular demand, EK is releasing five new terminals, giving customers new ways to connect multiple graphics cards and new options when planning a custom loop.
https://www.ekwb.com/news/new-ek-fc-terminals-are-now-available/

EK has released new FC Terminals that may be very useful for those looking to do some extreme watercooling, or those limited by their cases.

The two most notable terminals for us SFF enthusiasts are:
EK-FC Terminal Angled
When going up or down isn't an option but going sideways is!


EK-FC Terminal DUAL Parallel 1-Slot
When you want to make the most of that mATX board, or if you are one of those extremists who have entered PCIe bifurcation territory!
 

iFreilicht

FlexATX Authority
Feb 28, 2015
3,243
2,361
freilite.com
Very nice, especially the single-slot terminal. I really hope bifurcation builds are going to take off with that; maybe we'll even see triple SLI in the KI Cerberus?
 

Phuncz

Lord of the Boards
SFFn Staff
May 9, 2015
5,827
4,902
Another important step for mITX through bifurcation, and for mATX with now possibly 4-GPU support, even though the latter is very limited in real-world use cases.
 

PlayfulPhoenix

Founder of SFF.N
SFFLAB
Chimera Industries
Gold Supporter
Feb 22, 2015
1,052
1,990
Another important step for mITX through bifurcation, and for mATX with now possibly 4-GPU support, even though the latter is very limited in real-world use cases.

Two dual-GPU cards in a microATX enclosure would be interesting, actually.

Vastly overpriced, overkill and impractical for the great majority of use cases. But still, interesting.
 

Vittra

Airflow Optimizer
May 11, 2015
359
90
If by "interesting" you mean "a nightmare scenario in every regard", then I fully agree! :D

Having dealt with a 690 in the NCASE M1, both on air and watercooled, the idea of 2x dual GPUs in the Cerberus makes my head hurt. Power consumption concerns, thermal constraints, the compatibility issues of Quadfire/quad SLI... oh man.
 

Phuncz

Lord of the Boards
SFFn Staff
May 9, 2015
5,827
4,902
The new-generation GPUs arriving this year are expected to have seriously reduced power consumption or insanely increased performance. Either way, four-card setups are rarely supported on AMD, and as far as I know Nvidia doesn't even support them, because SLI doesn't scale much beyond three cards.
 

|||

King of Cable Management
Sep 26, 2015
775
759
They will still have 250W TDP parts; they'll be more efficient in their operation, but there will be a greater number of ROPs, ALUs, etc. to offset that and make them as powerful as possible. The interesting part of the upcoming generation of GPUs, at least for Nvidia, will be the interconnect. Nvidia has indicated it will drop the PCI-e SLI interconnect in favor of the new NVLink for GPU-to-GPU communication. Lower latency and greater bandwidth could definitely fix some of the poor scaling seen in many applications running on multi-GPU setups.
 

PlayfulPhoenix

Founder of SFF.N
SFFLAB
Chimera Industries
Gold Supporter
Feb 22, 2015
1,052
1,990
They will still have 250W TDP parts; they'll be more efficient in their operation, but there will be a greater number of ROPs, ALUs, etc. to offset that and make them as powerful as possible. The interesting part of the upcoming generation of GPUs, at least for Nvidia, will be the interconnect. Nvidia has indicated it will drop the PCI-e SLI interconnect in favor of the new NVLink for GPU-to-GPU communication. Lower latency and greater bandwidth could definitely fix some of the poor scaling seen in many applications running on multi-GPU setups.

Have they indicated that they're bringing that to consumer parts?

Makes a lot of sense for VR, but the value proposition for practically anything else seems harder to argue. Especially since Pascal will be pretty expensive for them to make, largely due to the fab process.
 

|||

King of Cable Management
Sep 26, 2015
775
759
All the material they have released has been for Pascal as a whole; I have not seen anything to the contrary, or anything excluding consumer products. The only thing that would make them choose not to include it would be if the IP block area on the silicon for the interconnect were excessively large, which I doubt. The four lanes from each GPU would go across a bridge, just like the SLI bridge, in a different "plane." They are developing the IP anyway, and they will still need to verify and validate the functionality of an interconnect regardless, so those things won't add much expense, if any at all.
 

Phuncz

Lord of the Boards
SFFn Staff
May 9, 2015
5,827
4,902
Being Nvidia, NVLink could well be a Titan-only thing for 2016; I'm not expecting Pascal's high end to be released in anything but >$1,000 Titan and Quadro cards first. They did this with Kepler and Maxwell too (Titan before consumer cards), and NVLink seems to have been developed primarily for enterprise applications. It could very well be used for multi-card solutions, but AMD has already shown us that PCIe 3.0 bandwidth is not the issue.
 

EdZ

Virtual Realist
May 11, 2015
1,578
2,107
One issue is that no x86 consumer CPU has, or will have, an NVLink controller built in (this will be limited to custom ARM and POWER architecture chips, unless Intel and/or AMD suddenly decide to include a proprietary controller for their competitor's HPC cards). This means that any consumer GPU will have to have both a PCIe controller AND an NVLink controller, or to add an outboard chip purely to translate between NVLink and PCIe.
An on-die controller would waste a noticeable amount of die-space that could otherwise be used for hardware that actually performs a rendering task, and thus is wasted for the vast majority of systems that use a single GPU.
An off-die translation controller would potentially impact performance, and would definitely add a significant extra expense to every card for a custom chip.
Nvidia could end up making a dedicated HPC chip with NVLink but no PCIe, putting a hard split between their consumer dies and HPC dies. Or they could eat the loss of die area, with the move to a 14nm process giving some headroom, and use NVLink as a binning tool, disabling it for all (or all but very high-end) consumer cards.

Either way, PCIe has not yet been a significant bottleneck to multi-GPU performance. Until DX12 and Vulkan become more ubiquitous and PCIe bus loads increase with the draw-call bottleneck removed, NVLink probably won't be an important factor for multi-GPU gaming. I also haven't heard much from Nvidia recently about Unified Memory coming with Pascal. It's still on their slides, but it got pushed back from Maxwell to Pascal pretty quietly, so I wouldn't be surprised to see it pushed back from Pascal to Volta too.
 

|||

King of Cable Management
Sep 26, 2015
775
759
I think Nvidia will have issues trying to do unified memory with an x86 CPU. However, unified memory between GPUs may be very possible. NVLink would give a low-latency interconnect to the memory of all the cards, like multiple processors connected by QPI, albeit with the signal having to maintain integrity through a couple of connectors to a separate plane connecting the cards.

Also, if the PCI-e SLI bridge were dropped to make way for NVLink, that would remove the two PCI-e controllers (one for each interface on top of the card) that are currently present. There would probably be four NVLink controllers in their place.
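
For reference, CUDA already exposes this kind of GPU-to-GPU memory sharing over PCIe through its peer access API. A minimal sketch, assuming a machine with two CUDA-capable GPUs (the device indices and buffer size are just for illustration); NVLink would presumably carry the same traffic, just with more bandwidth and lower latency:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int nDevices = 0;
    cudaGetDeviceCount(&nDevices);
    if (nDevices < 2) {
        printf("Need at least two GPUs for peer access.\n");
        return 1;
    }

    // Ask the driver whether GPU 0 can map GPU 1's memory at all.
    int canAccess = 0;
    cudaDeviceCanAccessPeer(&canAccess, 0, 1);
    if (!canAccess) {
        printf("Peer access between GPU 0 and GPU 1 is not supported.\n");
        return 1;
    }

    // Let GPU 0 read and write allocations that live on GPU 1.
    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);  // flags argument must be 0

    // Allocate a 1 MiB buffer on GPU 1.
    float *buf1 = nullptr;
    cudaSetDevice(1);
    cudaMalloc(&buf1, 1 << 20);

    // Kernels launched on GPU 0 can now dereference buf1 directly; today
    // that traffic crosses the PCIe bus, and an NVLink bridge would simply
    // be a faster path for the same accesses.

    cudaFree(buf1);
    return 0;
}
```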
 

iFreilicht

FlexATX Authority
Feb 28, 2015
3,243
2,361
freilite.com
Well, there is already unified memory between CPUs on multi-processor boards, isn't there? I guess you couldn't really implement Uniform Memory Access, but Non-Uniform (NUMA) should be quite possible.

I didn't know the SLI connector on top of the cards was PCIe, or did I understand that incorrectly?
 

|||

King of Cable Management
Sep 26, 2015
775
759
Well, there is already unified memory between CPUs on multi-processor boards, isn't there? I guess you couldn't really implement Uniform Memory Access, but Non-Uniform (NUMA) should be quite possible.

Only AMD GPUs and CPUs can share memory pointers at this moment in their heterogeneous unified memory architecture. That is what is meant by unified memory here.

I didn't know the SLI connector on top of the cards was PCIe, or did I understand that incorrectly?

They are based on PCI-e, yes. I would imagine the protocol has been extended, similar to how DMI is an extended PCI-e protocol connection.
 

iFreilicht

FlexATX Authority
Feb 28, 2015
3,243
2,361
freilite.com
Only AMD GPUs and CPUs can share memory pointers at this moment in their heterogeneous unified memory architecture. That is what is meant by unified memory here.

They are based on PCI-e, yes. I would imagine the protocol has been extended, similar to how DMI is an extended PCI-e protocol connection.

Yes, I understood. What I meant was that there is already a protocol for unified memory access with Intel processors, so it's not as if there's nothing to hook into if you wanted to implement unified memory with the GPU. I don't see it happening with third parties, though, if there's no standard they can adhere to.
AMD has a huge advantage in that regard. Do we know whether HBM cards actually profit from that capability right now? It seems like that was hinted at when the Fury, Fury X and Nano were unveiled.

Neat, didn't know that.
 

EdZ

Virtual Realist
May 11, 2015
1,578
2,107
CUDA also makes Unified Memory available for GPU compute on Nvidia cards already. It may be that the idea of adding a unified memory extension to OpenGL or Direct3D 11 is dropped entirely, and Nvidia and AMD simply go "use Vulkan/DX12 and manage the memory yourself" via UVA (unified addressing).
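
For anyone who hasn't used it, a minimal sketch of what that looks like with the CUDA runtime today (the kernel and sizes are just for illustration): one managed allocation yields a single pointer that is valid on both the host and the device, with the runtime migrating pages on demand.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Trivial kernel: writes through the same pointer the host reads below.
__global__ void fill(float *data, int n, float value) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = value;
}

int main() {
    const int n = 1024;
    float *data = nullptr;

    // One managed allocation, visible to both host and device.
    cudaMallocManaged(&data, n * sizeof(float));

    fill<<<(n + 255) / 256, 256>>>(data, n, 42.0f);
    cudaDeviceSynchronize();  // finish on the GPU before touching it on the CPU

    printf("data[0] = %f\n", data[0]);  // no explicit cudaMemcpy needed
    cudaFree(data);
    return 0;
}
```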
 

PlayfulPhoenix

Founder of SFF.N
SFFLAB
Chimera Industries
Gold Supporter
Feb 22, 2015
1,052
1,990
CUDA also makes Unified Memory available for GPU compute on Nvidia cards already. It may be that the idea of adding a unified memory extension to OpenGL or Direct3D 11 is dropped entirely, and Nvidia and AMD simply go "use Vulkan/DX12 and manage the memory yourself" via UVA (unified addressing).

If we're honest with ourselves, that sounds exactly like something they'd tell developers to do.
 