Theoretical Kimera Cerberus builds

What platform will you probably be using?

  • Intel Socket 1150 (Z87, Z97): 6 votes (18.2%)
  • Intel Socket 1151 (Z170, ...): 14 votes (42.4%)
  • Intel Socket 2011-3 (X99, ...): 9 votes (27.3%)
  • AMD Socket AM3+ (900-series): 1 vote (3.0%)
  • AMD Socket AM4 (???): 3 votes (9.1%)
  • AMD Socket FM2+ (A88X, ...): 0 votes (0.0%)
  • Other: 0 votes (0.0%)

  Total voters: 33

Phuncz

Lord of the Boards
Original poster
SFFn Staff
May 9, 2015
5,853
4,912
So this case might be upon us soon, and with Intel's Skylake, AMD's R9 Nano and Fury X, and even NVMe M.2 SSDs bringing new and exciting tech to the world of SFF, I'd like to do some theoretical builds. What's mainly interesting is how the Fury X seemingly allows a dual 120mm CLC setup in this case, while Intel's Skylake platform offers more PCIe lanes for dual cards. This might be the tipping point at which PCIe 3.0 x8 becomes a bottleneck, and with DDR4 allowing a more than 100% increase in bandwidth *, it might be good timing to go with Skylake or Haswell-E. With NVMe PCIe 3.0 x4 SSDs, which can also be fed right from the CPU, we're lifting some serious bottlenecks with Skylake alone.
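For a rough sense of the bus numbers behind that, here's my own back-of-the-envelope arithmetic (rounded figures, not taken from any of the reviews linked here):

```python
# Rough peak figures per direction, assuming ~985 MB/s usable per PCIe 3.0 lane
# (128b/130b encoding) and ~550 MB/s as a practical SATA 6 Gb/s ceiling.
PCIE3_PER_LANE_GBS = 0.985
SATA3_GBS = 0.55

print(f"PCIe 3.0 x16: {16 * PCIE3_PER_LANE_GBS:.1f} GB/s")  # one GPU with the full x16
print(f"PCIe 3.0 x8:  {8 * PCIE3_PER_LANE_GBS:.1f} GB/s")   # each GPU when two cards share the lanes
print(f"PCIe 3.0 x4:  {4 * PCIE3_PER_LANE_GBS:.1f} GB/s vs ~{SATA3_GBS} GB/s for SATA")  # NVMe headroom
```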

With VR and 4K (probably at the same time) bringing a new jump in performance requirements, for once this seems like a well-timed event. Maybe we could go one step further and, since many very knowledgeable people are on this forum, theorize about the possible performance and differences. While I'm not going to say the new AMD cards are better or worse, I'm very interested in their performance relative to their size and the new tech they bring to the table. For SFF, it's a big deal.

Food for thought: http://www.legitreviews.com/12k-gaming-with-one-amd-radeon-r9-fury-x-graphics-card_166585

* I'm looking at DDR3-1600 vs DDR4-2400, as both are mid-range in speed and roughly the best $/performance, which is what most people usually buy.
source: http://www.corsair.com/en-us/blog/2014/september/ddr3_vs_ddr4_synthetic
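To put rough numbers on that footnote (my own arithmetic, not figures from the Corsair article): dual-channel DDR4-2400 is about 50% up on dual-channel DDR3-1600 in theoretical peak bandwidth, and the more-than-100% jump only shows up once quad-channel DDR4 on X99 enters the comparison.

```python
# Theoretical peak bandwidth = transfer rate (MT/s) * 8 bytes per transfer * channels.
def peak_gbs(mts, channels):
    return mts * 8 * channels / 1000  # GB/s

print(peak_gbs(1600, 2))  # DDR3-1600, dual channel       -> 25.6 GB/s
print(peak_gbs(2400, 2))  # DDR4-2400, dual channel       -> 38.4 GB/s (+50%)
print(peak_gbs(2400, 4))  # DDR4-2400, quad channel (X99) -> 76.8 GB/s (+200%)
```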
 

Phuncz

Lord of the Boards
Original poster
SFFn Staff
May 9, 2015
5,853
4,912
So I'll start. I have two future builds for 2015 in mind: one based on Z170 and one based on X99. I'm not sure I will be upgrading this year; it might just as well be 2016 or 2017. I'd like to buy a house someday, and feeding $2,000 into a hobby isn't helping that goal. But back to business:

Intel Z170:
- Intel Core i7-6700K
- unknown Z170 mATX board
- 4x 8GB DDR4-2400 RAM
- dual AMD Radeon Fury X
- 500GB SSD NVMe PCIe 3.0 x4 M.2
- SilverStone SX700-LPT PSU
- not sure how the CPU would be cooled

Intel X99:
- Intel Core i7-5930K
- EVGA X99 Micro2 mATX board
- 4x 8GB DDR4-2400 RAM
- dual AMD Radeon Fury X
- 500GB SSD NVMe PCIe 3.0 x4 M.2
- SilverStone SX700-LPT PSU
- not sure how the CPU would be cooled

The main issue is that I don't see the cooling working out just yet. With dual AMD Fury X cards, you'd have two 120mm x 65mm radiators (including fans), where one would sit on the lower front panel and the other on the side bracket. The bottom would need 2x 120mm fans as intakes, so both GPU radiators can work as exhausts and all that heat doesn't end up inside the case. But what about the CPU? Maybe a large heatsink that sits behind the Fury's radiator on the side panel and let it coast? But how well would that fly with the 6-core behemoth?
 

Vittra

Airflow Optimizer
May 11, 2015
359
90
While Skylake can handle both DDR3L and DDR4, I'm guessing the implementation on boards is going to be one or the other, because supporting both comes with a lot of quirks. Bandwidth concerns are negligible between DDR3-1600 and DDR4-2400 due to how Intel processors handle it. That being said, I would actually expect you to go with DDR4, because in a couple of months with Skylake some ridiculously unnecessary 3333+ speed modules will come out, driving down prices. We may also see non-ECC 16GB DIMMs by then.

5930K: I'd offer the Xeon E5-1650 as an alternative, but that depends on cost differences and availability where you live. It has ECC support and is multiplier overclockable. They are the same chip, but the Xeon will also be better binned. Both chips run cooler on average than the i7-4790K despite having 6 cores; you can thank the larger die being able to handle indium solder for that.

Air cooling the CPU is an option. Personally, in the configuration you are thinking of, I would actually go with a custom loop and ditch the AIOs to watercool everything. If the regular Furys are the same size PCB (incredibly doubtful since they will be air-cooled), I'd actually grab those instead. If it were just one card, I'd keep the AIO.

I also wouldn't discount running the GPUs as intakes. I am a huge proponent of positive pressure, but more importantly of bringing fresh, cooler ambient air through the rads to cool them. If you've got enough airflow in the case, internal temperatures won't change as much as you may suspect.
 

jeshikat

Jessica. Wayward SFF.n Founder
Silver Supporter
Feb 22, 2015
4,969
4,781
At stock speeds even the little Noctua L9x65 handles the 5930K just fine so something like that may be a good option.
 

Phuncz

Lord of the Boards
Original poster
SFFn Staff
May 9, 2015
5,853
4,912
Bandwidth concerns are negligible between DDR3-1600 and DDR4-2400 due to how Intel processors handle it.
Ah, that's not what I was expecting. Do you have an article or review about this to compare?

Personally, in the configuration you are thinking of, I would actually go with a custom loop and ditch the AIOs to watercool everything.
After spending a lot of money on a simple custom loop in an Ncase M1 and finding out I could achieve less noise with air cooling, I'm not inclined to go this route again without a very good reason. The Fury X's solution will hopefully be more than adequate and will probably fit as well as a custom loop. The other advantage is that you can remove a single card without dismantling your entire loop. Fitting dual GPUs and keeping the noise in check will be hard to do with air-cooling, but the CLC solution for the Fury X might be the next best thing.

I also wouldn't discount running the GPUs as intakes. I am a huge proponent of positive pressure, but more importantly of bringing fresh, cooler ambient air through the rads to cool them. If you've got enough airflow in the case, internal temperatures won't change as much as you may suspect.
I've had my previous setup run as positive pressure, but it ended up performing worse than using the GPU as an exhaust, which causes slight negative pressure when the GPU is at full load.

A GPU generates a lot more heat than a CPU, and this translates into all-around higher temperatures, whereas the configuration I have now (2x 120mm intake for CPU/case, 2x 120mm exhaust for GPU) works very well with minimal noise. The GPU heat gets dumped immediately outside the case, and the fraction of heat the CPU generates is less of a problem than the other way around.

I wasn't really convinced this was a good setup until I tried it after Cowsgomoo's experience with a very similar setup. But I'm glad I did.
 

Vittra

Airflow Optimizer
May 11, 2015
359
90
The scenarios in which you experienced watercooling and positive pressure are actually the ones where each shows its worst side.

I'd touch on two things regarding that:

1) The NCASE M1 isn't very good for custom loops in any scenario, and really wasn't made for watercooling, which both you and I found out the hard way with our setups. I'll admit I haven't investigated the Kimera closely enough, but if it only fits a 2x120 rad in EITHER the bottom or the front and not both, I likely would not pursue WC at all in this case. It's not enough surface area for the performance/acoustic balance I'd look for. The idea of having two separate cards' pumps emitting noise and having two failure points doesn't sit well with me.

2) Your specific air cooling scenario does make sense for neutral/slightly negative pressure in the M1. However, my advocacy of positive air pressure in this scenario is specifically for water cooling, as it usually goes hand in hand with bringing in cooler ambient air to cool the rad. In a more general sense, the Silverstone cases I use are positive pressure oriented due to the massive AP181s they employ. I don't think it's a good setup in general for air-cooled builds though.

Ah, that's not what I was expecting. Do you have an article or review about this to compare?

There's actually a fairly recent article on AnandTech that revisited this very topic. Essentially, outside of synthetic benchmarks, once you move beyond 1866-speed RAM it's very difficult to justify the additional cost. DDR3 or DDR4 doesn't really matter too much, mostly because DDR4 timings get significantly looser as the speeds increase.

DDR3 vs DDR4:

http://www.anandtech.com/show/8959/...to-3200-with-gskill-corsair-adata-and-crucial

Old article about DDR3:

http://www.anandtech.com/show/6372/...-to-ddr32400-on-ivy-bridge-igp-with-gskill/14
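As a quick worked example of the timings point: first-word latency barely moves even as the rated speed climbs. The kit timings below are typical retail examples I picked, not numbers from the articles above.

```python
# First-word latency in nanoseconds = CAS latency / (transfer rate / 2),
# which simplifies to 2000 * CL / MT/s.
def latency_ns(cl, mts):
    return 2000 * cl / mts

for name, cl, mts in [("DDR3-1600 CL9", 9, 1600),
                      ("DDR3-1866 CL10", 10, 1866),
                      ("DDR4-2400 CL15", 15, 2400),
                      ("DDR4-3200 CL16", 16, 3200)]:
    print(f"{name}: {latency_ns(cl, mts):.2f} ns")  # 11.25, 10.72, 12.50, 10.00
```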
 

jeshikat

Jessica. Wayward SFF.n Founder
Silver Supporter
Feb 22, 2015
4,969
4,781
I'll admit I haven't investigated the Kimera closely enough, but if it only fits a 2x120 rad in EITHER the bottom or the front and not both, I likely would not pursue WC at all in this case.

With the PSU mounted to the rear (like the Compact Splash has it, but with an ATX PSU), 240mm rads should fit on both the front and the bottom. Tube routing and space for the pump and reservoir are the tricky part.
 

Vittra

Airflow Optimizer
May 11, 2015
359
90
Thanks for that confirmation. That already gives me two ideas. One eschews the reservoir entirely: since the top panel is removable, the top of the front radiator would likely be the highest point in the loop, so using it to bleed and drain would be easy. The other mounts a very small res/pump combo to the front rad with a bracket.
 

Phuncz

Lord of the Boards
Original poster
SFFn Staff
May 9, 2015
5,853
4,912
The idea of having two separate cards' pumps emitting noise and having two failure points doesn't sit well with me.
A valid point! Something I want to add, which many people also forget, is that watercooling has a lot more failure points: the seal on a block (that happened to me this year), the fittings, and many other factors involving degradation over time (corrosion, tube damage). But I'm inclined to think that these CLCs are robust enough to last their expected usage time. Maybe I'm naive in this regard. In the end, nothing beats air cooling for sheer reliability.

There's actually a fairly recent article on AnandTech that revisited this very topic. Essentially, outside of synthetic benchmarks, once you move beyond 1866-speed RAM it's very difficult to justify the additional cost. DDR3 or DDR4 doesn't really matter too much, mostly because DDR4 timings get significantly looser as the speeds increase.
Thanks for the links, I may have read these in the past but in a different mindset. Something I have yet to see is an article that looks at 4K and running out of VRAM, and whether DDR4's increased bandwidth helps, using more modern cards like the GTX 780 and Radeon 290X, preferably in SLI/CF. Maybe the point is moot because by the time 4K really digs in, most cards will have more than 4GB, but I'm still curious about this. Because VR is all about the lowest possible latency and consistent frame times, this might be something to look out for.
 

Vittra

Airflow Optimizer
May 11, 2015
359
90
No problem. Based on my understanding (someone can correct me if I'm wrong), when you run out of VRAM and the game actually requires more, communication then occurs over PCIe, which is abysmally slower than accessing the RAM available directly on the card. As this is the source of the bottleneck, I'm not sure the increased bandwidth of DDR4/quad channel will matter.
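To put "abysmally slower" in perspective, here's a rough peak-bandwidth comparison (my own round numbers, not benchmark results):

```python
# Local VRAM bandwidth vs what PCIe 3.0 x16 can move per direction.
local_vram_gbs = {
    "R9 290X GDDR5": 320,
    "Fury X HBM": 512,
}
pcie3_x16_gbs = 16 * 0.985  # ~15.8 GB/s

for card, bw in local_vram_gbs.items():
    print(f"{card}: {bw} GB/s on-card vs {pcie3_x16_gbs:.1f} GB/s over the bus "
          f"(~{bw / pcie3_x16_gbs:.0f}x slower when spilling into system RAM)")
```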

I said "requires" specifically because a lot of games now cache into all available VRAM, but do not necessarily use it. This causes a lot of misinformation, as many people are led to believe they need more than they actually do.

I should note that integrated video (iGPU/APU), due to its nature, does see quite significant boosts from higher-speed RAM in gaming and benchmarks.
 

jsco

Average Stuffer
Feb 2, 2016
60
55
the dual fury x build is just a really tough fit for this case. there is no good answer for where to put those thick, hot radiators. using the exhaust from one of them to cool the CPU has three issues: one, you can't fit a very large heat sink under there because of the rad+fan thickness, two, moving the fan away from the cooler will significantly decrease its effectiveness, and three, you're blowing hot air on it. you might get away with stock speeds that way, but forget OC (which would be a shame on haswell-E).

a custom loop with two GPUs is also a bad fit in this case. just not enough radiator space, period. even 240+140 would be pushing it into high delta T, high fan speed territory for two 275W cards plus a 140W processor (=690W total). (isn't 700W the biggest SFX-L PSU anyway?)

GPU air coolers, on the other hand, work great with this case design. leaving a free slot between the cards and having a front fan as intake is the ideal cooling scenario for these cards and should give totally reasonable temps at average sound levels. this is pretty much the only viable SLI configuration for this case; for anything else, you'll be fighting the design and getting sub-par results.

my theoretical build is an i7-5820k and a single 980 Ti, with an nzxt kraken x41 AIO cooler for the CPU and air cooling for the GPU. this could be bumped up to 2x 980 Ti's and a 5930k, and you'd be just barely over the 700W ceiling after memory and disks. this might work as long as you don't run furmark and prime95 at the same time, or use it for folding...
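rough worst-case budget for that bumped-up config, using nominal TDPs and my own guess for the rest of the system:

```python
# nominal TDPs plus an estimate for everything else; not measured draw
budget_w = {
    "2x GTX 980 Ti (250 W each)": 2 * 250,
    "i7-5930K": 140,
    "board / RAM / SSDs / fans / pump (guess)": 70,
}
print(sum(budget_w.values()), "W vs the 700 W SFX-L ceiling")  # -> 710 W
```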
 

iFreilicht

FlexATX Authority
Feb 28, 2015
3,243
2,361
freilite.com
What would the point of a dual Fury X build even be? Just getting a Fury X2 seems like a much more sensible solution, no? Unless you're going for the challenge, of course.

With a single 980 Ti, your build sounds like a perfect fit for the case. Not sure what your primary use case is, but if you wanted to make sure that the system will run stably even under unrealistically high loads, you have a few options:
  1. Get a lower-wattage processor
  2. Undervolt/underclock the GPUs or CPU
  3. Upgrade to an ATX PSU with 800W+ and put the Kraken on the front radiator mount
 

Vittra

Airflow Optimizer
May 11, 2015
359
90
You get two separate GPUs so that you can sell one once you realize the futility of multi-GPU setups, and not have the hassle of trying to sell a dual-GPU card.

:)
 

jeshikat

Jessica. Wayward SFF.n Founder
Silver Supporter
Feb 22, 2015
4,969
4,781
An overclocked dual graphics card setup isn't ideal. If someone wants that with water cooling on everything, they would be better served by @Jeffinslaw's Project mATX.

Air-cooled dual cards or a single GPU is what the case was designed for. An SLI/XFire custom loop is possible, but it'll be trickier.
 

jsco

Average Stuffer
Feb 2, 2016
60
55
so, all of this being said... i am thinking there might be a way to fit a dual gpu custom loop in here after all. not that i want to do this, but i think it could be done by someone sufficiently nuts.
  1. ATX PSU
  2. waterblocks on the GPUs, reducing their width to 1 slot (backplate still takes up 2 slots)
  3. CPU block
  4. 240mm rad on the bottom with slim fans to clear the bottom GPU
  5. 240mm rad on the front (!) as high up as possible, probably requires drilling new screw holes, possibly adding a little plate to put screw mount points where there is currently open space. full size fan on the top inside the radiator, slim fan on the bottom inside if it'll fit, otherwise slim fan on the outside of the case (!!) but still inside the front shield
  6. no res, or tiny res in the tiny area left inside the top front fan, or that frozenq m1 res
  7. pump wherever it fits
this would be a serious pain in the ass, but it just might work, and it would be damn near the theoretical lower bound on size for a full water dual GPU build. 2x240mm rads is not going to be silent for 750W+ TDP, especially with the compromised airflow in the crowded case, but it shouldn't be miserable either. dropping down to a 120mm front rad mounted at the top would clean the build up a lot, but that's going to get real hot real fast, like 20+ degrees C delta T hot. i think 2x240 would be the minimum.
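a sanity check on those delta T numbers, assuming the common rule of thumb of very roughly 100 W dissipated per 120mm of radiator per 10 C of coolant-to-air delta at moderate fan speeds (an assumption, not a spec):

```python
# delta T estimate from a rule-of-thumb radiator capacity; real rads and fans vary a lot
def delta_t_c(load_w, sections_120mm, w_per_section_per_10c=100):
    return load_w / (sections_120mm * w_per_section_per_10c) * 10

print(delta_t_c(750, 4))  # 2x 240mm -> ~19 C over ambient
print(delta_t_c(750, 3))  # 240mm + 120mm -> 25 C, the "real hot" territory above
```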

now back to reality. i'm going to be very happy with my single air cooled 980ti. :)
 

BirdofPrey

Standards Guru
Sep 3, 2015
797
493
With VR stuff coming out real soon, dual GPU isn't as bad an idea anymore.
The major problem with multiple GPUs at the moment is alternate frame rendering and its implementations. First, support for it can cause issues when porting to different platforms, but the other major issue is rendering pipelines that depend on previous and following frames, which causes dependency problems when the required frames aren't actually done yet.

VR sidesteps the issue since each eye needs the scene rendered separately from the other, so each GPU can render a different eye's view and won't have as many issues with it. That said, this obviously depends on you using VR, and I would echo the question of why not just use a single dual-GPU card.
 

Phuncz

Lord of the Boards
Original poster
SFFn Staff
May 9, 2015
5,853
4,912
I also see the benefits for VR with SLI / Crossfire if the implementation is done correctly. If that is indeed so, I might just pick up another R9 290X, find a way to fit it in the case with a decent cooler and be good for a while. Unless VR ends up not interesting me, since we're looking at a serious investment in something that most of the games I play won't be able to benefit from.
 

EdZ

Virtual Realist
May 11, 2015
1,578
2,107
I think that for the near future (next year or so) VR SLI/Crossfire is something that's only worth considering if you've already bought the fastest single card available and still have some money burning a hole in your wallet.

VR multi-GPU support needs to be explicitly exposed by the engine developer, and explicitly optimised by the game developer. And this is a different optimisation path than single-GPU, as you are now constrained not just by minimising render latency, but also by PCIe bus bandwidth and latency while doing so. It often works out faster to perform the exact same operation twice (once per GPU, with zero benefit from multi-GPU) than to do the work on one GPU and transfer the result across the bus. Pixel-dependent operations scale pretty well AND need to be done per eye, so they scale well with multi-GPU. But other operations (geometry, shadows, etc.) are relatively independent of resolution and do not need to be redone per eye, so they do not benefit from (and may even be penalised by) multi-GPU.
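A hypothetical sketch of that work split (not any real engine's API), just to make the trade-off concrete:

```python
class GPU:
    """Stand-in for one card; it just records which passes it ran."""
    def __init__(self, name):
        self.name, self.passes = name, []

    def run(self, pass_name):
        self.passes.append(pass_name)
        return f"{self.name}:{pass_name}"

def render_vr_frame(gpus):
    # View-independent passes (shadows, geometry) are deliberately duplicated:
    # redoing them on each GPU is often cheaper than shipping results over PCIe.
    for gpu in gpus:
        gpu.run("shadow_maps")
        gpu.run("geometry")
    # The per-eye, resolution-dependent pixel work is what actually scales.
    left = gpus[0].run("shade_left_eye")
    right = gpus[1].run("shade_right_eye")
    return left, right

print(render_vr_frame([GPU("gpu0"), GPU("gpu1")]))
```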

Once display resolutions get a LOT higher along with higher refresh rates, the benefits of multi-GPU will become much more significant. For the time being, I suspect the majority of developers, particularly smaller ones, will focus on optimising to minimise latency for a single GPU. Re-doing all that optimisation work again for a small subset of a small subset of an audience likely won't make financial sense.
 

Phuncz

Lord of the Boards
Original poster
SFFn Staff
May 9, 2015
5,853
4,912
Do you have experience with, or have you done research on, an engine or game/demo that can properly utilize dual GPUs?
 

EdZ

Virtual Realist
May 11, 2015
1,578
2,107
Not personally, but I've talked with a couple of developers at Epic about VR multi-GPU (as well as having some quick chats with a few game developers and some guys from Oculus and Valve), and I've paid close attention to what's been released and demonstrated by Nvidia and AMD.