msystems' gaming FC5 (Streacom FC5) 6.6L

msystems

King of Cable Management
Original poster
Apr 28, 2017
804
1,405
Welcome to my project. This build will be using the Streacom FC5 Alpha chassis



The FC5 is marketed primarily as an HTPC chassis. It is designed for a passively cooled 65W processor and supports one single-slot expansion card. Its outer volume is 8.5L, and 6.6L internally.

Streacom FC5 Alpha Specifications
Chassis Material: Premium Grade (6063) All Aluminium, 10mm Thick Front Panel
Available Colors: Silver / Black – Sandblast Finish
Motherboard Support: Mini-ITX, Micro-ATX and Full ATX
Drive Capacity: Minimum 1 x 3.5″ + 2 x 2.5″, Maximum 3 x 3.5″ or 6 x 2.5″*1
Optical Drive Capacity: 1 x Slim 12.7mm Slot Loading Drive (Optical Version Only)
USB Ports: 2 x USB3.0, Left & Right Side, 19/20 PIN
Expansion Slot: 1 x Full Height Expansion Card*2 (Flexible Riser Required)
Cooling: Heatpipe Direct Touch 4 Pipes – Recommended CPU TDP 65W, Max TDP 95W*3
External Dimensions: 435 x 325*4 x 60mm (W x D x H, 69mm including feet) (8.5L)
Internal Dimensions: 380 x 318 x 55mm (W x D x H) (6.6L)
Power Supply: ZF240*5 or Nano Series PSU (Not Included)
Remote Control: FLirc or IRRC Solutions (Not Included)
Net Weight: 4.2KG
*1 Depending on hardware configuration, i.e. motherboard size
*2 Limited to Half Height Cards when using the ZF240 PSU
*3 As with any passive cooling solution, environmental conditions will have an impact on the performance. A TDP of 95W is only recommended when the chassis is placed in a location with adequate air flow and moderate ambient room temperature
*4 The front panel is wider than the rest of the body by 15mm, making it 450mm wide
*5 See notes for use with OPTICAL version

Cool! Now let's put all this in there and make a gaming PC:




_____________________________________________________________________________________

Index

1. General Background and Component Selection
2. Component Selection, continued
3. Thermal limits of the FC5 chassis and Hybrid Passive-Air Design
4. Initial Boot and assembly
5. Voiding Three Warranties (Lapping)
6. Voiding One more Warranty and further assembly
7. Initial Performance Testing
8. Flat Heatpipe possibilities considered
9. PCIe Riser Installation and testing
10. How to destroy your PSU, and more testing
11. Cable Management problem
12. Modding the case to accept GTX 1080
13. Prime95 and customizing Turbo Boost on the 6700K
14. Air Intake considerations, and switch to DC-ATX power supply
15. 1st Version of fan duct and GPU designed
16. Flat Heatpipe test experiments
17. 3D Printed fan duct designed and tested
18. China
19. Will it bend?
20. New Loot!
21. GPU Part I: GTX 1080 meets the FC5 chassis
22. GPU Part II: The Accelero S3
23. GPU Part III: Unforeseen Consequences
24. GPU Part IV: Dr. Frankenstein's Laboratory
25. GPU Airflow planning
26. Overclocking (for the hell of it)
27. Exit fans Implemented
28. GPU Intake Part 1: Testing Axial Fan
29. GPU Intake Part 2: Testing Radial Fan
30. Long term update and what could be improved on the design
 
Last edited:

msystems

King of Cable Management
Original poster
Apr 28, 2017
804
1,405
Current build (after 4 months of modding):

 
Last edited:

Necere

Shrink Ray Wielder
NCASE
Feb 22, 2015
1,720
3,284
Are you planning to decase the PSU? SFX is 63.5mm, while you've only got 55mm internally...

Also I hope you're not planning to try to passively cool the GPU using the heatsinks built into the side of the case (it will never work). The 6700K alone will give you problems, unless you undervolt/underclock it.
 

msystems

King of Cable Management
Original poster
Apr 28, 2017
804
1,405
General Background

I didn't start off intending to do an SFF build; I ended up in the SFF realm completely by accident.

The first step that led to this build was the observation of how much space was simply unused in my current Mid-ATX tower. My past builds have used the venerable Lian-Li cases, which look roughly like this on the inside:



  • The use of 5.25" drive bays has faded sharply over the past decade, but they at least served some utility for an enthusiast such as myself. Now, with a complete transition to streaming content and excellent support for bootable USB media on current motherboards, there is little reason left to keep them around for media or OS installs.
  • 3.5" drive bays are completely useless. The 3.5" bays in my existing cases are simply acting as protection for disconnected storage disks. SATA SSDs and M.2 NVMe storage are far superior except for the cost per gigabyte, which continues to fall.
  • The ATX and even Micro-ATX form factors don't seem necessary anymore. Most expandable devices can now be added through a USB or SATA interface. Even inexpensive DACs support external USB/SPDIF, so an audio card isn't a requirement anymore, and on-board audio has improved significantly. As for SLI/Crossfire, they just aren't that useful except on very high budgets. For a given budget, buying a single x16 video card is better than buying two lesser-priced cards, due to the efficiency loss of SLI. Even if SLI performance somehow is better, the increased power consumption, noise, and thermals make it a questionable choice. So SLI/Crossfire is only worth it if your budget exceeds the cost of the best single card. The only reservation left was whether a smaller motherboard impacts performance, and to my surprise I discovered that is simply not a factor anymore. Larger boards do have a more robust VRM power design for overclocking, but this is not important for my purposes.
  • The improvements in PSU design mean the full ATX size isn't needed anymore. SFX is smaller and just as capable.

Given the above, it made perfect sense to explore a SFF build.

Component Selection, Part 1

Initially I did not plan to go higher than a GTX 1060 or RX480 on GPU power. I tend to stay away from latest-generation GPUs because their thermals are out of control, and I like to go with the "crippled" versions of the chips which run much cooler and quieter.

That was the plan at least, until I saw this:





This is the Lini H1, a custom build from a boutique builder. They achieved a fully passively cooled GTX 1080 inside an HDPlex H5 enclosure. It's quite an accomplishment.



The H5 is a fantastic case, but I decided it was unnecessarily tall.

The Streacom FC9 is slightly smaller and has nicer build quality in my opinion. The downside is that it does not include the superbly designed CPU and GPU heatpipe assembly offered by HDPlex; in fact, it does not offer any GPU heatpipe assembly at all.




At this point, I should have just ordered the FC9, which would have been enough of a challenge.
But I decided that was too big too. Something was nagging me... something from the past perhaps.




Once upon a time, Apple designed a computer called the Macintosh LC. It had a wonderful low profile, perfectly designed for a monitor to rest on top.



This was the form factor I was looking for, and it is reincarnated in the FC5 chassis.



Still, there were about 10 really really bad reasons to select the FC5.


Spoiler: I did not take the above advice.
 
Last edited:

Necere

Shrink Ray Wielder
NCASE
Feb 22, 2015
1,720
3,284
Two things:
  1. The HDPlex H5 looks to have about twice the heatsink fin surface area of the FC5, yet is still only rated for a 95W GPU.
  2. I can almost guarantee the "custom tuning for maximum silent fanless performance" of the GTX 1080 in the Lini system means it's heavily underclocked/undervolted to keep the thermals under control.
There's no way around the laws of thermodynamics, and in this case there's just too little surface area and/or too little airflow to properly dissipate that quantity of heat. Not to rain on your parade or anything :p
 
  • Like
Reactions: Phuncz

msystems

King of Cable Management
Original poster
Apr 28, 2017
804
1,405
Internal Component Selection

Internal Components:
Asus ROG Strix Z270I mITX
Intel i7-6700K (95W TDP)
EVGA GTX 1080 (180W TDP)
Corsair DDR4-3200 8GB x2
Samsung 960 EVO M.2

  • The Asus ROG Strix was selected because of its excellent benchmarks, consistent with or exceeding ATX performance, and its ability to house two full-size M.2 drives, skipping SATA SSDs entirely.
  • The 6700K was selected because the additional cost of the 7700K wasn't really necessary, and there were reports of the 7700K having bad TIM compound, which was a major concern given the limited thermal headroom. It could be argued that the 6700K is overkill, but I decided to start with the nicest chip and undervolt as far as necessary. I also encode video from time to time, so it was justified.
  • The GTX 1080 was chosen for maximum performance per watt, even if undervolting was required. The 1070 and 1060 have crippled cores. This is shown here, where the 1080 has a 10-30% advantage over a 1070 at the same wattage:


Before deciding on a case, I estimated a 275W+ TDP target. The GTX 1080 benchmark charts I looked at showed occasional spikes above 250 watts, so my expectation was needing 350 watts to be safe. This exceeded all the fully internal nano PSU solutions I was aware of. Streacom sold a 240W nano ATX PSU, but it was too little to be considered. I considered the 330W DC brick plus pico ATX option, but I did not want to commit to that without knowing what the power draw/spikes would ultimately look like. I also questioned how reliable that solution would be, because I could only find one 400W DC-ATX unit, and it was from China.
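For clarity, here's the rough arithmetic behind that estimate as a quick sketch (the 40W platform allowance and the 40% transient margin are my own ballpark assumptions, not measurements):

```python
# Ballpark power budget (sketch; the platform allowance and transient
# margin are assumptions, not measurements)
cpu_tdp = 95           # i7-6700K figure used for the 275W+ estimate
gpu_tdp = 180          # GTX 1080 TDP
platform = 40          # motherboard, RAM, M.2, fans, PSU losses (guess)

sustained = cpu_tdp + gpu_tdp + platform        # worst-case sustained load
transient = cpu_tdp + gpu_tdp * 1.4 + platform  # GPU spikes above its TDP

print(f"sustained ~{sustained} W, transient ~{transient:.0f} W")
# -> sustained ~315 W, transient ~387 W: a 240W unit is clearly out, and
#    ~350W only works if the spikes are brief.
```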

It's important to note that I was still in the brainstorming phase at this point and I was planning to go with the FC9 for the above reasons. The FC9 can accommodate an SFX PSU, and in theory all of the above components, while the FC5 cannot. The FC5 internal dimensions allow only 55mm height. A SFX power supply is 63.5mm tall.



It wasn't until I started looking at disassemblies of SFX power supplies on jonnyguru (while deciding on an SFX PSU for the FC9) that I noticed there appeared to be a good amount of clearance once the PSU fan was removed.



The top 15mm of most of the SFX supplies appeared to be just housing the fan.

Removing the fan can provide a clearance of 48.5mm (63.5mm minus the ~15mm fan section), but it's actually not that simple. Removing the fan does not mean the components alongside it stay out of that space: there are vertically placed boards and thick wires to consider, which can intrude into the upper 15mm. Even if there is no interference, this still introduces a new set of problems: there is now an exposed high-voltage circuit inside the case, and a cooling solution must be provided since the fan is removed.

For these reasons, I decided I would modify the SFX PSU on an experimental basis, but it would not be essential to the build. If it worked well, I would keep it; if not, it would be replaced. The final power solution could be changed later, once the total system power draw was known. The main purpose of the SFX unit was to act as a placeholder for dialing in the real power draw.



The Enermax Revolution 650W SFX (80+ efficiency rated) was selected. While it is a lesser-known product, it has a good review from jonnyguru. It also has a hugely valuable feature, which you can see above: the AC filter (left side) can be moved away from the PSU and placed in any orientation. The wires terminate on the bottom right. This means the Enermax PSU can be mounted with its backside facing a wall of the case. The Corsair SF600 cannot do this because its AC input is soldered onto a vertical daughterboard.

With the components selected, a basic CAD model was built which confirmed that a fit was possible, with no regard to thermal design at this point, of course.



First actual mockup. One of the VRM heatsinks has been removed here to accommodate the heatpipes.
 
Last edited:
  • Like
Reactions: owliwar and Phuncz

msystems

King of Cable Management
Original poster
Apr 28, 2017
804
1,405
Thermal Design Limits of the FC5 chassis

Fully passive gaming rigs are difficult to do right and often lead to thermal throttling. Even when it technically works, the temps tend to sit above 80C, which I do not feel comfortable with for long-term use. Below is the only fully passive FC5 build attempt I found, from someone crazy enough to try it. As you can see, it looks absolutely insane.

Heatpipes are connecting both sides of the case together to balance the thermals, but it's still not enough. There's a CPU heatsink outside of the case which is also connected to the GPU. The power supply is permanently outside of the case as well.





So Passive or not?

Obviously, there is no way that the FC5 will be able to passively cool a 6700K and GTX 1080. Therefore active and passive cooling will be used on this project.

At this point, you might be wondering why the hell a fully passive case was selected for this project if the whole "fully passive" concept is going to be thrown to the wind. Well, the actual goal of the project is silence, and any passive cooling elements act to reduce the performance requirements of the active cooling. The enthusiasm for passive cooling really comes from the silent computing scene, from an era when fans were noisy and nobody was designing silent fans yet. Thanks to the fine work by the folks at SPCR, we know that merely adding active cooling does not prevent achieving a completely silent build. Whether or not it is fully passive is irrelevant if the goal is silence. I will aim for complete silence and undervolt as far as necessary to reach a nice compromise. Hopefully no undervolting will be needed.

Hybrid Air-Passive Design

For the purposes of my build I've decided to call this thermal management strategy Hybrid Air-Passive Design.
This means layering heatsinks on top of the CPU and GPU heatpipe assemblies to supplement their passive cooling ability with on-demand active cooling. The active cooling will either be off or tuned to inaudible levels until temperatures exceed the passive cooling capacity, after which it will scale up appropriately. This allows us to exceed the thermal design limit of the case without a terrible performance impact.
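To make that policy concrete, here is a minimal sketch of the fan behaviour I'm describing (the temperature setpoints and duty figures are placeholder assumptions, not measured values):

```python
# Minimal sketch of the hybrid fan policy (setpoints are placeholders)
def hybrid_fan_duty(temp_c, t_passive=60.0, t_max=80.0, min_duty=0.25):
    """Return fan duty (0.0-1.0): silent while passive cooling copes,
    then ramp linearly from the quietest audible speed to full."""
    if temp_c <= t_passive:      # passive capacity not yet exceeded
        return 0.0
    if temp_c >= t_max:          # out of headroom: full speed
        return 1.0
    frac = (temp_c - t_passive) / (t_max - t_passive)
    return min_duty + frac * (1.0 - min_duty)

for t in (45, 62, 70, 78, 85):
    print(f"{t} C -> {hybrid_fan_duty(t):.0%}")
```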

CPU Socket Area with heatsink on top of heatpipe assembly:


I have tried to illustrate how this differs from Fully Passive and plain Air Cooling with these conceptual charts below.


By adding active cooling the goal is to avoid any thermal throttling typically suffered by a passive system...


...while largely avoiding any noise from a traditionally air-cooled system.


You might notice the "Air Only" fan on the Hyper 212 Plus is running at 2050 RPM @ 33dB(A) and is being compared to a Nexus fan at 1100 RPM. The point is not to say that the Hyper 212 would ever need to run anywhere near that fast and loud. The purpose is to demonstrate that we have lowered the fan requirement significantly by having so much more heat capacity than the "Air Only" system. The passive cooling has reduced our fan requirements.



This shows how much heat is required to cause a temperature increase in each of our system types (without active cooling). The Hybrid Air-Passive system can absorb more heat than the purely passive one because it not only has extra heatsinks added but also uses all available surfaces for cooling (five faces of the FC5 case versus only the finned side panels). So we are improving its passive cooling ability as well. The active cooling requirement is lowered in proportion to how much of the heat is dissipated by the passive surface area, and that's how we end up with a Nexus fan cooling a 6700K. (At low RPM hopefully :))
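For a sense of scale, here is a back-of-envelope natural-convection estimate (the convection coefficient and temperature delta are textbook-style assumptions; the fins multiply the effective area well beyond these flat-plate numbers):

```python
# Flat-plate free-convection estimate: P ~ h * A * dT (assumed h and dT)
h = 6.0     # W/(m^2*K), a typical free-convection coefficient
dt = 30.0   # K above ambient, e.g. a 55C panel in a 25C room

# FC5 exterior is 435 x 325 x 60 mm
sides = 2 * (0.325 * 0.060)                     # the two finned side panels
five_faces = sides + 2 * (0.435 * 0.325) + (0.435 * 0.060)  # + top, bottom, front

print(f"sides only: ~{h * sides * dt:.0f} W")        # ~7 W
print(f"five faces: ~{h * five_faces * dt:.0f} W")   # ~63 W
# The fins raise both numbers a lot, but the ratio shows why spreading
# heat into every panel raises the passive ceiling before any fan runs.
```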


Specific Application of the above to this build:

CPU Improvements​
  • A 95mm heatsink will be added on top of the CPU heatpipe assembly
  • An intake hole and fan (80-120mm, TBD) will be added to the bottom of the chassis
  • A fan duct will be added to channel airflow to the CPU heatsink & exit through the I/O area
  • Heat from the underside M.2/Chipset area will be conducted to the bottom panel
  • Heat from the surface of the M.2/Chipset area will be cooled by additional copper heatsinks
  • Heat from the CPU VRM area will be cooled by additional copper heatsinks
GPU Improvements​
  • Heat from the GPU VRM area will be conducted to the top panel
  • Heat from the GPU side panel will be conducted to the face panel*
  • A heatpipe assembly will be added to the GPU and routed to the internal wall*
  • A larger heatsink will be added to the GPU over the heatpipe assembly*
  • A second intake (80-120mm, TBD) will be added under the GPU area
  • A second intake fan will be added under the GPU area*
  • A second fan duct will be added to channel airflow along the entire GPU area to the PCI slot cutout*
*TBD, still designing these parts
 
Last edited:

Phuncz

Lord of the Boards
SFFn Staff
May 9, 2015
5,938
4,951
It's quite the challenge you've set out for yourself. I see the logic in your thinking, and considering you are still updating your project build log, I'm looking forward to seeing how you've hopefully overcome those challenges.
 

msystems

King of Cable Management
Original poster
Apr 28, 2017
804
1,405
Initial Boot

Getting the copper heatsink to fit over Streacom's heatpipe retaining bracket wasn't that difficult but took a trip to the hardware store.



Longer M3 screws were used, as the stock pins cannot hold the entire assembly onto the socket. The alignment with Streacom's bracket was perfect.



I could have discarded Streacom's bracket and mounted the heatsink directly over the heatpipes, but I thought it would be safer to keep the bracket to add lateral stability and help contain the TIM mess around the heatpipes. I also theorized that keeping this piece might cause more heat to travel through the heatpipes and into the case before reaching the heatsink, because it adds a little thermal resistance. The inverse is that airflow over the heatsink will be slightly less effective.
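Here is a toy resistor-network model of that reasoning (all resistance values are invented for illustration): heat from the block splits between the two paths in inverse proportion to their thermal resistance, so adding a little resistance toward the heatsink pushes more heat through the pipes into the case.

```python
# Toy thermal divider: heat splits inversely with path resistance (K/W)
def split(q_total, r_case, r_sink):
    g_case, g_sink = 1 / r_case, 1 / r_sink        # conductances
    q_case = q_total * g_case / (g_case + g_sink)  # share into the case
    return q_case, q_total - q_case                # (case, heatsink) watts

print(split(90, r_case=0.8, r_sink=0.6))  # bracket off: heatsink favoured
print(split(90, r_case=0.8, r_sink=0.8))  # bracket adds ~0.2 K/W: 50/50
```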


Initial setup



As you can see thermals are really really bad. This is just how it was looking in the BIOS at idle.

The processor had TIM, but at this point there was no TIM on the heatpipe assembly yet.


The VRM heatsink had to be removed to make space for the heatpipes. This was only temporary, though, and I figured it would be okay to idle in the BIOS with it off.

The primary concern was actually the heatpipes' vertical clearance over the caps, as the heatpipes have only 10mm of clearance and the caps are around 8mm tall. Fortunately there was no problem there.
 
Last edited:

msystems

King of Cable Management
Original poster
Apr 28, 2017
804
1,405
Voiding Three Warranties

It was pretty much guaranteed from the get-go that this build was going to void every warranty. I decided to go headlong into the lapping right away; there was no point in delaying something the build was going to need anyway.



The worst offenders of the bunch were Streacom's parts. The aluminum base block was embarrassingly rough for something that is supposed to contact a modern CPU. They should be ashamed of themselves.



You can see right away how uneven the surface is as only the sides have worn. Was the curvature intended? I really don't know, but the aluminum surface was very rough.


50% mark or so


I cleaned it up a little more after this, but that was basically good enough.


Goodbye


After finishing 320 grit


800


1500


2000



I didn't take any pictures of the time I spent on the top bracket, since it was just a quick effort to smooth out the part that the copper heatsink contacts, and it probably won't make any difference at all on the thermals.

The copper heatsink itself was already machined really well and didn't need much besides a few swipes on the 1500.

The whole thing was re-assembled and TIM was applied.





The heatpipe block retaining screws were slightly touching the underside of the heatsink, so I sanded off about 0.5mm. You can see this screw in the very center, below the heatsink.



The stable BIOS temp is now 54C.
 
Last edited:

msystems

King of Cable Management
Original poster
Apr 28, 2017
804
1,405
Voiding 1 More Warranty (Asus Z270I) and further assembly

For some reason, Streacom's heatpipes did not align perfectly with the case. I should have been on the lookout for this as apparently the vertical height of the socket/processor can vary.



I didn't notice this until after I had re-assembled everything. It was causing the heatpipes to apply additional pressure to the CPU bracket. I don't think any permanent damage was done, but it might have kept the heatsinks from making even contact pressure. I bent each heatpipe by hand to correct the alignment.


After re-alignment.





To revisit the motherboard VRM issue, I decided to keep the stock heatsink and just "machine" away enough material for the heatpipes to pass through.


This will impact the cooling capacity of this part, but it will be compensated for later.






Complete reassembly.



This might not look that great but I am very happy with how it turned out given the tools I had available.


For now the M.2 drive is loaded in this slot directly above the chipset. It's covered by what only deserves to be called a "piece of metal".


I really don't know how ASUS thought this would be enough cooling.




Throw the GTX 1080 in there too, whatever. It's braced on the backside with a makeshift support because there was no bracket to hold it in this orientation. A BD-ROM drive was added to install Win 7.
 
Last edited:
  • Like
Reactions: Soul_Est and Phuncz

msystems

King of Cable Management
Original poster
Apr 28, 2017
804
1,405
Why stop at lapping the CPU heat spreader...why not go all the way and delid it?

Actually, I've never tried this because I couldn't figure out how risky it was. Lapping was basically zero-risk, so I did it immediately. I will add delidding to my potential to-do list! Could you recommend a good guide?
 

Phuncz

Lord of the Boards
SFFn Staff
May 9, 2015
5,938
4,951


As you can see thermals are really really bad. This is just how it was looking in the BIOS at idle.
BIOS idle is often not the best way to gauge idle temperature as it tends to keep the clocks at max and the voltage up high. So even though it's not doing anything, it can still show over 50°C. The lapping probably helped a lot, along with adding thermal paste to the heatpipe assembly.

Maybe you already did it and when I finish this thread, you might have already posted it, but you can probably lower the voltage on the CPU without affecting performance. Or even lower it considerably and only give back a little performance.
 

msystems

King of Cable Management
Original poster
Apr 28, 2017
804
1,405
Initial Performance Testing


This is the KILL A WATT, and it's going to be our best friend, telling us how much power we are using at the wall. This will help later when deciding whether to ditch the SFX unit or do something crazy with it.


This is the idle power consumption in the BIOS. This partially explains why the idle temp was so high earlier, as the processor can't enter the right low-power state in the BIOS, or something.




All the performance tests are being run with the system like this. These are low-RPM Scythe 80mm fans, which don't move that much air and will optimistically approximate what we can get from our intake.


Heaven was fired up with Afterburner to monitor GPU voltage, boost, and temps.

First testing for maximum performance.

Here we are at what I believe is at or slightly under stock voltage, 1.042V, and 1949MHz boost, slightly higher than the GTX 1080 FE stock boost of 1898MHz. This is not that impressive, but it is nice considering this card is a no-frills GTX 1080.


281W draw here. That's pretty much right on the money. However, we aren't really loading the CPU much.

And now for a word on how Nvidia rigged the 1080.

Nvidia's default fan curve is really a crime. It tells the chip to throttle itself at 84C rather than increasing the fan speed, so you lose performance by default. That's fraud in my opinion. No GPU should ever do that, because a GPU is supposed to be designed for performance first. Nvidia's "Boost 3.0" will increase clocks freely until it slams into this throttle temperature. It takes a few minutes for temps to ramp up before that happens, which is long enough to run a short benchmark. This allowed Nvidia to game short benchmarks in reviews, and it also makes tweaking more difficult. The bottom line is that the GTX 1080 runs a lot hotter than Nvidia wanted to admit, but since ramping up the fan would have drawn bad reviews for noise, they just set a weaker fan curve and let the chip throttle itself when it hits 84C. Reviews mostly say the GTX 1080 FE is a quiet card (which is misleading, because it thermal throttles) = mission accomplished for Nvidia.
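To illustrate the boost-then-throttle behaviour I'm describing, here's a crude simulation (every number in it is invented; it's just the shape of the problem):

```python
# Crude sketch of Boost 3.0: clocks climb while the die is cool, then
# step down once the 84C throttle point is reached (all numbers invented)
clock, temp = 1600, 40
for minute in range(10):
    if temp < 84:
        clock = min(clock + 100, 1950)  # boost while headroom exists
    else:
        clock = max(clock - 50, 1733)   # shave clocks to hold 84C
    temp = min(temp + 8, 85)            # die heats toward equilibrium
    print(f"t={minute} min  {clock} MHz  {temp} C")
# A short benchmark finishes during the first few cool minutes, so the
# throttled steady state never shows up in the results.
```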

The point of all this is that I had to set a much more aggressive fan curve to keep temps under 80C, and it has nothing to do with the FC5 case: it's just the way the card is. It could also be EVGA's blower design, but I have no way of knowing. This pretty much destroyed any hope that this card would run at an acceptable noise level with stock cooling, and it will take a while to come up with my own solution. This isn't an immediate problem and will be dealt with later.


Next- some scaling tests.



1873MHz @ 899mV and 215W at the wall, with an estimated 82% GPU TDP.

Our 1080 is at -163mV here but still holds higher than the stock clocks of this particular EVGA model, which are 1607MHz base / 1733MHz boost, and is actually in line with the GTX 1080 FE's stock 1898MHz max boost. This is an excellent result and could possibly be handled by a 300W power supply with a 400W peak.




1733MHz @ 800mV (minimum voltage) and 179W at the wall. Estimated 64% GPU TDP.

Now at -262mV, this is the lowest voltage the 1080 will accept without a modified BIOS; however, we are still able to maintain the quoted max boost of this EVGA model and match the stock boost of a GTX 1080 FE. This is another good result and brings our power estimate closer to 250W / 350W peak if we decide to go with this voltage. The fan doesn't need to be as aggressive either, because temps are much lower. The fan can run around 60% and still keep temps under control, but that is still too loud in my opinion.



1885MHz @ 875mV (-187mV) and 201W at the wall. This is another possibility.



This is the last test for now.
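To recap the scaling, here is the perf-per-watt of the operating points above (the clocks and wall watts are the figures just measured; since wall power includes the whole system, treat it as a relative comparison only):

```python
# Perf-per-watt recap of the measured operating points
points = [
    ("1.042 V (stock-ish)", 1949, 281),
    ("0.899 V",             1873, 215),
    ("0.875 V",             1885, 201),
    ("0.800 V (minimum)",   1733, 179),
]
base_mhz, base_w = points[0][1], points[0][2]
for name, mhz, watts in points:
    print(f"{name:20s} {mhz} MHz  {watts:3d} W  "
          f"{mhz / base_mhz:6.1%} clocks on {watts / base_w:6.1%} power")
# The undervolted points hold ~89-97% of the clocks on ~64-77% of the
# power, which is what drives the 250W / 350W-peak supply estimate.
```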
 
Last edited:
  • Like
Reactions: Soul_Est

msystems

King of Cable Management
Original poster
Apr 28, 2017
804
1,405
BIOS idle is often not the best way to gauge idle temperature as it tends to keep the clocks at max and the voltage up high. So even though it's not doing anything, it can still show over 50°C. The lapping probably helped a lot, along with adding thermal paste to the heatpipe assembly.

True! I found this out later.

Maybe you already did it and when I finish this thread, you might have already posted it, but you can probably lower the voltage on the CPU without affecting performance. Or even lower it considerably and only give back a little performance.

Indeed, I certainly did explore this, but I've only scratched the surface, leaving all four cores synced. It seems to be quite a can of worms when it comes to tinkering with 1/2/3/4-core multiplier settings and how they play out on performance and voltage. For example, the single-core multiplier can be overclocked while the four-core multiplier is underclocked. There's also the Turbo Boost configuration, which can be customized.
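For anyone unfamiliar, a per-core Turbo table looks something like this (these ratios are hypothetical, not my actual BIOS settings):

```python
# Hypothetical per-core Turbo ratios: light loads clock higher than
# all-core loads, trading peak heat for lightly-threaded speed
turbo_ratios = {1: 44, 2: 43, 3: 41, 4: 40}   # ratio x 100 MHz BCLK
for active_cores, ratio in turbo_ratios.items():
    print(f"{active_cores} active core(s): {ratio * 100} MHz")
```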
 

msystems

King of Cable Management
Original poster
Apr 28, 2017
804
1,405
Flat Heatpipes



These are flat heatpipes made by AMEC Thermasol. These particular ones are 1.6mm thick and 30mm wide, but there is a wide range of length, width and thickness options to choose from :cool:.

Part Number     | Thickness (mm) | Width (mm) | Length (mm) | Min. Bend Radius | Heat Transfer Rate (W)
MHP-1220A-100A  | 1.2            | 20         | 100         | R2               | 5 ~ 18
MHP-1220A-150A  | 1.2            | 20         | 150         | R2               | 5 ~ 18
MHP-1220A-200A  | 1.2            | 20         | 200         | R2               | 5 ~ 18
MHP-1630A-100A  | 1.6            | 30         | 100         | R4               | 11 ~ 50
MHP-1630A-150A  | 1.6            | 30         | 150         | R4               | 11 ~ 50
MHP-1630A-200A  | 1.6            | 30         | 200         | R4               | 11 ~ 50
MHP-2040A-100A  | 2.0            | 40         | 100         | R5               | 40 ~ 170
MHP-2040A-150A  | 2.0            | 40         | 150         | R5               | 40 ~ 170
MHP-2040A-200A  | 2.0            | 40         | 200         | R5               | 40 ~ 170
MHP-2040A-250A  | 2.0            | 40         | 250         | R5               | 40 ~ 170
MHP-2040A-300A  | 2.0            | 40         | 300         | R5               | 40 ~ 170
MHP-2040A-350A  | 2.0            | 40         | 350         | R5               | 40 ~ 170
MHP-2040A-400A  | 2.0            | 40         | 400         | R5               | 40 ~ 170
MHP-2040A-450A  | 2.0            | 40         | 450         | R5               | 40 ~ 170
MHP-2040A-500A  | 2.0            | 40         | 500         | R5               | 40 ~ 170
MHP-2550A-100A  | 2.5            | 50         | 100         | R6               | 75 ~ 300
MHP-2550A-150A  | 2.5            | 50         | 150         | R6               | 75 ~ 300
MHP-2550A-200A  | 2.5            | 50         | 200         | R6               | 75 ~ 300
MHP-2550A-250A  | 2.5            | 50         | 250         | R6               | 75 ~ 300
MHP-2550A-300A  | 2.5            | 50         | 300         | R6               | 75 ~ 300
MHP-2550A-350A  | 2.5            | 50         | 350         | R6               | 75 ~ 300
MHP-2550A-400A  | 2.5            | 50         | 400         | R6               | 75 ~ 300
MHP-2550A-450A  | 2.5            | 50         | 450         | R6               | 75 ~ 300
MHP-2550A-500A  | 2.5            | 50         | 500         | R6               | 75 ~ 300
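As a quick way to navigate those options, here is a small helper (the ratings are copied from the table above; the function itself is just my own illustration, not a vendor tool):

```python
# Shortlist flat-heatpipe families by clearance and heat load
families = [
    # (thickness mm, width mm, (rate min W, rate max W), min bend radius)
    (1.2, 20, (5, 18),   "R2"),
    (1.6, 30, (11, 50),  "R4"),
    (2.0, 40, (40, 170), "R5"),
    (2.5, 50, (75, 300), "R6"),
]

def shortlist(watts, max_thickness_mm):
    return [f for f in families
            if f[0] <= max_thickness_mm and f[2][1] >= watts]

# e.g. ~30 W from a backside VRM with only 2 mm of vertical clearance:
print(shortlist(30, 2.0))   # -> the 1.6 mm and 2.0 mm families
```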

I have only decided on implementing these in two areas so far. The first is the backside GPU VRM area.


It's not actually this bad. The above is from a thermal issue on one of EVGA's cards which has since been corrected.

Nevertheless, the entire VRM area on the backside of the card is an ideal candidate for cooling. It should be possible to assist it with a flat heatpipe pressed against the lid of the case and/or curved 90 degrees and secured to the side wall of the chassis. The 90-degree bend is very harmful to the performance of a heatpipe like this, but it's better than nothing and will remove some heat passively.

The second area for cooling will be the underside of the motherboard's M.2 slot, against the floor of the chassis. This should transfer not only the M.2 heat but also additional ambient heat from the motherboard.
 
Last edited:
  • Like
Reactions: Soul_Est