Enclosure Can case panels serve as heatsinks without a "block" and "pipes" connecting them to components?

zovc

King of Cable Management
Original poster
Jan 5, 2017
852
603
Hey!

I'm giving some consideration to building my own case, perhaps by hand or perhaps through a place that would "one-off" manufacture the pieces for me. I'm trying to get familiar with SketchUp, but I keep hitting unintuitive speed bumps that I'm having a hard time finding the patience for with my busy schedule right now. Haha

Anyways, I was wondering if it's possible to get a worthwhile amount of heat dissipation out of case panels without using a "block" and "pipe" solution like a lot of the custom "passive cooling" kits. Obviously, I get what the pipes and block are there for, but I'm wondering if a meaningful amount of heat can be transferred out of the case if, say, a side panel were all copper with some sort of "fin" design and perhaps increased airflow.

Alternatively, is a large-scale plate of copper (larger than, say, the 17cm square of an ITX motherboard) with some sort of fin/"heatsink" design better able to cool GPUs than a conventional GPU heatsink? There's obviously a larger surface area, but I also obviously can't guarantee a more 'specialized' or engineered design than manufacturers come up with. Unlike Streacom cases, I would be trying to figure out a way to have some airflow over the heatsink; I'm thinking of having a small cavity between the back panel and the copper plate where top/bottom fans could push/pull air through.

Excuse the paint drawing I made right after waking up (and before coffee):

That was me trying to illustrate a conventional case design with a copper plate spaced a little ways from the back panel of the case, with a top intake fan pushing some air down past the plate and some into the main compartment of the case.
 

Necere

Shrink Ray Wielder
NCASE
Feb 22, 2015
1,719
3,281
So is the side panel attached to the CPU and/or GPU in some way? Like via a copper block or something? If not, then no, it's not going to accomplish much of anything.
 
  • Like
Reactions: zovc

aquelito

King of Cable Management
Piccolo PC
Feb 16, 2016
952
1,123
If you know the thermal resistance of your aluminium panel, you can get a rough idea of how much heat it can dissipate.
I guess it could be useful for an M.2 drive, but not much more.
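As a back-of-the-envelope sketch (every number here is an illustrative assumption, not a measured value), a bare panel shedding heat by natural convection looks something like this:

```python
# Rough estimate of how much heat a bare case panel can shed by convection.
# All numbers below are illustrative assumptions, not measured values.

def panel_dissipation_w(area_m2, h_w_per_m2k, delta_t_k):
    """Convective heat shed by a flat panel: Q = h * A * dT."""
    return h_w_per_m2k * area_m2 * delta_t_k

# ~20 cm x 30 cm side panel, still air (h roughly 5-10 W/m^2.K),
# panel running 20 K above ambient:
area = 0.20 * 0.30          # m^2
q_low = panel_dissipation_w(area, 5.0, 20.0)
q_high = panel_dissipation_w(area, 10.0, 20.0)
print(f"{q_low:.0f}-{q_high:.0f} W")  # prints "6-12 W"
```

A handful of watts is M.2-drive territory; a CPU or GPU dumps an order of magnitude more, which is why the passive kits bolt blocks and heatpipes to the panel.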
 
  • Like
Reactions: zovc

masteraleph

SFF Lingo Aficionado
May 28, 2017
91
64
Right: in terms of M.2 drives, several people using Sentries have found that attaching a thermal pad to a rear-mounted M.2 drive to dissipate heat through the case wall works nicely. But those drives don't generate that much heat in the first place.
 
  • Like
Reactions: zovc

zovc

King of Cable Management
Original poster
Jan 5, 2017
852
603
So is the side panel attached to the CPU and/or GPU in some way? Like via a copper block or something? If not, then no, it's not going to accomplish much of anything.

I was trying to conceive of a way to achieve this without a custom GPU/CPU block assembly, but if that sort of thing is necessary, I could give it consideration. I'm not sure I'd trust myself to out-engineer and manufacture proper GPU/CPU heatsinks... maybe through sheer mass I could accomplish something?

Re: cooling M.2 drives, I heard in passing on a Gamers Nexus video that cooling M.2 drives can actually worsen their lifespan/performance. I think it was that cooling the memory modules shortens their lifespan, but I don't remember with certainty. They alluded to it in some of their Computex coverage, but they probably have a full article or video dedicated to something like that.
 
  • Like
Reactions: Biowarejak

ChainedHope

Airflow Optimizer
Jun 5, 2016
306
459
cooling M.2 drives can actually worsen their lifespan/performance.

Correct. NAND likes to run hotter because it's built in a way that gives it its best performance at higher temperatures. The components are at their top efficiency in that temperature range, so the controller tries to keep it there. Going too low brings it out of range, which causes performance to decrease and the lifespan of the drive to drop, because the NAND keeps trying to heat back up to its nominal temperature by doing more null-type operations: just transferring blocks of 1's and 0's over each other in unused cells, which slowly kills the life of those cells. Cooling the controller is completely fine though, if you want to do that; just not the actual NAND chips themselves.

-- clarification because that was kind of loopy --
NAND runs best at a certain temperature because of component efficiency.
The controller will do void/null operations to try to heat the NAND to that temp.
These operations are basically overwriting cells that aren't being used with 1's/0's.
NAND cells have a limited number of writes before they cannot be used anymore.
Therefore the void/null ops being used to heat the NAND kill the drive, meaning you don't want to cool it.
 

Josh | NFC

Not From Concentrate
NFC Systems
Jun 12, 2015
1,869
4,467
www.nfc-systems.com
Correct. NAND likes to run hotter because it's built in a way that gives it its best performance at higher temperatures. The components are at their top efficiency in that temperature range, so the controller tries to keep it there. Going too low brings it out of range, which causes performance to decrease and the lifespan of the drive to drop, because the NAND keeps trying to heat back up to its nominal temperature by doing more null-type operations: just transferring blocks of 1's and 0's over each other in unused cells, which slowly kills the life of those cells. Cooling the controller is completely fine though, if you want to do that; just not the actual NAND chips themselves.

-- clarification because that was kind of loopy --
NAND runs best at a certain temperature because of component efficiency.
The controller will do void/null operations to try to heat the NAND to that temp.
These operations are basically overwriting cells that aren't being used with 1's/0's.
NAND cells have a limited number of writes before they cannot be used anymore.
Therefore the void/null ops being used to heat the NAND kill the drive, meaning you don't want to cool it.

Neato! Love this forum.
 
  • Like
Reactions: Biowarejak and zovc

ChainedHope

Airflow Optimizer
Jun 5, 2016
306
459
Neato! Love this forum.

The more you know lol.

-- A little more info for those curious why some nand has heatsinks --

The common misconception comes from people seeing NAND with heatsinks in small electronics. With an M.2 drive you have enough airflow to keep the component cool enough to dissipate the temperature increase from doing operations (reading/writing data). In small electronics you don't have that space luxury, so they try to dissipate the heat into the casing to help with cooling. Those heatsinks are rated for 2-5W, which cools the NAND down when it gets too hot but keeps it at the nominal temperature when it's at idle, so it doesn't cause any stress on the NAND. Doing the same on an M.2 drive brings the temperature down under nominal and starts the null/void ops that cause the performance/life issues.
 
  • Like
Reactions: zovc

ScarletStar

Caliper Novice
Jan 17, 2018
28
37
Nand runs best at a certain temperature because of component efficiency.
The controller will do void/null operations to try to heat NAND to that temp.
These operation are basically overwriting cells with 1's/0's that arent being used.
Nand cells have a limited write amount before they cannot be used anymore.
Therefore the void/null op being used to heat the Nand kills the drive meaning you don't want to cool it.


Could you link me to your source? I can't find any information in the various datasheets of different NAND flash chips that indicates noticeably better efficiency in certain temperature ranges. Also, burning energy to keep the chip at a certain temperature at idle (which is probably most of the time) will nullify any energy savings from more efficient operation, I reckon.
 

EdZ

Virtual Realist
May 11, 2015
1,578
2,107
Could you link me to your source? I can't find any information in various datasheets of different NAND flash chips that indicate noticeably better efficiency at certain temperature ranges. Also burning energy to keep the chip at a certain temperature at idle (which is probably most of the time) will nullify any energy savings from more efficient operation, I reckon.
Optimum temperature is not just about power efficiency for battery-life purposes: writing and erasing NAND requires pumping energy through it to shift charge states. Each time you do this (erases particularly), you slightly degrade the NAND cells. By operating outside the optimum range, you need to pump in more energy to perform the same operation, so you degrade the cells more than necessary and reduce the overall life of the NAND.
 
  • Like
Reactions: ChainedHope

VegetableStu

Shrink Ray Wielder
Aug 18, 2016
1,949
2,619
oh wow TIL actively cooling NAND actually indirectly kills it O_O
so those EK M.2 heatsinks...? And also, up to what point would heat directly damage NAND cells?

(sorry for keeping to this M.2 tangent ,_,)
 

ScarletStar

Caliper Novice
Jan 17, 2018
28
37
Optimum temperature is not just for power efficiency for battery life purposes: writing and erasing NAND requires pumping energy through it to shift charge states. Each time you do this (erases particularly) you slightly degrade the NAND cells. By operating outside the optimum range, you need to pump in more energy to perform the same operation, so degrade the cells more than necessary and reduce the overall life the the NAND.

Yes, programming a flash cell at higher temperatures requires a lower voltage pulse and thus might lower degradation of the oxide layer.
BUT
1) it at the same time reduces retention time, since electrons can leak through the oxide layer more easily.
2) why program the cell (pulse onto the control gate) if you could also just read the cell (current on the bit line)? Wouldn't it be much easier to just drive a continuous current through the drain-source channel than do multiple program/erase operations on the gate, with the added bonus of less endurance degradation? I'm in the field of power electronics, so the uses of transistors might be slightly different, but in our case, even with high switching frequencies, the power used on dis/charging the gate is way lower than the power dissipated through R_dson. Of course we also drive way higher currents too..


Again, I'm very interested in this, so could you link me to your source? If that technology is widely used there must be papers on it. I couldn't find any.
 

EdZ

Virtual Realist
May 11, 2015
1,578
2,107
at the same time reduces retention time since electrons can leak through the oxide layer more easily.
Counterintuitively, NAND retention time increases with temperature during active use. Because the gate is warmer, conductance increases so the damage from tunneling to/from the gate decreases. From the JEDEC presentation on NAND endurance:

Power-off retention times do decrease with temperature, but these will rarely be above room temperature.
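For a sense of scale, JEDEC-style retention math is usually based on an Arrhenius acceleration model. A minimal sketch (the 1.1 eV activation energy is a commonly quoted ballpark for charge loss, not a figure from any particular datasheet):

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_af(t_use_c, t_stress_c, ea_ev=1.1):
    """Acceleration factor between a use temp and a stress temp:
    AF = exp((Ea / k) * (1/T_use - 1/T_stress)), temps in kelvin."""
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / K_B) * (1.0 / t_use - 1.0 / t_stress))

# Charge loss at a 55 C powered-off drive vs one sitting at 25 C:
print(f"AF ~ {arrhenius_af(25, 55):.0f}x")  # roughly 50x faster, with these assumed numbers
```

Which is why power-off storage temperature matters far more for retention than a few degrees either way while the drive is running and refreshing itself.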
 

ScarletStar

Caliper Novice
Jan 17, 2018
28
37
Counterintuitively, NAND retention time increases with temperature during active use. Because the gate is warmer, conductance increases so the damage from tunneling to/from the gate decreases.(...)
Power-off retention times do decrease with temperature, but these will rarely be above room-temperature.

Tunneling to/from the floating gate is only performed for write operations. So I assume you mean a "cleaner" write operation results in a less ambiguous state, which in turn has higher retention. Which might be true. But as stated, higher temperatures also mean the charges leak out of the floating gate more easily.

The presentation didn't really state it, but I think the table you provided just means they wrote a bit at the active temp, then turned the drive off and checked every week whether it was still readable (that's an archive-type use case). But that's not a typical use case. When you have your OS on your SSD, it's not being rewritten often. Your power-on time is significantly higher too, and if the SSD is artificially kept at an elevated active temp as you suggest, the active temp becomes basically the power-off temp for that data. So if you have your PC on for, let's say, 10 weeks a year without rewriting the OS, that's 10 weeks of active temp for that data (+42 weeks of power-off temp). And power users and enterprise use cases probably have way more than that.

Also my second point about just using reads to warm up the chip still stands.
 

Choidebu

"Banned"
Aug 16, 2017
1,198
1,205
Idk man.. win10 keeps saying it needs to be updated.. and my default apps keep getting reset.. they must have changed something in there..

Lol I jest.

Love the discussion! TIL a lot about this. So what's the optimal temp we're looking at here? 50-ish?
 

EdZ

Virtual Realist
May 11, 2015
1,578
2,107
Your power on time is significantly higher too and if the SSD is artificially kept at an elevated active temp as you suggest, the active temp becomes basically the power off temp for that data.
NAND devices in operation refresh their cells actively ('top up' the charge). Only when in complete cold shutdown is passive retention the limiting factor.
 

ScarletStar

Caliper Novice
Jan 17, 2018
28
37
NAND devices in operation refresh their cells actively ('top up' the charge). Only when in complete cold shutdown is passive retention the limiting factor.

Fair enough. I think we can agree that higher temps are better for program/erase operations and that reading does not benefit from them.
So it actually makes even less sense to use P/E cycles (at non-optimal temperatures) to increase the temperature so you can do P/E cycles with less degradation. Degrading the chip so you have less degradation?

Furthermore, current SSDs use controller-based adaptive read/write, not just a fixed pulse geometry, as outlined in this presentation by SK Hynix. So the damage to the oxide layer is minimized specifically for every cell anyway, so:

So what's the optimal temp we're looking at here? 50-ish?
As a consumer it probably does not matter. By the time your SSD reaches its endurance rating, you're likely to have moved on to a bigger and more modern SSD long ago. Just don't buy aftermarket heatsinks for your SSD, but don't try to heat it up either. Engineers usually put a lot of thought into their products so you don't have to. Just fit it and forget it.
 

Reldey

Master of Cramming
Feb 14, 2017
387
405
You could check my build log; I used just a thermal pad to make contact between my aluminum-finned GPU heatsink and my S4 Mini front bezel. But honestly it was just luck that those two parts lined up.
 

zovc

King of Cable Management
Original poster
Jan 5, 2017
852
603
Hey, that looks kind of nifty! Did you do any before & after testing?