
The God of Computer Coolers

By AlexTheGreatish
9 hours ago, tkitch said:

Why?

Besides the really long and academic answer that @ImorallySourcedElectrons just gave.
 

Modern CPUs have built-in constraints that don't allow them to run at temperatures that would damage them. So if a CPU (or GPU) gets too hot, it will shut itself off before it gets damaged.
 

I mean, you can even boot your OS with most modern CPUs without a heatsink. Sure, it will throttle like hell, but it will run.
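If you want to watch that throttling happen yourself, here's a minimal sketch, assuming a Linux box with the psutil package installed (the "coretemp" sensor label is an Intel-specific assumption; AMD systems typically expose "k10temp"):

```python
# Minimal sketch: watch CPU temperature and clock speed while the chip
# throttles under load. Assumes Linux with psutil installed; sensor
# labels vary by platform, so "coretemp" here is an assumption.
import time

import psutil

while True:
    temps = psutil.sensors_temperatures()     # empty dict on unsupported platforms
    cores = temps.get("coretemp", [])         # Intel driver; AMD usually "k10temp"
    hottest = max((t.current for t in cores), default=float("nan"))
    freq = psutil.cpu_freq()                  # may be None on exotic platforms
    mhz = freq.current if freq else float("nan")
    print(f"hottest sensor: {hottest:5.1f} °C | clock: {mhz:7.1f} MHz")
    time.sleep(1)
```

Run a stress test alongside it and you can see the clock drop the moment the temperature reading hits the limit.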
 

Back when I was building computers and overclocking (late '90s, early '00s) you had a real risk of killing your CPU if you OC'd with insufficient cooling or tried to boot without a heatsink. And those CPUs weren't 150+ W parts.


2 hours ago, Spindel said:

Back when I was building computers and overclocking (late '90s, early '00s) you had a real risk of killing your CPU if you OC'd with insufficient cooling or tried to boot without a heatsink. And those CPUs weren't 150+ W parts.

That's actually not that big of a risk anymore for *academic reasons*. 😅 

 

Basically, folks have gotten a lot better at designing temperature measurement circuits using the bits you have available in a digital CMOS process, and those temperature sensors are a lot quicker. More recent process nodes also try to be less susceptible to thermal runaway effects (which do exist in FETs, unlike what a lot of folks on Reddit and Stack Exchange claim; they just arise from very different mechanisms). The inclusion of things like a management engine means you can implement far more complex thermal throttling that doesn't take hundreds to thousands of clock cycles to start acting, and it's also quite doable at this point to do some predictive modelling of what your power draw and temperature are going to be in the near future, plus on-die voltage regulation, etc.
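To make the predictive-throttling idea concrete, here's a toy sketch of my own (not how any real CPU firmware works, and every constant is invented): extrapolate a first-order thermal RC model a few milliseconds ahead and back the multiplier off before the limit would actually be hit, instead of reacting after the fact:

```python
# Toy predictive thermal throttle. First-order thermal RC model:
#   dT/dt = (P * R_TH - (T - T_AMB)) / (R_TH * C_TH)
# All constants are invented for illustration only.
T_AMB = 25.0     # ambient temperature, °C
R_TH = 0.25      # junction-to-ambient thermal resistance, °C/W
C_TH = 2.0       # thermal capacitance, J/°C
TJ_MAX = 100.0   # throttle threshold, °C
DT = 0.001       # model time step, s
HORIZON = 0.05   # how far ahead to predict, s

def predict_temp(temp: float, power: float, seconds: float) -> float:
    """Step the RC model forward to estimate temperature `seconds` from now."""
    for _ in range(int(seconds / DT)):
        temp += DT * (power * R_TH - (temp - T_AMB)) / (R_TH * C_TH)
    return temp

def next_multiplier(temp: float, power: float, mult: int) -> int:
    """Throttle *before* TJ_MAX is predicted to be crossed, not after."""
    if predict_temp(temp, power, HORIZON) > TJ_MAX:
        return max(mult - 1, 8)   # back off, floor at x8
    return min(mult + 1, 60)      # otherwise creep back toward x60
```

The point isn't the model, it's the structure: acting on a prediction means the controller never has to wait for the sensor to actually read a dangerous value.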

 

It's gotten quite hard to kill CPUs. Where are the days when you could physically crack the die of a budget CPU? 😄


17 minutes ago, ImorallySourcedElectrons said:

That's actually not that big of a risk anymore for *academic reasons*. 😅 

 

Basically, folks have gotten a lot better at designing temperature measurement circuits using the bits you have available in a digital CMOS process, and those temperature sensors are a lot quicker. More recent process nodes also try to be less susceptible to thermal runaway effects (which do exist in FETs, unlike what a lot of folks on Reddit and Stack Exchange claim; they just arise from very different mechanisms). The inclusion of things like a management engine means you can implement far more complex thermal throttling that doesn't take hundreds to thousands of clock cycles to start acting, and it's also quite doable at this point to do some predictive modelling of what your power draw and temperature are going to be in the near future, plus on-die voltage regulation, etc.

 

It's gotten quite hard to kill CPUs. Where are the days when you could physically crack the die of a budget CPU? 😄

I did that with my Athlon Thunderbird, though not from heat: I dropped the cooler on it and chipped it 😛


On 3/8/2023 at 1:52 PM, AlexTheGreatish said:

It is nearly impossible to cool Intel's 13th gen CPUs... so we got a 5000 W industrial chiller.

 

Check out the Thermo Scientific NESLab ThermoFlex 5000 Chiller: https://lmg.gg/7IIR9

Purchases made through some store links may provide some compensation to Linus Media Group.

 

 

The issue is the IHS. 

 

I'm running a Supercool Computers direct-die block; a 420 mm, two 360 mm, and a 140 mm radiator; and two D5 pumps. But I also have an RTX 3090 STRIX with the 1000 W BIOS, which draws 600+ watts at load, so...

 

13900K with an SP (silicon prediction) P-core score of 107 (I think 100 overall)

 

Overclocked to per-core ratios of 60, 60, 60, 60, 59, 58, 57, 57 (×100 MHz).

I could have gone further, but my PSU (Seasonic 1300 PRIME) was running out of watts... At this overclock I got random hard crashes in games (PSU trips) until I undervolted the GPU. I now run it stock with an undervolt.
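For anyone doing the mental math on why the PSU tripped: the GPU and CPU figures below come from this post; everything else is a round guess on my part:

```python
# Back-of-the-envelope PSU budget. GPU/CPU numbers are from the post;
# the rest are rough guesses for illustration.
PSU_RATED = 1300         # Seasonic 1300 PRIME, W
gpu_sustained = 600      # RTX 3090 STRIX on the 1000 W BIOS, at load
cpu_oc = 350             # guess for the all-core OC (stock run was ~300 W)
rest = 100               # guess: fans, pumps, drives, motherboard
spike_factor = 1.5       # Ampere cards are known for millisecond power spikes

sustained = gpu_sustained + cpu_oc + rest
transient = gpu_sustained * spike_factor + cpu_oc + rest
print(f"sustained ~{sustained} W, spikes ~{transient:.0f} W, rated {PSU_RATED} W")
# sustained ~1050 W, spikes ~1350 W, rated 1300 W -> transient OCP trips
```

Sustained load fits, but a transient spike can cross the 1300 W rating, which matches the random hard trips going away once the GPU was undervolted.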

 

At 5.7 GHz all-core, 300 W max. The loop isn't heat-soaked, but I generally see temps level out in the low-to-mid 70s °C in R23 after looping it.

 

[Attachment: Cinebench R23 screenshot]

 

If you want: this is a 12th-gen block, and I just received a 13th-gen one. I can send you the 12th-gen block for your own testing so you won't have to wait for shipping from Asia. As you can see, it works just fine on a 13th-gen CPU.

 

Btw, for those wondering, this run is all stock: ASUS MultiCore Enhancement disabled, and undervolted. Not a huge difference between the two except for 100+ additional watts.

 

[Attachment: Cinebench R23 screenshot, no OC]

 


I love Alex's janky cooling experiments.

System Specs: Second-class potato, slightly mouldy

