
Help: Moving forward on 9900K overclock

Okay, it has been a while since I put together a new machine, and I'm overclocking this one manually because the auto settings in the UEFI apply way too much voltage and I hit thermal max pretty quickly during stress testing.  I have managed to get it stable at 4.9 with 1.24 volts, LLC 7, stock cache speed and an AVX offset of 2.  However, during certain tests (LINPACK in OCCT) it will eventually stop because a couple of the cores climb past 96c, even though the AIO coolant is still 35c with an ambient temp of 21c.  I am using a Corsair H115i; it idles around 31c, and during anything other than small sets or the LINPACK it never gets above 82c.

 

My goal is to get to 5GHz with an AVX offset of 2.  I think I can get there without a major bump in voltage, but I'm wondering why the AIO can't keep those couple of cores from reaching higher temps over a period of about 10 minutes of hardcore stress testing.  If I can get 5G and keep the temps down during less aggressive testing (larger sets and an offset for AVX), I think I'll be okay.

 

Since I have gotten to 4.9 on 1.24 volts, does that mean I got lucky in the silicon lottery on voltage, but not in the heat-generation area?  Is it normal for the cores to keep jumping up a degree or so every iteration of the test until thermal max, while the AIO's coolant stays so low (35c)?

 

Any input would be great.  The last OC I did was a 5820K that I got to 4.7 on an H100i.

 

I put my system information in the signature.  I hope it shows up. 

Asus ROG Maximus XI Extreme, Intel i9 9900K (5.1GHz, no AVX offset, 4.9GHz cache and 1.295V), 16G G. Skill Trident Z RGB 3200 CL14, EVGA GTX 760, Corsair H115i Pro, Samsung 970 EVO Plus 512 GB (x2), Corsair RM1000i PSU, Windows 10 Pro (1903).


What voltage do you get during tests? Try lowering LLC, it might be pushing your voltage above and beyond 1.24V, hence the temperature spike.
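For intuition, the droop being described here can be sketched numerically. This is a generic first-order loadline model with made-up example resistances, not Asus's actual LLC 1-8 curves:

```python
# Generic vdroop sketch: the VRM loadline resistance subtracts voltage
# under load, and higher LLC levels flatten that resistance (at the cost
# of overshoot on load transients). Resistances are illustrative only,
# not Asus LLC specifications.

def load_vcore(vset: float, amps: float, loadline_mohm: float) -> float:
    """Sustained voltage at the CPU for a given effective loadline."""
    return vset - amps * loadline_mohm / 1000.0

vset, amps = 1.24, 150.0
print(round(load_vcore(vset, amps, 1.6), 3))  # 1.0  (heavy droop at a 1.6 mOhm default loadline)
print(round(load_vcore(vset, amps, 0.2), 3))  # 1.21 (aggressive LLC: little droop sustained)
```

The point of the suggestion above: with aggressive LLC the sustained voltage stays near the setpoint, but transient overshoot can momentarily push past it, and that extra voltage shows up as heat.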

Main system: Ryzen 7 7800X3D / Asus ROG Strix B650E / G.Skill Trident Z5 NEO 32GB 6000Mhz / Powercolor RX 7900 XTX Red Devil/ EVGA 750W GQ / NZXT H5 Flow


49 minutes ago, PopsicleHustler said:

What voltage do you get during tests? Try lowering LLC, it might be pushing your voltage above and beyond 1.24V, hence the temperature spike.

HWiNFO64 shows the voltage at 1.23V.  I just ran the LINPACK again, and after 5 minutes some of the cores jumped to 101c for a moment, and so did the CPU package.  It seems to happen only during the AVX portion of the test: at 5G the cores run under 70c, but when the speed drops to 4.8 they start to climb and peak really high.  I know this must be the AVX part, since the speed drops by exactly the offset.



With an AVX offset of 2 you're basically at stock, so I can't tell if it's a good silicon-lottery result or not at 1.24v.  Though I'm pretty sure at 96C it's not really running at 1.24v; likely a lot higher when LLC kicks in.  I'm gonna guess 1.3+ with ~150w going through the CPU.

 

I'd try overclocking the RAM first; even on Intel, 2400 with bad timings is gonna massively hold you back, and no amount of minmaxing on the CPU will solve that.

 

Edit: it could be that the H115i isn't able to handle the 9900K at 4.7.  I have a 115i on my old delidded 8600K build and that thing runs at around 80C at only 100w; a 4.7 / 1.25v 9900K is around 135w under full AVX stress, which would explain the temps in the 90s.  Try to optimize your voltage as much as possible.
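The voltage/heat guess above can be sanity-checked with the usual dynamic-power approximation, P ≈ C·V²·f, scaled from a known operating point. The 150 w at 1.20v / 4.7 GHz baseline below is an assumed figure for illustration, not a measurement:

```python
# Rough dynamic-power scaling from an assumed operating point (P ~ C * V^2 * f).
# Baseline of 150 W @ 1.20 V / 4.7 GHz is illustrative, not measured.

def scale_power(p_base, v_base, f_base, v_new, f_new):
    return p_base * (v_new / v_base) ** 2 * (f_new / f_base)

# A "small" voltage bump dominates: 1.20 -> 1.30 V at fixed clocks is ~17% more heat,
print(round(scale_power(150, 1.20, 4.7, 1.30, 4.7), 1))  # 176.0
# while 4.7 -> 5.0 GHz at fixed voltage is only ~6% more.
print(round(scale_power(150, 1.20, 4.7, 1.20, 5.0), 1))  # 159.6
```

Which is why a hidden LLC-driven jump toward 1.3v+ would explain temps in the 90s better than the clock bump itself.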

5950x 1.33v 5.05 4.5 88C 195w ll R20 12k ll drp4 ll x570 dark hero ll gskill 4x8gb 3666 14-14-14-32-320-24-2T (zen trfc)  1.45v 45C 1.15v soc ll 6950xt gaming x trio 325w 60C ll samsung 970 500gb nvme os ll sandisk 4tb ssd ll 6x nf12/14 ippc fans ll tt gt10 case ll evga g2 1300w ll w10 pro ll 34GN850B ll AW3423DW

 

9900k 1.36v 5.1avx 4.9ring 85C 195w (daily) 1.02v 4.3ghz 80w 50C R20 temps score=5500 ll D15 ll Z390 taichi ult 1.60 bios ll gskill 4x8gb 14-14-14-30-280-20 ddr3666bdie 1.45v 45C 1.22sa/1.18 io  ll EVGA 30 non90 tie ftw3 1920//10000 0.85v 300w 71C ll  6x nf14 ippc 2000rpm ll 500gb nvme 970 evo ll l sandisk 4tb sata ssd +4tb exssd backup ll 2x 500gb samsung 970 evo raid 0 llCorsair graphite 780T ll EVGA P2 1200w ll w10p ll NEC PA241w ll pa32ucg-k

 

prebuilt 5800 stock ll 2x8gb ddr4 cl17 3466 ll oem 3080 0.85v 1890//10000 290w 74C ll 27gl850b ll pa272w ll w11

 


2 hours ago, xg32 said:


I'm getting faster RAM this week.  That's why I haven't tried any RAM tweaking yet.



Okay, a bit of an update.

I switched to adaptive CPU voltage and moved the LLC to 5.  I can now run OCCT large and small without errors and got a max temp of 81c doing the small.  However, when I did the LINPACK, it racked up 15 errors in a 30 minute run.  I wish I knew what those errors were, but I couldn't find a way to view them.  I thought the end of the run would produce a report in OCCT, am I wrong?  How are you supposed to know what errors it found?

 

I was also able to run an hour long test of AIDA64 without issue and the max temp was 92c with an average of 87c.  

 

Any suggestions on other tests I should do?  Should I move the AVX offset up one?  During the testing, HWiNFO64 showed an average of 1.203V and a max of 1.296V.



2 hours ago, tincanalley said:


What were the -exact- settings you were using in OCCT Linpack?  And for the data (large/small) tests?  I assume this was OCCT 5.0.0 ?


6 hours ago, tincanalley said:


That voltage range is kinda big; try to keep it at 1.25-1.3 and it should be stable.  As for upping the AVX offset to -1, I wouldn't do it; you could be pushing 100C again.  But know that regular workloads and gaming won't reach AIDA temps.


 


10 hours ago, Falkentyne said:

What were the -exact- settings you were using in OCCT Linpack?  And for the data (large/small) tests?  I assume this was OCCT 5.0.0 ?

OCCT 5.0.1.  It was using the 2019 set with AVX, Logical Cores and 90% memory.



6 hours ago, xg32 said:

that voltage range is kinda big, try to keep it at 1.25-1.3 and it should be stable, as for upping the avx offset to -1, i wouldn't do it, you could be pushing 100C again, but know that regular workload and gaming wouldn't reach AIDA temps.

Might be a stupid question, but I am able to set the max voltage I want, but with adaptive, how do I make sure it doesn't dip below a certain point?  So if it is dipping to 1.02 and it shouldn't go below 1.25, where would that setting be?

Thanks!  This is the first time I have gotten into tweaking voltage settings beyond a simple manual entry.



49 minutes ago, tincanalley said:

OCCT 5.0.1.  It was using the 2019 set with AVX, Logical Cores and 90% memory.

LOL !  Linpack with 90% memory?  Good luck dude.  You're not going to pass that without direct die cooling.

I actually figured you tried 5.0.1 so I tried it myself at 5 ghz, 1.275v, Loadline calibration=Turbo (LLC6 on Asus).

After like 20 seconds, my fans ramped up fast and I looked at the amps draw...

 

187 amps!!!!!  I instantly stopped the test because I knew it would either crash or I would reach 100C as small FFT Prime95 AVX is not even close to stable at that amps load!

 

My minimum stable load voltage with loadline calibration *DISABLED* (requires using auto voltage, with Vcore Loadline Calibration set to "standard", so the "AC Loadline" will boost the input voltage to 1.52v and then droop it at 1.6 mOhms) at 5 ghz with small FFT AVX/Linpack is 1.240v at 185 amps, measured with the on-die sense voltage (VR VOUT on Gigabyte; your Maximus XI Extreme can read this accurately as Super I/O Vcore).  But my VR VOUT was 1.199v (!!!) at 187 amps measured from the VRM... meaning it was about to crash!  (I did this test at 1.270v LLC Turbo using LinX 9.0.1, and it crashed earlier.)

 

The reason loadline calibration has to be set to "standard" (maximum, "Intel default" vdroop enabled) is that this improves transient response at heavy amps draw, allowing the minimum possible vcore required to be stable at that load and clocks, because the voltage won't "transient drop" below that VR VOUT.  Tighter LLC (less vdroop) will cause your vcore to drop below what's shown on sensors, sometimes by as much as 50-100mv, causing a crash.
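The numbers in this post follow from the simple first-order loadline model (V_out ≈ V_no-load − I × R_loadline). Here is a sketch using the figures quoted above (1.52v boosted no-load voltage, 1.6 mOhm Intel-default loadline, ~187 amps); this is the simplified model, not exact VRM behavior:

```python
# First-order loadline arithmetic: the AC loadline boosts the no-load
# voltage request, then vdroop subtracts current * loadline resistance.
# Input values are the ones quoted in this post; the model is a simplification.

def vr_vout(v_no_load: float, amps: float, ll_mohm: float) -> float:
    return v_no_load - amps * ll_mohm / 1000.0

# ~1.52 V at idle drooping along the 1.6 mOhm default loadline at 187 A:
print(round(vr_vout(1.52, 187, 1.6), 3))  # 1.221 -- in the ballpark of the ~1.2v load readings above
```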

 

Keep in mind that the max supported amps for the CPU not to start slowly degrading is 193 amps at 1.230v true load voltage (meaning if your load voltage is higher, the amps have to be lower; if the load voltage is lower, it's unclear whether >193 amps is still safe).  Since HWiNFO on your MXI showed 1.203v, that's about the same as mine.
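Those two limits can be wrapped in a quick check. The helper below is hypothetical, simply encoding the 193 amp / 1.230v figures quoted above; it is not an Intel-published formula:

```python
# Hypothetical guard encoding the degradation limits quoted above
# (<= 193 A with <= 1.230 V true load voltage); not an official spec check.

def within_quoted_limits(load_volts: float, amps: float) -> bool:
    return amps <= 193.0 and load_volts <= 1.230

print(within_quoted_limits(1.203, 187))  # True  -- the HWiNFO reading discussed above
print(within_quoted_limits(1.275, 187))  # False -- higher load voltage at the same amps
print(round(1.230 * 193, 1))             # 237.4 -- roughly the package watts at the limit
```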

 

If you can keep the temps under 100C, try +15mv higher voltage, but I doubt you can do that without direct die.  Linpack at 90% memory pushes things harder than Prime95 small FFT with AVX! (I think it's similar to small FFT with FMA3 enabled, using 29.8 build 3).

 

I actually did this test in a post edit just now:

 

Auto voltage, IA AC Loadline=1.6 mOhms, IA DC Loadline=1.6 mOhms, Loadline Calibration=Standard (1.6 mOhms--this matches the DC Loadline value so CPU VID= VR VOUT--the DC loadline value however is not important here on auto voltages, only the AC loadline and LLC is).

5 ghz core, 4.7 ghz cache.

 

I stopped this run when temps reached 99C and the amps draw was at the limit of the safe 9900K specification (on all auto voltages with default loadline calibration (1.6 mOhms), max amps must not exceed 193 at 100c).  It didn't crash (since I was not hurting transients with LLC), but it would have at those temps. :/

 

Idle (before running), Max load (VR VOUT, max amps) and max temps.

 

 

 

occt_idle.jpg

occt_load_vrvout.jpg

occt_temps.jpg

 

*edit* because why not...here's some prime95 small FFT avx tests too (I tested 1.230v VR VOUT by changing AC Loadline to 0.90 mOhms and enabling SVID Offset=Enabled, to reduce the idle voltage to 1.332v from 1.404v, but 1.230v VR VOUT was not stable...1.238v is the absolute minimum to pass 24k-64K range):

 

 

prime95_vrm.jpg

prime95_temps.jpg


5 hours ago, Falkentyne said:


Good to know.  I will stop trying to OC to pass this test.  I sure wish they would document what is actually happening in their stress tests.

Now I have a new one to figure out.  With the settings I have, I can pass the OCCT small and large, and Aida64 runs for hours showing an average of 88c with some spikes at 94c.  I figure that is good, as I know in real-world use I won't get there very often, if ever.  The issue I have is with Realbench.  The stress test gets going, but blue screens after about 2 minutes with CLOCK_WATCHDOG_TIMEOUT.  It is so bad that Windows can't even create the dump file; I have to do a reset or hard shutdown or it will sit on the screen at 0% complete.  I'm not sure what part of my OC is causing it, or if it would cross over to my daily use.  So far nothing I have done with regular use in Windows has caused any BSODs.



19 hours ago, tincanalley said:


See, personally I would want my average under 80c.

 

MSI B450 Pro Gaming Pro Carbon AC | AMD Ryzen 2700x | NZXT Kraken X52 | MSI GeForce RTX2070 Armour | Corsair Vengeance LPX 32GB (4*8) 3200MHz | Samsung 970 evo M.2 nvme 500GB Boot / Samsung 860 evo 500GB SSD | Corsair RM550X (2018) | Fractal Design Meshify C white | Logitech G Pro Wireless | Gigabyte Aorus AD27QD


17 minutes ago, tincanalley said:


Clock watchdog timeout is always insufficient vcore, all the time.  

Sometimes it's simply the cache ratio being too high.

 

Try the following.

1) Set your AVX offset to 0 (yes, zero), then re-do Realbench 2.56.  (AVX offsets trigger guardband penalties as the PLLs go to sleep when the ratio change occurs, triggering the -worst case- guardband penalty; that's why some people have instability with AVX offsets and none without them.)

 

2) (or) Set your cache ratio to x43 and re-do the realbench 2.56 run.


2 hours ago, Stormseeker9 said:

See, personally I would want my average under 80c.

That would be ideal, but I'll be damned if it can be done.  I swapped out an H100 for an H115 and it helped a bit; the only other option would be a custom cooler, and even then I don't know if I could keep it cool.  I will be messing with the voltage more tomorrow and hope to get it lower.

 



2 hours ago, Falkentyne said:


I can do the first one, but not the second; I'm already running the base clock for the cache.  I didn't want to mess with it until I made progress in other areas.  I don't know if that was necessary, but it was the route I took.  OCing isn't new to me, but getting this far into the weeds is.



I think you're on to something about the AVX offset.  The Realbench stress test ended in a BSOD every time, no matter what voltage I set, until I turned the AVX offset off, even when I moved the core multiplier to 48.  Once it was off, I did a test at 4.9 with a voltage of 1.25V and it ran to completion.  The average CPU temp was 85c with a max of 92c.  So I'm back on track; I'll stay away from the LINPACK and try other tests.



Well, I hit a wall.  I am at 4.9 with no AVX offset at 1.25V.  I ran and completed a 30-minute Realbench, 3 hours of Aida64 (CPU, FPU, Cache and Memory) and 12 hours of OCCT large sets; all of these tests had a max temp of 82c and an average of 70c.  When I try 5GHz, I get errors in OCCT large sets, Realbench blue screens about 60 seconds in, and Aida64 either blue screens or the system freezes.  I tried upping the voltage all the way to 1.39 in 0.005 increments, but gave up.  Even if that voltage had worked, Realbench put me over 100c in under 30 seconds, and so did Aida64.

So I'm going to assume I am stable at 4.9 and see how it goes under daily use.  As for getting to 5, unless I'm missing something, I can't see how this speed is attainable.  Am I missing something?  Am I doing something wrong?  I know I can raise the AVX offset, and some recommend as much as 3 or 4, while others say don't do it because it can introduce instability as the clock throttles up and down.

 

Oh, I ordered some G.Skill Trident Z 3200 CL14 to help in that area.  I don't think my current memory, G.Skill Ripjaws 2400 CL15, is the issue; I ran all the memory tests I could find, and I'm not overclocking it.  Still, it will be nice to have slightly faster RAM.

At least I'm higher than the out-of-box turbo settings.  Those give you 5 on a couple of cores while the rest are at 4.7, and the voltage is so high it runs too hot during stress testing.


