
Status: UNRESOLVED

Computer Type: Custom-built desktop

GPU: ZOTAC RTX 2080 Ti AMP MAXX 11GB GDDR6, driver version 446.14. Used the OC scanner in FireStorm (ZOTAC's OC software); added +75 MHz to the GPU clock // reports 1740 MHz boost clock

CPU: Intel Core i7-6850K @ 3.40 GHz, no overclock

Motherboard: Asrock X99 Extreme4, no overclock, BIOS mode Legacy

RAM: Crucial DDR4-2133 32GB (4 sticks), XMP not enabled, no overclock

PSU:  Corsair RM750x

Operating System & Version: Windows 10, latest update as of this post. I did a clean install less than a week ago on the new NVMe drive.

GPU Drivers: GeForce Game Ready Driver version 446.14

Latest UserBenchmark: Link

 

 

Yeah, so basically I'm trying to find out if this card I got is defective in any way. I've had it for about two days now, and I want to know whether GPU memory clock / utilization spikes are normal for the 2080 Ti series or whether this card is defective. I've tried disabling hardware acceleration in every program and the spikes still persist. The culprit seems to be Desktop Window Manager, and it happens whenever I open a program, File Explorer, or a game. If there are any other benchmarks you'd like me to run, I can do that.
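For anyone wanting to verify this on their own card: `nvidia-smi` can log utilization and memory clock once per second (`nvidia-smi --query-gpu=utilization.gpu,clocks.mem --format=csv,noheader,nounits -l 1`), and a short script can scan the log for idle-time spikes. Below is a minimal sketch assuming that output format; the sample log is invented for illustration, not real measurements:

```python
# Flag samples where the memory clock jumps while GPU utilization stays
# near idle. Assumed input format (two columns, no header/units):
#   nvidia-smi --query-gpu=utilization.gpu,clocks.mem --format=csv,noheader,nounits -l 1
SAMPLE_LOG = """\
1, 810
0, 810
2, 7000
1, 810
0, 6800
"""

def find_idle_spikes(csv_text, util_idle_max=5, mem_clock_spike_min=5000):
    """Return (line_no, util_pct, mem_clock_mhz) for idle samples with a spiked memory clock."""
    spikes = []
    for i, line in enumerate(csv_text.strip().splitlines(), start=1):
        util_s, clock_s = (field.strip() for field in line.split(","))
        util, clock = int(util_s), int(clock_s)
        if util <= util_idle_max and clock >= mem_clock_spike_min:
            spikes.append((i, util, clock))
    return spikes

print(find_idle_spikes(SAMPLE_LOG))  # flags the two idle-time spikes in the sample
```

If the spikes only line up with window/program opens (DWM activity) and never with sustained load, that points at normal boost behavior rather than a defect.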

 

Games I've tested this card on so far

 

Sea of Thieves: Max settings, 144 fps consistent at sea, 60-90 on islands/harbors.

 

Mordhau: Solid 165 fps (Monitor is 165 Hz 1ms)

 

GTAV: Max settings, 60-63 fps consistent when driving around in game. Dips to 50 fps on the explosion scene in the benchmark.

 

 

Latest Heaven Benchmark: Link

 

I'm getting Corsair Vengeance 3200 (2x16GB) sticks in later today, as I found out my current sticks are slow af for DDR4. Hoping to see better FPS once they're in.

 

 

 


It's working as intended. Those spikes happen when the card quickly needs to process something, like opening a window, which is exactly what it's supposed to do. I would also remove that auto OC; ZOTAC's FireStorm software causes issues for a lot of people.


1 minute ago, jaslion said:

It's working as intended. Those spikes happen when the card quickly needs to process something, like opening a window, which is exactly what it's supposed to do. I would also remove that auto OC; ZOTAC's FireStorm software causes issues for a lot of people.

I normally use MSI Afterburner but thought I'd try out the OC scanner. From what I understand, FireStorm leans to the safe side and sets a lower clock than the card can actually do, for stability reasons. What problems do people tend to run into?


1 minute ago, Wizwerd said:

I normally use MSI Afterburner but thought I'd try out the OC scanner. From what I understand, FireStorm leans to the safe side and sets a lower clock than the card can actually do, for stability reasons. What problems do people tend to run into?

Crashing, unstable OCs, weird usage issues, ...


I'm fairly sure those spikes are related to your CPU, since you're running it at stock and aiming for a high refresh rate. What resolution and in-game settings are you running? The GTA fps looks quite low. Any plans to OC the CPU?

 

Besides that, it would be great if you could post your GPU stats (core clock, temp) in the games where you experience it.

 

Cheers


37 minutes ago, Envit0 said:

I'm fairly sure those spikes are related to your CPU, since you're running it at stock and aiming for a high refresh rate. What resolution and in-game settings are you running? The GTA fps looks quite low. Any plans to OC the CPU?

Besides that, it would be great if you could post your GPU stats (core clock, temp) in the games where you experience it.

 

Cheers

Link

 

1920x1080 (16:9)

I usually play borderless windowed, but for these tests I'm using fullscreen.

Not currently, but if it would increase my fps then I could do it. I've never OC'd anything other than the GPU, so I'll have to figure out how.

 

GTAV Ingame stats

GPU: 9% utilization // 55 low, 65 consistent, 95 high/peak FPS || Temp: 53C

CPU: 35-45% utilization // 40C package, 35C core temps

 

I don't see any spikes while gaming, just at idle. It seems that memory-clock spikes to 7k at idle are just normal behavior for the 2080 Ti.

 

Edit: forgot to mention that my monitor has FreeSync enabled, and I enabled G-Sync Compatible in the Nvidia Control Panel. Monitor


Usage and clocks hopping around is normal. But at 1080p, a stock 6850K will choke a 2080 Ti by a noticeable amount. These chips perform much better around 4-4.2GHz; some will go up to 4.4-4.5.

Intel HEDT and Server platform enthusiasts: Intel HEDT Xeon/i7 Megathread 

 

Main PC 

CPU: i9 7980XE @4.5GHz/1.22v/-2 AVX offset 

Cooler: EKWB Supremacy Block - custom loop w/360mm +280mm rads 

Motherboard: EVGA X299 Dark 

RAM:4x8GB HyperX Predator DDR4 @3200Mhz CL16 

GPU: Nvidia FE 2060 Super/Corsair HydroX 2070 FE block 

Storage:  1TB MP34 + 1TB 970 Evo + 500GB Atom30 + 250GB 960 Evo 

Optical Drives: LG WH14NS40 

PSU: EVGA 1600W T2 

Case & Fans: Corsair 750D Airflow - 3x Noctua iPPC NF-F12 + 4x Noctua iPPC NF-A14 PWM 

OS: Windows 11

 

Display: LG 27UK650-W (4K 60Hz IPS panel)

Mouse: EVGA X17

Keyboard: Corsair K55 RGB

 

Mobile/Work Devices: 2020 M1 MacBook Air (work computer) - iPhone 13 Pro Max - Apple Watch S3

 

Other Misc Devices: iPod Video (Gen 5.5E, 128GB SD card swap, running Rockbox), Nintendo Switch


6 minutes ago, Zando Bob said:

Usage and clocks hopping around is normal. But at 1080p, a stock 6850K will choke a 2080 Ti by a noticeable amount. These chips perform much better around 4-4.2GHz; some will go up to 4.4-4.5.

I didn't know the 6850K could get above 4.0 GHz. I have a big Noctua NH-U12A sitting on it. Sounds like I'm about to spend the rest of today OCing it lol.


2 minutes ago, Wizwerd said:

I didn't know the 6850K could get above 4.0 GHz. I have a big Noctua NH-U12A sitting on it. Sounds like I'm about to spend the rest of today OCing it lol.

Just don't go over 1.3-1.35v vCore and you should be fine. The safe max for dailying a Haswell chip is 2.1v vInput and 1.35v vCore; these Broadwell chips should be run a little lower. I know people who have run them at up to 1.4v with no issues, but I've also seen people say they burn out above 1.3v, so I tend to stick to the middle, that 1.3-1.35v max range. My 6950X I currently run at 1.24v; my 5960X and 5820K (Haswell chips) I ran at 1.2-1.3v vCore. All my chips could run a little higher than I had them, but the extra 100-200 MHz wasn't worth the voltage (and massive heat) increases.

For your chip I'd try 1.9-1.95v vInput (closer to 1.9 is better for stability, AFAIK) and 1.3v vCore, and see if it'll do 4.2 at that. If it does, see if it goes higher; if it doesn't, give it a little more voltage. Do make sure it isn't going over 80C in normal use, though; if it hops up to 85-90C in Prime95 smallFFT that's fine, since it'll almost never hit that level of load in actual use, especially not in gaming. If your mobo has the RING multiplier and voltage control available, setting RING voltage to 1.1-1.2v and trying for 3.5GHz is worth a shot (this is your uncore; it tightens up the chip and makes it a bit snappier, and affects 1% lows and such more than max fps, which core clocks drive). I wouldn't recommend BCLK OCing on X99 since you have the multipliers easily available; it's trickier to work with and only worth it for memory OCing, which these chips don't do well anyway, and XOC stuff, neither of which you want for a daily.
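To be clear, these numbers are community rules of thumb, not Intel specifications. As a purely hypothetical sketch, the envelope above could be encoded as a pre-flight sanity check before applying settings:

```python
def check_broadwell_e_oc(vcore, vinput, normal_use_temp_c):
    """Flag settings outside the rule-of-thumb envelope described above.
    Limits are community hearsay for Broadwell-E, not official specs."""
    warnings = []
    if vcore > 1.35:
        warnings.append(f"vCore {vcore}v is above the suggested 1.35v daily max")
    if not (1.9 <= vinput <= 1.95):
        warnings.append(f"vInput {vinput}v is outside the suggested 1.9-1.95v window")
    if normal_use_temp_c > 80:
        warnings.append(f"{normal_use_temp_c}C in normal use exceeds the 80C guideline")
    return warnings

print(check_broadwell_e_oc(vcore=1.3, vinput=1.9, normal_use_temp_c=65))   # within the envelope
print(check_broadwell_e_oc(vcore=1.4, vinput=2.0, normal_use_temp_c=85))  # trips all three checks
```

Obviously nothing replaces actual stability testing; this just makes the limits explicit.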


8 minutes ago, Zando Bob said:

Just don't go over 1.3-1.35v vCore and you should be fine. The safe max for dailying a Haswell chip is 2.1v vInput and 1.35v vCore; these Broadwell chips should be run a little lower. I know people who have run them at up to 1.4v with no issues, but I've also seen people say they burn out above 1.3v, so I tend to stick to the middle, that 1.3-1.35v max range. My 6950X I currently run at 1.24v; my 5960X and 5820K (Haswell chips) I ran at 1.2-1.3v vCore. All my chips could run a little higher than I had them, but the extra 100-200 MHz wasn't worth the voltage (and massive heat) increases.

For your chip I'd try 1.9-1.95v vInput (closer to 1.9 is better for stability, AFAIK) and 1.3v vCore, and see if it'll do 4.2 at that. If it does, see if it goes higher; if it doesn't, give it a little more voltage. Do make sure it isn't going over 80C in normal use, though; if it hops up to 85-90C in Prime95 smallFFT that's fine, since it'll almost never hit that level of load in actual use, especially not in gaming. If your mobo has the RING multiplier and voltage control available, setting RING voltage to 1.1-1.2v and trying for 3.5GHz is worth a shot (this is your uncore; it tightens up the chip and makes it a bit snappier, and affects 1% lows and such more than max fps, which core clocks drive). I wouldn't recommend BCLK OCing on X99 since you have the multipliers easily available; it's trickier to work with and only worth it for memory OCing, which these chips don't do well anyway, and XOC stuff, neither of which you want for a daily.

Yeah, 80C is pretty hot to be running at all the time. I'll follow your advice here when the Corsair Vengeance sticks come in today. I don't think I want to OC the RAM sticks, as I hear it's very easy to mess up and cause instability issues. I'd like to hit at least 4.0GHz on the CPU if I'm going to OC it. I distinctly remember being disappointed when I bought the chip years ago that there weren't 4.0GHz chips available in the i7 series, so I'm excited to try it.


Apart from the low CPU frequency, your memory frequency is also really low...
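For a rough sense of what "low" means here: theoretical DDR4 peak bandwidth is channels × 8 bytes per transfer × transfer rate. A quick back-of-the-envelope comparison of the kits discussed in this thread (nominal peak figures, not measured numbers):

```python
def ddr4_peak_bandwidth_gbs(channels, mt_per_s):
    """Theoretical peak bandwidth in GB/s: channels x 8 bytes (64-bit bus) x MT/s."""
    return channels * 8 * mt_per_s / 1000

# OP's current 4-stick DDR4-2133 (quad channel), the incoming 2x16GB DDR4-3200
# (dual channel), and a 4x8GB DDR4-3200 quad-channel kit:
print(ddr4_peak_bandwidth_gbs(4, 2133))  # 68.256 GB/s
print(ddr4_peak_bandwidth_gbs(2, 3200))  # 51.2 GB/s
print(ddr4_peak_bandwidth_gbs(4, 3200))  # 102.4 GB/s
```

Note the 2x16GB 3200 kit actually has less peak bandwidth than four sticks of 2133 in quad channel; real-world gaming impact is much smaller than these peak numbers suggest, though.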

CPU: i7-2600K 4751MHz 1.44V (software) --> 1.47V at the back of the socket Motherboard: Asrock Z77 Extreme4 (BCLK: 103.3MHz) CPU Cooler: Noctua NH-D15 RAM: Adata XPG 2x8GB DDR3 (XMP: 2133MHz 10-11-11-30 CR2, custom: 2203MHz 10-11-10-26 CR1 tRFC:230 tREFI:14000) GPU: Asus GTX 1070 Dual (Super Jetstream vbios, +70(2025-2088MHz)/+400(8.8Gbps)) SSD: Samsung 840 Pro 256GB (main boot drive), Transcend SSD370 128GB PSU: Seasonic X-660 80+ Gold Case: Antec P110 Silent, 5 intakes 1 exhaust Monitor: AOC G2460PF 1080p 144Hz (150Hz max w/ DP, 121Hz max w/ HDMI) TN panel Keyboard: Logitech G610 Orion (Cherry MX Blue) with SteelSeries Apex M260 keycaps Mouse: BenQ Zowie FK1

 

Model: HP Omen 17 17-an110ca CPU: i7-8750H (0.125V core & cache, 50mV SA undervolt) GPU: GTX 1060 6GB Mobile (+80/+450, 1650MHz~1750MHz 0.78V~0.85V) RAM: 8+8GB DDR4-2400 18-17-17-39 2T Storage: HP EX920 1TB PCIe x4 M.2 SSD + Crucial MX500 1TB 2.5" SATA SSD, 128GB Toshiba PCIe x2 M.2 SSD (KBG30ZMV128G) gone cooking externally, 1TB Seagate 7200RPM 2.5" HDD (ST1000LM049-2GH172) left outside Monitor: 1080p 126Hz IPS G-sync

 

Desktop benching:

Cinebench R15 Single thread:168 Multi-thread: 833 

SuperPi (v1.5 from Techpowerup, PI value output) 16K: 0.100s 1M: 8.255s 32M: 7m 45.93s


Just now, Wizwerd said:

Yeah, 80C is pretty hot to be running at all the time. I'll follow your advice here when the Corsair Vengeance sticks come in today. I don't think I want to OC the RAM sticks, as I hear it's very easy to mess up and cause instability issues. I'd like to hit at least 4.0GHz on the CPU if I'm going to OC it. I distinctly remember being disappointed when I bought the chip years ago that there weren't 4.0GHz chips available in the i7 series, so I'm excited to try it.

Yeah, HEDT chips all run much lower clocks at stock than they can actually do. Something like a 7980XE comes in around 2.8GHz, since Intel has to hit "yeah, this is coolable by Joe Schmoe with his random AIO", but if you have the cooling they'll do 4.5 no problem. X58 chips did 4.2-4.5, X79 chips did 4.2-4.7 fine, Haswell chips on X99 do 4.2-4.7 pretty reliably, and Broadwell chips can usually do 4.0-4.2, 4.4+ if they're golden. It's always worth OCing if you have a K/X chip and are a gamer; if you're doing actual production work then stability could be a concern, and running stock is better for that.

80C is fine as a max, but yeah, if it runs that in normal use, that's a yikes. My 6950X sits in the 40s-60s depending on load; it'll only hop up to the 80s after a few hours of Prime95 smallFFT. But then I have it on a dual-360mm-rad custom loop; it'd be much hotter on an air cooler. Yours is a 6c/12t chip versus this 10c/20t behemoth, though, so it should run cooler at similar voltages.

What RAM do you have coming in? Since you're on X99, running a 4-stick kit is always worth it so you can grab the extra bandwidth quad-channel RAM offers. I run 4x8GB + 4x4GB for a total of 48GB across 8 sticks, pushed manually to 3200MHz CL16-18-18-36, which is the XMP profile for the 8GB sticks. If you're using a single matched kit, just enable XMP and bam, ez.

 

 


2 minutes ago, Zando Bob said:

Yeah, HEDT chips all run much lower clocks at stock than they can actually do. Something like a 7980XE comes in around 2.8GHz, since Intel has to hit "yeah, this is coolable by Joe Schmoe with his random AIO", but if you have the cooling they'll do 4.5 no problem. X58 chips did 4.2-4.5, X79 chips did 4.2-4.7 fine, Haswell chips on X99 do 4.2-4.7 pretty reliably, and Broadwell chips can usually do 4.0-4.2, 4.4+ if they're golden. It's always worth OCing if you have a K/X chip and are a gamer; if you're doing actual production work then stability could be a concern, and running stock is better for that.

80C is fine as a max, but yeah, if it runs that in normal use, that's a yikes. My 6950X sits in the 40s-60s depending on load; it'll only hop up to the 80s after a few hours of Prime95 smallFFT. But then I have it on a dual-360mm-rad custom loop; it'd be much hotter on an air cooler. Yours is a 6c/12t chip versus this 10c/20t behemoth, though, so it should run cooler at similar voltages.

What RAM do you have coming in? Since you're on X99, running a 4-stick kit is always worth it so you can grab the extra bandwidth quad-channel RAM offers. I run 4x8GB + 4x4GB for a total of 48GB across 8 sticks, pushed manually to 3200MHz CL16-18-18-36, which is the XMP profile for the 8GB sticks. If you're using a single matched kit, just enable XMP and bam, ez.

 

 

RAM

 

I was thinking about grabbing another set, but I thought 32 gigs would be fine. As for workstation use, I don't do a lot of video rendering, but I'll occasionally upload a YT vid of me and my friends in games. TBH, yesterday I didn't think my RAM or CPU could be a bottleneck, as usage never goes above 50% on the CPU and 30% on RAM. If I'm going for quad channel, I could order another set and have it tomorrow, but atm, 64 gigs of RAM is kinda overkill for gaming, no?

 

I get the idea of having extra bandwidth, but the only CPU-intensive game I play atm is Mordhau. How much FPS would I really gain from quad-channel 64GB anyway?


2 minutes ago, Wizwerd said:

RAM

 

I was thinking about grabbing another set, but I thought 32 gigs would be fine. As for workstation use, I don't do a lot of video rendering, but I'll occasionally upload a YT vid of me and my friends in games. TBH, yesterday I didn't think my RAM or CPU could be a bottleneck, as usage never goes above 50% on the CPU and 30% on RAM. If I'm going for quad channel, I could order another set and have it tomorrow, but atm, 64 gigs of RAM is kinda overkill for gaming, no?

I get the idea of having extra bandwidth, but the only CPU-intensive game I play atm is Mordhau. How much FPS would I really gain from quad-channel 64GB anyway?

Yeah, 32GB is fine for editing/gaming. I just added the 4x4GB kit to my rig because I had it lying around, not because I ever hit the limit of the 32GB kit I was running before.

Depends on the game; it's usually not a large difference, so probably not worth the price. If you hadn't already ordered the kit, I'd say to get a 4x8GB one instead, because why not use quad channel when it's available? But since the kit is already on the way, just use that; it should be fine.

The CPU can still bottleneck at low usage, because that number is an average. Most games hit 2-3 cores hard but barely push the rest, so the averaged usage looks low. Most of the work is done by those 2-3 cores, though, and since they're running at a lower clock they can't keep up. Higher clocks = higher single-core performance = better performance in games that don't load all your cores to 100%. And since Intel chips regularly clock higher than their stock boost (AMD chips, with their better PBO system, can't always outclock their own boost in games), it'll perform better in anything that does use all the cores as well.
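The averaging effect is easy to demonstrate with made-up per-thread numbers for a 6c/12t chip: a game hammering three threads while the rest sit nearly idle still reports a modest overall figure.

```python
# Hypothetical per-thread utilization on a 6c/12t CPU: a game pegging
# three threads while the other nine are close to idle.
per_thread_util = [97, 95, 92, 10, 8, 6, 5, 4, 3, 3, 2, 2]

average = sum(per_thread_util) / len(per_thread_util)
print(f"average: {average:.2f}%")          # the "CPU usage" number most tools show
print(f"busiest: {max(per_thread_util)}%")  # the thread actually limiting the frame rate
```

The headline number here is about 27%, while one thread is pegged at 97% and dictating the frame rate.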


14 hours ago, Zando Bob said:

Yeah, 32GB is fine for editing/gaming. I just added the 4x4GB kit to my rig because I had it lying around, not because I ever hit the limit of the 32GB kit I was running before.

Depends on the game; it's usually not a large difference, so probably not worth the price. If you hadn't already ordered the kit, I'd say to get a 4x8GB one instead, because why not use quad channel when it's available? But since the kit is already on the way, just use that; it should be fine.

The CPU can still bottleneck at low usage, because that number is an average. Most games hit 2-3 cores hard but barely push the rest, so the averaged usage looks low. Most of the work is done by those 2-3 cores, though, and since they're running at a lower clock they can't keep up. Higher clocks = higher single-core performance = better performance in games that don't load all your cores to 100%. And since Intel chips regularly clock higher than their stock boost (AMD chips, with their better PBO system, can't always outclock their own boost in games), it'll perform better in anything that does use all the cores as well.

Alright, so I installed the RAM, and my mobo has some special features that caught me off guard. It has preset OC tuning profiles that clock the CPU to 3.8, 4.0, and 4.2 GHz; I tested the 3.8 and 4.0 ones but they would immediately crash, so I stopped trying that feature lol. Then I went to set my memory frequency to 3200 and enable the XMP 2.0 profile, and by DEFAULT that changes the BCLK from 100 to 120, which started causing crashes. Once I set it back to 100, I could boot into Windows and confirmed the new sticks run at 3200. Then a good friend told me that when he OC'd he used the Intel Extreme Tuning Utility, which lets you change settings and stress test from the desktop; that sounded much easier, so I looked up a YT tutorial for it.

 

When I'm using this program, it seems to be missing all the voltage adjustment features that are in the BIOS; it really dumbs the process down to just sliding the GHz as high as you can go. Right now 4.0 GHz is the highest I can get without crashing, and my max temps hit 50C on the CPU. It really seems like I could go further, but anything above 4.0 is an instant crash. I'm sitting here doing an hour-long stress test to make sure it stays stable. At least I don't have to worry about degradation from high CPU temps by settling for 4.0 GHz.
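For context on why the BCLK jump caused crashes: the effective core clock is simply BCLK × multiplier, so a board that reaches the memory target by raising BCLK to 120 MHz also drags the CPU (and cache) clocks up with it. A rough illustration, using the 6850K's stock 34x multiplier as the example (the board's actual multiplier settings may differ):

```python
def effective_clock_mhz(bclk_mhz, multiplier):
    """Effective core clock = base clock x multiplier."""
    return bclk_mhz * multiplier

# Stock 100 MHz BCLK vs the 120 MHz BCLK the XMP preset applied:
print(effective_clock_mhz(100, 34))  # 3400 MHz - the 6850K's stock base clock
print(effective_clock_mhz(120, 34))  # 4080 MHz - an unintended ~680 MHz overclock
```

In other words, the XMP preset was silently applying a sizable CPU overclock at stock voltage, which explains the instant crashes.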


@Zando Bob I went back into the BIOS, made the exact changes you specified to the input voltage and the vCore voltage, and this time I was able to boot at 4.2 GHz.

 

latest benchmark

 

It seems I lost a bit of performance in some places but gained about 3% on the CPU.

 

It also says my average clock went up to 3.85GHz from 3.65GHz, so I have you to thank for that. You've been a big help, homie, tytytytytyty


1 hour ago, Wizwerd said:

@Zando Bob I went back into the BIOS, made the exact changes you specified to the input voltage and the vCore voltage, and this time I was able to boot at 4.2 GHz.

latest benchmark

It seems I lost a bit of performance in some places but gained about 3% on the CPU.

It also says my average clock went up to 3.85GHz from 3.65GHz, so I have you to thank for that. You've been a big help, homie, tytytytytyty

You're welcome! For CPU testing, I'd recommend Cinebench R20. It's a very consistent test, meaning you'll get more or less the same score each run (usually a variation of 20-50 points depending on what's running in the background). You can run it at stock, note that score, then OC and see what you've gained. RAM speed and uncore also affect the score a bit, so it's an easy way to see any gains you may have made over stock.

 

That's what my 6950X did stock:

IMG_3672.thumb.jpg.973b62e153e8a9945cc6be56b8982492.jpg

Vs at 4.2 core/3.5 uncore with 3200Mhz CL16 RAM:

Screenshot_129.thumb.PNG.0058ef4cc3afc6feff6907f3a447193b.PNG

31% faster single-core, 36% better multi-core. A massive improvement over stock.
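Those percentages are just relative score changes, so anyone can compute their own from a stock and an overclocked run. A tiny helper; the example scores below are hypothetical, not the ones in the screenshots:

```python
def pct_gain(stock_score, oc_score):
    """Percent improvement of an overclocked benchmark score over a stock baseline."""
    return (oc_score - stock_score) / stock_score * 100

# Hypothetical Cinebench R20 scores (stock, overclocked):
print(round(pct_gain(310, 403), 1))    # single-core comparison
print(round(pct_gain(4400, 6000), 1))  # multi-core comparison
```

Given the run-to-run variation of 20-50 points mentioned above, differences of a percent or two are noise; gains like 30%+ are clearly real.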


23 minutes ago, Zando Bob said:

You're welcome! For CPU testing, I'd recommend Cinebench R20. It's a very consistent test, meaning you'll get more or less the same score each run (usually a variation of 20-50 points depending on what's running in the background). You can run it at stock, note that score, then OC and see what you've gained. RAM speed and uncore also affect the score a bit, so it's an easy way to see any gains you may have made over stock.

 

That's what my 6950X did stock:

IMG_3672.thumb.jpg.973b62e153e8a9945cc6be56b8982492.jpg

Vs at 4.2 core/3.5 uncore with 3200Mhz CL16 RAM:

Screenshot_129.thumb.PNG.0058ef4cc3afc6feff6907f3a447193b.PNG

31% faster single-core, 36% better multi-core. A massive improvement over stock.

I'd run a stock test, but after 4 days of waking up, troubleshooting, googling, etc., I'm kinda beat. I can already tell a huge difference in the games I'm playing.

 

Mordhau, which can use all 6 of my cores, is running at 165 FPS on near-max settings, when before I had to use low settings on everything to get that framerate.

 

I tested it in populated lobbies and on different maps; the difference is huge. I barely dip below 100 fps on my lows. I think I'll see another big improvement when the 4x8GB CL16 sticks I ordered come tomorrow. I'm going to return the 16-gig ones, because after doing some research I've learned how to judge RAM much better. At $86 per set, it's a deal.

 

I'll let you know the performance difference between dual channel and quad channel tomorrow.

cinebench release 20 score.PNG


2 minutes ago, Wizwerd said:

I'd run a stock test, but after 4 days of waking up, troubleshooting, googling, etc., I'm kinda beat. I can already tell a huge difference in the games I'm playing.

Mordhau, which can use all 6 of my cores, is running at 165 FPS on near-max settings, when before I had to use low settings on everything to get that framerate.

I tested it in populated lobbies and on different maps; the difference is huge. I barely dip below 100 fps on my lows. I think I'll see another big improvement when the 4x8GB CL16 sticks I ordered come tomorrow. I'm going to return the 16-gig ones, because after doing some research I've learned how to judge RAM much better. At $86 per set, it's a deal.

I'll let you know the performance difference between dual channel and quad channel tomorrow.

Noice! And hehe, exact same single-core score lol. These chips seem to be remarkably consistent at the same clocks. My 5960X also did 403 single-core, but at 4.5GHz core/3.7GHz uncore; it needed 300MHz/200MHz more to hit the same performance. Your chip should behave exactly like mine in any game that uses 6 cores or fewer, which is most of them, since few games actually scale well across many cores.
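The "same score at different clocks" observation boils down to points per GHz, which is a rough per-clock (IPC) comparison. A quick sketch using the 403 single-core figure mentioned above at the two core clocks being compared:

```python
def score_per_ghz(score, core_ghz):
    """Single-core benchmark points per GHz: a rough per-clock efficiency figure."""
    return score / core_ghz

# Same Cinebench R20 single-core score (403) at two different core clocks:
at_4_2 = score_per_ghz(403, 4.2)  # Broadwell-E at 4.2 GHz, ~96 points/GHz
at_4_5 = score_per_ghz(403, 4.5)  # Haswell-E at 4.5 GHz, ~89.6 points/GHz
print(round(at_4_2, 1), round(at_4_5, 1))
```

That roughly 7% per-clock gap is consistent with the Haswell chip needing a few hundred extra MHz to match the newer architecture's score.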

