
7800X vs 7700K, hint: the 7800X is slower (even when overclocked)

kiska3
1 minute ago, done12many2 said:

 

Just as it is with core clocks, the goal in cache overclocking should be to minimize wasted cycles.  Increasing its speed increases the opportunity for faster cycles from time of request to time received (lower latency).  

 

I think the fact that higher memory overclocks net less return from cache overclocking can be explained by the fact that minimizing wasted cycles is a product of both ends.  If it's a miss, the request is then sent to the next in line.  If the next batter up has his game face on, he helps keep overall wasted cycles to a minimum.  In the end, the faster memory is already doing a great job of sending stuff back, so the cache doesn't benefit as much from running faster.  That's not to say that it doesn't benefit at all, because the faster cache always has better opportunity.  

 

Just a hunch.  :D

 

 

The effect of cache (mesh) speed on Skylake-X is pretty dramatic.  Increasing cache from the stock 2400 MHz to just 3000 MHz can bring about a 10 ns+ drop in request-to-received latency, which is what AIDA64 measures.  

That could explain the difference in OP's reported performance. If the lower-end Skylake-X chips run a slower cache, everything attached to it is slower as a result. Would be interesting to see reviewers tinker with Skylake-X's cache speed, and bench the results. Cache overclocking just might become relevant again, lol. 
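A back-of-the-envelope way to see why a mesh overclock cuts latency: if a request spends a roughly fixed number of mesh cycles in flight, the wall-clock time falls as the mesh clock rises. A quick sketch with made-up numbers (the ~70-cycle figure is purely an assumed illustration; the 10 ns+ drop quoted above would imply a larger cycle count):

```python
def mesh_latency_ns(mesh_cycles, mesh_mhz):
    # Wall-clock time for a fixed number of mesh cycles.
    # MHz -> cycles per nanosecond, so ns = cycles / (MHz / 1000).
    return mesh_cycles / (mesh_mhz / 1000.0)

# Hypothetical ~70-cycle trip across the mesh at stock vs overclocked:
stock = mesh_latency_ns(70, 2400)       # ~29.2 ns
overclocked = mesh_latency_ns(70, 3000) # ~23.3 ns
drop = stock - overclocked              # ~5.8 ns saved from the OC alone
```

The shape of the curve is the point, not the exact numbers: the same cycle count simply completes faster at a higher clock.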

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


7 minutes ago, MageTank said:

I have no idea. I just know that uncore is tied to your memory controller, ALU, FPU, L1, L2 and your QPI controllers (L3, IMC, etc).

 

https://en.wikipedia.org/wiki/Uncore

So I am assuming, if your memory controller is bottlenecked by your uncore (cache) speed in any way, that increasing it will remove said bottleneck. I don't really understand the science behind it, I am just a simple overclocker, but someone might be able to clarify for us if they understand how it works. Either way, that's how a lot of the higher end memory overclockers set their records. Using tricks most people deemed "worthless". 

On a simplified level cache saves values as they are read (and/or written depending on architecture) from central memory and the cpu looks for data there before it accesses ram. This can be more efficient because cache is much faster than ram. However it also means cache must be accessed first every time. It's not so much about bottlenecking but about having a fixed time that the cpu spends looking for data in cache. Since it is so fast that time is still small so the difference you notice in access latency is not that high. At least that's what seems likely to me (certainly it's much more complex in modern architectures, especially when it comes to working out the actual ratio between clock speed and latency but it should still be similar to that on a macro level).
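Sauron's point about a fixed lookup cost paid on every access is the classic average-memory-access-time model. A minimal sketch with made-up numbers (hit time, hit rate and DRAM penalty are all assumptions for illustration, not measured values):

```python
def amat_ns(hit_time_ns, miss_rate, miss_penalty_ns):
    # Average Memory Access Time: the cache is checked on every access,
    # so its hit time is always paid; the DRAM trip is only paid on a miss.
    return hit_time_ns + miss_rate * miss_penalty_ns

# Hypothetical numbers: 10 ns cache hit time, 60 ns penalty to reach DRAM.
with_cache = amat_ns(10.0, 0.10, 60.0)  # 90% hit rate
without_cache = 60.0                    # every access pays the full DRAM trip
```

This also shows why a faster cache only moves the `hit_time_ns` term: once memory itself is fast (a small miss penalty), the remaining gain from overclocking the cache shrinks.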

 

-edit-

@done12many2 explained it in more detail than I did

Don't ask to ask, just ask... please 🤨

sudo chmod -R 000 /*


1 minute ago, Sauron said:

On a simplified level cache saves values as they are read (and/or written depending on architecture) from central memory and the cpu looks for data there before it accesses ram. This can be more efficient because cache is much faster than ram. However it also means cache must be accessed first every time. It's not so much about bottlenecking but about having a fixed time that the cpu spends looking for data in cache. Since it is so fast that time is still small so the difference you notice in access latency is not that high. At least that's what seems likely to me (certainly it's much more complex in modern architectures, especially when it comes to working out the actual ratio between clock speed and latency but it should still be similar to that on a macro level).

 

-edit-

@done12many2 explained it in more detail than I did

 

Yeah, but yours sounded more professional.  I always type like I talk.  haha


On the actual topic, Steve really needs to add a Ryzen 5/7 comparison. Even by his testing from several months ago, it looks a lot like the 7800X is pretty much hitting the exact same barriers that Ryzen did, minus a few exceptions where that single core can be pushed harder. 

 

If that does turn out to be the case (and it looks a lot like it), then it would confirm one of my suspicions since Ryzen launched: Nvidia's driver team has found a way to exploit the Ring Bus to gain extra performance. So the performance difference, while real, has a lot to do with the lack of a high-end AMD card to highlight what's been going on. 


This brings up a really interesting discussion about gaming performance: Coffee Lake (6c & 4c) should be a Kaby Lake refresh/node improvement. Ice Lake, which should come in late 2018, is an outright architecture change. Are they going to use the same mesh cache system on the desktop platforms going forward?

 

If the cache system switches, it could leave the Coffee Lake CPUs as the best gaming CPU for a good number of years, or until the GPU drivers get adjusted to the new CPU environment. 


Further thoughts: is it possible that both Nvidia's Launch Lead and AMD's "Finewine" effects are actually down to the way Nvidia functionally gave up on optimizing for AMD CPUs and used their software scheduler to extract a baseline of better performance from the Intel architecture, rather than the specific core itself?  

 

This would explain the "clumping" numbers we generally see when we get testing of the 2600K through current K-SKU CPUs. In my neck of the analysis woods, the way those results clump told me that there's almost never a CPU bottleneck in any of the Intel or Ryzen CPUs. Nvidia's drivers are really clever at something that doesn't really work when the L3 becomes a "victim cache".   And it's not a small issue: Nvidia has found somewhere between 15-30% more FPS for the work.  That's impressive.


2 minutes ago, Taf the Ghost said:

In my neck of the analysis woods, the way those results clump told me that there's almost never a CPU bottleneck in any of the Intel or Ryzen CPUs. Nvidia's drivers are really clever at something that doesn't really work when the L3 becomes a "victim cache". 

It also would explain Ryzen's poor gaming performance (because Ryzen has 2 L3 caches instead of 1)....

 

It will be very interesting to see a Ryzen vs 7700K and a 7800X vs 7700K comparison with an AMD GPU (probably Vega, because the 580 is too weak)

CPU: Intel Core i7-5820K | Motherboard: AsRock X99 Extreme4 | Graphics Card: Gigabyte GTX 1080 G1 Gaming | RAM: 16GB G.Skill Ripjaws4 2133MHz | Storage: 1 x Samsung 860 EVO 1TB | 1 x WD Green 2TB | 1 x WD Blue 500GB | PSU: Corsair RM750x | Case: Phanteks Enthoo Pro (White) | Cooling: Arctic Freezer i32

 

Mice: Logitech G Pro X Superlight (main), Logitech G Pro Wireless, Razer Viper Ultimate, Zowie S1 Divina Blue, Zowie FK1-B Divina Blue, Logitech G Pro (3366 sensor), Glorious Model O, Razer Viper Mini, Logitech G305, Logitech G502, Logitech G402


4 minutes ago, PCGuy_5960 said:

It also would explain Ryzen's poor gaming performance (because Ryzen has 2 L3 caches instead of 1)....

 

It will be very interesting to see a Ryzen vs 7700K and a 7800X vs 7700K comparison with an AMD GPU (probably Vega, because the 580 is too weak)

That would be good! Vega and 1080/Ti as a control sample as well to see how it all goes.

 

Then do it all again once Threadripper is here :D

5950X | NH D15S | 64GB 3200Mhz | RTX 3090 | ASUS PG348Q+MG278Q

 


Yeah, the X ain't worth it right now. All depends though, could be a write-off, or whatever.

Go with the K for sure, it's good value if you O/C for games.

 

I don't see the need, a used unit from the 4th gen is dirt cheap these days.


The people who keep bringing up this future-proofing crap are the same people who told everyone to get 8350s over 3770s and 4670s etc., and look what happened: that 8350 is getting whacked by a 2500K in 2017, let alone older i7s. Going by their logic, you shouldn't move to a 1500X or 1600 either.


12 minutes ago, PCGuy_5960 said:

It also would explain Ryzen's poor gaming performance (because Ryzen has 2 L3 caches instead of 1)....

 

It will be very interesting to see a Ryzen vs 7700K and a 7800X vs 7700K comparison with an AMD GPU (probably Vega, because the 580 is too weak)

There are some other little wrinkles to Nvidia's driver and the 6-thread approach it takes, which could also be affecting the 7800X specifically. And we already know there is a DX12 + Nvidia 1080/Ti + Ryzen issue, which could also be cropping up for the 7800X as well. While Steve's larger Ryzen testing data set is from 4 months ago, the 7800X and Ryzen 7 1800X numbers look starkly similar in most games. (Almost purely single-core games will still get the frequency boost.)

 

Granted, for me, the dual-core results were the big tip-off to this months ago.  Lower-clocked G4xxx series CPUs shouldn't be faster, in some cases, than Ryzen parts. Unless Nvidia's driver isn't actually responding to "clock frequency" but to L3 cache speed, which scales with clock speed. That would answer a lot of questions we've had since Ryzen launched, and it really shows that AMD being out of the high-end GPU market this cycle made the situation look a lot worse.


16 minutes ago, PCGuy_5960 said:

It also would explain Ryzen's poor gaming performance (because Ryzen has 2 L3 caches instead of 1)....

 

It will be very interesting to see a Ryzen vs 7700K and a 7800X vs 7700K comparison with an AMD GPU (probably Vega, because the 580 is too weak)

Hardware Unboxed did a video on this; they even used a 295X, which is faster than a 580.


Just now, Demonking said:

Hardware unbox did a video on this even used a 295x which is faster than 580

But it's a dual GPU card, so not optimal :/



@Demonking it's less that Nvidia has been handicapping AMD CPUs; it's far more that they found a way to extract a chunk of extra speed out of the Intel architecture on their top-tier GPUs.

 

There can be bottlenecks anywhere in the chain: CPU, motherboard, chipset, GPU, game engine or GPU driver. With that many "moving" parts, it takes a lot of data to come up with something. But the L3 cache structure & Nvidia's driver implementation would make a lot of sense.

 

It would also explain the DX12 issues. It's already known that Nvidia's DX12 setup isn't as good as AMD's, but it strikes me that whatever approach they were able to master for DX11 and the L3/Ring Bus just doesn't carry over.  It would explain one of the things going on.


10 hours ago, Enderman said:

So you also think this is margin of error?

[AIDA64 CPU Photoworxx benchmark chart]

Ok sure, whatever you want to believe dude...........

"6700k and 7700k must have the same IPC, that's why these benchmarks show a difference" xD

 


Dude, it has a measly increase of 0.48%. You really believe that is not margin of error? You see that the graph starts at 26000, right?

 

And again, how many times have these benches been run to give a good average?

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


3 minutes ago, Notional said:

Dude, it has a measly increase of 0.48%. You really believe that is not margin of error? You see that the graph starts at 26000, right?

 

And again, how many times have these benches been run to give a good average?

That's not even the biggest deal in regards to Photoworxx. Photoworxx is not an IPC test, lol. It doesn't care about clock speed at all. It's a memory bandwidth test. Look at how Xeons and Opterons top the chart, and how I was able to score over 20% higher than that 7700K's result with my 6700K clocked at 4.5GHz. Over 33k score with just my memory overclock alone. 

 

This is why I hate graphs. They are almost always deceptive when the actual logistics are ignored. The same can be said of percentages. The difference between 20 and 22 fps is only 2 FPS, but in a percentage, that is a 10% difference in framerates. However, at 100fps vs 98fps (still 2fps difference) the difference is only 2%. Yet people still use percentages in regards to framerates, to represent a "clear" IPC boost.
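The percentage point above is simple arithmetic, but it's worth making concrete with the same numbers from the post:

```python
def pct(diff_fps, base_fps):
    # Express an absolute FPS gap as a percentage of the baseline.
    return diff_fps / base_fps * 100

# The same 2 fps gap reads very differently as a percentage:
low_fps_gap = pct(2, 20)    # 2 fps at 20 vs 22 fps  -> 10%
high_fps_gap = pct(2, 100)  # 2 fps at 98 vs 100 fps -> 2%
```

Which is exactly why a bar chart with a truncated axis plus a percentage label can make a margin-of-error gap look like a generational leap.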

 

If anyone looks at the actual raw data, it's clear without a single doubt that there is no IPC boost from Kaby over Skylake. Intel themselves didn't even claim an IPC boost, and no other legitimate outlet claimed one either. Every test that has ever been done can easily be debunked by varying memory speeds and testing environments. The fact that people use different tests from different sources tells me they are only cherry-picking data that fits their narrative. 



TechSpot has a BIOS issue. PCGamer is doing ongoing testing with BIOS updates, and the results are vastly different. He's testing with a 7900X, but it's the same architecture as the 7800X. YouTubers don't know what they're talking about.

http://www.pcgamer.com/the-ongoing-testing-of-intels-x299-and-i9-7900x/

 

 

Spoiler: benchmark charts from the linked PCGamer article

 

 

 

 

CPU: Intel Core i7 7820X Cooling: Corsair Hydro Series H110i GTX Mobo: MSI X299 Gaming Pro Carbon AC RAM: Corsair Vengeance LPX DDR4 (3000MHz/16GB 2x8) SSD: 2x Samsung 850 Evo (250/250GB) + Samsung 850 Pro (512GB) GPU: NVidia GeForce GTX 1080 Ti FE (W/ EVGA Hybrid Kit) Case: Corsair Graphite Series 760T (Black) PSU: SeaSonic Platinum Series (860W) Monitor: Acer Predator XB241YU (165Hz / G-Sync) Fan Controller: NZXT Sentry Mix 2 Case Fans: Intake - 2x Noctua NF-A14 iPPC-3000 PWM / Radiator - 2x Noctua NF-A14 iPPC-3000 PWM / Rear Exhaust - 1x Noctua NF-F12 iPPC-3000 PWM


18 hours ago, Simon771 said:

 

 

And you can clearly see that the i7 7700K at stock is still better than the 7800X at 4.7 GHz. Now I might be wrong, but I think the i7 7700K at stock doesn't get to 4.7 GHz.

So much about your math xD

The 7800X is a different arch: it has more L2 cache and less L3. The larger L2 matters more for server workloads, which is what Skylake-X based CPUs are designed for, but in gaming you see a regression.

18 hours ago, Enderman said:

7700k is Kaby Lake, 7800X is still Skylake.

That's why it's slower.

Intel did not change the Skylake arch AT ALL when going to Kaby Lake. Skylake-X is a different story, but Skylake vs Kaby Lake is the same.

http://www.anandtech.com/show/10959/intel-launches-7th-generation-kaby-lake-i7-7700k-i5-7600k-i3-7350k/8

Quote

IPC: No Change in CPU Performance (link)

 

As was to be expected, especially judging on how Intel described the upgrade between Skylake and Kaby Lake, there is no IPC gain between the two for direct performance. In our testing, a 3.0 GHz Core i7-7700K Kaby Lake part performed identical to a 3.0 GHz Core i7-6700K Skylake processor (HT disabled). The only difference is really in the memory support, given that Skylake supports DDR4-2133 and Kaby Lake supports DDR4-2400, however this has a minor effect on almost all benchmarks.
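AnandTech's methodology boils down to a clock-normalization sanity check: if IPC is unchanged, a score should scale purely with frequency. A quick sketch of that check (function name and numbers are illustrative, not AnandTech's):

```python
def expected_score(base_score, base_ghz, new_ghz, ipc_gain=1.0):
    # With identical IPC (ipc_gain == 1.0), a CPU-bound benchmark score
    # should scale purely with clock frequency.
    return base_score * (new_ghz / base_ghz) * ipc_gain

# A hypothetical 6700K result at 4.0 GHz projected onto a 7700K at 4.5 GHz:
projected = expected_score(1000, 4.0, 4.5)
# If the measured 7700K score matches the projection, the entire gap
# is clock speed, and the claimed "IPC boost" disappears.
```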

 

Make sure to quote me or tag me when responding to me, or I might not know you replied! Examples:

 

Do this:

Quote

And make sure you do it by hitting the quote button at the bottom left of my post, and not the one inside the editor!

Or this:

@DocSwag

 

Buy whatever product is best for you, not what product is "best" for the market.

 

Interested in computer architecture? Still in middle or high school? P.M. me!

 

I love computer hardware and feel free to ask me anything about that (or phones). I especially like SSDs. But please do not ask me anything about Networking, programming, command line stuff, or any relatively hard software stuff. I know next to nothing about that.

 

Compooters:

Spoiler

Desktop:

Spoiler

CPU: i7 6700k, CPU Cooler: be quiet! Dark Rock Pro 3, Motherboard: MSI Z170a KRAIT GAMING, RAM: G.Skill Ripjaws 4 Series 4x4gb DDR4-2666 MHz, Storage: SanDisk SSD Plus 240gb + OCZ Vertex 180 480 GB + Western Digital Caviar Blue 1 TB 7200 RPM, Video Card: EVGA GTX 970 SSC, Case: Fractal Design Define S, Power Supply: Seasonic Focus+ Gold 650w Yay, Keyboard: Logitech G710+, Mouse: Logitech G502 Proteus Spectrum, Headphones: B&O H9i, Monitor: LG 29um67 (2560x1080 75hz freesync)

Home Server:

Spoiler

CPU: Pentium G4400, CPU Cooler: Stock, Motherboard: MSI h110l Pro Mini AC, RAM: Hyper X Fury DDR4 1x8gb 2133 MHz, Storage: PNY CS1311 120gb SSD + two Segate 4tb HDDs in RAID 1, Video Card: Does Intel Integrated Graphics count?, Case: Fractal Design Node 304, Power Supply: Seasonic 360w 80+ Gold, Keyboard+Mouse+Monitor: Does it matter?

Laptop (I use it for school):

Spoiler

Surface book 2 13" with an i7 8650u, 8gb RAM, 256 GB storage, and a GTX 1050

And if you're curious (or a stalker) I have a Just Black Pixel 2 XL 64gb

 


6 hours ago, MageTank said:

To further illustrate my point on cache speed, and its impact on the IMC (and latency): 

 

Stock 3200 C14 XMP, 4.8ghz cache (north bridge clock):

[AIDA64 latency screenshot]

 

Stock 3200 C14 XMP, 4.0ghz cache (north bridge clock):

[AIDA64 latency screenshot]

Now, I am not even going to sit here and pretend that I know why it's that dramatic a difference in latency. Someone more educated on the cache's relationship to the IMC will have to clarify that point, but I know that I, along with some of my enthusiast memory overclocking friends, have kept this a secret for quite some time now. That 800MHz difference was a 12ns difference in latency for some reason. Now, I will also say this: the higher you push your memory overclocks, the less dramatic an impact cache seems to have on latency. So diminishing returns do exist. For my 3600 C14 profile, I go from 39ns at 4.2GHz cache down to 36ns at 4.8GHz cache. Basically 1ns per 200MHz of cache. For this slower XMP speed, the difference seems to be far more dramatic.
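The "1ns per 200MHz" at the high end is what you'd expect if the uncore path costs a roughly fixed number of cache cycles: converting cycles to nanoseconds at each clock reproduces the quoted 39 ns to 36 ns swing. A sketch (the ~100-cycle figure is inferred from the numbers in the post, not measured):

```python
def cache_path_ns(cache_cycles, cache_ghz):
    # Wall-clock time spent traversing the uncore at a given cache clock.
    return cache_cycles / cache_ghz

# ~100 cycles in the uncore path reproduces the quoted ~3 ns swing
# between 4.2 GHz and 4.8 GHz cache (39 ns -> 36 ns total latency):
swing = cache_path_ns(100, 4.2) - cache_path_ns(100, 4.8)
```

It doesn't explain the larger 12 ns swing at the slower XMP profile, though; that suggests the cycle count itself isn't fixed there, which fits the "someone more educated will have to clarify" caveat.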

 

If it helps, I can redo the tests with my AIDA64 SPD page open in the background, along with ASRock Timing Configurator open, so that you can see I did not adjust tRFC, tREFI, or any tertiary timings to deflate this result. I take memory overclocking very seriously, and I do not benefit from deceiving with false information. If it helps, @done12many2 can chime in with his results after tinkering with cache speeds. He actually spent more time testing cache than I did, lol. 

Fight me IRL bro

 


 

Stuff:  i7 7700k @ (dat nibba succ) | ASRock Z170M OC Formula | G.Skill TridentZ 3600 c16 | EKWB 1080 @ 2100 mhz  |  Acer X34 Predator | R4 | EVGA 1000 P2 | 1080mm Radiator Custom Loop | HD800 + Audio-GD NFB-11 | 850 Evo 1TB | 840 Pro 256GB | 3TB WD Blue | 2TB Barracuda

Hwbot: http://hwbot.org/user/lays/ 

FireStrike 980 ti @ 1800 Mhz http://hwbot.org/submission/3183338 http://www.3dmark.com/3dm/11574089


5 hours ago, Taf the Ghost said:

On the actual topic, Steve really needs to add a Ryzen 5/7 comparison. Even by his testing from several months ago, it looks a lot like the 7800X is pretty much hitting the exact same barriers that Ryzen did, minus a few exceptions where that single core can be pushed harder. 

I have heard from Steve that an upcoming video will look into that. I don't know when that will materialise, but I will update my OP when it happens.

Western Sydney University - 4th year BCompSc student

