
Ryzen 2700X OCed to 4.3GHz (1.4V) across all cores, performance numbers included.

Master Disaster
4 minutes ago, hobobobo said:

how many cores do the 8300 or 8350K have?)

Somewhere between 3 and 5 I think :D 

PC Specs - AMD Ryzen 7 5800X3D - MSI B550M Mortar - 32GB Corsair Vengeance RGB DDR4-3600 @ CL16 - ASRock RX7800XT - 660p 1TB & Crucial P5 1TB - Fractal Define Mini C - CM V750v2 - Windows 11 Pro

 


5 minutes ago, Cookybiscuit said:

Intel CPUs don't have four cores anymore though :/

Increase the core count by two on both. Top-of-the-line consumer at AMD has 8 cores (with good thermal interface material) and Intel has 6. Wait, is the mid-range part 6-core/6-thread and not 12-thread?

 


2 minutes ago, GoldenLag said:

Increase the core count by two on both. Top-of-the-line consumer at AMD has 8 cores (with good thermal interface material) and Intel has 6. Wait, is the mid-range part 6-core/6-thread and not 12-thread?

 

The i5s are all 6C without HT. AMD doesn't have 6C/6T chips, only 4C/4T, 4C/8T, 6C/12T and 8C/16T.


 


2 minutes ago, asus killer said:

3.5 cores, 4.5 cores? Did they pull an Nvidia xD

Yes, 3.5 Coffee Lake cores and 0.5 486-DX2 :P 


 


2 minutes ago, NelizMastr said:

The i5s are all 6C without HT. AMD doesn't have 6C/6T chips, only 4C/4T, 4C/8T, 6C/12T and 8C/16T.

Something tells me those 6-core/6-thread parts are gonna remain in the i5 SKU for at least 3 years. I wonder how many cores per cluster Zen 2 will have, how many clusters they can fit at 7nm, and whether they'll have a dedicated cache cluster.


45 minutes ago, GoldenLag said:

They are okay for minor architectural changes. They still haven't fixed the clock issue, which seems to be linked to the architecture and not so much the node. We got exactly what was expected from the 14-12nm shrink.

The Clocks are the Node; the Cache Latency is the design.

 

14nm to 12nm looks to be about 10% for All-Core clock speed, which is pretty solid. (Depends on SKU a bit.) And it also looks like OC'ing will net you very little this generation, as there's still almost no OC headroom because of the Node.

 

The issues for gaming are the Cache Latency and Nvidia's driver team. Microsoft could help matters as well. (They already released one fairly major scheduler change last year for Ryzen.) This is actually why the 8400 & 8700k tend to be far closer to each other in gaming benchmarks than they should be. The big difference is how developers have learned to exploit Intel's "Core" architecture over the years.

 


5 minutes ago, Cookybiscuit said:

Intel CPUs don't have four cores anymore though :/

Mainstream only very recently, yes, and at a much higher cost; I could get a Ryzen system to outperform an Intel system at the same price in GalCiv 3. I don't think cost is actually a factor for me though, since I've spent double on water cooling what most spend on their entire gaming PC.

 

Not everyone has an essentially blank cheque to buy the best of every component, and I'd recommend spending the extra on going up a GPU model rather than chasing 7700K/8700K gaming CPU performance.

 

People say higher FPS but all I see is this:

[image]


For what it's worth, the i7 8700 being pretty much on par with the Ryzen 7 1700 in multi-threading while blowing it away in single-thread is impressive in its own right; the chip comes out ahead with 2C/4T fewer.

 

Things will get more interesting when Ice Lake finally arrives and Intel matches AMD on core and thread count at last. It's also gonna mean the end of HEDT in a way, since all those people who needed beefier CPUs but didn't need a bunch of PCIe lanes will finally rejoice.

 

I remember when Luke "upgraded" to the i7 6800K because he was lacking CPU horsepower; many others may have done the same, spending big just for a 6-core/12-thread CPU without really needing additional PCIe lanes or more than 64GB of RAM.

 

Nowadays, with the i7 8700 / 8700K and even the Ryzen 7 lineup out there, and in the near future with these Zen+ Ryzen 7s and Ice Lake 8C/16T mainstream CPUs, I can only see this benefiting a hell of a lot of people, regardless of who has the best offering...

 

I'm quite pleased the monopoly of overpriced HEDT chips is over, and that quad cores being the best possible mainstream purchase is officially over too.

Personal Desktop:

CPU: Intel Core i7 10700K @ 5GHz |~| Cooling: bq! Dark Rock Pro 4 |~| MOBO: Gigabyte Z490UD ATX |~| RAM: 16GB DDR4 3333MHz CL16 G.Skill Trident Z |~| GPU: RX 6900XT Sapphire Nitro+ |~| PSU: Corsair TX650M 80Plus Gold |~| Boot: SSD WD Green M.2 2280 240GB |~| Storage: 1x3TB HDD 7200rpm Seagate Barracuda + SanDisk Ultra 3D 1TB |~| Case: Fractal Design Meshify C Mini |~| Display: Toshiba UL7A 4K/60Hz |~| OS: Windows 10 Pro.

Luna, the temporary Desktop:

CPU: AMD R9 7950XT |~| Cooling: bq! Dark Rock 4 Pro |~| MOBO: Gigabyte Aorus Master |~| RAM: 32GB Kingston HyperX |~| GPU: AMD Radeon RX 7900XTX (Reference) |~| PSU: Corsair HX1000 80+ Platinum |~| Windows Boot Drive: 2x 512GB (1TB total) Plextor SATA SSD (RAID0 volume) |~| Linux Boot Drive: 500GB Kingston A2000 |~| Storage: 4TB WD Black HDD |~| Case: Cooler Master Silencio S600 |~| Display 1 (leftmost): Eizo (unknown model) 1920x1080 IPS @ 60Hz |~| Display 2 (center): BenQ ZOWIE XL2540 1920x1080 TN @ 240Hz |~| Display 3 (rightmost): Wacom Cintiq Pro 24 3840x2160 IPS @ 60Hz 10-bit |~| OS: Windows 10 Pro (games / art) + Linux (distro: NixOS; programming and daily driver)

1 minute ago, GoldenLag said:

Something tells me those 6-core/6-thread parts are gonna remain in the i5 SKU for at least 3 years. I wonder how many cores per cluster Zen 2 will have, how many clusters they can fit at 7nm, and whether they'll have a dedicated cache cluster.

I bet you that by this time next year, with the new Zen 2 CPUs, you'll be changing your mind 9_9



3 minutes ago, asus killer said:

I bet you that by this time next year, with the new Zen 2 CPUs, you'll be changing your mind 9_9

I probably will, but then again my mind is about as reliable as a certain country's politician not-legally getting bribed.


39 minutes ago, GoldenLag said:

Something tells me those 6-core/6-thread parts are gonna remain in the i5 SKU for at least 3 years. I wonder how many cores per cluster Zen 2 will have, how many clusters they can fit at 7nm, and whether they'll have a dedicated cache cluster.

I can see the i5 moving to 6C/12T if the i7 moves to 8C/16T with the 9700K on Z390.

 

Also, some rumors say Zen 2 will be 6C per CCX, making it 12C per die. The rumor is based on the next server CPU being 48C while keeping the same socket, which means it has to be a four-die design.
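The arithmetic behind that rumor checks out; a quick sketch (Python, with the 2-CCX-per-die figure assumed carried over from current Zen):

# Back-of-envelope core-count math for the Zen 2 rumor.
cores_per_ccx = 6     # rumored for Zen 2
ccx_per_die = 2       # assumption: unchanged from current Zen
dies_per_package = 4  # Epyc's existing 4-die MCM, same socket

print(cores_per_ccx * ccx_per_die)                     # 12 cores per die
print(cores_per_ccx * ccx_per_die * dies_per_package)  # 48 cores per package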

if you want to annoy me, then join my teamspeak server ts.benja.cc


I would be tempted to go 2xxx-series Ryzen, though I'm gonna kill my 1500X by overclocking before then. It's then going on my keychain.


15 minutes ago, Monarch said:

 

But I've just literally explained to you, in detail, why that's irrelevant. You may not play at 720p, but you will run into a CPU-intensive part of the game, in which case your framerate depends on the CPU, just like when you create a CPU-bound situation to represent a real one by lowering the resolution to 720p. Say the difference between an i5 and an i7 in a real-world CPU-bound situation is 30%; you should see about the same performance difference in a 720p CPU-bound scenario you created yourself so you don't have to go looking for a real one in the game.

Bottlenecks aren't linear within game engines. 720p is different from 1440p for where things will bottleneck. This is why the stuff isn't predictive, except for the game engine itself. This is why, in some games, they've found a 30% uplift on Ryzen by tuning the Memory faster and faster (from a 2133 base to >3200 with tight sub-timings). Once you get above 100 FPS, down into those 10ms frame times, very subtle aspects of the way a game engine is built start to crop up.

 

This is why the real difference between AMD & Intel is in the optimization. Developers just know how to get more out of the Core Architecture after a very long time of practice. This is also why the i5-8400 and i7-8700k tend to be very close to each other in a lot of game benchmarks, yet really far apart in a few. It's also why X299 parts have some real issues in certain games, as the stock Mesh latency is higher than stock Ryzen latency.


2 minutes ago, GoldenLag said:

I probably will, but then again my mind is about as reliable as a certain country's politician not-legally getting bribed.

In less than 12 months AMD closed the gap in a way no one could even imagine. With the new Zen 2 closing it even more and offering more cores for less money, Intel has just one road to take: offering more cores, because I don't see them being able to make huge leaps in core clocks. And of course the whole "a chipset a year makes Intel rich" thing, no overclocking on B-series mobos, locking some CPUs out of overclocking, etc., doesn't help much.

 

And I guess this road that AMD started will eventually see the "single-core clock" argument lose all relevance; most software (at least games) will start to use more and more cores.



1 minute ago, The Benjamins said:

I can see the i5 moving to 6C/12T if the i7 moves to 8C/16T with the 9700K on Z390.

 

Also, some rumors say Zen 2 will be 6C per CCX, making it 12C per die. The rumor is based on the next server CPU being 48C while keeping the same socket, which means it has to be a four-die design.

Some have speculated that the server and Threadripper parts will have a central die for the cache, to leave more room for other components. Though that is pure speculation and might not function properly with the current iteration of Zen.


2 hours ago, hobobobo said:

There is even some difference between 200fps and 1000fps, but only for pro players. As far as I remember, CS registers shots by frames, so having the shot registered on the 100th frame (0.5 sec) vs the 491st (0.491 sec) makes or breaks some kills.

That doesn't make much sense. Rendering is based on game data. Something happens in the game, it triggers changes in what must be drawn, then it gets drawn based on the game data and your settings, then you get a picture. In principle, it should be possible to store only the game data, then re-render it as many times as necessary with as many different settings as necessary. You would get different quality and FPS in each case, yet it would have zero impact on what happens.

I mean, that's basically what a cut-scene is (except for the old-style pre-rendered videos at lousy quality and framerates, looking at you, Metro; but the game-engine-based cut-scenes, like, I don't know, the shot cams in X-COM, for example).

What you are saying implies that rendering precedes simulation, which is putting the cart before the horse. There may be something else going on, though.
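To make the cart-and-horse point concrete, here is roughly the loop structure I mean, as a toy fixed-timestep sketch in Python (the names and tick rate are made up for illustration, not any real engine's API):

import time

SIM_DT = 1.0 / 64.0  # fixed simulation tick, e.g. a 64-tick server

def simulate(state, dt):
    # Game logic: advances purely from inputs + previous state.
    return {"t": state["t"] + dt}

def render(state):
    # Drawing: reads the game state, never changes it.
    print(f"drawing world at sim time {state['t']:.3f}s")

def game_loop(duration_s=0.05):
    state = {"t": 0.0}
    accumulator, prev = 0.0, time.perf_counter()
    while state["t"] < duration_s:
        now = time.perf_counter()
        accumulator += now - prev
        prev = now
        while accumulator >= SIM_DT:   # simulate first, in fixed steps...
            state = simulate(state, SIM_DT)
            accumulator -= SIM_DT
        render(state)                  # ...then draw whatever state exists

game_loop()

Note that render() only ever consumes the state, it never produces it, so how many frames per second you draw can't change the outcome.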

 


15 minutes ago, GoldenLag said:

I would be tempted to go 2xxx-series Ryzen, though I'm gonna kill my 1500X by overclocking before then. It's then going on my keychain.

 

Quoted because it's awesome.

 

 

One thing to note about the computer industry: it's usually not the big guys that innovate. It's the small guys, because they have to.

 

Ex: Microsoft talks all day about innovation, it's practically their tagline, but for the life of me I can't think of a single thing they have ever done that didn't involve buying some other company or stealing the idea from something else.

 

Is Intel innovative? Sure, somewhat, but for the amount of money they spend on R&D you'd hope they had some results. On the flip side, AMD had to get this right; they don't have an endless R&D budget. There was no other option for them. Did they? Well, mostly. They got close enough to make Intel take notice, and that is a resounding victory for them, I'm sure. How? Well, they kind of cheated. But in design, if you cheat and fail, that's trash; if you cheat and it works, is it then elegance instead?

"Only proprietary software vendors want proprietary software." - Dexter's Law


6 minutes ago, jde3 said:

 

Quoted because it's awesome.

 

I would show a picture of my other dead CPU hanging on my keychain, however it's an old CPU. I figured that keychaining them would be more fun than selling them.

 

Edit: I also keychained some RAM components that were mysteriously cut in half.


25 minutes ago, Monarch said:

 

But I've just literally explained to you, in detail, why that's irrelevant. You may not play at 720p, but you will run into a CPU-intensive part of the game, in which case your framerate depends on the CPU, just like when you create a CPU-bound situation to represent a real one by lowering the resolution to 720p. Say the difference between an i5 and an i7 in a real-world CPU-bound situation is 30%; you should see about the same performance difference in a 720p CPU-bound scenario you created yourself so you don't have to go looking for a real one in the game.

It's not irrelevant; representing either the top or bottom 5% of performance as real-world is disingenuous.

 

These scenarios you're talking about are likely to happen so infrequently that, as I've already said twice now, they're not representative of real-world performance.

 

It's more honest, and more useful, to show how a product will perform in the majority of situations rather than how it can perform in one very specific situation.

 

That said, I do understand the need to show best-case scenarios. I'm certainly not saying they shouldn't exist, but again, they should not be used to measure how a product will perform all the time. That's simply not true.

Main Rig:-

Ryzen 7 3800X | Asus ROG Strix X570-F Gaming | 16GB Team Group Dark Pro 3600MHz | Corsair MP600 1TB PCIe Gen 4 | Sapphire 5700 XT Pulse | Corsair H115i Platinum | WD Black 1TB | WD Green 4TB | EVGA SuperNOVA G3 650W | Asus TUF GT501 | Samsung C27HG70 1440p 144Hz HDR FreeSync 2 | Ubuntu 20.04.2 LTS |

 

Server:-

Intel NUC running Server 2019 + Synology DSM218+ with 2 x 4TB Toshiba NAS Ready HDDs (RAID0)


4 minutes ago, SpaceGhostC2C said:

That doesn't make much sense. Rendering is based on game data. Something happens in the game, it triggers changes in what must be drawn, then it gets drawn based on the game data and your settings, then you get a picture. In principle, it should be possible to store only the game data, then re-render it as many times as necessary with as many different settings as necessary. You would get different quality and FPS in each case, yet it would have zero impact on what happens.

I mean, that's basically what a cut-scene is (except for the old-style pre-rendered videos at lousy quality and framerates, looking at you, Metro; but the game-engine-based cut-scenes, like, I don't know, the shot cams in X-COM, for example).

What you are saying implies that rendering precedes simulation, which is putting the cart before the horse. There may be something else going on, though.

 

I'm by no means competent on the internal workings of game engines, but it was explained to me that Source 2 renders as many frames as it can and drops the excess ones in between, sending on only the relevant ones. Input is registered by the engine on the frame on which it reaches the engine, regardless of whether that frame is displayed or dropped.
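If that's accurate, the effect is just quantization: the engine only sees your input on the next frame it produces, so the delay is at most one frame time. A toy Python sketch (my own illustration, not Source 2's actual code):

import math

def registered_at(input_time_s, fps):
    # Input lands on the next frame boundary after it arrives.
    frame_time = 1.0 / fps
    return math.ceil(input_time_s / frame_time) * frame_time

click = 0.4903  # seconds into the round
for fps in (200, 1000):
    t = registered_at(click, fps)
    print(f"{fps} FPS: shot registers at {t:.4f}s (+{(t - click) * 1000:.2f}ms)")

# 200 FPS: shot registers at 0.4950s (+4.70ms)
# 1000 FPS: shot registers at 0.4910s (+0.70ms)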


1 minute ago, GoldenLag said:

Some have speculated that the server and Threadripper parts will have a central die for the cache, to leave more room for other components. Though that is pure speculation and might not function properly with the current iteration of Zen.

The offloaded "central hub" die makes some sense, actually, but I still don't think we're seeing it at the production level for at least a few generations. I get the feeling it came from a design study AMD did and got rumor-milled into being slated for production.

 

Most likely scenario is still a fork in the design: "Zen 2" cores end up in an 8c part and a 16c part. The 8c design, on 7nm, probably ends up around ~140-150 sq mm, with the 16c at ~230-250 sq mm. Epyc 2 ends up with SKUs built on either the Big or the Little die. Little dies would clock higher, while Big dies would offer up to 64c per SKU.


2 hours ago, hobobobo said:

There is even some difference between 200fps and 1000fps, but only for pro players. As far as I remember, CS registers shots by frames, so having the shot registered on the 100th frame (0.5 sec) vs the 491st (0.491 sec) makes or breaks some kills.

Don't pros play on the same hardware at LAN tournaments?

 

200FPS is a 5ms frame time, so 200 vs 1000 is only useful at LAN events. Even at 120FPS the frame time is just over 8ms; I can't see how saving 3-4ms is useful outside of LAN play.
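The math, for anyone who wants to check (frame time in milliseconds is just 1000 / FPS):

def frame_time_ms(fps):
    return 1000.0 / fps

for fps in (120, 200, 1000):
    print(f"{fps:>4} FPS -> {frame_time_ms(fps):.2f} ms/frame")

#  120 FPS -> 8.33 ms/frame
#  200 FPS -> 5.00 ms/frame
# 1000 FPS -> 1.00 ms/frame

So going from 200 FPS to 1000 FPS shaves at most 4ms off a frame.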



55 minutes ago, Dylanc1500 said:

Better be overclocking.

#ARDUINOLIVESMATTER.

Anything is possible 

GNU/Linux Running On An 8-Bit Processor

AMD Ryzen R7 1700 (3.8GHz) w/ NH-D14, EVGA RTX 2080 XC (stock), 4*4GB DDR4 3000MT/s RAM, Gigabyte AB350-Gaming-3 MB, CX750M PSU, 1.5TB SSD + 7TB HDD, Phanteks Enthoo Pro case


Sometimes borderline flame wars bring forth the best discussions. Good thing we were given matches and not torches.

