
Ryzen 2700X OCed to 4.3 GHz (1.4 V) across all cores, performance numbers included.

Master Disaster
2 minutes ago, Cookybiscuit said:

If multi-threading in games were that good, Ryzen would easily outperform Intel, but it doesn't yet. There's still a ways to go before moar cores > fast cores.

PissMark isn't an indicator of anything, it's a useless test.

The same can be said for your chart, made with really outdated prices.


Just now, Bananasplit_00 said:

Oh wow, missed this cheeky edit, nice way to get out of screwing up... Anyway, I was just saying that I didn't remember Ryzen being 30% behind; I used your provided benchmark and calculated that you're claiming almost double what the actual numbers are 9_9 To answer your question here: because it's cheaper. The 1600 + mobo is cheaper than the i5 7400 + mobo was, or about the same, so it was generally recommended, and the 1700 is great value if you do something that can use the cores and game on top of that; otherwise the Intel options can be better.

The 7400 is irrelevant to the discussion when the 8400 exists and trounces the 1600.


25 minutes ago, jde3 said:

When did I say better?

 

It's got a novel design, it's good for stuff... it's darn good for my use case. But better? What is that metric? Best ever? No... what are we doing? x86 isn't even the best architecture.

Someone should develop Doom for Power; I'll run it on POWER9 for giggles. Then we will have a comparison of architectures (as far as ONE individual game is concerned).


1 minute ago, GoldenLag said:

They are okay for minor architectural changes. They still haven't fixed the clock issue, which seems to be linked to the architecture and not so much the node. We got exactly what was expected from the 14 to 12 nm shrink.

Pretty sure it's a node issue; this is still a refinement of a low-power node.


3 minutes ago, Dylanc1500 said:

Someone should develop Doom for Power; I'll run it on POWER9 for giggles. Then we will have a comparison of architectures (as far as ONE individual game is concerned).

The original Doom ran on IRIX and SGI systems.

"Only proprietary software vendors want proprietary software." - Dexter's Law


31 minutes ago, hobobobo said:

I'm upset since it's unrepresentative. The 720p benchmark mainly highlights the latency difference, where Ryzen is behind 20-30%. Show me someone gaming on a 720p monitor with an 8700K and a Vega 64 and I'll know what an oblivious idiot looks like.

 

Benchmarking at 720p is done to ensure a CPU-bound scenario at all times throughout the testing, regardless of what's going on in a game. Wasting hours to find a real CPU-intensive part of the game makes no sense, and even if you do find it, you can't rerun the same test with different CPUs in most games due to a lot of variables.
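To illustrate the reasoning (with made-up numbers, not real benchmark data), a toy frame-time model shows why the GPU hides the CPU gap at high resolutions and exposes it at 720p:

```python
# Toy model: each frame costs whichever of the CPU or GPU takes longer.
# All numbers are hypothetical, purely to illustrate the argument above.

def fps(cpu_ms: float, gpu_ms: float) -> float:
    """Effective FPS when the slower of CPU and GPU sets the frame time."""
    return 1000.0 / max(cpu_ms, gpu_ms)

cpu_fast_ms = 6.0    # assumed faster CPU: 6 ms of game logic per frame
cpu_slow_ms = 8.0    # assumed slower CPU: 8 ms per frame
gpu_1440p_ms = 12.0  # assumed GPU frame time at 1440p (GPU-limited)
gpu_720p_ms = 4.0    # assumed GPU frame time at 720p (GPU barely loaded)

# At 1440p both CPUs hit the same GPU wall, so they look identical (~83 FPS each).
print(fps(cpu_fast_ms, gpu_1440p_ms), fps(cpu_slow_ms, gpu_1440p_ms))

# At 720p the GPU is out of the way, so the 6 ms vs 8 ms CPU gap shows up directly
# (~167 FPS vs ~125 FPS, the same ~33% gap you'd see in any real CPU-bound spot).
print(fps(cpu_fast_ms, gpu_720p_ms), fps(cpu_slow_ms, gpu_720p_ms))
```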

i7 9700K @ 5 GHz, ASUS DUAL RTX 3070 (OC), Gigabyte Z390 Gaming SLI, 2x8 HyperX Predator 3200 MHz


1 minute ago, jde3 said:

The original Doom ran on IRIX and SGI systems.

That is very true. However, I don't think anyone here really cares about how well the original Doom runs nowadays. Then again, there could be some outliers who might care.


7 minutes ago, Cookybiscuit said:

PissMark isn't an indicator of anything, it's a useless test.

Well, the 720p benchmark got thrown out of the window then. If synthetics can't be used for comparing CPUs, why should the 720p benchmark count? They are both supposed to utilize the CPU to its maximum.


1 minute ago, Dylanc1500 said:

That is very true. However, I don't think anyone here really cares about how well the original Doom runs nowadays. Then again, there could be some outliers who might care.

#DOOMONARDUINO

AMD Ryzen R7 1700 (3.8ghz) w/ NH-D14, EVGA RTX 2080 XC (stock), 4*4GB DDR4 3000MT/s RAM, Gigabyte AB350-Gaming-3 MB, CX750M PSU, 1.5TB SDD + 7TB HDD, Phanteks enthoo pro case


5 minutes ago, Monarch said:

 

Benchmarking at 720p is done to ensure a CPU-bound scenario at all times throughout the testing, regardless of what's going on in a game. Wasting hours to find a real CPU-intensive part of the game makes no sense, and even if you do find it, you can't rerun the same test with different CPUs in most games due to a lot of variables.

We understand why it's done; the point is it's not representative of real-world performance and is therefore meaningless.

 

Just like when VW say their latest diesel can do 90 mpg: sure, in a lab on a rolling road with no variables at all, but get it out on the road and you're doing well to hit 60. Meaningless.

Main Rig:-

Ryzen 7 3800X | Asus ROG Strix X570-F Gaming | 16GB Team Group Dark Pro 3600Mhz | Corsair MP600 1TB PCIe Gen 4 | Sapphire 5700 XT Pulse | Corsair H115i Platinum | WD Black 1TB | WD Green 4TB | EVGA SuperNOVA G3 650W | Asus TUF GT501 | Samsung C27HG70 1440p 144hz HDR FreeSync 2 | Ubuntu 20.04.2 LTS |

 

Server:-

Intel NUC running Server 2019 + Synology DSM218+ with 2 x 4TB Toshiba NAS Ready HDDs (RAID0)


4 minutes ago, Cookybiscuit said:

If multi-threading in games were that good, Ryzen would easily outperform Intel, but it doesn't yet. There's still a ways to go before moar cores > fast cores.

And if the difference actually had anything to do with basic core performance, and not 10+ years of optimization specifically for Intel architectures, then yes, that would be true. If you look at games that actually claim to be well optimized for multiple cores, there is next to no difference.

 

As for my GalCiv 3 example: absolutely, yes, a Ryzen 1600 or better would crush every 4-core Intel CPU in that game. If you play a lot of RTS and TBS games like I do, you'll know that once you hit the late game, turn times and general game interaction get SUPER bad, and only the most well-optimized, multi-core-friendly games pass muster here; the rest go right into the "click next, then walk away and do something else" play style, which is why Civ6 disappointed me so much.

 

It's like everyone expects a chef to get a Michelin star on the first try, a dish doesn't need one to still be a great meal.
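Purely as an illustration (this is not GalCiv 3's actual code), the kind of per-faction AI work a multi-core-friendly 4X engine can spread across cores looks roughly like the sketch below, which is why late-game turn times scale with core count:

```python
# Hypothetical sketch: each faction's AI turn is independent CPU-bound work,
# so spreading it over processes/cores shortens the late-game wait between turns.
import time
from concurrent.futures import ProcessPoolExecutor

def ai_turn(faction_id: int) -> int:
    """Stand-in for one faction's AI: arbitrary CPU-bound busywork."""
    total = 0
    for i in range(2_000_000):
        total += (i * faction_id) % 97
    return total

def run_turn(factions: int, workers: int) -> float:
    """Process every faction's turn and return the wall-clock time taken."""
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        list(pool.map(ai_turn, range(1, factions + 1)))
    return time.perf_counter() - start

if __name__ == "__main__":
    # Late game: lots of factions, each needing AI work before your next turn.
    print(f"1 worker : {run_turn(factions=12, workers=1):.2f} s")
    print(f"6 workers: {run_turn(factions=12, workers=6):.2f} s")
```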


Intel is a really spooky company, btw. You all of course remember Palladium in the Itanium chips... thank god we dodged that one, probably due to the Athlon 64 being so good. This was real, kiddos. https://www.theregister.co.uk/2002/06/25/why_intel_loves_palladium/

 

Wintel wanted to make it so the only thing that could run on your CPU was software approved by them.

 

This is the shit we were dealing with when Microsoft and Intel had no opposition.

"Only proprietary software vendors want proprietary software." - Dexter's Law


Thank god AMD is here. Even though Intel got to 6 cores, it would have been a decade of stagnation and 10nm+++++ before 8 cores became mainstream.


1 minute ago, Coaxialgamer said:

#DOOMONARDUINO

Better be overclocking.

#ARDUINOLIVESMATTER.


3 hours ago, leadeater said:

Call me when I care about CS:GO and when monitors can do over 240Hz ;)

As someone who has played Counter-Strike since 1.5, I vouch for this video ;)

Personal Desktop:

CPU: Intel Core i7 10700K @5ghz |~| Cooling: bq! Dark Rock Pro 4 |~| MOBO: Gigabyte Z490UD ATX|~| RAM: 16gb DDR4 3333mhzCL16 G.Skill Trident Z |~| GPU: RX 6900XT Sapphire Nitro+ |~| PSU: Corsair TX650M 80Plus Gold |~| Boot:  SSD WD Green M.2 2280 240GB |~| Storage: 1x3TB HDD 7200rpm Seagate Barracuda + SanDisk Ultra 3D 1TB |~| Case: Fractal Design Meshify C Mini |~| Display: Toshiba UL7A 4K/60hz |~| OS: Windows 10 Pro.

Luna, the temporary Desktop:

CPU: AMD R9 7950XT  |~| Cooling: bq! Dark Rock 4 Pro |~| MOBO: Gigabyte Aorus Master |~| RAM: 32G Kingston HyperX |~| GPU: AMD Radeon RX 7900XTX (Reference) |~| PSU: Corsair HX1000 80+ Platinum |~| Windows Boot Drive: 2x 512GB (1TB total) Plextor SATA SSD (RAID0 volume) |~| Linux Boot Drive: 500GB Kingston A2000 |~| Storage: 4TB WD Black HDD |~| Case: Cooler Master Silencio S600 |~| Display 1 (leftmost): Eizo (unknown model) 1920x1080 IPS @ 60Hz|~| Display 2 (center): BenQ ZOWIE XL2540 1920x1080 TN @ 240Hz |~| Display 3 (rightmost): Wacom Cintiq Pro 24 3840x2160 IPS @ 60Hz 10-bit |~| OS: Windows 10 Pro (games / art) + Linux (distro: NixOS; programming and daily driver)

3 hours ago, leadeater said:

Call me when I care about CS:GO and when monitors can do over 240Hz ;)

 

I won't be expecting a call though since I don't care about CS:GO and never will lol.

From someone who cares about this game: it saturates around 144-165 Hz anyway, and 240 Hz is mostly placebo with this game.

(And it's just so badly optimized sometimes that it shouldn't be used in any arguments :P )


Just now, Master Disaster said:

We understand why it's done; the point is it's not representative of real-world performance and is therefore meaningless.

How is it not representative of real-world performance? Instead of finding a CPU-intensive part of the game and testing CPUs there, you create a CPU-intensive situation yourself. Of course, you are increasing the amount of FPS by doing this, which isn't representative of the actual FPS you'd get in a genuinely CPU-heavy part of the game, but the performance difference between the CPUs is what matters.

 

 

 

i7 9700K @ 5 GHz, ASUS DUAL RTX 3070 (OC), Gigabyte Z390 Gaming SLI, 2x8 HyperX Predator 3200 MHz


2 minutes ago, Monarch said:

How is it not representative of real-world performance? Instead of finding a CPU-intensive part of the game and testing CPUs there, you create a CPU-intensive situation yourself. Of course, you are increasing the amount of FPS by doing this, which isn't representative of the actual FPS you'd get in a genuinely CPU-heavy part of the game, but the performance difference between the CPUs is what matters.

 

 

 

Because out in the real world no one actually plays at 720p.

 

The resulting data presents the best possible outcome and not necessarily the real world outcome.

Main Rig:-

Ryzen 7 3800X | Asus ROG Strix X570-F Gaming | 16GB Team Group Dark Pro 3600Mhz | Corsair MP600 1TB PCIe Gen 4 | Sapphire 5700 XT Pulse | Corsair H115i Platinum | WD Black 1TB | WD Green 4TB | EVGA SuperNOVA G3 650W | Asus TUF GT501 | Samsung C27HG70 1440p 144hz HDR FreeSync 2 | Ubuntu 20.04.2 LTS |

 

Server:-

Intel NUC running Server 2019 + Synology DSM218+ with 2 x 4TB Toshiba NAS Ready HDDs (RAID0)


22 minutes ago, cj09beira said:

Pretty sure it's a node issue; this is still a refinement of a low-power node.

It would be nice to test CPUs on different nodes: how would Intel fare on AMD's nodes, and how would AMD fare on Intel's?


2 minutes ago, Master Disaster said:

Because out in the real world no one actually plays at 720p.

 

The resulting data presents the best possible outcome and not necessarily the real world outcome.

OK, so I don't have a horse in the race here, but let's just take a step back. What this does is test what happens in CPU-bound gaming loads, which is not the same sort of thing as CPU-bound Cinebench and the like.

 

Thing is, right now it seems ludicrous. No one plays at 720p. OK. So what happens when you take a 1080 Ti, or a hypothetical 1280 Ti years down the road, and play at 1080p? More or less the exact same thing: highly CPU-bound. And yeah, I know idiots who bought 1080 Tis to run at 1080p. They do exist. But even then, suppose you really don't think anyone will run at less than 1440p... at the current rate of improvement, how long before current games at 1440p are CPU-bottlenecked by GPU advancements?

 

Hell, I can show you CPU bottlenecking in GTA V at 4K, moving from a 4790K at 4.7 GHz to a 5930K at the same speed, just from two 980 Tis...
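Taking the "how long before 1440p is CPU-bound" question above as a back-of-the-envelope calculation (every number assumed purely for illustration):

```python
# Back-of-the-envelope only: every figure below is an assumption, not a measurement.
import math

cpu_fps_cap = 140.0     # assumed max FPS a current CPU can feed in some game
gpu_fps_1440p = 90.0    # assumed FPS a current GPU manages at 1440p in that game
yearly_gpu_gain = 0.30  # assumed ~30% GPU performance improvement per year

# Years until the GPU outruns the CPU at 1440p, i.e. the game becomes CPU-bound:
years = math.log(cpu_fps_cap / gpu_fps_1440p) / math.log(1.0 + yearly_gpu_gain)
print(f"~{years:.1f} years until this CPU is the 1440p bottleneck")  # ~1.7 years
```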

LINK-> Kurald Galain:  The Night Eternal 

Top 5820k, 980ti SLI Build in the World*

CPU: i7-5820k // GPU: SLI MSI 980ti Gaming 6G // Cooling: Full Custom WC //  Mobo: ASUS X99 Sabertooth // Ram: 32GB Crucial Ballistic Sport // Boot SSD: Samsung 850 EVO 500GB

Mass SSD: Crucial M500 960GB  // PSU: EVGA Supernova 850G2 // Case: Fractal Design Define S Windowed // OS: Windows 10 // Mouse: Razer Naga Chroma // Keyboard: Corsair k70 Cherry MX Reds

Headset: Senn RS185 // Monitor: ASUS PG348Q // Devices: Note 10+ - Surface Book 2 15"

LINK-> Ainulindale: Music of the Ainur 

Prosumer DIY FreeNAS

CPU: Xeon E3-1231v3  // Cooling: Noctua L9x65 //  Mobo: AsRock E3C224D2I // Ram: 16GB Kingston ECC DDR3-1333

HDDs: 4x HGST Deskstar NAS 3TB  // PSU: EVGA 650GQ // Case: Fractal Design Node 304 // OS: FreeNAS

 

 

 


5 minutes ago, Princess Cadence said:

As someone who has played Counter-Strike since 1.5, I vouch for this video ;)

2 Jokes:

Sorry I work for Ubisoft

Counter-strike..... IGNORED

 

xD

 

On topic, yeah, it's not like I disagree with that; however, a lot of that smoothness comes from frame-rate lows being higher rather than specifically the average being higher. As a gamer who thinks games that look anything like Minecraft should burn in the deepest fires of hell, anything above 120 FPS is wasted potential.

 

When I was playing CoD4 my target was 100, which meant turning a lot of stuff down and going lower res on an X800. Still my most fun PC gaming years.


22 minutes ago, leadeater said:

And if the difference actually had anything to do with basic core performance, and not 10+ years of optimization specifically for Intel architectures, then yes, that would be true. If you look at games that actually claim to be well optimized for multiple cores, there is next to no difference.

 

As for my GalCiv 3 example: absolutely, yes, a Ryzen 1600 or better would crush every 4-core Intel CPU in that game. If you play a lot of RTS and TBS games like I do, you'll know that once you hit the late game, turn times and general game interaction get SUPER bad, and only the most well-optimized, multi-core-friendly games pass muster here; the rest go right into the "click next, then walk away and do something else" play style, which is why Civ6 disappointed me so much.

 

It's like everyone expects a chef to get a Michelin star on the first try, a dish doesn't need one to still be a great meal.

Intel CPUs don't have four cores anymore though :/


2 hours ago, Cookybiscuit said:

It does the job; you're talking about a seven-year-old product still being relevant, and in the tech industry that's an extremely long lifespan.

If you said that about a GPU, sure, but CPUs, not so much... CPUs tend to outlast GPUs in usefulness by a long shot.

CPU: Intel i7 7700K | GPU: ROG Strix GTX 1080Ti | PSU: Seasonic X-1250 (faulty) | Memory: Corsair Vengeance RGB 3200Mhz 16GB | OS Drive: Western Digital Black NVMe 250GB | Game Drive(s): Samsung 970 Evo 500GB, Hitachi 7K3000 3TB 3.5" | Motherboard: Gigabyte Z270x Gaming 7 | Case: Fractal Design Define S (No Window and modded front Panel) | Monitor(s): Dell S2716DG G-Sync 144Hz, Acer R240HY 60Hz (Dead) | Keyboard: G.SKILL RIPJAWS KM780R MX | Mouse: Steelseries Sensei 310 (Striked out parts are sold or dead, awaiting zen2 parts)


25 minutes ago, Master Disaster said:

Because out in the real world no one actually plays at 720p.

 

But I've just literally explained to you, in detail, why that's irrelevant. You may not play at 720p, but you will run into a CPU-intensive part of the game, in which case your framerate depends on the CPU, just like when you create a CPU-bound situation to represent a real one by lowering the res to 720p. Say the difference between an i5 and an i7 in a real-world CPU-bound situation is 30%; you should see about the same performance difference in a 720p CPU-bound scenario you created yourself, so you don't have to look for a real one in the game.

i7 9700K @ 5 GHz, ASUS DUAL RTX 3070 (OC), Gigabyte Z390 Gaming SLI, 2x8 HyperX Predator 3200 MHz

Link to comment
Share on other sites

Link to post
Share on other sites

7 minutes ago, Cookybiscuit said:

Intel CPUs don't have four cores anymore though :/

How many cores do the 8300 or the 8350K have? ;)

