
AMD Ryzen benchmarks against 7700K and 6900K

11 hours ago, LAwLz said:

I have a 6800K system ordered right now, but because of a shortage of parts it is expected to ship in the middle of January.

That sounds like a pre-built? If so, which one, if you don't mind me asking?

The ability to google properly is a skill of its own. 


3 hours ago, Prysin said:

From my testing with a 4790K and an FX 8320, Cinebench doesn't get any notable boost going from 1600MHz dual channel to 2400MHz dual channel. Nor does it see any notable benefit between 2133 CL9 and 2133 CL11.

Same with my tests, going from 2133 C15 to 3200 C14. 

On 12/11/2016 at 0:27 AM, MageTank said:

The gauntlet is complete! I'll let you guys extrapolate the results, as I am too tired to do so myself. I'll go over whatever you guys conclude in the morning. Without further ado, the results:

 

Stock i7 6700K HT Off 2133MHz RAM: http://imgur.com/a/YeY9f  http://valid.x86.fr/3a9v8e

Stock i7 6700K HT Off 3200MHz RAM: http://imgur.com/a/O52eP   http://valid.x86.fr/fh77ve

Stock i7 6700K HT On 2133MHz RAM: http://imgur.com/a/jSnIm  http://valid.x86.fr/li4nvu

Stock i7 6700K HT On 3200MHz RAM: http://imgur.com/a/5jIuH  http://valid.x86.fr/kb5895

 

If there are any additional benchmarks you guys need me to do, let me know. I can't offer much experience when it comes to CPUs themselves, but anything memory related seems to come easily to me, so feel free to exploit that if applicable. 

I noticed no perceivable performance boost from using faster RAM in Cinebench. 
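For context on why the faster kit barely moves the score, here is a minimal sketch of the standard DDR first-word-latency arithmetic (CAS cycles divided by the I/O clock). The kit numbers are just the ones tested above; the point is that the latency gap is real but small, and a largely compute-bound renderer like Cinebench rarely waits on either value.

#include <cstdio>

int main() {
    // First-word latency in nanoseconds: CL * 2000 / data_rate_in_MT_s
    // (DDR performs two transfers per I/O clock).
    struct Kit { const char* name; double data_rate_mts; double cas_cycles; };
    const Kit kits[] = {
        {"DDR4-2133 CL15", 2133.0, 15.0},
        {"DDR4-3200 CL14", 3200.0, 14.0},
    };
    for (const Kit& k : kits) {
        double latency_ns = k.cas_cycles * 2000.0 / k.data_rate_mts;
        std::printf("%s -> ~%.2f ns first-word latency\n", k.name, latency_ns);
    }
    return 0;
}

Compiled with any C++ compiler, this prints roughly 14.1 ns for DDR4-2133 CL15 and 8.75 ns for DDR4-3200 CL14, a difference that only shows up in workloads that actually miss cache often.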

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


On 20/12/2016 at 0:51 AM, themaniac said:

but it's demolished by the CPU they say it's supposed to compete against, the i7 6900K

It says P3 1.0GHz in the picture:

 

[Image: Intel Pentium III (P3) 1000MHz CPU, SL4MF]

 

Not sure if we can trust this

Don't ask to ask, just ask... please 🤨

sudo chmod -R 000 /*


I'm not really convinced these benchmarks are reliable. I would like to see reviews from the YouTube community; that's where I usually go to see real-world performance in games. I prefer games as a benchmark because they're easier for me to understand, since a game is an actual product, whereas IDK what these tests are designed to accomplish.

 

Are these test results a "higher number is better" scenario? If so, why is the newer i7 7700K scoring lower than the older i7 processors?

 

Could someone help explain to me what these numbers mean?

 

As for red vs. blue, at the pace CPU hardware is evolving the gains seem very marginal at this point, and it doesn't look to me like you can really go wrong with either brand now that there's some competition at 14nm. As I've mentioned before on this forum, I'm really interested in how AMD's integrated graphics are evolving, to see what kind of awesome compact HTPC form factor I can work with.

 


On 12/20/2016 at 7:51 AM, themaniac said:

but it's demolished by the CPU they say it's supposed to compete against, the i7 6900K

CPU benchmarks are never accurate.

Judge a product on its own merits AND the company that made it.

How to setup MSI Afterburner OSD | How to make your AMD Radeon GPU more efficient with Radeon Chill | (Probably) Why LMG Merch shipping to the EU is expensive

Oneplus 6 (Early 2023 to present) | HP Envy 15" x360 R7 5700U (Mid 2021 to present) | Steam Deck (Late 2022 to present)

 

Mid 2023 AlTech Desktop Refresh - AMD R7 5800X (Mid 2023), XFX Radeon RX 6700XT MBA (Mid 2021), MSI X370 Gaming Pro Carbon (Early 2018), 32GB DDR4-3200 (16GB x2) (Mid 2022)

Noctua NH-D15 (Early 2021), Corsair MP510 1.92TB NVMe SSD (Mid 2020), beQuiet Pure Wings 2 140mm x2 & 120mm x1 (Mid 2023),


41 minutes ago, AluminiumTech said:

CPU benchmarks are never accurate.

??? How so? You're seriously going to tell me Linpack, SAP, Chess, and Cinebench aren't accurate? I think what you mean to say is not all CPU benchmarks reflect what consumers will experience, and that has more to do with consumers and consumer software than it does with actual benchmark accuracy.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


4 minutes ago, patrickjp93 said:

??? How so? You're seriously going to tell me Linpack, SAP, Chess, and Cinebench aren't accurate? I think what you mean to say is not all CPU benchmarks reflect what consumers will experience, and that has more to do with consumers and consumer software than it does with actual benchmark accuracy.

CPU benchmarks do not reflect the final customer experience. That is what I originally meant to say.

Judge a product on its own merits AND the company that made it.

How to setup MSI Afterburner OSD | How to make your AMD Radeon GPU more efficient with Radeon Chill | (Probably) Why LMG Merch shipping to the EU is expensive

Oneplus 6 (Early 2023 to present) | HP Envy 15" x360 R7 5700U (Mid 2021 to present) | Steam Deck (Late 2022 to present)

 

Mid 2023 AlTech Desktop Refresh - AMD R7 5800X (Mid 2023), XFX Radeon RX 6700XT MBA (Mid 2021), MSI X370 Gaming Pro Carbon (Early 2018), 32GB DDR4-3200 (16GB x2) (Mid 2022)

Noctua NH-D15 (Early 2021), Corsair MP510 1.92TB NVMe SSD (Mid 2020), beQuiet Pure Wings 2 140mm x2 & 120mm x1 (Mid 2023),


This has turned out to be a rumor. But I still wonder: is it possible that someone could get unreleased chips from both AMD and Intel?


1 hour ago, AluminiumTech said:

CPU benchmarks do not reflect the final customer experience. That is what I originally meant to say.

And that's the fault of consumer software makers, not benchmark creators. Despite my disdain for PrimateLabs' lack of fairness and objectivity in their optimization practices for ARM vs. x86, at least they do know how to program very well. I haven't been able to beat their optimized assembly on my iPhone even after a couple weeks of effort.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


7 hours ago, Bouzoo said:

That sounds like a pre-built? If so, which one, if you don't mind me asking?

Not a pre-built. It's just that I told them to not ship anything until all parts were in stock. 


16 hours ago, Prysin said:

TI only makes smaller controllers and small mobile SoCs. Foundries sell per wafer, not per chip. That is why yields and die size are so important to the product margins of chip-designing companies like AMD, Apple, Samsung, Intel, etc. The more chips you get per wafer, the lower the production cost, since you need fewer wafers. Growing perfect silicon crystals costs a lot of money.

 

Consoles are AMD tech. Consoles are at TSMC.

 

AMD is still taping out low-volume production runs of the Fiji and Hawaii chips used in top-end FirePro cards like the W9150 and, I would guess, also the newly announced Fiji Nano based card, which is 28nm.

Actually, when yields are high enough, TSMC sells by the chip because that's cheaper for TSMC.

 

Actually growing silicon is incredibly cheap these days.
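For a sense of scale on the yields and die size point in the quote above, here is a rough dies-per-wafer sketch; the approximation formula is a standard back-of-the-envelope one, and the die sizes are purely illustrative, not any real product's.

#include <cmath>
#include <cstdio>

// Common approximation: dies ~= pi*(d/2)^2 / A  -  pi*d / sqrt(2*A),
// where d is the wafer diameter and A the die area; the second term is edge loss.
double dies_per_wafer(double wafer_diameter_mm, double die_area_mm2) {
    const double pi = 3.14159265358979;
    double gross     = pi * wafer_diameter_mm * wafer_diameter_mm / (4.0 * die_area_mm2);
    double edge_loss = pi * wafer_diameter_mm / std::sqrt(2.0 * die_area_mm2);
    return gross - edge_loss;
}

int main() {
    // Hypothetical die sizes: a small mobile-class die vs. a large GPU-class die.
    std::printf("100 mm^2 die on a 300 mm wafer: ~%.0f dies\n", dies_per_wafer(300.0, 100.0));
    std::printf("600 mm^2 die on a 300 mm wafer: ~%.0f dies\n", dies_per_wafer(300.0, 600.0));
    return 0;
}

With those illustrative numbers, a 100 mm^2 die yields roughly 640 candidates per 300 mm wafer versus roughly 90 for a 600 mm^2 die, before any yield losses are even applied, which is the margin effect the quoted post describes.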

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


1 hour ago, patrickjp93 said:

Actually, when yields are high enough, TSMC sells by the chip because that's cheaper for TSMC.

 

Actually growing silicon is incredibly cheap these days.

It may be "cheap" to grow it, but it is extremely time consuming, and when demand has been as extreme as it has recently, with the explosion of flash-based products, IoT, and other mobile/low-power products, it becomes expensive as prices get inflated by demand far outstripping supply.

 

We know this is the case because tech prices haven't dropped as fast as they usually do. We normally see a drop in price, maybe not huge, but a drop, every 2-3 months as production ramps up, product stockpiles grow, and yields improve. However, in 2016 that drop was delayed by up to 5-6 months. Manufacturing nodes and yields don't seem to be the issue, because there aren't many reports of catastrophically bad yields; instead, natural disasters in Japan, coupled with extremely fast growth and the subsequent demand, have inflated the price of silicon.

 

Up-front recovery of EUV R&D costs also comes into play, but that is a long-term investment plan, so prices shouldn't be affected this much.


7 hours ago, patrickjp93 said:

??? How so? You're seriously going to tell me Linpack, SAP, Chess, and Cinebench aren't accurate? I think what you mean to say is not all CPU benchmarks reflect what consumers will experience, and that has more to do with consumers and consumer software than it does with actual benchmark accuracy.

Some are better than others. Just look at Geekbench as a bad example. We've had a rollercoaster of wrong info for the last few months. It was quite often a major source/topic in your debates. 

The ability to google properly is a skill of its own. 


32 minutes ago, Prysin said:

It may be "cheap" to grow it, but it is extremely time consuming, and when demand has been as extreme as it has recently, with the explosion of flash-based products, IoT, and other mobile/low-power products, it becomes expensive as prices get inflated by demand far outstripping supply.

 

We know this is the case because tech prices haven't dropped as fast as they usually do. We normally see a drop in price, maybe not huge, but a drop, every 2-3 months as production ramps up, product stockpiles grow, and yields improve. However, in 2016 that drop was delayed by up to 5-6 months. Manufacturing nodes and yields don't seem to be the issue, because there aren't many reports of catastrophically bad yields; instead, natural disasters in Japan, coupled with extremely fast growth and the subsequent demand, have inflated the price of silicon.

 

Up-front recovery of EUV R&D costs also comes into play, but that is a long-term investment plan, so prices shouldn't be affected this much.

I wouldn't call 500 crystals in 24 hours particularly time consuming when you get 300+ 300mm wafers per crystal, and the crystals can be reused too.

 

The cost of getting to Intel's real 14nm and Samsung's "10nm" (12.5nm, really) is much larger than it was for previous nodes. That cost has to be amortized over a longer period, but it still has to be recouped and margined.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


19 minutes ago, Bouzoo said:

Some are better than others. Just look at Geekbench as a bad example. We've had a rollercoaster of wrong info for the last few months. It was quite often a major source/topic in your debates. 

The only significant problem with GB is its enormous bias for ARM. There is nowhere near as much care for x86 as there is for ARM. I've tripled scores for Sandy Bridge on GB 3 just by disassembling it, decompiling it, and injecting fresh code I wrote and compiled myself. It makes no use of AVX 1 or 2. That's a shame, considering AVX covers Bulldozer, Jaguar, Sandy Bridge, and everything newer.
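To make the AVX point concrete, here is a minimal, hypothetical example (not Geekbench code) of the kind of data-parallel loop AVX accelerates; whether any given benchmark or application benefits depends on whether its hot loops look like this and on how it was compiled.

#include <cstddef>
#include <cstdio>
#include <vector>

// Independent per-element work: exactly the shape of loop wide vector units are built for.
void saxpy(float a, const std::vector<float>& x, std::vector<float>& y) {
    for (std::size_t i = 0; i < x.size(); ++i)
        y[i] = a * x[i] + y[i];
}

int main() {
    std::vector<float> x(1 << 20, 1.5f), y(1 << 20, 2.0f);
    saxpy(2.0f, x, y);
    std::printf("y[0] = %.1f\n", y[0]);  // 2.0 * 1.5 + 2.0 = 5.0
    return 0;
}

Built with something like g++ -O3 -mavx2, most compilers auto-vectorize that loop to 256-bit operations; built without AVX enabled, it falls back to SSE or scalar code, and no amount of AVX hardware helps software whose hot loops don't look like this in the first place.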

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


51 minutes ago, patrickjp93 said:

The only significant problem with GB is its enormous bias for ARM. There is nowhere near as much care for x86 as there is for ARM. I've tripled scores for Sandy Bridge on GB 3 just by disassembling it, decompiling it, and injecting fresh code I wrote and compiled myself. It makes no use of AVX 1 or 2. That's a shame, considering AVX covers Bulldozer, Jaguar, Sandy Bridge, and everything newer.

Well Patrick, one can argue that the amount of real-life, day-to-day software that uses AVX is pretty much "none" in the grand scheme of things (yes, I know there is some, but it is a minority, not a majority). As such, testing for a "best of the worst" performance metric, rather than a "best of the best" metric, makes sense.

Do Excel, Word, PowerPoint, Acrobat, Photoshop, Skype, Steam, most torrenting software, most games, etc. use AVX? No. Not to any beneficial degree.

Is it a shame? YES.

Does that mean we should go by AVX, aka "best of the best" case performance? No.

We should look at the "best of the worst case", as is taught in pretty much all the programming lessons I've taken so far. In terms of actual performance, the worst case is far more common than the best case.

Does that say a lot about the programmers? YES.

In the end, the consumers get the short end of the stick: shoddy code, and "glorified" tests which do not explain the extreme deficits we see going from some tests over to real-life applications.

Take a good example: AMD FX vs Skylake.

Cinebench, which you claimed just a few posts ago was a reliable benchmark, among many, many other benchmarks, shows that Skylake is, core for core, clock for clock, 86% faster than Piledriver. We know this because we can test it repeatedly.

But in games, how big is the disparity between, say, an 8350 and a 6700K? Most of the time... no more than 35%.

So despite each core being faster by a significant degree, and not even factoring in the shitty multi-threaded scaling of AMD FX CPUs, we are still left with a 51% "performance gap" to explain.

"But but but, the FX actually has 8 cores, and this is what skews the results," you might say...

But in the end, the scaling of the FX8 CPUs is around 6.7x single core, whilst the 6700K is closer to 5x single core... meaning that, thanks to Hyper-Threading, it performs better than a standard quad core; not only that, it performs better than an FX by a significant margin despite having fewer "hardware" cores.

So what does this tell us? It tells us that benchmarks using extremely optimized code run the risk of being so well optimized that they no longer hold relevance for day-to-day usage. That is when a benchmark "fails". If the benchmark no longer represents the way of coding that is common in the software casual users run, then its score no longer holds relevance for people outside of those running that specific program. The test turns into a "scoreboard" tool.

Much like the Nürburgring has become for supercar manufacturers.

Their cars may get around that track with insanely low lap times, but as a byproduct, all their quoted performance characteristics no longer hold water outside of the track itself.
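As a sanity check, the sketch below simply plugs the post's own figures together (Piledriver core = 1.00, Skylake core = 1.86 per clock, ~6.7x multi-thread scaling for the FX-8350 versus ~5x for the i7-6700K); clock-speed differences are ignored, so this is only meant to show how the per-core gap and the scaling factors combine.

#include <cstdio>

int main() {
    // Inputs taken from the post above; clock-speed differences are ignored.
    const double piledriver_core = 1.00;  // baseline per-core, per-clock speed
    const double skylake_core    = 1.86;  // "86% faster" core for core, clock for clock
    const double fx_scaling      = 6.7;   // FX-8350 multi-thread scaling
    const double skl_scaling     = 5.0;   // i7-6700K multi-thread scaling (4C/8T)

    double fx_mt  = piledriver_core * fx_scaling;  // ~6.7
    double skl_mt = skylake_core    * skl_scaling; // ~9.3

    std::printf("FX-8350 relative MT throughput:  %.2f\n", fx_mt);
    std::printf("i7-6700K relative MT throughput: %.2f\n", skl_mt);
    std::printf("6700K advantage, fully threaded: ~%.0f%%\n", (skl_mt / fx_mt - 1.0) * 100.0);
    return 0;
}

Combining the two factors gives the 6700K roughly a 39% advantage in a fully threaded load, which is in the same ballpark as the ~35% gap seen in games.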


1 minute ago, Prysin said:

Well Patrick, one can argue that the amount of real-life, day-to-day software that uses AVX is pretty much "none" in the grand scheme of things (yes, I know there is some, but it is a minority, not a majority). As such, testing for a "best of the worst" performance metric, rather than a "best of the best" metric, makes sense.

Do Excel, Word, PowerPoint, Acrobat, Photoshop, Skype, Steam, most torrenting software, most games, etc. use AVX? No. Not to any beneficial degree.

Is it a shame? YES.

Does that mean we should go by AVX, aka "best of the best" case performance? No.

We should look at the "best of the worst case", as is taught in pretty much all the programming lessons I've taken so far. In terms of actual performance, the worst case is far more common than the best case.

Does that say a lot about the programmers? YES.

In the end, the consumers get the short end of the stick: shoddy code, and "glorified" tests which do not explain the extreme deficits we see going from some tests over to real-life applications.

Take a good example: AMD FX vs Skylake.

Cinebench, which you claimed just a few posts ago was a reliable benchmark, among many, many other benchmarks, shows that Skylake is, core for core, clock for clock, 86% faster than Piledriver. We know this because we can test it repeatedly.

But in games, how big is the disparity between, say, an 8350 and a 6700K? Most of the time... no more than 35%.

So despite each core being faster by a significant degree, and not even factoring in the shitty multi-threaded scaling of AMD FX CPUs, we are still left with a 51% "performance gap" to explain.

"But but but, the FX actually has 8 cores, and this is what skews the results," you might say...

But in the end, the scaling of the FX8 CPUs is around 6.7x single core, whilst the 6700K is closer to 5x single core... meaning that, thanks to Hyper-Threading, it performs better than a standard quad core; not only that, it performs better than an FX by a significant margin despite having fewer "hardware" cores.

So what does this tell us? It tells us that benchmarks using extremely optimized code run the risk of being so well optimized that they no longer hold relevance for day-to-day usage. That is when a benchmark "fails". If the benchmark no longer represents the way of coding that is common in the software casual users run, then its score no longer holds relevance for people outside of those running that specific program. The test turns into a "scoreboard" tool.

Much like the Nürburgring has become for supercar manufacturers.

Their cars may get around that track with insanely low lap times, but as a byproduct, all their quoted performance characteristics no longer hold water outside of the track itself.

Excel and Word do. So does Photoshop.

 

I disagree with you. We should have benchmarks which expose the BS for what it is: BS.

 

Benchmarks consist of micro-benchmark suites for a reason. It goes back to my blogs. If I alone am capable of smashing senior Epic Games, Blizzard, and Bethesda devs, then either I'm a genius or they're idiots. What's more likely here?
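For anyone unfamiliar with the term, a micro-benchmark just times one small kernel in isolation. The sketch below is an illustrative bare-bones harness, not taken from any actual suite; real suites layer warm-up runs, repeated trials, and statistical filtering on top of the same idea.

#include <chrono>
#include <cstdio>
#include <ratio>
#include <vector>

int main() {
    std::vector<double> data(1 << 16, 1.0001);
    volatile double sink = 0.0;   // keeps the compiler from optimizing the work away
    const int iterations = 1000;

    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < iterations; ++i) {
        double sum = 0.0;
        for (double v : data) sum += v * v;   // the "kernel" being measured
        sink = sum;
    }
    auto stop = std::chrono::steady_clock::now();

    double total_us = std::chrono::duration<double, std::micro>(stop - start).count();
    std::printf("%.2f us per iteration (sink = %.1f)\n", total_us / iterations, (double)sink);
    return 0;
}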

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


1 hour ago, patrickjp93 said:

I wouldn't call 500 crystals in 24 hours particularly time consuming when you get 300+ 300mm wafers per crystal, and the crystals can be reused too.

I'd love a source. Also, do you fully understand what you are talking about here? 

RyzenAir : AMD R5 3600 | AsRock AB350M Pro4 | 32gb Aegis DDR4 3000 | GTX 1070 FE | Fractal Design Node 804
RyzenITX : Ryzen 7 1700 | GA-AB350N-Gaming WIFI | 16gb DDR4 2666 | GTX 1060 | Cougar QBX 

 

PSU Tier list

 


6 minutes ago, patrickjp93 said:

Excel and Word do. So does Photoshop.

 

I disagree with you. We should have benchmarks which expose the BS for what it is: BS.

 

Benchmarks consist of micro-benchmark suites for a reason. It goes back to my blogs. If I alone am capable of smashing senior Epic Games, Blizzard, and Bethesda devs, then either I'm a genius or they're idiots. What's more likely here?

I know Excel uses some AVX, and I guess Word is based on the same underlying arithmetic engine, so that was a poor argument on my side.

Photoshop uses it sparingly; only a few functions will see a benefit from it vs. a product without AVX support. Although, where it IS used, it does improve things quite a lot. The Adobe software suite seems to favor GPU acceleration far more than AVX and the CPU. 

 

 

Please don't talk about Bethesda employees in the same breath as the rest of humanity. To work there you must be a savant when it comes to shit coding and recycling near-mummified code.

 

Blizzard doesn't even try. They just reduce the quality of their assets and the complexity of their systems to such a degree that two potatoes with a string of copper wire between them can run any Blizzard title.

 

Epic Games, alongside Crytek, doesn't know how to actually code. They know how to generate beautiful effects, but not how to make said effects run well. This is why CryEngine and Unreal Engine based titles often struggle to run properly on any CPU. Their code ain't tidy. Perhaps not outright bad, but not tidy, and as a result, in the case of Unreal, I have a feeling there are A LOT of "over the years" band-aid solutions and exceptions being used to circumvent bad code that should have been rewritten from scratch.


12 minutes ago, patrickjp93 said:

If I alone am capable of smashing senior Epic Games, Blizzard, and Bethesda devs, then either I'm a genius or they're idiots. What's more likely here?

 

RyzenAir : AMD R5 3600 | AsRock AB350M Pro4 | 32gb Aegis DDR4 3000 | GTX 1070 FE | Fractal Design Node 804
RyzenITX : Ryzen 7 1700 | GA-AB350N-Gaming WIFI | 16gb DDR4 2666 | GTX 1060 | Cougar QBX 

 

PSU Tier list

 


5 minutes ago, Prysin said:

 

Please don't talk about Bethesda employees in the same breath as the rest of humanity. To work there you must be a savant when it comes to shit coding and recycling near-mummified code.

 

Blizzard doesn't even try. They just reduce the quality of their assets and the complexity of their systems to such a degree that two potatoes with a string of copper wire between them can run any Blizzard title.

 

Epic Games, alongside Crytek, doesn't know how to actually code. They know how to generate beautiful effects, but not how to make said effects run well. This is why CryEngine and Unreal Engine based titles often struggle to run properly on any CPU. Their code ain't tidy. 

Not trying to derail the thread, but which game companies do write the best code? I'm fairly new to PC gaming and would be interested to know...

Love not hate


17 minutes ago, Prysin said:

This is why CryEngine and Unreal Engine based titles often struggle to run properly on any CPU. Their code ain't tidy. Perhaps not outright bad, but not tidy, and as a result, in the case of Unreal, I have a feeling there are A LOT of "over the years" band-aid solutions and exceptions being used to circumvent bad code that should have been rewritten from scratch.

This is why I was so disappointed Square Enix dropped their Luminous Engine for Unreal. I mean, it's not like I know how well coded it was, but I've always had immense respect for a company that makes an entirely new game engine for every major Final Fantasy title; doing that is freakin' expensive and time consuming. That either means they are very experienced at doing it and can make something good, or they are experts at throwing garbage together that actually functions rather well by the time the game is released.

 

P.S. Before anyone comments, please know the difference between Square Enix-published games and ones ACTUALLY made by them.


18 minutes ago, jeremymwilson said:

Not trying to derail the thread, but which game companies do write the best code? I'm fairly new to PC gaming and would be interested to know...

DICE LA has a fairly good track record. (Battlefield 4 post launch)

 

CD Projekt RED (The Witcher series, Cyberpunk 2077)

 

Valve (the Source Engine isn't fantastic, but it isn't rubbish; Valve supported quad cores back in 2005-2006, around the time when the industry barely bothered with dual cores).

 

Rockstar -> they have their ups and downs, but they ain't outright BAD.

 

Nixxes -> they mostly do console ports. They did RotTR, among others.

 

 


1 hour ago, Prysin said:

I have a feeling there are A LOT of "over the years" band-aid solutions and exceptions being used to circumvent bad code that should have been rewritten from scratch.

Just like all software that has been around for 20 years and been incrementally brought up to new versions over time. It's really easy for us to sit around and say "wow, that code is awful and lots of it should be rewritten", but doing that is extremely expensive and time consuming, and it creates a complete fork in the support chain, further increasing the cost.

 

I very much hate enterprise software; it's all awful, but it all started in the '90s and has never lost that '90s stain. It's always there, either as a small dot or covering the whole bloody thing >:(.

