8700K benchmarks leaked

ravenshrike
Just now, mr moose said:

You only live once; there's always food in the bins near McDonald's. A 1080 Ti is a very important purchase.

Rofl, nah, I'll get a 1080 Ti when I get a new job and build my Ryzen system. I love food more than anything else.



Price it right, Intel. This chip looks promising. My guess is that I'll still have to delid it. :/


1 hour ago, MageTank said:

Honestly, they can't really afford to price it any higher without stepping into X299's domain. If they price the 8700K too close to the 7800X (which has a $383-$389 MSRP and is currently priced at $395 on Newegg), you might as well buy an X299 board at that point and reap the benefits that platform has to offer. This is all under the assumption that you bought the 6-core, 12-thread CPU because you actually needed more thread power, not because you wanted a gaming CPU. At $340-$350, there is still enough of a buffer to separate the CPUs (along with the difference in platform/cooling costs).

 

I've read the Intel datasheets, and I cannot find any disclosed methodology or information that offers us a means to validate the advertised TDPs as being "sufficient". While I know it's not my place to question their engineers, I do question their choice of words here:

What is this workload, and where exactly is it specified? This is the source of that quote: https://www.intel.com/content/www/us/en/processors/core/7th-gen-core-family-desktop-s-processor-lines-datasheet-vol-1.html

 

But I've looked through several generations of Intel's datasheets and have yet to find the actual testing methodology they use to arrive at the numbers they get. If they say "you need a cooler capable of dissipating 95W of heat under a near-worst-case commercial load" for a 7700K, I'd like to know what test they used to generate that load. We know what Intel's temperature tolerance range is, as they very clearly list tCase and tJunction, but there is still plenty left to be desired when it comes to TDP. Perhaps I just don't get it, but I feel I should be able to know how they arrived at that number, and that I should be able to reproduce their methodology as a means to validate those claims. I ask this of every reviewer I watch; is it asking too much for the manufacturers to offer the same information?

 

The ambiguity, for me, stems from the fact that as long as your CPU is not falling below base clocks, it's perfectly fine by Intel's standards. Not everyone feels this way; many would consider their cooling solution inadequate if it could not maintain all-core boost ratios (as defined by the default boost table, not by advanced turbo). Intel also lists anything under tJunction as "okay", but in the very same document mentions prolonged temperatures above 76C being harmful to longevity. I've seen plenty of i7s go beyond 76C, even in just gaming workloads, at stock clocks on the Haswell DC stock cooler (the one with the copper slug).

 

It becomes even more obfuscated to me when you have i5s and i7s with identical TDPs, but the i7s run a clear 15C hotter due to Hyper-Threading. There's no rhyme or reason as to why the TDPs are identical, just a static "X TDP for both of these CPUs". And all of this ignores the fact that turbo boost and variable voltage throw off the accuracy of any TDP rating anyway.
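To put toy numbers on that last point: dynamic power scales roughly with C·V²·f, so an all-core turbo that also raises voltage blows right past any single sticker figure. A back-of-envelope sketch (the capacitance constant, voltages, and clocks below are all made up for illustration):

```python
# Back-of-envelope dynamic power: P ~ C_eff * V^2 * f.
# C_EFF is an arbitrary toy constant chosen so the base-clock case lands
# near 95 W; the voltage/clock pairs are made-up examples, not measurements.

def dyn_power(c_eff_nf, volts, ghz):
    """Dynamic power in watts (nF * V^2 * GHz works out to watts)."""
    return c_eff_nf * volts**2 * ghz

C_EFF = 20.0  # nF, toy value

base  = dyn_power(C_EFF, 1.10, 4.0)   # base clock at a modest voltage
turbo = dyn_power(C_EFF, 1.25, 4.5)   # all-core turbo needs more volts AND more clock

print(f"base:  {base:.0f} W")         # ~97 W, right around the sticker TDP
print(f"turbo: {turbo:.0f} W")        # ~141 W, nowhere near the sticker TDP
```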

 

Sorry for this rant, but man, I really hate the concept of TDP as a whole, lol. If you could somehow simplify it for a dummy like me to comprehend, I'd greatly appreciate it. Part of me feels I am overthinking it entirely, and the other part still doesn't know whether I am even on the right track when it comes to understanding it. The more I look into it and cross-reference various coolers against the actual advertised loads (130W of heat being produced, cooled by a cooler rated for 130W), the further I get from making any sense of it.

I think the issue here is that you are trying to arrive at an absolute figure for what constitutes a workload. It doesn't need to be absolute, because the definition is for a commercially available workload, which is SKU-dependent. I.e., Intel defines a Xeon SKU as designed for server workloads, so a server-based workload that pushes the CPU to almost 100% while remaining at the base clock frequency will require a thermal solution of *TDP* to keep Tj in spec and prevent the CPU from throttling.

 

The thing is, it doesn't have to be a specific program, just one that demonstrates the typical workload one would see for that SKU's typical use case. I suspect any one of your stress-test programs would suffice, given it would utilise the CPU at 100% at base clocks.
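If you wanted to sanity-check the figure yourself on Linux, a rough sketch like this would do, assuming the intel-rapl powercap interface is exposed and you run your stress test of choice in another terminal (this is my guess at a method, not Intel's):

```python
# Average package power over a 30-second stress run, via Intel RAPL.
# Assumes /sys/class/powercap/intel-rapl:0 exists (Linux, Intel CPU);
# counter wraparound is ignored for a sample this short.
import time

RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package-0 energy counter

def read_uj():
    with open(RAPL) as f:
        return int(f.read())

e0, t0 = read_uj(), time.time()
time.sleep(30)                            # sample while the stress load runs
e1, t1 = read_uj(), time.time()

watts = (e1 - e0) / 1e6 / (t1 - t0)       # microjoules -> joules -> watts
print(f"average package power: {watts:.1f} W")
```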

 

Maybe you should read this:

 

http://www.electronicdesign.com/boards/without-thermal-analysis-you-might-get-burned

 

EDIT: The other thing is, and I've tried to say this before but not very well, that TDP is not a spec for end users like us. If the cooler we buy is recommended for the intended use (i.e. overclocking or reducing noise), then the people who made it should have looked at the TDP and the relevant definitions and made sure it works.

  



On 27/08/2017 at 6:17 PM, DrMacintosh said:

At this point, Intel's IPC is useless unless they can price it right, and I don't think they can.

Not entirely...

If AMD can't match Intel's IPC and clock speeds, people WILL pay the Intel premium (myself included).

There does come a point where Ryzen's "value" stops mattering; currently, Ryzen's only advantage is core count. Seeing as this chip has two fewer cores and four fewer threads and STILL wins in some productivity benchmarks, people will definitely pay $50, maybe even $100, more than for a Ryzen 7; myself included.



5 minutes ago, Armakar said:

If AMD can't match Intel's IPC and clock speeds, people WILL pay the Intel premium (myself included)

You really should capitalize and bold the word "and," because AMD's current offerings are only 5% behind Intel's. There isn't a single person on Haswell, which is 5% behind Skylake in IPC, who actually needs to upgrade from a chip that can hit 4.1GHz+. And as it stands, ~4GHz Ryzen, just like 4GHz Haswell and later, is more than enough for many people.

 

8 minutes ago, Armakar said:

currently, Ryzen's only advantage is core count. Seeing as this chip has two fewer cores and four fewer threads and STILL wins in some productivity benchmarks, people will definitely pay $50, maybe even $100, more than for a Ryzen 7

That assumes the user's workload benefits from raw multicore performance regardless of core count. But there are workloads where having more processes that aren't competing for core time matters more than finishing individual threads sooner.

A friend of mine has an octocore Sandy Bridge Xeon that has damn near the same multicore performance as my hexacore 5930K at stock. If we were to trade systems, he'd need to reconfigure his workflow, because having two extra cores to throw threads on is vital to it. It doesn't matter that the single-core performance is lower.
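Quick toy numbers to show why (an Amdahl's law sketch; the core counts and clocks are loosely modelled on our two chips, the parallel fractions are made up):

```python
# Amdahl's law toy: more, slower cores vs fewer, faster cores.
# p is the fraction of the work that parallelizes (made-up values).

def speedup(p, cores, ghz):
    """Throughput relative to one core at 1.0 GHz."""
    return ghz / ((1 - p) + p / cores)

for p in (0.50, 0.90, 0.99):
    octo = speedup(p, cores=8, ghz=3.1)   # octocore Sandy Bridge Xeon-ish
    hexa = speedup(p, cores=6, ghz=3.7)   # hexacore 5930K-ish
    print(f"p={p:.2f}   8c@3.1GHz: {octo:5.1f}x   6c@3.7GHz: {hexa:5.1f}x")

# At p=0.50 the faster hexacore wins; at p=0.99 the extra cores win.
# Which chip is "faster" depends entirely on the workload split.
```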



4 hours ago, Drak3 said:

You really should capitalize and bold the word "and," because AMD's current offerings are only 5% behind Intel's.

Can you stop spreading lies? It doesn't perform equally in every workload: in some it's almost at Kaby Lake's level, in others it's 15-20% behind, and in some it's up to 33% behind. Look up how an R3 overclocked to 3.8-4GHz barely beats an i5 7400 at 3GHz; that's a 33% difference, which is around Sandy Bridge territory.


5 hours ago, MyName13 said:

Can you stop spreading lies? It doesn't perform equally in every workload: in some it's almost at Kaby Lake's level, in others it's 15-20% behind, and in some it's up to 33% behind. Look up how an R3 overclocked to 3.8-4GHz barely beats an i5 7400 at 3GHz; that's a 33% difference, which is around Sandy Bridge territory.

Ryzen is only behind in AVX and AVX2: AVX because it has half the execution resources, AVX2 because it is bottlenecked by memory.

 

Otherwise, IPC is 5% behind Kaby Lake. Any performance disparity larger than that, at the same clocks, comes down to the IMC, cache configuration, or CCX intercommunication. And even then, many programs not reliant on AVX instruction sets can be optimized to close the gap.
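If anyone wants to check a clock-for-clock claim from a review themselves, the arithmetic is just score-per-GHz. A sketch (the scores below are placeholders; the method is the point):

```python
# Clock-for-clock ("IPC") comparison from single-thread benchmark scores.
# Scores and clocks here are made-up placeholders, not real results.
scores = {
    "Ryzen @ 3.9 GHz":     (160, 3.9),   # (single-thread score, clock in GHz)
    "Kaby Lake @ 4.8 GHz": (207, 4.8),
}

norm = {name: s / ghz for name, (s, ghz) in scores.items()}  # score per GHz
amd, intel = norm["Ryzen @ 3.9 GHz"], norm["Kaby Lake @ 4.8 GHz"]
print(f"clock-for-clock gap: {(1 - amd / intel) * 100:.1f}%")  # ~4.9% here
```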



5 hours ago, Drak3 said:

Ryzen is only behind in AVX and AVX2: AVX because it has half the execution resources, AVX2 because it is bottlenecked by memory.

Otherwise, IPC is 5% behind Kaby Lake. Any performance disparity larger than that, at the same clocks, comes down to the IMC, cache configuration, or CCX intercommunication. And even then, many programs not reliant on AVX instruction sets can be optimized to close the gap.

Do you even think before typing? It isn't 5% behind, and if the IMC and cross-CCX communication ruin performance, then it is simply an inferior architecture. Am I supposed to use only one CCX with a modified IMC? Check TechPowerUp's review and you'll see that it is up to 30-45% behind clock for clock.


1 hour ago, MyName13 said:

Do you even think before typing? It isn't 5% behind, and if the IMC and cross-CCX communication ruin performance, then it is simply an inferior architecture. Am I supposed to use only one CCX with a modified IMC? Check TechPowerUp's review and you'll see that it is up to 30-45% behind clock for clock.

No source has pegged the Zen architecture as being more than 7% behind on its worst-performing instructions, outside of AVX, without introducing a bottleneck. Only when an uncore subsystem is underperforming does Ryzen fall behind a 4GHz Haswell.



1 hour ago, Drak3 said:

No source has pegged the Zen architecture as being more than 7% behind on its worst-performing instructions, outside of AVX, without introducing a bottleneck. Only when an uncore subsystem is underperforming does Ryzen fall behind a 4GHz Haswell.

Here you go, compare the R3 and the i5 7500 ;)

https://www.techpowerup.com/reviews/AMD/Ryzen_3_1300X/6.html


2 hours ago, MyName13 said:

You might want to look into those benchmarks.

Your link disproved nothing I've said. It backed it up, actually. Uncore subsystems on Ryzen act as bottlenecks.

 

IPC-wise, AMD is roughly 5% behind Intel, except in AVX and AVX2. In TechPowerUp's conclusion, they note the CCX and memory configurations as negatives of the R3.

 

It's been noted that 4+0 performs slightly better than 2+2 in some scenarios, but not the other way around. This is due to the absence of CCX intercommunication with 4+0.

 

https://www.techpowerup.com/231873/amd-ryzen-quad-core-2-2-vs-4-0-core-distributions-compared

 

Moreover, synthetic benchmarks aren't the end-all, be-all of measuring CPU performance. Cinebench R15 is losing credibility in regard to new CPUs due to its time-based method of scoring performance. Geekbench only cares about DMI strings. And all benchmarks hit resources in a small handful of ways that aren't quite indicative of how real-world performance is going to look across the board. While TechPowerUp's testing is more thorough than some other reviewers', it doesn't look into why performance is the way it is, only that it is.
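If you want to see the CCX hop on your own Ryzen box, here's a crude Linux-only probe; the core IDs below are assumptions (check your topology first), and pipe overhead dominates the absolute numbers, so only the relative difference means anything:

```python
# Crude core-to-core round-trip probe: pin two processes to chosen cores
# and ping-pong a byte over a pipe. On a 2+2 Ryzen part, pick one pair
# inside a CCX and one pair straddling the CCX boundary. Linux only.
import os
import time
from multiprocessing import Pipe, Process

N = 100_000

def pong(conn, core):
    os.sched_setaffinity(0, {core})       # pin the echo process
    for _ in range(N):
        conn.send_bytes(conn.recv_bytes())

def round_trip_us(core_a, core_b):
    parent, child = Pipe()
    p = Process(target=pong, args=(child, core_b))
    p.start()
    os.sched_setaffinity(0, {core_a})     # pin ourselves
    t0 = time.perf_counter()
    for _ in range(N):
        parent.send_bytes(b"x")
        parent.recv_bytes()
    elapsed = time.perf_counter() - t0
    p.join()
    return elapsed / N * 1e6              # microseconds per round trip

print(f"cores 0-1 (same CCX, assumed):  {round_trip_us(0, 1):.1f} us")
print(f"cores 0-4 (cross CCX, assumed): {round_trip_us(0, 4):.1f} us")
```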



On 8/31/2017 at 3:00 AM, MageTank said:

.

Is the single-core performance of the locked i7 8700 going to be worth it for gaming, though? Given that not every game will use all 12 threads and the base clocks are lower than Kaby Lake's, does this potentially mean that for a high-end, gaming-exclusive computer, going with the locked i7 won't be as good an option as it was with Skylake and Kaby Lake?


1 minute ago, Princess Cadence said:

Is the single-core performance of the locked i7 8700 going to be worth it for gaming, though? Given that not every game will use all 12 threads and the base clocks are lower than Kaby Lake's, does this potentially mean that for a high-end, gaming-exclusive computer, going with the locked i7 won't be as good an option as it was with Skylake and Kaby Lake?

It will depend on the title. Some titles scale well enough with additional threads to offset the single-core deficit, but titles that scale only with per-core speed (MOBAs, MMOs, most competitive shooters, etc.) will see a performance hit.

 

I can't imagine people will feel a difference, though, given that Intel is improving IPC and constantly tweaking Speed Shift to improve turbo response times. We will see whether it makes a difference when it launches.
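If you're curious how fast your own chip ramps, a quick-and-dirty poller like this shows it (Linux cpufreq sysfs, core 0 assumed; kick off a load partway through the run):

```python
# Poll core 0's current frequency every 10 ms for ~2 seconds.
# Assumes the standard Linux cpufreq sysfs path is present.
import time

PATH = "/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq"

for _ in range(200):
    with open(PATH) as f:
        khz = int(f.read())
    print(f"{time.monotonic():.3f}s  {khz / 1e6:.2f} GHz")
    time.sleep(0.01)
```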



7 hours ago, Drak3 said:

You might want to look into those benchmarks.

Your link disproved nothing I've said. It backed it up, actually. Uncore subsystems on Ryzen act as bottlenecks.

IPC-wise, AMD is roughly 5% behind Intel, except in AVX and AVX2. In TechPowerUp's conclusion, they note the CCX and memory configurations as negatives of the R3.

It's been noted that 4+0 performs slightly better than 2+2 in some scenarios, but not the other way around. This is due to the absence of CCX intercommunication with 4+0.

https://www.techpowerup.com/231873/amd-ryzen-quad-core-2-2-vs-4-0-core-distributions-compared

Moreover, synthetic benchmarks aren't the end-all, be-all of measuring CPU performance. Cinebench R15 is losing credibility in regard to new CPUs due to its time-based method of scoring performance. Geekbench only cares about DMI strings. And all benchmarks hit resources in a small handful of ways that aren't quite indicative of how real-world performance is going to look across the board. While TechPowerUp's testing is more thorough than some other reviewers', it doesn't look into why performance is the way it is, only that it is.

1) What is this uncore subsystem?

2) Memory and CCX configurations can't be fixed, and if they ruin the performance, then they ruin the performance. Am I supposed to lock the other CCX? Why are you defending AMD so much?

3) That is Hardware Unboxed's benchmark, and 4+0 performs almost like 2+2; there's a 1-3 fps difference in gaming.


8 hours ago, MyName13 said:

What is this uncore subsystem?

Every subsystem required for a functioning system that is not the CPU core itself: memory and the IMC, L3 cache, I/O, and every controller on the chip.

 

8 hours ago, MyName13 said:

Why are you defending AMD so much?

Because what you're saying about Ryzen as a whole only applies to a few aspects of Ryzen that are not vital to most mainstream users. I'd defend a Ford F-150 against someone claiming it was shit compared to a Volvo semi.



3 minutes ago, Drak3 said:

Every subsystem required for a functioning system that is not the CPU core itself: memory and the IMC, L3 cache, I/O, and every controller on the chip.

Because what you're saying about Ryzen as a whole only applies to a few aspects of Ryzen that are not vital to most mainstream users. I'd defend a Ford F-150 against someone claiming it was shit compared to a Volvo semi.

1) So you're saying that if we considered Ryzen as a single CCX with a better IMC, controllers, etc., we would have better CPUs? Wtf O_O If the entire system (called a CPU) can't match the competition, then it's an inferior product. This is like saying a car with great design and a crap engine is great, you just have to ignore the most important part of the vehicle.

2) What do you mean by "not vital to most mainstream users"? It performs worse in everything.

Gaming? Up to 30% and more.

Encoding? Up to 20%.

This is far from your 5%. What exactly are you talking about? To me it looks like you're saying I should ignore everything but the CPU cores. I'm not talking about the architecture of the cores; I'm talking about the whole CPU as a product.


4 minutes ago, MyName13 said:

What do you mean by "not vital to most mainstream users"? It performs worse in everything.

Gaming? Up to 30% and more.

Encoding? Up to 20%.

Chief, your link only has the 1300X losing by more than a few FPS (beyond the margin of error) to 4c/8t-and-up processors, and only in games that make use of more than four threads.

As for encoding: beyond media consumption, which makes use of the hardware encoders on the GPU in use (Ryzen has no iGPU; Kaby Lake has an iGPU and an H.265 hardware encoder), your standard consumer does not give a shit. Encoding without GPU acceleration pits software encoding against hardware encoding, and software encoding will lose that race on otherwise identical hardware.



6 minutes ago, Drak3 said:

Chief, your link only has the 1300X losing by more than a few FPS (beyond the margin of error) to 4c/8t-and-up processors, and only in games that make use of more than four threads.

As for encoding: beyond media consumption, which makes use of the hardware encoders on the GPU in use (Ryzen has no iGPU; Kaby Lake has an iGPU and an H.265 hardware encoder), your standard consumer does not give a shit. Encoding without GPU acceleration pits software encoding against hardware encoding, and software encoding will lose that race on otherwise identical hardware.

TPU's gaming benchmarks look very unreliable; I was talking about other benchmarks (I don't have any links right now). Every CPU on their site gets almost the same frame rate. Not only is this weird, but almost all of them are over 100 fps.

 

About encoding: if it's worse, then it's because it's weaker, and it will be weaker in other things too. What do you mean nobody gives a sh!t? The same problem will occur when comparing the R5 1400 to an i7. Is the iGPU really used in that benchmark? I'm pretty sure they benchmark with a dedicated GPU, or else it wouldn't be a fair benchmark.


why-the-fuck-is-this-thread-still-going.



1 hour ago, MyName13 said:

Is the iGPU really used in that benchmark?

Yes.

 

1 hour ago, MyName13 said:

I'm pretty sure they benchmark with a dedicated GPU, or else it wouldn't be a fair benchmark.

Disabling the iGPU doesn't make it a fairer benchmark. They're comparing full package vs full package in benches that test very specific performance.

And in that regard, Broadwell is one of the best choices, due to the 128MB of eDRAM that can be used as an L4 cache.

1 hour ago, MyName13 said:

every CPU on their site gets almost the same frame rate. Not only is this weird, but almost all of them are over 100 fps.

That's because the games they chose, which are a rather good representation of the heavier games people play, don't care about performance increases beyond a certain point. The 1300X is able to keep the GPU fed so that the CPU isn't a bottleneck. That wouldn't be possible if we blindly accepted your claims about Ryzen's IPC, short of exceeding 4GHz by a wide margin.

 

1 hour ago, MyName13 said:

About encoding: if it's worse, then it's because it's weaker, and it will be weaker in other things too. What do you mean nobody gives a sh!t?

TPU tested the type of encoding content producers use, in which case they'll use GPU acceleration and you'll see no difference between the 1300X and a 7980XE.

But they used the CPUs for the test. Ryzen currently lacks iGPU options and, subsequently, hardware encoders. Ryzen doesn't need them; they'll be on whatever GPU you pair with it, and the GPU's encoders will be more powerful anyway.

Core i, on the other hand, has iGPUs and hardware encoders.

So, when using the CPU to encode, you're pitting hardware against software. Emulating that hardware functionality takes more CPU time than going through the hardware. One can expect similar results from the i5 had they disabled the hardware encoder or tried to encode something the hardware encoder cannot handle.
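You can see the gap yourself with something like this, assuming ffmpeg is on your PATH and was built with Quick Sync support (input.mp4 is whatever test clip you have lying around):

```python
# Time a software (libx264) vs hardware (h264_qsv / Quick Sync) H.264 encode
# of the same clip. Both encoder names are stock ffmpeg; qsv requires an
# Intel iGPU and an ffmpeg build with --enable-libmfx or similar.
import subprocess
import time

def encode_seconds(codec, outfile):
    t0 = time.time()
    subprocess.run(
        ["ffmpeg", "-y", "-i", "input.mp4", "-c:v", codec, outfile],
        check=True, capture_output=True,
    )
    return time.time() - t0

print(f"libx264 (software):    {encode_seconds('libx264', 'sw.mp4'):.1f} s")
print(f"h264_qsv (Quick Sync): {encode_seconds('h264_qsv', 'hw.mp4'):.1f} s")
```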



52 minutes ago, Drak3 said:

Yes.

Disabling the iGPU doesn't make it a fairer benchmark. They're comparing full package vs full package in benches that test very specific performance.

And in that regard, Broadwell is one of the best choices, due to the 128MB of eDRAM that can be used as an L4 cache.

That's because the games they chose, which are a rather good representation of the heavier games people play, don't care about performance increases beyond a certain point. The 1300X is able to keep the GPU fed so that the CPU isn't a bottleneck. That wouldn't be possible if we blindly accepted your claims about Ryzen's IPC, short of exceeding 4GHz by a wide margin.

TPU tested the type of encoding content producers use, in which case they'll use GPU acceleration and you'll see no difference between the 1300X and a 7980XE.

But they used the CPUs for the test. Ryzen currently lacks iGPU options and, subsequently, hardware encoders. Ryzen doesn't need them; they'll be on whatever GPU you pair with it, and the GPU's encoders will be more powerful anyway.

Core i, on the other hand, has iGPUs and hardware encoders.

So, when using the CPU to encode, you're pitting hardware against software. Emulating that hardware functionality takes more CPU time than going through the hardware. One can expect similar results from the i5 had they disabled the hardware encoder or tried to encode something the hardware encoder cannot handle.

Sorry, but the R3 1300X is far behind the i5 7500 in 720p tests, which actually show real CPU performance. It's around 30-40% behind overall; in RoTR it's 83% behind, and in Battlefield 1, 50%. What 5% difference are you talking about? This is actually worse than Sandy Bridge.

