
AMD FX-8370 vs Intel i7-5960X [GTX 970 SLI 4K benchmarks]

An AMD CPU averaging 60+ fps in GTA V has to be fake. No way it does, unless someone patched GTA V.


12700, B660M Mortar DDR4, 32GB 3200C16 Viper Steel, 2TB SN570, EVGA Supernova G6 850W, be quiet! 500FX, EVGA 3070Ti FTW3 Ultra.

 


In no land does this make sense. The 5960X has lower minimums despite having more CPU horsepower in every respect: more cache, stronger single-threaded performance, and more cores.

Not to burst any bubbles, but the 8370 does not have 8 true cores. It has four 2-core modules with heavily shared resources, and indeed some shared execution hardware. (Think of it like this: where two Intel SMT threads give 110-130% of one thread's performance, two AMD "cores" give roughly 170% [a made-up number, somewhere above 1.5x but below 2x] of one regular thread's performance.)

http://www.anandtech.com/show/6201/amd-details-its-3rd-gen-steamroller-architecture

Any article will do; the design philosophy is quite well documented. Either you call them 4 super-cores or 8 gimped cores.
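To put rough numbers on the "4 super cores vs 8 gimped cores" framing, here's a small sketch. The scaling factors are the illustrative figures from the post above (the ~1.7x number is made up, as noted), not measurements:

```python
# Rough thread-equivalent arithmetic for the FX-8370's four CMT modules.
# Both scaling constants are illustrative, not measured.
MODULES = 4
CMT_SCALING = 1.7  # two CMT "cores" deliver ~1.7x one thread's throughput
SMT_SCALING = 1.2  # two Intel SMT threads deliver ~110-130% of one thread

fx_thread_equivalents = MODULES * CMT_SCALING  # ~6.8 thread-equivalents
smt_quad_equivalents = 4 * SMT_SCALING         # a 4c/8t chip: ~4.8

print(fx_thread_equivalents)  # more than 4 "super cores",
print(smt_quad_equivalents)   # but less than 8 independent cores
```

Under these assumed numbers the FX lands at roughly 6.8 thread-equivalents: closer to 8 than a hyper-threaded quad gets, but short of eight fully independent cores, which is exactly the ambiguity both labels are pointing at.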

 

Ah yes, I forgot about this. My bad.

waffle waffle waffle on and on and on


Hi, have you used an AMD CPU before?

I think he was missing a /sarcasm....

LINK-> Kurald Galain:  The Night Eternal 

Top 5820k, 980ti SLI Build in the World*

CPU: i7-5820k // GPU: SLI MSI 980ti Gaming 6G // Cooling: Full Custom WC //  Mobo: ASUS X99 Sabertooth // Ram: 32GB Crucial Ballistix Sport // Boot SSD: Samsung 850 EVO 500GB

Mass SSD: Crucial M500 960GB  // PSU: EVGA Supernova 850G2 // Case: Fractal Design Define S Windowed // OS: Windows 10 // Mouse: Razer Naga Chroma // Keyboard: Corsair k70 Cherry MX Reds

Headset: Senn RS185 // Monitor: ASUS PG348Q // Devices: Note 10+ - Surface Book 2 15"

LINK-> Ainulindale: Music of the Ainur 

Prosumer DIY FreeNAS

CPU: Xeon E3-1231v3  // Cooling: Noctua L9x65 //  Mobo: AsRock E3C224D2I // Ram: 16GB Kingston ECC DDR3-1333

HDDs: 4x HGST Deskstar NAS 3TB  // PSU: EVGA 650GQ // Case: Fractal Design Node 304 // OS: FreeNAS

 

 

 


Um, they're 8 cores, but 2 cores per module. They are 8 "true cores" since there are 8 physical cores inside the processor. You still get to use all 8 physical cores. Just because they are smaller doesn't mean they aren't cores.

But each core doesn't perform all the operations we traditionally require individual cores to perform. The shared module front end does a not-insignificant amount of the work, including most of the fetches, which, believe it or not, is where most CPU time goes.

http://images.anandtech.com/doci/6201/Screen%20Shot%202012-08-28%20at%204.38.05%20PM.png


I will take the time to look this over and talk to the other writer to try to figure it out. Unfortunately, it's almost 2:30 AM and I've got benchmarking I need to get to...

Like I said before, you had genuine questions and I really appreciated it :)

Yeah, go for it! It'd be nice to know, really. Perhaps the shared-module architecture is actually stretching its legs a bit here compared to SMT.

I wonder what @patrickjp93 has to think about all of this.


As the writer of this article, I must first thank the OP for posting it here... I appreciate the feedback and exposure!

 

 

 

AMD has never sponsored any work I've done, including this. They're not an advertising partner, nor did they sanction this report. I did request the hardware from them (as well as from a few other companies for other components), just as I would for any review or report. That said, they were never promised favorable results. They shared the link on their Facebook simply because they clearly liked the report; I never even asked them to, they found it on their own.

That's good to know, I suppose.

CPU: Ryzen 9 5900 Cooler: EVGA CLC280 Motherboard: Gigabyte B550i Pro AX RAM: Kingston Hyper X 32GB 3200mhz

Storage: WD 750 SE 500GB, WD 730 SE 1TB GPU: EVGA RTX 3070 Ti PSU: Corsair SF750 Case: Streacom DA2

Monitor: LG 27GL83B Mouse: Razer Basilisk V2 Keyboard: G.Skill KM780 Cherry MX Red Speakers: Mackie CR5BT

 

MiniPC - Sold for $100 Profit


CPU: Intel i3 4160 Cooler: Integrated Motherboard: Integrated

RAM: G.Skill RipJaws 16GB DDR3 Storage: Transcend MSA370 128GB GPU: Intel 4400 Graphics

PSU: Integrated Case: Shuttle XPC Slim

Monitor: LG 29WK500 Mouse: G.Skill MX780 Keyboard: G.Skill KM780 Cherry MX Red

 

Budget Rig 1 - Sold For $750 Profit


CPU: Intel i5 7600k Cooler: CryOrig H7 Motherboard: MSI Z270 M5

RAM: Crucial LPX 16GB DDR4 Storage: Intel S3510 800GB GPU: Nvidia GTX 980

PSU: Corsair CX650M Case: EVGA DG73

Monitor: LG 29WK500 Mouse: G.Skill MX780 Keyboard: G.Skill KM780 Cherry MX Red

 

OG Gaming Rig - Gone


 

CPU: Intel i5 4690k Cooler: Corsair H100i V2 Motherboard: MSI Z97i AC ITX

RAM: Crucial Ballistix 16GB DDR3 Storage: Kingston Fury 240GB GPU: Asus Strix GTX 970

PSU: Thermaltake TR2 Case: Phanteks Enthoo Evolv ITX

Monitor: Dell P2214H x2 Mouse: Logitech MX Master Keyboard: G.Skill KM780 Cherry MX Red

 

 


Well...

The review was shared by AMD on Facebook, which could say a lot, since the post was basically "hey, our CPUs are good for gaming!!1!1!"

Moreover, not to be disrespectful towards the reviewer, his results differ from the dozens of other reviews published over the years, where FX-83xx chips usually lose to i5s and even i3s (in gaming).

On a mote of dust, suspended in a sunbeam


Flashback to AMD's own in-house benchmark results right before the Fury X launched.   :rolleyes:


I know how they are made, but the point is that the cores are there. The controller is the slow part, since everything in each module is shared. But the cores are still there, so they are still true cores. It's truly 8 cores; they're just not used as effectively in a lot of tasks. If the cores are there, then they are true; if they were not there, they'd be false. That's my argument.

See:

Any article will do; the design philosophy is quite well documented. Either you call them 4 super-cores or 8 gimped cores.

I don't think he's saying they're false cores; he's just saying they're cut down a bit, if I'm understanding correctly. Combining his words and yours, they'd be "gimped true cores", wouldn't they?

 

Well...

The review was shared by AMD on Facebook, which could say a lot, since the post was basically "hey, our CPUs are good for gaming!!1!1!"

Moreover, not to be disrespectful towards the reviewer, his results differ from the dozens of other reviews published over the years, where FX-83xx chips usually lose to i5s and even i3s (in gaming).

 

The difference now is they're using DX12, which has changed a lot.



The difference now is they're using DX12, which has changed a lot.

No game currently supports DX12, since games need to be at least partially rewritten to do so.



No game currently supports DX12, since games need to be at least partially rewritten to do so.

Ah shit, I read the post wrong.

 

Yes and no. When I use them for 3D modelling, no, they perform like a full-powered 8-core, but the controller slows them down compared to the Intel option that costs $700+ more. Where the toll really takes effect is single-threaded performance. So they are and they aren't gimped. If you saw my Cinebench post, you'd see where they get gimped.   :D

Interesting...



Why are people saying the CPU is under less stress at 4K? Doesn't the CPU have to do the same work in a game no matter the resolution?

Longboarders/ skaters message me!


Because at 4K, VRAM and GPU power matter more than the CPU.

Yes, but then why are people saying "at 1080p it would be different"?
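The usual answer can be captured with a toy model (all numbers below are hypothetical, just to show the shape of the argument): per-frame CPU work is roughly resolution-independent, per-frame GPU work scales with pixel count, and the slower of the two sets the framerate.

```python
# Toy frame-time model: whichever of CPU and GPU takes longer per frame
# sets the framerate. CPU cost is fixed per frame; GPU cost scales with
# the pixel count relative to 1080p. All numbers are hypothetical.
def fps(cpu_ms: float, gpu_ms_1080p: float, pixel_ratio: float) -> float:
    frame_ms = max(cpu_ms, gpu_ms_1080p * pixel_ratio)
    return 1000.0 / frame_ms

for res, ratio in [("1080p", 1.0), ("4K", 4.0)]:
    fast_cpu = fps(cpu_ms=8.0, gpu_ms_1080p=10.0, pixel_ratio=ratio)
    slow_cpu = fps(cpu_ms=14.0, gpu_ms_1080p=10.0, pixel_ratio=ratio)
    print(f"{res}: fast CPU {fast_cpu:.0f} fps, slow CPU {slow_cpu:.0f} fps")
```

With these made-up costs, the two CPUs land at 100 vs ~71 fps at 1080p, but both sit at exactly 25 fps at 4K because the GPU term dominates. That's why a CPU gap that's invisible at 4K can reappear at 1080p.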



Actually, PCPer found that at least one of these games, GTA V, showed an improvement from Sandy Bridge to Skylake at 1440p (not quite 4K, but more demanding than 1080p). Considering the most dramatic results in this post were the GTA V benchmarks, I think PCPer's findings settle the "BUTT 4K IZ NAT CPU INTENZIV" argument. Yes, the majority of the work is on the GPU, but there is still a fair amount for the CPU to do in GTA V above 1080p. I can't speak for the other games, though, because I'm too lazy to look up more benchmarks. :)

 

http://www.pcper.com/reviews/Graphics-Cards/Skylake-vs-Sandy-Bridge-Discrete-GPU-Showdown/Grand-Theft-Auto-V

https://youtu.be/F2DTWZoO1Ck?t=30m45s

 

Again, not 4K, but 1440p is more GPU-intensive than 1080p and there was still an improvement, even though the gap between Skylake and Sandy Bridge is much smaller than the gap between the 5960X and the 8370.

 

 

"Not quite 4K"? It's significantly less, and much closer to 1080p than to UHD. And, as per usual, a bunch of presumptions.

Case: Corsair 4000D Airflow; Motherboard: MSI Z490 Gaming Edge; CPU: i7 10700K @ 5.1GHz; Cooler: Noctua NHD15S Chromax; RAM: Corsair LPX DDR4 32GB 3200MHz; Graphics Card: Asus RTX 3080 TUF; Power: EVGA SuperNova 750G2; Storage: 2 x Seagate Barracuda 1TB; Crucial M500 240GB & MX100 512GB; Keyboard: Logitech G710+; Mouse: Logitech G502; Headphones / Amp: HiFiMan Sundara Mayflower Objective 2; Monitor: Asus VG27AQ


As an FX-9590 user: anything AMD seems to struggle to hold a steady frame latency; I always see random spikes in games.


5960X lower min. framerate? heh, hehehe. 

Yeah no...

...And the winner of the most non-biased objective view on the topic award goes to: @Majestic  :P



...And the winner of the most non-biased objective view on the topic award goes to: @Majestic  :P

 

Sorry for not being more elaborate in my response, but I feel this type of disingenuous benchmarking doesn't deserve much of my time.

This is clearly pandering to an audience: fabricating results to get instant karma from the AMD users who fall into his trap.

 

There is just no way these results were gathered properly, or weren't doctored.

 

fx-8370-vs-5960x_gaming-gtav_gtx970-sli.

 

Take this result, for example. The AMD chip only gains 2 extra fps from overclocking to 4.6 GHz, yet the 5960X beats it by 5 fps, meaning the game is not totally GPU-bottlenecked; and yet it doesn't scale with overclocking.

 

And it's not that the 5960X maintains a more consistent fps on its way to a higher average; it supposedly has a lower minimum framerate. It doesn't add up at all, and just proves he either made the results up or botched the testing, willingly or unwillingly.


This just in: SLI 970s are the limiting factor in 4K gaming. Who knew!?

 

I mean, they cope admirably, but they are not going to give you 144 Hz even if DisplayPort could.

 

 

Because at 4K, VRAM and GPU power matter more than the CPU.

 
GPU power, definitely; the 970's VRAM is fine. Whether the slow VRAM partition or the 4GB total might become an issue if the GPU itself were more powerful, who can say? But the GPU itself is the limiting factor here.

Which makes sense: 970 SLI and a 980 Ti perform very similarly. Would they, if memory were the 970's problem?

Sorry for not being more elaborate in my response, but I feel this type of disingenuous benchmarking doesn't deserve much of my time.

This is clearly pandering to an audience: fabricating results to get instant karma from the AMD users who fall into his trap.

 

There is just no way these results were gathered properly, or weren't doctored.

You can pander to me anytime with your ever-so-eloquent sentence execution.



Sorry for not being more elaborate in my response, but I feel this type of disingenuous benchmarking doesn't deserve much of my time.

This is clearly pandering to an audience: fabricating results to get instant karma from the AMD users who fall into his trap.

 

There is just no way these results were gathered properly, or weren't doctored.

 

fx-8370-vs-5960x_gaming-gtav_gtx970-sli.

 

Take this result, for example. The AMD chip only gains 2 extra fps from overclocking to 4.6 GHz, yet the 5960X beats it by 5 fps, meaning the game is not totally GPU-bottlenecked; and yet it doesn't scale with overclocking.

 

And it's not that the 5960X maintains a more consistent fps on its way to a higher average; it supposedly has a lower minimum framerate. It doesn't add up at all, and just proves he either made the results up or botched the testing, willingly or unwillingly.

 

Clearly, you don't know the difference between "minimum framerate" and "frame time variance". Frame time variance is how much the time to render each frame varies from frame to frame; when it is high, the result is noticeable stutter.


Clearly, you don't know the difference between "minimum framerate" and "frame time variance". Frame time variance is how much the time to render each frame varies from frame to frame; when it is high, the result is noticeable stutter.

 

This is frames per second; it says so right in the top-right corner. It means the 0.1% low of the 5960X was 14-17 fps, while for the 8370 it was 32-35 fps.

It means that at both the 0.1% and the 1% level, the 5960X, according to this benchmark, did worse.

 

That means the numbers are very strange, and it doesn't account for the non-existent scaling with overclocking even though the CPU seems to matter in this benchmark.
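For anyone following along, the "0.1% / 1% low" figures being argued about are typically derived from a log of per-frame times. A minimal sketch, using hypothetical capture data:

```python
# How 1% / 0.1% "low" fps figures are typically computed from a
# per-frame time log (e.g. a FRAPS/PresentMon capture): average the
# slowest fraction of frames and convert back to fps.
def percentile_low_fps(frame_times_ms, fraction):
    slowest = sorted(frame_times_ms, reverse=True)  # worst frames first
    n = max(1, int(len(slowest) * fraction))
    avg_ms = sum(slowest[:n]) / n
    return 1000.0 / avg_ms

# Hypothetical capture: mostly smooth 16 ms frames plus a few stutters.
times = [16.0] * 997 + [40.0, 45.0, 50.0]
print(percentile_low_fps(times, 0.01))   # 1% low, ~40.5 fps
print(percentile_low_fps(times, 0.001))  # 0.1% low, 20 fps
```

A lower 0.1% low just means the worst handful of frames were worse; it's distinct from frame-to-frame variance, though a large gap between the average and the lows usually implies visible stutter.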

 

Scaling at GPU bottleneck

CPU_02.png

 

Scaling at CPU bottleneck

CPU_03.png

 

The graph from the benchmark in question shows both happening, which makes no sense.

