
AMD FX 8370 vs Intel i7 5960X [GTX 970 SLI 4K benchmarks]

I just remembered I saw similar benchmarks on TT (a reputable site, IMHO) between the 8350 and the 4770K.

http://www.tweaktown.com/tweakipedia/58/core-i7-4770k-vs-amd-fx-8350-with-gtx-980-vs-gtx-780-sli-at-4k/index.html

 

They show that, although the 8350 loses to the 4770K in pretty much everything, the gap is not that big, and for 4K, the money difference between the FX and the i7 would be like half a GPU... or the difference between high-end and absolute high-end.

 

I think that's what the writer of the article we are debating here wanted to emphasize, even if the actual numbers are way too off to be real.

 

Why do you keep fussing about this topic over and over again, though?

In 2015, AM3+ is a dead end for new buyers, so Intel is still the way to go... simple.
 
I am an FX-8350 user and I have no regrets about saying it.

MARS_PROJECT V2 --- RYZEN RIG

Spoiler

 CPU: R5 1600 @3.7GHz 1.27V | Cooler: Corsair H80i Stock Fans@900RPM | Motherboard: Gigabyte AB350 Gaming 3 | RAM: 8GB DDR4 2933MHz(Vengeance LPX) | GPU: MSI Radeon R9 380 Gaming 4G | Sound Card: Creative SB Z | HDD: 500GB WD Green + 1TB WD Blue | SSD: Samsung 860EVO 250GB  + AMD R3 120GB | PSU: Super Flower Leadex Gold 750W 80+Gold(fully modular) | Case: NZXT  H440 2015   | Display: Dell P2314H | Keyboard: Redragon Yama | Mouse: Logitech G Pro | Headphones: Sennheiser HD-569

 


4K GPU limitation, really hard to understand, right?

 

Doesn't explain the min. framerates being so low, or the things Patrick brought up. No one is expecting the 8350 not to get 40-60 fps in modern, non-CPU-intensive games. But the fact that it beats the 5960X on several occasions, and scores significantly better in min. framerates when that is normally Intel's strong suit, makes it suspicious.

 

And to be frank, the choice of games isn't very inspiring either: some non-intensive games in single player (except for GTA 5, which is also a weird result). I bet if you'd played some multiplayer games the graphs would have been severely different. And be honest, most people play multiplayer games the majority of the time.


Unless we are looking at different graphs, that's not minimum framerate. That is the framerate equivalent of the worst tenth of a percent of frame times. Say a million frames are rendered: 990,000 of them take 10ms (equivalent to 100 FPS), 9,000 take 20ms (equivalent to 50 FPS), but 1,000 take a whopping 50ms (equivalent to 20 FPS). You may never have seen anything lower than 60 FPS in game; individual frames crapped out in render and caused stutter, but the average FPS never dropped that low. There could be a million and one things causing such outliers. Heck, in some poorly coded games, even NETWORK issues can affect frame times; there are egregiously coded games out there where a frame is not allowed to render until specific information is received from the server. Probably not the case for these games, but frame time variance, while an issue for enjoyment, is not yet a smoking gun until we know why those frames happened and what caused them.
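To make that concrete, here is a minimal sketch (my own illustration, not anything from the review) of how a "0.1% low" figure falls out of the frame times in that example:

```python
# Sketch: deriving average FPS and the "0.1% low" from raw frame times (ms).
# The frame counts below are the hypothetical example from the post above.
frame_times_ms = [10.0] * 990_000 + [20.0] * 9_000 + [50.0] * 1_000

frame_times_ms.sort(reverse=True)                        # slowest frames first
worst = frame_times_ms[:len(frame_times_ms) // 1000]     # worst 0.1% of all frames

avg_fps = 1000.0 * len(frame_times_ms) / sum(frame_times_ms)
low_01_fps = 1000.0 * len(worst) / sum(worst)

print(f"average: {avg_fps:.1f} FPS, 0.1% low: {low_01_fps:.1f} FPS")
# -> average: 98.7 FPS, 0.1% low: 20.0 FPS
# The chart can report "20 FPS" even though a per-second FPS counter
# would likely never have dipped below 60.
```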


Doesn't explain the min. framerates being so low, or the things Patrick brought up. No one is expecting the 8350 not to get 40-60 fps in modern, non-CPU-intensive games. But the fact that it beats the 5960X on several occasions, and scores significantly better in min. framerates when that is normally Intel's strong suit, makes it suspicious.

 

 

 

Well, there could be a lot of other reasons for that, basically.

If we talk about the overclocked 5960X numbers, 4.4GHz is really a big overclock for a 5960X with all cores + threads enabled.

Of course some chips will reach that easily, but keep in mind that we are already talking about a 1.4GHz overclock from the 3.0GHz stock.

It could of course be that the 5960X wasn't running fully stable at that overclock.

Or maybe there was throttling somewhere; that could be, of course.

Those low minimums on the 5960X are a bit weird, I have to agree.

But CPU throttling could explain the low minimums.

 

When both systems run stable, there should barely be any difference, really,

because of the GPU limitation.

Of course there will be some exceptions to the rule in particular games and gaming scenarios,

like very CPU-intensive multiplayer MMOs, for example.


Well, there could be a lot of other reasons for that, basically.

If we talk about the overclocked 5960X numbers, 4.4GHz is really a big overclock for a 5960X with all cores + threads enabled.

Of course some chips will reach that easily, but keep in mind that we are already talking about a 1.4GHz overclock from the 3.0GHz stock.

It could of course be that the 5960X wasn't running fully stable at that overclock.

Or maybe there was throttling somewhere; that could be, of course.

 

Questions he should've asked before presenting them to the audience and drawing false conclusions from those odd results. No one interested in presenting an impartial, unbiased report would leave such obvious discrepancies uncontested, uninvestigated, and without mention or explanation. But he already showed us that he isn't impartial; he tried to swing a narrative. So the poor results for the 5960X were merely convenient for him, and didn't need investigating.

 

I'm fully aware that 970 SLI, running games at 4K high settings, will start to chug way before the CPU will (in modern AAA games, mostly single player), at least to any significant degree (>5%). But then the results should be mostly the same, not have huge differences between them. If there are significant differences, logic would dictate that in those scenarios the 5960X should be superior, because it IS superior to the 8370 in every conceivable way. When it falls behind the 8370, he should've retraced his steps and figured out what went wrong.

 

He didn't, so I'm calling him out on it. I see no reason why anyone, whichever side of the argument you're on (AMD or Intel pundit, or impartial), wouldn't want accurate results.


How about we all calm down and ask Linus, Logan, Paul, Jay, etc. to benchmark the AMD CPUs at 4K against the 5960X, but also against the recent 6000-series CPUs from Intel, to see if their testing creates similar results?

 

I know they have busy work schedules and what have you, but this seems to have worked up a fair few people on these forums (who knows about others at this stage), so it seems it is a controversial and interesting topic.

 

Anyway, that's just my two cents.

 

Keep going guys it's interesting to read all these posts. :-P

I have no idea what I am doing


Let me guess: topics that hinge on social constructs. Debates between what is the law and what shouldn't be the law, topics that centre around personal rights versus personal freedoms, and of course the good old religious/political discussions? And occasionally brand loyalty on products that are crucial, like radios, EPIRBs and flares?

The last two examples you listed are not far off. But if you are retired and like working on boats, some of the stuff on there is rather useful. A lot of knowledge in one place.
Spoiler

Corsair 400C- Intel i7 6700- Gigabyte Gaming 6- GTX 1080 Founders Ed. - Intel 530 120GB + 2xWD 1TB + Adata 610 256GB- 16GB 2400MHz G.Skill- Evga G2 650 PSU- Corsair H110- ASUS PB278Q- Dell u2412m- Logitech G710+ - Logitech g700 - Sennheiser PC350 SE/598se


Is it just me or is Grammar slowly becoming extinct on LTT? 

 


So you are saying fanboyism isn't real because a weaker product cannot compete with a stronger one when there is something else holding them both back?

 

It's like trying to fill a 1000L tank through an inlet that only allows a 5L/minute flow rate... It won't matter how big the pump is; you'd still be limited by that inlet.

It's a Haswell 8-core vs. a Vishera 8-core. It's 32 ALUs vs 16. HELL NO!

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


I guess I'll just leave this here and show people some true results of FX vs Intel at 1080P

 

 

All your post proves is the OP's point...

 

Look at the i5 4690K vs. the 4790K vs. the 5960X... the FPS difference, stock for stock, is absolutely minimal, despite one of them (the 5960X) having more real Haswell cores than both the 4690K and the 4790K.

 

When the CPU is not the limit, it really boils down to the limitations of DX11.

 

It would be interesting if we had paired 2x R9 390 with Mantle to see how this would affect the two CPUs when both are given more headroom.


It's a Haswell 8-core vs. a Vishera 8-core. It's 32 ALUs vs 16. HELL NO!

Doesn't matter. The CPU cannot go any faster because the GPU(s) are holding it back. It cannot use its full potential due to DX11 and the massive load on the GPUs.

To clarify:

IF DX11 weren't the horseshit it is, the 5960X would have won hands down. But DX11 IS horseshit. It DOES bottleneck the CPU, and so do the two 970s; they are holding the 5960X back, more so than the FX 8370.


It's a Haswell 8-core vs. a Vishera 8-core. It's 32 ALUs vs 16. HELL NO!

Oh ffs, I've been over this multiple times and you're obviously falling for AMD's false advertising. The Vishera 4-module CPUs have 8 ALUs, with 4 of everything else. Compare the die shot of an FX 8350 to that of a Phenom II X4, and then compare it to a die shot of the i7 5960X.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


But what if they have that budget and instead buy a K6? You gonna get jolly mad at them? What if they just bought that 4K TV or monitor to play some DOS games?

I'd just tell them politely that they have made a bad choice, but it doesn't bother me...it's their money, even though the words passing through my mind are: "You're fuc*ing stupid..."

 

I've been in these situations, recommending a good, balanced PC config... and then the person I recommended the config to came back and said, "I liked your idea, but I got X because it said gaming on it..." ...like gaming PSUs... ffs.



Doesn't matter. The CPU cannot go any faster because the GPU(s) are holding it back. It cannot use its full potential due to DX11 and the massive load on the GPUs.

To clarify:

IF DX11 weren't the horseshit it is, the 5960X would have won hands down. But DX11 IS horseshit. It DOES bottleneck the CPU, and so do the two 970s; they are holding the 5960X back, more so than the FX 8370.

 

DX11 has a lot of stuff happening on the main thread (core 0) that prevents multicore CPUs from properly scaling (the other cores are mostly idle). The FX-8370 has much worse single-core performance and should therefore scale worse and have a less consistent framerate, as its performance is solely determined by the main-thread performance.

 

You'd think the 5960X would still win, albeit only in min. fps and slightly in average fps, due to the way draw calls/saturation on GPUs work. And yet, in these results, it's worse. Maybe the tester did something wrong, maybe it was Hyper-Threading... who knows. He never explained.

 

And I'm not convinced a lower-overhead API would've given the 5960X significantly more fps in this scenario. Maybe with a couple of graphics cards more suited to 4K, like Titan X SLI.

 

EDIT:

[frame time distribution graph from gamegpu.ru]

 

So I pulled this graph from gamegpu.ru. It looks like GTA 5 has a bit of a weird distribution on the 5960X. That would explain some of the 0.1% and 1% results there.


DX11 has a lot of stuff happening on the main thread (core 0) that prevents multicore CPUs from properly scaling (the other cores are mostly idle). The FX-8370 has much worse single-core performance and should therefore scale worse and have a less consistent framerate, as its performance is solely determined by the main-thread performance.

You'd think the 5960X would still win, albeit only in min. fps and slightly in average fps, due to the way draw calls/saturation on GPUs work. And yet, in these results, it's worse. Maybe the tester did something wrong, maybe it was Hyper-Threading... who knows. He never explained.

And I'm not convinced a lower-overhead API would've given the 5960X significantly more fps in this scenario. Maybe with a couple of graphics cards more suited to 4K, like Titan X SLI.

EDIT:

[frame time distribution graph from gamegpu.ru]

So I pulled this graph from gamegpu.ru. It looks like GTA 5 has a bit of a weird distribution on the 5960X. That would explain some of the 0.1% and 1% results there.

Yeah, also, FX cores are a bit weird.

If you force which core does what, you see much higher single-thread performance in Cinebench by forcing it to use core "1" rather than core "0"... It can be as much as 9-12 points higher, consistently... I am not sure why that matters for the FX, as the resources are split pretty evenly, but certain "cores" perform much better than other ones for no real reason. I read this on reddit (I think it was reddit, at least), then went and tested it myself with my own FX 8320, and it checks out. In my case cores "1, 2, 4 and 6" performed better than the other ones... god knows why.

So if the core distribution of a game leans more towards the "strong" cores, then that too could skew the results.
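If anyone wants to reproduce that, here's a rough sketch of pinning a benchmark to those cores. It assumes the third-party psutil package, the Cinebench path is just a placeholder, and OS core numbering may not match my "core 1, 2, 4, 6" labels exactly:

```python
# Sketch: launch a benchmark and pin it to cores 1, 2, 4 and 6,
# mirroring the affinity experiment described above.
import subprocess

import psutil  # third-party: pip install psutil

# Hypothetical install path -- point this at your own Cinebench executable.
proc = subprocess.Popen([r"C:\CB15\CINEBENCH Windows 64 Bit.exe"])
psutil.Process(proc.pid).cpu_affinity([1, 2, 4, 6])  # restrict to those cores
```

The same thing can be done without Python via Task Manager's "Set affinity", or `start /affinity 56 <exe>` in cmd (0x56 is the hex bitmask for cores 1, 2, 4 and 6).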


Yeah, also, FX cores are a bit weird.

If you force which core does what, you see much higher single-thread performance in Cinebench by forcing it to use core "1" rather than core "0"... It can be as much as 9-12 points higher, consistently... I am not sure why that matters for the FX, as the resources are split pretty evenly, but certain "cores" perform much better than other ones for no real reason. I read this on reddit (I think it was reddit, at least), then went and tested it myself with my own FX 8320, and it checks out. In my case cores "1, 2, 4 and 6" performed better than the other ones... god knows why.

So if the core distribution of a game leans more towards the "strong" cores, then that too could skew the results.

http://techreport.com/review/23750/amd-fx-8350-processor-reviewed

Look at the die shot.

 

http://www.bit-tech.net/hardware/2012/11/06/amd-fx-8350-review/1

Replace the false term "core" with ALU.

 

http://www.pcper.com/reviews/Processors/Haswell-E-Intel-Core-i7-5960X-8-core-Processor-Review

Now this is a real 8-core.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


I know how the CMT-based Piledriver chips work. But still, when you take a core and split it down the middle, shouldn't both halves do pretty much the same work? That is my point.


I know how the CMT-based Piledriver chips work. But still, when you take a core and split it down the middle, shouldn't both halves do pretty much the same work? That is my point.

Nope; since the ALUs share resources, occasionally one (in each module) needs to wait for the other to finish its task.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


But certain "cores" perform much better than other ones for no real reason.

 

I believe you may be interpreting the graph incorrectly when you say "certain cores perform better". Those threads being at 50-70% whilst the rest aren't doing anything means those threads are doing poorly: the entire simulation gets caught up on those particular threads of the game, causing most of the 5960X's resources to do practically nothing, since they're "waiting".

 

It's especially frustrating to see that GTA 5 seems to assume that a CPU with 16 threads has 16 equally powerful cores, so actually disabling HT might prove beneficial for GTA 5. A bit like the way most games handle the GTX 970: it has 4GB, but it doesn't have all 4GB at the same time.

 

Bit of a fuckup in the heuristics, if you ask me. I'd like to see if I can find some proof of this, or see if someone with a 5960X is willing to investigate.


That screams bullshit to me.

 

Can't.

 

Not surprising given the GPU-dominant nature of 4K. The i7 will wipe the floor with the FX in anything else.

 

This is totally no BS.

At 4K you are simply GPU limited.

The CPU doesn't make any significant difference in 4K gaming anymore,

with most GPUs we have right now.

Fanboys are just not able to accept the facts.

Hello This is my "signature". DO YOU LIKE BORIS????? http://strawpoll.me/4669614


Fanboys are just not able to accept the facts.

Lol, and you know what's funny? Those who throw the word fanboy around can be described by that very word.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


Doesn't matter. The CPU cannot go any faster because the GPU(s) are holding it back. It cannot use its full potential due to DX11 and the massive load on the GPUs.

To clarify:

IF DX11 weren't the horseshit it is, the 5960X would have won hands down. But DX11 IS horseshit. It DOES bottleneck the CPU, and so do the two 970s; they are holding the 5960X back, more so than the FX 8370.

That doesn't remotely mean the 8320/50/70 will win out, nor is there any logic to support it. And DX11 is not a shit API; it mainly comes down to bad developers. I've had this discussion enough times, and I have a blog on here with template code for building a multithreaded game that scales to core count. You don't need more than one CPU core talking to the GPU. In fact, doing so limits the AI you can build, the number of players you can host on one map in a multiplayer game, and the complexity of your physics engine. DX12 providing the ability for multiple CPU cores to make draw calls is only going to provide more choices for compromises.

Please come back when you don't rely on faulty data, bias, and logical fallacies.
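For illustration only (this is not the blog's actual template, just a bare-bones sketch of the pattern described): worker threads scale the simulation across cores while a single thread owns all draw-call submission.

```python
# Sketch: N workers do simulation (AI/physics), but only ONE thread ever
# "talks to the GPU" -- here a print() stands in for D3D11 draw calls.
import queue
import threading

N_WORKERS = 4
draw_queue: queue.Queue = queue.Queue()

def worker(wid: int) -> None:
    # Simulation work runs here, scaled out across core count...
    draw_queue.put(f"draw list from worker {wid}")  # ...then hands results off.

def render_thread() -> None:
    for _ in range(N_WORKERS):
        cmd = draw_queue.get()  # sole consumer: the only "GPU-facing" thread
        print("submit:", cmd)

workers = [threading.Thread(target=worker, args=(i,)) for i in range(N_WORKERS)]
renderer = threading.Thread(target=render_thread)
for t in workers:
    t.start()
renderer.start()
for t in workers:
    t.join()
renderer.join()
```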


I believe you may be interpreting the graph incorrectly when you say "certain cores perform better". Those threads being at 50-70% whilst the rest aren't doing anything means those threads are doing poorly: the entire simulation gets caught up on those particular threads of the game, causing most of the 5960X's resources to do practically nothing, since they're "waiting".

It's especially frustrating to see that GTA 5 seems to assume that a CPU with 16 threads has 16 equally powerful cores, so actually disabling HT might prove beneficial for GTA 5. A bit like the way most games handle the GTX 970: it has 4GB, but it doesn't have all 4GB at the same time.

Bit of a fuckup in the heuristics, if you ask me. I'd like to see if I can find some proof of this, or see if someone with a 5960X is willing to investigate.

No, I mean on the FX, not the Core i7.

If you start up Cinebench R15 and force CB to use certain cores (in my own FX 8320's case it was cores 1, 2, 4 and 6), I scored consistently higher in single-thread for no obvious reason.

 

And yes, disabling HT may help allocate the real cores rather than a mixed bag of fake ones + real ones.


Oh ffs, I've been over this multiple times and you're obviously falling for AMD's false advertising. The Vishera 4-module CPUs have 8 ALUs, with 4 of everything else. Compare the die shot of an FX 8350 to that of a Phenom II X4, and then compare it to a die shot of the i7 5960X.

No. Each Vishera core has 2 integer ALUs and 1 half of a shared FPU. Each module has 4 integer ALUs and 1 FPU.


Lol, and you know what's funny? Those who throw the word fanboy around can be described by that very word.

Yeah, but that's not the point. This shows that the FX series is not the bottleneck people make it out to be.


That doesn't remotely mean the 8370 will win out, nor is there any logic to support it. And DX11 is not a shit API; it mainly comes down to bad developers. I've had this discussion enough times, and I have a blog on here with template code for building a multithreaded game that scales to core count. You don't need more than one CPU core talking to the GPU. In fact, doing so limits the AI you can build, the number of players you can host on one map in a multiplayer game, and the complexity of your physics engine. DX12 providing the ability for multiple CPU cores to make draw calls is only going to provide more choices for compromises.

Please come back when you don't rely on faulty data, bias, and logical fallacies.

Do you know how to undo the blatant bias toward core 1 and core 2 in DX11? Because of the extreme load put on those two compared to the other cores, no matter what game it is. It always emphasizes core 1 and core 2, with a lower emphasis on cores 3, 4, 5, 6, 7, 8, 9, 10, etc...

The way the API is written does not give a uniform load on all cores; thus some cores are just partially active (read: light load) or idling.

