FX 8350 better with SLI @4k than 4930k.

So if we are seeing a GPU-bound benchmark and it's not the CPUs... are we saying the AMD mobo is better?

So if we are seeing a GPU-bound benchmark and it's not the CPUs... are we saying the AMD mobo is better?

That's the only value the article has, yes.

 

A 990FX chipset has better scaling in SLI than X79. Fantastic, bravo.

That's the only value the article has, yes.

 

A 990FX chipset has better scaling in SLI than X79. Fantastic, bravo.

Both platforms scale equally. By default the Nvidia drivers set the PCI Express config to 2.0 on SB-E CPUs; if you run a small patch, it will set it to 3.0 on SB-E CPUs, since Intel didn't certify PCI Express 3.0 on them. If you do see a difference (excluding GPU-bound scenarios with the 3.0 boost), it's going to be AMD behind Intel because of lower CPU performance, not because of platform scaling.

That's the only value the article has, yes.

A 990FX chipset has better scaling in SLI than X79. Fantastic, bravo.

Really? Didn't expect that. I can't see why people compare CPUs anyway; Intel and AMD are in different boats sailing opposite ways now.

I just really enjoy how positive AMD news must immediately be explained away with caveats about "GPU bound programs" and how "the tests were skewed because of X".

Would you all agree that this is going to become the norm in PC gaming? Where a $120 chip is all you need for UHD gaming, paired with big, powerful GPUs that can tackle 4K resolution at 60 FPS?

I just really enjoy how positive AMD news must immediately be explained away with caveats about "GPU bound programs" and how "the tests were skewed because of X".

Would you all agree that this is going to become the norm in PC gaming? Where a $120 chip is all you need for UHD gaming, paired with big, powerful GPUs that can tackle 4K resolution at 60 FPS?

Explain to me why 3- and 4-way SLI still scales badly at 4K or doesn't offer any performance gain. So think again if you believe you're always guaranteed to be GPU limited. You can't use GPU performance numbers as proof that CPU X is faster than CPU Y; you might as well do this with SSDs and claim afterwards that AMD CPUs are faster because the highest number you could pull off was with an AMD CPU.

Let him turn the resolution all the way down to 1080p/720p; then I'd like to see AMD getting close. They'll be up to 100% slower.

Would you all agree that this is going to become the norm in PC gaming? Where a $120 chip is all you need for UHD gaming, paired with big, powerful GPUs that can tackle 4K resolution at 60 FPS?

This is basically the same as those saying iGPs will take over the whole dGPU market for gaming within this decade.

It would be possible if our standards did not change. However, more will be added, and once again we will experience the same thing we did at 1080p.

Explain to me why 3- and 4-way SLI still scales badly at 4K or doesn't offer any performance gain. So think again if you believe you're always guaranteed to be GPU limited. You can't use GPU performance numbers as proof that CPU X is faster than CPU Y; you might as well do this with SSDs and claim afterwards that AMD CPUs are faster because the highest number you could pull off was with an AMD CPU.

Let him turn the resolution all the way down to 1080p/720p; then I'd like to see AMD getting close. They'll be up to 100% slower.

So more caveats on top of more caveats. Sprinkled with non sequiturs. Mmm tasty.

Why are we going backwards in resolution? Are we worried about backwards compatibility or something?

It's not stating that "x CPU is faster than y CPU". It's stating that in UHD gaming your CPU matters LESS than your GPU. And that in modern games, you will benefit MORE from a multithreaded/multicore CPU.

And DirectX 12, OpenGL Next, and Mantle are already making that case today.

This is basically the same as those saying iGPs will take over the whole dGPU market for gaming within this decade.

It would be possible if our standards did not change. However, more will be added, and once again we will experience the same thing we did at 1080p.

You'll have to explain how the two correlate.

Why are we going backwards in resolution? Are we worried about backwards compatibility or something?

 

To have a real CPU comparison. AMD fanboys drop the ball as soon as we suggest simulating CPU-bound scenarios.

 

It's not stating that "x CPU is faster than y CPU". It's stating that in UHD gaming your CPU matters LESS than your GPU. And that in modern games, you will benefit MORE from a multithreaded/multicore CPU.

Lol? No, you will benefit more from single-core performance than from more cores. At the moment only Metro 2033 and Crysis 3 take proper advantage of 8 threads; an 8350 performs equal to a 6300 in every other game. And you're comparing a low-end CPU with a high-end CPU (which is actually worse than a mid-range CPU).

There's a reason why 3- and 4-way SLI adds almost no performance gain except in a few games: a CPU limitation.

 

And DirectX 12, OpenGL Next, and Mantle are already making that case today.

 

Which will benefit every other CPU too, so the gap between AMD and Intel is still up to 100% as long as the GPU isn't hitting its limit.

Compare an 8350 and a 3930K: double the frames.

*snip*

Ok, I can see you're not looking at the big picture here, or listening to what I'm saying.

You'll have to explain how the two correlate.

It means new things will be implemented that will require more CPU resources.

The same will happen with GPUs (a great example would be ray tracing).

Some people's predictions are based on increasing only one factor, when in reality you will need to increase many more.

Ok, I can see you're not looking at the big picture here, or listening to what I'm saying.

Your picture was:

 

 

Where a $120 chip is all you need for UHD gaming, paired with big, powerful GPUs that can tackle 4K resolution at 60 FPS?

Which is wrong. Even at 4K you can be CPU limited. Those games were GPU bound even at 1080p >.> So if an 8320 is all you need, then a 4300 is all you need, and every 8320 owner wasted money (which they did).

Which is wrong. Even at 4K you can be CPU limited. Those games were GPU bound even at 1080p >.> So if an 8320 is all you need, then a 4300 is all you need, and every 8320 owner wasted money (which they did).

Considering how Windows handles CMT (it basically uses cores 0, 2, 4 and 6 first, and only then loads the second 'core' in each module), an FX 6300 or FX 8320 is a far better product than an FX 4300 (which will be bottlenecked much sooner due to its front end).
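
To make the module-first scheduling concrete, here is a rough sketch (my own illustration, not from this thread or any review) of forcing a process onto one core per module on an FX-8350 with the Win32 affinity API. It assumes logical cores 0, 2, 4 and 6 map to the first core of each Bulldozer module, which is the order the patched Windows scheduler prefers anyway.

/* Sketch: restrict the current process to one core per module (cores 0, 2, 4, 6). */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    DWORD_PTR mask = 0x55; /* 0b01010101 -> logical cores 0, 2, 4, 6 */

    if (!SetProcessAffinityMask(GetCurrentProcess(), mask)) {
        fprintf(stderr, "SetProcessAffinityMask failed: %lu\n", GetLastError());
        return 1;
    }
    printf("Pinned to one core per module.\n");
    /* ...start the game or workload from here... */
    return 0;
}

Normally you would just let the post-hotfix scheduler do this for you; the mask is only here to show what "fill the first core of each module first" means in practice.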

It means new things will be implemented that will require more CPU resources.

The same will happen with GPUs (a great example would be ray tracing).

Some people's predictions are based on increasing only one factor, when in reality you will need to increase many more.

Have you seen how much more CPU headroom there is when using Mantle/DirectX 12?

It's not like they're going to reach CPU capacity with more physics/AI/Sound processing any time soon.

And your comparison wasn't very valid. For grandma and grandpa, all they will ever need is an APU, and for the sister in high school or college, will she need anything better than a MacBook Air?

A discrete GPU is already being replaced in those markets.

And Real Time Ray Tracing will allow me to die happy as a gamer.

Your picture was:

Which is wrong. Even at 4K you can be CPU limited. Those games were GPU bound even at 1080p >.> So if an 8320 is all you need, then a 4300 is all you need, and every 8320 owner wasted money (which they did).

Your hyperbole is bordering on ridiculous.

Have you seen how much more CPU headroom there is when using Mantle/DirectX 12?

It's not like they're going to reach CPU capacity with more physics/AI/Sound processing any time soon.

And your comparison wasn't very valid. For grandma and grandpa, all they will ever need is an APU, and for the sister in high school or college, will she need anything better than a MacBook Air?

A discrete GPU is already being replaced in those markets.

And Real Time Ray Tracing will allow me to die happy as a gamer.

Oh a straw man argument.

This is basically the same as those saying iGPs will take over the whole dGPU market for gaming within this decade.

People said the same thing in the past.

"Software will require no more CPU resources than it currently does!" - and each and every single time, new things got implemented.

It is not even about having more physics/AI; it could be about having more advanced physics/AI.

Considering how Windows handles CMT (it basically uses cores 0, 2, 4 and 6 first, and only then loads the second 'core' in each module), an FX 6300 or FX 8320 is a far better product than an FX 4300 (which will be bottlenecked much sooner due to its front end).

Far better? That's exaggerated. It's ~10% on average in benchmarks, which is negligible, pretty much making the 8350 a waste of money.

Far better? That's exaggerated. It's ~10% on average in benchmarks, which is negligible, pretty much making the 8350 a waste of money.

If you read my statement, and you understand the performance penalty the Bulldozer architecture takes under some workloads, the FX 6300 and FX 8320 are far more solid products.

Most benchmarks only present the FPS count, which should not be the only measurement.

The FX 4300 often gets horrible latency.

Same reason why an i3 can get the same FPS as an i5, but you will still be able to tell the difference because of the latency.

Also you are only taking gaming into consideration.

Also you are only taking gaming into consideration.

What's this thread all about then? Rendering performance?

 

 

Same reason why an i3 can get the same FPS as an i5, but you will still be able to tell the difference because of the latency.

That's measured from the GPU, so it's never going to be fully accurate, and latency tests have mostly been useful for finding irregular delays, aka microstutters; the difference between 10 ms and 40 ms isn't noticeable. In some games I have 10 ms and in a different game I have 40-50 ms, and the difference isn't noticeable.

 

 

Most benchmarks only present the FPS count, which should not be the only measurement.

The FX 4300 often gets horrible latency.

Same goes for the 8350 then.

 

That's measured from the GPU, so it's never going to be fully accurate, and latency tests have mostly been useful for finding irregular delays, aka microstutters; the difference between 10 ms and 40 ms isn't noticeable. In some games I have 10 ms and in a different game I have 40-50 ms, and the difference isn't noticeable.

Microstutter occurs far more often on a low-end processor.

It doesn't have to be microstutter. The eye will notice a constantly changing latency, which is the issue.

Same goes for the 8350 then.

No, because the FX 8350 has 2 extra cores to fill up before loading the second core in each module.

Oh a straw man argument.

Why do you use them if you don't like them?

No, because the FX 8350 has 2 extra cores to fill up before loading the second core in each module.

[Image: Skyrim 99th-percentile frame time chart]

http://techreport.com/review/23246/inside-the-second-gaming-performance-with-today-cpus/6

Looking at their results, the difference in latency only shows up where AMD is also being outperformed in the FPS graphs. Explain to us why a 4170 has several times better latency in Skyrim/Batman: AC. Feel free to explain why an i5/i7 has lower latency than a 3960X.

You get zero advantage from the extra cores an 8350 provides in games that mainly use 1-4 threads.
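
For anyone unfamiliar with TechReport's "inside the second" graphs: the 99th-percentile frame time is simply the frame time that 99% of frames come in at or under, so a handful of long frames drags it up even when the average FPS looks fine. Here is a minimal sketch of computing it from per-frame times (my own example with made-up numbers, not TechReport's actual tooling).

/* Sketch: 99th-percentile frame time from per-frame render times in ms. */
#include <stdio.h>
#include <stdlib.h>

static int cmp_double(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

int main(void)
{
    double frame_ms[] = { 16.4, 17.1, 15.9, 33.2, 16.8, 16.5, 41.0, 16.2, 16.6, 17.0 };
    size_t n = sizeof frame_ms / sizeof frame_ms[0];

    qsort(frame_ms, n, sizeof frame_ms[0], cmp_double);

    size_t idx = (size_t)(0.99 * (n - 1)); /* value 99% of frames are at or below */
    printf("99th-percentile frame time: %.1f ms\n", frame_ms[idx]);
    return 0;
}

With the sample numbers above this prints 33.2 ms even though the average is around 21 ms, which is exactly the kind of gap those latency graphs are meant to expose.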

Why do you use them if you don't like them?

Where did I use a straw man argument?

[Image: Skyrim 99th-percentile frame time chart]

http://techreport.com/review/23246/inside-the-second-gaming-performance-with-today-cpus/6

Looking at their results, the difference in latency only shows up where AMD is also being outperformed in the FPS graphs. Explain to us why a 4170 has several times better latency in Skyrim/Batman: AC. Feel free to explain why an i5/i7 has lower latency than a 3960X.

You get zero advantage from the extra cores an 8350 provides in games that mainly use 1-4 threads.

I suspect they did not have the CMT scheduling fix Windows released earlier that year.

A Core i5/i7 will typically have a higher clock speed than a Core i7 3960X (depends on which one, of course).

I used to have 660 SLI with my 8350 at stock. Very bad experience: stutters all over the place in most games. Yeah, I know it could have been the games, but I sold the 660s to a friend of mine with a 1st-gen i7 and he doesn't get any stutters or glitches in the same games, so it was my CPU. I am now running a GTX 970 at 1500 MHz on the core and it's way better.

My Pc Specs

CPU : AMD FX-8350 @stock w/Noctua NH D-14   Mobo: Asus M5A99FX    Ram : 16gb Corsair XMS3 @1600mhz    GPU : Gigabyte GTX970 Windforce OC 4GB @1429mhz   SSD: Sandisk X110 256gb    Case: Be Quiet Silent Base 800 Windowed  PSU: EVGA 850w g2  Peripherals : Corsair K70 w/Red Switches , Logitech G502 , Samsung SyncMaster S22B300 (1920x1080) , Ttesports Shock one headset , Phone : HTC one A9
