
Prove AMD's Superiority To Me

Suika

 

What kind of multitasking?  The vast majority of programs on your computer are single threaded, and run much snappier on an i5 because of its faster cores.

Only heavily multithreaded tasks will do better on the FX.  Even then, the gaming benefits far outweigh the content creation benefits unless your sole purpose is content creation.

 

Pick the right tool for the job, and make sure the programs you use actually benefit from all dem corez!

 

Gaming performance aside, the vast majority of daily tasks are single threaded.  Everything you do on your desktop, from booting up your computer to loading a simple program such as iTunes, is going to be faster on Intel, because these are single-threaded tasks and Intel's per-core performance is so much higher, which results in a snappier overall experience.  Very few tasks benefit from 8 cores.  A program that really benefits from all the cores you throw at it is a real niche area, often reserved for content creation and calculations, not games.

 

This is PCMark 7, a Futuremark benchmark that "is a complete PC benchmark that measures overall system performance during typical desktop usage across a range of activities such as handling images and video, web browsing and gaming. This is the most important test since it returns the official PCMark score for the system."

PCMark7.png

This shows that while performance in daily workloads is similar, Intel is still ahead.  Also consider that these are older-generation Intel processors that have since been improved upon, which only widens the gap in Intel's favor for daily tasks.  Think multitasking is better on the FX8 because of all those cores?  Nope.

multi-fps.gif

 

Some more productivity benchmarks for your enjoyment:

[Images: photoshop.png, premiere.png, aftereffects.png, lightroom.png, winrar.png, x264.png, photo_cs6_op.png, blender.png, 3dmax.png, autocad.png]

The FX processors do have some strengths; just make sure that you are using a program that maximizes those strengths.  Also, in my opinion the gaming benefits of a locked i5 far outweigh the productivity benefits (in certain programs) of the FX8.

 

Sources:

http://www.xbitlabs....30_6.html#sect0

http://pclab.pl/art57691-12.html

 

 

At least from some of the benchmarks I've seen the performance hit while streaming or recording with the FX was lower than with the i5. As you said though, right tool for the job.

R7 2700x | ASUS Strix X470-f | 4x8 Corsair Vengeance RGB Pro | Zotac RTX 2080ti | Lian Li PC-011 Dynamic
2x Xeon X5650 | Supermicro X8DT3-LN4F | 6x4 Micron DDR3 ECC | 4x4TB HDD

That's not how it works! 

If my minimums are 30 with an 8320 and R9 290, and 55 with an i5-4460 and R9 290, that's not the GPU's fault! 

You just don't get it do you?

A better GPU will still improve min framerate.  Overclocking benchmarks easily show this.

 

There are some games that run 15-20 FPS better on Intel at the min/max, but they're few and far between; most games show a ~5 FPS difference at most.  Most of the time, these cards are already running in completely playable framerate ranges.  And if you have a lower min framerate and a lower max framerate, the delta between the two (arguably the thing that matters most) is pretty much the same.  The smoothness of the experience isn't affected all that much; it's actually worse to go from 120 fps to 60 than it is to go from 70 to 40.  This is why framerate caps are generally a good idea for a smooth experience.
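Whether a drop feels worse depends on whether you reason in FPS ratios or in frame time (milliseconds per frame), which is the linear measure of smoothness. A quick sketch comparing the two drops mentioned above (illustrative numbers only):

```python
def frame_time_ms(fps):
    """Milliseconds spent on each frame at a given framerate."""
    return 1000.0 / fps

# Compare the two drops in frame-time terms:
for high, low in [(120, 60), (70, 40)]:
    jump = frame_time_ms(low) - frame_time_ms(high)
    print(f"{high} -> {low} fps: each frame takes {jump:.1f} ms longer")
```

In ratio terms 120-to-60 is the bigger (2x) drop, while in absolute frame-time terms 70-to-40 adds slightly more per frame; this is part of why frame-time graphs are now preferred over average FPS for judging smoothness.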

 

I'm not disagreeing with you guys.  Intel offers a better experience; AMD offers a pretty damn good one too.  There's a reason all the FX processors have stellar reviews: they offer good value.  My sister just spent $312 on a locked i5-4670, which is almost double the price of some of those Black Friday deals on 8350s.  (Although if you go back a generation or two, you could get the same gaming performance from those i5s for much cheaper.  But I wanted something a bit more future proof.)

4K // R5 3600 // RTX2080Ti


A better GPU will still improve min framerate.  Overclocking benchmarks easily show this.

 

There are some games that run 15-20 FPS better on Intel at the min/max, but they're few and far between; most games show a ~5 FPS difference at most.  Most of the time, these cards are already running in completely playable framerate ranges.  And if you have a lower min framerate and a lower max framerate, the delta between the two (arguably the thing that matters most) is pretty much the same.  The smoothness of the experience isn't affected all that much; it's actually worse to go from 120 fps to 60 than it is to go from 70 to 40.  This is why framerate caps are generally a good idea for a smooth experience.

 

I'm not disagreeing with you guys.  Intel offers a better experience; AMD offers a pretty damn good one too.  There's a reason all the FX processors have stellar reviews: they offer good value.  My sister just spent $312 on a locked i5-4670, which is almost double the price of some of those Black Friday deals on 8350s.  (Although if you go back a generation or two, you could get the same gaming performance from those i5s for much cheaper.  But I wanted something a bit more future proof.)

Here is my interpretation of the FX having positive reviews:

 

"It boots! 10/10"

 

And again, you have no freaking clue how it works.

"I genuinely dislike the promulgation of false information, especially to people who are asking for help selecting new parts."


Here is my interpretation of the FX having positive reviews:

 

"It boots! 10/10"

 

And again, you have no freaking clue how it works.

I'd kinda have to agree with this.  Just looking at how it's made, it's completely different from Intel CPUs in the way it works; it does the same job, but its architecture is a lot more complex.  (This is from Wikipedia, I know, but it's a good basis for those who want to look more in-depth:) http://en.wikipedia.org/wiki/Bulldozer_%28microarchitecture%29

 

Edit: this is what should hopefully revamp AMD CPUs next year; AMD just can't compete with Intel on high-end CPUs. http://en.wikipedia.org/wiki/Excavator_%28microarchitecture%29

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


However, when it comes to answering your specific question of "why choose the FX 8350 over the current Haswells", the answer is pretty simple: better multitasking opportunities for a far cheaper price. Remember what I said earlier about price-to-performance being the #1 kicker for consumers? This remains true for the FX 8350 even to this day, for people who absolutely need cores over clock speed. Some people use hardware virtualization, and having 8 cores to dedicate to KVM-based OSes, with multiple GPUs passed through to them, is a lot nicer than trying to share 4 Haswell cores between virtualized OSes.

I'm not really sure what you mean by better multitasking. 5000 cores don't help you much if each of them is 200 times slower than a quad core's. That 5000-core chip would be dramatically worse than the quad core for multitasking, but it would perform miles better when you're rendering, assuming the renderer scales to 5000 threads. Humans are just bad multitaskers; we struggle massively doing even two things at a time. Excluding passive background work like rendering or gaming, you'll never actively keep 8 cores busy; no human can. Most things you do, one at a time, run mostly off one core. My 3930K at stock doesn't feel as responsive in Maya as a 2500K with a 30-40% higher clock speed, but whenever I'm rendering to a picture, the six cores at stock make a huge difference, and the OS feels more responsive as well. A 10% multithreaded advantage doesn't make it the better multitasker at all; considering it has twice the core count, it will just be worse. Most multithreaded workloads will be better on the i5. It's simply the better single- and multithreaded all-rounder, and there's just no denying that.
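The many-slow-cores argument above is essentially Amdahl's law: the serial fraction of a task caps how much extra cores can help. A minimal sketch (the fractions below are made-up illustrations, not measurements of any real workload):

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Amdahl's law: overall speedup when only part of a task
    can be spread across n_cores."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

# A mostly serial desktop task barely notices extra cores:
print(round(amdahl_speedup(0.10, 8), 2))
# A render that is 99% parallel scales far better:
print(round(amdahl_speedup(0.99, 8), 2))
```

This is why a desktop that is 90% serial feels snappier on fewer, faster cores, while a near-fully-parallel render rewards core count.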

Also, the price/performance argument for the 8350 is a cliché; it's getting old. At first it was $200, just $10 cheaper than a 4670K, and people still yelled that it was the better value, except they forgot to put price in a ratio with performance. Now it's $170 in the USA; being slightly cheaper doesn't make it a better value, just like the G3258 being $30 cheaper doesn't make it the better value if you're seeking maximum multithreaded performance. Performance is the other variable needed to determine price-to-performance. When the 8350 was priced at $200, no shop was selling it for any cheaper; I saw people claiming $100-150, which were apparently Black Friday prices from a month or six later.

People who do productivity work for a living (probably less than 1% of all desktop users) have enough money to buy Intel; if not, AMD would only be a short-term solution. Anyone doing KVM on AMD is doing it wrong; it might be useful for schools, but you're not buying it for your school. The desktop is mainly used for gaming and mainstream tasks, and with AMD's weak IPC and Intel offering an equivalent for each of AMD's CPUs at the same price, AMD just doesn't make much sense anymore.

 

 

First of all, comparing a CPU released in October 2012 to a CPU released in May 2014 is a very difficult thing to do, and to expect the results to be a fair comparison between the two would be insane.

Except that Sandy Bridge, released in January 2011, still dominates the FX chips released in October 2011/2012; even Conroe, an architecture from 2006, has a lot more IPC than Bulldozer. Bulldozer was in development for 6 years, which puts its start around 2006, and Sandy Bridge had been in development since 2005.

 

 

no, it actually is better at multithreaded tasks.

If we're using theoretical peak performance as a proxy for normal multithreaded tasks, I might as well point out that an i3 with AVX2 enabled performs as well as the 8350 in floating-point throughput, with the i5/i7 outperforming it by twice as much: http://media.bestofmicro.com/J/3/386895/original/multimedia.png

Not really sure why a synthetic benchmark chasing peak performance would prove that most multithreaded tasks will be better on a given CPU.
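Peak SIMD throughput like the linked chart shows can be estimated on paper. A back-of-the-envelope sketch, assuming the commonly cited single-precision FMA throughput figures (32 FLOP/cycle per Haswell core, 16 per Piledriver module) and illustrative clock speeds:

```python
def peak_gflops(units, flops_per_cycle, clock_ghz):
    """Theoretical peak = execution units x FLOP/cycle x clock (GHz)."""
    return units * flops_per_cycle * clock_ghz

print(peak_gflops(2, 32, 3.5))   # Haswell i3-class: 2 cores @ 3.5 GHz
print(peak_gflops(4, 32, 3.4))   # Haswell i5-class: 4 cores @ 3.4 GHz
print(peak_gflops(4, 16, 4.0))   # FX-8350: 4 modules @ 4.0 GHz
```

On these assumptions the i3 lands near the FX-8350 (224 vs 256 GFLOPS) and the i5 roughly doubles it, which matches the shape of the linked chart; real code rarely reaches these theoretical peaks.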

It's not just;

Singlethreaded: Intel

Multithreaded: AMD

Multi just means two or more, and you can't say 2 AMD cores are more powerful than 2 Intel cores.


no, it actually is better at multithreaded tasks.

post-154303-0-90163600-1414267925.png

FX-9590-52.jpg

cinebench.png

 

Brother, tell me: should I believe charts made by internet fanboys on specific forums, or what my own eyes have seen?

CPU: i7 4790K | Ram:Corsair Vengeance 8GB | GPU: Asus R9 270 | Cooling :Corsair H100i | Storage : Intel SSD, Seagate HDDs | PSU : Corsair VS 550 | Case: CM HAF Advanced.


It's inexpensive. I got an 8320 for my wife for $99 on Black Friday. $99 for an 8-core.

Intel 4670K /w TT water 2.0 performer, GTX 1070FE, Gigabyte Z87X-DH3, Corsair HX750, 16GB Mushkin 1333mhz, Fractal R4 Windowed, Varmilo mint TKL, Logitech m310, HP Pavilion 23bw, Logitech 2.1 Speakers


Brother, tell me: should I believe charts made by internet fanboys on specific forums, or what my own eyes have seen?

do you know what hardware reviewers are?

NEW PC build: Blank Heaven   minimalist white and black PC     Old S340 build log "White Heaven"        The "LIGHTCANON" flashlight build log        Project AntiRoll (prototype)        Custom speaker project

Spoiler

Ryzen 3950X | AMD Vega Frontier Edition | ASUS X570 Pro WS | Corsair Vengeance LPX 64GB | NZXT H500 | Seasonic Prime Fanless TX-700 | Custom loop | Coolermaster SK630 White | Logitech MX Master 2S | Samsung 980 Pro 1TB + 970 Pro 512GB | Samsung 58" 4k TV | Scarlett 2i4 | 2x AT2020

 


I'd kinda have to agree with this.  Just looking at how it's made, it's completely different from Intel CPUs in the way it works; it does the same job, but its architecture is a lot more complex.  (This is from Wikipedia, I know, but it's a good basis for those who want to look more in-depth:) http://en.wikipedia.org/wiki/Bulldozer_%28microarchitecture%29

 

Edit: this is what should hopefully revamp AMD CPUs next year; AMD just can't compete with Intel on high-end CPUs. http://en.wikipedia.org/wiki/Excavator_%28microarchitecture%29

They can easily compete with enthusiast-grade microprocessors. When the FX-8150 launched, it crushed Intel's i7-2600K of the time in threaded workloads. The only problem AMD has faced since then is single-thread performance. The issue we have today is that most people are comparing several newer generations of Intel chips to AMD chips that are several years old. AMD needs to do an FX refresh with Excavator to at least give the consumer market some options, though I think AMD really wants to conserve its resources for its high-priority new architecture. At this point in time, if you're on a budget, AMD offers plenty of competition at the price; if you can afford it, Intel all the way. Personally, I'm hoping to at least see an Excavator-based Athlon, as AMD does fairly well in the sub-$100 price bracket.


do you know what hardware reviewers are?

 

Do you know why software is on the internet for free? Do you know what the instructions governing a program's behavior during development are called? Do you know they can be manipulated, as in "if this value comes out, show this"? Have you ever heard of the Futuremark corporation? Don't go by my display name; maybe I'm here to judge people's knowledge about computers and to see whether they deserve better giveaways or not.

You may know that sometimes money speaks louder than efficiency when hitting the market. Believe me, the i5-4690K is way more than you think, and it can beat any CPU from AMD this year. Meanwhile, AMD GPUs are killing Nvidia in the market; you will see in a few months.

CPU: i7 4790K | Ram:Corsair Vengeance 8GB | GPU: Asus R9 270 | Cooling :Corsair H100i | Storage : Intel SSD, Seagate HDDs | PSU : Corsair VS 550 | Case: CM HAF Advanced.


 

I'm not really sure what you mean by better multitasking. 5000 cores don't help you much if each of them is 200 times slower than a quad core's. That 5000-core chip would be dramatically worse than the quad core for multitasking, but it would perform miles better when you're rendering, assuming the renderer scales to 5000 threads. Humans are just bad multitaskers; we struggle massively doing even two things at a time. Excluding passive background work like rendering or gaming, you'll never actively keep 8 cores busy; no human can. Most things you do, one at a time, run mostly off one core. My 3930K at stock doesn't feel as responsive in Maya as a 2500K with a 30-40% higher clock speed, but whenever I'm rendering to a picture, the six cores at stock make a huge difference, and the OS feels more responsive as well. A 10% multithreaded advantage doesn't make it the better multitasker at all; considering it has twice the core count, it will just be worse. Most multithreaded workloads will be better on the i5. It's simply the better single- and multithreaded all-rounder, and there's just no denying that.

Also, the price/performance argument for the 8350 is a cliché; it's getting old. At first it was $200, just $10 cheaper than a 4670K, and people still yelled that it was the better value, except they forgot to put price in a ratio with performance. Now it's $170 in the USA; being slightly cheaper doesn't make it a better value, just like the G3258 being $30 cheaper doesn't make it the better value if you're seeking maximum multithreaded performance. Performance is the other variable needed to determine price-to-performance. When the 8350 was priced at $200, no shop was selling it for any cheaper; I saw people claiming $100-150, which were apparently Black Friday prices from a month or six later.

People who do productivity work for a living (probably less than 1% of all desktop users) have enough money to buy Intel; if not, AMD would only be a short-term solution. Anyone doing KVM on AMD is doing it wrong; it might be useful for schools, but you're not buying it for your school. The desktop is mainly used for gaming and mainstream tasks, and with AMD's weak IPC and Intel offering an equivalent for each of AMD's CPUs at the same price, AMD just doesn't make much sense anymore.

 

 

Except that Sandy Bridge, released in January 2011, still dominates the FX chips released in October 2011/2012; even Conroe, an architecture from 2006, has a lot more IPC than Bulldozer. Bulldozer was in development for 6 years, which puts its start around 2006, and Sandy Bridge had been in development since 2005.

 

 

If we're using theoretical peak performance as a proxy for normal multithreaded tasks, I might as well point out that an i3 with AVX2 enabled performs as well as the 8350 in floating-point throughput, with the i5/i7 outperforming it by twice as much: http://media.bestofmicro.com/J/3/386895/original/multimedia.png

Not really sure why a synthetic benchmark chasing peak performance would prove that most multithreaded tasks will be better on a given CPU.

It's not just;

Singlethreaded: Intel

Multithreaded: AMD

Multi just means two or more, and you can't say 2 AMD cores are more powerful than 2 Intel cores.

 

 

The price-to-performance of the 8350 was not a cliché in 2012. Again, saying it was $10 off from the 4670K is an impossible statement, because the 4670K was not a thing when the 8350 came out. Buying a CPU at the same price as the 8350 gave you two choices: the i5-3330 (roughly $10 cheaper) or the i5-3550 (roughly $10 more expensive). An unlocked i5 would have cost you $30 more. At the time, without knowing the future of the eventually dead AM3+ socket, the 8350 was a worthy purchase.

 

And you are correct, I cannot say 2 AMD cores are more powerful than 2 Intel cores. However, I can say that 8 cores on 2 FPUs are more powerful than 4 cores with Hyper-Threading in apps that do not use Hyper-Threading, which, believe it or not, is quite a lot of them. To beat the FX in areas such as compression and video encoding, you would need an i7, and the price difference between the two is quite large. I do not know if you are speaking from personal experience of owning both a Vishera-based FX and a Haswell CPU, but I am. The differences are not as bad as everyone tends to exaggerate.

 

My cousin's i5 was a great purchase and, overall, a better purchase for what he needed to do. However, from a working standpoint, he could not unpack large model files at the same speed my FX 8320 could.

 

Would I suggest a new buyer invest in an FX CPU? No; I honestly cannot think of any reason, with the current CPU lineup, why anyone would want to invest in a dead socket. If they are going for budget gaming, Intel's G3258 won that market single-handedly. If they want a CPU for work-related things, an i5 will still suffice, unless they need to do some heavy lifting, in which case they should just go ahead and invest in an i7 anyway.

 

The weaker IPC of the FX series just will not allow it to be the best gaming experience for the money you invest in it. However, I do have a good analogy for this argument about CPUs and upgrades.

 

Comparing modern Haswells to older FX CPUs is like comparing Blu-ray to DVD. Sure, it's better in every aspect, but not everyone is going to want to repurchase all of their DVDs (their mobo and CPU) in Blu-ray format to get the same experience, when they already enjoy what they have now. However, someone who has not yet invested all of their money into DVDs should probably consider going straight to Blu-ray, as it will most likely be around longer than DVDs and will provide a better experience. Don't bring up Blu-ray players being able to play DVDs, because I doubt AMD and Intel will ever share the same socket type, lol.

 

Regardless of this discussion, my point has been proven time and time again, even by Linus himself.

 

 

The 8350 performed on par with the 3570K for the most part, though on average it did perform a bit weaker. Still, for the $30 price difference, that was not an issue, and the 8350 was very viable at the time.

 

Modern Haswells, like the 4670K, will absolutely crush the 8350 in gaming and compete with it in multithreaded tasks, but that is to be expected from a CPU that is newer.

 

Also, how is doing KVM on the FX "doing it wrong"? My experience was quite enjoyable: 2 cores dedicated to each virtualized OS, each with a separate GTX 750 Ti passed through to it. My Shield could connect to each system, and despite each having only 2 cores, it was a very good GameStream experience.

 

-MageTank

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


The price-to-performance of the 8350 was not a cliché in 2012. Again, saying it was $10 off from the 4670K is an impossible statement, because the 4670K was not a thing when the 8350 came out. Buying a CPU at the same price as the 8350 gave you two choices: the i5-3330 (roughly $10 cheaper) or the i5-3550 (roughly $10 more expensive). An unlocked i5 would have cost you $30 more. At the time, without knowing the future of the eventually dead AM3+ socket, the 8350 was a worthy purchase.

The MSRP of the 2500K/3570K has always been around $200-220; the MSRP of the 8350 was $200 for two years.

 

 

However, I can say that 8 cores on 2 FPUs are more powerful than 4 cores with Hyper-Threading in apps that do not use Hyper-Threading, which, believe it or not, is quite a lot of them. To beat the FX in areas such as compression and video encoding, you would need an i7, and the price difference between the two is quite large. I do not know if you are speaking from personal experience of owning both a Vishera-based FX and a Haswell CPU, but I am. The differences are not as bad as everyone tends to exaggerate.

You mean that each module has 2 FPUs? As far as I'm aware, AMD puts two 128-bit FPUs per module, which can work together as a single 256-bit FPU, plus 2 MMX units, so that makes a total of 4 per module, or 16 in total. For comparison, Sandy Bridge and up have 6 FPUs per core, for a total of 24. You just can't go by FPU count and say CPU x is better than CPU y. Haswell has all of them at 256 bit, giving support for AVX2; Intel is just dominating in SIMD. Besides, Hyper-Threading doesn't require any special support. What do you need to fill two 1-liter bottles? 2 liters of water. The same goes for a dual core: you need 2 threads to utilize two cores, and Hyper-Threading has the same requirement. Not all resources in a core can be used by a single thread; Hyper-Threading lets you keep more of them busy at the same time. It simply reuses a core's idle resources instead of adding a second core while some of the first core's resources sit unused. It's a common misconception people keep spreading, especially after seeing Hyper-Threading add no performance: it's not guaranteed to help. If one thread is enough to keep literally all of your execution resources busy, Hyper-Threading won't add any performance, though it's rare to see it add nothing.
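The bottle analogy can be made concrete with a toy model: a core issues at most a fixed number of instructions per cycle, and a single thread often cannot supply that many independent instructions. (The numbers below are illustrative, not measurements of any real CPU.)

```python
def retired_per_cycle(issue_width, thread_ilp, n_threads):
    """Toy SMT model: instructions retired per cycle when n_threads
    share one core's issue slots; each thread supplies at most
    thread_ilp independent instructions per cycle."""
    return min(issue_width, n_threads * thread_ilp)

# A 4-wide core running a thread with limited instruction-level parallelism:
print(retired_per_cycle(4, 2.5, 1))   # slots go idle
print(retired_per_cycle(4, 2.5, 2))   # a second SMT thread fills them
# A thread that already saturates the core gains nothing from SMT:
print(retired_per_cycle(4, 4.0, 2))
```

This captures both halves of the point above: SMT helps exactly when one thread leaves issue slots idle, and adds nothing when a single thread already saturates the core.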

 

 

To beat the FX in areas such as compression and video encoding, you would need an i7, and the price difference between the two is quite large.

 

Most video encoding benchmarks on the net tell me the opposite; even when the 8350 is faster than the i5, it's only by a small margin.

 

 

The 8350 performed on par with the 3570K for the most part, though on average it did perform a bit weaker. Still, for the $30 price difference, that was not an issue, and the 8350 was very viable at the time.

Right, ask yourself this question: would Linus, who has 1M subs, ever show Intel outperforming AMD by nearly twice as much, even if he had benchmarked it? He won't. I'll just tell you what happened there: Intel reached the GPU bottleneck extremely fast, whereas when AMD fell behind, it was still almost GPU bound. Intel was limited by the GPU and AMD was still slightly bottlenecking it, so that video isn't showing the potential of the i5 unless he puts a stronger GPU in there or adds another in SLI. Remove the GPU bottleneck completely, as in 720p/low settings, and the results easily scale up to nearly 100% depending on how many cores the game takes advantage of (up to 4; beyond that it drops off). Not realistic, but if you're aware that many games are CPU bound under SLI setups, those 720p/low benchmarks are perfect benchmarks. Many multiplayer games are heavily CPU limited in ways that can't be accurately benchmarked, so such benchmarks don't exist; all we mostly see are singleplayer games with a GPU bottleneck, which only tells you "AMD is ok" or "the game is GPU bound". Take MMOs like WoW: if you google benchmarks, they'll easily show frame rates above 100 FPS and no difference between AMD and Intel, because the GPU was hitting its limit. What did you expect? They're just taking a flight from X to Y. Get into a 25-man raid and your frame rate drops way below 30 FPS, making the game CPU bound, which reviewers haven't tested. Benchmarks done for older games currently say those games are quite GPU limited, but we have much stronger cards now; eventually we become CPU bound again, and we'd see Intel doing better again.

Also, have you thought about the fact that a 4300 performs literally the same as an 8350 in all but 5 or 6 games out there? There's not much point spending twice as much for the same performance, so I'm not seeing any reason why the 8350 would be an alternative to the i5, or in general why it would make sense for a gaming system. The 3570K does better than the i3 in many games, and even if a game only takes advantage of 2 cores, the unlocked multiplier makes up for it. Viable at the time? Not at all; no reviewer tested it properly, and none of them ever pointed out that half of its cores were useless for gaming. Linus himself has said he tries to aim for a GPU-bound scenario, which doesn't help you much when you're after which CPU is better and both are priced the same.
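The GPU-bound argument above reduces to a simple model: the delivered framerate is roughly the lower of what the CPU and the GPU can each sustain, so a GPU cap hides CPU differences. A sketch with made-up numbers:

```python
def delivered_fps(cpu_fps, gpu_fps):
    """Whichever side of the pipeline is slower caps the framerate."""
    return min(cpu_fps, gpu_fps)

# 1080p/ultra: the GPU caps both systems, so the CPUs look identical:
print(delivered_fps(cpu_fps=95, gpu_fps=60), delivered_fps(cpu_fps=62, gpu_fps=60))
# 720p/low: the GPU cap lifts and the CPU gap becomes visible:
print(delivered_fps(cpu_fps=95, gpu_fps=200), delivered_fps(cpu_fps=62, gpu_fps=200))
```

This is why 720p/low tests, despite being unrealistic settings, isolate the CPU the same way a stronger GPU or SLI would.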

 

 

Also, how is doing KVM on the FX "doing it wrong"? My experience was quite enjoyable: 2 cores dedicated to each virtualized OS, each with a separate GTX 750 Ti passed through to it. My Shield could connect to each system, and despite each having only 2 cores, it was a very good GameStream experience.

The performance of one Intel core is nearly equal to that of 2 AMD cores, so I'm not sure what benefit you're getting from an 8350.


You mean that each module has 2 FPUs? As far as I'm aware, AMD puts two 128-bit FPUs per module, which can work together as a single 256-bit FPU, plus 2 MMX units, so that makes a total of 4 per module, or 16 in total. For comparison, Sandy Bridge and up have 6 FPUs per core, for a total of 24. You just can't go by FPU count and say CPU x is better than CPU y. Haswell has all of them at 256 bit, giving support for AVX2; Intel is just dominating in SIMD. Besides, Hyper-Threading doesn't require any special support. What do you need to fill two 1-liter bottles? 2 liters of water. The same goes for a dual core: you need 2 threads to utilize two cores, and Hyper-Threading has the same requirement. Not all resources in a core can be used by a single thread; Hyper-Threading lets you keep more of them busy at the same time. It simply reuses a core's idle resources instead of adding a second core while some of the first core's resources sit unused. It's a common misconception people keep spreading, especially after seeing Hyper-Threading add no performance: it's not guaranteed to help. If one thread is enough to keep literally all of your execution resources busy, Hyper-Threading won't add any performance, though it's rare to see it add nothing.

AMD does use two 128-bit FPUs per module, in a unit called the FlexFPU, which can come together as a single 256-bit FPU. It does that in case one of the cores wants to dispatch an AVX instruction. AMD has implemented the AVX2 instruction set with Bdver4. As for Hyper-Threading, one thread can use all of a core's shared resources; when a thread needs something, it can stall the other thread to use the shared resource. That's why Hyper-Threading doesn't scale as well as full physical cores. Still, it is faster when you're dealing with a lot of threads, as each core presents two logical processors that can execute two threads simultaneously, and not every thread is going to contend for the same resource. AMD's approach would have been good if they had just stuck with SMT and had strong core performance behind it.

 

The performance of one Intel core is nearly equal to that of 2 AMD cores, so I'm not sure what benefit you're getting from an 8350.

People who make that statement are the ones I usually like to hand the official "I don't know anything" award. If a single Intel core doubled a single AMD core, AMD wouldn't be in the microprocessor business right now. AMD's core performance is no more than 30% behind Haswell right now. Bdver4 is supposed to bring 15-30% IPC gains, which means Carrizo would beat out Sandy Bridge with flying colors. That is, if AMD can really deliver with Bdver4, and hopefully we see a desktop variant of it, because who wouldn't buy a $99 Athlon X4 960K if it beat the $200 i5-3570K in gaming all day long? Here's hoping AMD doesn't keep Bdver4 mobile-only and that it brings at least a 20% increase in IPC, which isn't out of reach: AMD promises a 10% IPC increase per generation, and with HDL equaling a full node improvement (shorter pipelines), we can easily expect another 5-10% on top.
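Generational IPC gains compound multiplicatively rather than adding up; a quick check using the percentages floated above (which are the post's own projections, not measured figures):

```python
from functools import reduce

def cumulative_gain(per_gen_gains):
    """Multiply out a list of fractional per-generation IPC gains."""
    return reduce(lambda acc, g: acc * (1.0 + g), per_gen_gains, 1.0)

# Two 10% generational steps plus a 15% architectural step:
total = cumulative_gain([0.10, 0.10, 0.15])
print(f"cumulative IPC gain: {(total - 1.0) * 100:.1f}%")
```

Note the compounded total comes out a few points higher than the naive sum of 35%, which is why multi-generation comparisons can look larger than any single step suggests.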


 

MSRP of the 2500K/3570K always has been around 200-220$, the MSRP of the 8350 has been for 2 years long 200$.

 

 

You mean that each module has 2 FPU's? As far as I'm aware AMD offers per module 2x 128 bit FPU's that can work together as a single 256 bit FPU and 2 MMX units so that should make a total of 4 giving you in total 16. I'll give you an idea; Sandy Bridge and up have 6 FPU's per core making a total of 24 FPU's. You just can't go by the FPU count and say x cpu is better than y cpu. Haswell has all of them at 256 bit giving support for AVX2, Intel is just dominating SIMD's. Besides Hyperthreading doesn't require any special support. What do you need to fill 2 bottles of 1 liter each? 2 liter of water. Same goes to a dual core, you need 2 threads to utilize two cores, hyperthreading has the same requirement. Not all resources in a core can be used with a single thread, Hyperthreading allows you to keep more resources busy at the same time. Just re-uses their resources that's all instead of making a 2nd core and having a few resources not even being used on your 1st core. It's a common misconception people keep spraying especially after they saw Hyperthreading adding no performance, it's not guaranteed to add performance. If one thread was enough to keep literally all of your execution resources busy at the same time hyperthreading won't add any performance. It's rare to see HT adding no performance though.

 

 

 

Most video encoding benchmarks on the net tell me the opposite; even when the 8350 is faster than the i5, it's only by a small margin.

 

 

Right, ask yourself this question: would Linus, who has 1M subs, ever show Intel outperforming AMD by nearly twice as much, even if he had benchmarked it? He won't. I'll just tell you what happened there: Intel reached the GPU bottleneck extremely fast, whereas when AMD fell behind it was almost GPU bound too. Intel was limited by the GPU and AMD was still slightly bottlenecking it, so that video isn't showing the potential of the i5 unless he puts a stronger GPU in there or adds another in SLI. Remove the GPU bottleneck completely, like 720p/low settings, and the results easily scale up to almost 100% depending on how many cores the game takes advantage of (up to 4, then it drops off). Not realistic, but if you're aware of how many games are CPU bound under SLI setups, those 720p/low benchmarks are perfect benchmarks. Many multiplayer games are heavily CPU limited in ways that can't be accurately benchmarked, so there aren't any such benchmarks; all we mostly see are singleplayer games with a GPU bottleneck, which only tells you "AMD is OK" or "the game is GPU bound". Take MMOs, like WoW: if you google benchmarks, they'll easily show frame rates above 100 FPS and no difference between AMD and Intel, because the GPU was hitting its limit. What did you expect? They were just taking a flight from X to Y. Get into a 25-man raid and your frame rate drops way below 30 FPS, making the game CPU bound, which reviewers haven't tested. Benchmarks done for older games say those games are quite GPU limited, but we have much stronger cards now; eventually we get CPU bound again and we'd see Intel pulling ahead again.

Also, have you considered the fact that a 4300 performs literally the same as an 8350 in every game out there except 5 or 6? There isn't much point spending twice as much for the same performance, so I'm not seeing any reason why the 8350 would be an alternative to the i5, or in general why it would make sense for a gaming system. The 3570K does better than the i3 in many games, and even if a game only takes advantage of 2 cores, you've got the unlocked multiplier to make up for it. Was the 8350 viable at the time? Not at all; no reviewer tested it properly and none of them ever pointed out that half of its cores were useless for gaming. Linus himself has said he's trying to aim for a GPU bound scenario, but that doesn't tell you much when you're after which CPU is better and both are priced the same.

 

 

The performance of one Intel core is nearly or fully equal to that of two AMD cores, so I'm not sure what benefit you're getting from an 8350.

 

 

Thank you for clarifying the FPU for me; I was unaware exactly how it worked between the two. Regarding the claim that Linus wouldn't show the complete destruction of an AMD CPU by an Intel one: that would be believable if several other videos had not come to the same conclusion. Tek Syndicate reached similar conclusions, along with a few lesser-known tech channels. Need I also remind you that I own an FX CPU AND a Haswell CPU, and I even have access to my cousin's i5-3570K. As far as gaming goes, the frame rates are similar, but the FX suffers from harsh minimum frame drops. That's not to say it's terrible at gaming; it's just that Intel CPUs deliver the better gaming experience.

 

I also do not know if you have actually used an FX for compressing large files, but I can easily tell you the difference between the i5 and the 8320 was night and day. My 8320 was faster AND allowed me to do other things while compressing, whereas the i5 was screaming trying to keep up with just the compression, let alone any extra tasks assigned to it. I am not speaking about something I know nothing about; I can show you pictures of each of my rigs and run any task you want me to run on them. The real-world results that I see show the FX 8320 to be a formidable CPU, especially at the $110 price point I paid for it. A CPU that could perform all of the same tasks equally would have cost me nearly twice as much. No amount of synthetic benches will prove otherwise.
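For illustration, here is a hedged sketch of why compression can favor more, slower cores: if the archiver splits the input into independent chunks (as WinRAR and 7-Zip can), each chunk compresses in parallel. The chunk size, dummy data, and worker count below are made up for the example.

```python
# Hedged sketch: compressing independent chunks in parallel, the kind of
# embarrassingly parallel workload where core count matters most.
import zlib
from concurrent.futures import ProcessPoolExecutor

def compress_chunk(chunk: bytes) -> bytes:
    # Each chunk is an independent zlib stream, so workers never contend.
    return zlib.compress(chunk, 9)

def parallel_compress(data: bytes, chunk_size: int = 1 << 20,
                      workers: int = 8) -> list:
    """Split data into chunks and compress them on a process pool."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(compress_chunk, chunks))

if __name__ == "__main__":
    blob = b"forum post " * 500_000   # ~5.5 MB of dummy data
    parts = parallel_compress(blob)
    print(len(parts), "chunks compressed")
```

Each compressed chunk can be decompressed and re-joined to recover the original data, which is why splitting doesn't lose anything; it only trades a little compression ratio for parallelism.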

 

We are all aware of the flaws the FX has and do not need to be reminded of them in every post, in every forum. To claim that Intel cores are twice as fast as AMD cores is not an accurate statement. Yes, they are faster, and the difference is indeed noticeable, but 2x faster? No, not true. In gaming, Intel CPUs on average perform roughly 20% faster than their AMD counterparts. Good games to test multithreading would be Rust or Metro: Last Light. My G3258 absolutely folds under the demands of these games, even with its faster cores.

 

As for the KVM argument, some applications absolutely will not run with 1 core, Battlefield being an example, and Far Cry 4 is already picky about letting 2 cores run it. Hyper-Threading would work in these situations, but an 8320 is still cheaper than a quad-core Intel with HTT. Not to mention, you don't have to go the extra step of patching the Intel iGPU within QEMU. I just don't see how it's "doing it wrong" when, in my real-world tests, it worked perfectly.

 

-MageTank

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


AMD does use two 128-bit FPUs per module, together called a FlexFPU, which can combine into a single 256-bit FPU. It does that in case one of the cores wants to dispatch an AVX instruction. AMD has implemented the AVX instruction set with Bdver4.

The 8150 already had AVX. Also, I was talking about AVX2.

 

 

When a thread needs to use something, it can pause the other thread to use the shared resource. Hence why Hyper-Threading doesn't scale as well as full physical cores.

Except that's not the reason it doesn't scale as well as physical cores, and your explanation makes no sense at all. In fact, Hyperthreading can boost your performance by up to 100%, the same benefit as from a 2nd physical core.

 

 

As for Hyper-Threading, one thread can use all of the core's shared resources. When a thread needs to use something, it can pause the other thread to use the shared resource.

Not sure why you're repeating what I've been saying.

 

 

AMD's approach would have been good if they had just left it SMT and had good core performance behind it.

It's CMT, not SMT, since it doesn't have SMT. Intel's branding for SMT is Hyperthreading, and at the moment it's the only form of SMT.

 

Bdver4 is supposed to bring 15-30% IPC gains, which means Carrizo would beat out Sandy Bridge with flying colors.

15% is still below Conroe, and 30% is still 20% behind Sandy Bridge. Don't forget to take the awful scaling into account, and the fact that singlethreaded performance drops whenever a 2nd thread lands in the same module. Lower IPC also means lower performance per clock, and the difference only gets larger as the clock speed goes up.

Here are all CPUs at 2.8GHz:

TDLx2vT.png

That's just the beginning of the story.

c1ZWhQ9.jpg

70% to go.

 

 

To claim that Intel cores are twice as fast as AMD cores is not an accurate statement. Yes, they are faster, and the difference is indeed noticeable, but 2x faster? No, not true. In gaming, Intel CPUs on average perform roughly 20% faster than their AMD counterparts.

Just the fact that a quad core has roughly the same performance as an octocore should give you the idea that we're dealing with an IPC difference of nearly 100%; how else would Intel achieve this? Even two QX9775s at 4.2GHz are easily 10-20% faster than an 8350 at 5.5GHz. I'd like to see where you're getting that 20% from.
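The arithmetic behind that claim, spelled out (the scores are illustrative, and this assumes roughly linear multithreaded scaling, which real chips never quite achieve):

```python
# Back-of-envelope sketch of the "quad core matches octocore" argument.
# Scores are made-up numbers, not measurements.
def implied_per_core_ratio(score_a: float, cores_a: int,
                           score_b: float, cores_b: int) -> float:
    """Per-core throughput of A divided by per-core throughput of B."""
    return (score_a / cores_a) / (score_b / cores_b)

# If a 4-core chip and an 8-core chip post the same multithreaded score,
# the 4-core part is doing twice the work per core:
assert implied_per_core_ratio(100, 4, 100, 8) == 2.0
```

The caveat the thread keeps circling is that this only holds for perfectly parallel workloads; module sharing on the FX and Hyper-Threading on the i7 both bend the "score per core" arithmetic.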


I've had an FX-8350 for around 2 years now, and I have to say it has served me really well.

Unfortunately I do have some issues with my current mobo, and I'm thinking of a switch to X99.

 

However, the FX-8350 basically never disappointed me.

The reason I chose the FX-8350 at the time was because I do a lot of virtualization: KVM, Proxmox, Hyper-V, VMware and whatnot.

The FX-8350 really shines when it comes to virtualization, and there isn't any mainstream Intel CPU that can deliver better performance and compatibility for virtualization.

The FX-8350 offers me full virtualization support, ECC RAM support and whatnot.

Things I couldn't find on any Intel CPU back in 2012 at that price.

 

X99 would be my next logical upgrade, because a 4790K would not be an interesting upgrade for my virtualization tasks.

 

Long story short, the FX-8350 is superior in virtualization.

In terms of gaming, sure, Intel Haswell i5s and i7s are better.

But the FX-8350 still does a totally fine job in my opinion.

I've never had a game that wasn't playable.


 

Just the fact that a quad core has roughly the same performance as an octocore should give you the idea that we're dealing with an IPC difference of nearly 100%; how else would Intel achieve this? Even two QX9775s at 4.2GHz are easily 10-20% faster than an 8350 at 5.5GHz. I'd like to see where you're getting that 20% from.

 

 

A quad core performing the same as an octocore (comparing core count to core count) would indicate an application being unable to properly use more threads, no? Let's not forget, if you are comparing an application that handles HTT well, then those 4 cores technically become on par with AMD's 8. Why are you repeating what everyone already knows about AMD's weak IPC? I have mentioned it countless times, as have you and everyone else in this thread. Where do I get the "Intel is only 20% faster in gaming" from? Personal testing, plus video benchmarks from other gamers/youtubers that you completely ignore. It's easy to check the IPC difference in MMOs. I averaged 70 FPS in Guild Wars 2 with graphics on ultra; switching from my FX 8350 to a G3258 gave me an average of 89 FPS. The fact that the minimum frame rates did not drop as much as on the FX is what made the experience more enjoyable. WoW is a game that severely benefits from an Intel CPU too; I've seen results of +30 FPS when people switch from their AMD counterparts to an Intel CPU.

 

Still, on average, 20fps is the general difference in most games.

 

Feel free to leave synthetic benches out of the equation and post some in-game results, or test them for yourself rather than taking the word of biased loyal fans.

 



Interesting read, this discussion... I've owned both an FX-8320 and an i7, and I've messed around with both a lot, disabling cores and modules and turning Hyperthreading on and off to simulate other CPUs and see how they perform in games. From my testing, here's how I would rank them when it comes to modern games (remember most modern games now use many threads; I mostly tested with Watch Dogs, Metro: Last Light, BF4 and Crysis 3):

 

-Pentium

-FX 4300

-FX 6300

-Core i3

-FX 83xx

-Core i5

-Core i7

 

...now, what do I think about FX CPUs?

Well, they certainly offer very good bang for the buck in the multitasking and productivity department; for a budget workstation or home server, for example, they're a perfect choice. But when it comes to a gaming rig, Intel is pretty much always the better choice, because the FX CPU does limit the performance of higher-end GPUs, and in order to overclock one to get "okay" performance in games you need to spend more on a good motherboard and CPU cooler. Also, with a cheap Intel solution such as a Core i3 you get an upgrade path, whereas with AMD you're stuck.

| CPU: Core i7-8700K @ 4.89ghz - 1.21v  Motherboard: Asus ROG STRIX Z370-E GAMING  CPU Cooler: Corsair H100i V2 |
| GPU: MSI RTX 3080Ti Ventus 3X OC  RAM: 32GB T-Force Delta RGB 3066mhz |
| Displays: Acer Predator XB270HU 1440p Gsync 144hz IPS Gaming monitor | Oculus Quest 2 VR


interesting read this discussion...i owned both an fx8320 and an i7...from my testing here's how i would rank them when it comes to modern games: Pentium, Fx4300, Fx6300, core i3, Fx83xx, core i5, core i7...when it comes to gaming rig intel is pretty much always a better choice...with a cheap intel solution such as a core i3 for example you get an upgrade path where as with amd youre stuck.

 

I would say that is a fair perspective on the subject. Those games are indeed very thread bound, and a lack of cores will lead to poor results, which is why the G3258 ranked the lowest. Overall, comparing most games (plenty of people just play MMOs), it does a fantastic job too. None of the CPUs you listed are bad in any way. People just need to care less about the brand and care only about the price to performance they are getting at any given time. The FX CPUs are 2 years old now; nobody is saying people should still buy them with gaming in mind, as Intel has better alternatives. However, if someone already has a good AM3 board and little cash to spend, an FX CPU will still get the job done rather nicely.

 

I have the following CPUs in my household, and I still find all of them to be good at a lot of the "modern" games too:

 

AMD Phenom II 720 Black Edition "Heka"

AMD Phenom II 965 Black Edition

AMD FX 6300

AMD FX 8350

Intel G850

Intel G3258

Intel i5-3570K (cousin's CPU, but I've tinkered with it quite a lot)

 

I am not really loyal to either brand; I mostly pay for whatever gets the job done at the time, within whatever budget I have. Even to this day, those Phenoms still get the job done (my dad is playing Tomb Raider on the 720 BE with a GTX 550 Ti, and he is enjoying it greatly).

 



The 8150 already had AVX. Also, I was talking about AVX2.

All Bulldozer revisions have AVX. Bdver4 brings AVX2 instructions.

 

Except that's not the reason it doesn't scale as well as physical cores, and your explanation makes no sense at all. In fact, Hyperthreading can boost your performance by up to 100%, the same benefit as from a 2nd physical core.

Hyper-Threading will not scale to 100%, and it has never reached that, due to the resources shared between the two threads. This is why the i3 cannot compare to the performance of the i5 in every aspect.

 

Not sure why you're repeating what I've been saying.

You said Hyper-Threading can execute two threads synchronously without any interference, which is untrue, as the two logical processors have to share a large amount of resources.

 

It's CMT, not SMT or just TMT since it doesn't have SMT. Intels branding for SMT is Hyperthreading and is atm the only form of SMT. 

CMT is cluster-based threading, which the Bulldozer architecture currently uses. I stated that if Bulldozer were SMT then it wouldn't have been as bad as it is hyped to be. Sure, single-thread performance wouldn't change, but multi-thread performance would have been amazing. AMD's new Zen architecture will be SMT.

 

15% is still below Conroe and 30% is still 20% behind Sandy Bridge and don't forget to take the awful scaling in account and the fact that the singlethreaded performance drops whenever you get a 2nd thread in the same module. Lower IPC also means a lower performance per clock, difference will only get larger once the clock speed goes up.

Here all CPU's at 2.8GHz;

TDLx2vT.png

That's just a beginning of the story.

c1ZWhQ9.jpg

70% to go.

Both of those charts do not dictate single-core performance. Bdver3 is no more than 30% behind Sandy Bridge. The fact that Intel hasn't done anything astonishing to their architecture since then leaves a lot of room for AMD to reclaim ground. If Bdver4 brings at minimum a 30% increase in IPC, then the Athlon X4 960K will likely compete with the i5-2500K or even the i5-3570K. That much of an increase in IPC would put AMD back on the map as a competitor in the budget market. To give you an idea, K10 is roughly 30% behind Sandy Bridge, and Bdver3 has definitely exceeded K10 already. The only problem for consumers is that we more than likely will not see a Bdver4 FX variant even if AMD closes a large part of the gap with Intel. We will have to wait until 2016 to see a truly competitive microprocessor from AMD, which is understandable for them to conserve their resources.

 

Just the fact that a quad core has roughly the same performance as an octocore should give you the idea that we're dealing with an IPC difference of nearly 100% or else how would Intel achieve this. Even two QX9775's at 4.2GHz are easily 10-20% faster than a 8350 at 5.5GHz. I like to see where you're getting that 20% from.

The first thing you need to do to retain any credibility here is to drop Hyper-Threading. No one cares about multi-thread numbers when it comes to single-thread performance. Most consumers will not buy an i7 because it doesn't scale well in gaming. In threaded workloads the FX-8350 and the i7-4770K really post mixed results, which is still impressive for a several-year-old architecture. Though, like I said, that's irrelevant to single-thread performance, as this is the only spot where AMD really dropped the ball with Bulldozer. I'm not going to do the legwork for you; these numbers are common sense among the elite of us. Go look around for trustworthy numbers. Hell, I could compile a benchmark for you if you want to test your own hardware; I have a thread on LTT for my benchmark (SimBench), though it hasn't seen any work in a while (I lost interest to bigger projects).
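This is not SimBench, just a hedged sketch of the kind of single-thread integer micro-benchmark being discussed; the loop body and iteration count are arbitrary choices for illustration.

```python
# Hedged sketch of a single-thread integer micro-benchmark: a tight
# integer-only loop timed with a monotonic clock. Lower time is faster.
import time

def int_bench(iterations: int = 5_000_000) -> float:
    """Time a tight integer loop; returns elapsed seconds."""
    acc = 1
    start = time.perf_counter()
    for i in range(1, iterations):
        acc = (acc * i + 7) % 1_000_003   # bounded, integer-only work
    return time.perf_counter() - start

if __name__ == "__main__":
    print(f"integer loop: {int_bench():.3f}s")
```

An interpreted loop like this mostly measures the interpreter, of course; a serious version would be written in C and pinned to one core, but the shape of the test (integer-only, one thread, wall-clock timed) is the point.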


All Bulldozer revisions have AVX. Bdver4 brings AVX2 instructions.

You said: "AMD has implemented the AVX instruction set with Bdver4."

 

 

Hyper-Threading will not scale to 100% and it has never reached that due to shared resources among the processors. This is why the i3 in every aspect cannot compare to the performance of the i5.

If it takes 2 cycles rather than one to calculate an integer/floating-point instruction, there will be a 100% difference.

 

You said Hyper-Threading can execute two threads synchronously without any interference. Which is untrue as the two processors have to share a large amount of resources.

Right, let's start with some basics, explaining to a game developer: a thread would contain at least 1 instruction, and if we're leveraging Hyperthreading we have 2 threads, so at least 2 instructions. Now, a single ALU can only process one instruction at a time. Say our CPU has only one ALU and we're only doing integer calculations; then Hyperthreading won't add any performance, and one thread will wait for the other to finish. Say we have a single-core CPU, so we've got Thread A only, which just processes its instruction. We enable Hyperthreading and get Thread B; Thread B will complete its instruction as fast as Thread A. There's no such thing as two processors on a single core, and it's simultaneous, not synchronous, so your interference theory is complete BS.
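The single-ALU argument can be put as a toy simulation (purely illustrative; real cores pipeline and issue out of order, which this ignores): with one ALU, two one-stream threads just interleave, so total cycles equal total instructions either way.

```python
# Toy model: one ALU retires one instruction per cycle, round-robin
# between however many hardware threads have work left.
from collections import deque

def simulate(threads: list) -> int:
    """Return total cycles to drain all instruction streams on one ALU."""
    queues = deque(deque(t) for t in threads)
    cycles = 0
    while queues:
        q = queues.popleft()
        q.popleft()           # the ALU retires one instruction this cycle
        cycles += 1
        if q:
            queues.append(q)  # thread still has work, requeue it
    return cycles

# Two threads of 3 integer ops each, or one thread of 6: 6 cycles either way.
assert simulate([["add", "mul", "add"], ["sub", "add", "mul"]]) == 6
assert simulate([["add", "mul", "add", "sub", "add", "mul"]]) == 6
```

Hyperthreading only pays off when a core has more than one execution port, or when one thread stalls (cache miss, branch) and the other can slip into the bubble, which this single-ALU toy deliberately leaves out.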

 

Both of them charts do not dictate single core performance. 

Then you're a fool; that C15 run was singlethreaded, and the chart below shows how singlethreaded performance starts to tank when you pump a 2nd thread into the same module, so in a multithreaded scenario the single-core difference between AMD and Intel only becomes larger.

CB-Scaling1.png

And you're also forgetting that the front-end bottleneck is still to come, making the difference larger and larger.

 

 

Bdver3 is no more than 30% behind Sandy Bridge 

FX-8350-SuperPi.jpg

Feel free to keep thinking the 4300 is only 30% slower than the 2500K. Good joke.

 

 

No one cares about multiple thread numbers when it comes to single thread performance. 

If your game only takes advantage of 4 cores, you'll obviously be comparing the singlethreaded performance between an 8350 and an i5 to find out which CPU is better, and considering we're getting awful multithreading scaling from AMD, it's extremely relevant here.


For the Athlon vs. the Pentium, I would pick the Athlon; I would never recommend a dual core to anyone.

750K, 760K or 860K? I am buying a 760K because I was an idiot a year ago and bought an FM2 (non-plus) motherboard thinking they were all the same, planning to put an 8350 into it. And now I can't go 860K.

My current build - Ever Changing.

Number 1 On LTT LGA 1150 CPU Cinebench R15

http://hwbot.org/users/TheGamingBarrel


Alright, alright, it sounds like I'm trolling, but I'm not.

 

I'd like to have a civil discussion with the members of LTT. I don't want to hear the Intel argument; I know that one well enough. I want to hear, from somebody who actually enjoys the FX series, why it's the better option compared to Intel. I've had the FX-8350 at 4.9GHz, and now I have an i7-4790K, and in terms of just gaming performance, I don't regret my decision. Can anyone make me regret that?

 

If you're here to talk about Intel, please don't. If you're here to flame, please don't. I would like only to see the argument in favor of AMD. Why is the FX-8350 better than an i5-4460?

 

Please consider, I am rather open minded so long as you don't get vicious in how you argue. After all, I used to fanboy AMD pretty hard. Now I hate the FX series with a passion.

I commend you on your 4.9GHz overclock on your 8350. I would say the i5-4670K is the CPU of choice; the 4790K is the right CPU if you go i7.


750K, 760K or 860K? I am buying a 760K because I was an idiot a year ago and bought an FM2 (non-plus) motherboard thinking they were all the same, planning to put an 8350 into it. And now I can't go 860K.

All of them: the 750K, 760K and 860K.

  ﷲ   Muslim Member  ﷲ

KennyS and ScreaM are my role models in CSGO.

CPU: i3-4130 Motherboard: Gigabyte H81M-S2PH RAM: 8GB Kingston hyperx fury HDD: WD caviar black 1TB GPU: MSI 750TI twin frozr II Case: Aerocool Xpredator X3 PSU: Corsair RM650


You said: "AMD has implemented the AVX instruction set with Bdver4."

Yes I did; AVX2 is an AVX instruction set (expanded to 256-bit). If you're butthurt about which version I specified (AVX2), then deal with it. I'm only giving you things to gripe about.

tumblr_lp3vb3Fbe01qdiqs8.gif

 

If it requires 2 cycles to calculate an integer/floating point instr rather than one cycle there will be a 100% difference.

If you're using a CELL processor.

 

Right, let's start with some basics, explaining to a game developer: a thread would contain at least 1 instruction, and if we're leveraging Hyperthreading we have 2 threads, so at least 2 instructions. Now, a single ALU can only process one instruction at a time. Say our CPU has only one ALU and we're only doing integer calculations; then Hyperthreading won't add any performance, and one thread will wait for the other to finish. Say we have a single-core CPU, so we've got Thread A only, which just processes its instruction. We enable Hyperthreading and get Thread B; Thread B will complete its instruction as fast as Thread A. There's no such thing as two processors on a single core, and it's simultaneous, not synchronous, so your interference theory is complete BS.

I'm not just a game developer, so don't think of that as my only area of knowledge. I've spent my whole life around hardware and software design; it's not just a hobby like it is for people such as yourself, for me it's a passion. Like I said, Hyper-Threading is irrelevant to the subject I'm talking about. A Bulldozer module beats Hyper-Threading anyway, but what's the point? Exactly. Your explanation of ALUs and integer cores is completely off, especially once you throw Hyper-Threading into the mix. Hyper-Threading doesn't break apart a single instruction; it's two threads independently executing the instructions sent by the software. I would suggest you look up threading models and learn how software threads work in conjunction with hardware threads. A single software thread only scales well on a single hardware thread. This is why it's better to use completion ports than to spawn a thread for every single socket of a server: you'll easily degrade CPU performance by executing more software threads than there are cores (hardware threads). It may scale up a tiny bit (more than likely due to scheduling), though once you hit so many software threads, performance numbers just drop to the floor. Hardware is really only as powerful as the software running on it. This is why Hyper-Threading scales no better than 50%: both of those threads are severely crippled. You may be able to execute two instructions simultaneously, but they aren't as fast. If you know how completion ports work, I would tell you that Hyper-Threading works a lot like Berkeley sockets would if you had two sockets per server.
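The completion-port point, sketched in Python (the request payloads and handler are invented for illustration; a real server would queue I/O completions, not strings): instead of one software thread per connection, queue the work onto a pool sized to the hardware thread count.

```python
# Hedged sketch: a fixed-size worker pool (the completion-port idea)
# rather than one software thread per task, which oversubscribes cores.
import os
from concurrent.futures import ThreadPoolExecutor

def handle_request(payload: str) -> str:
    # Stand-in for per-connection work; real work would parse/respond.
    return payload.upper()

def serve(requests: list) -> list:
    # Pool size pinned to the hardware thread count, not the request count.
    workers = os.cpu_count() or 4
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(handle_request, requests))

assert serve(["ping", "pong"]) == ["PING", "PONG"]
```

However many requests arrive, the number of runnable software threads stays at the hardware thread count, which is the scheduling behavior the post is describing.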

 

Then you're a fool; that C15 run was singlethreaded, and the chart below shows how singlethreaded performance starts to tank when you pump a 2nd thread into the same module, so in a multithreaded scenario the single-core difference between AMD and Intel only becomes larger.

CB-Scaling1.png

And you're also forgetting that the front-end bottleneck is still to come, making the difference larger and larger.

What's your point with multi-thread charts and smack talk? I don't even read all of what you're saying, because we're talking about two different subjects here, homie. When you're ready to talk about single-thread performance, you know where it's at.

 

Bdver3 has its own decoder for every core.

SteamrollerProjection2-640x365.jpg

 

FX-8350-SuperPi.jpg

Feel free to think the 4300 is 30% slower than the 2500K, good joke.

I was going to call you an idiot... but you pretty much did that to yourself already. With that being said, how about we step up into the 21st century, where SSE and AVX have taken over? Hm?  ;)

 

Especially considering the x87 block was factory disabled.

 

If your game only takes advantage of 4 cores, you'll be comparing the singlethreaded performance between a 8350 & i5 obviously to find out which CPU is better and considering we're getting awful multithreading scaling from AMD it's extremely relevant here.

Obviously the i5 is better because of higher single-thread performance. Though once you step up to modern games, the already 2+ year old FX-8350 still shows its face in benchmarks. What you have to understand is that no one gives a rat's ass about floating-point performance on a microprocessor; I can tell you that first hand as someone who develops software. Most software doesn't even have a single floating-point value written in it. Where the cards fall performance-wise is single-thread integer performance (this is what we all mean by core performance). Hell, even my A10-6800K blows away mobile Haswell i5s in floating-point performance (check my benchmark thread). Though, like I said, that's not what's important when it comes to software. There are really only a couple of places where floating-point numbers are used heavily: games, 3D modeling, Photoshop, anywhere you're dealing with more than a single dimension. And even then the floating-point calculation being done is not all that heavy. Floating-point performance is the least of our worries.

