Prove AMD's Superiority To Me

Suika

I would say that is a fair perspective on the subject. Those games are indeed very thread-bound, and a lack of cores will lead to poor results, which is why the G3258 was ranked the lowest. Across most games (plenty of people just play MMOs), it does a fantastic job too. None of the CPUs you listed are bad in any way, either. People just need to care less about the brand and care only about the price-to-performance they are getting at any given time. The FX CPUs are two years old now. Nobody is saying people should still buy them with gaming in mind, as Intel has better alternatives. However, if someone already has a good AM3+ board and little cash to spend, an FX CPU will still get the job done rather nicely.

Yes, I'm aware that the Pentium excels in games that only require a couple of threads (up to three or so). I've seen it in my testing with slightly older games or games built on older/cheaper engines (indie games or MMOs), and for a user looking to play such games it can be a good choice. But when it comes to modern open-world games it leaves a lot to be desired, and in many cases it will suffer from bad stuttering and pauses, making the game basically unplayable. I'd rather have a consistently lower framerate with no stuttering; something like an Athlon 860K or FX-4300 will produce that kind of result, and IMHO it's better because the game stays playable, even if at lower framerates.

And yes, if the user already has a motherboard, an FX-6300 or FX-8320 can be appropriate in some cases. The GPU and screen resolution are the other VERY important factors to consider when it comes to gaming. The FX CPUs are a good match for mid-range GPUs at 1080p: an R9 280X or GTX 770 at most. Anything higher will be held back by the performance of an FX CPU even if the chip is heavily overclocked. That's where people get fooled into thinking AMD is a good bargain when it's not: they buy an R9 290X to pair with an FX-8350, which is not something I recommend (I fell into that trap myself).

In some games it will work fine (Battlefield 4 is probably the best example; my GTX 780 was NOT limited by the FX CPU playing that game online whatsoever), but most games will suffer from low minimums and very inconsistent GPU loads (which honestly sucks when you have such a good GPU sitting there waiting for the CPU to feed it more work).

| CPU: Core i7-8700K @ 4.89ghz - 1.21v  Motherboard: Asus ROG STRIX Z370-E GAMING  CPU Cooler: Corsair H100i V2 |
| GPU: MSI RTX 3080Ti Ventus 3X OC  RAM: 32GB T-Force Delta RGB 3066mhz |
| Displays: Acer Predator XB270HU 1440p Gsync 144hz IPS Gaming monitor | Oculus Quest 2 VR


Yes I did. AVX2 is an AVX instruction set extension (expanded to 256-bit integer ops). If you're butthurt over me specifying which version (AVX2), then deal with it. I'm only giving you things to gripe about.

AVX2 is largely an integer SIMD extension. Sandy Bridge only had 128-bit SIMD integer units, which were widened to 256-bit in Haswell to support AVX2; AVX itself is the 256-bit extension for regular floating point: http://www.realworldtech.com/haswell-cpu/4/
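
To make the integer-SIMD point concrete, here is a minimal sketch of my own (not from the article) using the <immintrin.h> intrinsics: a single _mm256_add_epi32 adds eight 32-bit integers per instruction.

// build: g++ -O2 -mavx2 add_i32.cpp (needs a Haswell-or-newer CPU to run)
#include <immintrin.h>  // AVX2 intrinsics
#include <cstdint>
#include <cstddef>

// Adds two int32 arrays, eight lanes per AVX2 instruction.
void add_i32(const int32_t* a, const int32_t* b, int32_t* out, std::size_t n) {
    std::size_t i = 0;
    for (; i + 8 <= n; i += 8) {
        __m256i va = _mm256_loadu_si256(reinterpret_cast<const __m256i*>(a + i));
        __m256i vb = _mm256_loadu_si256(reinterpret_cast<const __m256i*>(b + i));
        _mm256_storeu_si256(reinterpret_cast<__m256i*>(out + i),
                            _mm256_add_epi32(va, vb));  // 256-bit integer add
    }
    for (; i < n; ++i) out[i] = a[i] + b[i];  // scalar tail
}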

 

If you're using a CELL processor.

The Cell is one general-purpose core (the PPE) plus eight SPEs, one of them disabled, and the SPEs have nothing to do with Hyperthreading; they don't use it. Not much of a point there; you pulled that out of nowhere.

Nope. Imagine you have two threads on a single core with Hyperthreading and one ALU/FPU: the first thread issues an integer instruction, the second a floating-point instruction. Without Hyperthreading it would take two cycles, the first completing the integer instruction and the second completing the FP instruction. With Hyperthreading the two threads run simultaneously and both instructions finish in the same cycle. Two cycles down to one; in that ideal case, twice the throughput.
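
As a toy sketch of that scenario (my own illustration, not a benchmark; real SMT gains depend on the workload mix, and you'd need OS-specific affinity calls to force both threads onto one physical core to measure it properly), one integer-heavy and one FP-heavy thread running concurrently:

// build: g++ -O2 -pthread smt_toy.cpp
#include <cstdio>
#include <thread>

// Two workloads that stress different execution units of a core.
long int_work(long n) { long s = 0; for (long i = 0; i < n; ++i) s += i * 3 + 1; return s; }
double fp_work(long n) { double s = 0.0; for (long i = 0; i < n; ++i) s += i * 0.5; return s; }

int main() {
    long a = 0; double b = 0.0;
    std::thread t1([&] { a = int_work(200000000); });  // keeps the integer ALUs busy
    std::thread t2([&] { b = fp_work(200000000); });   // keeps the FPU busy
    t1.join(); t2.join();
    std::printf("%ld %f\n", a, b);  // use the results so nothing is optimized away
}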

 

 A Bulldozer module kills Hyper-Threading anyway, but what's the point? Exactly. 

 

Have we ever argued Hyperthreading vs CMT? Not really. 

 

 

Hyper-Threading doesn't break apart a single instruction. It's two threads independently executing the instructions sent by the software.

I never said Hyper-Threading breaks a single instruction apart. Seems like you're having massive issues following your own discussion.

 

 

It's two threads independently executing the instructions sent by the software.

This is like saying the grass is green. You're not proving anything with this; wake up.

 

 

A single software thread only scales well on a single hardware thread. 

By saying it only scales *well* on a single hardware thread, you're implying that a single software thread can execute on more than one hardware thread at all. It can't. Done.

 

I would suggest you look up threading models and learn how software threads work in conjunction with hardware threads. 

Says the guy who just claimed you can execute a single thread on more than one hardware thread. I suggest you learn a thing or two about threads and processes. I'll explain what Hyperthreading is about: a single thread isn't always capable of keeping all of a core's execution resources busy, and a second hardware thread on that core (which is what Hyperthreading is) allows you to use the resources that would otherwise sit idle.

 

 

You'll easily degrade CPU performance by executing more software threads than there are cores (hardware threads). 

Oh really? Except that when you enable HT on a quad-core and run 8 software threads, you get a performance boost, so your statement above completely contradicts your own Hyperthreading theory.
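
For what it's worth, this is also why well-behaved software sizes its worker count to what the OS reports instead of guessing; a generic sketch (not anyone's actual game code):

// build: g++ -O2 -pthread pool.cpp
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    // Logical CPUs as the OS sees them: cores x hardware threads per core,
    // so HT/SMT is already counted. The call may return 0 if unknown.
    unsigned n = std::thread::hardware_concurrency();
    if (n == 0) n = 4;  // assumption: fall back to something sane

    std::vector<std::thread> workers;
    for (unsigned i = 0; i < n; ++i)
        workers.emplace_back([i] { std::printf("worker %u\n", i); });  // one slice of work each
    for (auto& t : workers) t.join();
}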

 

 

This is why Hyper-Threading scales no better than 50%, because both of the threads are severely crippled. 

Hyperthreading doesn't cripple the first thread's performance. Stop thinking there's one real core and one fake core.

 

 

What's your point with multiple-thread charts and smack talk? I don't even read all of what you're saying, because we're talking about two different subjects here, homie. When you're ready to talk about single-thread performance, you know where it's at.

Other than that you're failing to understand that the first thread's performance degrades when you put a second thread on the second hardware thread of the same module, which is what that Cinebench graph pointed out. That doesn't happen on Intel at all, which means the IPC difference is even larger.

 

Bdver3 has its own decoder for every core.

 

Except they aren't cores. An ALU cluster isn't a core; a back-end alone doesn't form a core at all. A dual core would at least have two front-ends, and a single module only has one, the reason being space savings. Giving each ALU cluster its own dedicated front-end wouldn't make much sense, since that wouldn't save any space. The module itself is the core; doubling a cluster doesn't make it a dual core.

 

 

I was going to call you an idiot... but you pretty much did that to yourself already. With that being said... how about we step into the 21st century, where SSE and AVX have taken over. Hm?   ;)

You already did that yourself, by lying about Nvidia stealing Mantle code and putting it in their own drivers (according to your own sources), and by still claiming every monitor can get variable refresh rate with a single firmware update when a monitor manufacturer was offering a PCB swap for exactly that. The best part is that you claim to be a game developer. I asked you to prove it, yet you apparently have no clue how threads and processes work, and no clue how SMT works.

 

 

Obviously the i5 is better because of higher single-thread performance. Though once you step up into modern games, the already 2+ year old FX-8350 still shows its face in benchmarks. What you have to understand is that no one gives a rat's ass about floating-point performance on a microprocessor. I can tell you that first-hand as someone who develops software. Most software doesn't even have a single floating-point value written in it. Where the cards stand for performance is single-thread integer performance (this is what we all mean by core performance). Hell, even my A10-6800K blows away mobile Haswell i5s in floating-point performance (check my benchmark thread). Though, like I said, that's not what's important when it comes to software. There are really only a couple of places where floating-point numbers are used heavily: games, 3D modeling, Photoshop; whenever you're dealing with more than a single dimension. Even then, the floating-point calculation being done is not all that heavy. Floating-point performance is the least of our worries.

ROFL. Do you even realize that the SIMD units, what you're calling "the FPU," also process integer calculations? AVX2 is largely an integer SIMD extension; Intel's architects applied the same conceptual technique to achieve single-cycle 256-bit SIMD integer execution.

The units that moved from 128-bit (Sandy Bridge) to 256-bit (Haswell) are the SIMD integer units.

[image: FcRWfHy.png]

Linking it again so you can have a nice read: http://www.realworldtech.com/haswell-cpu/4/

Games rely heavily on SIMD performance, and that's a known fact. Twist it as much as you want; you'll just prove that you aren't a game developer.


FX-8320 at 4.2GHz over here. I preferred it because I got my processor for $124, and it gives me solid multithreaded performance.

 

Streaming is a breeze. I'll move to Intel eventually but for right now AMD is fine. No game I play suffers because I have an AMD CPU. 

Abigail: Intel Core i7-4790k @ 4.5GHz 1.170v / EVGA Nvidia GeForce GTX 980 Ti Classified  / ASRock Z97 Extreme6 / Corsair H110i GT / 4x4Gb G.Skill Ares 1866MHz @ CAS9 / Samsung 840 EVO 250Gb SSD / Seagate Barracuda 1TB 7200RPM / NZXT H440 Blue / EVGA SuperNOVA 750w G2

Peripherals: BenQ XL2411z 24" 144hz 1080p / ASUS VG248QE 24" 144Hz 1080p / Corsair Vengeance K70 RGB / Logitech G502 / Sennheiser HD650 / Schiit Audio Modi 2 / Magni 2 / Blue Yeti Blackout

If you want to improve the single-threaded performance of an FX CPU a lot, you should disable every second core inside each module.

So cores 2/4/6/8, then overclock the cores that are left: 1/3/5/7.

This will improve single-threaded performance significantly.
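
For a single program you can approximate the same thing without touching the BIOS, by pinning to one core per module. A Linux-only sketch, assuming the usual FX numbering where logical CPUs (0,1), (2,3), (4,5), (6,7) are module pairs; it only helps if nothing else gets scheduled on the idle siblings:

// build: g++ -pthread pin.cpp
#define _GNU_SOURCE  // for pthread_setaffinity_np (glibc)
#include <pthread.h>
#include <sched.h>

// Restrict the calling thread to the first core of each of the 4 modules.
int pin_one_core_per_module(void) {
    cpu_set_t set;
    CPU_ZERO(&set);
    for (int cpu = 0; cpu < 8; cpu += 2)  // CPUs 0, 2, 4, 6
        CPU_SET(cpu, &set);
    return pthread_setaffinity_np(pthread_self(), sizeof(cpu_set_t), &set);
}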


Troll.

 

People who start these threads should get the ban hammer.

 

If you want to read fanboy flame wars, there's plenty of threads already out there for that.

 

lol :D

 

This community is already full of flame wars; one more isn't going to make a difference.


Troll.

 

People who start these threads should get the ban hammer.

 

If you want to read fanboy flame wars, there's plenty of threads already out there for that.

Because I wanted to see if there was an actual purpose to buying an FX chip? Curiosity killed the cat, hm.

 

To my understanding it's foolish to, but I just want to see if there's a proper argument or reason behind it, aside from a few poorly conducted benchmarks or unfair comparisons (there aren't that many situations where you see 8 threads in use, for example). If the FX series is so poor, I wanted to know whether there were legitimate reasons it has somehow stayed afloat in the market. I guess it's just fanboys.

if you have to insist you think for yourself, i'm not going to believe you.


If you want to improve the single-threaded performance of an FX CPU a lot, you should disable every second core inside each module.

So cores 2/4/6/8, then overclock the cores that are left: 1/3/5/7.

This will improve single-threaded performance significantly.

What kind of motherboard do you need to magically shut down only part of a module?! I've never been able to do that; I was only able to shut down entire modules.

| CPU: Core i7-8700K @ 4.89ghz - 1.21v  Motherboard: Asus ROG STRIX Z370-E GAMING  CPU Cooler: Corsair H100i V2 |
| GPU: MSI RTX 3080Ti Ventus 3X OC  RAM: 32GB T-Force Delta RGB 3066mhz |
| Displays: Acer Predator XB270HU 1440p Gsync 144hz IPS Gaming monitor | Oculus Quest 2 VR


Because I wanted to see if there was an actual purpose to buying an FX chip? Curiosity killed the cat, hm.

To my understanding it's foolish to, but I just want to see if there's a proper argument or reason behind it, aside from a few poorly conducted benchmarks or unfair comparisons (there aren't that many situations where you see 8 threads in use, for example). If the FX series is so poor, I wanted to know whether there were legitimate reasons it has somehow stayed afloat in the market. I guess it's just fanboys.

This would have saved you so much time:

 

https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=Why+is+the+FX-8350+better+than+an+i5-4460%3F


If you want to improve the single-threaded performance of an FX CPU a lot, you should disable every second core inside each module.

So cores 2/4/6/8, then overclock the cores that are left: 1/3/5/7.

This will improve single-threaded performance significantly.

I've actually been wondering the past few days what performance would be like if AMD had dropped the module idea and stuck with 4 physical cores. Could they have gotten per-core performance on par with the i5s? Or would it be just as poor as the 8350's, just with fewer cores?

 

Interesting to think about, though.

if you have to insist you think for yourself, i'm not going to believe you.


If Tek Syndicate is any example, flawed reasoning and testing exist everywhere. I trust people here a bit more, seeing some users I could only hope to match in terms of intelligence.

if you have to insist you think for yourself, i'm not going to believe you.


What kind of motherboard do you need to magically shut down only part of a module?! I've never been able to do that; I was only able to shut down entire modules.

If I'm not wrong, some motherboards only allow you to disable whole modules. So 4 modules in total, and you can disable 1-3 of them.


If Tek Syndicate is any example, flawed reasoning and testing exist everywhere. I trust people here a bit more, seeing some users I could only hope to match in terms of intelligence.

https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=Why+is+the+FX-8350+better+than+an+i5-4460%3F+linustechtips


If I'm not wrong, some motherboards only allow you to disable whole modules. So 4 modules in total, and you can disable 1-3 of them.

I was definitely in that situation... do you think you can really improve single-threaded performance for games that only use about 4 main threads by doing that?!

| CPU: Core i7-8700K @ 4.89ghz - 1.21v  Motherboard: Asus ROG STRIX Z370-E GAMING  CPU Cooler: Corsair H100i V2 |
| GPU: MSI RTX 3080Ti Ventus 3X OC  RAM: 32GB T-Force Delta RGB 3066mhz |
| Displays: Acer Predator XB270HU 1440p Gsync 144hz IPS Gaming monitor | Oculus Quest 2 VR


I was definitely in that situation... do you think you can really improve single-threaded performance for games that only use about 4 main threads by doing that?!

Yes. You get the full 256-bit FPU, and the front-end (everything above the back-end) suffers from much less overhead. It's like 10-20% AFAIK.


The only problem is that not all AM3+ boards support this, or at least not every BIOS.

But if you have a board that supports disabling every second core inside a module, it will help a whole lot.


If you want to improve the single-threaded performance of an FX CPU a lot, you should disable every second core inside each module.

So cores 2/4/6/8, then overclock the cores that are left: 1/3/5/7.

This will improve single-threaded performance significantly.

I wouldn't recommend doing so, as you will only lose performance overall. In a select few cases the 4-module/4-core method helps, but only by 2-5% at most, and that was all prior to KB2645594. If you run Windows 8 or later you won't see any change in performance, as the task scheduler already handles modules the same way it handles Hyper-Threading.

 

Yes. You get the full 256-bit FPU, and the front-end (everything above the back-end) suffers from much less overhead. It's like 10-20% AFAIK.

You're really hitting the nail on the head, aren't you? Each core already has 256-bit instruction support thanks to the shared FlexFPU; sacrificing a core changes nothing, especially when there are no AVX2 instructions to speed things up. It will be exactly the same either way, which makes it pointless to disable a core when there is no benefit to it. Seriously, request a name change to WCCFtech.  :lol:


I was definitely in that situation... do you think you can really improve single-threaded performance for games that only use about 4 main threads by doing that?!

 

 

The only problem is that not all AM3+ boards support this, or at least not every BIOS.

But if you have a board that supports disabling every second core inside a module, it will help a whole lot.

 

Most 990 boards (all the ones I've used) do support this. It's not so much that it makes single-threaded performance go up; it just gives one core the ability to use all of the module's resources instead of sharing them, even when the other core isn't being used.

 

The problem is that on some boards, if you tell it to shut off a single core, it will shut off the entire module.


Because I wanted to see if there was an actual purpose to buying an FX chip? Curiosity killed the cat, hm.

To my understanding it's foolish to, but I just want to see if there's a proper argument or reason behind it, aside from a few poorly conducted benchmarks or unfair comparisons (there aren't that many situations where you see 8 threads in use, for example). If the FX series is so poor, I wanted to know whether there were legitimate reasons it has somehow stayed afloat in the market. I guess it's just fanboys.

 

Problem is, you made the thread completely impossible for anyone to make such a claim. There is no way the FX, from October 2012, could compete with an entirely new series of processors from May 2014. Rewind two years and the argument COULD be made, though the results would be the same, with Intel fanboys pulling synthetic benchmarks out of nowhere and throwing in games that are already known to be optimized for one side or the other.

 

Intel's CPUs currently win because Amdahl's law applies to every application, and the slower AMD cores suffer greatly there. Throwing a ton of cores at any single application (a game, for example) will not result in more performance; eventually, diminishing returns apply to core counts. The 8 cores came in handy for extreme multitasking, and that was it. The FX series is still great for people who want to play games and work at the same time. However, with the socket being dead, I do not suggest anyone buy them at this point, unless the price were extremely right for their budget.
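
To put a number on Amdahl's law (my own worked example with an assumed parallel fraction, not measured data): with parallel fraction p and n cores, speedup = 1 / ((1 - p) + p/n), so a game that is 60% parallel tops out quickly no matter how many cores you throw at it.

#include <cstdio>

// Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n) for parallel fraction p.
double amdahl(double p, int n) { return 1.0 / ((1.0 - p) + p / n); }

int main() {
    const double p = 0.6;  // assumption: 60% of the frame time parallelises
    for (int n : {1, 2, 4, 8, 16})
        std::printf("%2d cores -> %.2fx\n", n, amdahl(p, n));
    // prints roughly 1.00x, 1.43x, 1.82x, 2.11x, 2.29x: diminishing returns
}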

 

I still see the thread is full of graphs being thrown at each other, without anyone actually having used the FX CPUs and the Intel CPUs themselves to form their own opinion, so I predict it will end the same way regardless. Even when everyone agrees on Intel's superiority in this scenario, AMD will still be bashed until there is nothing left. To those who say the FX was never formidable and still claim it is "twice as slow as Intel": I call you a fool.

 

-MageTank

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


Problem is, you made the thread completely impossible for anyone to make such a claim. There is no way the FX, from October 2012, could compete with an entirely new series of processors from May 2014. Rewind two years and the argument COULD be made, though the results would be the same, with Intel fanboys pulling synthetic benchmarks out of nowhere and throwing in games that are already known to be optimized for one side or the other.

Intel's CPUs currently win because Amdahl's law applies to every application, and the slower AMD cores suffer greatly there. Throwing a ton of cores at any single application (a game, for example) will not result in more performance; eventually, diminishing returns apply to core counts. The 8 cores came in handy for extreme multitasking, and that was it. The FX series is still great for people who want to play games and work at the same time. However, with the socket being dead, I do not suggest anyone buy them at this point, unless the price were extremely right for their budget.

I still see the thread is full of graphs being thrown at each other, without anyone actually having used the FX CPUs and the Intel CPUs themselves to form their own opinion, so I predict it will end the same way regardless. Even when everyone agrees on Intel's superiority in this scenario, AMD will still be bashed until there is nothing left. To those who say the FX was never formidable and still claim it is "twice as slow as Intel": I call you a fool.

 

-MageTank

I've used both CPUs, with the Intel as my daily, but I have experience with both. When it's so bad that my friend with the FX can't play the same games as me, that's a problem.

 

Also, going back two years: even the i5-2500K was still handing it to the FX. It's not like the FX was good even when it first launched.

"I genuinely dislike the promulgation of false information, especially to people who are asking for help selecting new parts."


What games can the Intel play that the FX cannot? Now you've piqued my curiosity to the point where I'm going to grab my FX box and start testing things. Also, the i5-3570K didn't perform any better than the FX at gaming; you expect me to believe the 2500K can outperform it to such a degree that it's extremely noticeable?

 

-MageTank

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


What games can the Intel play that the FX cannot? Now you've piqued my curiosity to the point where I'm going to grab my FX box and start testing things. Also, the i5-3570K didn't perform any better than the FX at gaming; you expect me to believe the 2500K can outperform it to such a degree that it's extremely noticeable?

 

-MageTank

MMOs, DayZ, ARMA3, Indies, Emulators, games that are heavily CPU bound.

 

[images: H93GZC3.png, batman.png, civilization.png, plus three gamegpu.ru CPU benchmark charts]

Even this supposedly very good multi-threaded game, Call of Duty: Advanced Warfare, runs better on an i3 than an FX-9:

[four more gamegpu.ru CPU benchmark charts]

[images: 60-Bioshock-R9-295X2.png, 65-DiRT-3-R9-295X2.png, arma3_1920.png, bf4_cpu_radeon.png]

You have to OC an FX8 to 5GHz just to match a stock i5-4440 in BF4 multiplayer with an R9 290X.

[images: civ_1920.png, csgo_1920.png, crysis3_1920_2.png, fc3_1920.png, fc4_n_1920.png, starcraft_1920.png, gta4_1920.png, rome2_1920.png, witchercpu_1920.png]

The witchercpu_1920.png chart above is Witcher 2.

[images: assassin_1920n.png, fsx_1920n.png]

These are just a few games, and obviously skewed towards Intel, but my point is to illustrate that some games run very poorly on the weak cores of FX processors. Why buy a processor that can only play 4 out of 5 games when you can pay the same and play 5 out of 5?

 

There was a free MMO recently released called ArcheAge. It was built on CryEngine, but FX processors had a massive problem running it. It was so severe that it was not uncommon for people to shout out their PC specs when looking for a group. This is a game built around massive world encounters where you have 100+ people on screen multiple times daily. Raiding was impossible with FX processors in almost all MMOs.

 

 

^ Two-man grouping result in Guild Wars 2. Raiding = not going to get acceptable frame rates.

 

 

^ DayZ online results: this is online, with the guy running up a hill doing nothing, only getting 28fps. As soon as he gets into a firefight or enters a town, boom, down into the 10s.

https://www.youtube.com/watch?v=xDJ5655on6A

"I genuinely dislike the promulgation of false information, especially to people who are asking for help selecting new parts."


I played Guild Wars 2 with my FX at times when 200 people were on a world boss, and I did fine. I'm calling shenanigans on that Skyrim graph, as mine is heavily modded with 4K textures and I still average over 90fps with a 2GB GTX 770, with my minimum FPS never dropping under 60. I played the ArcheAge beta, and I recall both Intel and AMD CPU owners complaining about the frame rates; as soon as Nvidia released graphics drivers, I myself had no problem. Granted, we never had 100+ people on screen at any given time, but it was a beta, and not many people were doing the same things at the same time.

 

I can say that out of the games in my Steam library, the performance difference between the 8320 and the i5-3570K is seldom noticeable. The only times I've seen miracle frame rates from the Intel have been in certain MMO scenarios. In WoW, the 3570K managed to gain 30+ FPS over the 8320. To be expected from most MMOs.

 

I have no idea what went wrong in that BF4 graph, but it can't be accurate either. In 64-man servers my stock 8320 still averaged 40-50fps with a GTX 770. Any source for these graphs? Do they include which drivers were used, and what patch each game was on? You pluralized "games," meaning multiple games were unplayable on the FX compared to Intel, but I don't see many games in these graphs that are "unplayable," except maybe Flight Simulator. I don't own many of these games, but the ones I do own differ greatly from the performance listed in these graphs, and my PC is below their specs.

 

-MageTank

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


Do I need to bring out the per thread performance again?

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


I played Guild Wars 2 with my FX at times when 200 people were on a world boss, and I did fine. I'm calling shenanigans on that Skyrim graph, as mine is heavily modded with 4K textures and I still average over 90fps with a 2GB GTX 770, with my minimum FPS never dropping under 60. I played the ArcheAge beta, and I recall both Intel and AMD CPU owners complaining about the frame rates; as soon as Nvidia released graphics drivers, I myself had no problem. Granted, we never had 100+ people on screen at any given time, but it was a beta, and not many people were doing the same things at the same time.

I can say that out of the games in my Steam library, the performance difference between the 8320 and the i5-3570K is seldom noticeable. The only times I've seen miracle frame rates from the Intel have been in certain MMO scenarios. In WoW, the 3570K managed to gain 30+ FPS over the 8320. To be expected from most MMOs.

I have no idea what went wrong in that BF4 graph, but it can't be accurate either. In 64-man servers my stock 8320 still averaged 40-50fps with a GTX 770. Any source for these graphs? Do they include which drivers were used, and what patch each game was on? You pluralized "games," meaning multiple games were unplayable on the FX compared to Intel, but I don't see many games in these graphs that are "unplayable," except maybe Flight Simulator. I don't own many of these games, but the ones I do own differ greatly from the performance listed in these graphs, and my PC is below their specs.

 

-MageTank

200 people on a world boss with playable frame rates is impossible. That would be like 4 FPS on AMD and 8 FPS on Intel, or even 0-1 FPS for both CPUs. You'll only see performance gains from Intel if your CPU was bottlenecking your GPU; don't expect a difference between a Pentium 1 and a 5960X if they both push 4x 980s each to 99% usage.


200 people on a world boss with playable frame rates is impossible. That would be like 4 FPS on AMD and 8 FPS on Intel, or even 0-1 FPS for both CPUs. You'll only see performance gains from Intel if your CPU was bottlenecking your GPU; don't expect a difference between a Pentium 1 and a 5960X if they both push 4x 980s each to 99% usage.

 

I can easily log on to GW2 and go to World vs World to show you, if it would make you feel better. There are far more people in WvW than at world bosses. In WvW I average 30fps on max details, depending on exactly what is going on in the frame. GW2 also had a major optimization patch in September 2013, making it far more playable than it was at launch.

 

-MageTank

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 

