
AMD FX-8370 vs Intel i7-5960X [GTX 970 SLI 4K benchmarks]

This is frames per second; it says so right in the top-right corner. It means the 0.1% low of the 5960X was 14-17 FPS, while for the 8370 it was 32-35 FPS.

It means that at both the 0.1% and the 1% variance level, the 5960X, according to this benchmark, did worse in both cases.

 

That means the numbers are very strange, and it doesn't account for the non-existent scaling with overclocking, even though the CPU seems to matter in this benchmark.

 

Scaling at GPU bottleneck

[Image: CPU_02.png]

 

Scaling at CPU bottleneck

[Image: CPU_03.png]

 

The graph from the benchmark in question shows both happening, which makes no sense.

 

Buddy, I wrote the article. I know what it says. Do me a favor: run a game of your choice and record the FPS in FRAPS, making sure "min max avg" and "frame time variance" are checked. Once done, open the frame time variance file in FRAFS (google it) and look at the results. You'll notice that the 1% and 0.1% results are reported in both milliseconds and FPS (they are the exact same values, just displayed differently); this is not at all the minimum or maximum FPS recorded. Instead, it is the average of the worst FPS you can expect 99 percent of the time (1%) and 99.9 percent of the time (0.1%). These metrics are used to quantify noticeable stutter or judder that can appear in games.
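For anyone who wants to sanity-check these figures themselves, below is a rough sketch of one common way 1% and 0.1% "lows" are derived from a dump of frame times. FRAFS's exact method may differ, and the file name and format here (one frame time in milliseconds per line) are assumptions for illustration only.

```python
# Rough sketch of one common way 1% / 0.1% lows are computed from frame times.
# FRAFS's exact method may differ; the file name and format are assumptions.
def percentile_low_fps(frametimes_ms, worst_fraction):
    """Average FPS across the worst `worst_fraction` of frames by frame time."""
    worst_first = sorted(frametimes_ms, reverse=True)        # slowest frames first
    count = max(1, int(len(worst_first) * worst_fraction))   # e.g. worst 1% of frames
    avg_ms = sum(worst_first[:count]) / count                 # mean frame time of that tail
    return 1000.0 / avg_ms                                    # same value expressed in FPS

with open("frametimes.csv") as f:                              # hypothetical frame time dump
    frametimes = [float(line) for line in f if line.strip()]

print(f"1%   low: {percentile_low_fps(frametimes, 0.01):.1f} FPS")
print(f"0.1% low: {percentile_low_fps(frametimes, 0.001):.1f} FPS")
```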


Buddy, I wrote the article. I know what it says. Do me a favor: run a game of your choice and record the FPS in FRAPS, making sure "min max avg" and "frame time variance" are checked. Once done, open the frame time variance file in FRAFS (google it) and look at the results. You'll notice that the 1% and 0.1% results are reported in both milliseconds and FPS (they are the exact same values, just displayed differently); this is not at all the minimum or maximum FPS recorded. Instead, it is the average of the worst FPS you can expect 99 percent of the time (1%) and 99.9 percent of the time (0.1%). These metrics are used to quantify noticeable stutter or judder that can appear in games.

 

Makes sense that you're trying to dilute my argument with your frametime red herring if you wrote this article.

Still doesn't explain the other factors.

 

 

 

Very interesting results. We can see that in terms of average FPS the 5960X outpaces the AMD chip by at least 5% (about 3-4 FPS). However, looking at our frame time variance, it fares much worse, showing what would be a fairly stuttery experience. This is exactly why these metrics are so important: they can reveal hidden performance issues.

 

Or your disingenuous way of interpreting the results for the audience. If, as you say, the 0.1% and 1% figures are in milliseconds, why are you putting them in a graph whose legend says "fps" on the x-axis? And why do you not disclose the fact that lower = better for frame times?


Makes sense that you're trying to dilute my argument with your frametime red herring if you wrote this article.

Still doesn't explain the other factors.

 

 

Or your disingenuous way of interpreting the results for the audience. If, as you say, the 0.1% and 1% figures are in milliseconds, why are you putting them in a graph whose legend says "fps" on the x-axis? And why do you not disclose the fact that lower = better for frame times?

 

Because I displayed the results in FPS? As I stated, FRAFS displays both ms and FPS; they are the same values, just expressed in different units. This is not an uncommon practice; GamersNexus does it all the time. It is easier. Also, lower does not equal better: lower ms = higher FPS, not lower. I also describe the frame time variance in the testing methodology.
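To make the unit relationship concrete, here is a minimal sketch of the conversion being described; the example values are made up for illustration:

```python
# Minimal sketch: a frame time in milliseconds and a frame rate in FPS are
# the same measurement expressed in different units. Example values are made up.
def ms_to_fps(frametime_ms):
    return 1000.0 / frametime_ms   # e.g. 25 ms per frame -> 40 FPS

def fps_to_ms(fps):
    return 1000.0 / fps            # e.g. 40 FPS -> 25 ms per frame

print(ms_to_fps(25.0))  # 40.0
print(fps_to_ms(40.0))  # 25.0
```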


Buddy, I wrote the article.

 

Why did you bother? Like... you went out of your way to create a GPU bottleneck just to prove that you had made a GPU bottleneck? You could just as well have engineered a scenario to "prove" that anything more than a Pentium is unnecessary by similar tactics. Other than being obstinately disingenuous for its own sake, what were you actually trying to show with this?


Because I displayed the results in FPS? As I stated, FRAFS displays both ms and FPS; they are the same values, just expressed in different units. This is not an uncommon practice; GamersNexus does it all the time. It is easier. Also, lower does not equal better: lower ms = higher FPS, not lower. I also describe the frame time variance in the testing methodology.

 

So you divided 1 by the frame times and added them as FPS to the graph. Then my original comment, that in your results the minimum FPS is lower on the 5960X, still holds. And I said lower frame times = better. Unless you're arguing lower FPS = better? You're deliberately trying to confuse/dilute the topic to rationalize your silly article where you pander to the AMD crowd to generate clicks. It's people like you that perpetuate the endless AMD vs. Intel debates on this forum, you disingenuous hack.

 

How many people do you think will now take to YouTube or other forums, armed with your article, and say "lol, look, $150 CPU beating $1000 CPU. Intel so overpriced"?

Oh wait:

http://linustechtips.com/main/topic/429785-amd-fx-build-vs-i5-build/page-2#entry5763852


Without going into the AMD vs Intel war that seems to be raging here at the moment, I've done some benchmarks of my own in the past where I underclocked my CPU (a 2600K) all the way down to 2.1 GHz. The original purpose of those benchmarks had nothing to do with this, but the funny thing I did see there is that my performance in The Witcher 3 showed something that seems to rear its head in these benchmarks as well.

 

At 2.1 GHz, frame rates in games at 1080p were obviously significantly worse than when overclocked to 4.6 GHz (about a 40 FPS delta, I believe; I'll have to check to be sure), but once I started to increase the resolution, the frame rates at 2.1 GHz quickly started to approach the 4.6 GHz results. At 4K there was less than a 10 FPS delta between my CPU overclocked and underclocked. The funny thing is that the frame rates at 4K @ 2.1 GHz were significantly higher than they were at 1080p @ 2.1 GHz.

 

It really does show that what most people claim is true: once you get to 4K and above, CPU bottlenecks all but disappear.

 

I'll try to post the data here when I get home later tonight. Perhaps I'll run some benchmarks at different resolutions up to 5K so I can put it all into a nice graph.

Desktop:     Core i7-9700K @ 5.1GHz all-core = ASRock Z390 Taichi Ultimate = 16GB HyperX Predator DDR4 @ 3600MHz = Asus ROG Strix 3060ti (non LHR) = Samsung 970 EVO 500GB M.2 SSD = ASUS PG279Q

 

Notebook:  Clevo P651RG-G = Core i7 6820HK = 16GB HyperX Impact DDR4 2133MHz = GTX 980M = 1080p IPS G-Sync = Samsung SM951 256GB M.2 SSD + Samsung 850 Pro 256GB SSD


Every now and then a thread is created that both confuses and interests me, and this is one of them. Unfortunately, threads like that always seem to turn "controversial". For whatever reason, "controversial" threads pit knowledgeable users against other knowledgeable users, but the real question is whose knowledge is actually correct, and that is what the reader of this thread must determine for themselves. All the benchmarks in the world won't convince a stubborn user to change their position, no matter how right or wrong that user is; it is their choice to believe what they do.

 

So I find these types of threads fascinating for one simple reason: everyone thinks they are right. Some are outright correct, some are arguably correct, but many are simply wrong because they don't know what they are talking about and have been led down a path of misinformation. That misinformation comes, in the grand scheme of things, from the, shall I say, "tech news media" and the cluster#$%@ it is: countless reviewers who form different conclusions and sell them to the reader. A reviewer believes in their work, which is fine, but that does not mean their work is correct or should be believed; yet most of the time it is.

 

Years' worth of bad reviews filled with bad data and wrong conclusions have formed a belief that AMD is terrible for gaming. However, as we all know, such a conclusion is not that simple: there are many factors across many different scenarios that can determine which CPU comes out ahead. But this key point is still being ignored.

Why? 

It's again fairly simple: the review articles leave that out. What has the author of this review done? They have tested one CPU against another. But anyone somewhat aware of PC hardware knows these CPUs are in entirely different categories and are sold, and exist, for different reasons. Very few people would ever buy a 5960X solely to game on. So why try to show an audience that, when in fact the people who should care about that CPU's performance are busy doing real work with their computers, not reading one of hundreds of these types of reviews?

 

The long and the short of it is: the article is bent, and while the data may be legit, it has no valid point other than showing that the AMD FX line is not entirely horrid but isn't the best, which we already knew. So other than pure clickbait or marketing fodder, this set of benchmarks is nothing new or special. It is, in my opinion, an example of the poor, wrong, or misleading articles and review pieces that have corrupted many techies' minds. This is why newbies come onto the forum and ask if they need an i7 or a $1000 CPU because they want to go with an Intel CPU, because because because...

 

 

If this had been an FX-8370 against an i5-6600K or a 6700K, or a Devil's Canyon version of the two, the article/review would have been much more compelling, and my opinion would possibly change. This whole thing also grinds my gears because AMD used it in a Facebook ad...

 

And because I see the author is/was browsing this thread, I would like to add that this isn't meant to be an all-out attack on your piece, but criticism. I mean no harshness toward you or your organization; I just wish to see a better standard across the entire industry.

 

At the present local time of 3:10AM, I don't care about grammar that much. 


Corsair 400C- Intel i7 6700- Gigabyte Gaming 6- GTX 1080 Founders Ed. - Intel 530 120GB + 2xWD 1TB + Adata 610 256GB- 16GB 2400MHz G.Skill- Evga G2 650 PSU- Corsair H110- ASUS PB278Q- Dell u2412m- Logitech G710+ - Logitech g700 - Sennheiser PC350 SE/598se


Is it just me or is Grammar slowly becoming extinct on LTT? 

 


I don't believe this shit for one second. There is no way an OC'd 5960X in SLI is outperformed by an 8370, and those games are not DX12 either, so fuck that. The article is fake; shitty website.

All FX 8xxx chips are shit compared to Ivy Bridge and beyond, maybe even Sandy Bridge.


I don't believe this shit for one second. There is no way an OC'd 5960X in SLI is outperformed by an 8370, and those games are not DX12 either, so fuck that. The article is fake; shitty website.

All FX 8xxx chips are shit compared to Ivy Bridge and beyond, maybe even Sandy Bridge.

Yup... just when one user comes along with a solid explanation and tries to calm the flame war, comments like yours come up to start it again.

I'd ban every comment like this one even if the "statement" might be right.

You can say it politely and back up your idea with arguments.

MARS_PROJECT V2 --- RYZEN RIG


 CPU: R5 1600 @3.7GHz 1.27V | Cooler: Corsair H80i Stock Fans@900RPM | Motherboard: Gigabyte AB350 Gaming 3 | RAM: 8GB DDR4 2933MHz(Vengeance LPX) | GPU: MSI Radeon R9 380 Gaming 4G | Sound Card: Creative SB Z | HDD: 500GB WD Green + 1TB WD Blue | SSD: Samsung 860EVO 250GB  + AMD R3 120GB | PSU: Super Flower Leadex Gold 750W 80+Gold(fully modular) | Case: NZXT  H440 2015   | Display: Dell P2314H | Keyboard: Redragon Yama | Mouse: Logitech G Pro | Headphones: Sennheiser HD-569

 


So you divided 1 by the frame times and added them as FPS to the graph. Then my original comment, that in your results the minimum FPS is lower on the 5960X, still holds. And I said lower frame times = better. Unless you're arguing lower FPS = better? You're deliberately trying to confuse/dilute the topic to rationalize your silly article where you pander to the AMD crowd to generate clicks. It's people like you that perpetuate the endless AMD vs. Intel debates on this forum, you disingenuous hack.

 

How many people do you think will now take to YouTube or other forums, armed with your article, and say "lol, look, $150 CPU beating $1000 CPU. Intel so overpriced"?

Oh wait:

http://linustechtips.com/main/topic/429785-amd-fx-build-vs-i5-build/page-2#entry5763852

 

Sir, I do not aim to confuse or dilute any argument. I'm fairly certain there's been a miscommunication somewhere along the line. It is now 6:45 AM local and I've been typing a lot of replies to a lot of different folks, so perhaps I did not take the proper amount of time to explain. I honestly don't know what this means: "So you divided 1 by the frame times and added them as FPS to the graph." I did not divide anything; as I said, FRAFS does all the frame time variance calculations, and I simply add them to the graphs. The graph does not say "min max average", so assuming that is what those results are is simply the product of not reading the testing methodology, which I can't help.

 

 

If this had been an FX-8370 against an i5-6600K or a 6700K, or a Devil's Canyon version of the two, the article/review would have been much more compelling, and my opinion would possibly change. This whole thing also grinds my gears because AMD used it in a Facebook ad...

 

And because I see the author is/was browsing this thread, I would like to add that this isn't meant to be an all-out attack on your piece, but criticism. I mean no harshness toward you or your organization; I just wish to see a better standard across the entire industry.

 

At the present local time of 3:10AM, I don't care about grammar that much. 

 

I appreciate the mature and thoughtful criticism, and I take no personal offense at what you or anyone else has said here. I also understand that it can be irritating to see AMD use this report to advertise their processors. I can honestly say that I had no connection with that, as I never asked them to post the article, nor did I even share it with them.

 

My original intention with the article was to showcase not only that AMD is a decent alternative for gaming, but that you don't need an expensive CPU to play modern, demanding titles or your favorite esports titles. I've seen a lot of people claim that you need an Intel CPU to play CS:GO competitively, and I always found those claims to be without merit, so I decided to put my money where my mouth is and actually do the tests. I suppose there are some areas where I could have improved my testing methods, and in the future I will consider all that others have suggested, but I write these reports out of love for technology and PC gaming. I do not hold any loyalty to one company or another (not implying you suggested that, just stating it for the record), and my aim is always to help out the little guy.

 

With all that said, it is time I get some sleep.


I would believe many things, like it being on par or something if not all 8 cores are used on the 5960X (Linus did a video about that), even given that the 8370 doesn't really have 8 cores as such. But when you see that the 8370 has a higher minimum FPS than the 5960X, that's when you realize there is something wrong with the article.

 

[Image: fx-8370-vs-5960x_gaming-gtav_gtx970-sli.]

 

17 vs 36 minimum? Yeah not happening.

The ability to google properly is a skill of its own. 


Yup... just when one user comes along with a solid explanation and tries to calm the flame war, comments like yours come up to start it again.

I'd ban every comment like this one even if the "statement" might be right.

You can say it politely and back up your idea with arguments.

Click on the 2x 770 SLI min FPS and shut up; not even the 9590 can beat it. The AM3 platform is too old and so is the CPU architecture; it just can't in any way outperform an X99 CPU, especially in a dual-card config. And the article doesn't make sense: why does it even talk about DX12? None of those games have DX12, and we won't have any DX12 games any time soon.

> http://www.anandtech.com/show/8426/the-intel-haswell-e-cpu-review-core-i7-5960x-i7-5930k-i7-5820k-tested/6


Without going into the AMD vs Intel war that seems to be raging here at the moment, I've done some benchmarks of my own in the past where I underclocked my CPU (a 2600K) all the way down to 2.1 GHz. The original purpose of those benchmarks had nothing to do with this, but the funny thing I did see there is that my performance in The Witcher 3 showed something that seems to rear its head in these benchmarks as well.

 

At 2.1 GHz, frame rates in games at 1080p were obviously significantly worse than when overclocked to 4.6 GHz (about a 40 FPS delta, I believe; I'll have to check to be sure), but once I started to increase the resolution, the frame rates at 2.1 GHz started climbing rapidly. At 4K there was less than a 10 FPS delta between my CPU overclocked and underclocked. The funny thing is that the frame rates at 4K @ 2.1 GHz were significantly higher than they were at 1080p @ 2.1 GHz.

 

It really does show that what most people claim is true: once you get to 4K and above, CPU bottlenecks all but disappear.

 

I'll try to post the data here when I get home later tonight. Perhaps I'll run some benchmarks at different resolutions up to 5K so I can put it all into a nice graph.

 

While I have not done any in-depth testing, I can also say that whenever I play games at 1080p my CPU utilization goes 10% higher than when I play at 4K (native). Meanwhile, at 4K, GPU utilization stays at 99% constantly and power usage shoots up by about 50 W.

I would really like someone to explain why, when we get into a GPU-bottlenecked situation, the CPU is used less instead of more or the same. One would think that because the GPU is being used more, the CPU would be as well, since it pushes more data to the GPU.


My original intention with the article was to showcase not only that AMD is a decent alternative for gaming, but that you don't need an expensive CPU to play modern, demanding titles or your favorite esports titles.

 

And it shows. Because any unbiased researcher would have looked at these results and thought, "hang on, this doesn't make any sense; let me try to find out what went wrong." Unless you're trying to spin a story. Therefore this article seems incredibly disingenuous, and appears to be nothing more than pandering to an audience that is fairly unrepresented in tech these days. And not because of some widespread conspiracy or hatred for people running different silicon, but because they're just not good gaming CPUs. Only the disingenuous tech sites/YouTubers still present results that these ultra-biased people can add to their repertoire of benchmarks to post in the 10,000th AMD vs. Intel topic like absolute cretins. And if this was secretly your intention, you're not helping anyone but yourself by spinning a story that is simply not representative but gets you lots of clicks, just because you know this benchmark will be frequently passed around by pundits.

 

Dividing 1 second (1000 ms) by the period, i.e. the frame time in milliseconds, gets you the frequency, i.e. the frame rate: a 25 ms frame time is 1000 / 25 = 40 FPS. I expected at least some basic math knowledge from someone writing tech articles.

 

 

Good, it puts to rest the 8370 haters claiming "bottlenecks".

 
No, it doesn't. One fraudulent benchmark does not an argument make.

Frankly, I am not going to believe this without video evidence that the setups used the same RAM speed and PCIe mode. I know DDR4 will have higher latency, but 2 nanoseconds isn't our margin of error. Nothing in this article passes the smell test, and the mathematical explanations are written with so little formality where they differ from the benchmarking standards we see from AnandTech, PC Perspective, JayzTwoCents, and others. It's like asking for lobster and getting monkfish: same flavor, but a very different product. If you're going to depart from standard benching procedures, you'd better be able to shore up your results much more completely and formally in the math.

There is no way you're going to get multiple milliseconds of difference in minimum frame time in favor of AMD. The 5960X is the vastly superior chip even at 3 GHz. They use the same instructions in the graphics routines, and you can consult Agner Fog's instruction latency tables for every x86 architecture, AMD or Intel, up through Haswell. These minimum frame time results smell to high heaven in the first place, given that Intel has the cache advantage and the superior architecture on all of the useful instructions. Beyond that, your math is not remotely justified. Either you rigged this, you cooked your numbers, or you stepped into one doozy of a statistical outlier in your runs. There is no way the end result is that Intel's minimum frame times are higher.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Frankly, I am not going to believe this without video evidence that the setups used the same RAM speed and PCIe mode. I know DDR4 will have higher latency, but 2 nanoseconds isn't our margin of error. Nothing in this article passes the smell test, and the mathematical explanations are written with so little formality where they differ from the benchmarking standards we see from AnandTech, PC Perspective, JayzTwoCents, and others. It's like asking for lobster and getting monkfish: same flavor, but a very different product. If you're going to depart from standard benching procedures, you'd better be able to shore up your results much more completely and formally in the math.

There is no way you're going to get multiple milliseconds of difference in minimum frame time in favor of AMD. The 5960X is the vastly superior chip even at 3 GHz. They use the same instructions in the graphics routines, and you can consult Agner Fog's instruction latency tables for every x86 architecture, AMD or Intel, up through Haswell. These minimum frame time results smell to high heaven in the first place, given that Intel has the cache advantage and the superior architecture on all of the useful instructions. Beyond that, your math is not remotely justified. Either you rigged this, you cooked your numbers, or you stepped into one doozy of a statistical outlier in your runs. There is no way the end result is that Intel's minimum frame times are higher.

I think you hit the nail on the head. I've read through this whole thread and nobody seems to have thought about looking at the discrepancies in system RAM. Only in the rarest of cases will the extra latency of DDR4 manifest itself, and I think we're seeing that in the minimums. It's not that the 5960X is slower; it's that where that latency is a problem (maybe due to offloading VRAM?), the CPU isn't getting fed properly.

Daily Driver:

Case: Red Prodigy CPU: i5 3570K @ 4.3 GHZ GPU: Powercolor PCS+ 290x @1100 mhz MOBO: Asus P8Z77-I CPU Cooler: NZXT x40 RAM: 8GB 2133mhz AMD Gamer series Storage: A 1TB WD Blue, a 500GB WD Blue, a Samsung 840 EVO 250GB


I think you hit the nail on the head. I've read through this whole thread and nobody seems to have thought about looking at the discrepancies in system RAM. Only in the rarest of cases will the extra latency of DDR4 manifest itself, and I think we're seeing that in the minimums. It's not that the 5960X is slower; it's that where that latency is a problem (maybe due to offloading VRAM?), the CPU isn't getting fed properly.

I did mention it, but I think it is possible that a confluence of factors is occurring. I asked if they would take a look at a socket 1150 CPU and see what results they get.

LINK-> Kurald Galain:  The Night Eternal 

Top 5820k, 980ti SLI Build in the World*

CPU: i7-5820k // GPU: SLI MSI 980ti Gaming 6G // Cooling: Full Custom WC //  Mobo: ASUS X99 Sabertooth // Ram: 32GB Crucial Ballistic Sport // Boot SSD: Samsung 850 EVO 500GB

Mass SSD: Crucial M500 960GB  // PSU: EVGA Supernova 850G2 // Case: Fractal Design Define S Windowed // OS: Windows 10 // Mouse: Razer Naga Chroma // Keyboard: Corsair k70 Cherry MX Reds

Headset: Senn RS185 // Monitor: ASUS PG348Q // Devices: Note 10+ - Surface Book 2 15"

LINK-> Ainulindale: Music of the Ainur 

Prosumer DYI FreeNAS

CPU: Xeon E3-1231v3  // Cooling: Noctua L9x65 //  Mobo: AsRock E3C224D2I // Ram: 16GB Kingston ECC DDR3-1333

HDDs: 4x HGST Deskstar NAS 3TB  // PSU: EVGA 650GQ // Case: Fractal Design Node 304 // OS: FreeNAS

 

 

 


Try cranking something more powerful into the system, and you'll see how quickly the cheaper CPUs fall behind.

Just look at Jayz's system: three BIOS-hacked Titan Xs, if I'm not mistaken, and those will leave your CPU behind, so you must overclock the CPU.

CPU: i7 5820k @4.5Ghz | Mobo: MSI X99A SLI Plus | RAM: 16GB Crucial Ballistix DDR4 Quad Channel | GPU: GTX 970 @ 1579 Mhz | Case: Cooler Master HAF 922 | OS: Windows 10

Storage: Samsung 850 Evo 250GB | PSU: Corsair TX750 | Display: Samsung SyncMaster 2233 & SyncMaster SA350 | Cooling: Cooler Master Seidon 120M

Keyboard: Razer Lycosa | Mouse: Steelseries Kana | Sound: Steelseries Siberia V2


That's because the GPU is being crushed lol.

 

Also, they're not benchmarking CPU-bound games.

Mobo: Z97 MSI Gaming 7 / CPU: i5-4690k@4.5GHz 1.23v / GPU: EVGA GTX 1070 / RAM: 8GB DDR3 1600MHz@CL9 1.5v / PSU: Corsair CX500M / Case: NZXT 410 / Monitor: 1080p IPS Acer R240HY bidx


The attacking and bullying in this thread is sickening. Some of you are trying way too hard to discredit the author of that article, and others are just bashing out of fanboyism. It makes me lose faith in the forums.

PC Audio Setup = Beyerdynamic DT 770 pro 80 ohm and Sennheiser pc37x (also for xbox) hooked up to Schiit Fulla 3


:huh:  :huh:  :huh:

 

When did the FX ever beat the holy 5960X?

 

That is like comparing a Toyota 4AGE 20V to a V8 Mustang Boss 302 engine.

Uhm... throw enough corners at the Mustang and it won't be much faster than the Toyota anyway, 'cuz... 'Murica: modern cars still using suspension taken from 1830s horse wagons. As soon as there is a straight, the Mustang will win...


This is pretty normal for 4K from what I've seen of other benchmarks, where the CPU doesn't really change much. Anyway, that being said, I hate how CPU benchmarks are not done in CPU-intensive games. Want to not drop below 20 FPS and crash during boss fights or WvW in Guild Wars 2? Better look at the Crysis benchmarks to find out which CPU to buy! Drives me insane.

I've had just as many crashes during WvW on my 8320 as on my current i7-4790K (both are/were OC'd)... GW is just atrocious in terms of API usage and it lacks a LOT of optimizations: on the server side, on the client-to-server side, and in the game itself.

 

I mean, at what point does it make sense that I, living in the EU, must be routed through the login servers in Houston, Texas, before being patched back to Koeln/Hamburg, Germany? And the best part is, when somebody DDoSes the US servers (like when some noobs hit the Wildstar servers, situated in the same server center as the NA GW2 servers), ALL EU PLAYERS got rekt because the login servers force us to stay synced with the NA servers (this is also why I didn't notice much difference between being on NA and EU servers in terms of ping lag...).

 

So yeah, GW2 crashes, and it's NOT CPU-bound for the most part.

