Far Cry 4 GPU Benchmarks

BiG StroOnZ

Maybe it's just luck, but I have the unpatched version and an old graphics driver. With my 7950 at 1050MHz core / 1250MHz VRAM at 1080p on ultra-ish settings (with SMAA), and an i5 3570K @ 4.2GHz, I never see dips below 35 FPS. While roaming in the jungle it's about 75, and in the city or crowded areas it drops to 38-50 FPS.

 

Still, it's not as CPU intensive as FC3, and IMHO it looks worse than the previous title.

 

BTW: VRAM usage is around 2500MB dedicated and 600MB dynamic.

Playing it right now :D


Yeah, because I'm going to trust any source that has a 280X beating a 780... in an NVIDIA GameWorks title, no less :rolleyes:

 

[Benchmark chart: Far Cry 4, 1080p (PC Games Hardware)]

 

[Benchmark chart: Far Cry 4, 1440p (PC Games Hardware)]

 

These are with the Catalyst 14.11.2 beta, so that's maxed-out settings and the latest drivers for both camps.

 

http://www.pcgameshardware.de/Far-Cry-4-PC-256888/Specials/Technik-Test-Benchmark-1143026/

 

 

You mean with Trees Relief (tessellation), which was literally added in the day-one patch? Tree tessellation was added after people already had the game, as a last-second thing for benchmarks.

 

http://www.geforce.com/whats-new/guides/far-cry-4-graphics-performance-and-tweaking-guide

 

A quote from Nvidia's "optimization guide":

 

Trees Relief
 
"Exclusive to the PC is Trees Relief, the tessellation of tree trunks to increase detail levels that bit further. Note how the shadows are affected by the geometric detail added by tessellation. Were this a normal map or bump map added by the Texture Quality setting, the shadow would remain unaffected, lying flatly across the tree's surface as it does in the 'Off' image."

 

Funny thing? Nvidia CHANGED the tree picture in that article, because their previous pictures, with shadows on the tree, looked horrible, exactly as they themselves TOLD you (and they look HORRID in game). No one should be running this dumb setting. It is a performance hit (smaller on Nvidia, but still a performance hit) for a worse-looking game. Nvidia always tries to add, or pressure the dev into, tessellation on objects in GameWorks games, even when it looks WORSE (and Ubisoft thankfully made the last-second addition OPTIONAL). Tessellation on water can look decent. Tessellation on an already good texture that will have shadows on it? Looks horrid. That is why CDPR's Witcher 3 tech video with Nvidia shows tessellation on water only.

 

They said it looks like garbage on everything else. Why? Because it does.

 

http://www.pcgameshardware.de/The-Witcher-3-PC-237266/Specials/The-Witcher-3-welche-Grafikkarte-welche-CPU-1107469/

 

"Balázs Török: Yes, we have currently tessellation for the landscape and view of the water in our rendering pipeline. We have made ​​some tests in which we have applied tessellation to other objects, but that has not brought the desired result. And actually we need for our objects do not really tessellation. This is only the overhead increases, so let's use tessellation only for the scenery and the water and leave them with everything else away."

CPU: 24/7 4770K @ 4.5GHz, 4.0GHz cache @ 1.22V override, 1.776V VCCIN. MB: Z87-G41 PC Mate. Cooling: Hyper 212 Evo push/pull. RAM: G.Skill Ares 1600 CL9 @ 2133, 1.56V, 10-12-10-31-T1, 150 tRFC. Case: HAF 912, stock fans (no LED crap). HD: Seagate Barracuda 1TB. Display: Dell S2340M IPS. GPU: Sapphire Tri-X R9 290. PSU: CX600M. OS: Win 7 64-bit / Mac OS X Mavericks, dual-boot Hackintosh.



They're using different settings than GameGPU and TechSpot. They're using some Nvidia-biased presets, which they mention in the article.

Presets that actually look worse enabled than the non-Nvidia alternatives. For example, HBAO+ ambient occlusion looked worse than SSBC, which the developers created from scratch.



 

Yeah, HBAO+ looks OK. It might look better than SSBC, I don't know. I play with the Ultra preset, with HBAO+, Nvidia God Rays (Enhanced) and fur on, and my highs are way above 40, lol. My FPS is closer to the GTX 980's on my $290 Tri-X clocked at 1100/1145. Soft Shadows look like garbage, though; I run Ultra instead.

 

The only Nvidia setting I can say for sure looks "better" is Enhanced (Nvidia God Rays). They really look good in shady areas with light showing through the trees and shadows (tree tessellation ruins that, though, lol). I don't know if this is Nvidia's God Rays or a Ubisoft solution, though, because it WAS named Nvidia God Rays and was renamed to Enhanced in the day-one patch.

 

Also, that 58 FPS low on the GTX 980? It's BS. This game stutters on everything when you drive vehicles (as TotalBiscuit said of his 980), and this was probably a benchmark that didn't include driving a vehicle, where the CPU is what holds you back. And if the benchmark also ran through trees, it would be heavily biased toward the Nvidia cards.

 

I would say that without tree tessellation an R9 290 sits between a GTX 970 and a GTX 980, and it all depends on what kind of OC you are running on those three cards. I have had highs higher than their GTX 980 with the Nvidia presets (probably due to my RAM bandwidth), with just tree tessellation off and SMAA. I can tell you that tree tessellation kills performance on AMD cards, but no one should be running it anyway; it looks worse. The shadows in this game at certain times of day are beautiful, and probably the best part of the game.




 

Regardless of what settings they chose, a 280X will not be faster than a 780. A 280X is AT LEAST 20% slower than a 780. So there is no reason, in an NVIDIA-optimized game, for an AMD card that is already slower than a 780 to be miraculously faster, at multiple resolutions on top of that.
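Rough arithmetic behind that claim (illustrative numbers, not measurements):

```python
# If the 280X really trails the 780 by ~20% on average, a sane benchmark
# should look roughly like this for any given 780 result:
fps_780 = 60.0                       # hypothetical 780 result
fps_280x = fps_780 * (1 - 0.20)      # expected 280X result: 48.0 FPS
print(fps_280x)

# For the 280X to come out ahead instead, one game would have to swing
# relative performance by more than 25% in AMD's favor:
print(f"{fps_780 / fps_280x - 1:.0%} swing needed just for the 280X to tie")
```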

 

I also read through the article and cannot find where they stated they used the "NVIDIA preset." Perhaps you can quote them for me, because I can't seem to find where they mention that at all as I sit here reading the translated article. The only thing I could find is the following: "But if we turn off all the obvious Nvidia effects and select the highest settings, AMD graphics cards clearly catch up to Nvidia's."

 

Here's more from other sources:

 

[Benchmark charts: Far Cry 4 (PurePC)]

 

http://www.purepc.pl/karty_graficzne/wymagania_far_cry_4_niskie_test_kart_graficznych_i_procesorow?page=0,6

 

Even with the MSI 780 at only 860MHz, the 280X is still not faster than the 780. As soon as you put the 780 at a normal clock, like the 1000MHz on the Windforce card, the 280X isn't even close.

 

In those GameGPU benches they have a 280X 2 FPS slower than a 780 Ti @ 4K. What?! In TechSpot's they have a 280X 1 FPS faster than a 780 @ 1600p. LOL, WHAT?!

 

The only thing I can think of is that they are underclocking their 780s and 780 Tis, because it makes absolutely no sense otherwise. But even bringing the 780 down to 860MHz, which most non-reference cards don't even run at, the 280X is still slower. Even worse, the MSI 780 Gaming doesn't come in at 863MHz stock; it comes in at 954MHz with a 1006MHz boost. Which means even in this benchmark they were underclocking the 780. As a matter of fact, they were underclocking the 780 Ti too, because the DirectCU 780 Ti they used does not come in at 875MHz stock either; it also comes in at 954MHz, with a 1020MHz boost. Which is bananas. I have no idea why they would do that, but it's the only logical explanation for these ridiculous benchmarks coming out for this game: some of these websites are underclocking certain cards.
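Putting percentages on the clocks quoted above (the clock figures come from this post; only the percentage math is new):

```python
# (stock base MHz as cited, MHz apparently used in the benchmark)
cards = {
    "MSI GTX 780 Gaming": (954, 863),
    "DirectCU 780 Ti":    (954, 875),
}
for name, (stock, benched) in cards.items():
    print(f"{name}: benched {1 - benched / stock:.1%} below its stock base clock")
# -> roughly 9.5% and 8.3% below stock base clock, before boost even kicks in.
```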



HBAO+ and God Rays are GameWorks effects, even though the benchmarks don't favor Nvidia entirely. Using SSBC instead of HBAO+ would yield better visuals and better performance on AMD hardware.

As pointed out by PC Games Hardware, the GameWorks settings don't just run slower on AMD GPUs; they also run faster on Nvidia GPUs. So using something like SSAO, or the engine's built-in SSBC, would be faster on AMD and slower on Nvidia, and the 280X and the 780 would switch positions.
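Here's a toy sketch of that flip with made-up frame times, just to show the mechanism; none of these numbers are measurements:

```python
# If a GameWorks effect costs more frame time on AMD than on Nvidia, a
# near-tie under vendor-neutral settings can flip once it's enabled.
frame_ms = {"R9 280X": 19.5, "GTX 780": 20.0}    # hypothetical, with SSBC
hbao_extra = {"R9 280X": 3.0, "GTX 780": 1.0}    # hypothetical HBAO+ cost

for gpu, base in frame_ms.items():
    ssbc_fps = 1000 / base
    hbao_fps = 1000 / (base + hbao_extra[gpu])
    print(f"{gpu}: SSBC {ssbc_fps:.1f} fps, HBAO+ {hbao_fps:.1f} fps")
# SSBC:  280X ~51.3 vs 780 ~50.0 (280X ahead)
# HBAO+: 280X ~44.4 vs 780 ~47.6 (positions switch)
```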

Also, please remember that Far Cry 4, just like Ryse: Son of Rome and Dragon Age, has been optimized for the console GPUs, which are based on AMD's GCN architecture. So no matter how optimized it is for Nvidia, that doesn't overcome the fact that the game was designed from the ground up on AMD's GPU architecture.

Running the game at the Ultra preset without changing any settings, the R9 280X IS faster than the GTX 780.

Also, what is this nonsense about 1000MHz being a "normal" clock for a GTX 780? That's an aftermarket GPU with a very hefty overclock. Apples to apples would be stock vs. stock, not stock vs. heavily overclocked. 1000MHz is the stock clock speed for the 280X.

I also find it very suspicious that the reviewer lists the AMD GPUs with boost clock speeds and the Nvidia GPUs with base clock speeds. The base clock speed for the 280X is 850MHz, and the boost for the Gaming 780 is over 1000MHz.
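To make the apples-to-apples point concrete, a tiny sketch using only clocks already cited in this thread:

```python
# Listing AMD boost clocks against Nvidia base clocks makes the table
# look even when it isn't (MHz figures as cited in this thread).
r9_280x = {"base": 850, "boost": 1000}
gtx_780_gaming = {"base": 954, "boost": 1006}   # MSI 780 Gaming

print("as listed:  ", r9_280x["boost"], "vs", gtx_780_gaming["base"])   # 1000 vs 954
print("boost/boost:", r9_280x["boost"], "vs", gtx_780_gaming["boost"])  # 1000 vs 1006
print("base/base:  ", r9_280x["base"], "vs", gtx_780_gaming["base"])    # 850 vs 954
```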



 

If your little theory about cards "switching positions" held true, it would apply to all NVIDIA cards, not just the 780, which is clearly not the case. And for the second time, the 280X is 20% slower than a 780 on average. They aren't switching positions in any game, regardless, unless there is funny business going on: errors in benchmarking, downclocked cards, etc., something out of the norm.

 

There have been many games made for consoles that also didn't have a 280X magically beating a 780. So your "optimized for GCN architecture" baloney doesn't hold true, not even in the examples you mentioned:

 

Let's look at Ryse: Son of Rome benchmarks:

 

[Benchmark chart: Ryse: Son of Rome, 1080p High]

 

Nope, no 280X beating a 780 there (even with the 780 at its reference clock speed of 863MHz).

 

Let's look at Dragon Age: Inquisition (another game you mentioned):

 

[Benchmark chart: Dragon Age: Inquisition (GameGPU)]

 

Again, the 280X is below the 780, as you would expect, since the 780 is on par with an R9 290. Normal and expected.

 

 

There is no architecture optimization that is magically going to make a 280X faster than a 780, no matter how you put it.

 

 

I really don't see your logic in trying to find some explanation for why a 280X would ever be faster than a 780, regardless of the benchmarks I post that prove otherwise. The 280X is slower than a 780: known fact. The 780 is on par with an R9 290: known fact. Trying to maneuver out some scenario in which a 780 is suddenly slower than a 280X is illogical. The 280X is a slower card than the 780, period. Any benchmark showing otherwise is invalid; go look for another, more reliable source.

 

Also, 1000MHz is pretty standard for a 780; calling it a "hefty overclock" is laughable. Almost all non-reference 780s boost to 1000MHz, and if they don't, they are being underclocked. The only time that might be considered a "hefty" overclock is on a reference 780, which starts out at about 860MHz, but even reference 780s can clock to 1200MHz no problem, so 1000MHz is still far from what anyone would call a "hefty overclock." And for a non-reference card like the MSI 780 Gaming, which is clocked higher than 863MHz out of the box, running it at 863MHz when it's supposed to be in the 950MHz range is sandbagging the card by a decent amount. As I pointed out, they are underclocking the cards in these benchmarks, because in no world and no scenario is a 280X faster than a 780. Better yet, a 280X is not even on par with a 780; a 780 is on par with an R9 290 and vice versa. So if you want to look at benchmarks that have a 780 getting beaten by a 280X, or a 280X only 2 FPS slower than a 780 Ti @ 4K, and believe them, that's on you. But anyone else with common sense can clearly see that something is exceedingly wrong with those sources when other sources are not showing such craziness.



Agreed. Everybody knows that the 780 is in a different performance league.

This is an anomaly. All it means is that Nvidia needs to improve their drivers for this game.

