
First Fallout 4 performance results

SteveGrabowski0

or the 280x

or the Fury X :(

Judge a product on its own merits AND the company that made it.

How to setup MSI Afterburner OSD | How to make your AMD Radeon GPU more efficient with Radeon Chill | (Probably) Why LMG Merch shipping to the EU is expensive

Oneplus 6 (Early 2023 to present) | HP Envy 15" x360 R7 5700U (Mid 2021 to present) | Steam Deck (Late 2022 to present)

 

Mid 2023 AlTech Desktop Refresh - AMD R7 5800X (Mid 2023), XFX Radeon RX 6700XT MBA (Mid 2021), MSI X370 Gaming Pro Carbon (Early 2018), 32GB DDR4-3200 (16GB x2) (Mid 2022),

Noctua NH-D15 (Early 2021), Corsair MP510 1.92TB NVMe SSD (Mid 2020), beQuiet Pure Wings 2 140mm x2 & 120mm x1 (Mid 2023),


It's astounding that people still cannot see it. Nvidia tech like GameWorks, god rays, etc. hurts performance for everyone, but especially AMD. With the source code unavailable to AMD, their cards will always suffer in these games, and the blame then lands on AMD's drivers. It's getting old fast and people should know better by now.

 

Look at the bloody numbers instead of the percentage. Factoring in the natural margin of error in benchmarks, they both gain the exact same amount of FPS; removing the margin of error, AMD gains MORE. Percentages are meaningless without looking at the actual numbers as well. Of course AMD gained a higher percentage: a 22.6 fps gain on a 61.2 fps base is a bigger percentage than a 22.4 fps gain on a 79.2 fps base. With GameWorks on they are 18 fps apart; with it off they are 17.8 fps apart. The performance delta between them doesn't change. GameWorks has the exact same fps hit on both cards.
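
To make that arithmetic concrete, here's a quick Python sketch using the numbers quoted above (these are the post's figures, not new measurements):

# Quick check of the percentage-vs-absolute argument, using the fps figures
# quoted above (the post's numbers, not new measurements).

cards = {
    # name: (fps with GameWorks on, fps with GameWorks off)
    "AMD":    (61.2, 61.2 + 22.6),   # gains 22.6 fps with GW off
    "Nvidia": (79.2, 79.2 + 22.4),   # gains 22.4 fps with GW off
}

for name, (gw_on, gw_off) in cards.items():
    gain = gw_off - gw_on
    print(f"{name}: +{gain:.1f} fps ({gain / gw_on * 100:.1f}%) with GameWorks off")

gap_on = cards["Nvidia"][0] - cards["AMD"][0]    # 18.0 fps
gap_off = cards["Nvidia"][1] - cards["AMD"][1]   # 17.8 fps
print(f"Gap with GW on: {gap_on:.1f} fps, with GW off: {gap_off:.1f} fps")

The absolute hit is the same within the margin of error (22.6 vs 22.4 fps); the percentage only differs because the starting fps differs (36.9% vs 28.3%).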


Whoa bro slow down, will this run on my Nvidia GTX 285? I mean the guy at Best Buy told me it had 1GB and that that was a lot! 


God rays, that effect tanked my fps while playing AC4: Black Flag. I have an old Nvidia card and I still turn that option off because it's so bad performance-wise. I'll buy this game when it hits the bargain bin. At least by then the updates will be out and stable.



The test LITERALLY says you cannot compare the numbers between systems OR vendors. Your entire point is moot.

 

No, you misunderstood that. We already went over this in another thread. Here, I'll just copy what I said:

 

 

The API Overhead feature test is not a general-purpose GPU benchmark, and it should not be used to compare graphics cards from different vendors.

 

I didn't. I compared driver overhead.

 

you should be careful making conclusions about GPU performance when comparing API Overhead test results from different systems.

 

I didn't make any conclusions about GPU performance, but about CPU performance due to driver overhead.

 

Likewise, it could be misleading to credit the GPU for any difference in DirectX 12 performance between an AMD GPU and an NVIDIA GPU. 

 

I didn't credit any GPUs for performance difference, but rather the drivers.

 

The proper use of the test is to compare the relative performance of each API on a single system, rather than the absolute performance of different systems.

 

And the reason they say this is:

 

Or, you could test a vendor's range of GPUs, from budget to high-end, and keep the CPU fixed. But in both cases, the nature of the test means it will not show you the extent to which the performance differences are due to the hardware and how much is down to the driver.

 

However, they also say that the test is not GPU intensive at all. Sure, theoretically, if you had a crap GPU it could interfere with your testing because it's so slow it can't even render as many draw calls as your CPU is able to submit, in which case your test would be flawed. In my case I was CPU-bound, so this is not an issue. And obviously on different systems you'll get different results. But it's not the absolute performance that matters; I never said that. It's the difference between AMD and Nvidia drivers, tested on the same system with the same CPU.
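
To illustrate what driver overhead means in draw-call terms, here's a rough back-of-the-envelope sketch. The per-call CPU costs are made-up placeholders, not measurements of any real driver; the only point is that when you're CPU-bound, the driver's cost per draw call directly caps how many calls fit into a frame (the test reports the rate it can sustain before the frame rate falls below its threshold):

# Back-of-the-envelope sketch of a CPU-bound draw call limit.
# The per-call costs are made-up placeholders, NOT measurements of any real driver.

TARGET_FPS = 30                      # threshold the API Overhead test uses, per its guide
frame_budget = 1.0 / TARGET_FPS      # CPU time available per frame, in seconds

drivers = {
    "driver A (lower overhead)": 3e-6,   # hypothetical CPU seconds per draw call
    "driver B (higher overhead)": 9e-6,  # hypothetical
}

for name, cost_per_call in drivers.items():
    calls_per_frame = frame_budget / cost_per_call
    print(f"{name}: ~{calls_per_frame:,.0f} draw calls per frame, "
          f"~{calls_per_frame * TARGET_FPS:,.0f} per second at {TARGET_FPS} fps")

With the same CPU and GPU workload, the only thing that changes those numbers is the per-call cost, which is exactly the driver comparison being made.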

 

A lot of Nvidia users look at TWIMTBP titles on launch day and feel good about things like 15% better performance than AMD. They don't stop to think about how much absolute performance has been sacrificed in order to achieve that goal, i.e. whether the game was made in the optimal way, with the best possible performance in general.

 

Nothing has been sacrificed this time. Just set the god rays to low. It would have been an issue if we couldn't reduce the setting, like we couldn't in The Witcher 3 until they added it later on. The way AMD would optimize it, if they had access to the code, is that they would just reduce the tessellation of that effect in the driver. There's no reason to do that if you can do it yourself in the options.

i7 9700K @ 5 GHz, ASUS DUAL RTX 3070 (OC), Gigabyte Z390 Gaming SLI, 2x8 HyperX Predator 3200 MHz


And for everyone who still thinks AMD performance suffers because of GW, here's a benchmark with GW off:

 

[benchmark chart image]

 

390X still performs significantly worse.

 

And here's a video from DigitalFoundry:

 

 

The guy says AMD cards have low utilization, which indicates it's CPU overhead. Unless you people have some evidence to prove otherwise, I will not continue this discussion. I've presented all the evidence to support my claims and you still refuse to admit the truth. 

i7 9700K @ 5 GHz, ASUS DUAL RTX 3070 (OC), Gigabyte Z390 Gaming SLI, 2x8 HyperX Predator 3200 MHz


And for everyone who still thinks AMD performance suffers because of GW, here's a benchmark with GW off:

 

390X still performs significantly worse.

 

And here's a video from DigitalFoundry:

 

 

The guy says AMD cards have low utilization, which indicates it's CPU overhead. Unless you people have some evidence to prove otherwise, I will not continue this discussion. I've presented all the evidence to support my claims and you still refuse to admit the truth. 

 

I'll have you know, the 390X costs less than the 980... Even if it performs worse than the 980... That's pretty much what it's supposed to do...

i5 2400 | ASUS RTX 4090 TUF OC | Seasonic 1200W Prime Gold | WD Green 120gb | WD Blue 1tb | some ram | a random case

 


Sooooo.. How to start a shit storm about Fallout 4 not supporting 21:9 resolutions?

Potato


 

God rays, that effect tanked my fps while playing AC4: Black Flag. I have an old Nvidia card and I still turn that option off because it's so bad performance-wise. I'll buy this game when it hits the bargain bin. At least by then the updates will be out and stable.

 

See you in a few years, then.


So with my rig (5820K, 16 GB DDR4, 2x Fury X, though not running CrossFire last night) I was getting 60 fps (according to Fraps) most of the time in "normal play", dipping to ~45 fps at times (lots of particle effects on screen, etc.) at 1440p. But whatever you do, don't use a high-powered scope; for me, looking through a high-powered scope caused a loss of 25-30 fps.


No, you misunderstood that. We already went over this in another thread. Here, I'll just copy what I said:

 

I didn't. I compared driver overhead.

 

I didn't make any conclusions about GPU performance, but about CPU performance due to driver overhead.

 

I didn't credit any GPUs for performance difference, but rather the drivers.

 

And the reason they say this is:

 

However, they also say that the test is not GPU intensive at all. Sure, theoretically, if you had a crap GPU it could interfere with your testing because it's so slow it can't even render as many draw calls as your CPU is able to submit, in which case your test would be flawed. In my case I was CPU-bound, so this is not an issue. And obviously on different systems you'll get different results. But it's not the absolute performance that matters; I never said that. It's the difference between AMD and Nvidia drivers, tested on the same system with the same CPU.

 

Nothing has been sacrificed this time. Just set the god rays to low. It would have been an issue if we couldn't reduce the setting, like we couldn't in The Witcher 3 until they added it later on. The way AMD would optimize it, if they had access to the code, is that they would just reduce the tessellation of that effect in the driver. There's no reason to do that if you can do it yourself in the options.

 

You have fundamentally misunderstood the feature test. They specifically say it's not a benchmark; it's a feature test meant to compare API differences on a single system. That is why they talk about relative performance on a single system:

 

The proper use of the test is to compare the relative performance of each API on a single system, rather than the absolute performance of different systems.

The focus on single-system testing is one reason why the API Overhead test is called a feature test rather than a benchmark.

 

As such, you cannot compare any score (or the performance increases) between AMD and Nvidia (or Intel, for that matter). The score is only there to show the benefits of DX12/Mantle over DX11. Nothing more.

 

Futuremark is developing a DirectX 12 benchmark that will be available after the public launch of Windows 10. That test will be designed specifically to compare the performance of different vendors' hardware with game-like workloads.

 

This is what you need to come to any conclusion between the two vendors, and it's not out yet.

 

And for everyone who still thinks AMD performance suffers because of GW, here's a benchmark with GW off:

 

390X still performs significantly worse.

 

And here's a video from DigitalFoundry:

 

The guy says AMD cards have low utilization, which indicates it's CPU overhead. Unless you people have some evidence to prove otherwise, I will not continue this discussion. I've presented all the evidence to support my claims and you still refuse to admit the truth. 

 

Still a flawed conclusion. AMD's performance driver for Fallout 4 is not out yet. Also bear in mind that this is an Nvidia-sponsored title. We know Nvidia's programmers "help" the devs create shaders that are specifically optimized for Nvidia, with no care for the performance consequences for the competition (see the Valve OpenGL dev's blog about this). This is one of the reasons why Nvidia titles always suck on AMD, even if you disable the gimped GameWorks effects.

 

The reason we haven't seen an AMD driver, even though they just released one, is that AMD tends not to get access to the game until launch, or very close to launch, on Nvidia-sponsored titles. And people still think there are no anti-competitive strategies from Nvidia. Please.

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


Sooooo.. How to start a shit storm about Fallout 4 not supporting 21:9 resolutions?

Twitter bomb?

 

Do the usual thing people do when they want to kick up a fuss and be heard: social media.

Maximums - Asus Z97-K /w i5 4690 Bclk @106.9Mhz * x39 = 4.17Ghz, 8GB of 2600Mhz DDR3,.. Gigabyte GTX970 G1-Gaming @ 1550Mhz

 


Look at the bloody numbers instead of the percentage. Factoring in the natural margin of error in benchmarks, they both gain the exact same amount of FPS; removing the margin of error, AMD gains MORE. Percentages are meaningless without looking at the actual numbers as well. Of course AMD gained a higher percentage: a 22.6 fps gain on a 61.2 fps base is a bigger percentage than a 22.4 fps gain on a 79.2 fps base. With GameWorks on they are 18 fps apart; with it off they are 17.8 fps apart. The performance delta between them doesn't change. GameWorks has the exact same fps hit on both cards.

 

Your post makes little sense to me. Sure, we have to look at the numbers, but the percentage is equally important. There's no doubt that GameWorks/God Rays gimp performance on both teams, but things just run worse on AMD when it's an Nvidia-sponsored game. Look at my post above. It's not just a case of a single effect, but also the lack of a game driver from AMD at the moment, as well as the game having Nvidia-specific/optimized shaders that have nothing to do with GameWorks.

 

Either way, God Rays are stupidly taxing, especially on ultra. Just like WaveWorks and other Works effects, they make little graphical difference but cripple performance for no good reason. Why would anyone defend such shitty programming/optimization/effects?

If it was a dev doing this they would be criticized by everyone.

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


Your post makes little sense to me. Sure, we have to look at the numbers, but the percentage is equally important. There's no doubt that GameWorks/God Rays gimp performance on both teams, but things just run worse on AMD when it's an Nvidia-sponsored game. Look at my post above. It's not just a case of a single effect, but also the lack of a game driver from AMD at the moment, as well as the game having Nvidia-specific/optimized shaders that have nothing to do with GameWorks.

 

Either way, God Rays are stupidly taxing, especially on ultra. Just like WaveWorks and other Works effects, they make little graphical difference but cripple performance for no good reason. Why would anyone defend such shitty programming/optimization/effects?

If it was a dev doing this they would be criticized by everyone.

 

You're right, both should be looked at. Though saying God Rays don't make a lot of difference is silly. Have you seen screenshots of the game with and without it? The difference in the lighting is VERY apparent. Now, if you're talking about anything above the low setting, I will agree with you completely.


You're right, both should be looked at. Though saying God Rays don't make a lot of difference is silly. Have you seen screenshots of the game with and without it? The difference in the lighting is VERY apparent. Now, if you're talking about anything above the low setting, I will agree with you completely.

 

You misunderstood; I'm saying the massive performance hit between God Rays on High and Ultra is not worth it. Just like WaveWorks in Watch Dogs (where ultra actually raises the contrast to look more unnatural than high). God Rays as an effect make a big difference, true.

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


Fallout 4 seems to favor moar corez also. Good showing for the FX-8350 here. Even Faildozer is beating the i5-2500k here.

 

[CPU benchmark chart from gamegpu.ru]

 

Between this and Witcher 3 it seems like it's time to retire the gamer religion commandment that says i7s are the same as i5s in gaming.
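
As a rough illustration of why "moar corez" only helps once the engine actually spreads its work across threads, here's a small Amdahl's-law sketch; the parallel fractions are hypothetical, not measured from any engine:

# Amdahl's-law illustration of core scaling.
# The parallel fractions are hypothetical, not measured from any engine.

def speedup(parallel_fraction, cores):
    # Serial work stays serial; only the parallel fraction splits across cores.
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

for p in (0.3, 0.6, 0.9):   # share of per-frame CPU work that can run in parallel
    line = ", ".join(f"{n} cores -> {speedup(p, n):.2f}x" for n in (2, 4, 8))
    print(f"parallel fraction {p:.0%}: {line}")

An engine that only threads 30% of its frame barely gains anything past two cores; one that threads 90% is where eight Piledriver cores or an i7's extra threads start to pull away from a quad-core i5.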


Consoles are based on AMD hardware

AMD gpus results suck

AMD pls

On a mote of dust, suspended in a sunbeam


Consoles are based on AMD hardware

AMD gpus results suck

AMD pls

 

285 beating the 960; AMD's results hardly suck.


Battlefront runs well on AMD because it's not heavy on draw calls. And it's not heavy on draw calls because it doesn't have many objects to render, not because it's well optimized. Maps in Battlefront are essentially wastelands and tundras. 

Untrue. Fallout 4 should be even less demanding: as an offline game it doesn't need to calculate as much in the background, since normally you only need to calculate what is in direct view or the immediate surroundings. Not to mention that the draw distance of a Battlefield map is very comparable to that of Fallout 4.

Also, in Battlefield and Battlefront every shot is a physical object with travel time and physics, and it's calculated even outside of your view, so you can get shot from the other side of the map. At the same time, people, vehicles, and destruction are tracked in the background, all while displaying good graphics.

The reason Battlefield and Battlefront run so much better than Fallout 4 is that they are amazingly optimized. Frostbite even uses DX11.1 on Windows 8/8.1/10, and it takes full advantage of 8 cores as well as Hyper-Threading.
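
For anyone wondering what "every shot is a physical object with travel time and physics" means in CPU terms, here's a toy sketch of the kind of per-tick projectile update such an engine runs for every bullet in flight, even ones nobody can see. Purely illustrative; the constants and structure are mine, not Frostbite's:

# Toy illustration of per-tick projectile simulation with travel time and drop.
# Purely illustrative; constants and structure are made up, not Frostbite's.

from dataclasses import dataclass

GRAVITY = 9.81    # m/s^2
TICK = 1.0 / 60   # 60 Hz simulation step (assumed)

@dataclass
class Projectile:
    x: float
    y: float
    z: float
    vx: float
    vy: float
    vz: float

    def step(self):
        # One tick of travel and bullet drop; this runs whether or not any
        # player can see the projectile, which is CPU work per bullet per tick.
        self.x += self.vx * TICK
        self.y += self.vy * TICK
        self.z += self.vz * TICK
        self.vz -= GRAVITY * TICK

# A big firefight means hundreds of these updated every tick, on top of
# hit detection against moving players and vehicles.
bullets = [Projectile(0.0, 0.0, 1.8, 600.0, 0.0, 0.0) for _ in range(200)]
for _ in range(60):              # simulate one second at 60 Hz
    for b in bullets:
        b.step()
print(f"{len(bullets)} bullets simulated for 1 s; first is now at x = {bullets[0].x:.0f} m")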

 

RTX2070OC 


Fallout 4 seems to favor moar corez also. Good showing for the FX-8350 here. Even Faildozer is beating the i5-2500k here.

 

[CPU benchmark chart from gamegpu.ru]

 

Between this and Witcher 3 it seems like it's time to retire the gamer religion commandment that says i7s are the same as i5s in gaming.

You do realise the 2500K is just as old, and it's taken 4 years for the FX-8350 to actually become semi-relevant against it in games?

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


You do realise the 2500K is just as old, and it's taken 4 years for the FX-8350 to actually become semi-relevant against it in games?

 

That's kind of the point, games are starting to become more parallelized. The new Witcher 3 DLC loves cores too.

 

[CPU benchmark chart from gamegpu.ru]


That's kind of the point, games are starting to become more parallelized. The new Witcher 3 DLC loves cores too.

 

[CPU benchmark chart from gamegpu.ru]

And that's still a 5 GHz FX-9590 just beating a 2600K, which will be at a far lower clock speed. What we need is someone to benchmark these CPUs in games at the same clock speed, with the power consumption and heat actually measured (not just as reported).
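
Something like that comparison could at least be summarized with clock- and power-normalized numbers. A trivial sketch of the arithmetic, with placeholder fps, clock, and power figures (not real measurements):

# Sketch of clock- and power-normalized CPU comparison.
# All figures below are placeholders, NOT real measurements.

results = {
    # name: (avg fps, core clock in GHz, measured package power in W)
    "FX-9590 (hypothetical)":  (62.0, 5.0, 180.0),
    "i7-2600K (hypothetical)": (60.0, 3.8, 95.0),
}

for name, (fps, ghz, watts) in results.items():
    print(f"{name}: {fps / ghz:.1f} fps per GHz, {fps / watts:.2f} fps per watt")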

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL

