
AMD FSR3 demonstrated, along with AFMF

porina
3 hours ago, Arokhantos said:

OP may wanna update the main post if he hasn't yet, cos AMD AFMF is now also available on RDNA2

I view news posts as a snapshot of the time of posting, which is 2 months ago now. I don't usually update them indefinitely. Since this thread is still going, I'll put something in later when I have some time. 

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


4 hours ago, Fasterthannothing said:

Did you watch the video? That's the entire point: no one actually cares about driver-level stuff. Average people who buy cards (not you and me on these types of tech forums) look at what Nvidia is putting into their hardware and go "why would I buy AMD when it doesn't have half the features Nvidia does?" And if someone does buy Nvidia, they can still use whatever AMD software is released, if it's worth using. Go look at the Steam Hardware Survey: AMD didn't even get a single GPU into the top 10 by usage.

 

However, if you look at the Steam Hardware Survey, there are NO RTX 4000-series GPUs in the top 20.

They are all GTX 1000/1600-series and RTX 2000/3000-series. The RTX 4070 is #25 on the chart.

None of those support DLSS 3 or Frame Generation.

Intel Z390 Rig (*NEW* Primary)

Intel X99 Rig (Officially Decommissioned, Dead CPU returned to Intel)

  • i7-8086K @ 5.1 GHz
  • Gigabyte Z390 Aorus Master
  • Sapphire NITRO+ RX 6800 XT S.E + EKwb Quantum Vector Full Cover Waterblock
  • 32GB G.Skill TridentZ DDR4-3000 CL14 @ DDR4-3400, custom CL15 timings
  • SanDisk 480GB SSD + 1TB Samsung 860 EVO + 500GB Samsung 980 + 1TB WD SN750
  • EVGA SuperNOVA 850W P2 + Red/White CableMod Cables
  • Lian-Li O11 Dynamic EVO XL
  • Ekwb Custom loop + 2x EKwb Quantum Surface P360M Radiators
  • Logitech G502 Proteus Spectrum + Corsair K70 (Red LED, anodized black, Cherry MX Browns)

AMD Ryzen Rig

  • AMD R7-5800X
  • Gigabyte B550 Aorus Pro AC
  • 32GB (16GB X 2) Crucial Ballistix RGB DDR4-3600
  • Gigabyte Vision RTX 3060 Ti OC
  • EKwb D-RGB 360mm AIO
  • Intel 660p NVMe 1TB + Crucial MX500 1TB + WD Black 1TB HDD
  • EVGA P2 850W + White CableMod cables
  • Lian-Li LanCool II Mesh - White

Intel Z97 Rig (Decommissioned)

  • Intel i5-4690K 4.8 GHz
  • ASUS ROG Maximus VII Hero Z97
  • Sapphire Vapor-X HD 7950 → EVGA GTX 1070 SC Black Edition ACX 3.0
  • 20 GB (8GB X 2 + 4GB X 1) Corsair Vengeance DDR3 1600 MHz
  • Corsair A50 air cooler → NZXT X61
  • Crucial MX500 1TB SSD + SanDisk Ultra II 240GB SSD + WD Caviar Black 1TB HDD + Kingston V300 120GB SSD [non-gimped version]
  • Antec New TruePower 550W → EVGA G2 650W + White CableMod cables
  • Cooler Master HAF 912 White → NZXT S340 Elite w/ white LED strips

AMD 990FX Rig (Decommissioned)

  • FX-8350 @ 4.8 / 4.9 GHz (given up on the 5.0 / 5.1 GHz attempt)
  • ASUS ROG Crosshair V Formula 990FX
  • 12 GB (4 GB X 3) G.Skill RipJawsX DDR3 @ 1866 MHz
  • Sapphire Vapor-X HD 7970 + Sapphire Dual-X HD 7970 in CrossFire → Sapphire NITRO R9 Fury in CrossFire → *NONE*
  • Thermaltake Frio w/ Cooler Master JetFlo's in push-pull
  • Samsung 850 EVO 500GB SSD + Kingston V300 120GB SSD + WD Caviar Black 1TB HDD
  • Corsair TX850 (ver.1)
  • Cooler Master HAF 932

 

<> Electrical Engineer, B.Eng <>

<> Electronics & Computer Engineering Technologist (Diploma + Advanced Diploma) <>

<> Electronics Engineering Technician for the Canadian Department of National Defence <>


34 minutes ago, -rascal- said:

However, if you look at the Steam Hardware Survey, there are NO RTX 4000-series GPUs in the top 20.

4060 Laptop is 14th. If you total up all 40-series GPUs it comes to 6.7%. The high-end GPUs have been out for a year, and the lower-end ones only released recently and are still gaining momentum. It will take time for the numbers to be competitive with the previous generation, which has already had its sales life. The listed 40-series GPU share is more than all of RDNA 1, 2, 3 and Vega combined.

 

We can look at what % of GPUs support FSR3 Frame Generation based on AMD's stated requirements.

Minimum spec (RX 5700 / RTX 20 series and up): 3.6% AMD + 44.6% Nvidia = 48.2% total.

Recommended spec (RX 6000 / RTX 30 series and up): 3.0% AMD + 35.6% Nvidia = 38.6% total.

 

Note the Steam Hardware Survey only lists individual GPUs once they pass 0.15% share. There will be lower-volume current-gen GPUs missing because of that, especially on the AMD side, where so far only the 7900 XTX has managed it. So the overall numbers for each category may be higher than shown here.
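To sanity-check those totals, here's the arithmetic as a quick sketch (the share figures are the survey numbers quoted above; the tier grouping follows AMD's stated requirements, and the labels are just mine):

```python
# Back-of-envelope check of the totals above. Share figures are the
# Steam Hardware Survey numbers quoted in this post; the two tiers
# follow AMD's stated FSR3 Frame Generation requirements.
shares = {
    "RX 5700 series and up": 3.6,   # AMD, minimum spec
    "RTX 20 series and up": 44.6,   # Nvidia, minimum spec
    "RX 6000 series and up": 3.0,   # AMD, recommended spec
    "RTX 30 series and up": 35.6,   # Nvidia, recommended spec
}

minimum = shares["RX 5700 series and up"] + shares["RTX 20 series and up"]
recommended = shares["RX 6000 series and up"] + shares["RTX 30 series and up"]

print(f"Meets minimum spec:     {minimum:.1f}%")      # 48.2%
print(f"Meets recommended spec: {recommended:.1f}%")  # 38.6%
```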


14 minutes ago, porina said:

4060 Laptop is 14th. If you total up all 40-series GPUs it comes to 6.7%. [...]

 

Those are very valid points.

Hence, I don't think we can rely on the Steam Hardware Survey as the sole source...


13 hours ago, Stahlmann said:

Except with today's upscaling you lose next to no image quality (if you use the higher-quality upscaling presets) while gaining quite a lot of performance. Watch some comparisons between upscaled and native, specifically with newer DLSS implementations like Cyberpunk's. At 4K the DLSS Quality setting is practically indistinguishable from native resolution, which means it's basically free fps. In Cyberpunk, for example, you can get insane performance uplifts, bringing the game from unplayable (under 30 fps) to very playable (close to 60 fps). We're talking about a 75-80% "free" fps boost at 1440p or 4K. Why wouldn't you want free fps?

 

Without upscaling:

[screenshot]

With upscaling:

[screenshot]

 

You can use super sampling in practically ANY game, as it's a driver-side implementation that merely needs to be enabled. So your argument basically comes down to "I don't need it". Nothing wrong with that, but don't go around preaching that it's useless because your preference for older games means you specifically don't get much use out of it.

 

Enabling DLSS and setting it to Quality is a much bigger performance improvement, and has much less visual impact, than tinkering with different graphics settings to figure out how to get 30% more fps without making the game look like garbage from 10 years ago.

Frame gen is none of those things, though.

 

In ALL cases with frame gen you lose performance, you lose visual fidelity, and you gain visual fluidity. It's a bad trade made to make the number go big. But let it be stressed: you LOSE performance.


32 minutes ago, starsmine said:

Frame gen is none of those things, though.

 

In ALL cases with frame gen you lose performance, you lose visual fidelity, and you gain visual fluidity. It's a bad trade made to make the number go big. But let it be stressed: you LOSE performance.

In what way do you lose performance? Fps are higher. Yes, input lag doesn't improve to the same degree, but it's no worse than what you'd get by default, even with Reflex in use. You have to weigh the upsides and downsides when deciding if you want to use the feature. If it's implemented well, the added smoothness outweighs the artifacting imo.

 

 

If someone did not use reason to reach their conclusion in the first place, you cannot use reason to convince them otherwise.


25 minutes ago, Stahlmann said:

In what way do you lose performance? Fps are higher. Yes, input lag doesn't improve to the same degree, but it's no worse than what you'd get by default, even with Reflex in use. You have to weigh the upsides and downsides when deciding if you want to use the feature. If it's implemented well, the added smoothness outweighs the artifacting imo.

 

 

FSR3 forces a disconnect on us that our language never had to reckon with before. We learned a version of this lesson with SLI, where you got a bigger FPS number but worse performance because of microstuttering. We partly addressed that one with frame pacing.

Here there's no microstutter, but latency goes up. EVERY TIME. It's mitigated with Reflex/Anti-Lag+, but Reflex/Anti-Lag+ doesn't need frame gen; it's a separate tech you can just run.

In general, bigger FPS = lower latency/lag, aka better performance. The two were tied together because one was simply the inverse of the other.


Frame gen breaks this paradigm: you get bigger FPS AND larger latency/lag (the thing that matters for performance). The two are decoupled, which breaks the definition of performance we relied on to compare cards (FPS was the easiest and most consistent metric to measure, and it was coupled to latency). That's a concept we never really had to communicate to people before.

In ALL situations with frame gen turned on, input lag is worsened. AKA, in ALL situations performance is lowered. What you gain is visual fluidity. The thing is, though, what you notice at high frame rates in user-interactive media is not visual fluidity but responsiveness. The important difference between 60 fps and 120 fps is the FEEL, not the visuals. 60 fps is already visually smooth due to how your eyes work.

So when you go from the input lag of 60 fps to the input lag of 50 fps, but with the visuals of 120... it's WORSE.

Example with Cyberpunk and DLSS (not FSR3; I don't know if anyone has published numbers for that yet):

[chart: Cyberpunk 2077 latency measurements]

Turning frame gen on, you go from 47 ms to 62.6 ms.
So performance-wise, in terms of how it feels for user interactivity, it's as if you went from 72 fps to 42 fps, even though you have the motion fidelity of 112 fps.

Or with DLSS Performance, you went from 32.7 ms to 52.1 ms.
So from a performance perspective for user interactivity, it feels like you went from 90 fps to 60-ish fps, but you now have the motion fidelity of 142 fps.
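To make the decoupling concrete, here's a toy model in Python. The fixed overhead S, the frame-gen cost, and frame gen exactly doubling the displayed rate are all assumptions for illustration; this sketches the concept rather than reproducing the measured chart above:

```python
# Toy model of why FPS and latency decouple under frame gen.
# Assumes latency_ms ~= S + 1000 / rendered_fps, with S a hypothetical
# fixed pipeline overhead. All constants are illustrative assumptions.

S = 20.0  # assumed fixed overhead in ms, not a measurement

def latency_ms(rendered_fps: float, frame_gen: bool = False,
               fg_cost_ms: float = 10.0) -> float:
    """Frame gen adds interpolation delay; it never removes any."""
    base = S + 1000.0 / rendered_fps
    return base + fg_cost_ms if frame_gen else base

def feels_like_fps(lat_ms: float) -> float:
    """Invert the model: the native frame rate this latency would feel like."""
    return 1000.0 / (lat_ms - S)

rendered = 60.0
off = latency_ms(rendered)
on = latency_ms(rendered, frame_gen=True)
displayed = rendered * 2  # assume frame gen doubles the displayed rate

print(f"FG off: {rendered:.0f} fps shown, {off:.1f} ms latency")
print(f"FG on:  {displayed:.0f} fps shown, {on:.1f} ms latency "
      f"(feels like {feels_like_fps(on):.0f} fps)")
```

The displayed frame rate doubles while the "feels like" number goes down: exactly the decoupling described above.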

Also, you get new temporal artifacts. So cool.

AMD and Intel are rushing to make an Nvidia competitor not because the tech is good or smart, but because they don't want people arguing that Nvidia is better because of frame gen.


3 minutes ago, starsmine said:

So when you go from the input lag of 60 fps to the input lag of 50 fps, but with the visuals of 120... it's WORSE.

Not every game is critically latency-sensitive.

 

I refer back to my notes from when I tried FSR3 FG in Forspoken:

FSR3 FG is enabled in the demo. On my main system (7920X, 4070) I initially tried ultra-high, but fps practically died when I turned FSR3 FG on. Dropping to high, it worked ok. Nvidia's built-in overlay reported around 70 fps native at around 11-12 ms latency. With FG on, that went to around 120 fps and 20 ms latency. Panning the camera, I could see the increased smoothness of background elements. I did notice some fizzle on the main character's hair when FG is on that isn't there when it's off. Other than that, no new noticeable artefacts. I didn't notice any difference in responsiveness, but this game doesn't control well for my tastes anyway.

 

To summarise: can I see the increased smoothness? Yes. Did I feel an increase in latency? No. Did I see artefacts from FG? Yes. In this specific example it did provide a visual benefit at a cost of a visual artefact, with no noticeable change to responsiveness. Would I choose to use it if I were to play this game a lot? I dunno. At least the option is there.
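Putting my overlay readings into numbers, as a rough sketch (both readouts are approximate, and I've assumed the midpoint of the 11-12 ms figure):

```python
# The Forspoken readings from my notes above, laid out as a trade-off.
native = {"fps": 70, "latency_ms": 11.5}  # midpoint of the 11-12 ms readout
fg_on  = {"fps": 120, "latency_ms": 20.0}

smoothness_gain = fg_on["fps"] / native["fps"]
latency_cost = fg_on["latency_ms"] - native["latency_ms"]

print(f"Displayed frame rate: x{smoothness_gain:.2f}")  # ~1.7x smoother
print(f"Added latency: +{latency_cost:.1f} ms")         # ~8.5 ms slower to respond
```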


11 hours ago, starsmine said:

FSR3 forces a disconnect on us that our language never had to reckon with before. We learned a version of this lesson with SLI, where you got a bigger FPS number but worse performance because of microstuttering. We partly addressed that one with frame pacing. [...]

The disconnect between our arguments is that I never argued frame gen is better than or similar in latency to just using DLSS upscaling. My argument was that frame gen has the same latency as native resolution with Reflex enabled.

 

This allows the user to pick between visual smoothness and latency.

 

Native, DLSS upscaling and DLSS frame gen all have their pros and cons. The user can then decide how much extra performance they need and enable the relevant option. Or, if they don't need any extra performance, they can disable DLSS altogether (though they then also don't get the input lag advantage of using upscaling without frame gen). Don't dismiss frame gen because it doesn't apply to your specific, completely latency-focused point of view.

 

An extra bit of latency compared to just running DLSS upscaling doesn't really matter in the games where you'd actually need frame gen (for example Alan Wake, Cyberpunk, The Witcher 3 and other very demanding games).

 

And in the games where you don't want the latency, fps are probably already high enough that you don't need frame gen (for example Valorant, Overwatch, Counter-Strike, etc.). In these situations you could still use DLSS upscaling to further reduce latency, if necessary and supported by the game.

 

We can argue about the semantics of how it feels and how it looks all day, but in the end the real question is: is it good enough? And you can only speak for yourself in this regard.


4 hours ago, Stahlmann said:

The disconnect between our arguments is that I never argued frame gen is better than or similar in latency to just using DLSS upscaling. My argument was that frame gen has the same latency as native resolution with Reflex enabled.

In NO, as in ZERO, games does frame gen at native resolution have the same latency as native with Reflex enabled. Not a single one exists, because it's not possible.
Compare apples to apples.
If you have upscaling on in one, have it on in the other. Otherwise you are arguing "I get more FPS/performance when I turn the settings down to Medium than when I have them on Ultra".
Upscaling is the same as turning your settings down to get more performance, and I think that's good, but it needs to be communicated that that is what you are doing.

 

In ALL games, performance is worse with frame gen enabled.

4 hours ago, Stahlmann said:

This allows the user to pick between visual smoothness and latency.

 

Native, DLSS upscaling and DLSS frame gen all have their pros and cons. The user can then decide how much extra performance they need and enable the relevant option. Or, if they don't need any extra performance, they can disable DLSS altogether (though they then also don't get the input lag advantage of using upscaling without frame gen). Don't dismiss frame gen because it doesn't apply to your specific, completely latency-focused point of view.

Yes, you gain extra performance from DLSS upscaling; I never disputed that. I'm exclusively talking about issues with frame gen.
I'm pro DLSS/FSR upscaling when it's appropriately talked about (as in: it's lowering your settings to get more FPS/performance).

There is no additional input lag from using upscaling without frame gen; it straight-up increases performance across the board. (This is a three-quarters truth: it adds extra pipeline steps, but the rendering is twice as fast and the extra step is the shortest of pipes. There's an argument that if you are 100% CPU-bound (rare) it increases latency by no more than a couple of ms, but then you also haven't gained fps.)
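A toy model of that point, with made-up stage times (the 2x GPU speedup and 1 ms upscale cost are assumptions for illustration, not measurements):

```python
# Frame time is paced by whichever of CPU and GPU is slower. Upscaling
# (rendering at a lower internal resolution) cuts GPU time only, so it
# gains nothing in the CPU-bound case. All numbers are illustrative.

def frame_time_ms(cpu_ms: float, gpu_ms: float) -> float:
    return max(cpu_ms, gpu_ms)  # the slower stage sets the pace

UPSCALE_SPEEDUP = 2.0  # assumed GPU gain from the lower render resolution
UPSCALE_COST_MS = 1.0  # assumed cost of the upscale pass itself

for cpu_ms, gpu_ms, label in [(8.0, 20.0, "GPU-bound"),
                              (20.0, 8.0, "CPU-bound")]:
    native = frame_time_ms(cpu_ms, gpu_ms)
    upscaled = frame_time_ms(cpu_ms, gpu_ms / UPSCALE_SPEEDUP + UPSCALE_COST_MS)
    print(f"{label}: {1000 / native:.0f} fps native -> "
          f"{1000 / upscaled:.0f} fps upscaled")
```

GPU-bound it goes 50 to 91 fps; CPU-bound it stays at 50.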

 

4 hours ago, Stahlmann said:

An extra bit of latency compared to just running DLSS upscaling doesn't really matter in the games where you'd actually need frame gen (for example Alan Wake, Cyberpunk, The Witcher 3 and other very demanding games).

 

And in the games where you don't want the latency, fps are probably already high enough that you don't need frame gen (for example Valorant, Overwatch, Counter-Strike, etc.). In these situations you could still use DLSS upscaling to further reduce latency, if necessary and supported by the game.

I don't disagree. I often find the argument that 30-40 fps is "unplayable" to be crap. But THOSE are the situations where end users are likely to want to turn on frame gen... except that will push your latency into unplayable territory in games like The Witcher. It's not a twitch shooter, no, but you still need to act fast in fights.

Run your FSR/DLSS upscaling to get better FPS/performance. Turn down your settings to get better FPS/performance.

But when you turn on frame gen, you get better FPS and worse performance. It turns FPS into a misleading number, as it's no longer coupled with performance (like we saw with SLI microstuttering).

If you are making the argument that playability isn't why we try to get higher FPS in games like Metro Exodus, Cyberpunk or Alan Wake, why even benchmark those games?

 

4 hours ago, Stahlmann said:

We can argue about the semantics of how it feels and how it looks all day, but in the end the real question is: is it good enough? And you can only speak for yourself in this regard.

It needs to be communicated by Nvidia/AMD/Intel what exactly this is actually doing to your performance. The way Nvidia specifically pushes DLSS in all its forms (except perhaps ray reconstruction) is incredibly misleading. That is what needs to be good enough. It pushes AMD to feel the need to keep up with the Joneses and rush out AFMF just to check the feature box, because users are buying Nvidia for DLSS features they do not understand, thanks to Nvidia's misleading marketing.


54 minutes ago, starsmine said:

In NO, as in ZERO, games does frame gen at native resolution have the same latency as native with Reflex enabled. Not a single one exists, because it's not possible.

You've shown this chart yourself:

[chart: latency measurements, Native vs DLSS Quality + Frame Gen]

 

The total latency between "Native" (with Reflex enabled, as pointed out by Hardware Unboxed in their video) and "DLSS Quality + Frame Gen" is the same.

 

54 minutes ago, starsmine said:

Compare apples to apples.

You cannot compare native against frame gen without also throwing upscaling into the mix, simply because upscaling and frame gen can only be used together. So why should I compare theoretical scenarios that don't exist?

 

54 minutes ago, starsmine said:

If you have upscaling on in one, have it on in the other. Otherwise you are arguing "I get more FPS/performance when I turn the settings down to Medium than when I have them on Ultra".
Upscaling is the same as turning your settings down to get more performance, and I think that's good, but it needs to be communicated that that is what you are doing.

 

In ALL games, performance is worse with frame gen enabled.

Again, you're trying to come up with your own definition of "performance". I'd argue performance is widely meant to represent fps, not latency or image quality. I agree that this is a new issue because of the introduction of frame generation, but the terminology is something the tech community as a whole has to sort out. If everyone starts using their own terms for different metrics, nobody will understand what anybody is talking about.

 

54 minutes ago, starsmine said:

Yes, you gain extra performance from DLSS upscaling; I never disputed that. I'm exclusively talking about issues with frame gen.
I'm pro DLSS/FSR upscaling when it's appropriately talked about (as in: it's lowering your settings to get more FPS/performance).

There is no additional input lag from using upscaling without frame gen; it straight-up increases performance across the board. (This is a three-quarters truth: it adds extra pipeline steps, but the rendering is twice as fast and the extra step is the shortest of pipes. There's an argument that if you are 100% CPU-bound (rare) it increases latency by no more than a couple of ms, but then you also haven't gained fps.)

 

I don't disagree. I often find the argument that 30-40 fps is "unplayable" to be crap. But THOSE are the situations where end users are likely to want to turn on frame gen... except that will push your latency into unplayable territory in games like The Witcher. It's not a twitch shooter, no, but you still need to act fast in fights.

I'd argue that, for me, 30 fps is mostly just unplayable because of the stutter introduced on modern displays with fast response times. The input lag isn't the big problem here. The latency coming from 30 fps is fast enough for most singleplayer games, but the visual stutter is very off-putting. Even very latency-sensitive games like Bloodborne can be played at 30 fps no problem, but it just looks horrible.

 

54 minutes ago, starsmine said:

Run your FSR/DLSS upscaling to get better FPS/performance. Turn down your settings to get better FPS/performance.

But when you turn on frame gen, you get better FPS and worse performance. It turns FPS into a misleading number, as it's no longer coupled with performance (like we saw with SLI microstuttering).

If you are making the argument that playability isn't why we try to get higher FPS in games like Metro Exodus, Cyberpunk or Alan Wake, why even benchmark those games?

Counter-question: why are we measuring playability with fps and not latency? I do not think it's misleading at all to measure fps using frame generation, but it has to be pointed out that this is not native performance. It probably also makes sense to include more input lag measurements in the future, so the consumer has the most data available to make an informed decision on what settings they want to use.

 

54 minutes ago, starsmine said:

It needs to be communicated by Nvidia/AMD/Intel what exactly this is actually doing to your performance. The way Nvidia specifically pushes DLSS in all its forms (except perhaps ray reconstruction) is incredibly misleading. That is what needs to be good enough. It pushes AMD to feel the need to keep up with the Joneses and rush out AFMF just to check the feature box, because users are buying Nvidia for DLSS features they do not understand, thanks to Nvidia's misleading marketing.

I agree that misleading marketing is bad, but these technologies aren't just there for ticking off a box. If they're implemented correctly, they're a great tool for the consumer to get more out of their GPU. For that reason I also think these newer features are a legitimate selling point on top of raw GPU performance. We're well past the days where that's the only thing that matters; otherwise you'd probably see AMD having a much bigger market share than they have now. Nvidia's current tone-deaf behavior towards consumers wouldn't work if consumers only cared about raw rasterization performance. People also care about reliable drivers and the extra features. That's why the vast majority of GPU buyers still shop in the green camp.

 

But yes, they could market it more clearly. On the other hand, when was the last time you saw a big corporation with genuine, true-to-reality marketing? I'm not saying it's a good thing, but it should be expected.


53 minutes ago, Stahlmann said:

You've shown this chart yourself:

[chart: latency measurements, Native vs DLSS Quality + Frame Gen]

 

The total latency between "Native" (with Reflex enabled, as pointed out by Hardware Unboxed in their video) and "DLSS Quality + Frame Gen" is the same.

 

You cannot compare native against frame gen without also throwing upscaling into the mix, simply because upscaling and frame gen can only be used together. So why should I compare theoretical scenarios that don't exist?

It's the only way to compare it.
Frame gen takes you from 47 ms to 62.6 ms.

You actually can run frame gen without Super Resolution; YOU NEVER SHOULD. Super Resolution and Reflex are on to hide the impact. If you want more performance, you turn on SR (aka turn down the settings), not FG; if you want more motion fluidity after that, then you turn on FG. That is why the test explicitly states SR (Quality) + Frame Gen.

Frame gen + Reflex with NO SR is a pointless test outside of academic interest, and it does a poor job of communicating what to expect, since it's not a real use case. When you are already doing 5 runs per game across a dozen games, adding another 6-10 man-hours of testing to add that line to a video is not particularly worth it, I'd imagine. Most games may not allow it, but that isn't a block by Nvidia.

You are forced to run Reflex when you turn on frame gen.

53 minutes ago, Stahlmann said:

Again, you're trying to come up with your own definition of "performance". I'd argue performance is widely meant to represent fps, not latency or image quality. I agree that this is a new issue because of the introduction of frame generation, but the terminology is something the tech community as a whole has to sort out. If everyone starts using their own terms for different metrics, nobody will understand what anybody is talking about.

I'm not coming up with my own definition; it's always been the definition. FPS was just the metric used to measure it, because it was coupled to it:

latency = S + α·(1/fps)

the same way period = 1/frequency.

This equation was a mathematical fact, so using performance and fps interchangeably was correct. Microstutter put a wrench in it while the equation still held true on average: the discrete measurements swung wildly in a heartbeat cadence (like music: four quarter notes vs. two eighth notes, a quarter rest, two eighth notes, a quarter rest; the same number of notes per measure and the same average latency, but clearly not equal cadences). That is why frame pacing was used to communicate performance. But collecting frame-pacing data is non-trivial, and explaining or even reading the chart is non-trivial. However, because the equation still holds under microstutter, 1% and 0.1% lows are a correct way to measure and inform the user about the microstutter, and about the actual performance, in ways average FPS hides. They are far easier to measure and far easier to communicate.

Frame gen BREAKS this equation:
latency ≠ S + α·(1/fps) when frame gen is enabled. You can no longer use the FPS numbers to talk about performance, specifically with frame gen. It's been decoupled.
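Written out properly, with S the fixed input-to-photon overhead and α a proportionality constant (a simple model of the coupling, not an exact law):

```latex
% Coupled regime (no frame gen): latency is a monotone function of fps,
% so ranking cards by fps and ranking them by latency agree.
\mathrm{latency} = S + \frac{\alpha}{\mathrm{fps}}
% Decoupled regime (frame gen on): the displayed rate rises while latency
% also rises, so the relation fails for the displayed frame rate.
\mathrm{latency} \neq S + \frac{\alpha}{\mathrm{fps}_{\mathrm{displayed}}}
```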

 

 

53 minutes ago, Stahlmann said:

 

I'd argue that, for me, 30 fps is mostly just unplayable because of the stutter introduced on modern displays with fast response times. The input lag isn't the big problem here. The latency coming from 30 fps is fast enough for most singleplayer games, but the visual stutter is very off-putting. Even very latency-sensitive games like Bloodborne can be played at 30 fps no problem, but it just looks horrible.

 

Counter-question: why are we measuring playability with fps and not latency? I do not think it's misleading at all to measure fps using frame generation, but it has to be pointed out that this is not native performance. It probably also makes sense to include more input lag measurements in the future, so the consumer has the most data available to make an informed decision on what settings they want to use.

Counter-question answered above: outside of microstutter, you didn't need to in the past.

 

53 minutes ago, Stahlmann said:

 

I agree that misleading marketing is bad, but these technologies aren't just there for ticking off a box. If they're implemented correctly, they're a great tool for the consumer to get more out of their GPU. For that reason I also think these newer features are a legitimate selling point on top of raw GPU performance. We're well past the days where that's the only thing that matters; otherwise you'd probably see AMD having a much bigger market share than they have now. Nvidia's current tone-deaf behavior towards consumers wouldn't work if consumers only cared about raw rasterization performance. People also care about reliable drivers and the extra features. That's why the vast majority of GPU buyers still shop in the green camp.

 

But yes, they could market it more clearly. On the other hand, when was the last time you saw a big corporation with genuine, true-to-reality marketing? I'm not saying it's a good thing, but it should be expected.

 


4 hours ago, Stahlmann said:

The total latency between "Native" (with Reflex enabled, as pointed out by Hardware Unboxed in their video) and "DLSS Quality + Frame Gen" is the same.

 

You cannot compare native against frame gen without also throwing upscaling into the mix, simply because upscaling and frame gen can only be used together. So why should I compare theoretical scenarios that don't exist?

It's true. However, do it at the right resolution, else don't compare.
It would be nice to see how it does at native resolution.

