
DLSS 3.0 Lock BYPASSED? DLSS 3.0 on an RTX 2070?

Haaselh0ff

Summary

 

Redditor "JusDax" has reportedly bypassed the DLSS 3.0 Frame Generation lock that prevented RTX 2000 and 3000 series cards from benefiting from the new AI generated frame feature. User reports that his framerate in Cyberpunk 2077 doubled from 35-40 FPS to 80 FPS. It doesn't even appear that the lock was super difficult to get around as stated in the article quote below :

 

Quotes

Quote

 

The user says that he was able to bypass the DLSS 3 lock on older GeForce RTX cards by adding a config file to remove the VRAM overhead in Cyberpunk. He ran the game at 1440p with HDR off & DLSS set to balanced mode. He was also able to select the Frame Generation tab under the Ultra ray tracing preset.

"DLSS Frame Generation doesn't seem to be hardware locked to RTX 40 series. I was able to bypass a software lock by adding a config file to remove the VRAM overhead in Cyberpunk. Doing this causes some instability and frame drops but I'm getting ~80 FPS on an RTX 2070 with the following settings:

2560x1440 res

HDR off

DLSS Balanced

DLSS Frame Generation ON

Ray Tracing Ultra preset

(I get ~35-40 FPS without Frame Generation and DLSS at Quality)

Edit: forgot to add some details"

u/JusDax via r/NVIDIA

 

My thoughts

To start, I'm very interested to see how the performance is on the 3000 series GPUs. The issues the user experienced could very well be something that would've been fixed with drivers had this tech actually been made for 2000 series GPUs. I don't want to jump to conclusions without more testing from more trusted sources, but it sure does seem like Nvidia may have artificially locked DLSS 3.0 behind a GPU upgrade for money's sake (shocking to most of you, I'm sure). BUT AGAIN, I'd like to wait for more people to verify this is doable before jumping on the hate train. Very interested to see more of this, because getting more performance from old cards is really exciting! ALSO, I'd like to point out that just because it works does not mean it works as well as the 4000 series in terms of how good the generated frames look.

 

Sources

https://wccftech.com/nvidia-dlss-3-frame-generation-lock-reportedly-bypassed-rtx-2070-gets-double-the-fps/


*The hardware on Turing and Ampere is not powerful enough to complete the same task the way Ada does.

 

Everyone seems to be harping on the "artificial limitation" as if they're now going to get the same thing on older cards. It won't be the same. Context is important.


5 minutes ago, GuiltySpark_ said:

*The hardware on Turing and Ampere is not powerful enough to complete the same task the way Ada does.

 

Everyone seems to be harping on the "artificial limitation" as if they're now going to get the same thing on older cards. It won't be the same. Context is important.

The 4000 series has hardware that specifically helps with this and is just better at interpolating the frames, versus the 2000 and 3000 series, which are less capable and can end up with worse image quality in the interpolated frames.

 

I don't imagine Nvidia wants to advertise this feature if the results are bad, which could be the case for 2000 and 3000 series cards. I'd love to see some techtubers investigate this to find out, though.


That's great news - I mean, we all knew Nvidia was pulling a stunt like that just to gouge gamers into buying the overpriced 4000 series now that crypto is gone. Insane. I hope somebody can make DLSS 3 easy to use for all gamers, similar to FSR.

It would also be great to see how RTX 20/30 with DLSS 3 compares to the 4080/4070 and below.


6 minutes ago, Haaselh0ff said:

The 4000 series has hardware that specifically helps with this and is just better at interpolating the frames, versus the 2000 and 3000 series, which are less capable and can end up with worse image quality in the interpolated frames.

 

I don't imagine Nvidia wants to advertise this feature if the results are bad, which could be the case for 2000 and 3000 series cards. I'd love to see some techtubers investigate this to find out, though.

The "Optical Flow Accelerators" are in 2000 and 3000, just significantly slower/weaker and simply cannot do the same job hence it wasn't a feature that made sense to use on those cards. This is known information so the fact that someone screwed with some code and made it work anyway and these stories are making it seem like its the same thing and Nvidia just "hid" it from people is just silly. Ahh the sensationalist news articles will never end.


11 minutes ago, Blademaster91 said:

Interesting, I'd like to see someone try this on a 3000 series card. I didn't fall for the "Turing and Ampere aren't fast enough so you can't even try it" nonsense. If a 2070 is capable of frame interpolation, then a 3080 or 3090 should be capable of using DLSS 3.

I do agree that I want to see more testing, but like GuiltySpark_ says, the Optical Flow Accelerators in 20 and 30 series GPUs just aren't made specifically for this job the way the 40 series is, which causes the generated frames to be less accurate.

 

AGAIN THOUGH, I would love more testing on this because I am all for people getting more value out of their 20 and 30 series GPUs.


The creator literally said it was maybe "possible", but likely not worth it.

But I do wonder how it would run, especially without the dedicated hardware. I would think the 3000 series has decent AI capability.

Also, Intel is trying to support a lot of AI options (for better or worse) and might be able to do something similar; we'll have to see with Battlemage.


3 hours ago, GuiltySpark_ said:

The "Optical Flow Accelerators" are in 2000 and 3000, just significantly slower/weaker and simply cannot do the same job hence it wasn't a feature that made sense to use on those cards. This is known information so the fact that someone screwed with some code and made it work anyway and these stories are making it seem like its the same thing and Nvidia just "hid" it from people is just silly. Ahh the sensationalist news articles will never end.

I know Nvidia said this, but it doesn't necessarily mean it's 100% true.

Say the 4090 is able to double frames using DLSS 3 with a supported game. At what point did Nvidia deem DLSS 3 "not worth it" on older cards? From what this user has supposedly done, it's about half as beneficial.

TLDR: DLSS 3 on a 2070 is a 50% boost in FPS and not worth it, but DLSS 3 on a 4090 is a 100% boost in FPS and the best thing in gaming since RGB.


4 hours ago, Haaselh0ff said:

AGAIN THOUGH, I would love more testing on this because I am all for people getting more value out of their 20 and 30 series GPUs.

yes

1 hour ago, quakemarine20 said:

I know Nvidia said this, but it doesn't necessarily mean it's 100% true.

Say the 4090 is able to double frames using DLSS 3 with a supported game. At what point did Nvidia deem DLSS 3 "not worth it" on older cards? From what this user has supposedly done, it's about half as beneficial.

TLDR: DLSS 3 on a 2070 is a 50% boost in FPS and not worth it, but DLSS 3 on a 4090 is a 100% boost in FPS and the best thing in gaming since RGB.

Stuff was left out. Also, there are different kinds of "FPS" involved, and the question is whether you are just producing a lot of garbage frames.

There's the normal FPS you see in-game versus the FPS of the frames actually shown on screen, where going from 60 to 100 FPS might really be more like 70 rendered plus generated frames for a total of 100. We really have to see and learn more about this whole thing. Then there is also the quality of the generated images, the speed of generation and how they are generated. Some of it works on older designs, but there is a limit, and we could still see official support... maybe on 3000 cards, but that is more hope than anything, set against the "value" they want to add to their 4000 cards. Also, the creator can have his own reasons around it and focus on the tech side rather than on marketing/other products, and the same goes for Nvidia themselves.

So if you ever ended up in an awkward situation where you get 20 FPS in-game but 40-60 shown by using it, the feel in movement at such a low base framerate has got to suck (that would need testing). Games could also cheat, letting the AI generation "do the work for them".

 

From the OP's source.

Quote

While there's definitely a performance gain on older cards with DLSS 3 frame generation enabled, it should be pointed out that it wasn't without its fair share of issues. The user experienced instability and frame drops so running frame generation on DLSS 3 won't get you the most optimized gaming experience at the moment since it is designed with GeForce RTX 40 graphics cards in mind.

Quote

What we did see in our own testing was that the frame generation switch can be toggled on and off on an RTX 30 and RTX 20 graphics card but we didn't see any difference in FPS or other performance metrics.
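For anyone who wants to verify claims like this rather than eyeballing an in-game counter, the cleanest check is to capture frametimes with the toggle off and on and compare them. A minimal sketch in Python, assuming a PresentMon-style CSV with an MsBetweenPresents column and hypothetical file names (adjust to whatever your capture tool exports):

import csv
import statistics

def summarize(path):
    # Load per-frame present intervals (milliseconds) from the capture.
    with open(path, newline="") as f:
        frametimes = [float(row["MsBetweenPresents"]) for row in csv.DictReader(f)]
    frametimes.sort()
    avg_fps = 1000.0 / statistics.mean(frametimes)
    # "1% low": the frame rate implied by the slowest 1% of frames.
    worst = frametimes[int(len(frametimes) * 0.99):]
    low_fps = 1000.0 / statistics.mean(worst)
    return avg_fps, low_fps

for label, path in [("frame gen OFF", "fg_off.csv"), ("frame gen ON", "fg_on.csv")]:
    avg_fps, low_fps = summarize(path)
    print(f"{label}: {avg_fps:.1f} FPS average, {low_fps:.1f} FPS 1% low")

If the "no difference in FPS" finding above holds, the two captures on an older card should come out essentially identical.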


7 hours ago, Blademaster91 said:

Interesting, I'd like to see someone try this on a 3000 series card. I didn't fall for the "Turing and Ampere aren't fast enough so you can't even try it" nonsense. If a 2070 is capable of frame interpolation, then a 3080 or 3090 should be capable of using DLSS 3.

The scaling probably isn't linear. We need more details, but the OFA seems to be a fixed-function unit, and across the Ada range it's the same regardless of model. It is reported to be 3x more powerful than on previous gens, which implies the Turing and Ampere OFA units may be the same as each other, so Ampere may not do that part of the work any faster than Turing.

 

Even on Ada there is some concern about the latency implications of the technology. Guess what a slower implementation is going to do? Even more latency. The risk is that if they enabled it on older cards, people might try it, say it sucks for latency or similar reasons, and give the tech a bad image. Maybe nvidia could tune it to work well enough over time, but that time may not be now.

 

4 hours ago, quakemarine20 said:

TLDR: DLSS 3 on a 2070 is a 50% boost in FPS and not worth it, but DLSS 3 on a 4090 is a 100% boost in FPS and the best thing in gaming since RGB.

DLSS 3 in its current form can only double the frame rate from the base, BUT it seems there can be occasions where it has enough impact that it lowers the base frame rate it is doubling from. Then it becomes more of a latency tradeoff, and it can impact frame pacing if the system gets bogged down. A consistent, moderate native fps could be a better experience than a higher average fps with a much more variable frametime.
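To make the frame-pacing point concrete, here is a rough sketch with made-up frametimes (illustrative numbers, not measurements) comparing a steady native 60 FPS against a hypothetical frame-generated output that averages higher but swings more per frame:

import statistics

# Purely illustrative frametimes in milliseconds, not real measurements.
native_steady = [16.7] * 8                      # consistent ~60 FPS
frame_gen_spiky = [9, 9, 10, 22, 9, 9, 25, 9]   # higher average FPS, uneven pacing

for label, times in [("steady native", native_steady), ("frame gen (spiky)", frame_gen_spiky)]:
    avg_fps = 1000 / statistics.mean(times)
    spread_ms = max(times) - min(times)
    print(f"{label}: {avg_fps:.0f} FPS average, frametime spread {spread_ms:.1f} ms")

The second run wins on average FPS, but the 22-25 ms spikes are exactly the kind of variable frametime that can feel worse than a steady 60.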



1 hour ago, porina said:

Even on Ada there is some concern about the latency implications of the technology. Guess what a slower implementation is going to do? Even more latency. The risk is that if they enabled it on older cards, people might try it, say it sucks for latency or similar reasons, and give the tech a bad image. Maybe nvidia could tune it to work well enough over time, but that time may not be now.

The first version of DLSS looked pretty shit; Nvidia should do the right thing and enable it on the older cards. The leak did not mention any weird artifacts or insane latency. Never mind, it does mention frame drops and instability.


2 minutes ago, TOMPPIX said:

The first version of DLSS looked pretty shit; Nvidia should do the right thing and enable it on the older cards. The leak did not mention any weird artifacts or insane latency.

I think they are doing the right thing. DLSS 3 likely isn't in a fit state to release on older GPUs, so it hasn't been. Look at the Digital Foundry video on DLSS 3. Even on Ada it has some feature interaction limitations and could use more polish. If I were nvidia, getting that better would be my first priority before putting resources into making it work on older gen GPUs. There's a world of difference between "it seems to run" and verifying it is release ready. Maybe at most you could argue for beta access at some point. I'm a 30 series owner. I'm curious about the tech, but I'd rather wait until it is ready, if it ever is.



9 minutes ago, TOMPPIX said:

The first version of DLSS looked pretty shit; Nvidia should do the right thing and enable it on the older cards. The leak did not mention any weird artifacts or insane latency.

But it did mention instability as well as inconsistency. 

 

I mean, instead of jumping on some conspiracy theory, isn't it more likely that the feature doesn't work well on older cards and that's why it was disabled? 

 

Or maybe they'll do what they did with RTX Voice: enable it on the RTX cards first, then evaluate how well it works on older cards, and if they find it works well enough, roll it out to everyone.

They have done that in the past. 


9 hours ago, quakemarine20 said:



TLDR: DLSS 3 on a 2070 is a 50% boost in FPS and not worth it, but DLSS 3 on a 4090 is a 100% boost in FPS and the best thing in gaming since RGB.

If it's 50% more FPS in total, then that means the non-interpolated FPS actually went down by 25% and was then doubled with interpolation, so you may get significant input lag.

If you were at 60 FPS base and with DLSS 3 you're at 90, that means you're getting the input lag of 45 FPS at 22.22 ms frametimes (ignoring other factors) vs 60 FPS at 16.66 ms (ignoring other factors).
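Putting the math above into a reusable form, here's a quick sketch under the same assumptions (frame generation doubles the rendered frame rate, input lag tracks the rendered rate, and every other factor is ignored):

def frame_gen_latency(base_fps, displayed_fps):
    # One generated frame per rendered frame, so the rendered rate is half of what is displayed.
    rendered_fps = displayed_fps / 2
    return {
        "native frametime (ms)": round(1000 / base_fps, 2),
        "rendered FPS with frame gen": rendered_fps,
        "rendered frametime with frame gen (ms)": round(1000 / rendered_fps, 2),
    }

# The example above: 60 FPS native vs 90 FPS displayed with frame generation on.
print(frame_gen_latency(60, 90))
# -> 16.67 ms natively vs 22.22 ms underneath the "90 FPS", hence the extra input lag.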

 

It is a tradeoff somebody may not care about at all. I think they should still make this an option and call it DLSS2+ instead, while DLSS3 gives the full experience with not that many downsides.


20 minutes ago, WereCat said:

I think they should still make this an option and call it DLSS2+ instead, while DLSS3 gives the full experience with not that many downsides.

If anything, I think the name is the most questionable part, and we've already seen some confusion over it. DLSS 3 makes it sound like a direct successor to DLSS 2 and DLSS 1. I guess if you view DLSS 3 as a superset of DLSS 2, maybe. But DLSS 3 seems to be a separate toggle that could be used without DLSS 2 if someone wanted to. Maybe something like DLSS-X2 would be more descriptive, and it keeps the door open for a future version that might triple (-X3) the fps. But it is what it is.

 

Wait a sec, I just realised, if you went with my system above, you'd end up with DLSS 2 X2. Can I get a job on the USB naming group? 😄 



5 hours ago, porina said:

The risk is that if they enabled it on older cards, people might try it, say it sucks for latency or similar reasons, and give the tech a bad image.

Have the game display a warning, problem solved.....


I bought a 3070 from a friend (yeah, I regret it now...), but it would be awesome if this 3.0 update became available for my card... I'd save some money instead of buying a 40xx card... Let's see if we get videos showing it working.


Based on the Hardware Unboxed video I don't think that DLSS 3.0 is a very good feature.

I think it's more of a tool to show off better-than-actual performance in benchmarks and tests.

 

Visually it is worse than DLSS 2.0, it has more latency, and in general you would be better off sticking to other upscaling and performance-improving methods.


16 minutes ago, GameRetro said:

Visually it is worse than DLSS 2.0, it has more latency, and in general you would be better off sticking to other upscaling and performance-improving methods.

It can be both visually worse and better, but it's true that the FPS can't quite be compared to before, since some of it is generated/"fake" frames.

Also, the issues DLSS 2.0 has might get worse or slightly better with DLSS 3.0. As we see now from Digital Foundry's video, it will be hard to implement a good DLSS 3 solution and it adds more of a workload, unless nvidia improves DLSS 3.0 to be easier to use and to run as intended, with fewer artifacts, and fixes the Vsync issue and the issues around effects/particles/other elements and the UI.


6 minutes ago, Quackers101 said:

It can be both visually worse and better, but it's true that the FPS can't quite be compared to before, since some of it is generated/"fake" frames.

Also, the issues DLSS 2.0 has might get worse or slightly better with DLSS 3.0. As we see now from Digital Foundry's video, it will be hard to implement a good DLSS 3 solution and it adds more of a workload, unless nvidia improves DLSS 3.0 to be easier to use and to run as intended, with fewer artifacts, and fixes the Vsync issue and the issues around effects/particles/other elements and the UI.

 

See, this is the thing - it can't be better, only worse. I doubt there are instances where naturally rendered frames can look worse than machine-learning-generated frames. At best they can look IDENTICAL.

 

In practice they are always worse, and the only question is to what extent. Is it acceptable, is it not, and is it distracting? And this is purely subjective, since some people are more sensitive to random artifacts than others.

 

The quality of the previously rendered frame is the upper limit, since it's what's used as the basis for AI generation of the next frame. In the future there might be "prettify" AI algorithms that actually improve the visual fidelity of a frame in realtime - for example, making reflective surfaces look better. For now, this can only generate frames that are worse than the ones it uses as a basis.


28 minutes ago, GameRetro said:

See, this is the thing - it can't be better, only worse. I doubt there are instances where naturally rendered frames can look worse than machine-learning-generated frames. At best they can look IDENTICAL.

A point to note is that native rendering does not necessarily result in the best perceived quality, as tradeoffs are already being made between the ideal output (if you had unlimited resources) and what is usably attainable. We already learnt that DLSS 2 output can look better than native. DLSS 3 frames generated from better-than-native DLSS 2 frames could still be better than native. It will vary, and it may not happen all the time, but it can happen.



1 minute ago, porina said:

A point to note is that native rendering does not necessarily result in the best perceived quality, as tradeoffs are already being made between the ideal output (if you had unlimited resources) and what is usably attainable. We already learnt that DLSS 2 output can look better than native. DLSS 3 frames generated from better-than-native DLSS 2 frames could still be better than native. It will vary, and it may not happen all the time, but it can happen.

 

Here we're getting into the debate about upscaling. Coming from a retro gaming background, I agree there are processing techniques that may yield a cleaner or better result when upscaling a source, but the point I was making is that the original quality can never be bested - i.e. it can't invent something that wasn't there.

 

In some areas things can be improved, but my point was that if you're aiming for something that injects frames in between rendered frames (i.e. the rendered frames are the gold standard, at least if you're looking for consistency), it can never do anything better than the rendered frame.

 

Let's take antialiasing as an example. If the rendered frames were not antialiased and had jagged edges on everything, then the best result for an AI-generated frame would be a similar frame. Any smoothing or extra artifacting in the generated frame will feel out of place.

 

I was only discussing frame generation as a technique. If you have rendered frames that are smooth, with a variety of effects applied to them, I would think it would be much simpler to "blend" a frame into that, versus blending a frame in between sharp, "crisp"-looking frames.

 

But even then, the maximum and expected quality for a generated frame would be a frame that's the same as a rendered one. Even if you COULD generate a better frame, you shouldn't - unless you also tweak the rendered frames to look like it, for better blending.


1 minute ago, GameRetro said:

the point I was making is that the original quality can never be bested - i.e. it can't invent something that wasn't there.

I kinda get that, but DLSS 3 isn't meant to be used in isolation. Maybe we're arguing best vs worst case. As an illustration, say we score a native rendered output as a 10 on visual quality. With DLSS 2, it could vary by content, say from 8 to 11. With DLSS 3, in the best case you might take two 11 frames and get a new 11 frame. In the worst case you might have missing information, so DLSS 3 frames overall might vary, say, from 5 to 11. Expecting bit accuracy compared to native is the wrong goal, since native isn't perfect either.


