
DLSS 3.0 Lock BYPASSED? DLSS 3.0 on an RTX 2070?

Haaselh0ff
3 minutes ago, porina said:

I kinda get that, but DLSS 3 isn't meant to be used in isolation. Maybe we're arguing best vs worst case. As an illustration, say we have a native rendered output as scoring 10 on visual quality. With DLSS 2, it could vary by content say from 8 to 11. With DLSS 3, in the best case, you might take two 11 frames, and get a new 11 frame. In the worst case, you might have missing information so DLSS3 frames overall might vary say from 5 to 11. Expecting bit accuracy compared to native is the wrong goal, since native is not perfect either.

Yeah, I think we're saying the same thing overall, but we're arguing semantics over how we phrase or view different things. I'm not saying the technology or algorithms are bad. I'm just saying that, in its current implementation and with the results it generates, it's not any more useful than DLSS 2.0, especially given the added input lag, the fact that it's only somewhat viable in some games, and only at high framerates, where the input lag is even more of a problem since high-framerate games are usually quick-reaction-time games too.


As discussed, it depends on what your goal is with these solutions, of which we'll likely see more.

DLSS 3 uses DLSS 2 features, which has its pros and cons, and the generated motion frames can sometimes deliver on that goal.

Sometimes they give smoother, better motion, not as in "improving the native image" but in supporting it, adding things that could very well improve the final outcome. Then there is the still image versus the image in motion: some of these DLSS 3 generated frames can look god awful, or they can help your eye follow certain motion, or make the motion even worse. At times it can do the same thing a higher-refresh-rate monitor does, filling in some of the motion (hopefully with few errors and high accuracy).

But of course it doesn't replace simply having a higher framerate and a high-refresh-rate monitor, especially when we're talking about 4K 120 Hz.

 

There's also how Digital Foundry tested it in some FPS-limited situations, which can show the problems with DLSS 3.

I'd want to see more testing of that, like the frame pacing going from fast |i|i|i|i|i| to slow | i | i | i | i |, where "i" is a generated frame and "|" is a native image, and how it behaves with vsync/gsync.
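
Something like this toy sketch (plain Python, purely illustrative; the even midpoint spacing and the frame times are my own assumptions, not measured DLSS 3 behaviour) is the kind of pattern I mean, with one generated frame sitting between each pair of native frames:

```python
# Toy timeline: one generated frame ("i") inserted halfway between
# consecutive native frames ("|"). Frame times are made up.
def pacing(native_frame_ms, n_native=5):
    """Print presentation times for native and generated frames."""
    timeline = []
    for k in range(n_native):
        t_native = k * native_frame_ms
        timeline.append((t_native, "|"))                # native frame
        if k < n_native - 1:                            # generated frame at the midpoint
            timeline.append((t_native + native_frame_ms / 2, "i"))
    for t, kind in timeline:
        print(f"{t:7.1f} ms  {kind}")

pacing(8.3)    # "fast": ~120 fps native -> ~240 fps presented
print("---")
pacing(33.3)   # "slow": ~30 fps native -> ~60 fps presented
```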


The engineers at NVIDIA deserve some recognition for this tremendous accomplishment in AI rendering, and I think it is being overshadowed by false narratives that they have some hypothetical "on/off switch" which could enable this feature on older architectures at their discretion, but that they choose not to in order to increase sales of the Ada GPUs. This seems completely incorrect, yet people continue to spread this misinformation without any evidence to back the claim up. Nvidia, however, has given a very convincing and logical reason why Frame Generation is not supported on 2000 and 3000 series cards, and until anyone provides clear evidence to refute their claims, this is pure conjecture. I will also add that the cost of manufacturing, along with rising inflation and many other variables we are left to speculate on, may indeed require NVIDIA, as a business, to launch these cards at this price point. People who compare these cards to price points from 6 years ago are wildly misled. No, you would not pay 1600 dollars for a card in 2016, but you also wouldn't get 16,000 CUDA cores.

 

They do provide some key information in the Ada architecture white paper that explains why this does not work on previous generations, and they have other white papers on the new OFA, Hopper, and other architectural improvements that allow this to work.
 

Here's the one on Ada:

https://images.nvidia.com/aem-dam/Solutions/geforce/ada/nvidia-ada-gpu-architecture.pdf

 

That claim is completely false.

 

Anyone who actually wants to know the truth should spend the next few weeks researching the data NVIDIA has provided, instead of falling victim to the Dunning-Kruger effect.

 


10 hours ago, Matthewbrand98 said:

The engineers at NVIDIA deserve some recognition for this tremendous accomplishment in AI rendering, and I think it is being overshadowed by false narratives that they have some hypothetical "on/off switch" which could enable this feature on older architectures at their discretion, but that they choose not to in order to increase sales of the Ada GPUs. This seems completely incorrect, yet people continue to spread this misinformation without any evidence to back the claim up. Nvidia, however, has given a very convincing and logical reason why Frame Generation is not supported on 2000 and 3000 series cards, and until anyone provides clear evidence to refute their claims, this is pure conjecture. I will also add that the cost of manufacturing, along with rising inflation and many other variables we are left to speculate on, may indeed require NVIDIA, as a business, to launch these cards at this price point. People who compare these cards to price points from 6 years ago are wildly misled. No, you would not pay 1600 dollars for a card in 2016, but you also wouldn't get 16,000 CUDA cores.

They do provide some key information in the Ada architecture white paper that explains why this does not work on previous generations, and they have other white papers on the new OFA, Hopper, and other architectural improvements that allow this to work.

Here's the one on Ada:

https://images.nvidia.com/aem-dam/Solutions/geforce/ada/nvidia-ada-gpu-architecture.pdf

That claim is completely false.

Anyone who actually wants to know the truth should spend the next few weeks researching the data NVIDIA has provided, instead of falling victim to the Dunning-Kruger effect.

 

GTX Titan X was $1500 after inflation, before taxes. So uhhh yeah....


11 hours ago, Matthewbrand98 said:

The engineers at NVIDIA deserve some recognition for this tremendous accomplishment in AI rendering, and I think it is being overshadowed by false narratives that they have some hypothetical "on/off switch" which could enable this feature on older architectures at their discretion, but that they choose not to in order to increase sales of the Ada GPUs. This seems completely incorrect, yet people continue to spread this misinformation without any evidence to back the claim up. Nvidia, however, has given a very convincing and logical reason why Frame Generation is not supported on 2000 and 3000 series cards, and until anyone provides clear evidence to refute their claims, this is pure conjecture. I will also add that the cost of manufacturing, along with rising inflation and many other variables we are left to speculate on, may indeed require NVIDIA, as a business, to launch these cards at this price point. People who compare these cards to price points from 6 years ago are wildly misled. No, you would not pay 1600 dollars for a card in 2016, but you also wouldn't get 16,000 CUDA cores.

They do provide some key information in the Ada architecture white paper that explains why this does not work on previous generations, and they have other white papers on the new OFA, Hopper, and other architectural improvements that allow this to work.

Here's the one on Ada:

https://images.nvidia.com/aem-dam/Solutions/geforce/ada/nvidia-ada-gpu-architecture.pdf

That claim is completely false.

Anyone who actually wants to know the truth should spend the next few weeks researching the data NVIDIA has provided, instead of falling victim to the Dunning-Kruger effect.

 

Not buying it yet.

Nvidia has a history of artificially limiting features to certain upcoming hardware and being vague about the technical reasons for it.

Again, their info is confusing. They don't say DLSS 3 can't work on Ampere, just that Ada is faster than Ampere (the word "twice" doesn't mean a whole lot, since we don't know what Ada chip is being compared to what Ampere chip). And then comes the suspect comment that their new technology "provides critical information to the DLSS 3 network" without clearly stating Ampere is unable to do so.

 

If they want DLSS 3.0 to be a 40X0 generation selling point, is their "official story" going to end up being that a 4050 is fast enough for DLSS 3, but a 3090 isn't? I sure hope they know better. Optical flow and turning 4K 24 FPS into 120 FPS is what my 3070 does every time I watch a movie. (SVP 4 Pro)

I am sure they tweaked, refined, and improved the process, but the fundamentals are not new, and Ampere's low-latency interpolation capabilities would probably work just fine with DLSS 3.0 if Nvidia wanted them to. But history shows Nvidia prioritizes profits over being transparent about exclusive features and their requirements.


2 hours ago, jakkookkaj said:

They don't say DLSS 3 can't work on Ampere, just that Ada is faster than Ampere

I don't recall the original source, but it has been said the OFA in Ada is 3x the performance of the previous generation. If that is the key performance limiter for DLSS 3, it is not that it can't work on earlier hardware, but the performance may not be acceptable.

 

2 hours ago, jakkookkaj said:

If they want DLSS 3.0 to be a 40X0 generation selling point, is their "official story" going to end up being that a 4050 is fast enough for DLSS 3, but a 3090 isn't? I sure hope they know better. Optical flow and turning 4K 24 FPS into 120 FPS is what my 3070 does every time I watch a movie. (SVP 4 Pro)

The OFA is a fixed-function unit and doesn't scale with cores, similar to the video codec unit. If the hypothetical 4050 gets the same OFA unit as the 4090, then yes, it would likely perform better than the previous-gen high end in that function.

 

SVP isn't exactly the same thing. DLSS 3 also gets motion data from the game engine, which video scalers won't have access to. It may be an interesting exercise to take recorded gaming footage and see how it looks after putting it through SVP. Also, what is the computational (and latency) cost of SVP? It doesn't matter if you're watching a video, but in a game you not only have to do this, you have to do it fast.
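
As a back-of-the-envelope illustration of the latency point (a rough Python sketch with made-up numbers; this is a generic interpolation model, not Nvidia's actual pipeline): to show a frame interpolated between N and N+1 you have to hold everything back until N+1 exists and the new frame is generated, so the added delay is roughly one native frame time plus the generation cost. Trivial for a video player, not so trivial mid-game.

```python
# Rough latency model for frame interpolation (illustrative only).
# To present the midpoint frame between N and N+1, frame N has to be
# held back until frame N+1 is rendered and the new frame is generated.
def added_latency_ms(native_fps: float, gen_cost_ms: float) -> float:
    native_frame_ms = 1000.0 / native_fps
    return native_frame_ms + gen_cost_ms  # hold-back time + generation time

for fps in (24, 60, 120):
    print(f"{fps:3d} fps native -> ~{added_latency_ms(fps, 3.0):.1f} ms added delay")
```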



4 hours ago, porina said:

I don't recall the original source, but it has been said the OFA in Ada is 3x the performance of the previous generation. If that is the key performance limiter for DLSS 3, it is not that it can't work on earlier hardware, but the performance may not be acceptable.

The OFA is a fixed-function unit and doesn't scale with cores, similar to the video codec unit. If the hypothetical 4050 gets the same OFA unit as the 4090, then yes, it would likely perform better than the previous-gen high end in that function.

SVP isn't exactly the same thing. DLSS 3 also gets motion data from the game engine, which video scalers won't have access to. It may be an interesting exercise to take recorded gaming footage and see how it looks after putting it through SVP. Also, what is the computational (and latency) cost of SVP? It doesn't matter if you're watching a video, but in a game you not only have to do this, you have to do it fast.

 


Good points. There is a cost: I think the GPU usually sits around 8-10% utilization when I use optical flow with SVP, but that also includes decoding the 4K HEVC file, I think. I don't notice any latency issues, but the players I use could be compensating for SVP's latency to keep the audio in sync without me knowing it.

One thing that makes me sceptical about Nvidia's new tech having no backward compatibility is that the artifacts and issues with DLSS 3, as explained in the video below, look a lot like the artifacts I see in movies and TV shows when SVP settings are not tuned.

I wonder what motion data it gets from the game. Optical flow is all about detecting motion from the video data; if it were really also getting positional information describing the motion of objects, rather than just video data, wouldn't there be fewer artifacts?
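
To illustrate the difference I mean (a small NumPy sketch with made-up data, nothing to do with Nvidia's actual implementation): if the engine hands you per-pixel motion vectors, you already know where everything moved and can warp the frame directly, whereas optical flow has to estimate that motion from the pixels and can guess wrong where it's ambiguous, which is where these artifacts seem to come from.

```python
import numpy as np

# Made-up example: warp frame N halfway toward frame N+1 using per-pixel
# motion vectors, the kind of data a game engine could hand over directly.
# Optical flow would instead have to *estimate* these vectors from the images.
def warp_half(frame, motion):
    """frame: (H, W) image; motion: (H, W, 2) displacement in pixels (dy, dx) to the next frame."""
    h, w = frame.shape
    out = np.zeros_like(frame)
    ys, xs = np.mgrid[0:h, 0:w]
    ty = np.clip(np.round(ys + 0.5 * motion[..., 0]).astype(int), 0, h - 1)
    tx = np.clip(np.round(xs + 0.5 * motion[..., 1]).astype(int), 0, w - 1)
    out[ty, tx] = frame[ys, xs]  # nearest-neighbour "splat" of each pixel
    return out

frame = np.arange(16, dtype=float).reshape(4, 4)
motion = np.zeros((4, 4, 2))
motion[..., 1] = 2.0             # everything moves 2 px to the right
print(warp_half(frame, motion))  # midpoint frame: content shifted ~1 px right
# Note the column of zeros left where content moved away - exactly the kind
# of hole a frame-generation pass has to invent plausible pixels for.
```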

 

I was always a bit surprised there was no SVP-style technology for gaming yet; I assumed the ±1 frame of latency was not acceptable, but I guess it is in many cases.

 


See? This is what I hate about nVidia! If the RTX 2000/3000 series can't do it as well as the new Lovelace cards, don't lock Turing and Ampere card owners out of it; simply allow it, state categorically that performance and/or image quality may be somewhat compromised, and let users decide. I understand nVidia wants to sell truckloads of their Lovelace cards and is a business after all, but it still bugs me that this smacks of planned obsolescence.



10 minutes ago, GamerDude said:

See? This is what I hate about nVidia! If the RTX 2000/3000 series can't do it as well as the new Lovelace cards, don't lock Turing and Ampere card owners out of it; simply allow it, state categorically that performance and/or image quality may be somewhat compromised, and let users decide.

We're going around in circles. It's likely that Nvidia isn't done with DLSS 3 on Ada yet, given some of the observed limitations with it. Working on adding support for older products probably isn't their priority, but it may come in time. Again, that someone claims to have got it working on older-gen GPUs doesn't mean it is ready to be deployed to the masses. I've not followed up on it myself; has anyone else managed to get it working on older GPUs, or seen proof the original claim even happened?



5 hours ago, GamerDude said:

simply allow it, state categorically that performance and/or image quality may be somewhat compromised, and let users decide.

The user themselves stated "Doing this causes some instability and frame drops". Performance being ass is one thing, but instability is something you don't want from a feature that Nvidia, by enabling the setting, would imply is officially supported and intended to work. Their performance comparison is also uncertain, in my opinion, since they used two different DLSS presets. Their DLSS 2 numbers were set to Quality, while the DLSS 3 ones were set to Balanced. Why not have both run at Balanced or Quality? Is DLSS 3 Balanced the same quality as DLSS 2 Quality? Was the performance gap no longer as impressive? Was the 2080 Ti not able to keep up, or too unstable, at DLSS 3 Quality? All we can really tell is that supposedly it "works".



14 hours ago, jakkookkaj said:

If they want DLSS 3.0 to be a 40X0 generation selling point, is their "official story" going to end up being that a 4050 is fast enough for DLSS 3, but a 3090 isn't? I sure hope they know better. Optical flow and turning 4K 24 FPS into 120 FPS is what my 3070 does every time I watch a movie. (SVP 4 Pro)

This is a ridiculous comparison.

It's like saying "my phone can play a 4K video, so why can't it run Cyberpunk at 4K!?"

 

DLSS 3 is not like interpolating frames in a movie, just like decoding a 4K video is not the same as rendering it in real time in a game engine.

 

 

 

6 hours ago, porina said:

We're going around in circles. It's likely that Nvidia isn't done with DLSS 3 on Ada yet, given some of the observed limitations with it. Working on adding support for older products probably isn't their priority, but it may come in time. Again, that someone claims to have got it working on older-gen GPUs doesn't mean it is ready to be deployed to the masses. I've not followed up on it myself; has anyone else managed to get it working on older GPUs, or seen proof the original claim even happened?

I don't even think we can classify this as "getting it working".

The person who bypassed the lock experienced "instability" (I assume that means the game crashed) and frame drops.

 

Also, do we even have any evidence of them getting it working? It's just a random post on Reddit with zero evidence provided, and when someone asked how, he just went "I have shared as much as I could". When someone asked him to share the config to get it working, he replied that it wouldn't work because you need a super special build of the game that only he has access to. It's straight-up "my uncle works at Nintendo"-tier rumors.

 

But I am not surprised that people will believe literally anything, even a random person on Reddit, as long as it fits the narrative that Nvidia are comically evil and bad.



