
Death Stranding (PC) supports DLSS 2.0, allowing 4K at 60+ FPS on any RTX GPU

illegalwater

Death Stranding's PC port supports DLSS 2.0; every RTX card can comfortably run 4K at 60+ FPS with it enabled.

Quote

As far as image quality goes, playing the game with and without DLSS, I couldn't tell whether it was on or off without looking at the settings or framerate. I've even got screenshots taken with DLSS using both the quality and performance modes, along with no DLSS and TAA (Temporal Anti-Aliasing). The performance mode looks just a tad worse, though at higher resolutions it's far less noticeable. Against TAA, I think DLSS quality mode looks better, partly because TAA tends to over blur things.

 

Moving on to performance, DLSS as usual makes less of a difference on the fastest RTX GPUs, particularly at lower resolutions. The RTX 2080 Ti basically performed the same at 1080p with or without DLSS (DLSS was 2% faster), hitting an apparent CPU limit of around 160-165 fps. At 1440p, DLSS quality mode provided a more noticeable 23% boost in framerates (156 fps vs. 127 fps), while at 4K it delivered a 34% improvement (105 fps vs. 78 fps). We didn't test the RTX 2080 Ti in DLSS performance mode, as Death Stranding isn't the sort of game that really needs much more than 60 fps, and at 4K it's already well above that mark.

 

The RTX 2060 is the bottom of the RTX line, and it benefited more from DLSS at lower resolutions. DLSS quality mode improved performance at 1080p from 103 fps to 128 fps, a 25% increase. 1440p went from 75 fps to 100 fps (33% faster), and 4K improved from 43 to 56 fps (31%). It looks like DLSS quality mode can boost performance by 30-35% overall, provided you're fully GPU limited. What about DLSS performance mode? That was enough to take even the RTX 2060 above 60 fps (77 fps to be precise) at 4K. Obviously there's a lot less rendering work going on, since DLSS performance mode renders at half the vertical and horizontal resolution (so 4K DLSS performance is 1080p with DLSS upscaling). Still, unless you're pixel peeping it's difficult to see the difference between the various upscaling modes at 4K.

The performance gains are impressive; each GPU practically moves up a tier with DLSS enabled. I'm hoping we see most new games implement DLSS over the next few years, especially as real-time ray tracing goes mainstream.
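For anyone who wants to sanity-check the quoted figures, the uplifts and the performance-mode render resolution work out roughly like this (a throwaway Python snippet using the rounded framerates from the article, so the computed percentages can differ by a point or so from the quoted ones):

```python
# Sanity check of the DLSS quality-mode uplifts quoted from the Tom's Hardware
# preview. Pairs are (DLSS quality fps, native TAA fps); the article rounds
# the framerates, so the computed percentages can be slightly off.
results = {
    "RTX 2080 Ti @ 1440p": (156, 127),
    "RTX 2080 Ti @ 4K": (105, 78),
    "RTX 2060 @ 1080p": (128, 103),
    "RTX 2060 @ 1440p": (100, 75),
    "RTX 2060 @ 4K": (56, 43),
}

for label, (dlss_fps, native_fps) in results.items():
    uplift = (dlss_fps / native_fps - 1) * 100
    print(f"{label}: {native_fps} -> {dlss_fps} fps (+{uplift:.0f}%)")

# DLSS performance mode halves the resolution on each axis, so a 4K target
# is rendered internally at 1080p (a quarter of the pixels) before upscaling.
target_w, target_h = 3840, 2160
internal_w, internal_h = target_w // 2, target_h // 2
print(f"4K performance mode renders internally at {internal_w}x{internal_h} "
      f"({internal_w * internal_h / (target_w * target_h):.0%} of the pixels)")
```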

 

Sources

https://www.tomshardware.com/news/death-stranding-pc-dlss-performance-preview

 


 

Dell S2721DGF - RTX 3070 XC3 - i5 12600K


DLSS 2 is very impressive in Control, so I know I'll get good visuals and frame rates with this title.

RIG#1 CPU: AMD, R 7 5800x3D| Motherboard: X570 AORUS Master | RAM: Corsair Vengeance RGB Pro 32GB DDR4 3200 | GPU: EVGA FTW3 ULTRA  RTX 3090 ti | PSU: EVGA 1000 G+ | Case: Lian Li O11 Dynamic | Cooler: EK 360mm AIO | SSD#1: Corsair MP600 1TB | SSD#2: Crucial MX500 2.5" 2TB | Monitor: ASUS ROG Swift PG42UQ

 

RIG#2 CPU: Intel i9 11900k | Motherboard: Z590 AORUS Master | RAM: Corsair Vengeance RGB Pro 32GB DDR4 3600 | GPU: EVGA FTW3 ULTRA  RTX 3090 ti | PSU: EVGA 1300 G+ | Case: Lian Li O11 Dynamic EVO | Cooler: Noctua NH-D15 | SSD#1: Corsair MP600 1TB | SSD#2: Crucial MX300 2.5" 1TB | Monitor: LG 55" 4k C1 OLED TV

 

RIG#3 CPU: Intel i9 10900kf | Motherboard: Z490 AORUS Master | RAM: Corsair Vengeance RGB Pro 32GB DDR4 4000 | GPU: MSI Gaming X Trio 3090 | PSU: EVGA 1000 G+ | Case: Lian Li O11 Dynamic | Cooler: EK 360mm AIO | SSD#1: Crucial P1 1TB | SSD#2: Crucial MX500 2.5" 1TB | Monitor: LG 55" 4k B9 OLED TV

 

RIG#4 CPU: Intel i9 13900k | Motherboard: AORUS Z790 Master | RAM: Corsair Dominator RGB 32GB DDR5 6200 | GPU: Zotac Amp Extreme 4090  | PSU: EVGA 1000 G+ | Case: Streacom BC1.1S | Cooler: EK 360mm AIO | SSD: Corsair MP600 1TB  | SSD#2: Crucial MX500 2.5" 1TB | Monitor: LG 55" 4k B9 OLED TV


Sounds promising; I'd like to exercise my TV's 4K60+ capability.

 

On the flip side, it might suck being a game benchmarker... each variable increases the number of test scenarios. We might see going forwards:

AMD native rendering

nvidia native rendering

nvidia DLSS performance mode

nvidia DLSS quality mode

 

And in addition to the performance aspect, there'll probably be some comparison of subjective quality also, especially if you start trading between quality and performance.

 

Will AMD implement something similar with next gen?

Main system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, Corsair Vengeance Pro 3200 3x 16GB 2R, RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


I'd prefer it if DLSS were game-agnostic, so you could use it with any game without it having to be coded specifically for DLSS.


Kojima once again makes a port that is well optimized.

Specs: Motherboard: Asus X470-PLUS TUF gaming (Yes I know it's poor but I wasn't informed) RAM: Corsair VENGEANCE® LPX DDR4 3200Mhz CL16-18-18-36 2x8GB

            CPU: Ryzen 9 5900X          Case: Antec P8     PSU: Corsair RM850x                        Cooler: Antec K240 with two Noctura Industrial PPC 3000 PWM

            Drives: Samsung 970 EVO plus 250GB, Micron 1100 2TB, Seagate ST4000DM000/1F2168 GPU: EVGA RTX 2080 ti Black edition


DLSS 2.0 is pretty much NVIDIA's killer feature right now. Happy to see it implemented in more games.

 

 

The Workhorse (AMD-powered custom desktop)

CPU: AMD Ryzen 7 3700X | GPU: MSI X Trio GeForce RTX 2070S | RAM: XPG Spectrix D60G 32GB DDR4-3200 | Storage: 512GB XPG SX8200P + 2TB 7200RPM Seagate Barracuda Compute | OS: Microsoft Windows 10 Pro

 

The Portable Workstation (Apple MacBook Pro 16" 2021)

SoC: Apple M1 Max (8+2 core CPU w/ 32-core GPU) | RAM: 32GB unified LPDDR5 | Storage: 1TB PCIe Gen4 SSD | OS: macOS Monterey

 

The Communicator (Apple iPhone 13 Pro)

SoC: Apple A15 Bionic | RAM: 6GB LPDDR4X | Storage: 128GB internal w/ NVMe controller | Display: 6.1" 2532x1170 "Super Retina XDR" OLED with VRR at up to 120Hz | OS: iOS 15.1


11 hours ago, porina said:

Sounds promising; I'd like to exercise my TV's 4K60+ capability.

 

On the flip side, it might suck being a game benchmarker... each variable increases the number of test scenarios. We might see going forwards:

AMD native rendering

nvidia native rendering

nvidia DLSS performance mode

nvidia DLSS quality mode

 

And in addition to the performance aspect, there'll probably be some comparison of subjective quality also, especially if you start trading between quality and performance.

 

Will AMD implement something similar with next gen?

AMD also have a sharpening technology.


11 hours ago, RejZoR said:

I'd prefer it if DLSS were game-agnostic, so you could use it with any game without it having to be coded specifically for DLSS.

Hopefully that's the goal of the next iteration of DLSS.


2 minutes ago, Inelastic said:

Hopefully that's the goal of the next iteration of DLSS.

Along with RTX...

AMD blackout rig

 

cpu: ryzen 5 3600 @4.4ghz @1.35v

gpu: rx5700xt 2200mhz

ram: vengeance lpx c15 3200mhz

mobo: gigabyte b550 aorus pro

psu: cooler master mwe 650w

case: masterbox mbx520

fans: Noctua industrial 3000rpm x6

 

 


4 hours ago, Letgomyleghoe said:

Along with RTX...

If they can pull off just what the RTGI shader for ReShade can do, that would be insane. Because even though it's just screen-space ambient occlusion and lighting, it creates such an amazing effect on top of nearly any game. Games from 2009 feel like they have better lighting, shadowing and depth with it than current new games.


DLSS has to be programmed for each game. It is not likely to become game-agnostic any time soon. You'd essentially need NVIDIA's supercomputer in your PC to do that.

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


I still have doubts about all the AI nonsense...


2 hours ago, mr moose said:

DLSS has to be programmed for each game. It is not likely to become game-agnostic any time soon. You'd essentially need NVIDIA's supercomputer in your PC to do that.

IIRC, DLSS 2.0 made it so that it doesn't have to be trained for a specific game, as it uses a more general model. It still needs to be implemented by the developers, but the general model means it's easier to implement and supposedly should get better over time as the drivers get updated.

1 hour ago, RejZoR said:

I still have doubts about all the AI nonsense...

I thought the same, tbh, and DLSS 1.0 really reinforced my doubts. But after seeing DLSS 2.0, I'm slowly coming around to it.



For a long time people underestimated DLSS and Tensor cores, but we're seeing more and more that they're a feature that could give Nvidia cards much more performance for the same money as AMD cards.

Let's hope AMD has something similar.

Edited by Drama Lama



7 hours ago, schwellmo92 said:

AMD also have a sharpening technology.

So do Nvidia. I have no idea how many people actually use it, since things have been quiet on that front since the announcement.



1 hour ago, D13H4RD said:

IIRC, DLSS 2.0 made it so that it doesn't have to be trained for a specific game, as it uses a more general model. It still needs to be implemented by the developers, but the general model means it's easier to implement and supposedly should get better over time as the drivers get updated.

I thought the same, tbh, and DLSS 1.0 really reinforced my doubts. But after seeing DLSS 2.0, I'm slowly coming around to it.

It doesn't require game-specific content, but it still needs to be trained. I don't think that training can be done on your RTX card, so while more games will be viably catered for, not all games will benefit like they did with other features such as AA. Time will tell how far we go, I guess. I'm just not expecting widespread adoption any time soon.



2 hours ago, D13H4RD said:

IIRC, DLSS 2.0 made it so that it doesn't have to be trained for a specific game, as it uses a more general model. It still needs to be implemented by the developers, but the general model means it's easier to implement and supposedly should get better over time as the drivers get updated.

I thought the same, tbh, and DLSS 1.0 really reinforced my doubts. But after seeing DLSS 2.0, I'm slowly coming around to it.

Making details out of thin air goes against all logic. If they can miraculously create details from nothing, then how come we still don't have algorithms that can actually upscale images this way? And images would be a much easier task, since they're static, as opposed to moving 3D graphics that are far more complex and even have a third axis.

 

Not to mention this "AI" would have to run in real time and do all this upscaling magic without introducing any delay.


23 hours ago, illegalwater said:

Death Stranding's PC port supports DLSS 2.0; every RTX card can comfortably run 4K at 60+ FPS with it enabled.

The performance gains are impressive; each GPU practically moves up a tier with DLSS enabled. I'm hoping we see most new games implement DLSS over the next few years, especially as real-time ray tracing goes mainstream.

 

Sources

https://www.tomshardware.com/news/death-stranding-pc-dlss-performance-preview

That's great news, especially for RTX 2060 owners. Hope more and more games implement this now.

System Specs:

  • CPU: AMD Ryzen 5 2600 @3.40ghz (6C/12T)
  • Motherboard: Asus Prime B450M-K
  • RAM: Corsair LPX 2*8GB DDR4 @3200mhz CL16
  • GPU: Zotac Nvidia GeForce RTX 2060 (TU-104 variant) 6GB GDDR6
  • Case: Antec GX202
  • Storage: SSD: WD 240GB, HDD: Toshiba 1TB @7200rpm
  • PSU: Corsair CX550 (2017 grey unit)
  • Display(s): BenQ 22 inches monitor, 1080p @ 60hz
  • Cooling: 3 Antec case fans 120 mm (2 blue LED front intake and 1 non LED rear exhaust)
  • Sound: Behringer UMC22 USB Audio interface, AudioTechnica ATH M20x studio headphones
  • Operating System: Windows 10 Pro 64 bit.

 

24 minutes ago, RejZoR said:

Making details out of thin air goes against all logic. If they can miraculously create details from nothing, then how come we still don't have algorithms that can actually upscale images this way? And images would be a much easier task, since they're static, as opposed to moving 3D graphics that are far more complex and even have a third axis.

Not to mention this "AI" would have to run in real time and do all this upscaling magic without introducing any delay.

 

"Moving 3D graphics that are far more complex and even have 3rd axis" that you see while gaming are a series of rendered images (-> frames)

 

With DLSS 2.0, the shaders in RTX cards are free to render those images at a lower (aliased) resolution (e.g. 720p to target 1080p), so more frames are produced in the same amount of time. The card's Tensor cores are then used to upscale each image with an AI convolutional encoder in real time.

 

They don't make "details out of thin air"; they have an exhaustively trained deep neural network that learns by comparing the results of its processing to high-quality 16K reference images.

 

I suggest not underestimating AI or dismissing it as something that "is impossible and will never exist", because a lot of companies (including Nvidia) have already understood AI's potential and are shifting their focus and resources towards it.
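To make that a bit more concrete, here's a toy sketch (PyTorch, with made-up layer sizes) of the general kind of building block being described: render a frame at low resolution, push it through a small learned convolutional network, and get a frame at the target resolution out. This is nothing like Nvidia's actual DLSS model, which also consumes motion vectors and previous frames and runs on the Tensor cores; it only illustrates the idea of learned upscaling:

```python
import torch
import torch.nn as nn

class ToyUpscaler(nn.Module):
    """A tiny ESPCN-style 2x super-resolution network.

    Purely illustrative: the real DLSS network is temporal and far larger;
    this only shows learned upscaling with convolutions + sub-pixel shuffle.
    """
    def __init__(self, scale: int = 2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            # produce scale^2 * 3 channels, then rearrange them into an
            # image that is scale times larger on each axis
            nn.Conv2d(32, 3 * scale * scale, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, low_res: torch.Tensor) -> torch.Tensor:
        return self.body(low_res)

if __name__ == "__main__":
    model = ToyUpscaler(scale=2)
    # a fake 1080p frame: (batch, channels, height, width)
    frame_1080p = torch.rand(1, 3, 1080, 1920)
    with torch.no_grad():
        frame_4k = model(frame_1080p)
    print(frame_4k.shape)  # torch.Size([1, 3, 2160, 3840])
```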


12 minutes ago, Mamonos said:

 

 

"Moving 3D graphics that are far more complex and even have 3rd axis" that you see while gaming are a series of rendered images (-> frames)

 

With DLSS 2.0 the shaders in RTX cards are free to render those images at a (aliased) lower resolution (i.e. 720p to target 1080p, therefore more frames produced in the same amount of time). Meanwhile the card's tensor cores are then used to upscale the image using an AI convolutional encoder in real time.

 

They don't make "details out of thin air", they have an exhaustively trained deep neural network that learns by comparing the results of it's processing to high quality 16K reference images.

 

I suggest to not underestimate AI and address it like it is something that "it is impossible and will never exist" because a lot of companies (including nVidia) have already understood AI potential and are shifting their focus and resources towards that.

There is no way the AI works like that. It's just a grid-based interpolation system with AI features to choose which parts of the image have to be rendered at full resolution. If you could really use that AI for true upscaling, every streaming platform would pay anything to get in on it. A true 8K image with 720p bandwidth.


18 minutes ago, Jeppes said:

There is no way the AI works like that. It's just a grid-based interpolation system with AI features to choose which parts of the image have to be rendered at full resolution. If you could really use that AI for true upscaling, every streaming platform would pay anything to get in on it. A true 8K image with 720p bandwidth.

This makes far more sense than shit like AI creating details out of nothing. Also, the "AI" training is BS imo. What really happens is modeling the thing to work with a particular game, the same way you have to fiddle with ReShade to get the depth buffer to work and adjust all the stuff to get good results. It's probably similar here: adjusting the selective rendering to work properly in the game, not training it to make shit out of nothing somehow. That part never made sense to me, and neither did all the AI marketing speak. But you just mentioned selective detailed rendering and it all makes sense now.


 

1 hour ago, RejZoR said:

Making details out of thin air goes against all logic. If they can miraculously create details from nothing, then how come we still don't have algorithms that can actually upscale images this way? And images would be a much easier task, since they're static, as opposed to moving 3D graphics that are far more complex and even have a third axis.

Seek out Taran's content on waifu2x. Video output is essentially no more than a sequence of 2D scenes if you ignore temporal information, which can be useful information in itself.



33 minutes ago, Jeppes said:

There is no way the AI works like that. It's just a grid-based interpolation system with AI features to choose which parts of the image have to be rendered at full resolution. If you could really use that AI for true upscaling, every streaming platform would pay anything to get in on it. A true 8K image with 720p bandwidth.

 

Developing and training such networks to make sure they are able to produce reliable results is complex and expensive.

In addition to this, you need specific hardware (not necessarily expensive) on the end-user side to achieve this kind of real-time AI upscaling.

 

This is why you don't see it advertised by (say) Netflix. Moreover, they would rather invest their money in content creation than in upscaling technologies, because for most users the (amount of) entertainment content matters more than the resolution - and anyway, ISPs in most countries already provide enough bandwidth to deliver high-res content (given the compression they use). Finally, they could rely on hardware manufacturers to upscale the content for them.

 

By contrast, hardware manufacturers are quite invested in this.

The Nvidia Shield uses AI upscaling in its latest device; it can upscale 720p/1080p to 4K at 30 fps in real time.

Samsung developed a comparable technology (MLSR) to upscale content for its new 8K TV lineup, and LG and Sony are on board as well.

 

And this is just a small percentage of the applications that are based on deep neural networks.

Most of those companies have extensive resources you can browse to understand their research, if you are interested in it.

 

One example of AI-based upscaling software that you can investigate and try (it runs offline, and therefore takes ages to render) is Gigapixel AI by Topaz.

It has been used to upscale old movies with good results.


2 hours ago, RejZoR said:

Making details out of thin air goes against all logic. If they can miraculously create details from nothing, then how come we still don't have algorithms that can actually upscale images this way? And images would be a much easier task, since they're static, as opposed to moving 3D graphics that are far more complex and even have a third axis.

Not to mention this "AI" would have to run in real time and do all this upscaling magic without introducing any delay.

It's not exactly making detail from nothing, as the model is trained using ultra-high-resolution samples, so it has an approximate idea of how to upscale the scene.

 

And there actually are implementations of AI-based upscalers, like the aforementioned Topaz Gigapixel, but also Photoshop's Preserve Details 2.0 upscaling algorithm and even waifu2x, which was specifically trained to upscale anime-style images, up to a point.
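To give a rough picture of what that training looks like in practice: take high-resolution reference frames, downscale them, and teach a network to reconstruct the originals. Below is a minimal PyTorch sketch along those lines; the tiny network is purely illustrative and random tensors stand in for real reference frames:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal stand-in for a learned 2x upscaler (see the toy network earlier in
# the thread); real training would use actual high-resolution reference
# frames rather than random tensors.
model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
    nn.Conv2d(32, 3 * 4, kernel_size=3, padding=1),
    nn.PixelShuffle(2),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for step in range(100):
    high_res = torch.rand(4, 3, 256, 256)          # fake reference frames
    low_res = F.interpolate(high_res, scale_factor=0.5,
                            mode="bilinear", align_corners=False)
    reconstructed = model(low_res)                 # upscale back to 256x256
    loss = F.l1_loss(reconstructed, high_res)      # compare to the reference
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if step % 20 == 0:
        print(f"step {step}: L1 loss {loss.item():.4f}")
```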



23 hours ago, williamcll said:

Kojima once again makes a port that is well optimized.

Makes me wonder if maybe I could actually play it. My system was blowing away the consoles at launch, then devs seemed to universally decide "eh, fuck it. Optimization is hard."

Then again, a lot of games don't even run well on the intended console platform either.

#Muricaparrotgang

