
nVidia building up for RTX release hype: 9 more DLSS games & more 4K perf comparisons (Plus a GN Video with details)

WMGroomAK
5 hours ago, Taf the Ghost said:

From the white paper.

 

Interesting stuff is actually below it on the new shader abilities. This is really more of a major refinement to Pascal. I fully expect Ampere to be 85% just a die-shrink of Turing.

 

I was expecting rendering at a lower resolution then upscaling to a higher target resolution; it's the only realistic way they could achieve their claimed performance increases. I'll be very interested to compare a true 4K render with no AA against 4K DLSS (I wonder what resolution it gets rendered at too: 1440p? 1080p? Does it depend on the game?).
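As a rough illustration of the render-lower-then-upscale idea described in the post above, here is a minimal PyTorch-style sketch. The ToyUpscaler network, its layer sizes, and the fixed 1.5x scale factor are hypothetical stand-ins, not Nvidia's actual model.

```python
# Conceptual sketch only: render below the target resolution, then let a small
# learned network reconstruct the higher-resolution frame.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyUpscaler(nn.Module):
    """Hypothetical stand-in for a trained DLSS-style upscaler."""
    def __init__(self, scale=1.5):
        super().__init__()
        self.scale = scale
        self.conv1 = nn.Conv2d(3, 32, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(32, 3, kernel_size=3, padding=1)

    def forward(self, low_res_frame):
        # Upsample to the target size first, then refine detail with the convs.
        x = F.interpolate(low_res_frame, scale_factor=self.scale,
                          mode="bilinear", align_corners=False)
        x = F.relu(self.conv1(x))
        return self.conv2(x)

# Tiny random tensor standing in for a rendered frame (a real case would be
# e.g. 1440p in, 4K out); scale 1.5 maps 144x256 up to 216x384.
frame = torch.rand(1, 3, 144, 256)
print(ToyUpscaler()(frame).shape)  # torch.Size([1, 3, 216, 384])
```

The open question above, which resolution the game actually renders at, would just be a different input size feeding the same kind of network.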


Just now, leadeater said:

I was expecting rendering at a lower resolution then upscaling to a higher target resolution; it's the only realistic way they could achieve their claimed performance increases. I'll be very interested to compare a true 4K render with no AA against 4K DLSS (I wonder what resolution it gets rendered at too: 1440p? 1080p? Does it depend on the game?).

I'm thinking the DGX does its thing to build the model in matrix form and passes it to the users, and the users' tensor cores handle all the AA after the fact.


1 minute ago, pas008 said:

I'm thinking the DGX does its thing to build the model in matrix form and passes it to the users, and the users' tensor cores handle all the AA after the fact.

One thing that will be interesting is whether all of this gets optimized back into the game code, becomes part of the nVidia driver, or has some other implementation. I could see this becoming part of the game drivers, as that would allow for easier optimization after a game's release.


On 9/13/2018 at 4:47 PM, WMGroomAK said:

This neural network model is then distributed via GeForce Experience to end users who have a GPU with tensor cores and have the given game installed

This part bugs me a little.


12 minutes ago, pas008 said:

I'm thinking the DGX does its thing to build the model in matrix form and passes it to the users, and the users' tensor cores handle all the AA after the fact.

I'm more interested in the actual result; the Nvidia DLSS training isn't real time either, so you're going to have to get supplemental updates from Nvidia for it to work at all and to fix errors or improve the image as time goes on. I also wonder how big those DLSS files are.

 

I also don't like Nvidia being so directly in the workflow for DLSS to work; realistically that's not viable long term, or a good idea for wider, non-big-budget game development.

 

I would look at utilizing processing power during loading screens, or something similar, across all GeForce 20 series owners to render sample 64x supersampled images and then send those back to Nvidia for the DLSS training. That should work on any game and would allow gradual visual quality improvements as more reference images come in to train DLSS.

 

The more out of band and out of Nvidia's direct control it is, the better for long-term support. DLSS support for games could die off quickly, like dedicated multiplayer servers, and then actual implementations could also start to wane. New technology buzz only lasts so long, so if it's not actually a good development process developers will just stop doing it, beyond the directly supported Nvidia titles where Nvidia would push for its use.
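As a minimal sketch of the crowd-sourced training idea in the post above, assuming a simple image-to-image loss against the 64x supersampled reference frames; the render_supersampled and render_plain functions are made-up stand-ins for engine hooks, not a real API.

```python
# Sketch: pair each expensive 64x supersampled "ground truth" frame with a
# cheap lower-resolution render of the same scene, and fit an upscaler to
# map one onto the other.
import torch
import torch.nn.functional as F

def render_supersampled(scene, samples=64):
    # Stand-in for the expensive reference render done during loading screens.
    return torch.rand(1, 3, 216, 384)   # tiny frames keep the sketch cheap

def render_plain(scene):
    # Stand-in for the cheap render the shipped model would actually see.
    return torch.rand(1, 3, 144, 256)

def training_step(model, optimizer, scene):
    low = render_plain(scene)
    target = render_supersampled(scene)
    predicted = model(low)                # attempt to reconstruct the reference
    loss = F.l1_loss(predicted, target)   # compare against the 64x SS frame
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Something like the ToyUpscaler sketch earlier in the thread would slot in as model here; the point is only that more reference frames sent back means more training pairs.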


2 minutes ago, leadeater said:

I'm more interested in the actual result; the Nvidia DLSS training isn't real time either, so you're going to have to get supplemental updates from Nvidia for it to work at all and to fix errors or improve the image as time goes on. I also wonder how big those DLSS files are.

I also don't like Nvidia being so directly in the workflow for DLSS to work; realistically that's not viable long term, or a good idea for wider, non-big-budget game development.

I would look at utilizing processing power during loading screens, or something similar, across all GeForce 20 series owners to render sample 64x supersampled images and then send those back to Nvidia for the DLSS training. That should work on any game and would allow gradual visual quality improvements as more reference images come in to train DLSS.

The more out of band and out of Nvidia's direct control it is, the better for long-term support. DLSS support for games could die off quickly, like dedicated multiplayer servers, and then actual implementations could also start to wane. New technology buzz only lasts so long, so if it's not actually a good development process developers will just stop doing it, beyond the directly supported Nvidia titles where Nvidia would push for its use.

Same here; I want reviewers to actually have many screenshots for visual comparison.


Just now, SteveGrabowski0 said:

If these cards were worth a shit the review embargo wouldn't be the day before launch.

What?

This is common practice.


On 9/13/2018 at 5:03 PM, JediFragger said:

Thousands of engineers and nVidia can't even put together a half-decent graph. What a bunch of cocks.

 

Embarrassing shit.

You think engineers put together marketing graphs? xD


6 hours ago, dizmo said:

You think engineers put together marketing graphs? xD

No, but they've gotta be able to do it better than the chimp who made that one!!!

 

:D


9 hours ago, mr moose said:

If it works in the real world the way Nvidia are marketing it, then the hype is about gaining stupid amounts of processing power without any visual drop in quality.

Well, you need a GPU with a stupid amount of power and a matching price tag in the first place!

Also, it's a bit weird that Nvidia creates something that improves visual quality and gives you more GPU power at the same time. I mean, everything from GameWorks causes a measurable performance hit, so this seems a bit odd. I'm a bit sceptical about it because IMO it seems a bit too good to be true.


4 hours ago, samcool55 said:

Well, you need a GPU with a stupid amount of power and a matching price tag in the first place!

Also, it's a bit weird that Nvidia creates something that improves visual quality and gives you more GPU power at the same time. I mean, everything from GameWorks causes a measurable performance hit, so this seems a bit odd. I'm a bit sceptical about it because IMO it seems a bit too good to be true.

I'm not sure I follow; are you saying this is bad because they've done something good and it's therefore suspicious? Just because GameWorks needs a lot of computational power to make the most of it doesn't mean they were never aiming to deliver as much computational power as possible.


These graphs suck.

A 1080 Ti does 70 fps at 4K ultra settings.


DLSS sounds like a stupid idea. It should work more like DSR; instead, you'll be even more dependent on NVIDIA, and only in supported titles. That's just stupid design.


20 minutes ago, RejZoR said:

DLSS sounds like a stupid idea. It should work more like DSR; instead, you'll be even more dependent on NVIDIA, and only in supported titles. That's just stupid design.

It's already in 25 titles and the card hasn't hit the shelves; I'd say, regardless of whether it is good or not, you won't have to worry about it. You can still play all the games with AA instead.


25 titles. Out of how many games in existence? Hundreds of thousands. And only new games. What if I want it applied to a one-year-old game? Oh, tough luck, right? This is why I hate features that are only applicable to new games. Not everyone plays just the new games.


1 hour ago, RejZoR said:

25 titles. Out of how many games in existence? Hundreds of thousands. And only new games. What if I want it applied to a one-year-old game? Oh, tough luck, right? This is why I hate features that are only applicable to new games. Not everyone plays just the new games.

You mean old games like Ark: Survival Evolved? Yeah, it has DLSS.

It is a new way to do AA, and it does not seem to be hard to implement at all.

 

Given the major performance uplift, every game that is GPU bound will want to include it in order to make people happy with their new flagships. After all, people who spend $800+ on a GPU are more likely to also spend on DLC, microtransactions, etc. So I would not worry about missing support for it.

But like every form of AA, it won't be in every game. It also does not need to be in every game in order to be a good thing.


Can anyone with machine learning experience explain why they would need to manually do the whole training process on a game-by-game basis? Correct me if I'm wrong, but I'd assume that the datacenter side of the training is mainly just pattern recognition of objects and environments in the given game.

 

However, look at a lot of AAA games like Tomb Raider or GTA and you have cities and farms and mountains and jungles, dozens of vehicles of different shapes and sizes, architecture that's all over the spectrum, and all sorts of different human and animal models. At what point is there so much variety in one or two big games that you can generalize the algorithms to just work on everything?

 

I'm probably just fundamentally misunderstanding how the AI process works, but if it's just extrapolating based on patterns then it seems like the level of variety in one AAA game is probably greater than the minor artistic differences you'd get with similar objects in a different game.


11 hours ago, RejZoR said:

25 titles. Out of how many games in existence? Hundreds of thousands. And only new games. What if I want it applied to a one-year-old game? Oh, tough luck, right? This is why I hate features that are only applicable to new games. Not everyone plays just the new games.

It's been applied to one-year-old games. PUBG has it, and it is being deployed into games faster than any other tech introduced. Are you also upset because PhysX isn't incorporated into all games? What about Vulkan? Hey, what happened to Mantle, is that stupid too?

 

It sounds like you just want to be angry.


25 minutes ago, Waffles13 said:

I'm probably just fundamentally misunderstanding how the AI process works, but if it's just extrapolating based on patterns then it seems like the level of variety in one AAA game is probably greater than the minor artistic differences you'd get with similar objects in a different game.

 

You are pretty much on point here, with only a minor misunderstanding:

You can NEVER have enough data. And the more you try to generalize, the more errors you get.

 

Basically, there is no reason to not do the process for every game. It takes a few hours for them to do on their servers and is a few MB of data for the user.

And the result will certainly be better if all the data points come from one game only. The art style on the pixel level can vary a lot more than you would notice when looking at the full picture.

But more data is more better. Always.

 

Maybe that is why GeForce Experience is in the mix, so not all people need to download data for a game they don't even have installed. But that is pure speculation on my part.
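Purely as speculation on that distribution point, a per-title sync through GeForce Experience could look roughly like the sketch below, keeping downloads to a few MB per installed game. Every name and URL here is invented for illustration; this is not Nvidia's actual mechanism.

```python
# Hypothetical sketch: fetch trained DLSS model blobs only for installed titles.
import os
import urllib.request

MODEL_INDEX = {
    # game id      -> made-up location of that game's trained model (a few MB)
    "pubg":          "https://example.invalid/dlss/pubg_v3.bin",
    "ark_survival":  "https://example.invalid/dlss/ark_v1.bin",
}

def sync_dlss_models(installed_games, cache_dir="dlss_cache"):
    os.makedirs(cache_dir, exist_ok=True)
    for game in installed_games:
        url = MODEL_INDEX.get(game)
        if url is None:
            continue  # no trained profile published for this title yet
        dest = os.path.join(cache_dir, game + ".bin")
        if not os.path.exists(dest):
            urllib.request.urlretrieve(url, dest)  # download only what's needed
```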


14 hours ago, Waffles13 said:

 

I blame Linus for this; we are missing a DLSS video.

 

 

Also, am I wrong in expecting something like a GTX 2050 with DLSS to beat an old-school 1080 or 1080 Ti?


1 hour ago, asus killer said:

I blame Linus for this; we are missing a DLSS video.

 

 

Also, am I wrong in expecting something like a GTX 2050 with DLSS to beat an old-school 1080 or 1080 Ti?

Any GTX card will presumably be missing tensor cores, so DLSS will be out of the picture.

 

1 hour ago, Tech Enthusiast said:

Basically, there is no reason to not do the process for every game. It takes a few hours for them to do on their servers and is a few MB of data for the user.

And the result will certainly be better if all the data points come from one game only. The art style on the pixel level can vary a lot more than you would notice when looking at the full picture.

But more data is more better. Always.

I just hope they add some generalized profiles eventually, because no matter how trivial it would be, you know they're never going to go back and add profiles for older games that aren't super active.

 

Maybe they just keep adding profiles for every new game that comes out, and then use that data to constantly update the "base" profile(s) that work on anything else. Maybe have 2-3 different generalized profiles for "realistic", "cartoony", etc., selectable through GFE.

 

I guess you can make an argument that older games are easier to run and therefore just use supersampling, but if the tech is anywhere near as good as they claim, then presumably you could bump up way higher with a DLSS "faked" resolution than with straight rendering.


2 minutes ago, Waffles13 said:

I guess you can make an argument that older games are easier to run and therefore just use supersampling, but if the tech is anywhere near as good as they claim, then presumably you could bump up way higher with a DLSS "faked" resolution than with straight rendering.

For older games you could also use it to upscale the output resolution above what the original game engine supported, maybe. That would be especially nice for games that also didn't support widescreen.


Just now, leadeater said:

For older games you could also use it to upscale the output resolution above what the original game engine supported, maybe. That would be especially nice for games that also didn't support widescreen.

I didn't even think of that, good point.

 

It could also be really useful for games without proper UI scaling. Playing old RTSes at 1440p sucks sometimes.


 

 

@WMGroomAK and @leadeater, Tim addressed DLSS a bit. Normal DLSS runs a game at some percentage lower than native and then upscales with specialized, per-game techniques. DLSS x2 runs at native resolution and then uses the specialized techniques to replace most of the AA functions. So, somehow, all of the takes were correct.
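To make that distinction concrete, here is a tiny hedged sketch of the two modes as described above; render_frame and model are placeholders for the engine's render call and the trained per-game network, not real APIs.

```python
# Standard DLSS: render below native, then reconstruct native resolution.
def dlss_standard(render_frame, model, scale=0.65):
    # scale is a made-up example of "some percentage lower than native".
    low = render_frame(height=int(2160 * scale), width=int(3840 * scale))
    return model(low)           # model upscales to 3840x2160

# DLSS x2: render at native resolution; the model only replaces the AA pass.
def dlss_2x(render_frame, model):
    native = render_frame(height=2160, width=3840)
    return model(native)        # anti-aliased native-resolution frame out
```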

