
BFV getting DLSS support and further RT improvements this week

shermantanker
8 hours ago, Taf the Ghost said:

It's a more efficient upscaling system that they can push to the fixed-function hardware in the GPU that would otherwise not be used. It's interesting, but it's still a solution in search of a problem.

I would say it's not in search of a problem; it's actually done two things to improve the product: 1. freed up CUDA cores for more raw performance, and 2. implemented a better upscaling process.

 

I see it more as an innovative improvement than a solution to a specific problem.

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


I'm just hoping that DLSS will remove load from the CPU since Ray Tracing is too CPU demanding for my PC to handle it...

Corsair iCUE 4000X RGB

ASUS ROG STRIX B550-E GAMING

Ryzen 5900X

Corsair Hydro H150i Pro 360mm AIO

Ballistix 32GB (4x8GB) 3600MHz CL16 RGB

Samsung 980 PRO 1TB

Samsung 970 EVO 1TB

Gigabyte RTX 3060 Ti GAMING OC

Corsair RM850X

Predator XB273UGS QHD IPS 165 Hz

 

iPhone 13 Pro 128GB Graphite


9 hours ago, Taf the Ghost said:

It's a more efficient upscaling system that they can push to the fixed-function hardware in the GPU that would otherwise not be used. It's interesting, but it's still a solution in search of a problem. 

I think we are going to see a lot of that in the near future. Not so much with graphics cards yet, but with CPUs we are actually starting to get very near some very hard physical limitations.

Another name for "solution in search of a problem" is "experimentation".

Regarding DLSS specifically, I think Nvidia wants to push it because it will be cheaper for them in the long run. As it stands, to develop "profiles" for whatever AAA games are coming out, they have to have a team of people sit down, screw around with the settings and run some tests, and then find the best configuration to run the game at for the best experience. DLSS seems like it's closer to a solution to that problem than it is to fixing any non-existent problems client side.

ENCRYPTION IS NOT A CRIME


1 hour ago, mr moose said:

I would say it's not in search of a problem; it's actually done two things to improve the product: 1. freed up CUDA cores for more raw performance, and 2. implemented a better upscaling process.

 

I see it more as an innovative improvement than a solution to a specific problem.

From the reviews I saw, they said/showed that 2160P DLSS looks equivalent to 1800P with traditional AA, and that frame rates are the same.


2 hours ago, mr moose said:

I would say it's not in search of a problem; it's actually done two things to improve the product: 1. freed up CUDA cores for more raw performance, and 2. implemented a better upscaling process.

 

I see it more as an innovative improvement than a solution to a specific problem.

Well, from the first run of testing, it's functionally no different than running a 4K screen at 1800p. GN only found it to be slightly better than running a 1440p Upscale with the normal upscalers. Further, there are other Upscaler systems that can be implemented that work on the current hardware, and it's not like Nvidia is offering automatic Dynamic Scaling. (That would have been very useful going forward.)

 

But the real kicker is that the Tensor Cores & RT Cores are actually for Pixar and those types of workloads, but the best they could do was come up with an upscaler that requires collecting massive amounts of data for minimal benefit to the end user? (Unless the actual point is just to collect all of that "data" from "customers".) Whenever Ampere drops on 7nm, we'll see version 2 of the Tensor & RT Cores, hopefully with expanded functions that also improve the gaming side of things in some manner.

 

Now if they could make DLSS into a post-launch, user-generated, always-updating performance-improvement system, that'd be really interesting. The ability to tell the game to preload certain assets that might cause a normal hiccup would be a huge boon.

 


1 hour ago, straight_stewie said:

I think we are going to see a lot of that in the near future. Not so much with graphics cards yet, but with CPUs we are actually starting to get very near some very hard physical limitations.

Another name for "solution in search of a problem" is "experimentation".

Regarding DLSS specifically, I think Nvidia wants to push it because it will be cheaper for them in the long run. As it stands, to develop "profiles" for whatever AAA games are coming out, they have to have a team of people sit down, screw around with the settings and run some tests, and then find the best configuration to run the game at for the best experience. DLSS seems like it's closer to a solution to that problem than it is to fixing any non-existent problems client side.

At current, DLSS is the wrong approach for what to do with the fixed-function parts of the Turing die. It really should be something more like a Dynamic Resolution Scaler that developers can add really easily, but one that is operated by the Nvidia driver and can use the RT cores to handle some part of it. Nvidia could have wrapped this together with a data tracking/gathering program so they have a performance feedback loop within every game. "Machine Learning-assisted Driver Optimization" or some such nonsense.
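
For what it's worth, the core loop such a scaler would run is pretty simple. Here's a minimal sketch in Python, assuming a fixed frame-time target and made-up thresholds; this is purely illustrative, not any actual Nvidia driver or engine API:

```python
# Hypothetical dynamic-resolution scaler: nudge the internal render resolution
# each frame to hold a target frame time. Thresholds and names are illustrative.

TARGET_FRAME_MS = 16.7          # ~60 fps target
MIN_SCALE, MAX_SCALE = 0.5, 1.0

def update_render_scale(scale: float, last_frame_ms: float) -> float:
    """Adjust the resolution scale based on how long the last frame took."""
    if last_frame_ms > TARGET_FRAME_MS * 1.05:    # running slow -> render fewer pixels
        scale -= 0.05
    elif last_frame_ms < TARGET_FRAME_MS * 0.90:  # headroom -> render more pixels
        scale += 0.05
    return max(MIN_SCALE, min(MAX_SCALE, scale))

# Each frame the engine would render at (scale * native_width, scale * native_height)
# and then upscale the result to the native output resolution.
```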

 

Right now, it's just a marketing gimmick to cover up Nvidia keeping the price/performance tiers the same because they can.


29 minutes ago, Taf the Ghost said:

Well, from the first run of testing, it's functionally no different than running a 4K screen at 1800p. GN only found it to be slightly better than running a 1440p Upscale with the normal upscalers. Further, there are other Upscaler systems that can be implemented that work on the current hardware, and it's not like Nvidia is offering automatic Dynamic Scaling. (That would have been very useful going forward.)

 

But the real kicker is that the Tensor Cores & RT Cores are actually for Pixar and those types of workloads, but the best they could do was come up with an upscaler that requires collecting massive amounts of data for minimal benefit to the end user? (Unless the actual point is just to collect all of that "data" from "customers".) Whenever Ampere drops on 7nm, we'll see version 2 of the Tensor & RT Cores, hopefully with expanded functions that also improve the gaming side of things in some manner.

 

Now if they could make DLSS into a post-launch, user-generated, always-updating performance-improvement system, that'd be really interesting. The ability to tell the game to preload certain assets that might cause a normal hiccup would be a huge boon.

 

I wouldn't throw the whole thing in the bin based on initial performance reviews; there is still a lot of tweaking and a lot of games to come.

 

Unless I am mistaken, it does have dynamic scaling. We just haven't seen it in full swing yet.

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


16 minutes ago, mr moose said:

I wouldn't throw the whole thing in the bin based on initial performance reviews; there is still a lot of tweaking and a lot of games to come.

 

Unless I am mistaken, it does have dynamic scaling. We just haven't seen it in full swing yet.

It's already trending towards the HairWorks/PhysX realm: interesting ideas that Nvidia just dropped because they really only wanted them to make AMD cards run worse. Though I also expect AMD to roll out Checkerboard Rendering abilities at some point. Probably with Navi, which could be really interesting.


1 hour ago, Taf the Ghost said:

It's already trending towards the HairWorks/PhysX realm: interesting ideas that Nvidia just dropped because they really only wanted them to make AMD cards run worse. Though I also expect AMD to roll out Checkerboard Rendering abilities at some point. Probably with Navi, which could be really interesting.

Pretty sure checkerboard rendering is a Sony IP and won't be used anywhere else

On a mote of dust, suspended in a sunbeam


4 minutes ago, Agost said:

Pretty sure checkerboard rendering is a Sony IP and won't be used anywhere else

Where are you getting this info?


2 hours ago, Taf the Ghost said:

It's already trending towards the HairWorks/PhysX realm: interesting ideas that Nvidia just dropped because they really only wanted them to make AMD cards run worse. Though I also expect AMD to roll out Checkerboard Rendering abilities at some point. Probably with Navi, which could be really interesting.

How do you figure? Most things like tessellation and asynchronous compute that were ultimately successful weren't very good on release and took a good while to take off. To say that it's trending toward the likes of HairWorks and PhysX neglects the fact that many technologies that are widely used today took quite some time to be properly implemented. It's way too early to call at this point.


24 minutes ago, pas008 said:

Where are you getting this info?

Well, it was only found on the PS4 Pro until some time ago. Upon further inspection, it looks like Intel and Microsoft are also capable of CBR/DRR.

However, it looks like Sony has sort of a patent over it (https://patents.google.com/patent/US20160005344A1/en), but it's probably just one method of doing it.
 

On a mote of dust, suspended in a sunbeam


8 hours ago, pas008 said:

What does that have to do with anything?

You can put all your effort into something; that doesn't mean you will be good at it or do it right.

It's the first game.

Drivers/software/firmware/etc. have many revisions because they lacked something previously.

 

Well, what I'm saying is that if this is a high-effort application of the technology, then I'd have serious concerns over whether or not it's going to be relevant. Most companies won't have the skill or resources that Square Enix has, either. So what good is it if 30% of the transistors on your GPU are there solely to produce an effect that can be done more cheaply by alternate means on the rest of the GPU, with almost identical performance and results?

CPU - Ryzen Threadripper 2950X | Motherboard - X399 GAMING PRO CARBON AC | RAM - G.Skill Trident Z RGB 4x8GB DDR4-3200 14-13-13-21 | GPU - Aorus GTX 1080 Ti Waterforce WB Xtreme Edition | Case - Inwin 909 (Silver) | Storage - Samsung 950 Pro 500GB, Samsung 970 Evo 500GB, Samsung 840 Evo 500GB, HGST DeskStar 6TB, WD Black 2TB | PSU - Corsair AX1600i | Display - DELL ULTRASHARP U3415W |


1 minute ago, Brooksie359 said:

How do you figure? Most things like tessellation and asynchronous compute that were ultimately successful weren't very good on release and took a good while to take off. To say that it's trending toward the likes of HairWorks and PhysX neglects the fact that many technologies that are widely used today took quite some time to be properly implemented. It's way too early to call at this point.

PhysX is pretty old, and almost no new titles implement its GPU-accelerated version.

HairWorks was pretty much a tessellation-intensive trick to harm AMD GPU performance in some games (the latest of which is FF XV, with its infamously rigged benchmark), while actually killing performance on Nvidia cards too. TressFX, on the other hand, is open source and performs much better on both brands.

Tessellation was pushed by Nvidia because, at the time, AMD GPUs really sucked at it. Just look at Crysis 2 and its extreme use of tessellation, even for stuff beneath the ground.

Async compute is coming back because Nvidia has just implemented it in Turing (after some improvements in Pascal, although it wasn't fully async compatible), despite AMD pioneering it in late 2011. Notice how this is the only performance-improving technology mentioned in this post.

On a mote of dust, suspended in a sunbeam


1 hour ago, Agost said:

PhysX is pretty old, and almost no new titles implement its GPU-accelerated version.

HairWorks was pretty much a tessellation-intensive trick to harm AMD GPU performance in some games (the latest of which is FF XV, with its infamously rigged benchmark), while actually killing performance on Nvidia cards too. TressFX, on the other hand, is open source and performs much better on both brands.

Tessellation was pushed by Nvidia because, at the time, AMD GPUs really sucked at it. Just look at Crysis 2 and its extreme use of tessellation, even for stuff beneath the ground.

Async compute is coming back because Nvidia has just implemented it in Turing (after some improvements in Pascal, although it wasn't fully async compatible), despite AMD pioneering it in late 2011. Notice how this is the only performance-improving technology mentioned in this post.

My point is that DLSS is still fairly new, and it will take time for it to get better, just like many other technologies. It could end up like PhysX, but it could also end up like asynchronous compute. Only time will tell.


6 hours ago, Taf the Ghost said:

It's already trending towards the HairWorks/PhysX realm: interesting ideas that Nvidia just dropped because they really only wanted them to make AMD cards run worse.

Making your product perform better does not make your competitors' products run worse. AMD's products didn't suddenly become worse at running the type of algorithms involved in whatever PhysX or HairWorks was doing just because those were discovered. They had the same performance for those algorithms all along, and no one noticed until they tried to use them. That is to say, new technologies, including algorithms, are discovered, not invented. At least if you ask any theorist.

 

If AMD didn't want to compete in those markets by keeping up with the competition, that was AMD's prerogative, and the consequences of those decisions are wholly AMD's to carry. It is not Nvidia's responsibility to make sure that they don't do anything that might cause them to be a higher-performance option than the competition. In fact, quite the opposite is true: it is the responsibility of every business to make sure that they offer better options than the competition. Where things get fun is that some businesses will believe that means offering a higher performance/quality product, and some businesses will think that means offering a product with a better price/performance/quality ratio.

 

ENCRYPTION IS NOT A CRIME


Okay, I'm confused as all hell.

 

Where's this comparison between DLSS and upscaling coming from, exactly? DLSS is a form of anti-aliasing, not upscaling.


1 hour ago, straight_stewie said:

Making your product perform better does not make your competitors' products run worse. AMD's products didn't suddenly become worse at running the type of algorithms involved in whatever PhysX or HairWorks was doing just because those were discovered.

Well, if you mean the actual type, then HairWorks is actually an example of that: TressFX ran with performance unimpaired on both Nvidia and AMD, while HairWorks on AMD ran like garbage. TressFX and HairWorks are functionally the same; HairWorks just is not vendor agnostic, so any game implementing HairWorks makes AMD GPUs perform worse to get the same visual experience, while the GPU is in fact capable of doing such a task without being performance impaired.


Now players will be able to see real-time rendering on the dwarf transvestite German soldiers' hair.

ASUS X470-PRO • R7 1700 4GHz • Corsair H110i GT P/P • 2x MSI RX 480 8G • Corsair DP 2x8 @3466 • EVGA 750 G2 • Corsair 730T • Crucial MX500 250GB • WD 4TB


9 minutes ago, CarlBar said:

Okay, I'm confused as all hell.

 

Where's this comparison between DLSS and upscaling coming from, exactly? DLSS is a form of anti-aliasing, not upscaling.

Because supersampling involves rendering the image at a higher resolution, which is then downsampled. DLSS is, in a way, still that: it's based off of a supersampled source image, and the resulting image is a higher resolution than the rendered image before DLSS is applied.
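
To put the "upscaling" part in concrete terms, here's a minimal sketch of what any naive upscaler does: render fewer pixels, then resample up to the output resolution. DLSS effectively swaps the dumb resample step below for a network trained against supersampled reference frames. This is purely illustrative (nearest-neighbour resampling with NumPy), not Nvidia's actual pipeline or API:

```python
import numpy as np

def nearest_neighbour_upscale(frame: np.ndarray, factor: int) -> np.ndarray:
    """Upscale an (H, W, 3) frame by an integer factor by repeating pixels."""
    return frame.repeat(factor, axis=0).repeat(factor, axis=1)

# Stand-in for a 1080p rendered frame; DLSS would replace the plain pixel
# repetition below with a trained reconstruction network.
low_res = np.random.rand(1080, 1920, 3)
native_4k = nearest_neighbour_upscale(low_res, 2)   # shape (2160, 3840, 3)
```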


6 hours ago, Agost said:

Pretty sure checkerboard rendering is a Sony IP and won't be used anywhere else

Not sure if that is true, but it wouldn't matter that much. MS would come up with their own approach, and it'll be in the next DX iteration, with AMD most likely having hardware-level support for it. AMD really is in a weird spot with their Semi-Custom work, haha.

