Battlefield V with DXR on, tested by Techspot (Hardware Unboxed)

11 hours ago, RejZoR said:

I have 3 out of 4 things with my setup.

 

+ great details because I can run anything at Ultra

+ high framerate

+ high refresh rate

- resolution

 

Frankly, resolution really isn't an issue, because slap even crappy FXAA on anything and the image is smooth enough at 1080p that individual pixels aren't a problem. If a game gives me SMAA, even better. And frankly, real-time reflections are so convincing in modern rasterized games that I really don't see any point in ray tracing until it's ready to be used at full scale at high framerates. Right now it's great for a tech demo under ideal conditions, but not really usable. Maybe in a slow-paced horror game where the action is slow and lighting and shadows play a more important role.

 

Eventually I'll be running 4K monitor, but that'll probably happen like 5 years in the future if not further, when 4K becomes as mainstream as 1080p is right now.

 

Bear in mind ray tracing can do a lot more than just reflections, it's just DICE only implemented reflections.

 

4 hours ago, S w a t s o n said:

I must admit, based on the Digital Foundry video DICE did, where they explained it was built on the old Titan V, not even using RT cores, and was already running at up to 60 FPS, I was expecting much better.

 

 

 

AFAIK the Titan V is based on a hybrid of the geforce and quadro setups and has a lot more raw compute chops than a 1080 or the like. As a result it's not surprising it did well.

 

 

 

Honestly, at this point I'm convinced it's not the RT cores that are to blame. Given what's been said about it being usable on other cards, it's clear that the pure compute path is still enabled, which raises several questions for me:

 

1. Are the RT cores even being utilized? (I suspect they are, but bear with me.) Or is the DXR workload being run entirely as a compute calculation on the rasterization part of the GPU? On paper, a 2070 with its 6 Gigarays should be able to render the ray side of things (assuming 3 master rays per pixel and 3 secondary bounces per reflection ray, for a total of 15 rays per pixel) at 4K at around 48 FPS. Certainly not 4K playable, but perfectly valid from a "can produce usable frame rates" standpoint, and the 2080 Ti's 10 Gigarays should easily break 60 FPS.
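A quick sanity check of that arithmetic (a sketch; the 15 rays per pixel figure is the assumption stated above, and the Gigaray ratings are taken at face value):

```python
# Back-of-the-envelope check of the Gigaray math above.
PIXELS_4K = 3840 * 2160      # ~8.29 million pixels
RAYS_PER_PIXEL = 15          # assumption from above: 3 master rays plus bounces

rays_per_frame = PIXELS_4K * RAYS_PER_PIXEL  # ~124 million rays per frame

for card, gigarays in [("RTX 2070", 6), ("RTX 2080 Ti", 10)]:
    fps = gigarays * 1e9 / rays_per_frame
    print(f"{card}: {fps:.0f} FPS")  # ~48 FPS and ~80 FPS respectively
```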

 

2. Have DICE switched away from their own denoising algorithm, or are they still using it? The video above mentions the game has to render the scene twice: once for raster, once for the ray tracing to use. If that's the case, then it's perfectly reasonable we'd see at least a 1/3 drop in FPS, probably closer to 40% in the real world (worst case would be a flat half). That's still going to result in better framerates than we're seeing, but if the rasterization side is also having to handle the compute cost of denoising, it suddenly makes a lot of sense. Doubly so when you note that at boost frequency the 2080 Ti has basically exactly the same half and single precision FLOPS as the Titan V, but significantly less double precision, and seems to be getting similar framerates.
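Putting rough numbers on that drop (a sketch, assuming the ray tracing pass simply adds its cost on top of the raster pass with no overlap):

```python
# Rough frame-time model for rendering the scene twice.
base_fps = 100
base_frame_ms = 1000 / base_fps

# RT pass cost as an assumed fraction of the raster pass cost.
for rt_fraction in (0.5, 0.67, 1.0):
    fps = 1000 / (base_frame_ms * (1 + rt_fraction))
    print(f"RT pass at {rt_fraction:.0%} of raster cost -> "
          f"{fps:.0f} FPS ({1 - fps / base_fps:.0%} drop)")
# 50% extra work -> 1/3 drop, 67% -> 40% drop, a full second render -> a flat half
```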

 

 

 

I really want to see some Vega 64, Titan V and 1080 Ti benchmarks now...


2 hours ago, CarlBar said:

 

Bear in mind ray tracing can do a lot more than just reflections, it's just DICE only implemented reflections.

 

 

AFAIK the Titan V is based on a hybrid of the geforce and quadro setups and has a lot more raw compute chops than a 1080 or the like. As a result it's not surprising it did well.


Of course it's much faster than a 1080, but the comparison is the 2080 Ti. I don't think double precision is used in that way, but I'm not sure about the denoiser. I would assume it's the same one DICE showed before.

 

It just makes no sense, as the Titan V only has Tensor cores and the demo was only using Tensor cores, while Turing has Tensor cores AND RT cores. They also stated they would be able to much more fully optimize and tweak it for increased ray tracing resolution AND performance, not one or the other. And yet none of that has materialized. I think something went wrong, or the guy talking to Digital Foundry was talking out his ass.

 

Edit: Right, just realized you suggested they aren't using the RT cores. While they did state they could get this DXR implementation to run on any hardware, they are basing it on AT LEAST the Tensor core method of RTX. They would have to tweak some stuff in the backend to properly support AMD. That is to say, it's not pure compute. The Tensor cores provided a large speedup, even though it's nothing compared to the RT cores if NVIDIA is to be believed. Brute force real-time ray tracing should not yet be possible.


 


9 hours ago, PhantomHawk11 said:

People expected a sizable increase in performance, if not from ray tracing (since it is so hard in real time) then from normal rendering. What they got was RTX 2080 performance being similar to 1080ti performance for the same price, and the RTX 2080ti being so expensive that it's not viable for the majority of consumers.

Did they? Or is it just a lot of angry noise from people who either can't afford it or never had any intention of buying it anyway? I haven't heard anyone who bought it actually complain about it yet.



Hypothetical question

 

If we are willing to take such a big hit as to make an RTX 2080 Ti into a 1080p 60fps GPU, then are there other methods by which we could improve graphical fidelity by a larger degree than ray tracing does?

 

What I am asking is: if ray tracing wasn't here, and you told the devs that the PC performance target was only 1080p 60fps on an RTX 2080 Ti, would they come up with something better looking than this?


@CarlBar

I know what ray tracing can do. I also know what can be done using rasterization that looks identical to 99% of people and runs 5x faster.

 

@mr moose

One thing is throwing lots of cash at shit because you're literally shitting money and want the best of the best even if it's just a 1% difference; another is throwing lots of cash at shit because it's worth that kind of money. The RTX 2080Ti ain't the second. Sure, it's reasonably faster than the GTX 1080Ti. The only problem there is the price, which doesn't justify it. If the RTX 2080Ti cost as much as the GTX 1080Ti did at launch, I'd say get it. But costing like 400€ more, sorry, just no. That's with the ray tracing thing excluded and not even taken into consideration. It's not worth it even for 99% of things it'll actually be running. The RTX part, if they charged 100€ more, fine, have the premium coz it's a new fancy thing no one else has. But charging so much more that you can buy a whole mid range card for the difference, nope.

 

I could easily buy it. Bought the GTX 1080Ti basically because why not, as it justified the premium over the vanilla GTX 1080 with a dramatic performance bump for a relatively low premium to pay. The RTX 2080Ti doesn't. Especially not compared to the GTX 1080Ti if you already have one.


51 minutes ago, mr moose said:

Did they? Or is it just a lot of angry noise from people who either can't afford it or never had any intention of buying it anyway? I haven't heard anyone who bought it actually complain about it yet.

Fair enough, and true.

Still, in tech there are always two kinds of people: those that think about a purchase and those that "just buy it". And once you buy it, especially for these insane amounts of money, it's not reasonable to expect anything other than bragging rights; saying it was a bad purchase (bad in the sense of dollars paid for what you get from it) doesn't compute for these people.

Hell, people preorder without a single number, with just stupid unreadable graphs and vague promises.

 

Those that care about money, or what their money gets them, are obviously underwhelmed.



15 minutes ago, RejZoR said:

@CarlBar

I know what ray tracing can do. I also know what can be done using rasterization that looks identical to 99% of people and runs 5x faster.

 

@mr moose

One thing is throwing lots of cash at shit because you're literally shitting money and want the best of the best even if it's just a 1% difference; another is throwing lots of cash at shit because it's worth that kind of money. The RTX 2080Ti ain't the second. Sure, it's reasonably faster than the GTX 1080Ti. The only problem there is the price, which doesn't justify it. If the RTX 2080Ti cost as much as the GTX 1080Ti did at launch, I'd say get it. But costing like 400€ more, sorry, just no. That's with the ray tracing thing excluded and not even taken into consideration.

But you are ignoring a major part of the product in order to draw that conclusion; what it's worth is different for everyone. Just because you don't see a future in it doesn't mean it doesn't have one, nor that it isn't worth the price to other people.

15 minutes ago, RejZoR said:

It's not worth it even for 99% of things it'll actually be running.

Says who? You? If you feel that way, don't buy it. No one is forcing you to, but don't ignore half the product, and basically the reality of the situation, to justify that feeling.

 

15 minutes ago, RejZoR said:

The RTX part, if they charged 100€ more, fine, have the premium coz it's a new fancy thing no one else has. But charging so much more that you can buy a whole mid range card for the difference, nope.

If that's all it's worth to you, then again, don't buy it, but just remember that what you deem something to be worth doesn't change its actual costs and value.

15 minutes ago, RejZoR said:

I could easily buy it. Bought GTX 1080Ti basically because why not as it justified the premium over vanilla GTX 1080 with dramatic performance bump for relatively low premium to pay. GTX 2080Ti doesn't. Especially not compared to GTX 1080Ti if you already have one.

I could easily buy it tomorrow, but I am still happy with my 380. However, that doesn't make this an irrational upgrade path for those who want it.

 

Do you also tell people that a car company is overcharging because the horsepower between two models is barely different, while ignoring that the more expensive model has other features like ESC and a dash cam with GPS beacon (which you may deem to be worthless)?

 

 



21 minutes ago, asus killer said:

Those that care about money, or what their money gets them, are obviously underwhelmed.

And those who ignore the entire other side of the card, and fail to see how it works or why it is such a milestone, are also underwhelmed. We can't help those people except to point out that such beliefs are not grounded in reality.



On 11/16/2018 at 10:05 AM, mr moose said:

 

That would indicate an error of expectation and a lack of understanding of the product, more than the inherent challenges of the problem it is solving.

It is a little unfair, from a consumer point of view, to be expected not to anticipate good performance with all the media hype. IMO this should have stayed on the Quadro series for a while to let it mature a little bit, whilst being available to developers, or something along those lines.


15 minutes ago, mr moose said:

And those who ignore the entire other side of the card, and fail to see how it works or why it is such a milestone, are also underwhelmed. We can't help those people except to point out that such beliefs are not grounded in reality.

You can always go back to playing at 30fps just to see some shadows or better anti-aliasing. Is it worth it? Wasn't it Linus who said he prefers good fps and refresh rate? Don't we all? And pay through the roof for it.

It will be an amazing tech in the future; for now it makes little sense in most cases.



@mr moose

Ignoring a major part of the product no one can use... Yeah, that part. "Says who? You?" No, the number of non-existent games you can run on it using actual ray tracing. The 99% is actually a very conservative statement, because the number of actual games that use it is nowhere near 1%. Like... NOWHERE NEAR.

 

Do you also attack every review site and reviewer who in the end delivers a verdict of "too expensive for what it offers"? Because that's not an opinion but a factually based statement that applies to every rational person. And yeah, that also applies to cars. How many times have reviewers said "not worth it" when an updated version of a car had like 15HP more and a few "special" badges, but cost like 10k more? Sure, idiots will buy it, but that doesn't change the fact that it's an entirely irrational product. Why wouldn't or shouldn't one be allowed to express that? Just as people are allowed to buy such products, we're allowed to say they are irrational or make no sense...


1 minute ago, RorzNZ said:

It is a little unfair, from a consumer point of view, to be expected not to anticipate good performance with all the media hype. IMO this should have stayed on the Quadro series for a while to let it mature a little bit, whilst being available to developers, or something along those lines.

 

If someone buys it because some media outlet said it was great but didn't actually post any benchmarks, then it's not unfair because of the product; it's stupidity on the consumer's part, or unfair of the media to be so complacent with the position they hold.

 

The specs were there and the basic operation of the GPU was published long before anyone laid down any cash. We all knew they have comparable CUDA cores and additional tensor cores for new operations. Pre-ordering and buying without actually paying any attention to the product is not the same as the product being overpriced for the features it has.

11 minutes ago, asus killer said:

You can always go back to playing at 30fps just to see some shadows or better anti-aliasing. Is it worth it? Wasn't it Linus who said he prefers good fps and refresh rate? Don't we all? And pay through the roof for it.

It will be an amazing tech in the future; for now it makes little sense in most cases.

No, FPS gamers maybe, but there are people who value everything else, and we don't even have any evidence that FPS won't improve with time.

 

11 minutes ago, RejZoR said:

@mr moose

Ignoring a major part of the product no one can use...

That some people can't use to their liking yet. That's a pretty important qualifier. You can't just assume that because it's teething, it is completely wasted and going nowhere.

 

11 minutes ago, RejZoR said:

Yeah, that part. "Says who? You?" No, the number of non-existent games you can run on it using actual ray tracing. The 99% is actually a very conservative statement, because the number of actual games that use it is nowhere near 1%. Like... NOWHERE NEAR.

You have a crystal ball, do you? You are talking as if you have evidence that no game ever will, and that the tech won't improve.

 

11 minutes ago, RejZoR said:

Do you also attack every review site and reviewer who in the end delivers a verdict of "too expensive for what it offers"? Because that's not an opinion but a factually based statement that applies to every rational person. And yeah, that also applies to cars. How many times have reviewers said "not worth it" when an updated version of a car had like 15HP more and a few "special" badges, but cost like 10k more? Sure, idiots will buy it, but that doesn't change the fact that it's an entirely irrational product. Why wouldn't or shouldn't one be allowed to express that? Just as people are allowed to buy such products, we're allowed to say they are irrational or make no sense...

If you want to use the car analogy, then what you are doing is comparing two cars, one with marginally more horsepower but completely new suspension and tire technology that, when the roads are resurfaced, will allow the car to go faster without needing said horsepower. But you are trying to claim that because the roads aren't ready right now, the new suspension is pointless and won't go anywhere.

 

The problem here is you are completely ignoring that the tech is being used and that developers are keen to make it work. Not only that, but it is being used in animation software, and the head of RenderMan is ecstatic about it. Did you also dismiss every new iteration of DX because no games made use of the features on launch? Did you also dismiss OpenGL, OpenCL and CUDA because when they were first introduced they also had teething issues and little to no support?

 

This is how tech evolves; it always has and always will. Unless you have a crystal ball, and ignoring the costs of developing and producing such tech (that many people are excited about), what makes you so sure this is a dud tech?



On 11/15/2018 at 12:25 AM, kiska3 said:

fps

I kinda dig that vehicle. I do kinda dig the Rat Rod look, but the older 50's style I do not like; the muscle car era is the best, but only with newer technology installed. I would love to drive that car, make it a beach cruiser or something. Battlefield V, I dig that car! If I was bored enough, I'd go looking to see what it was, but I am a bit busy now, waiting for things to cool down before getting back at it.


@mr moose, you brought up the car analogy, not me. As for the "crystal ball" for games supporting ray tracing: they announced what, 5 games at the press release of the RTX cards? Of which, after months, we have 1 actually playable, with only reflections as an effect, not even using actual RTX but DirectX DXR. Now factor in their ridiculous price, tiny boost in rasterization and apparent quality issues on reference models, and there won't be many people around actually owning them. Which means devs won't have an incentive to bother with it. Especially not with RTX tech specifically. Which means we won't see any real change for several more years. It's not a crystal ball, it's literally observation.

 

Not to mention I can just feel how some game devs will try to make ray-tracing effects without ray tracing. Rasterization is still used by billions of users; it's a hefty market, not some niche for enthusiasts. If they can brag about an engine that can make the same effects faster and on any card by "faking" them, it'll take even longer. Ray tracing isn't some magic bullet that solves all problems, so don't expect a sudden shift overnight. That would happen if entire scenes were ray traced and we were getting 30fps; in that case, I'd be absolutely amazed. And that would also actually solve almost all lighting, reflectivity and shadowing issues, and the need to have a system in place that deals with all that. But we don't have that, and won't for many years to come. Sure, one day we will have full scene ray tracing. And flying cars.


1 minute ago, RejZoR said:

@mr moose, you brought up the car analogy, not me.

But you didn't give it any thought, and did what you did with the 2080: ignore the inconvenient bits.

1 minute ago, RejZoR said:

As for the "crystal ball" for games supporting ray tracing: they announced what, 5 games at the press release of the RTX cards? Of which, after months, we have 1 actually playable, with only reflections as an effect, not even using actual RTX but DirectX DXR.

 

They announced 9 for RT and 25 for DLSS; what they have now is still way ahead of any other development uptake in this field. Even if those take until next year to release and fine-tune, they are still experiencing faster adoption with RTX than any other new GPU feature in history.

 

1 minute ago, RejZoR said:

Now factor in their ridiculous price,

But you're not factoring in the product; you're just ignoring half of it and trying to argue the price is silly. Maybe to you, but that doesn't make it so for everyone.

This forum seems to be confused about what makes a product relevant. It's not a static, definable condition, because it is different for everyone. There are people who will find the new 590 relevant and a good option for their upgrade desires, and you will find people who see the potential in the 2080 and go for that. Your specific position on RTX or the 1060 or 590 or Fury or whatever card is being discussed is not the defining factor of relevance and value.

1 minute ago, RejZoR said:

tiny boost in rasterization and apparent quality issues on reference models,

Good thing the quality issues still seem to be small and within usual failure rates, unless you have something that says they are not.

 

1 minute ago, RejZoR said:

there won't be many people around actually owning them.

Except they sold out on pre-order, and many people already do own them and want them for their next build.

1 minute ago, RejZoR said:

Which means devs won't have an incentive to bother with it.

They already are; why are you ignoring this? Game devs and animation software devs are excited about this.

 

1 minute ago, RejZoR said:

Especially not with RTX tech specifically. Which means we won't see any real change for several more years. It's not a crystal ball, it's literally observation.

Several more years? No need of a crystal ball? That's some insane confidence in your understanding of the industry.

1 minute ago, RejZoR said:

Not to mention I can just feel how some game devs will try to make ray-tracing effects without ray tracing.

Feel, eh? That's some serious foundation to base such positions on.

1 minute ago, RejZoR said:

Rasterization is still used by billions of users; it's a hefty market, not some niche for enthusiasts.

Because rasterization is the only option they have for lighting effects. 

 

1 minute ago, RejZoR said:

If they can brag about an engine that can make the same effects faster and on any card by "faking" them, it'll take even longer. Ray tracing isn't some magic bullet that solves all problems, so don't expect a sudden shift overnight.

But in terms of tech adoption, that is exactly what we are seeing: a sudden shift of devs jumping all over it.

1 minute ago, RejZoR said:

That would happen if entire scenes were ray traced and we were getting 30fps; in that case, I'd be absolutely amazed. And that would also actually solve almost all lighting, reflectivity and shadowing issues, and the need to have a system in place that deals with all that. But we don't have that, and won't for many years to come. Sure, one day we will have full scene ray tracing. And flying cars.

There's that crystal ball again? You know this as fact on what evidence? It's only just come out; how can you know how long it will take? You can't.

 

https://aithority.com/computer-games/why-today-is-an-exciting-moment-for-gamers-and-important-catalyst-for-ray-traced-games/

https://www.rockpapershotgun.com/2018/09/24/nvidia-geforce-rtx-2080-turing-confirmed-pc-games/

 

If you would bother reading anything about it, you would see that there are many more titles with RTX support in the immediate months surrounding its launch than there were for any other GPU tech in its first year.

 



8 hours ago, CarlBar said:

AFAIK the Titan V is based on a hybrid of the geforce and quadro setups and has a lot more raw compute chops than a 1080 or the like. As a result it's not surprising it did well.

There isn't much of a difference between the performance capabilities of a Quadro card and a GeForce card beyond actual missing hardware capabilities, which are not used in game rendering or ray tracing. The extra hardware features like FP64 cores are purely for compute tasks that require higher precision (ray tracing isn't that), and ECC is for long-term stability and has no performance benefit.

 

If this implementation was developed and modeled on a Titan V then it is no wonder performance is significantly worse on a 2080 Ti. The Titan V is spec-wise vastly superior in that it has a lot more CUDA cores and more Tensor cores, though no RT cores of course. Quadro cards and GeForce cards based on the same die with the same hardware configuration perform the same.

 

Titan V

CUDA (cores:TMUs:ROPs): 5120:320:96

Tensor: 640

RT: 0

SM: 80

 

2080 Ti

CUDA (cores:TMUs:ROPs): 4352:272:88

Tensor: 544

RT: 68

SM: 68

 

The difference between a Titan V and a 2080 Ti is much the same as 1080/2080 to 1080 Ti/2080 Ti. Personally I think the RT cores are either A) not all they're cracked up to be, or B) not being used properly, or not being used at all, due in part to A.

 

8 hours ago, CarlBar said:

but if the Rasterization side is having to handle the compute cost of denoising it suddenly makes a lot of sense.

Denoising would never be done on the CUDA cores; Tensor cores and the code to use them are well developed outside of the game development world, and all research on that right now is done using things like TensorFlow, which can utilize Tensor accelerators. Nvidia isn't the only hardware company looking at and developing Tensor accelerators either.

 

If the Tensor cores weren't being used for it you'd be seeing something more like 3-8 FPS, Tensor cores really are that much faster for image processing.

 

Given that the Titan V has 17.6% more Tensor cores than the 2080 Ti (640 vs 544), and 60 FPS is 17.6% more than 51, the 49 FPS at 1080p DXR Ultra falls right in line with that; it's my opinion that performance in Battlefield V is directly tied to the Tensor cores.
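That scaling claim is easy to check against the quoted figures (a sketch; the 60 FPS Titan V number is the demo figure mentioned earlier in the thread):

```python
# If BFV's DXR performance is bound by Tensor core throughput, the 2080 Ti
# should land near the Titan V demo figure scaled by the core-count ratio.
TITAN_V_TENSOR, TI_2080_TENSOR = 640, 544
TITAN_V_DEMO_FPS = 60  # demo figure quoted earlier in the thread

predicted = TITAN_V_DEMO_FPS * TI_2080_TENSOR / TITAN_V_TENSOR
print(f"Predicted 2080 Ti FPS: {predicted:.0f}")  # 51, vs ~49 measured
```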

 

P.S. The Titan V is a Tesla V100 card with graphics outputs; it uses the same die with SMs disabled.


2 hours ago, Humbug said:

Hypothetical question

 

If we are willing to take such a big hit as to make an RTX 2080 Ti into a 1080p 60fps GPU, then are there other methods by which we could improve graphical fidelity by a larger degree than ray tracing does?

 

What I am asking is: if ray tracing wasn't here, and you told the devs that the PC performance target was only 1080p 60fps on an RTX 2080 Ti, would they come up with something better looking than this?

That I would like to see. I suspect we'd need much bigger VRAM buffers to handle the much higher asset resolutions to achieve it, though. There's tons of low-res stuff in games still, with only critical assets getting any kind of high-end attention. If we did everything to the best we can, we'd have games much closer to the amazing tech demos, though a lot of those in the past ran on quad-GPU systems lol.


I can't even bother to respond to your quoting noodle of 3544325746554 quotes @mr moose.

 

The fact is, RTX cards are not all that much faster in classic rasterization, have next to no supported games (1x BFV), will not have supported games for a lot more time (1 game out of 9 announced; DLSS is just an inverted DSR, so I don't even count that as anything special), and they are very expensive. Bend your reality all you want, but they just aren't a good product for the price they are asking.


"just buy it"

 

"how much of you're life you want to look back and not have raytraced at 30fps"



8 hours ago, S w a t s o n said:

Of course it's much faster than a 1080, but the comparison is the 2080 Ti. I don't think double precision is used in that way, but I'm not sure about the denoiser. I would assume it's the same one DICE showed before.

 

It just makes no sense, as the Titan V only has Tensor cores and the demo was only using Tensor cores, while Turing has Tensor cores AND RT cores. They also stated they would be able to much more fully optimize and tweak it for increased ray tracing resolution AND performance, not one or the other. And yet none of that has materialized. I think something went wrong, or the guy talking to Digital Foundry was talking out his ass.

 

Edit: Right, just realized you suggested they aren't using the RT cores. While they did state they could get this DXR implementation to run on any hardware, they are basing it on AT LEAST the Tensor core method of RTX. They would have to tweak some stuff in the backend to properly support AMD. That is to say, it's not pure compute. The Tensor cores provided a large speedup, even though it's nothing compared to the RT cores if NVIDIA is to be believed. Brute force real-time ray tracing should not yet be possible.

 

I was suggesting either they aren't using the RT cores or they're not using the Tensor cores.

 

2 hours ago, leadeater said:

There isn't much of a difference between the performance capabilities of a Quadro card and a GeForce card beyond actual missing hardware capabilities, which are not used in game rendering or ray tracing. The extra hardware features like FP64 cores are purely for compute tasks that require higher precision (ray tracing isn't that), and ECC is for long-term stability and has no performance benefit.

 

If this implementation was developed and modeled on a Titan V then it is no wonder performance is significantly worse on a 2080 Ti. The Titan V is spec-wise vastly superior in that it has a lot more CUDA cores and more Tensor cores, though no RT cores of course. Quadro cards and GeForce cards based on the same die with the same hardware configuration perform the same.

 

Titan V

CUDA (cores:TMUs:ROPs): 5120:320:96

Tensor: 640

RT: 0

SM: 80

 

2080 Ti

CUDA (cores:TMUs:ROPs): 4352:272:88

Tensor: 544

RT: 68

SM: 68

 

The difference between a Titan V and a 2080 Ti is much the same as 1080/2080 to 1080 Ti/2080 Ti. Personally I think the RT cores are either A) not all they're cracked up to be, or B) not being used properly, or not being used at all, due in part to A.

 

Denoising would never be done on the CUDA cores; Tensor cores and the code to use them are well developed outside of the game development world, and all research on that right now is done using things like TensorFlow, which can utilize Tensor accelerators. Nvidia isn't the only hardware company looking at and developing Tensor accelerators either.

 

If the Tensor cores weren't being used for it you'd be seeing something more like 3-8 FPS, Tensor cores really are that much faster for image processing.

 

Given that the Titan V has 17.6% more Tensor cores than the 2080 Ti (640 vs 544), and 60 FPS is 17.6% more than 51, the 49 FPS at 1080p DXR Ultra falls right in line with that; it's my opinion that performance in Battlefield V is directly tied to the Tensor cores.

 

P.S. The Titan V is a Tesla V100 card with graphics outputs; it uses the same die with SMs disabled.

 

The Titan V's actual available processing power is only superior in double precision. It may have all that extra hardware, but it's obvious that the Turing architecture is more than compensating for that.
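For reference, the double precision gap is large because Volta runs FP64 at 1/2 its FP32 rate while Turing's GeForce parts run it at 1/32 (a sketch using the TFLOPS figures quoted in this thread):

```python
# FP64 throughput implied by the FP32 figures and each architecture's ratio.
TITAN_V_FP32 = 13.8    # TFLOPS, as quoted in this thread
TI_2080_FP32 = 11.75

print(f"Titan V FP64: {TITAN_V_FP32 / 2:.2f} TFLOPS")    # ~6.90 (Volta, 1:2)
print(f"2080 Ti FP64: {TI_2080_FP32 / 32:.2f} TFLOPS")   # ~0.37 (Turing, 1:32)
```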

 

Also, I'm not suggesting they're not using the Tensor cores as random speculation; it's flat out stated in the video that they weren't using the Tensor cores on the Titan V build. This probably says something amazing about their algorithm (to be fair, it sounds like passing 2 modified forms of AA over it several times each, which is totally within the capabilities of the Titan V; I believe it allows for up to 64x AA). But it would explain why things are chugging so hard on the 2080 Ti.

 

2 hours ago, leadeater said:

That I would like to see. I suspect we'd need much bigger VRAM buffers to handle the much higher asset resolutions to achieve it, though. There's tons of low-res stuff in games still, with only critical assets getting any kind of high-end attention. If we did everything to the best we can, we'd have games much closer to the amazing tech demos, though a lot of those in the past ran on quad-GPU systems lol.

 

Yeah, the thing he seems to be missing is that every time there's been a graphical upgrade, it's come about because of either new hardware + driver support or new software-side features (DX9, DX10, DX11, DX12, PhysX, etc.) that have enabled new ways of doing things. You don't get major improvements beyond a certain point without support for new features.

 

1 hour ago, RejZoR said:

I can't even bother to respond to your quoting noodle of 3544325746554 quotes @mr moose.

 

The fact is, RTX cards are not all that much faster in classic rasterization, have next to no supported games (1x BFV), will not have supported games for a lot more time (1 game out of 9 announced; DLSS is just an inverted DSR, so I don't even count that as anything special), and they are very expensive. Bend your reality all you want, but they just aren't a good product for the price they are asking.

 

So basically you're admitting you were wrong. OK.

 

 

Seriously, if you can't respond to the general points made and want to just go around spouting your opinion as fact, no one's going to take you remotely seriously, because we can all smell the giant pile of bull you're carrying around.

 

The fact is people are rushing out to buy the new cards, and (contrary to prior improvements) software developers are going nuts over the capability, game developers and otherwise. We're not looking at multiple games adopting it 2-3 years from now this time; we're looking at multiple games adopting it in the next few months.

 

Do you know how long it took for the first non-Microsoft-published AAA game (Microsoft obviously had all the info needed to start adding support way sooner than anyone else) to support DX12 on PC? 6 months. DX12 came out in July 2015; Rise of the Tomb Raider launched in January 2016.

 

Getting support for it within hours of DXR being launched is a huge deal.


9 minutes ago, CarlBar said:

The Titan V's actual available processing power is only superior in double precision. It may have all that extra hardware, but it's obvious that the Turing architecture is more than compensating for that.

Turing isn't faster though; the minor clock speed increase doesn't counter the Titan V's 17.6% advantage in CUDA cores, and that's before GPU Boost levels out the frequency difference anyway. The Titan V is 13.8 TFLOPS single precision and the 2080 Ti is 11.75, which funnily enough is a 17.45% difference.

 

Edit: The Titan V is a compute card, with compute-focused firmware and no game driver optimizations. It performs badly in games because it's not for games, but it's still a faster card than any GeForce 20 series card out right now.
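Those single-precision figures follow from the usual peak-FLOPS formula, 2 ops per CUDA core per clock (one fused multiply-add); a sketch, with ~1.35 GHz as an assumed sustained clock chosen to match the quoted numbers:

```python
# Peak FP32 TFLOPS = 2 ops (FMA) * CUDA cores * clock (GHz) / 1000.
def sp_tflops(cuda_cores: int, clock_ghz: float) -> float:
    return 2 * cuda_cores * clock_ghz / 1000

CLOCK_GHZ = 1.35  # assumption chosen to match the figures above
print(f"Titan V: {sp_tflops(5120, CLOCK_GHZ):.1f} TFLOPS")  # ~13.8
print(f"2080 Ti: {sp_tflops(4352, CLOCK_GHZ):.1f} TFLOPS")  # ~11.8
print(f"Core-count gap: {5120 / 4352 - 1:.1%}")             # ~17.6%
```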


2 hours ago, leadeater said:

That I would like to see. I suspect we'd need much bigger VRAM buffers to handle the much higher asset resolutions to achieve it, though. There's tons of low-res stuff in games still, with only critical assets getting any kind of high-end attention. If we did everything to the best we can, we'd have games much closer to the amazing tech demos, though a lot of those in the past ran on quad-GPU systems lol.

Just need 3D XPoint to get cheap, right?


 


@CarlBar

Dude, he quoted like 15 of my lines individually. If I quote that and respond to each, it'll be fucking 30 of them. Sorry, just no. Not bothering to respond to a huge-ass bag of noodles is not an admission of anything. I've responded to all his quotes in one single paragraph.


Bet I can make more quotes.

 

Spoiler
1 minute ago, RejZoR said:

Dude

 

1 minute ago, RejZoR said:

he

 

1 minute ago, RejZoR said:

quoted

 

1 minute ago, RejZoR said:

like

 

1 minute ago, RejZoR said:

15

 

1 minute ago, RejZoR said:

of

 

2 minutes ago, RejZoR said:

my

 

2 minutes ago, RejZoR said:

lines

 

2 minutes ago, RejZoR said:

individually

 

2 minutes ago, RejZoR said:

If

 

2 minutes ago, RejZoR said:

I

 

2 minutes ago, RejZoR said:

quote

 

2 minutes ago, RejZoR said:

that

 

2 minutes ago, RejZoR said:

and

 

2 minutes ago, RejZoR said:

respond

 

2 minutes ago, RejZoR said:

to

 

2 minutes ago, RejZoR said:

each

 

3 minutes ago, RejZoR said:

it'll

 

3 minutes ago, RejZoR said:

be

 

3 minutes ago, RejZoR said:

fucking

 

3 minutes ago, RejZoR said:

30

 

3 minutes ago, RejZoR said:

of

 

3 minutes ago, RejZoR said:

them

 

Ok that got boring quickly, I admit defeat where I stopped.


33 minutes ago, CarlBar said:

Also, I'm not suggesting they're not using the Tensor cores as random speculation; it's flat out stated in the video that they weren't using the Tensor cores on the Titan V build. This probably says something amazing about their algorithm (to be fair, it sounds like passing 2 modified forms of AA over it several times each, which is totally within the capabilities of the Titan V; I believe it allows for up to 64x AA). But it would explain why things are chugging so hard on the 2080 Ti.

I haven't watched the video and it's rather long, so without a time pointer I'll have to comment as is. I suspect that's in reference to not using Tensor cores for the ray tracing itself, but the actual denoising would be using the Tensor cores. At least all the past information and demos I've seen state using Tensor cores for that. You got a timestamp I can jump to?

