RTX GPUs should see a 35-45% improvement in current games over the GTX 10 series without DLSS and other RTX enhancements - Tom Petersen

D13H4RD
On 9/2/2018 at 11:10 AM, SolarNova said:

Card          Launch price             Adjusted for inflation
7800 GTX      $600                     $777
8800 GTX      $650 (Ultra $830)        $809 (Ultra $1,027)
GTX 280       $650                     $748
GTX 480       $500                     $578
GTX 580       $500                     $575
GTX 680       $500                     $549
GTX 780 Ti    $700                     $702
GTX 980 Ti    $650                     $686
GTX 1080 Ti   $700                     $723
RTX 2080 Ti   $1,200 (RTX 2080 $800)   n/a (2018 pricing)
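For anyone who wants to check the adjustment, it is just a ratio of consumer price indices. A minimal Python sketch, using approximate annual-average US CPI-U values and assuming a 2018 baseline (the exact index and years SolarNova used are assumptions):

```python
# Adjust a historical launch price into 2018 dollars via a CPI ratio.
# CPI values are approximate annual-average US CPI-U figures.
CPI = {2005: 195.3, 2006: 201.6, 2008: 215.3, 2010: 218.1,
       2012: 229.6, 2013: 233.0, 2015: 237.0, 2017: 245.1, 2018: 251.1}

def adjust(price, launch_year, target_year=2018):
    """Scale a nominal price by the ratio of CPI levels."""
    return price * CPI[target_year] / CPI[launch_year]

print(f"GTX 680, $500 in 2012 -> ${adjust(500, 2012):.0f} in 2018 dollars")
# Prints about $547, in line with the $549 figure in the table.
```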

 

And as you said yourself in your post, even if you may disagree, you are paying for a larger core with the new RTX cards. They are inherently more expensive, and have much lower yields than smaller cores. There's more complexity to each part of the core too. And guess what, Nvidia is going to want to make money back on their R&D for these cards. You don't have to like it, that doesn't matter. This is far from the first time they've released an expensive card and won't be the last time either. But you're still getting something for your money.



15 minutes ago, AlwaysFSX said:

Adjusted for inflation

 

And as you said yourself in your post, even if you may disagree, you are paying for a larger core with the new RTX cards. They are inherently more expensive, and have much lower yields than smaller cores. There's more complexity to each part of the core too. And guess what, Nvidia is going to want to make money back on their R&D for these cards. You don't have to like it, that doesn't matter. This is far from the first time they've released an expensive card and won't be the last time either. But you're still getting something for your money.

Copy cat....

 

lol totally joking


58 minutes ago, Dylanc1500 said:

Copy cat....

 

lol totally joking

I guess to add, you're also paying for three cores slapped together, which gets pricey.



Is this Nvidia admitting they cannot compete, or that the market is stagnant?

I mean, even AMD could release a card with bigger silicon on it, with higher power requirements (800 W ;) ) or with multiple cores (just like they have with the CPU division). But the cost would also scale up!

So is the massive cost reflective of Nvidia getting greedy, or of the actual cost going up for them?


3 hours ago, leadeater said:

I'm not saying don't include it, just compare it like all new AA algorithms/features of the past; that is what it is, after all. If it gives you the image-quality improvement you're after and has a lower performance requirement or frame-rate reduction than an alternative card or brand, then that is a reason to buy it (not a big one), but I wouldn't use that as a metric to say the card is more powerful or has more performance than another card or the last generation. It has a feature which is more efficient, but that doesn't make it more powerful.

That's really the underlying issue with large technology shifts: there are plenty of products now, and coming in the short term, that will not meet the frame-rate desires of many at the resolution they want to run. Running a game at a non-native screen resolution always looks much worse, after all.

DLSS isn't magic, and neither was any other AA before it, so I just think a bit of realism needs to be applied. We should definitely not start confusing resolutions and their performance with something that is not actually that resolution, no matter how effectively the same it is visually; from a performance and statistical comparison standpoint, they are different measurement points.

If they are visually the same, then yes, you can do a direct comparison. If you get higher FPS with the same visuals, then the method really doesn't matter at that point. Right now, nobody knows what DLSS can do and how good it is. If it ends up with the exact same visuals or better while increasing performance, then you bet people will use it. Nobody cares about how it is done, but rather about the end results.


2 hours ago, TechyBen said:

Is this Nvidia admitting they cannot compete, or that the market is stagnant?

I mean, even AMD could release a card with bigger silicon on it, with higher power requirements (800 W ;) ) or with multiple cores (just like they have with the CPU division). But the cost would also scale up!

So is the massive cost reflective of Nvidia getting greedy, or of the actual cost going up for them?

Seeing as the die is twice the size of Pascal's and it uses GDDR6, which is expensive, I would say that yeah, their costs have gone up for sure.


5 hours ago, Brooksie359 said:

If they are visually the same, then yes, you can do a direct comparison. If you get higher FPS with the same visuals, then the method really doesn't matter at that point. Right now, nobody knows what DLSS can do and how good it is. If it ends up with the exact same visuals or better while increasing performance, then you bet people will use it. Nobody cares about how it is done, but rather about the end results.

Correct, it doesn't matter, unless you are trying to do direct performance comparisons; then it does. SSAA, MSAA, FXAA, CSAA and TSAA are all anti-aliasing methods, like DLSS is, and none of those get tested as "performance equivalent" to higher resolutions; DLSS shouldn't start now.

 

1080p DLSS at 120 FPS vs 1440p at 110 FPS does not mean the former card has better performance; it's not the same task/workload. There is no way anyone should be using DLSS to say that these new RTX cards are more powerful than the GTX 10 series, because, like all AA methods, if the GPU over time becomes not powerful enough to render the game, or you raise the resolution and it does not have the power to render the game, then no matter how good at AA the card is, or how much hardware-accelerated AA you have, it's not going to be able to do the task. The RTX 20 series and GTX 10 series could have exactly the same shader performance; DLSS doesn't make the RTX 20 series more powerful, it gives it new hardware-accelerated AA. It's a feature, not a performance increase.

 

When a new series of cards comes out that has DLSS, then you can compare DLSS-to-DLSS performance.


7 hours ago, TechyBen said:

Is this Nvidia admitting they cannot compete, or that the market is stagnant?

I mean, even AMD could release a card with bigger silicon on it, with higher power requirements (800 W ;) ) or with multiple cores (just like they have with the CPU division). But the cost would also scale up!

So is the massive cost reflective of Nvidia getting greedy, or of the actual cost going up for them?

There are parts of the new GPU that are objectively going to cost more, like GDDR6 and a die twice the size, and that's before we consider R&D. But to rephrase your thoughts in regard to the market: if this is the result of both a lack of competition and technological stagnation (the ability to build better), then it clearly shows Nvidia is not sandbagging or sitting on tech. I mean, they didn't even have to release anything, let alone something that people would complain about. That isn't greed; not releasing anything and milking the 10 series would be greed.


14 minutes ago, leadeater said:

Correct, it doesn't matter, unless you are trying to do direct performance comparisons; then it does. SSAA, MSAA, FXAA, CSAA and TSAA are all anti-aliasing methods, like DLSS is, and none of those get tested as "performance equivalent" to higher resolutions; DLSS shouldn't start now.

 

1080p DLSS at 120 FPS vs 1440p at 110 FPS does not mean the former card has better performance; it's not the same task/workload. There is no way anyone should be using DLSS to say that these new RTX cards are more powerful than the GTX 10 series, because, like all AA methods, if the GPU over time becomes not powerful enough to render the game, or you raise the resolution and it does not have the power to render the game, then no matter how good at AA the card is, or how much hardware-accelerated AA you have, it's not going to be able to do the task. The RTX 20 series and GTX 10 series could have exactly the same shader performance; DLSS doesn't make the RTX 20 series more powerful, it gives it new hardware-accelerated AA. It's a feature, not a performance increase.

 

When a new series of cards comes out that has DLSS, then you can compare DLSS-to-DLSS performance.

That's like saying if a new card can run Vulkan and the old card can only run OpenGL, then you can't compare the Vulkan performance of the new card to the OpenGL performance of the old card. The fact is you can, because at the end of the day all that matters is the end result.


1 hour ago, Brooksie359 said:

That's like saying if a new card can run Vulkan and the old card can only run OpenGL, then you can't compare the Vulkan performance of the new card to the OpenGL performance of the old card. The fact is you can, because at the end of the day all that matters is the end result.

No, you can't, because if you're trying to say which card is more powerful, then that test cannot be used; you can use it to compare the performance difference of OpenGL to Vulkan, but you can't use it to say which card is actually more powerful.

 

I'm not saying don't compare; I'm saying you can't use it to say which card is more powerful, because you are not measuring the same thing. 1 L of water and 1 L of Coke are both still 1 L; one might taste better and you might prefer to drink the Coke, but both are 1 L. You can't use taste to measure volume, and you can't use 1080p DLSS to measure 1440p performance.


8 hours ago, mr moose said:

That isn't greed; not releasing anything and milking the 10 series would be greed.

Or releasing a "new architecture" that's the same as the previous generation with only incremental gains and no big headline feature. 



10 hours ago, leadeater said:

No, you can't, because if you're trying to say which card is more powerful, then that test cannot be used; you can use it to compare the performance difference of OpenGL to Vulkan, but you can't use it to say which card is actually more powerful.

 

I'm not saying don't compare; I'm saying you can't use it to say which card is more powerful, because you are not measuring the same thing. 1 L of water and 1 L of Coke are both still 1 L; one might taste better and you might prefer to drink the Coke, but both are 1 L. You can't use taste to measure volume, and you can't use 1080p DLSS to measure 1440p performance.

You can compare features, not necessarily performance.


3 hours ago, D13H4RD2L1V3 said:

Or releasing a "new architecture" that's the same as the previous generation with only incremental gains and no big headline feature. 

I'm not sure I would call that greed though; I mean, AMD did that for several generations just trying to keep up.


10 hours ago, leadeater said:

No, you can't, because if you're trying to say which card is more powerful, then that test cannot be used; you can use it to compare the performance difference of OpenGL to Vulkan, but you can't use it to say which card is actually more powerful.

 

I'm not saying don't compare; I'm saying you can't use it to say which card is more powerful, because you are not measuring the same thing. 1 L of water and 1 L of Coke are both still 1 L; one might taste better and you might prefer to drink the Coke, but both are 1 L. You can't use taste to measure volume, and you can't use 1080p DLSS to measure 1440p performance.

I get your point. There is no 1:1 comparison to be had because both products are different in how they get to the result.

So no, we can't compare performance if we insist on that point.

 

I would argue, though, that insisting on that 1:1 comparison is bad for a review.

Sure, have that 1:1 as an added data point, but it does not show the full picture. It only shows a very limited and possibly useless value for the consumer, for the sake of following semantics over realism.

 

 

In your example, you insist on using different resolutions (I am a little unsure why, actually), so how about we use the same resolution?

1440p DLSS vs 1440p [insert any kind of AA that is considered closest in quality]. Is it a 100% fit? Very unlikely. There won't be a 100% equivalent-quality AA. So just use one that is objectively WORSE than DLSS. No one can be mad about inflated numbers then, right? If DLSS does 1440p with more frames than an objectively worse 1440p AA, while looking better... THAT is a useful metric for the consumer.

 

And before we get the people saying DLSS may not be supported everywhere: sure, it won't be. But neither is just about every other form of AA. I own plenty of games that have one option: on or off. Or I get FXAA or off. Or MSAA or off. We have been doing benchmarks with AA turned on forever now, even though it is not representative of other games that don't have it.

 

The bottom line here is this:

As a consumer and potential buyer, I kinda expect a review to show me all the relevant data.

DLSS seems to be way more relevant than any other form of AA ever was. Unlike other forms of AA, it is like free graphical quality.

And that has to make it into a review in one form or another. I don't care if it can be measured apples to apples. I also want a flat-out 1:1 comparison, but I won't upvote or share a review that lacks critical information about GRAPHICAL IMPROVEMENTS in a GPU review. Just like price, warranty, packaging, service, etc. are part of the product, so should graphical quality be. And I really hope we don't have to argue about that when it comes to hardware meant to accelerate graphics.

 


9 minutes ago, mr moose said:

I'm not sure I would call that greed though; I mean, AMD did that for several generations just trying to keep up.

Desperation? Clinging to life? Throwing everything but the kitchen sink? Throwing the book at the situation? Using a bucket to keep the ship from sinking? Pretending to be Jack?

 

I'm out of terrible references for AMD, and need caffeine. Boy, I am going to get so much crap.


2 hours ago, mr moose said:

I'm not sure I would call that greed though; I mean, AMD did that for several generations just trying to keep up.

Probably a bit of that, and just releasing stuff to satisfy the market.



10 hours ago, Tech Enthusiast said:

I get your point. There is no 1:1 comparison to be had because both products are different in how they get to the result.

So no, we can't compare performance if we insist on that point.

Of course we can compare performance: use a standard baseline like we do now with all reviews, which means no AA of any kind, then start adding those on.

 

Benchmarking methods should not change because of DLSS, that is what I'm saying. Everyone can make all the subjective comparisons they like and evaluate value based on that, but you'll destroy any ability to do performance comparisons if you dirty the data set with data points that are not the same; at that point all the data is useless, and so would be all performance comparisons.

 

I'm not sure why so many people are finding this hard; at no point did I say don't compare. I'm saying you can't use DLSS to give any definitive indication of the GPU being more powerful than the last one. DLSS is a post-render process, that's the critical point: POST render. If you don't have the power to render the frame, DLSS won't allow it to happen; that's impossible. You might be willing to drop resolution and try to make up for it with DLSS, but that does not make the GPU actually more powerful.
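To make the post-render point concrete, here is a toy cost model in Python. It is purely illustrative, not Nvidia's actual pipeline, and the per-pixel costs are made-up numbers:

```python
# Toy model: shader cost is set by the internal render resolution;
# DLSS is an extra post-render step applied to the finished frame.
SHADER_COST_PER_PIXEL = 1.0       # arbitrary units, made up
DLSS_COST_PER_OUTPUT_PIXEL = 0.1  # tensor-core work after rendering, made up

def frame_cost(render_res, output_res=None, use_dlss=False):
    rw, rh = render_res
    cost = rw * rh * SHADER_COST_PER_PIXEL            # the render itself
    if use_dlss:
        ow, oh = output_res
        cost += ow * oh * DLSS_COST_PER_OUTPUT_PIXEL  # post-render upscale
    return cost

native = frame_cost((2560, 1440))                             # true 1440p
dlss = frame_cost((1920, 1080), (2560, 1440), use_dlss=True)  # 1080p + DLSS
print(f"native 1440p: {native:,.0f}  1080p+DLSS: {dlss:,.0f}")
# The DLSS frame is cheaper only because the GPU rendered fewer pixels;
# if the GPU cannot render the frame in the first place, DLSS cannot help.
```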


13 minutes ago, leadeater said:

Of course we can compare performance: use a standard baseline like we do now with all reviews, which means no AA of any kind, then start adding those on.

 

Benchmarking methods should not change because of DLSS, that is what I'm saying. Everyone can make all the subjective comparisons they like and evaluate value based on that, but you'll destroy any ability to do performance comparisons if you dirty the data set with data points that are not the same; at that point all the data is useless, and so would be all performance comparisons.

 

I'm not sure why so many people are finding this hard; at no point did I say don't compare. I'm saying you can't use DLSS to give any definitive indication of the GPU being more powerful than the last one. DLSS is a post-render process, that's the critical point: POST render. If you don't have the power to render the frame, DLSS won't allow it to happen; that's impossible. You might be willing to drop resolution and try to make up for it with DLSS, but that does not make the GPU actually more powerful.

It would make it more powerful at AA, though. You are talking about rendering performance, not AA performance.


Meanwhile, there are probably still people going "But can it mine?" as they hope to still make money from Bitcoin.


1 hour ago, leadeater said:

that does not make the GPU actually more powerful.

 

But it actually kinda does; it is just extra performance limited to certain tasks.

It just does not add to the baseline performance (rendering), but to the total performance (rendering + AA/quality).

 

So consider this example: 

If an older GPU can do 60 FPS on setting X, it won't be able to do 60 FPS on setting X + AA.

If RTX can do 60 FPS on setting X, it will be able to do 60 FPS on setting X + DLSS.

 

This is straight out more performance, just restricted to a certain task.

 

I understand you want to only and strictly compare performance that is available always and at any time.

 

So let me try a second example:

GPU one has 1 GB of VRAM and can do 60 FPS in game X.

GPU two has 8 GB of VRAM and can do 60 FPS in game X.

Their performance is equal, right? Until we use a game Y that needs more VRAM.

GPU one would start to struggle and perform much worse than GPU two.

 

How would a review go in this example? Well, we would likely not only use game X but also add game Y to the benchmarks.

So we would have the 1:1 comparison showing that both have the same raw power, but also a second comparison that showcases why GPU two is the better buy anyway.

 

The same should apply to tensor cores. They are, as far as reviews are concerned, not much different from having more VRAM.

Both only come into play when they are actually being utilized. So we would not have to change our current benchmarking; we would have to add to it.

 

And, like you said, this added performance is measurable on top of the baseline without any form of AA. The only difference here is the issue of deciding which form of AA is closest in quality, in order to make a fair comparison of how much the tensor cores add via DLSS. But it is indeed added performance that you simply would not have if the tensor cores were gone.

 

Basically, I don't look at DLSS as a means of gaining performance, but I do look at tensor cores as a means to get more performance as soon as they are being used. In this case it is DLSS, but it may be other stuff as well. Outsourcing AI tasks for future strategy games comes to mind as a possibility.


57 minutes ago, Tech Enthusiast said:

If an older GPU can do 60 FPS on setting X, it won't be able to do 60 FPS on setting X + AA.

If RTX can do 60 FPS on setting X, it will be able to do 60 FPS on setting X + DLSS.

 

This is straight out more performance, just restricted to a certain task.

 

I understand you want to only and strictly compare performance that is available always and at any time.

This I agree with. What I'm saying is: do not put DLSS performance figures in with higher-resolution data sets. It's another data set you can draw on the same graph and compare; just don't mix up the data, because it is not the same variable set.

 

No matter what, 1080p DLSS is not 1440p or anything else; collect and display the data that way and make sure it's not confused with actual higher resolutions. The output resolution to the monitor and the resulting image might be that higher resolution (if that's how it works), but it was still rendered at 1080p and then DLSS was applied.

 

This is no different from CPUs and things like AVX vs non-AVX: we don't use AVX performance figures against non-AVX workloads to show the CPU is more powerful, but we do compare AVX against AVX, and different CPUs have different AVX performance. Here we only have one DLSS architecture, so we can't directly compare DLSS performance against another DLSS product, but we can compare how it improves upon a baseline or another instruction set (AA in this case).
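To make that concrete, here is a minimal sketch of how a reviewer could keep DLSS results as their own data series, in the spirit of the AVX analogy. The structure, field names, and FPS numbers are hypothetical, not any benchmark suite's real format:

```python
# Group results by (resolution, render mode) so "1080p DLSS" plots as its
# own series and is never ranked against "1440p native" numbers by accident.
from collections import defaultdict

results = [  # illustrative numbers only
    {"gpu": "RTX 2080", "res": "1440p", "mode": "native", "fps": 110},
    {"gpu": "RTX 2080", "res": "1080p", "mode": "DLSS",   "fps": 120},
    {"gpu": "GTX 1080", "res": "1440p", "mode": "native", "fps": 95},
]

series = defaultdict(list)
for r in results:
    series[(r["res"], r["mode"])].append((r["gpu"], r["fps"]))

for key, points in sorted(series.items()):
    print(key, points)  # each (res, mode) pair is a separate series
```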

