
RTX GPUs should see a 35% - 45% improvement in current games over the GTX 10 series without DLSS and other RTX enhancements - Tom Petersen

D13H4RD

More info on the performance delta between the new RTX 20 GPUs and the GTX 10 series.

 

NVIDIA's Tom Petersen, in an interview with HotHardware, revealed a small snippet about the performance improvements over the GTX 10 series in games that do not make significant use of RTX enhancements. 

Quote

Without DLSS, current games saw a performance gain of around 50% over Pascal GPUs, and now Petersen confirms that “if you are on high-resolution and not CPU limited,” there should be a 35-45% performance boost. 

Source: https://www.notebookcheck.net/Nvidia-official-says-the-RTX-20xx-cards-are-at-best-45-faster-than-the-Pascal-counterparts.328192.0.html
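
Just to put the quoted range into concrete numbers, here's a quick back-of-envelope sketch (the baseline frame rates below are made up purely for illustration, not benchmark results):

```python
# What a 35-45% uplift over Pascal would mean in practice.
# The baseline frame rates are hypothetical, chosen only to illustrate the math.
for pascal_fps in (45, 60, 90):
    low, high = pascal_fps * 1.35, pascal_fps * 1.45
    print(f"{pascal_fps} fps on Pascal -> roughly {low:.0f}-{high:.0f} fps on Turing")
```

So a game running at 60 fps on a Pascal card would land somewhere around 81-87 fps on the equivalent RTX card, if Petersen's figure holds up.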

 

Interestingly, Petersen admitted that NVIDIA's performance graph could have been made clearer. 

Quote

First of all, he admits that Nvidia “could have done a better job” with the official presentation in order to better emphasize the performance gains over the GTX 1000-series. The previous charts released by Nvidia highlighted performance improvements of more than 100% in some games, provided they also support DLSS, which requires specific coding, so current games may not get patches to introduce this feature. 

And if you like overclocking, there's some rosy news, as these cards are apparently designed to overclock past the 2100MHz mark with Founders Edition and custom cooling solutions. 

Quote

Last, but not least, Petersen mentioned that cards with Founders’ Edition or custom cooling solutions will be able to boost their cores over 2.1 GHz, and Nvidia really wanted the new GPUs to be good overclockers.

Personally, 35% - 45% is about the generally expected generational gain, but given the price premium over the GTX 10 series, it really seems that NVIDIA is banking on DLSS and Ray Tracing to justify it. Unless you're an early adopter or you're a certain Avram Piltch of Tom's Hardware (who apparently can't live without Ray Tracing for some reason), you'd probably be better off waiting to see how it evolves, especially if you already own an upper-end GTX 10 GPU. 

 

Watch the interview: https://hothardware.com/news/geforce-rtx-turing-nvidia-tom-petersen

The Workhorse (AMD-powered custom desktop)

CPU: AMD Ryzen 7 3700X | GPU: MSI X Trio GeForce RTX 2070S | RAM: XPG Spectrix D60G 32GB DDR4-3200 | Storage: 512GB XPG SX8200P + 2TB 7200RPM Seagate Barracuda Compute | OS: Microsoft Windows 10 Pro

 

The Portable Workstation (Apple MacBook Pro 16" 2021)

SoC: Apple M1 Max (8+2 core CPU w/ 32-core GPU) | RAM: 32GB unified LPDDR5 | Storage: 1TB PCIe Gen4 SSD | OS: macOS Monterey

 

The Communicator (Apple iPhone 13 Pro)

SoC: Apple A15 Bionic | RAM: 6GB LPDDR4X | Storage: 128GB internal w/ NVMe controller | Display: 6.1" 2532x1170 "Super Retina XDR" OLED with VRR at up to 120Hz | OS: iOS 15.1


Well... Higher resolutions benefit from the extra bandwidth GDDR6 provides. Same thing with HBM. I'm more interested in how it does at 1440p and 1080p (since most monitors are in that range), not to mention in a variety of games. 


Just now, GoldenLag said:

Well... Higher resolutions benefit from the extra bandwidth GDDR6 provides. Same thing with HBM. I'm more interested in how it does at 1440p and 1080p (since most monitors are in that range), not to mention in a variety of games. 

We'll probably have to wait for more detailed benchmarks. 

 

So far, the picture I'm getting is an extremely significant improvement with Ray Tracing and DLSS (which are the RTX's main features) and better performance at higher resolutions due to the higher memory bandwidth amongst other things (though to a much smaller degree by comparison). 



Quote

if you are on high-resolution and not CPU limited

The CPU-limited point is good, though is there any game that's CPU-limited at high resolution? However, it seems we're back to the early predictions that they've solved some bottlenecks rather than increased per-CUDA-core performance much. So, between clocks, memory bandwidth, and more CUDA cores, 25-30% seems like a reasonable uplift to expect at the same core counts.
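
For context, here's a rough back-of-envelope sketch of what the spec sheets alone imply. The core counts, reference boost clocks, and bandwidth figures below are the published numbers as best I recall them, so treat them as approximate:

```python
# Naive uplift estimate from raw shader throughput (cores x boost clock) and
# memory bandwidth, ignoring any architectural (per-core) improvements.
# Spec figures are the published reference numbers as I remember them.
specs = {
    "GTX 1080":    dict(cores=2560, boost_mhz=1733, bw_gb_s=320),
    "RTX 2080":    dict(cores=2944, boost_mhz=1710, bw_gb_s=448),
    "GTX 1080 Ti": dict(cores=3584, boost_mhz=1582, bw_gb_s=484),
    "RTX 2080 Ti": dict(cores=4352, boost_mhz=1545, bw_gb_s=616),
}

def uplift(new, old):
    n, o = specs[new], specs[old]
    compute = (n["cores"] * n["boost_mhz"]) / (o["cores"] * o["boost_mhz"]) - 1
    bandwidth = n["bw_gb_s"] / o["bw_gb_s"] - 1
    print(f"{new} vs {old}: {compute:+.0%} raw compute, {bandwidth:+.0%} bandwidth")

uplift("RTX 2080", "GTX 1080")        # roughly +13% compute, +40% bandwidth
uplift("RTX 2080 Ti", "GTX 1080 Ti")  # roughly +19% compute, +27% bandwidth
```

Raw shader throughput alone only buys something like 13-19%, so the rest of any 35-45% gain would have to come from per-core improvements and the extra bandwidth actually being usable.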

 

Nvidia is looking to sell 2080s to people with 4K displays. That seems like the gist for the first batch of cards, but they want 1080 Ti money for it.


4 minutes ago, GoldenLag said:

Well... Higher resolutions benefit from the extra bandwidth GDDR6 provides. Same thing with HBM. I'm more interested in how it does at 1440p and 1080p (since most monitors are in that range), not to mention in a variety of games. 

 

2 minutes ago, D13H4RD2L1V3 said:

We'll probably have to wait for more detailed benchmarks. 

 

So far, the picture I'm getting is an extremely significant improvement with Ray Tracing and DLSS (which are the RTX's main features) and better performance at higher resolutions due to the higher memory bandwidth amongst other things (though to a much smaller degree by comparison). 

1080p is going to be CPU limited, especially in DX11, which is incredibly sensitive to all of the little I/O subsystems in modern computers. Beyond per-CUDA-core improvements, we can probably predict most of the performance already, given the Titan V exists. 


Hmm, well, this claim means that they're possibly going up 2 steps in performance class (1080 Ti = 2070). If that's the case, I might be grabbing a 2080 and a 4K monitor, but I'll believe it when I see it.

-------

Current Rig

-------


2 minutes ago, Misanthrope said:

Hmm, well, this claim means that they're possibly going up 2 steps in performance class (1080 Ti = 2070). If that's the case, I might be grabbing a 2080 and a 4K monitor, but I'll believe it when I see it.

That's usually been the case. 

 

Pascal's improvements, for instance, saw the GTX 1060 6GB being roughly equivalent to a 980 and the 1070 being roughly equivalent to a 980 Ti.



3 minutes ago, Taf the Ghost said:

 

1080p is going to be CPU limited, especially in DX11, which is incredibly sensitive to all of the little I/O subsystems in modern computers. Beyond per-CUDA-core improvements, we can probably predict most of the performance already, given the Titan V exists. 

Very true, unless Crysis 4 comes out.

 

1440p is more or less the new benchmark resolution for high-end cards.


2 minutes ago, Misanthrope said:

Hmm, well, this claim means that they're possibly going up 2 steps in performance class (1080 Ti = 2070). If that's the case, I might be grabbing a 2080 and a 4K monitor, but I'll believe it when I see it.

CUDA core counts paint a different story than doubled performance.

 

Sure, bandwidth helps, but it isn't magic. 


1 minute ago, GoldenLag said:

CUDA core counts paint a different story than doubled performance.

 

Sure, bandwidth helps, but it isn't magic. 

Aye, that's why I am overall skeptical of these claims and the fact that they're coming this late and still unsubstantiated. But we'll see.



1 minute ago, GoldenLag said:

Very true, unless Crysis 4 comes out.

 

1440p is more or less the new benchmark resolution for high-end cards.

Agreed. 1080p, on top-end cards, is getting so fast that it's less about the GPU and far more about I/O subsystems. (This has also caused a lot of the drama with benchmarking, as people who tune their system really well can find a lot more performance than flat stock testing.) 1440p can leverage the entire card enough that you aren't running into weird timing issues with the system you're running on.


7 minutes ago, Misanthrope said:

Hmm, well, this claim means that they're possibly going up 2 steps in performance class (1080 Ti = 2070). If that's the case, I might be grabbing a 2080 and a 4K monitor, but I'll believe it when I see it.

2080 should perform well at 4K because Nvidia has removed the bottlenecks that were getting in the way. That's the one big improvement for this generation that should be expected.


51 minutes ago, D13H4RD2L1V3 said:

And if you like overclocking, there's some rosy news, as these cards are apparently designed to overclock past the 2100MHz mark with Founders Edition and custom cooling solutions. 

 

Fancy.

I wonder why they leave so much room for OCing.

AMD pushed their cards almost to the limit. If NVIDIA leaves so much room, would that not just mean plenty of "free" performance that they can't claim to have for marketing purposes?

If they can reach 2100 anyway, why not ship them at that and have fun with awesome-looking marketing slides? After all, the haters will compare OCed-to-the-brim 1080 Tis vs. stock 2080 Tis anyway... not quite sure why they are doing this.

51 minutes ago, D13H4RD2L1V3 said:

Personally, 35% - 45% is about the generally expected generational gain, but given the price premium over the GTX 10 series, it really seems that NVIDIA is banking on DLSS and Ray Tracing to justify it

 

For some people, that easily justifies the premium.

 

* Better graphics, finally after a decade.

* Tensor cores for machine learning? Hell yeah. Home office? check. Games with AI players? check. (seriously, something no one seems to talk about. Better AI in games would be amazing and exactly what these cores do)

* DLSS is just a different form of AA, but one that is a HELL of a lot faster and prettier.

 

Apart from those points, I can't wait to see topics about:

"That noob only saw me coming because he has a cheater RTX card. PTW hardware!1!11!"

"Why do I need an RTX Card to get a challenging AI opponent when playing [insert 4x game here]?!"

 

It is, however, perfectly fine to expect that nothing of this goodness will ever happen. But we are talking about pretty significant gains here, so I personally am willing to give them the benefit of the doubt. I can't imagine these kinds of features not catching on. If it is possible, why would any dev or player NOT want it? This is not something like HairWorks, which most people don't care about and which has zero impact on gameplay. RTX actually has an impact on gameplay, and a huge one at that.


49 minutes ago, GoldenLag said:

Very true, unless Crysis 4 comes out.

 

1440p is more or less the new benchmark resolution for high-end cards.

 

5 minutes ago, VegetableStu said:

good, good. better reference point for comparisons, especially for comparing with games that won't get RTX or DLSS anytime soon.

DLSS is actually Nvidia "encouraging" developers to use Compute hardware. This has been doable for a while, but without Async Compute on Nvidia products it wasn't adopted. Now that it is, this is actually going to be interesting.


I think they want to brainwash us about how awesome RTX is and all those mighty 35-45% improvements. Which games, and do they have RT tech?

I'll believe it when there are independent benchmarks, not like NVIDIA is laying big bags of cash on the table to make them only say the good things.

DAC/AMPs:

Klipsch Heritage Headphone Amplifier

Headphones: Klipsch Heritage HP-3 Walnut, Meze 109 Pro, Beyerdynamic Amiron Home, Amiron Wireless Copper, Tygr 300R, DT880 600ohm Manufaktur, T90, Fidelio X2HR

CPU: Intel 4770, GPU: Asus RTX3080 TUF Gaming OC, Mobo: MSI Z87-G45, RAM: DDR3 16GB G.Skill, PC Case: Fractal Design R4 Black non-iglass, Monitor: BenQ GW2280


10 minutes ago, VegetableStu said:

I feel like this should be a separate review on its own though ._. just for this transitionary period.

definitely agree on improvements to be made with DLSS (although I need to see sample videos for comparison to decide. I can tolerate noise, but "draw misses" and random mooshy areas I'm not a fan of)

That is a perfectly valid and important point.

 

We should not and actually can't compare old cards and RTX. This would be unfair and incomplete.

I am all for a straight out 1:1 comparison, without any RTX magic. But we do need a second part in any review that shows what RTX adds on top of the 1:1 comparison.

 

Just looking at DLSS, it could be a 50% performance jump if you use AA at all. Zero if you don't, I guess, but who honestly doesn't use any form of denoising like AA or upsampling? Either way, just ignoring this feature would be unfair to the tech behind RTX AND give an incomplete picture for the customer. 


Well, that's great, but if you're charging 35% more for the same tier of card, you haven't really made any advancements. This is the early adopter tax for RT, and it'll be cheaper in a couple of generations. AMD needs to pull a rabbit out of the hat, though. 

That's an F in the profile pic

 

 


36 minutes ago, Rattenmann said:

It is, however, perfectly fine to expect that nothing of this goodness will ever happen. 

Well, we all have to start from somewhere, right? 



2 minutes ago, D13H4RD2L1V3 said:

Well, we all have to start from somewhere, right? 

I am not suggesting that I personally expect it not to happen. I tried to make that clear. 

I don't see why anyone (dev and consumer) would not want to have all the new features. After all, they are quite frankly amazing.

 

On the other hand, I am quite aware of the early adopter tax here. That is also to be expected.

I just don't think it is as bad as some people make it sound. 

 

Not gonna push people to blindly buy based on that, though.

While I am at least 90% certain RTX will be the standard sooner rather than later, people who don't care about better graphics (most shooter players only care about FPS and run on low anyway) or better AI (basically 4X games, Civ-like games, etc.) should not be encouraged to spend the premium for features they won't use anyway.

 

Basically, we are at an amazing point in tech.

It saddens me that price is the big topic and not the new tech and possibilities. But that is kinda to be expected,... 

We just need reviews to show the full picture ON TOP of the flat-out 1:1 comparison. I guess that is also the reason Nvidia wants to control the first week of reviews: not to censor benchmarks, but to make sure they also show the stuff RTX is about. 

Kinda like comparing a hybrid car to a gas car and just not talking about any of the electrical stuff at all. That would not be a complete review, nor a good service for the viewers.


46 minutes ago, Rattenmann said:

Basically, we are at an amazing point in tech.

It saddens me that price is the big topic and not the new tech and possibilities. But that is kinda to be expected,...

Don't get me wrong. I love new tech, but people also tend to focus on price.

 

If it's anything to go by, when ray tracing and such becomes more and more mainstream, it can only get better, and I'm watching with keen eyes



2 hours ago, Misanthrope said:

Hmm, well, this claim means that they're possibly going up 2 steps in performance class (1080 Ti = 2070). If that's the case, I might be grabbing a 2080 and a 4K monitor, but I'll believe it when I see it.

That's nothing special; that's been every generation this decade. Except now the 70-series card is barely cheaper than the 80 Ti whose performance it's replacing.


1 hour ago, VegetableStu said:

I feel like this should be a separate review on its own though ._. just for this transitionary period.

definitely agree on improvements to be made with DLSS (although I need to see sample videos for comparison to decide. I can tolerate noise, but "draw misses" and random mooshy areas I'm not a fan of)

It'll be nice to finally see, as it means both major companies will be able to leverage the GPGPU hardware in their products in interesting ways. (There's some hilariousness in the fact that Nvidia is actually moving massively in the GCN direction, minus GCN's original design limitations.)


1 hour ago, SteveGrabowski0 said:

That's nothing special; that's been every generation this decade. Except now the 70-series card is barely cheaper than the 80 Ti whose performance it's replacing.

Well, if AMD were capable of pulling that off and improving driver support at launch, then they would be far better off rather than basically done.

 

So yes, it is common, but still hard to pull off in my opinion, so I wouldn't be surprised if the gains were smaller this generation, especially because who else is going to compete now? AMD probably won't even try high-end GPU products for a while.



6 hours ago, Rattenmann said:

I wonder why they leave so much room for OCing.

AMD pushed their cards almost to the limit. If NVIDIA leaves so much room, would that not just mean plenty of "free" performance that they can't claim to have for marketing purposes?

If they can reach 2100 anyway, why not ship them at that and have fun with awesome-looking marketing slides? After all, the haters will compare OCed-to-the-brim 1080 Tis vs. stock 2080 Tis anyway... not quite sure why they are doing this.

For some people, that easily justifies the premium.

 

It's because of power consumption. The 2080 is already closer in TDP to past x80 Ti cards than it is to x80 ones. Raising the clock speeds would be good for marketing, but then they would have to publish TDPs significantly higher than the current ones, which are already on the high side. That would be bad for marketing.

 

Having them clocked to the wall at stock would make them look worse for power consumption; instead, they can advertise more overclocking headroom and leave the choice between increased clocks and lower power usage to the user.
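
To put a very rough number on that: dynamic power scales roughly with frequency × voltage², so shipping the cards at their overclocked ceiling would show up directly in the TDP figure. A minimal sketch, with made-up clock and voltage values purely for illustration:

```python
# Rough CMOS approximation: dynamic power ~ frequency * voltage^2.
# The clocks and voltages below are hypothetical, not measured Turing values.
def relative_dynamic_power(f_new, f_old, v_new, v_old):
    return (f_new / f_old) * (v_new / v_old) ** 2

# e.g. factory boost ~1800 MHz at ~1.00 V vs. an overclock to ~2100 MHz
# that needs ~1.09 V to stay stable
increase = relative_dynamic_power(2100, 1800, 1.09, 1.00)
print(f"dynamic power up by roughly {increase - 1:.0%}")  # ~+39%
```

So even a modest voltage bump on top of a ~17% clock increase pushes the power figure up by well over a third, which is exactly the kind of TDP number NVIDIA wouldn't want on the box.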

 

 

Aand a very off-topic thing:

 

I’ve been using this forum much more since overclock.net had a redesign. Just thought I’d say I really appreciate this community, you guys are great. That’s all :)

 


Did any of you who are talking about the following claim by Notebookcheck even watch the video?

 

''Last, but not least, Petersen mentioned that cards with Founders’ Edition or custom cooling solutions will be able to boost their cores over 2.1 GHz, and Nvidia really wanted the new GPUs to be good overclockers.''

 

At no point in the video did Petersen claim that Founders Edition cards and custom-cooled cards (presumably released by board partners) would, as a matter of fact, reach 2.1 GHz. When asked about overclocking, Petersen mentioned that he had overclocked a single random (allegedly random) air-cooled card to 2.13 GHz.

 

If Nvidia wants to make such claims, they can clock their cards to 2.1 GHz and sell them as 2.1 GHz cards on the store page, or issue a guarantee to customers that ALL Founders Edition cards will in fact reach 2.1 GHz... However, Petersen made no claim that all cards will be able to reach 2.1 GHz.

Motherboard: Asus X570-E
CPU: 3900X 4.3GHz

Memory: G.Skill Trident GTZR 3200MHz CL14

GPU: AMD RX 570

SSD1: Corsair MP510 1TB

SSD2: Samsung MX500 500GB

PSU: Corsair AX860i Platinum
