[HardwareLeaks/_rogame] - Exclusive first look at Nvidia’s Ampere Gaming performance

If that's the 3090, those numbers really suck - I don't get why people are impressed. The 2xxx series was really underwhelming as well, and it came out nearly two years ago. If all they can do in that time is a measly +30% performance, it had better be cheap.

Link to post
Share on other sites
10 minutes ago, Sampsy said:

If that's the 3090, those numbers really suck - I don't get why people are impressed. The 2xxx series was really underwhelming as well, and it came out nearly two years ago. If all they can do in that time is a measly +30% performance, it had better be cheap.

Not when you factor in ray tracing, not by far.

You might think it's a gimmick, but it's the future; devs have wanted it for decades.

Link to post
Share on other sites

It does feel like we're on the edge of a new way of thinking about GPU performance. It could well be that they don't significantly increase non-RT performance in absolute terms, instead using updated DLSS to give more of a boost while throwing better RT into the mix in parallel. We're going to need separate measures for each of those areas, both against the previous gen and vs AMD (and eventually Intel too).

Desktop Gaming system: Asrock Z370 Pro4, i7-8086k stock, Noctua D15, Corsair Vengeance Pro RGB 3200 4x16GB, Asus Strix 1080Ti, NZXT E850 PSU, Cooler Master MasterBox 5, Optane 900p 280GB, Crucial MX200 1TB, Sandisk 960GB, Acer Predator XB241YU 1440p 144Hz G-sync

TV Gaming system: Asus X299 TUF mark 2, 7920X @ 8c8t, Noctua D15, Corsair Vengeance LPX RGB 3000 8x8GB, EVGA 2080Ti Black, Corsair HX1000i, GameMax Abyss, Samsung 970 Evo 500GB, LG OLED55B9PLA

Former Main system: Asus Maximus VIII Hero, i7-6700k stock, Noctua D14, G.Skill Ripjaws V 3200 2x8GB, Gigabyte GTX 1650, Corsair HX750i, In Win 303 NVIDIA, Samsung SM951 512GB, WD Blue 1TB, HP LP2475W 1200p wide gamut

Gaming laptop: Asus FX503VD, i5-7300HQ, 2x8GB DDR4, GTX 1050, Sandisk 256GB + 480GB SSD

Link to post
Share on other sites

30% seems about normal-ish for a generation-to-generation GPU performance bump.

The Workhorse

R7 3700X | RTX 2070 Super | 32GB DDR4-3200 | 512GB SX8200P + 2TB 7200RPM Barracuda Compute | Windows 10 Pro

 

The Portable Station

Core i7 7700H | GTX 1060 | 8GB DDR4-2400 | 128GB SSD + 1TB HGST | Windows 10

 

Samsung Galaxy Note8 SM-N950F

Exynos 8895 ARM Mali G71 MP20 | 6GB LPDDR4 | 64GB internal + 128GB microSD | 6.3" 1440p "Infinity Display" AMOLED | Android Pie 9.0 w/ OneUI

Link to post
Share on other sites
1 hour ago, Sampsy said:

If all they can do in that time is a measly +30% performance, it had better be cheap.

 

Not measly

 

480 to 580 was about 16%

580 to 680 was about 45%

680 to 780 was about 28%

780 to 980 was about 31%

980 to 1080 was about 62%

 

https://www.techspot.com/article/1191-nvidia-geforce-six-generations-tested/

 

Assume the two outliers are not representative of the whole (good statistical practice when working with averages, especially when they are so far from the norm) and you get an average of about 35% improvement. So 30% is well within the normal improvement range for a new product.

 

Or, if you want to include the 980 Ti because it came out in the year between the 980 and the 1080:

980 to 980 Ti was 29% and 980 Ti to 1080 was 26.5%, bringing the release average to 31.8%, or 32% if we ignore the outliers.
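
(If anyone wants to sanity-check the arithmetic, here's a rough sketch in Python using the TechSpot figures above; which two jumps count as the outliers is a judgment call, here taken as 16% and 62%.)

```python
# Gen-over-gen performance gains (%) quoted from the TechSpot article above
gains = {
    "480->580": 16,
    "580->680": 45,
    "680->780": 28,
    "780->980": 31,
    "980->1080": 62,
}

values = list(gains.values())
mean_all = sum(values) / len(values)  # ~36.4% across all five jumps

# Drop the two values furthest from the overall mean (here 16% and 62%)
trimmed = sorted(values, key=lambda v: abs(v - mean_all))[:-2]
mean_trimmed = sum(trimmed) / len(trimmed)  # ~34.7%, i.e. the ~35% quoted above

print(f"mean of all gains:       {mean_all:.1f}%")
print(f"mean excluding outliers: {mean_trimmed:.1f}%")
```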

QuicK and DirtY. Read the CoC, it's like a guide on how not to be a moron. Also I don't have an issue with the VS series.

Sometimes I miss contractions like n't on the end of words like wouldn't, couldn't and shouldn't. Please don't be a dick, make allowances when reading my posts.

Link to post
Share on other sites

Regarding this entire thing

 

  1. It's a rumor
  2. It's a rumor
  3. It's a rumor

Did I say it's a rumor? Yeah, well, it's a rumor.

 

Best to keep expectations in check, because barely anything else about the card is even moderately confirmed. A 30%-ish performance improvement is well within range for NVIDIA's typical generation-over-generation gains, but it also depends on the price. Turing itself was a solid 20-30% performance improvement over Pascal, but the RTX price premium ended up making those cards worse value outright, especially at launch.

 

If Ampere's raw power proves to be a little underwhelming, I'm kinda betting that NVIDIA isn't focusing as much on raw compute power and is instead focusing more on artificial intelligence and deep learning with stuff like DLSS.

The Workhorse

R7 3700X | RTX 2070 Super | 32GB DDR4-3200 | 512GB SX8200P + 2TB 7200RPM Barracuda Compute | Windows 10 Pro

 

The Portable Station

Core i7 7700H | GTX 1060 | 8GB DDR4-2400 | 128GB SSD + 1TB HGST | Windows 10

 

Samsung Galaxy Note8 SM-N950F

Exynos 8895 ARM Mali G71 MP20 | 6GB LPDDR4 | 64GB internal + 128GB microSD | 6.3" 1440p "Infinity Display" AMOLED | Android Pie 9.0 w/ OneUI

Link to post
Share on other sites
1 hour ago, mr moose said:

480 to 580 was about 16%

580 to 680 was about 45%

680 to 780 was about 28%

780 to 980 was about 31%

980 to 1080 was about 62%

480->580 40nm->40nm = 16%
580->680 40nm->28nm = 45%
680->780 28nm->28nm = 28%
780->980 28nm->28nm = 31%
980->1080 28nm->16nm = 62%

 

Generational improvements alone average less than 30%; coupled with a node change, however, you can easily expect >50% (if only it were that simple).
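
(Rough sketch of that split, using the same TechSpot numbers quoted above; just to illustrate the two averages, not a prediction.)

```python
# Same gen-over-gen gains (%), grouped by whether the process node changed
same_node   = [16, 28, 31]  # 480->580, 680->780, 780->980
node_shrink = [45, 62]      # 580->680 (40nm->28nm), 980->1080 (28nm->16nm)

print(f"same-node average:   {sum(same_node) / len(same_node):.1f}%")     # 25.0%
print(f"node-shrink average: {sum(node_shrink) / len(node_shrink):.1f}%") # 53.5%
```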

Link to post
Share on other sites
1 minute ago, Loote said:

480->580 40nm->40nm = 16%
580->680 40nm->28nm = 45%
680->780 28nm->28nm = 28%
780->980 28nm->28nm = 31%
980->1080 28nm->16nm = 62%

 

Generational improvements alone average less than 30%; coupled with a node change, however, you can easily expect >50% (if only it were that simple).

 

980 Ti to 1080 was 30% on the jump to the 16nm node,

and

1080 to 2080 was also a node jump (although slightly smaller), at 21%. But you are right that there is a lot more to it than that: there are increases in core counts, memory improvements and so on. For the purpose of setting expectations, I think 30% is not unreasonable or measly, given it's historically typical even across node jumps.

 

QuicK and DirtY. Read the CoC, it's like a guide on how not to be a moron. Also I don't have an issue with the VS series.

Sometimes I miss contractions like n't on the end of words like wouldn't, couldn't and shouldn't. Please don't be a dick, make allowances when reading my posts.

Link to post
Share on other sites

The numbers here do seem weird to me considering previous leaks. These numbers just seem too low compared to what other leakers have said about the performance and the changes made to achieve it. But everything should be taken with a grain of salt, so who knows how this'll turn out.

Link to post
Share on other sites

Meh, if this is the top-end chip. But if the RT performance drop is below 30% or even 20%, I'm sold.

| Intel i7-3770@4.2Ghz | Asus Z77-V | Zotac 980 Ti Amp! Omega | DDR3 1800mhz 4GB x4 | 300GB Intel DC S3500 SSD | 512GB Plextor M5 Pro | 2x 1TB WD Blue HDD |
 | Enermax NAXN82+ 650W 80Plus Bronze | Fiio E07K | Grado SR80i | Cooler Master XB HAF EVO | Logitech G27 | Logitech G600 | CM Storm Quickfire TK | DualShock 4 |

Link to post
Share on other sites
2 hours ago, mr moose said:

 

980 Ti to 1080 was 30% on the jump to the 16nm node,

and

1080 to 2080 was also a node jump (although slightly smaller), at 21%. But you are right that there is a lot more to it than that: there are increases in core counts, memory improvements and so on. For the purpose of setting expectations, I think 30% is not unreasonable or measly, given it's historically typical even across node jumps.

 

It's also important to note that the GTX 1080 Ti made a massive jump in performance within the same series. A rather unusually large jump, actually. So large that going to an RTX 2080 or even an RTX 2080 Ti doesn't make sense unless you have money to piss away or you specifically want RTX functionality.

AMD Ryzen 7 5800X | ASUS Strix X570-E | G.Skill 32GB 3600MHz CL16 | AORUS GTX 1080Ti | Samsung 850 Pro 2TB | Seagate Barracuda 8TB | Sound Blaster AE-9 MUSES

Link to post
Share on other sites

If that's the 3080, that is impressive; if it's the 3090, not really.

Link to post
Share on other sites
11 hours ago, Loote said:

I believe just the node shrink should allow them to pack twice as many transistors, but it's more complicated than that, and the extra space could be occupied by even more RT cores.

 

 

This doesn't seem right. I'm getting frustrated by leaks; I wish we'd know what's coming already.

As for point one, yes, but that's only if you keep die sizes the same.

 

I highly doubt that Nvidia will be selling consumers 700+mm^2 dies yet again, given the available N7 supply and price per wafer. Not only do they not have enough supply to meet market demand, they also wouldn't have much margin at all on these dies.

 

As for architectural improvements, really? "It doesn't feel right"? Come on man, what I posted isn't even a leak; it's a guy talking about CUDA 11, which has been publicly released, and saying that after some playing around with it, not much seems to be different between Ampere and Turing.

 

"It doesn't feel right" isn't a counter-point.

Link to post
Share on other sites
1 hour ago, uzzi38 said:

As for point one, yes, but that's only if you keep die sizes the same.

Sure, they often did smaller dies on a new process, and only the next generation on the same process went big.

 

1 hour ago, uzzi38 said:

"It doesn't feel right" isn't a counter-point.

It was more of an agreement with:
 

Quote

Consumer-facing Ampere, I'd hope, is different, but I can't see some revolutionary jump in performance coming as a result of this.

Based on what little info there is we start coming to conclusions, but AFAIK there's no way to tell which GPU this would be, how big the influence of it being an engineering sample is, or whether it's legit at all. I'm not going to use stronger words because I'm not qualified to guess, and the people who are qualified are most likely forbidden from telling us.

Link to post
Share on other sites
16 hours ago, Loote said:

I believe just the node shrink should allow them to pack twice as many transistors, but it's more complicated than that, and the extra space could be occupied by even more RT cores.

 

I believe NVIDIA themselves have indicated it has 4x the RT cores. From memory of a breakdown of the Turing dies they took up quite a big area, and these RT cores have additional instruction support. I suspect most of even the 3090 die area is going to be RT cores. So this sounds quite reasonable to me.

Link to post
Share on other sites

People here think the gains are impressive; oh, how I miss the older days.

Pentium 4 670 4.6ghz . ASUS P5K3 . 4GB DDR2 1066 . Radeon HD 6990 . Corsair HX1000i

Link to post
Share on other sites
1 hour ago, CarlBar said:

 

I believe NVIDIA themselves have indicated it has 4x the RT cores. From memory of a breakdown of the Turing dies they took up quite a big area, and these RT cores have additional instruction support. I suspect most of even the 3090 die area is going to be RT cores. So this sounds quite reasonable to me.

The traversal coprocessor rumor?

 

https://www.tweaktown.com/news/73209/geforce-rtx-3080-and-3090-rumored-to-pack-traversal-coprocessor/index.html

 

Link to post
Share on other sites
5 hours ago, pas008 said:

From the link:

Quote

I'm sure that when you saw the headline, or read this news or saw a post on social media about a "traversal coprocessor" on NVIDIA's new GeForce RTX 3080 and GeForce RTX 3090 graphics cards that you thought "what the HELL is a traversal coprocessor".

Almost correct, but I used a stronger 4-letter word where the all caps go :D

 

In short, the claim here is that RT would be offloaded from the main GPU die itself. Interesting stuff. You could look at it like a coprocessor. We have to assume that on-card bandwidth won't hinder this kind of operation if it goes ahead. It also implies that non-RT parts are more of a possibility, as you don't need to decide to implement it at a silicon level, but can do so at a card level.

Desktop Gaming system: Asrock Z370 Pro4, i7-8086k stock, Noctua D15, Corsair Vengeance Pro RGB 3200 4x16GB, Asus Strix 1080Ti, NZXT E850 PSU, Cooler Master MasterBox 5, Optane 900p 280GB, Crucial MX200 1TB, Sandisk 960GB, Acer Predator XB241YU 1440p 144Hz G-sync

TV Gaming system: Asus X299 TUF mark 2, 7920X @ 8c8t, Noctua D15, Corsair Vengeance LPX RGB 3000 8x8GB, EVGA 2080Ti Black, Corsair HX1000i, GameMax Abyss, Samsung 970 Evo 500GB, LG OLED55B9PLA

Former Main system: Asus Maximus VIII Hero, i7-6700k stock, Noctua D14, G.Skill Ripjaws V 3200 2x8GB, Gigabyte GTX 1650, Corsair HX750i, In Win 303 NVIDIA, Samsung SM951 512GB, WD Blue 1TB, HP LP2475W 1200p wide gamut

Gaming laptop: Asus FX503VD, i5-7300HQ, 2x8GB DDR4, GTX 1050, Sandisk 256GB + 480GB SSD

Link to post
Share on other sites
50 minutes ago, porina said:

From the link:

Almost correct, but I used a stronger 4-letter word where the all caps go :D

 

In short, the claim here is that RT would be offloaded from the main GPU die itself. Interesting stuff. You could look at it like a coprocessor. We have to assume that on-card bandwidth won't hinder this kind of operation if it goes ahead. It also implies that non-RT parts are more of a possibility, as you don't need to decide to implement it at a silicon level, but can do so at a card level.

Does this mean easier binning also?

Link to post
Share on other sites

Not impressive if the pricing structure stays the same as the 20 series or goes even higher.

 

However, if you're getting a 30% performance uplift over the 2080 Ti at 10-series pricing, then it's impressive/palatable.

 

I could also win the lottery and wifey Cara Delevingne.

CPU | Intel i7-7700K | GPU | EVGA 1080ti Founders Edition | CASE | Phanteks Enthoo Evolv ATX | PSU | Seasonic X850 80 Plus Gold | RAM | 2x8GB G.skill Trident RGB 3000MHz | MOTHERBOARD | Asus Z270E Strix | STORAGE | Adata XPG 256GB NVME + Kingston 120GB SSD + WD Blue 1TB + Adata 480GB SSD | COOLING | Hard Line Custom Loop + 7 Corsair 120 Air Series QE Fans | MONITOR | Acer Predator XB271HU | OS | Windows 10 |

                                   

                                   

Link to post
Share on other sites
4 hours ago, porina said:

In short, the claim here is that RT would be offloaded from the main GPU die itself. Interesting stuff. You could look at it like a coprocessor.

PhysX cards v2.0? 😉

Link to post
Share on other sites
21 hours ago, leadeater said:

PhysX cards v2.0? 😉

In a way I'm still shocked that HW-accelerated physics aren't a thing in games yet as a de facto standard. And because there really isn't any standard for it, it'll just remain a pointless gimmick shilled by NVIDIA, because no one can implement it as a gameplay mechanic unless everyone can do it. Half-Life 2 was really the only serious attempt in that direction, and how many years have passed since? A lot. And no game comes even close to Half-Life 2's physics, which were actually usable within gameplay, even if it was just tossing radiators at headcrabbed zombies. 99% of games today don't even try that.

AMD Ryzen 7 5800X | ASUS Strix X570-E | G.Skill 32GB 3600MHz CL16 | AORUS GTX 1080Ti | Samsung 850 Pro 2TB | Seagate Barracuda 8TB | Sound Blaster AE-9 MUSES

Link to post
Share on other sites
