[HardwareLeaks/_rogame] - Exclusive first look at Nvidia’s Ampere Gaming performance

uzzi38

https://hardwareleaks.com/2020/06/21/exclusive-first-look-at-nvidias-ampere-gaming-performance/

 

Long story short, these are the key details from the article:

 

Quote

So basically, this unknown Ampere variant is:

  • 42.11% better than a stock RTX 2080 Ti Founders Edition
  • 34.20% better than a stock MSI RTX 2080 Ti Lightning Z
  • 28.42% better than a stock Nvidia Titan RTX
  • 8.30% better than the best Nvidia Titan V result under LN2
  • 2.18% less than KINGPIN’s overclocked EVGA RTX 2080 Ti XC

The GPU core clock reports as 1935MHz. The memory clock reads 6000MHz, but that's likely a misreport due to the new memory type/early drivers.

 

EDIT: Alright, if I'm to add in something original, then here we go:

 


GA100 at least doesn't show any major uplift in performance/FLOP or IPC, whatever you want to call it. There seem to be few adjustments to the uArch in terms of shaders. Consumer-facing Ampere, I'd hope, is different, but I can't see some revolutionary jump in performance coming as a result of this.
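(To unpack "performance/FLOP": FP32 throughput is roughly 2 ops per shader per clock, so you can normalise a benchmark score by theoretical TFLOPS to compare architectures. A minimal sketch, with every number made up purely to show the arithmetic:)

```python
# Score-per-TFLOP sketch. FP32 TFLOPS ~= 2 ops/clock * shader count *
# clock. Every number below is made up purely to show the arithmetic.
def tflops(shaders: int, clock_mhz: float) -> float:
    return 2 * shaders * clock_mhz * 1e6 / 1e12

cards = {
    # name: (shaders, clock in MHz, graphics score) - all illustrative
    "Last gen": (4000, 1600, 10000),
    "Next gen": (5000, 1900, 14800),
}
for name, (shaders, clock, score) in cards.items():
    print(f"{name}: {score / tflops(shaders, clock):.0f} points per TFLOP")
# Near-identical points/TFLOP means the gain came from more units and
# higher clocks, not from IPC.
```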

 

 


The score is in line with what you'd expect from the same kind of clocks and 20-something% extra CUDA cores. I do hope the final clocks are at least a bit higher though, especially given the 350W rumour. At, say, 2.3GHz, it would become roughly a 50% lead over the 2080 Ti, which is more like what you'd expect generation on generation. But in any case, the main point I'm trying to make is that I'm fairly sure this is either the rumoured 3090 or a Titan GPU.
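As a rough sanity check on that 2.3GHz figure, here's the arithmetic as a minimal sketch. The base lead, target clock, and scaling efficiency are all assumptions:

```python
# Back-of-the-envelope version of the estimate above. Assumes the
# corrected ~31% lead at the reported 1935MHz, a speculative 2300MHz
# retail boost clock, and sub-linear scaling with clock (the ~0.8
# efficiency factor is a guess).
def scaled_lead(base_lead: float, base_mhz: float, target_mhz: float,
                scaling_efficiency: float = 0.8) -> float:
    clock_gain = (target_mhz / base_mhz - 1.0) * scaling_efficiency
    return (1.0 + base_lead) * (1.0 + clock_gain) - 1.0

print(f"{scaled_lead(0.31, 1935, 2300):.0%}")  # ~51%
```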


Presumably the 3090. Still impressive.

CPU: AMD 3600X

GPU: Nvidia 3070 (planned)

Motherboard: MSI B500-A PRO

Memory: G.Skill Ripjaws V 2x8GB 3600MHz CL16

PSU: EVGA B3 650

Case: NZXT H510

Wireless network adapter: Intel AX200


15 minutes ago, uzzi38 said:
  • 42.11% better than a stock RTX 2080 Ti Founders Edition
  • 8.30% better than the best Nvidia Titan V result under LN2
  • 2.18% less than KINGPIN’s overclocked EVGA RTX 2080 Ti XC

Holy... how do you get better than a Titan V under LN2, and what looks to be 70 or so percent better than the Founders Edition?

✨FNIGE✨


This is only good news if their mid-range product drops in price. I don't know about other countries, but I have been noting a serious downturn in disposable income of late; it seems to be across the board too.

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


2 minutes ago, BiG StroOnZ said:

Here's the compiled 3DMark Time Spy chart with comparative performance numbers/scores (as found in the OP's link):

 

[Attached image: compiled 3DMark Time Spy comparison chart]

 

If true, this is looking crazy good.

 

Excuse me, but is that the Time Spy result or the expected price tag?

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


13 minutes ago, mr moose said:

Excuse me, but is that the Time Spy result or the expected price tag?

 

Probably both!  😆

 

Also, a correction (there seem to be some slight typos in the OP's percentages): the unknown Ampere GPU is:

 

  • 30.98% better than a stock RTX 2080 Ti Founders Edition
  • 21.07% better than a stock MSI RTX 2080 Ti Lightning Z
  • 22.14% better than a stock Nvidia Titan RTX
  • 8.30% better than the best Nvidia Titan V result under LN2
  • 2.18% less than KINGPIN’s overclocked EVGA RTX 2080 Ti XC

Still exciting if accurate.
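(For clarity on what these percentages actually are: just relative differences between Time Spy graphics scores. A minimal sketch with hypothetical placeholder scores, not the leaked figures:)

```python
# How deltas like the ones above fall out of two graphics scores.
# All score values here are hypothetical placeholders.
def percent_delta(new_score: float, baseline: float) -> float:
    """Relative difference of new_score vs baseline, in percent."""
    return (new_score / baseline - 1.0) * 100.0

ampere = 13000                        # hypothetical Ampere graphics score
baselines = {
    "RTX 2080 Ti FE (stock)": 10000,  # hypothetical baseline
    "Titan RTX (stock)": 10500,       # hypothetical baseline
}
for card, score in baselines.items():
    print(f"{card}: {percent_delta(ampere, score):+.2f}%")
# -> RTX 2080 Ti FE (stock): +30.00%
# -> Titan RTX (stock): +23.81%
```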


I mean... unless it's actually something you can buy with money, and not your firstborn child's eyes and left kidney, does it really matter?

I spent $2500 on building my PC and all I do with it is play no games atm, watch anime at 1080p (finally), watch YT, and write essays... nothing, it just sits there collecting dust...

Builds:

The Toaster Project! Northern Bee!

 

The original LAN PC build log! (Old, dead and replaced by The Toaster Project & 5.0)


"Here is some advice that might have gotten lost somewhere along the way in your life. 

 

#1. Treat others as you would like to be treated.

#2. It's best to keep your mouth shut; and appear to be stupid, rather than open it and remove all doubt.

#3. There is nothing "wrong" with being wrong. Learning from a mistake can be more valuable than not making one in the first place.

 

Follow these simple rules in life, and I promise you, things magically get easier. " - MageTank 31-10-2016

 

 


If this is the 3090/3080 Ti and it costs $1000+, it's pretty bad; if it is the 3080 and it costs $700 or less, it's pretty good. A lot of people are assuming this is the 3090/3080 Ti, but if that's the case it would be ridiculously bad compared to Nvidia's last big node jump, where the 980 Ti got beaten by ~30% on average by the 1080. In Time Spy, according to the Guru3D test, the 980 Ti scores 5327 and the 1080 scores 7591, so more than the 30% average difference. Naturally this would change if Nvidia went back to normal pricing and the 3090/3080 Ti was around $700 maximum. The jump from the 780 Ti to the 980 was pretty close to 30% in Time Spy, so if this isn't the 3080, it would be another really bad improvement, together with Turing. A mere 30% improvement would also put AMD in a position to be able to "match" the performance of the 3090/3080 Ti, which is something I would think Nvidia really doesn't want to happen.
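For reference, the generational uplift implied by those Guru3D numbers works out as follows (using only the scores quoted above):

```python
# Sanity check on the 980 Ti -> 1080 jump, using the Guru3D Time Spy
# scores cited above.
scores = {"GTX 980 Ti": 5327, "GTX 1080": 7591}
uplift = scores["GTX 1080"] / scores["GTX 980 Ti"] - 1.0
print(f"980 Ti -> 1080 Time Spy uplift: {uplift:.1%}")  # 42.5%
```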


Putting aside the possible name changes, if it is either the 3090/Titan or the 3080 Ti/S, then drawing from recent generations there isn't going to be that great a difference between them. While there might be some small core count difference, the main selling point of the top model is the extra VRAM.

 

So the question then is whether this unknown card is the more mainstream -80 card, or one of the higher ones above that. We should also keep in mind that Time Spy was created as a DX12 bench, if my memory serves correctly. A big question is what the RT performance is like. Does it overcome the limitations of the 1st gen? See if there is a Port Royal entry from the same submitter, perhaps. I don't know if people think a 20-30% improvement on the equivalent-position current-gen card is enough for traditional gaming, but a good boost in RT could also add significant value. I look forward to comparisons on both traditional and RT benches on next-gen cards from both sides.

 

Random thought on the reported core speed: if memory serves me correctly, it only reports the driver boost speed. In practice, without needing to go anywhere near overclocking, cards may boost above that level, so we don't know what the eventual limit is. As for the RAM speed, I've always got myself confused over real clocks and effective clocks on GPUs. Doesn't the multiplier between them differ depending on the RAM type? e.g. GDDR will be different from HBM. If we take the 6000 value, does that divide down to something sensible with a common ratio?
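To answer my own question with the usual rules of thumb (4x for GDDR5, 8x for GDDR5X/GDDR6, 2x for HBM2; how an early driver reports a brand-new memory type is anyone's guess, so treat this as illustrative only):

```python
# Common real-clock -> effective-data-rate multipliers per memory type
# (rules of thumb, not guarantees).
DATA_RATE_MULTIPLIER = {
    "GDDR5": 4,    # e.g. 2000MHz real -> 8Gbps effective
    "GDDR5X": 8,
    "GDDR6": 8,    # e.g. 1750MHz real -> 14Gbps effective
    "HBM2": 2,     # double data rate
}

reported = 6000  # MHz, as shown in the leaked benchmark entry
for mem_type, mult in DATA_RATE_MULTIPLIER.items():
    # If 6000 were an effective rate, the implied real clock would be:
    print(f"{mem_type}: {reported / mult:.0f}MHz real clock")
# 750MHz would be oddly low for GDDR6 and 3000MHz oddly high for HBM2,
# which fits the early-driver misreport theory.
```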

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


2 hours ago, BiG StroOnZ said:

Also, a correction (there seem to be some slight typos in the OP's percentages): the unknown Ampere GPU is:

It seems like the original source writer updated their article with more aggressive results for the existing cards, hence the lower improvements now showing compared with this thread's OP. I guess that is a problem with relative comparisons: how good it looks depends on what you choose to compare it with.

 

 

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


3 hours ago, BiG StroOnZ said:

 

Probably both!  😆

 

Also, a correction (there seem to be some slight typos in the OP's percentages): the unknown Ampere GPU is:

 

  • 30.98% better than a stock RTX 2080 Ti Founders Edition
  • 21.07% better than a stock MSI RTX 2080 Ti Lightning Z
  • 22.14% better than a stock Nvidia Titan RTX
  • 8.30% better than the best Nvidia Titan V result under LN2
  • 2.18% less than KINGPIN’s overclocked EVGA RTX 2080 Ti XC

Still exciting if accurate.

This is the second time already some idiot has made typos like this; the same thing happened with the 3600XT being listed as 4 cores/8 threads.

So what is it exactly? The updated numbers? If those are right, it doesn't seem like much of a performance bump, which is not really worth the upgrade IMHO.

DAC/AMPs:

Klipsch Heritage Headphone Amplifier

Headphones: Klipsch Heritage HP-3 Walnut, Meze 109 Pro, Beyerdynamic Amiron Home, Amiron Wireless Copper, Tygr 300R, DT880 600ohm Manufaktur, T90, Fidelio X2HR

CPU: Intel 4770, GPU: Asus RTX3080 TUF Gaming OC, Mobo: MSI Z87-G45, RAM: DDR3 16GB G.Skill, PC Case: Fractal Design R4 Black non-iglass, Monitor: BenQ GW2280


2 hours ago, porina said:

It seems like the original source writer updated their article with more aggressive results for the existing cards, hence the lower improvements now showing compared with this thread's OP. I guess that is a problem with relative comparisons: how good it looks depends on what you choose to compare it with.

 

Yeah, so uh, we may or may not have used Guru3D's numbers originally (first search result that came up), but for some reason they compared multiple GPUs using 3DMark's overall performance scores, not the graphics scores like you'd think.

 

The new numbers are purely graphics scores, and can actually be compared with standard 2080 Ti performance figures.
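For anyone wondering why the distinction matters: if I remember UL's technical guide correctly, the Time Spy overall score is a weighted harmonic mean of the graphics and CPU sub-scores, so the test rig's CPU drags the overall number around even with an identical GPU. A sketch (the 0.85/0.15 weights are from memory, treat them as approximate):

```python
def time_spy_overall(graphics: float, cpu: float) -> float:
    # Weighted harmonic mean of the sub-scores; weights recalled from
    # UL's technical guide, so treat as approximate.
    return 1.0 / (0.85 / graphics + 0.15 / cpu)

# Same hypothetical graphics score, two different CPUs:
print(f"{time_spy_overall(14000, 12000):.0f}")  # ~13659
print(f"{time_spy_overall(14000, 8000):.0f}")   # ~12584
```

Which is why comparing overall scores across systems with different CPUs made the original deltas misleading.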


5 hours ago, mr moose said:

This is only good news if their mid-range product drops in price. I don't know about other countries, but I have been noting a serious downturn in disposable income of late; it seems to be across the board too.

They'll have no choice but to drop prices if the market can't afford the product. Unless they're just stupid.

 

I'd like to see a price drop as well. I'm thinking I'm gonna need a better GPU than this 1080 Ti for Cyberpunk 2077 to run in all its 4K glory.

Ketchup is better than mustard.

GUI is better than Command Line Interface.

Dubs are better than subs


2 minutes ago, Trik'Stari said:

They'll have no choice but to drop prices if the market can't afford the product. Unless they're just stupid.

I don't think the market by itself will be the main dictator. They'll have products through the whole stack eventually, so people can buy in at a point they're comfortable with. That's when the value argument rears its ugly head, primarily with AMD. Nvidia and AMD will both try to position themselves on value accordingly. I'm not saying Nvidia has to undercut AMD, the same as Intel doesn't have to be better value than AMD, but they will consider each other in their pricing plans. Also, throw in Intel as a new entrant and it could get interesting.

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


9 hours ago, Bananasplit_00 said:

I mean... unless it's actually something you can buy with money, and not your firstborn child's eyes and left kidney, does it really matter?

For those who can afford it, yes. Also, this usually translates into better-performing lower-end cards if the uplift at the high end is decent. Granted, this is based on the assumption that they won't further price-hike the high end, because they no longer need to make the die bigger to get more performance like they did with the 2080 Ti.


10 minutes ago, Brooksie359 said:

For those who can afford it, yes. Also, this usually translates into better-performing lower-end cards if the uplift at the high end is decent. Granted, this is based on the assumption that they won't further price-hike the high end, because they no longer need to make the die bigger to get more performance like they did with the 2080 Ti.

Tbh, since the launch of the RX 470/480 back in 2016, not much has changed on the low end.

Vega, Navi, GTX 16 series, RTX 20 series, and basically nothing changed (the 400 series got refreshed by the RX 500 series, but apart from some optimization they are basically the same).

 

So I doubt it will translate into better performance at the lower end, because for the last 3-4 years not much has changed...

 

And I'm sure someone is going to mention "hey, what about the GTX 1660", and while it does give around 10-15% more performance than an RX 580, it's also 10-15% more expensive...

If you want my attention, quote meh! D: or just stick an @samcool55 in your post :3

Spying on everyone to fight against terrorism is like shooting a mosquito with a cannon


12 hours ago, SlimyPython said:

Holy... how do you get better than a Titan V under LN2, and what looks to be 70 or so percent better than the Founders Edition?

You need to be a fucking legend.


A 30% boost is huge. This will enable 75-100Hz 4K gaming, unless I am wrong about my guesses.


1 hour ago, samcool55 said:

Tbh, since the launch of the RX 470/480 back in 2016, not much has changed on the low end.

Vega, Navi, GTX 16 series, RTX 20 series, and basically nothing changed (the 400 series got refreshed by the RX 500 series, but apart from some optimization they are basically the same).

 

So I doubt it will translate into better performance at the lower end, because for the last 3-4 years not much has changed...

 

And I'm sure someone is going to mention "hey, what about the GTX 1660", and while it does give around 10-15% more performance than an RX 580, it's also 10-15% more expensive...

I would disagree. They have a node shrink, which will allow for more CUDA cores and higher clock speeds within the same die size as before. Basically, they can make the same-size chips but faster, giving them the opportunity to deliver increased performance at the low end. They couldn't do that with the 20 series or the 16 series, because they didn't have any clock speed improvements or a way to add more cores within the same die size. I think it is safe to say that there will be an increase in performance at the low end, especially because if they don't, then RDNA 2 will likely dominate the low end.


1 hour ago, rawrdaysgoby said:

A 30% boost is huge. This will enable 75-100Hz 4K gaming, unless I am wrong about my guesses.

That 350W rumour exists for a reason.

 

I highly doubt Nvidia will be happy with only a 30% uptick this generation. Not even slightly.


8 hours ago, Trik'Stari said:

They'll have no choice but to drop prices if the market can't afford the product. Unless they're just stupid.

 

I'd like to see a price drop as well. I'm thinking I'm gonna need a better GPU than this 1080 Ti for Cyberpunk 2077 to run in all its 4K glory.

 

I don't think they will, on the grounds that the market they sell to is largely the same market that pays $60-80 for a pre-release, half-finished, beta, money-grab, loot-box-riddled remake with only new characters. It's the rest of us that suffer because of it.

 

 

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


45 minutes ago, uzzi38 said:

I highly doubt Nvidia will be happy with only a 30% uptick this generation. Not even slightly.

I believe the node shrink alone should allow them to pack in twice as many transistors, but it's more complicated than that, and the extra area could be occupied by even more RT cores.

Quote

GA100 at least doesn't show any major uplift in performance/FLOP or IPC, whatever you want to call it. There seem to be few adjustments to the uArch in terms of shaders.

This doesn't seem right. I'm getting frustrated by leaks; I wish we knew what's coming already.

 

 

Quote

I don't think they will, on the grounds that the market they sell to is largely the same market that pays $60-80 for a pre-release, half-finished, beta, money-grab, loot-box-riddled remake with only new characters. It's the rest of us that suffer because of it.

Fab capacity and price might be a problem. They could launch at high prices and lower them once the fabs catch up to demand; how much will depend on the market's response (and what AMD does).


10 hours ago, CTR640 said:

So what is it exactly? The updated numbers? If those are right, it doesn't seem like much of a performance bump, which is not really worth the upgrade IMHO.

 

I would probably stand by the updated figures, personally. While it may not seem like that much of a performance bump at first glance, take into account that this is most likely an engineering sample, so I'm sure you understand what that entails.

