NVIDIA GeForce 3070/3080/3080 Ti (Ampere): RTX Has No Perf Hit & x80 Ti Card 50% Faster in 4K! (Update 6 ~ Specs / Overview / Details)

1 minute ago, BiG StroOnZ said:

 

Half the power does sound optimistic, as NVIDIA is going to have to prioritize one or the other (power savings vs. performance increase). That said, there will definitely be a very large efficiency gain from the new node. I think 30% power savings with a 50% performance increase, +/- 5%, is a more realistic expectation.

For the last several generations pretty much all GPUs have been pretty heavily power constrained. If Nvidia can get +50% perf at -30% power, why would they not go for 65% perf at the same power? 

 

Power efficiency looks good on a slide and on a technological level it's nice to have, but it's not going to sell cards. If they want everyone to get off their 1080 Tis, they're going to want to show the biggest performance bump they can get.

 

That said, better idle power draw and overall efficiency based on workload would make sense as an advertising point. I just don't see them sacrificing peak performance and putting out a sub-200W Ti card. 
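To put rough numbers on why those two framings differ, here's a quick arithmetic sketch; the rumored figures are treated purely as hypotheticals, not specs:

```python
# Quick arithmetic on the rumored claim (hypothetical figures, not specs).
perf_gain = 1.50   # "+50% performance"
power_cut = 0.70   # "-30% power", i.e. 70% of the previous power draw

efficiency_gain = perf_gain / power_cut   # implied perf-per-watt improvement
print(f"Implied perf/W improvement: {efficiency_gain:.2f}x")   # ~2.14x

# Spending that efficiency back at the old power budget does NOT scale
# linearly: the top of the voltage/frequency curve eats most of the extra
# watts, which is why something like "+65% at the same power" can be a
# plausible middle ground rather than the naive ~114%.
print(f"Naive linear iso-power gain: {efficiency_gain - 1:.0%}")
```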


1 minute ago, Waffles13 said:

For the last several generations pretty much all GPUs have been pretty heavily power constrained. If Nvidia can get +50% perf at -30% power, why would they not go for 65% perf at the same power? 

 

Power efficiency looks good on a slide and on a technological level it's nice to have, but it's not going to sell cards. If they want everyone to get off their 1080 Tis, they're going to want to show the biggest performance bump they can get.

 

That said, better idle power draw and overall efficiency based on workload would make sense as an advertising point. I just don't see them sacrificing peak performance and putting out a sub-200W Ti card. 

 

They could want to put the thrill back into overclocking again, perhaps (the lack of headroom has been a common complaint since Pascal), instead of maxing out performance and clocks out of the gate. Just a theory.


2 minutes ago, Waffles13 said:

If Nvidia can get +50% perf at -30% power, why would they not go for 65% perf at the same power?

Lowering the power budget means increasing your performance ceiling. Plus it allows the GPU to be in the hands of more people since the power requirements are lower. I might not think about putting a 250W TDP GPU in a 450W SFF build, but a 180W TDP part? That sounds a bit more manageable.
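Rough budget arithmetic behind that comparison, with hypothetical component numbers:

```python
# Hypothetical SFF power budget: the same 450 W build with a 250 W vs 180 W GPU.
PSU_W = 450
OTHER_COMPONENTS_W = 150   # CPU, board, drives, fans: a rough placeholder guess

for gpu_tdp in (250, 180):
    load = (gpu_tdp + OTHER_COMPONENTS_W) / PSU_W
    print(f"{gpu_tdp} W GPU -> ~{load:.0%} of the {PSU_W} W PSU under full load")
```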


27 minutes ago, BiG StroOnZ said:

Thought this was a decent update regarding this topic, worthy of a bump for this thread (but not worthy of an entirely new one):

 

 

1) http://www.taipeitimes.com/News/biz/archives/2020/01/02/2003728557

2) https://www.techpowerup.com/262592/nvidias-next-generation-ampere-gpus-to-be-50-faster-than-turing-at-half-the-power

3) https://www.guru3d.com/news-story/next-generation-nvidia-ampere-reportedly-to-offer-50-more-perf-at-half-the-power.html

 

As for my opinion on the matter, other outlets were claiming similar or even higher performance increases than these. Therefore, it isn't out of the realm of possibility; however, at the time many conflicting opinions appeared in response to those original performance claims (mainly skepticism, as in: "there's no way anyone can know these suggested performance numbers this early"; that was back at the beginning of November 2019).

As for what you quoted, this seems like a typical thing for the press and also normal people: they get some info they can't quite comprehend, interpret it totally wrong, and run with it... re: power consumption

 


3 minutes ago, Mark Kaine said:

As for what you quoted, this seems like a typical thing for the press and also normal people: they get some info they can't quite comprehend, interpret it totally wrong, and run with it... re: power consumption

 

Sounds like a summary of what existence on Earth is like currently... ?


5 minutes ago, BiG StroOnZ said:

 

Sounds like a summary of what existence on Earth is like currently... ?

Yeah haha that's right I guess... 


50% more performance on top of what the competitor already cannot beat? Even if it's true or possible, it's not going to happen; why on earth would Nvidia release such a product in this situation? Far more likely they release 20% more, sit on their ass until later, and then "Superfy" the product line when it becomes necessary or they want more sales.


30 minutes ago, Waffles13 said:

That said, better idle power draw and overall efficiency based on workload would make sense as an advertising point. I just don't see them sacrificing peak performance and putting out a sub-200W Ti card. 

The x80 Ti should still be in the 250W range, but the others might be lower, in a similar way to Maxwell/Pascal. The 980 Ti's TDP is 250W against the 165W of the 980, for example.

 

 

I hope we get a GTX 970-like card without the memory bullshit: a $350~$400 GPU with the previous x80 Ti's performance while using almost half the power, to bring prices back to reality. But it probably won't be happening.


13 minutes ago, leadeater said:

50% more performance on top of what the competitor already cannot beat? Even if it's true or possible, it's not going to happen; why on earth would Nvidia release such a product in this situation? Far more likely they release 20% more, sit on their ass until later, and then "Superfy" the product line when it becomes necessary or they want more sales.

Uh hello, have you never heard of Pascal? It's Nvidia's best-selling architecture and it came with a huge performance boost: the 1080 was ~40% faster than the 980, and likewise the 1080 Ti vs the 980 Ti. 20% performance per gen isn't even worth releasing; look at how pathetic Turing is doing versus Pascal in sales.

 

AMD is irrelevant, Nvidia has been competing with themselves since Pascal, maybe even arguably since Maxwell.


31 minutes ago, System32.exe said:

Uh hello, have you never heard of Pascal?

Yes, and back then competing products weren't nearly as far behind. Don't take a one-time event and think it'll happen again when it didn't even happen the very next generation.

 

31 minutes ago, System32.exe said:

20% performance per gen isn't even worth releasing

Really? Then why did the Super refresh happen; was that not worth it? Nvidia will only do what is necessary.

 

It's not unnecessary to make sure you have the capability to release large performance gains, but don't mistake that for actually getting them.


1 hour ago, Mira Yurizaki said:

Lowering the power budget means increasing your performance ceiling. Plus it allows the GPU to be in the hands of more people since the power requirements are lower. I might not think about putting a 250W TDP GPU in a 450W SFF build, but a 180W TDP part? That sounds a bit more manageable.

If every card is higher performance, then you just go with a lower-end model. The only difference is the epeen model number on the side of the card. If the 3060 has the same power draw/performance as a theoretical 3070 where they decided to keep performance down in the name of power efficiency, then what difference does it make (other than the fact that the 3060 may be in a lower price bracket (haha, just kidding, this is Nvidia))?

 

1 hour ago, BiG StroOnZ said:

They could want to put the thrill back into overclocking again, perhaps (the lack of headroom has been a common complaint since Pascal), instead of maxing out performance and clocks out of the gate. Just a theory.

Dear God, I hope not. If you want to gamble, go to a casino. Why on Earth would I want to buy a product with 90% of its potential performance and then be required to put in time and effort to maybe unlock the full potential? It's not like they're going to give you a discount if you get the shittier silicon. 

 

The only argument I can see is if Nvidia is artificially capping your potential to overclock beyond the "safe" bounds for the card, beyond anything that they could sell en masse. And while they do do that to a large extent currently with regards to boost and power limits, I don't see any reason for them to change that just because they theoretically leave more headroom on the table. 

 

If anything, they're looking at the 1080ti and trying to figure out how to further limit the scalability and user configurability of future architectures to make sure they don't wind up competing with their own cards so severely going forward. 


29 minutes ago, leadeater said:

Yes, and back then competing products weren't nearly as far behind. Don't take a one-time event and think it'll happen again when it didn't even happen the very next generation.

What?? Yes they were; if anything they were even more behind than they are now. AMD's initial response to Pascal was the RX 480, which only competed with... the 1060. Vega didn't come until halfway through Pascal's life cycle and was a total joke.

29 minutes ago, leadeater said:

Really? Then why did the Super refresh happen; was that not worth it? Nvidia will only do what is necessary.

Eh, the Super refresh just made the 1650, 2060, and 2070 not dog shit anymore, but the generation as a whole is still lackluster and the prices are out of control.

29 minutes ago, leadeater said:

It's not unnecessary to make sure you have the capability to release large performance gains, but don't mistake that for actually getting them.

They're moving to 7nm+... if they can't pull off something worthwhile, there's something seriously fucked behind the scenes at Nvidia.


1 hour ago, System32.exe said:

Eh, the Super refresh just made the 1650, 2060, and 2070 not dog shit anymore, but the generation as a whole is still lackluster and the prices are out of control.

The pricing situation happened with Pascal as well; it's not new or exclusive to Turing.

 

1 hour ago, System32.exe said:

What?? Yes they were; if anything they were even more behind than they are now. AMD's initial response to Pascal was the RX 480, which only competed with... the 1060. Vega didn't come until halfway through Pascal's life cycle and was a total joke.

Then you have forgotten much about the Pascal generation. The 1080 Ti had the longest delay between an architecture launching and that top product actually arriving. Pascal was also the first generation where Nvidia did not use its largest, most performant, most feature-rich GPU die for gamers; we never got that one (GP100). And when was it that Nvidia released the 1080 Ti? Right before RX Vega.

 

Today, AMD is the furthest behind Nvidia it has ever been when it comes to the fastest GPU available; the gap has never been this wide.

 

Nvidia will only do as much as required to compete and lead the market; that is their drive. Their drive is not to end-of-life products earlier than necessary, which hurts profit margin and return on investment. Nor is it their drive to put products on the market that are more costly, to them, than necessary.

 

Nvidia has to keep a long-term view, and there is currently little to push them to focus on the short term. So again, Nvidia will easily be able to make huge gains moving to 7nm, but do not mistake that for getting those gains immediately, especially cheaply. Nvidia has no idea how long they will need to keep products on 7nm, so if they can make a product 20% faster using a smaller die that draws less power, then that is the best option, even for us the customers. Do you want a GPU 30% faster than the 2080 Ti for the same price or more? Do you want 40% more performance for even more money? Or do you want 20% more for a lower price?

 

Nvidia needs to keep the option of making bigger dies in subsequent generations on the same process node, which allows performance gains across generations, something that has always been done. You don't go into a new process generation making the largest dies possible without first making sure you have refined the process and designs to keep costs down. This is what happened in the 28nm generations, this is what happened in the 16nm/12nm generations, and this is what will happen in the 7nm/6nm generations (shrink, reduce, refine, enlarge).


10 minutes ago, rrubberr said:

GCN has never really been about gaming (which is AMD's mistake, no doubt)

The changes made in RDNA really do look like AMD is focusing more on gaming workloads now. It's still very GCN-like, but at least they are addressing some of the old problems where work had to be perfectly sized and allocated or GCN would just end up largely dormant, wasting cycles on undersized jobs.

 

At some point I expect architectures to diverge and AMD will have a gaming optimized one and the compute optimized one, but they will only be able to get the funding to do that through the gaming market, as they have zero market share in the server space and nobody who wants to use them. The only other option is to leech money out of the CPU side of the business, which is totally an option. Nvidia is already doing this with dual architectures, or dual sub-architectures. Pleasing all workloads comes with too many compromises in such a competitive industry.


If Nvidia is smart, they'll make a big deal about power savings, ship a bunch of cards through the mid-range without extra power plugs, and deliver a little bit of a performance increase but mostly power savings. SIs will eat that up. Then, when they decide they want more sales or AMD pushes them, they "Super" it and move up the power curve. If partners aren't allowed to ship VRMs rated over X watts if they want to keep receiving chips, or aren't allowed extra power plugs until the "Super" refresh, then it becomes pretty easy for Nvidia to manage it that way too.

 

They feed their stockholders, not gamers.


I don't care about power saving. I can hook up 4x 8-pin if needed, if that gives me 3x GTX 1080 Ti performance lol. You don't buy a Ferrari with a 5-liter V12 engine to save petrol... In the same way, I don't buy graphics cards to save power. And those who do, well, they can still buy a Renault Twingo or a GTX 1550 Mini Lite...


9 hours ago, pas008 said:

I'm thinking it's the typical thing we always see with these articles: over-exaggeration and mixing both together

Be nice for more

 

Realistically my understanding

Same power

50% plus increase in graphics

Same graphics

50% plus decrease in power

This immediately reminded me of what AMD were saying before 7nm launch. You can have EITHER:

  • Same performance at half the power
  • >25% more performance at the same power

Performance vs power is a curve and you can choose where you operate along it. For gaming parts, it tends to be higher up the curve: you get slightly higher performance for the extra power cost. Lower-power-optimised parts are only a little slower, but use noticeably less power.
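To make that curve concrete, here's a rough sketch using the common approximation that dynamic power scales with frequency times voltage squared, with voltage rising roughly in step with frequency near the top of the curve; the clock and wattage numbers are hypothetical, not any real card's:

```python
# Rough illustration of the performance-vs-power curve described above.
# Hypothetical numbers only; assumes dynamic power ~ f * V^2 with V ~ f near
# the top of the curve, so board power grows roughly with the cube of clock.

BASE_CLOCK_MHZ = 1800.0   # hypothetical reference operating point
BASE_POWER_W = 220.0      # hypothetical board power at that point

def estimated_power(clock_mhz: float) -> float:
    """Estimate board power at a given clock, relative to the reference point."""
    return BASE_POWER_W * (clock_mhz / BASE_CLOCK_MHZ) ** 3

for clock in (1500, 1800, 2000, 2100):
    delta = clock / BASE_CLOCK_MHZ - 1
    print(f"{clock} MHz ({delta:+.0%} clock): ~{estimated_power(clock):.0f} W")
```

Under those assumptions, a small clock sacrifice near the top buys a disproportionate power saving, which is the gap between the gaming-tuned and efficiency-tuned operating points.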

 

8 hours ago, Waffles13 said:

The only argument I can see is if Nvidia is artificially capping your potential to overclock beyond the "safe" bounds for the card, beyond anything that they could sell en masse. And while they do do that to a large extent currently with regards to boost and power limits, I don't see any reason for them to change that just because they theoretically leave more headroom on the table. 

nvidia cards generally already limit you to a certain power limit. Extreme overclockers have to bypass that. Even without this, AIB makers already push the limits of stability with their factory overclocks. Right next to me I have an EVGA 970. It's gaming stable, but compute unstable. I have to un-overclock it if I want to run anything other than gaming on it.
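For anyone who wants to see the enforced ceiling on their own card, a minimal sketch that reads the driver-reported power limits via nvidia-smi (assumes an NVIDIA driver with nvidia-smi on the PATH):

```python
# Print the board power limits the driver enforces: the default, the current
# setting, and the maximum the stock BIOS allows you to raise it to.
# Assumes nvidia-smi is installed and on PATH (it ships with the NVIDIA driver).
import subprocess

fields = "name,power.default_limit,power.limit,power.max_limit"
result = subprocess.run(
    ["nvidia-smi", f"--query-gpu={fields}", "--format=csv"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```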

 

6 hours ago, rrubberr said:

People bash Vega a lot because it's not a gaming part, but when you get down to it GCN has never really been about gaming (which is AMD's mistake, no doubt). For very specific use cases, like my own, Vega makes tons of sense. I use a Vega VII for LuxCoreRender, an OpenCL-accelerated render engine (which I use for virtually all my 3D graphics work), and get 3-4 1080s' worth of performance out of it. This is one example where HBM2 is a huge bonus, and having 16GB of VRAM is a must, which is something Nvidia won't give you for the $750 price tag the card wound up selling for.

The Radeon VII is a kind of odd card for AMD to release, and I think they used it to keep interest going on their consumer GPU side. It is basically a crippled workstation/compute card. I was almost interested, but they still crippled the fp64 a bit more than I had use for. I know others who have bought multiples, since it is still the highest fp64-per-cost card recently available.

 

6 hours ago, rrubberr said:

I haven't used an Nvidia card for a year or so, but last time I did, they were still shipping OpenCL 1.2 in their drivers; again, 2.0 is a must for many applications.

While it's not a significant area for me to look at, I think a lot of Nvidia-optimised code uses CUDA in preference. Also, the talk about broken OpenCL on current Navi (on Windows at least) is concerning.
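If you want to check what OpenCL version your own driver actually reports, a minimal sketch with pyopencl (assuming the package and at least one OpenCL runtime are installed; the exact strings vary by vendor and driver version):

```python
# List every OpenCL platform/device and the OpenCL version each one reports.
# Requires the pyopencl package plus an installed OpenCL runtime.
import pyopencl as cl

for platform in cl.get_platforms():
    print(f"Platform: {platform.name} -- {platform.version}")
    for device in platform.get_devices():
        # e.g. NVIDIA drivers of this era report "OpenCL 1.2 CUDA" here
        print(f"  Device: {device.name} -- {device.version}")
```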

 

6 hours ago, leadeater said:

At some point I expect architectures to diverge and AMD will have a gaming optimized one and the compute optimized

This could be a shift like when fp64 got crippled in consumer cards, starting on the 600 series on green, and the last "fast" fp64 red card was the 280X until the VII came around. But it is more complicated now, as GPU compute is making its way into more software. Which bits can they realistically cut without impacting consumers?

 

3 hours ago, RejZoR said:

I don't care about power saving. I can hook up 4x 8pin if needed, if that gives me 3x GTX 1080Ti performance lol. You don't buy Ferrari with 5 liter V12 engine to save petrol... The same way I don't buy graphic cards to save power. And those who do, well, those can still buy a Renault Twingo or a GTX 1550 Mini Lite...

whynotboth.gif

 

Power efficient cards could have their place at certain parts of the range, especially the lower end. Flagship cards can afford to push the electrons around more generously.

 

While I've not looked at it for gaming, there is a significant power saving in compute use with Turing over Pascal. My 2070 is keeping up with my 1080 Ti, at lower power. That is down to architecture alone, since they're on the same process. Combine a process improvement with possible further architectural improvements and there's room for efficiency to improve further.


52 minutes ago, porina said:

Which bits can they realistically cut without impacting consumers?

Realistically, I think most of the differences will be in the front end: how you allocate work to the GPU and the rules under which you can. The processing elements should still be common to both designs, but maybe in a slightly different ratio. That's the biggest limiter/optimization issue with GCN: fitting work in perfectly with all the requirements to fully utilize the die, otherwise you could have up to 50% of the stream processors in a CU sitting idle.

 

It's a lot easier to achieve peak performance when you remove flexibility, which adds predictability, and with predictability you can optimize and refine. Nvidia got lots of flak for not really having async compute, but if you think about it, if something isn't possible then you don't have to accommodate it, which means no die area or architecture dedicated to making it work. Since something like async compute is so rarely used, it's a burden to carry rather than a positive when it comes to efficiency and optimizing for what actually is used. And that is pretty much the story of GCN: a feature-rich, highly flexible design with excellent performance, at the cost of shouldering a large part of the responsibility for that onto developers.
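A toy illustration of that utilization problem: GCN executes work in 64-wide wavefronts, so any dispatch that doesn't fill them leaves SIMD lanes idle. The numbers below are purely illustrative and ignore occupancy, latency hiding, and scheduling:

```python
# Toy model of wavefront padding on a GCN-style CU (64-wide wavefronts).
# Purely illustrative; ignores occupancy limits, latency hiding, etc.
import math

WAVEFRONT_WIDTH = 64

def lane_utilization(threads: int) -> float:
    """Fraction of SIMD lanes doing useful work for a single dispatch."""
    wavefronts = math.ceil(threads / WAVEFRONT_WIDTH)
    return threads / (wavefronts * WAVEFRONT_WIDTH)

for threads in (64, 96, 33, 200):
    print(f"{threads:>3} threads -> {lane_utilization(threads):.0%} of lanes busy")
```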


7 hours ago, porina said:

nvidia cards generally already limit you to a certain power limit. Extreme overclockers have to bypass that. Even without this, AIB makers already push the limits of stability with their factory overclocks. Right next to me I have an EVGA 970. It's gaming stable, but compute unstable. I have to un-overclock it if I want to run anything other than gaming on it.

I know that; my point was that if Nvidia decides the 3080 Ti is going to be a 200W card for some reason, it's not like they are going to loosen up their lockdown of the card firmware out of some desire to make things more fun for overclockers. I'd much rather they take the extra power budget afforded by 7nm and use it to cram more performance into all product tiers. Then if someone needs an SFF card, they just go down a tier or two and still get the same performance, without arbitrarily capping the high-end, maximum-performance-focused market.


I'm hoping they are going to be released this year; I'm saving up to replace this 1080 Ti.


Yeah... I don't know why they would be cheaper. Nvidia has zero incentive to make their cards cheaper as long as AMD does not offer anything high-end.

 

At least AMD can compete in the midrange; the 5700 XT is a decent performer. But the 3080 Ti will likely still be SUPER expensive.


  • 2 weeks later...

Looks like we can keep on rolling with the Ampere updates/rumors:

 

Quote

[Attached images: alleged RTX 3070/3080 specification slides and GA103/GA104 die diagrams]

 

Alleged specifications of the GeForce RTX 3070 and RTX 3080 have surfaced. Of course, you'll need to keep a ton of disclaimers in mind, and take huge grains of salt. But history has proven over and over again that there is often validity (at least to some degree) to be found in these leaks. So here we go:

 

For starters, the two dies that have appeared carry the codenames GA103 and GA104, standing for the RTX 3080 and RTX 3070 respectively. Perhaps the biggest surprise is the Streaming Multiprocessor (SM) count. The smaller GA104 die has as many as 48 SMs, resulting in 3072 CUDA cores, while the bigger, oddly named GA103 die has as many as 60 SMs, resulting in 3840 CUDA cores in total. These increases in SM count should result in a notable performance boost across the board. Alongside the increase in SM count, there is also a new memory bus width. The smaller GA104 die that should end up in the RTX 3070 uses a 256-bit memory bus allowing for 8/16 GB of GDDR6 memory, while its bigger brother, the GA103, has a 320-bit-wide bus that allows the card to be configured with either 10 or 20 GB of GDDR6 memory.

 

The original source also shared die diagrams for GA103 and GA104. They look professional, but they are not as detailed as the Turing diagrams, hence we place strong doubt on their credibility.

 

Rumors are that at GDC in March we'll see the first announcements of the Ampere architecture (if it'll even be called Ampere).

 

 

 

Source: https://videocardz.com/newz/rumor-first-nvidia-ampere-geforce-rtx-3080-and-rtx-3070-specs-surface

Source: https://www.guru3d.com/news-story/rumor-nvidia-ampere-geforce-rtx-3070-and-rtx-3080-specs-surface.html 

Source: https://www.techpowerup.com/263128/rumor-nvidias-next-generation-geforce-rtx-3080-and-rtx-3070-ampere-graphics-cards-detailed
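As a quick sanity check on the leaked figures, the arithmetic behind the core counts plus a rough bandwidth estimate. This assumes Ampere keeps Turing's 64 CUDA cores per SM and uses a placeholder 14 Gbps GDDR6 data rate; neither is confirmed:

```python
# Back-of-envelope check on the leaked specs. Assumes 64 CUDA cores per SM
# (true for Turing, unconfirmed for Ampere) and a placeholder 14 Gbps GDDR6
# per-pin data rate; this is speculative arithmetic, not confirmed specs.

CORES_PER_SM = 64
GDDR6_GBPS = 14   # per-pin data rate, placeholder

def cuda_cores(sm_count: int) -> int:
    return sm_count * CORES_PER_SM

def bandwidth_gb_s(bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s for a given bus width at GDDR6_GBPS."""
    return bus_width_bits * GDDR6_GBPS / 8

print(f"GA104 (RTX 3070?): {cuda_cores(48)} cores, {bandwidth_gb_s(256):.0f} GB/s on a 256-bit bus")
print(f"GA103 (RTX 3080?): {cuda_cores(60)} cores, {bandwidth_gb_s(320):.0f} GB/s on a 320-bit bus")
```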


Big if true, as with all things related to tech rumors. Again, no matter how much more performance Nvidia pushes out compared to Turing, pricing is still going to matter, especially for the mid- to lower-end cards. I don't care if the RTX 3080 is going to be as fast as the RTX 2080 Ti; if it's the same price, it's still pretty sad.
