
NVIDIA GeForce 3070/3080/3080 Ti (Ampere): RTX Has No Perf Hit & x80 Ti Card 50% Faster in 4K! (Update 6 ~ Specs / Overview / Details)

41 minutes ago, MysticLTT said:

It's been weird for a while: SLI is going away, yet single cards still struggle at decent monitor resolutions. I have a pile of games waiting for the right GPU, which never comes.

SLI going away does not mean multi-GPU is going away.

People are mixing up SLI/CF and multi-GPU a lot.


5 minutes ago, tankyx said:

SLI going away does not mean multi-GPU is going away.

People are mixing up SLI/CF and multi-GPU a lot.

Nobody seems interested in developing for multi-GPUs, even though DirectX 12, and possibly Vulkan, are supposed to make this easier.


17 minutes ago, Mira Yurizaki said:

Nobody seems interested in developing for multi-GPUs

When the projected sales from explicitly implementing (and supporting) multi-GPU justify the time/money such a business decision consumes, game companies will be more than happy to develop for multi-GPUs :).


22 minutes ago, thorhammerz said:

When the projected sales from explicitly implementing (and supporting) multi-GPU justify the time/money such a business decision consumes, game companies will be more than happy to develop for multi-GPUs :).

The problem with multi-GPU setups is still that they only make sense at the high end, and few people have enough money to burn $1,000+ on video cards.

 

It's a nice-to-have, but not really essential when the vast majority of your expected market has a single-GPU setup.


On 10/30/2019 at 12:22 PM, BiG StroOnZ said:

Q3/Q4 of 2020 seems like an appropriate release window for Ampere.

Why does it feel like there is a shorter gap between Turing and when Nvidia is reportedly going to release Ampere? The time between Pascal and Turing felt like forever, and many people, including myself, couldn't wait for Nvidia to release Turing GPUs.


We were due for a 7nm performance bump from NVIDIA. Considering it's 7nm EUV plus some architectural improvements, I believe this generation should be great. IIRC they had huge GPUs, so without a node shrink I don't think they'd be able to improve much. I wonder about the cost, because 7nm is going to be more expensive, but if the silicon is a lot smaller, a 3070 could even be cheaper to produce than a 2070, for example.


11 hours ago, Thomas001 said:

Why does it feel like there is a shorter gap between Turing and when Nvidia is reportedly going to release Ampere? The time between Pascal and Turing felt like forever, and many people, including myself, couldn't wait for Nvidia to release Turing GPUs.

Because, save for the Titan V, Volta wasn't released to consumers, and nothing other than Big Volta was ever made. And Volta is essentially Turing without the RT cores.

 

6 hours ago, Loote said:

We were due for a 7nm performance bump from NVIDIA. Considering it's 7nm EUV plus some architectural improvements, I believe this generation should be great. IIRC they had huge GPUs, so without a node shrink I don't think they'd be able to improve much. I wonder about the cost, because 7nm is going to be more expensive, but if the silicon is a lot smaller, a 3070 could even be cheaper to produce than a 2070, for example.

For GPU manufacturers, smaller nodes just mean stuffing in more execution units, since the performance of graphics-based workloads can be improved by literally throwing more execution units at the problem. The average die size of GPUs hasn't really changed over the years because of this.
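A rough way to see the "more execution units" point is theoretical FP32 throughput, which scales linearly with shader count. This is just a sketch: the clocks below are reference boost clocks, and real-game scaling is of course less than linear.

```python
# Theoretical FP32 throughput scales almost linearly with execution unit count:
# TFLOPS ~= shader cores * 2 (FMA ops per clock) * boost clock (GHz) / 1000
def fp32_tflops(cores, boost_ghz):
    return cores * 2 * boost_ghz / 1000

print(fp32_tflops(2944, 1.710))  # RTX 2080 at reference boost:    ~10.1 TFLOPS
print(fp32_tflops(4352, 1.545))  # RTX 2080 Ti at reference boost: ~13.4 TFLOPS
```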

 

For example, this is a graph showing the transistor count, process node, and die size over the years with NVIDIA GPUs:

[Graph: transistor count, process node, and die size of NVIDIA GPUs over the years]

 

(You can verify this yourself using data from https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units)
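If anyone wants to sanity-check this without building the full chart, here's a rough Python sketch of the idea; the transistor counts and die sizes are approximate figures from public spec listings (like the page above), so treat the output as ballpark.

```python
# Approximate public figures for NVIDIA flagship dies; illustrative only.
dies = [
    # (GPU, marketing node in nm, transistors in millions, die size in mm^2)
    ("G80 (8800 GTX)",   90,    681, 484),
    ("GT200 (GTX 280)",  65,  1_400, 576),
    ("GF110 (GTX 580)",  40,  3_000, 520),
    ("GK110 (GTX 780)",  28,  7_080, 561),
    ("GM200 (980 Ti)",   28,  8_000, 601),
    ("GP102 (1080 Ti)",  16, 11_800, 471),
    ("TU102 (2080 Ti)",  12, 18_600, 754),
]

print(f"{'GPU':<18}{'node':>6}{'MTr':>8}{'mm^2':>7}{'MTr/mm^2':>10}")
for name, node, mtr, area in dies:
    # Die area stays in the same rough range while transistor count explodes.
    print(f"{name:<18}{node:>5}n{mtr:>8}{area:>7}{mtr/area:>10.1f}")
```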


17 hours ago, Mira Yurizaki said:

(You can verify this yourself using data from https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units)

I redid some of the work, not because it's wrong, but because the inclusion of the node makes it hard to read.
Then I picked the biggest die from each generation.

GPU      Die size (mm²)
8800     484
9800     324
280      576
480      529
580      520
680      294
780      561
980      398
980 Ti   601
1080 Ti  471
2080     545
2080 Ti  754

Titan V  815

The point is, I read an article about this at some point and believed it. The claim (mostly mangled by my poor memory) was that the cost of making a GPU with twice the die size is more than double the price. Now, NVidia could probably have made a 700 mm²+ GPU at any point, but the performance improvement wouldn't have justified the cost. On the other hand, in recent years GPU prices rose enough to make bigger dies profitable. You can see the die size reached 600 mm² only once and the big dies were in the 500-600 mm² bracket, yet the 2080 Ti is much closer to the Titan V's monster of a die than to the nearest GTX. I also split the 980 and 980 Ti because of the amount of time between them.
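For anyone wondering why twice the die would cost more than twice as much, a toy yield model shows the effect; the wafer cost and defect density below are made-up illustrative assumptions, not TSMC's actual numbers.

```python
import math

# Toy model: cost per *good* die vs. die area. The wafer cost and defect
# density are assumed values for illustration only.
WAFER_COST = 8000.0      # dollars per 300 mm wafer (assumption)
DEFECT_DENSITY = 0.0015  # defects per mm^2 (assumption)
WAFER_AREA = math.pi * (300 / 2) ** 2  # mm^2, ignoring edge losses

def cost_per_good_die(die_mm2):
    dies_per_wafer = WAFER_AREA / die_mm2              # crude, ignores edge waste
    yield_rate = math.exp(-DEFECT_DENSITY * die_mm2)   # simple Poisson yield model
    return WAFER_COST / (dies_per_wafer * yield_rate)

for area in (377, 561, 754):
    print(f"{area} mm^2 -> ${cost_per_good_die(area):.0f} per good die")
# Doubling the area halves dies-per-wafer AND lowers yield, so the cost per
# good die more than doubles.
```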

 

Now that I look at this, the 3080 is probably going to be in the 500-600 mm² range and the 3080 Ti as big as possible again; if you can make a card that will sell for $2,000, why not...? I think we should concede that it's not just Nvidia being greedy: a 754 mm² 2080 Ti is also more expensive to produce than the 561 mm² 780 Ti was.


I was going to make a separate thread for this news story, but I think it's fitting to add it here: since the thread is already established, constructive conversation related to both the new topic and Ampere can continue (without much obstruction to the OP). The news story isn't directly related to Ampere, but if correct it could definitely affect Ampere's release dates, pushing them closer to around Q4 2020, maybe even Q1 of 2021 (which changes a lot regarding past Ampere rumors).

 

But the news is that there is hubbub suggesting NVIDIA is readying a GeForce RTX 2080 Ti SUPER:

 

Quote

[Attached: screenshots of the kopite7kimi tweets]

 

NVIDIA could launch a "GeForce RTX 2080 Ti Super" after all, if a tweet from kopite7kimi, an enthusiast with a fairly high hit-rate on NVIDIA rumors, is to be believed. The purported SKU could be faster than the RTX 2080 Ti, and yet be somehow differentiated from the TITAN RTX. For starters, NVIDIA could enable all 4,608 CUDA cores, 576 tensor cores, and 72 RT cores, along with 288 TMUs and 96 ROPs. Compared to the current RTX 2080 Ti, the Super could get faster 16 Gbps GDDR6 memory.

It's possible that NVIDIA won't change the 352-bit memory bus width or 11 GB memory amount, as those would be the only things stopping the card from cannibalizing the TITAN RTX, which has the chip's full 384-bit memory bus width, and 24 GB of memory. Interestingly, at 16 Gbps with a 352-bit memory bus width, the RTX 2080 Ti Super would have 704 GB/s of memory bandwidth, which is higher than the 672 GB/s of the TITAN RTX, with its 14 Gbps memory clock.

 

Source 6

Source 7

Source 8
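The bandwidth figures in the quote are just bus width times data rate; here's a quick sketch of that arithmetic, using nothing beyond the quoted specs.

```python
# Memory bandwidth in GB/s = (bus width in bits / 8) * data rate in Gbps.
def mem_bandwidth_gbs(bus_bits, data_rate_gbps):
    return bus_bits / 8 * data_rate_gbps

print(mem_bandwidth_gbs(352, 16))  # rumored 2080 Ti Super: 704.0 GB/s
print(mem_bandwidth_gbs(384, 14))  # TITAN RTX:             672.0 GB/s
print(mem_bandwidth_gbs(352, 14))  # current 2080 Ti:       616.0 GB/s
```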


On 11/21/2019 at 2:14 AM, Loote said:

Now that I look at this, the 3080 is probably going to be in the 500-600 mm² range and the 3080 Ti as big as possible again; if you can make a card that will sell for $2,000, why not...? I think we should concede that it's not just Nvidia being greedy: a 754 mm² 2080 Ti is also more expensive to produce than the 561 mm² 780 Ti was.

So because NVIDIA made a larger GPU, which by your own statement is more expensive to produce, they're greedy?

 

Also, despite the claim that Turing is TSMC 12nm based, it's really a refined version of TSMC's 16nm process technology (https://www.eenewsanalog.com/news/report-tsmc-relabel-process-12nm), and from what it looks like, "12nm" really refers to the pitch between metal lines rather than the size of the transistors themselves (https://www.tel.com/museum/magazine/material/150227_report04_01/, https://en.wikichip.org/wiki/16_nm_lithography_process#TSMC). This also makes sense given that Turing has around the same transistor density as Pascal.

 

Given this, and that Ampere is expected to be on 7nm, which is a two-"generation" jump in process nodes, I would expect the same kind of transistor density jump as in previous generations. If NVIDIA does make a GPU of that size for an 80-series SKU, then I'd expect it to be faster than the 2080 Ti even if all they did was add execution units.
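To put rough numbers on "around the same transistor density" and on the expected jump: the transistor counts and die sizes below are approximate public figures, and the 2x density gain for 7nm is an assumption on my part, not a spec.

```python
# Ballpark densities from approximate public figures (MTr = millions of transistors).
gp102 = 11_800 / 471   # Pascal flagship: ~25 MTr/mm^2
tu102 = 18_600 / 754   # Turing flagship: ~24.7 MTr/mm^2
print(f"GP102: {gp102:.1f} MTr/mm^2, TU102: {tu102:.1f} MTr/mm^2")

# If the jump to 7nm roughly doubles density (assumption, not a spec),
# a TU102-sized transistor budget would need far less silicon:
assumed_density_gain = 2.0
print(f"TU102's transistor count at 7nm: ~{754 / assumed_density_gain:.0f} mm^2")
```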


On 11/19/2019 at 3:22 PM, MysticLTT said:

It's been weird for a while: SLI is going away, yet single cards still struggle at decent monitor resolutions. I have a pile of games waiting for the right GPU, which never comes.

 

On 11/19/2019 at 4:04 PM, tankyx said:

SLI going away does not mean multi-GPU is going away.

People are mixing up SLI/CF and multi-GPU a lot.

 

On 11/19/2019 at 4:10 PM, Mira Yurizaki said:

Nobody seems interested in developing for multi-GPUs, even though DirectX 12, and possibly Vulkan, are supposed to make this easier.

 

On 11/19/2019 at 4:27 PM, thorhammerz said:

When the projected sales from explicitly implementing (and supporting) multi-GPU justify the time/money such a business decision consumes, game companies will be more than happy to develop for multi-GPUs :).

 

On 11/19/2019 at 4:53 PM, Mira Yurizaki said:

The problem with multi-GPU setups is still that they only make sense at the high end, and few people have enough money to burn $1,000+ on video cards.

 

It's a nice-to-have, but not really essential when the vast majority of your expected market has a single-GPU setup.

Quoted all of these because:

read the release notes for the last 10 to 20 drivers

SLI profiles are still being made and updated,

among many workarounds, lol, but if you want complex, you deal with complex

 

and as for reviews/benchmarks showing poor gains, etc.:

well, these reviews are maxing out certain AA settings that you shouldn't be using with SLI in the first place

certain AA does just kill SLI, which does defeat the purpose, though

 

 


4 hours ago, pas008 said:

among many workarounds, lol, but if you want complex, you deal with complex

The problem is that the end user shouldn't have to do much more work than "plug in another card, enable the thing, and have fun." If it's anything more complicated than that, your userbase pool shrinks because it sucks.

 

Like, I know you can create custom SLI profiles with NVIDIA Inspector, but if I have to do that with every game or every time something updates, it's more work than it's worth.


17 hours ago, BiG StroOnZ said:

I was going to make a separate thread for this news story, but I think it's fitting to add it here: since the thread is already established, constructive conversation related to both the new topic and Ampere can continue (without much obstruction to the OP). The news story isn't directly related to Ampere, but if correct it could definitely affect Ampere's release dates, pushing them closer to around Q4 2020, maybe even Q1 of 2021 (which changes a lot regarding past Ampere rumors).

 

But the news is that there is hubbub suggesting NVIDIA is readying a GeForce RTX 2080 Ti SUPER:

 

 

Source 6

Source 7

Source 8

It's a bit strange that they wouldn't get this out in time for Black Friday, but I suppose even a January/February launch would still give them a big enough gap between a Ti Super and the 3000 series. Even more so if the 3080 Ti is delayed until after the rest of the series launches, which wouldn't be too surprising given the newer node and the possibility of shortages on the bigger dies.


1 hour ago, Mira Yurizaki said:

The problem is that the end user shouldn't have to do much more work than "plug in another card, enable the thing, and have fun." If it's anything more complicated than that, your userbase pool shrinks because it sucks.

 

Like, I know you can create custom SLI profiles with NVIDIA Inspector, but if I have to do that with every game or every time something updates, it's more work than it's worth.

I agree too.

Many variables now, imho.

We somewhat got away from just "plugging in another card" long ago (Crossfire included):

adding many variants of cards (VRAM, clocks, etc.) to what is actually workable, along with the PCIe lanes and chipsets that can do it, lol.

 

It comes with baggage,

and now it's for the <1% that don't mind the baggage.

But luckily it's still there for now, for those messed-up people that like to be put into that category, lol (SLI user for over a decade here).

 

 


7 hours ago, Waffles13 said:

It's a bit strange that they wouldn't get this out in time for Black Friday, but I suppose even a January/February launch would still give them a big enough gap between a Ti Super and the 3000 series. Even more so if the 3080 Ti is delayed until after the rest of the series launches, which wouldn't be too surprising given the newer node and the possibility of shortages on the bigger dies.

 

Yeah, I agree, November would have been great. Maybe December (still Q4 2019), though, just in the nick of time for the Christmas/Hanukkah season. Even January/February (Q1 2020), as you mentioned, still gives plenty of time and a big enough gap, mainly if the 3000-series (Ampere) is a Q1/Q2 2021 release (and the big chips aren't coming out until much later).

 

This whole 2080 Ti Super release might be because they are going back to their "old", "normal" release schedule, unlike Turing (where the 2080 Ti was released immediately). If there are node problems, wafer yield problems, shortages, etc., a 3080 Ti delay would definitely be within reason. Chances are the 2080 Ti and 2080 Ti Super will be about as fast as the 3070/3080, respectively. If that's the case (normally a typical outcome if you compare past GPU releases), I would be wary of buying a 2080 Ti or 2080 Ti Super right now (unless you need the best performance, no compromises). Unless, of course, we are looking at the Ampere release being a great deal later than Q3/Q4 2020, in which case the purchase wouldn't be as unjustified.


On 11/22/2019 at 6:13 PM, Mira Yurizaki said:

So because NVIDIA made a larger GPU, which by your own statement is more expensive to produce, they're greedy?

What I wanted to say is:
we can call them greedy, but the prices got so high because of the die size too; the outcome is the sum of the two, not just greed.


I also bought the GTX 1080 when it came out in 2016, with the 1080 Ti being a big jump up and the 2080 Ti an even bigger one.

But with the jump in processing power came a jump in power usage as well.

 

The 3080 at least will have more of all the crap but less power usage, so hopefully it'll be a more refined version of the 2080.

Looking forward to the release, but the 1080's done well considering it's from 2016.



(1 month later...)

Thought this was a decent update regarding this topic, worthy of a bump for this thread (but not worthy of an entirely new one):

 

Quote

Today, according to the latest report by the Taipei Times, NVIDIA's next generation of graphics cards based on the "Ampere" architecture is rumored to have as much as a 50% performance uplift compared to the previous generation of Turing GPUs, while using half the power. NVIDIA is to launch its new Ampere-based GPUs in the second half of this year.

 

It is not exactly clear how they came to that math, but we'll happily take it as an expectation. They point to 7-nanometer technology, which would lead to a 50 percent increase in graphics performance while halving power consumption. Perhaps they figure that roughly halving the fabrication node can double the transistor count, and that this is where the 50% performance figure comes from. However, performance should increase even further because Ampere will bring a new architecture as well. Combining a new manufacturing node and a new microarchitecture, Ampere is supposed to cut power consumption in half, making for a very efficient GPU solution. We still don't know if the performance will increase mostly for ray-tracing applications, or if NVIDIA will put the focus on general graphics performance.

 

1) http://www.taipeitimes.com/News/biz/archives/2020/01/02/2003728557

2) https://www.techpowerup.com/262592/nvidias-next-generation-ampere-gpus-to-be-50-faster-than-turing-at-half-the-power

3) https://www.guru3d.com/news-story/next-generation-nvidia-ampere-reportedly-to-offer-50-more-perf-at-half-the-power.html

 

As far as my opinion on the matter: there were other outlets claiming similar or higher performance increases, so it isn't out of the realm of possibility. However, at the time many conflicting opinions appeared in response to those original performance claims, mainly skepticism along the lines of "there's no way anyone can know these suggested performance numbers this early." (That was back in early November 2019.)


1 minute ago, BiG StroOnZ said:

Thought this was a decent update regarding this topic, worthy of a bump for this thread (but not worthy of an entirely new one):

 

 

Today, according to the latest report by the Taipei Times, NVIDIA's next generation of graphics cards based on the "Ampere" architecture is rumored to have as much as a 50% performance uplift compared to the previous generation of Turing GPUs, while using half the power. NVIDIA is to launch its new Ampere-based GPUs in the second half of this year.

 

It is not exactly clear how they came to that math, but we'll happily take it as an expectation. They point to 7-nanometer technology, which would lead to a 50 percent increase in graphics performance while halving power consumption. Perhaps they figure that roughly halving the fabrication node can double the transistor count, and that this is where the 50% performance figure comes from. However, performance should increase even further because Ampere will bring a new architecture as well. Combining a new manufacturing node and a new microarchitecture, Ampere is supposed to cut power consumption in half, making for a very efficient GPU solution. We still don't know if the performance will increase mostly for ray-tracing applications, or if NVIDIA will put the focus on general graphics performance.

 

1) http://www.taipeitimes.com/News/biz/archives/2020/01/02/2003728557

2) https://www.techpowerup.com/262592/nvidias-next-generation-ampere-gpus-to-be-50-faster-than-turing-at-half-the-power

3) https://www.guru3d.com/news-story/next-generation-nvidia-ampere-reportedly-to-offer-50-more-perf-at-half-the-power.html

 

Assuming the rumor isn't totally made up, I wouldn't be surprised if they are factoring RT performance into the 50% uplift. Frankly, a <50% boost purely in RT would be pretty disappointing, given that it's only the second iteration of the technology and there should be plenty of low-hanging fruit to pick at to improve RT core efficiency.

 

That said, 50% wouldn't be completely unreasonable for the xx80 Ti card, if you assume a 15-20% architectural improvement alongside significantly more cores thanks to 7nm. It's a bit optimistic, but not outside the realm of possibility.

 

Either way, I highly doubt we're getting both half power and +50% performance at the same time. I'm sure every card in the stack will have the same TDP as its Turing counterpart and they'll just crank the performance up instead. I wouldn't be surprised if the story is just taking the 7nm process claims, where the foundry says it can do one or the other, and misinterpreting them.
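For a sense of scale, here's the perf-per-watt arithmetic behind that skepticism; this is pure arithmetic on the rumored numbers, nothing more.

```python
# Claimed: +50% performance at half the power, relative to Turing.
perf_gain, power_ratio = 1.5, 0.5
print(perf_gain / power_ratio)  # 3.0x perf/W, a huge jump for a single generation

# The more typical reading of foundry marketing: one or the other, not both.
print(1.5 / 1.0)  # 1.5x perf/W if you take the performance at the same power
print(1.0 / 0.5)  # 2.0x perf/W if you take the power saving at the same performance
```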


13 minutes ago, BiG StroOnZ said:

As far as my opinion on the matter, there were other outlets claiming similar or higher performance increases, so it isn't out of the realm of possibility; however, at the time many conflicting opinions appeared in response to those original performance claims (mainly skepticism, as in: "there's no way anyone can know these suggested performance numbers this early").

Going by historical data, with the exception of the 2080 over the 1080, the 80 cards have generally gained 30%-50% on average between generations. The only outlier on the high side was the 1080 over the 980, which had a much bigger performance delta.

 

But saying the figures are confirmed? Yeah that's a stretch.

 

5 minutes ago, Waffles13 said:

Either way, I highly doubt we're getting both half power and +50% performance at the same time.

If we're comparing only the 80 cards (not the 80 Ti cards) to the next-gen 80 card, and going by the TDP rating alone, then something similar happened before between the 780 and 980. And that was on the same process node.
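For reference, the 780-to-980 case by board TDP ratings alone; the ~25% performance uplift is a rough ballpark from period reviews, so treat the perf/W figure as approximate.

```python
# GTX 780 -> GTX 980, going by TDP ratings (250 W vs 165 W) and an assumed
# ~25% performance uplift (rough ballpark, not a measured benchmark).
tdp_780, tdp_980 = 250, 165
perf_uplift = 1.25
print(f"power ratio:   {tdp_980 / tdp_780:.2f}x")                   # ~0.66x
print(f"perf per watt: {perf_uplift / (tdp_980 / tdp_780):.2f}x")   # ~1.9x
```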


1 minute ago, Mira Yurizaki said:

Going by historical data, with the exception of the 2080 over the 1080, the 80 cards have generally gained 30%-50% on average between generations. The only outlier on the high side was the 1080 over the 980, which had a much bigger performance delta.

 

But saying the figures are confirmed? Yeah that's a stretch.

 

If we're comparing only the 80 cards (not the 80 Ti cards) to the next-gen 80 card, and going by the TDP rating alone, then something similar happened before between the 780 and 980. And that was on the same process node.

In my head I was recalling that 980 Ti to 1080 Ti was only a ~30% jump, but I may be mixing that up with the jump from x80 to x80 Ti. In which case, yeah, maybe it's more likely than it seems. Still, I highly doubt we're going to see a high-end card that only draws 100-125 W. If they have the headroom, I fully expect them to use it.


5 minutes ago, Waffles13 said:

Either way, I highly doubt we're getting both half power and +50% performance at the same time. I'm sure every card in the stack will have the same TDP as its Turing counterpart and they'll just crank the performance up instead. I wouldn't be surprised if the story is just taking the 7nm process claims, where the foundry says it can do one or the other, and misinterpreting them.

 

Half the power does sound optimistic, as NVIDIA is going to have to prioritize one or the other (power savings vs. performance increase). Although there will definitely be a very large efficiency gain from the new node, I think a 30% power saving with a 50% performance increase (+/- 5%) is a more realistic expectation.


11 minutes ago, Mira Yurizaki said:

But saying the figures are confirmed? Yeah that's a stretch.

 

Yeah, I wouldn't say the figures are confirmed. However, the website that published these numbers months ago caught a lot of flak at the time, despite saying they weren't confirmed.


I'm thinking it's the typical thing we always see with these articles: over-exaggeration and mixing the two together.

Would be nice if it were more.

 

Realistically, my understanding is:

same power, 50%+ increase in graphics performance

or

same graphics performance, 50%+ decrease in power

