
Nvidia VRAM - Deserved vs Given for the Last 3 Generations

YoungBlade
2 minutes ago, porina said:

I get what you're saying, but it isn't so all or nothing. If you're a "max settings" type person, you're probably wanting something higher than 70 tier regardless. Once you're on a GPU that isn't guaranteeing top tier performance, you're making a tradeoff on image quality and performance. And again, the 3070 is over 2 years old at this point. Only now are some games really pushing the 8GB limit, and they're generally not the best technical games. I'm not playing the "badly optimised" card here, and I agree that at some point top tier gaming will need more than 8GB. Done well, 8GB shouldn't mean a bad gaming experience, especially as it is the biggest chunk in the Steam Hardware Survey at 32% overall. GPUs above 8GB only make up around 20% combined.

My point is not that cards in general all need more VRAM. My point is that a card like the 3070 could be used well in games for way longer if it had more VRAM, because it has performance equal to the 2080 Ti. Anything the 2080 Ti can do, the 3070 could do if it had the same VRAM capacity. But it doesn't.

 

The GTX 1060 cannot max out any modern games at 1080p. Giving it more than 6GB would not help, because it isn't a VRAM problem. The GPU itself is too slow.

 

It's just like with a CPU. An i5-2400 will not run the latest games well today no matter how much RAM it has. You can run it with 8GB and not miss out on much. But a 13400 can utilize even 32GB to give you better performance, because it's fast enough to benefit from it. Limiting it to 8GB would be ridiculous. Yet apparently, people are fine with that for GPUs for some reason?


7 minutes ago, YoungBlade said:

Limiting it to 8GB would be ridiculous. Yet apparently, people are fine with that for GPUs for some reason?

It's because 8GB still works for most cases. We have a handful of cases where high settings might not work in some way on 8GB. For sure, there may be more cases in future, and I'm all for devs putting higher quality settings in games. Even moving up the minimum requirements, like killing quad core CPU support entirely and requiring a minimum of 16GB system RAM, so they can do more with the games to scale up. As pointed out earlier, only 20% of the Steam Hardware Survey has more than 8GB VRAM. Good for them, they can use a slightly higher setting somewhere. The bulk of people are still on 6GB and 8GB, and games will have to work with them for a long time yet.

We may also be in a transition. In a couple of years maybe, DirectStorage might reduce some of the pressure on VRAM if things can be shuffled in and out as needed rather than loaded in one go. I still consider the benchmark for what a gaming system should have to be what translates back to consoles. The difficulty there is they have unified memory and can allocate more of it to GPU than CPU, and that may be where the pain point we're seeing more recently comes from.

From a pure gaming perspective I feel 12GB is plenty for the next couple of years, with 16GB being a good amount for a gaming GPU. Beyond that is likely wasteful outside of niche gaming cases, or otherwise used for non-gaming.

 

This talk of VRAM vs performance kinda reminds me of the pointless CPU vs GPU "bottleneck" discussions that often fire up on this forum. You can pick your settings to swing it wildly either way. I approach it with a "good enough" mentality. You don't need to chase a meaningless balance, as long as each part is good enough for the user's objective.

 

Anyway, bottom line remains 8GB is not unusable. It isn't dead. It will still game. If a user decides they want higher settings, they can make that call and go higher if appropriate.

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


Damn, your phrasing is telling

"Deserved"
"Given"



You know RAM ain't cheap. Like sure, we are not at over 20 dollars a gig like mid-pandemic anymore.



The 10.5GHz/21Gbps memory that is used for most of the line is still $15.20 PER GIGABYTE.

You really think that on a $600 card, you deserve $320 of it to be spent on RAM and RAM alone?

The need for more VRAM has not gone up over the last few generations. In 12 years, 1440p has gone from top end to... still top end, but not 4K.

It only went up this year because of console ports, and even then it's A SINGULAR GAME (perhaps 2 if you count A Plague Tale).

RAM amount is a function of bus width and the size of the chips as well. That "deserved" scale you have just ignores how buses work. Neither company is just pulling RAM amounts out of their ass.

We won't get more RAM in significant amounts until we get denser chips.
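
To put rough numbers on that bus-width point (a minimal illustrative sketch, not anything from Nvidia: GDDR6/6X chips have 32-bit interfaces and come in 1GB and 2GB densities, with clamshell mounting doubling it again as on the 3090; the helper name is made up):

# Each GDDR6/6X chip sits on a 32-bit channel, so an N-bit bus takes N/32 chips.
# Total capacity is then chips x GB-per-chip, for the two available densities.
def vram_options(bus_width_bits):
    chips = bus_width_bits // 32
    return {gb_per_chip: chips * gb_per_chip for gb_per_chip in (1, 2)}

print(vram_options(256))  # 3070/3070 Ti, 256-bit -> {1: 8, 2: 16}, i.e. 8GB or 16GB
print(vram_options(320))  # 3080 10GB, 320-bit -> {1: 10, 2: 20}, i.e. 10GB or 20GB
print(vram_options(192))  # 3060, 192-bit -> {1: 6, 2: 12}, i.e. 6GB or 12GB

Which is why the only realistic capacity bumps for those SKUs are the straight doublings discussed later in the thread.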
 


21 minutes ago, starsmine said:

Damn, your phrasing is telling

"Deserved"
"Given"



You know RAM ain't cheap. Like sure, we are not at over 20 dollars a gig like mid-pandemic anymore.



The 10.5GHz/21Gbps memory that is used for most of the line is still $15.20 PER GIGABYTE.

You really think that on a $600 card, you deserve $320 of it to be spent on RAM and RAM alone?

The need for more VRAM has not gone up over the last few generations. In 12 years, 1440p has gone from top end to... still top end, but not 4K.

It only went up this year because of console ports, and even then it's A SINGULAR GAME (perhaps 2 if you count A Plague Tale).

RAM amount is a function of bus width and the size of the chips as well. That "deserved" scale you have just ignores how buses work. Neither company is just pulling RAM amounts out of their ass.
We won't get more RAM in significant amounts until we get denser chips.
 

Do you know that Nvidia intended to release a 16GB 3070 and 20GB 3080? There are actually some 20GB 3080 cards that made it into the wild, likely sold to miners. Nvidia also released the 12GB 2060.

 

The tech here is not as limited as you claim.

 

These cards could have had more VRAM. Not necessarily the exact amounts I specified, but certainly more than they were given.


5 minutes ago, YoungBlade said:

Do you know that Nvidia intended to release a 16GB 3070 and 20GB 3080? There are actually some 20GB 3080 cards that made it into the wild, likely sold to miners. Nvidia also released the 12GB 2060.

 

The tech here is not as limited as you claim.

 

These cards could have had more VRAM. Not necessarily the exact amounts I specified, but certainly more than they were given.

... I didn't say it was limited.

I said it's a function of bus width and chip density.
Yes, a 16GB 3070 and a 20GB 3080 are possible.

But do you really think Nvidia could have made money on a 3080 where $440 of it was VRAM (or even $300 at pre-pandemic pricing)? Not including the rest of the BOM?

Not to mention, the BOM isn't even the full cost to make the product.
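
For reference, the back-of-the-envelope math behind those figures, using the per-gigabyte prices quoted earlier in the thread (a rough sketch; actual contract pricing isn't public, and the ~$22/GB peak value is just inferred from the $440 figure):

capacity_gb = 20                 # a hypothetical 3080 20GB
print(capacity_gb * 22.00)       # ~$440 of VRAM at peak-pandemic pricing (>$20/GB)
print(capacity_gb * 15.20)       # ~$304 at the $15.20/GB figure quoted above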


3 minutes ago, starsmine said:

... I didn't say it was limited.

I said it's a function of bus width and chip density.
Yes, a 16GB 3070 and a 20GB 3080 are possible.

But do you really think Nvidia could have made money on a 3080 where $440 of it was VRAM (or even $300 at pre-pandemic pricing)? Not including the rest of the BOM?

Not to mention, the BOM isn't even the full cost to make the product.

Evidently, at one point, they did. Because some ended up in the wild, sold to someone.

 

And the RTX 2060 12GB is a real product that was sold generally.

 

I think Nvidia can afford to give more VRAM, yes. I think they are choosing not to, both to increase profit margins and as a form of planned obsolescence.

The 10 series had sufficient VRAM to the point that most of the cards are usable even today with new titles. I think Nvidia isn't happy that their customers are using 7-year-old products, and they don't want that to happen again.


12 hours ago, YoungBlade said:

Evidently, at one point, they did. Because some ended up in the wild, sold to someone.

To miners, and not at MSRP

The whole pricing structure would go up by multiple hundreds if you want your "deserved" count. To handle more chips you need more memory chips, larger boards with more routing, a larger cooler, more VRMs, etc.

I know the 2060 12GB is real. I didn't say it wasn't.

The 10 series had sufficient RAM... and so does the 30 series.


2 minutes ago, starsmine said:

To miners, and not at MSRP

The whole pricing structure would go up by multiple hundreds if you want your "deserved" count. To handle more chips you need more memory chips, larger boards with more routing, a larger cooler, more VRMs, etc.

I know the 2060 12GB is real. I didn't say it wasn't.

The 10 series had sufficient RAM... and so does the 30 series.

If the 30 series cards all have enough VRAM, then why are there instances where the 3070Ti loses to a 3060 that gets twice the FPS?

 

I don't want to hear it with a "but the 3060 has a low framerate, too" argument. The chart shows accurate relative scaling for cards with 11GB+ of VRAM. It should scale for all of them if their VRAM amounts are appropriate for the power of their GPU.

 

[Attached chart: ray tracing benchmark results at 2160p]

 

This is ridiculous. Notice that the 2080Ti, which should lose to the 3070Ti, wins by a threefold margin. And beats the 3080 10GB, too. That's all due to VRAM.

 

The 3080 12GB is where it should be in this chart. Every GPU with at least 11GB is. Those with less are not.

 

And this situation is only going to happen more and more in the coming years. The 3060 is going to end up aging better than the 3080 10GB, all for want of $30 worth of memory and a slightly bigger bus.

 

To me, that's wasteful.

 

AMD gave 16GB at the $580 price point, so I don't want to hear that it can't be done.


Every card is choking on those settings.
The difference between 5 fps and 10fps is unplayable and unplayable. 

Yes, you injected settings that begin to thrash the memory and have it call out to system RAM/storage every frame with an 8GB frame buffer. Congrats. Put 16GB on it and congrats, you are now playing at 12fps... aka unplayable. Aka that workload is not realistic, and having 16GB won't help you. That is something I could display with any synthetic test.

I can force any pair of chips (not just GPUS) to show different results like that if there is a way to artificially force a bottleneck in the faster chip. 

 

10 minutes ago, YoungBlade said:

AMD gave 16GB at the $580 price point, so I don't want to hear that it can't be done.

You really are trying to argue things I did not argue. I keep using soft words because it's engineering, aka compromise, aka it depends.
In every engineering class, when you get a question, the correct answer is "it depends".
Yes, AMD has 16GB cards, but they are also often bandwidth starved because they are using cheaper, slower memory. Their 12GB cards are only the 8GHz chips, using GDDR6 rather than GDDR6X, and they end up leaving a lot of performance on the floor.


Just now, starsmine said:

Every card is choking on those settings.
The difference between 4 fps and 10fps is unplayable and unplayable. 

Yes, you injected settings that begin to thrash the memory and have it call out to system RAM/storage every frame with an 8GB frame buffer. Congrats. Put 16GB on it and congrats, you are now playing at 12fps... aka unplayable. Aka that workload is not realistic. That is something I could display with any synthetic test.

I can force any pair of chips (not just GPUS) to show different results like that if there is a way to artificially force a bottleneck in the faster chip. 

You're not understanding my point. All of the 30 series cards, save the 3060 12GB, the 3050, and the 3090s, could actually use more VRAM if it were available. They are powerful enough to take advantage of the resources.

 

The 3080 12GB is not that much faster, yet it scales fine there with the rest of Nvidia's lineup. This shows that it is taking advantage of that extra VRAM, while the other cards are choking without it.

 

Yes, I picked an extreme case, but it is not a synthetic benchmark. It is a real game. And the 3090 and up can play it if you're comfortable with 30fps - something some gamers are actually fine with.

 

With a few settings tweaked, the 3080 12GB could almost certainly make this work. The 3080 10GB probably could too. But the 3070s are going to have to make big cuts to RT, not because they aren't powerful enough, but because they lack VRAM.

 

That's my point - there shouldn't be real games where a card's performance scaling collapses strictly due to a lack of VRAM.


9 minutes ago, YoungBlade said:

You're not understanding my point. All of the 30 series cards, save the 3060 12GB, the 3050, and the 3090s, could actually use more VRAM if it were available. They are powerful enough to take advantage of the resources.

 

The 3080 12GB is not that much faster, yet it scales fine there with the rest of Nvidia's lineup. This shows that it is taking advantage of that extra VRAM, while the other cards are choking without it.

 

Yes, I picked an extreme case, but it is not a synthetic benchmark. It is a real game. And the 3090 and up can play it if you're comfortable with 30fps - something some gamers are actually fine with.

 

With a few settings tweaked, the 3080 12GB could almost certainly make this work. The 3080 10GB probably could too. But the 3070s are going to have to make big cuts to RT, not because they aren't powerful enough, but because they lack VRAM.

 

That's my point - there shouldn't be real games where a card's performance scaling collapses strictly due to a lack of VRAM.

Go buy a Quadro.

Yes, we know that if you give a card specific workloads "it can use it", but that's workload dependent, or rather it's a function of the workload and how fast the GPU can rip through it. That's why Quadros and FirePros and Wxxxx cards all use slower cores and double or triple the VRAM.

You do not know the difference between synthetic and real. You can turn a real game into a synthetic benchmark by choosing and injecting settings that are never used in the real world.
Synthetic does not mean the results are not real, it gives you a real scale of performance, it just means the results do not match 1 to 1 with the reality of a given specific use case.

A 20GB 3080 would be cost prohibitive, for zero extra performance except in benchmarks like your Hogwarts Legacy one, and even then it would not perform at a playable rate. The average may be in the 20s, but the lows would still be in the teens.
Drop it to 8GHz GDDR6 to make it plausible to sell at a reasonable MSRP and you just lost 1/4 to 1/3 of your gaming performance.

The 3060 12GB, which I have brought up before, was an unfortunate choice. They didn't want to do 6GB, so they went "hey, RAM prices will go down, sure our profit margins are razor thin at launch so no one makes money", but OH SHIT, COVID, RAM doubled in price. Now literally no one makes money on 3060s at MSRP.


2 minutes ago, starsmine said:

Go buy a Quadro.

Yes, we know that if you give a card specific workloads "it can use it", but that's workload dependent, or rather it's a function of the workload and how fast the GPU can rip through it. That's why Quadros and FirePros and Wxxxx cards all use slower cores and double or triple the VRAM.

You do not know the difference between synthetic and real. You can turn a real game into a synthetic benchmark by choosing and injecting settings that are never used in the real world.
Synthetic does not mean the results are not real, it gives you a real scale of performance, it just means the results do not match 1 to 1 with the reality of a given specific use case.

Okay, here's a non-synthetic example of the 3070 Ti failing due to lack of VRAM.

 

This is a real game that a real gamer was demonstrating, using real settings that the card could run at totally playable framerates, and yet it crashes purely because of the limited VRAM.

 

 

What about this is in any way invalid? It's a perfectly realistic use case, something a real person would probably try to do, and something that would work perfectly fine if the 3070Ti had 16GB of VRAM.


6 minutes ago, YoungBlade said:

Okay, here's a non-synthetic example of the 3070 Ti failing due to lack of VRAM.

 

This is a real game that a real gamer was demonstrating, using real settings that the card could run at totally playable framerates, and yet it crashes purely because of the limited VRAM.

 

 

What about this is in any way invalid? It's a perfectly realistic use case, something a real person would probably try to do, and something that would work perfectly fine if the 3070Ti had 16GB of VRAM.

That's a fair point. I even VRAM crashed it with a 3060 with 12GB.

Something I would hope you would glean from when I said

26 minutes ago, starsmine said:

I can force any pair of chips (not just GPUS) to show different results like that if there is a way to artificially force a bottleneck in the faster chip. 

is that you can just do that... with any workload; you just have to feed it 8GB of textures for the level. But also realize that 0.5GB of textures in that game is still high. It doesn't really call the settings above that ultra, but 2GB is ultra, 4GB is ultra+, 6GB is ultra+++ textures, etc. Modders have been doing just that for decades.

I appreciate Capcom giving us the option to do that. All games should, imo, so long as it's not just wasted junk data.


I was surprised to find that there was a 20GB RTX 3080 Ti that AIBs had, but Nvidia canceled it because its performance came too close to the RTX 3090. It was a business decision and I understand that, but man, there are actually 20GB RTX 3080 Ti cards out in the real world. This card would definitely outlast the RTX 4070 Ti 12GB (which I feel is too little for a card of this caliber; 16GB sounds about right, but that would cannibalize sales of the RTX 4080).

 

I think Nvidia learned from this; that's why they deliberately sell cards with insufficient VRAM. That's planned early obsolescence no matter how you look at it. I don't own any Nvidia card outside of a GTX 1080 (perfectly fine, as the card isn't powerful enough to leverage more VRAM even IF it had it), so this VRAM shortfall on the RTX 3070 series doesn't affect me one bit. But I find it weird that instead of being angry, RTX 3070 series owners are defending Nvidia, that suddenly it's okay to run a game with lower texture settings or with RT disabled precisely because of this VRAM shortfall.

 

Again, AMD will likely lead with sufficient VRAM on their newer cards: the RX 7800 XT should have 16GB of VRAM (against the RTX 4070 12GB), and I suspect the RX 7700 XT will have 12GB (against the RTX 4060 series with just 8GB). Can't comment about the RX 7600 XT though.

Main Rig: AMD AM4 R9 5900X (12C/24T) + Tt Water 3.0 ARGB 360 AIO | Gigabyte X570 Aorus Xtreme | 2x 16GB Corsair Vengeance DDR4 3600C16 | XFX MERC 310 RX 7900 XTX | 256GB Sabrent Rocket NVMe M.2 PCIe Gen 3.0 (OS) | 4TB Lexar NM790 NVMe M.2 PCIe4x4 | 2TB TG Cardea Zero Z440 NVMe M.2 PCIe Gen4x4 | 4TB Samsung 860 EVO SATA SSD | 2TB Samsung 860 QVO SATA SSD | 6TB WD Black HDD | CoolerMaster H500M | Corsair HX1000 Platinum | Topre Type Heaven + Seenda Ergonomic W/L Vertical Mouse + 8BitDo Ultimate 2.4G | iFi Micro iDSD Black Label | Philips Fidelio B97 | C49HG90DME 49" 32:9 144Hz Freesync 2 | Omnidesk Pro 2020 48" | 64bit Win11 Pro 23H2

2nd Rig: AMD AM4 R9 3900X + TR PA 120 SE | Gigabyte X570S Aorus Elite AX | 2x 16GB Patriot Viper Elite II DDR4 4000MHz | Sapphire Nitro+ RX 6900 XT | 500GB Crucial P2 Plus NVMe M.2 PCIe Gen 4.0 (OS)2TB Adata Legend 850 NVMe M.2 PCIe Gen4x4 |  2TB Kingston NV2 NVMe M.2 PCIe Gen4x4 | 4TB Leven JS600 SATA SSD | 2TB Seagate HDD | Keychron K2 + Logitech G703 | SOLDAM XR-1 Black Knight | Enermax MAXREVO 1500 | 64bit Win11 Pro 23H2

 

 

 

 

 

 


7 hours ago, YoungBlade said:

Looking at those cards, the only ones in the orange range that I can find would be the GTX 660 2GB and GTX 780 3GB, as they should have had 3GB and 5GB respectively by this metric.

 

Did you ever find situations where the GTX 660 or GTX 780 seemed like it should be able to run something with higher settings, but it failed to do so due to its VRAM capacity?

Goofy metric still, it doesn't consider how VRAM is implemented on GPUs.

 

Yes, the 780 (and 780 Ti) choke hard due to VRAM in any newer titles; 4GB is really the minimum you can get away with at 1080p if you don't want to run the lowest possible texture sizes in stuff like Tomb Raider. But they should have been 6GB cards, not 5GB. And they were: there were 6GB variants of the 780 and 780 Ti made, I think they just didn't sell many (either they just didn't make a lot or nobody saw the extra VRAM as worth it back in 2013).

 

No, the 660 was never held back by VRAM, and the amount was appropriate for when it launched. Obvs more VRAM would have been nice for titles released a full decade after the GPU, but it wasn't needed.

Intel HEDT and Server platform enthusiasts: Intel HEDT Xeon/i7 Megathread 

 

Main PC 

CPU: i9 7980XE @4.5GHz/1.22v/-2 AVX offset 

Cooler: EKWB Supremacy Block - custom loop w/360mm +280mm rads 

Motherboard: EVGA X299 Dark 

RAM:4x8GB HyperX Predator DDR4 @3200Mhz CL16 

GPU: Nvidia FE 2060 Super/Corsair HydroX 2070 FE block 

Storage:  1TB MP34 + 1TB 970 Evo + 500GB Atom30 + 250GB 960 Evo 

Optical Drives: LG WH14NS40 

PSU: EVGA 1600W T2 

Case & Fans: Corsair 750D Airflow - 3x Noctua iPPC NF-F12 + 4x Noctua iPPC NF-A14 PWM 

OS: Windows 11

 

Display: LG 27UK650-W (4K 60Hz IPS panel)

Mouse: EVGA X17

Keyboard: Corsair K55 RGB

 

Mobile/Work Devices: 2020 M1 MacBook Air (work computer) - iPhone 13 Pro Max - Apple Watch S3

 

Other Misc Devices: iPod Video (Gen 5.5E, 128GB SD card swap, running Rockbox), Nintendo Switch


For gaming, 24GB should be fine even for next-gen top end imo.

10GB should be the minimum for the x50 tier now.

| Intel i7-3770@4.2Ghz | Asus Z77-V | Zotac 980 Ti Amp! Omega | DDR3 1800mhz 4GB x4 | 300GB Intel DC S3500 SSD | 512GB Plextor M5 Pro | 2x 1TB WD Blue HDD |
 | Enermax NAXN82+ 650W 80Plus Bronze | Fiio E07K | Grado SR80i | Cooler Master XB HAF EVO | Logitech G27 | Logitech G600 | CM Storm Quickfire TK | DualShock 4 |


7 hours ago, YoungBlade said:

Do you know that Nvidia intended to release a 16GB 3070 and 20GB 3080? There are actually some 20GB 3080 cards that made it into the wild, likely sold to miners. Nvidia also released the 12GB 2060.

Did they really plan to launch such GPUs? There's a world of difference between implementing it as an option and seriously considering taking it to market. The work required isn't significant, in that you use higher density RAM chips and update the firmware accordingly, with the ability to talk to those higher density chips baked into the silicon at the chip design stage much earlier. It is possible a small handful of test cards were made for evaluation.

 

Miners are unlikely to be the target group. We're back to VRAM: before Ethereum went PoS, it did have a capacity requirement for the data file it uses. I'm unable to look up how big it got, but I don't believe it ever required more than 8GB GPUs; 4GB was passed for sure. So for a miner to be as optimal as they can, they just need enough fast RAM. Doubling the capacity is just eating into profits for no reason.



2 hours ago, porina said:

Did they really plan to launch such GPUs? There's a world of difference between implementing it as an option and seriously considering taking it to market. The work required isn't significant, in that you use higher density RAM chips and update the firmware accordingly, with the ability to talk to those higher density chips baked into the silicon at the chip design stage much earlier. It is possible a small handful of test cards were made for evaluation.

 

Miners are unlikely to be the target group. We're back to VRAM: before Ethereum went PoS, it did have a capacity requirement for the data file it uses. I'm unable to look up how big it got, but I don't believe it ever required more than 8GB GPUs; 4GB was passed for sure. So for a miner to be as optimal as they can, they just need enough fast RAM. Doubling the capacity is just eating into profits for no reason.

Yes, they really intended to make those at one point. The 3080 20GB actually made it through production in a limited run. They were likely sold to miners, and then ended up on the second hand market.

 

https://www.tomshardware.com/news/geforce-rtx-3080-20gb-gpus-emerge-for-around-dollar575

 

My thought is that Nvidia originally intended these models to compete with the RX 6800 series, because some people were arguing even back in 2020 that the AMD cards would age better than the 3070 and 3080 due to the higher VRAM capacity.

 

That has definitely proven true with the 3070 - there are examples of the RX 6800 winning in RT benchmarks because the 3070 is choking on its VRAM buffer, but so far the 3080 has barely skated by with 10GB against the 6800XT. I expect it will be a problem by this time next year - that there will be a few games where you need to turn down textures on the 3080 10GB for it to run smoothly at Ultra settings, whereas the 6800XT will be fine.


8 hours ago, Zando_ said:

Goofy metric still, it doesn't consider how VRAM is implemented on GPUs.

 

Yes, the 780 (and 780 Ti) choke hard due to VRAM in any newer titles; 4GB is really the minimum you can get away with at 1080p if you don't want to run the lowest possible texture sizes in stuff like Tomb Raider. But they should have been 6GB cards, not 5GB. And they were: there were 6GB variants of the 780 and 780 Ti made, I think they just didn't sell many (either they just didn't make a lot or nobody saw the extra VRAM as worth it back in 2013).

 

No, the 660 was never held back by VRAM, and the amount was appropriate for when it launched. Obvs more VRAM would have been nice for titles released a full decade after the GPU, but it wasn't needed.

No, the metric does not consider how VRAM is actually implemented. The idea is supposed to be that it's the amount of VRAM the card could use in games so that the GPU becomes the limiting factor.

 

The idea would be that the 780 could perform perfectly well in a game with settings that require around 5GB of VRAM, before needing to turn down settings, based on how powerful the GPU is.

 

Basically, it predicts that with 5GB+, you won't run into a situation where the settings would otherwise work fine, except that you have to turn them down because the card is out of VRAM.

 

The test would be this: Set a game to High or Ultra, but put textures to the lowest setting. Does it run? Then, turn textures back to the same High or Ultra setting. Does it still run? If it fails to run in the second case, but not the first, the card has insufficient VRAM for its performance class.

 

This method is attempting to predict cards that would fail this test.
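
As a sketch, the pass/fail logic of that test boils down to the following (hypothetical code, not from the thread; the two booleans stand for "did the run work acceptably" in each of the two cases described above):

# Hypothetical sketch of the texture-swap test described above.
def insufficient_vram(runs_with_low_textures: bool, runs_with_full_textures: bool) -> bool:
    # Running fine with tiny textures but failing once full-size textures are
    # loaded means the GPU was fast enough and only the VRAM capacity gave out.
    return runs_with_low_textures and not runs_with_full_textures

print(insufficient_vram(True, False))   # True  -> VRAM-limited (the case this metric flags)
print(insufficient_vram(False, False))  # False -> the GPU itself is too slow (the GTX 1060 case)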


9 hours ago, GamerDude said:

I was surprised to find that there was a 20GB RTX 3080 Ti that AIBs had, but Nvidia canceled it because its performance came too close to the RTX 3090

That is unlikely. A 20GB variant would have to be done using a 320-bit bus with ~760GB/s of bandwidth, whereas the 3080 12GB and all cards above it share a 384-bit bus with ~912GB/s+ of bandwidth.

 

A 3080 Ti 20GB would be memory bandwidth limited in performance, so it wouldn't really be that close to an RTX 3090, and it would also be slower than the RTX 3080 Ti 12GB in a non-zero number of games and applications.

 

The problem with a 20GB variant of the 3080 Ti is that the product margin would be terrible and it would lead to fewer sales of the RTX 3090 and other professional GPUs. It's simply bad business for Nvidia, but it's also not like there would be a lot of consumer benefit to such a card either. Just buy the RTX 3090 if you want/need that much VRAM; the actual retail pricing would be basically the same anyway.
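
For reference, those bandwidth figures fall straight out of bus width times per-pin data rate (a quick sketch assuming the 19Gbps GDDR6X used on those SKUs; the 3090 runs slightly faster at 19.5Gbps):

# Peak bandwidth in GB/s = (bus width in bits / 8 bits per byte) * data rate in Gbps.
def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gbs(320, 19))  # 760.0 -> what a 320-bit 20GB card would get
print(bandwidth_gbs(384, 19))  # 912.0 -> 3080 12GB / 3080 Ti at 19Gbps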


1 hour ago, YoungBlade said:

Yes, they really intended to make those at one point. The 3080 20GB actually made it through production in a limited run. They were likely sold to miners, and then ended up on the second hand market.

Ok, from the link it looks like they had a production test run with MSI. I still think it wasn't targeted at miners, but that doesn't rule out miners ending up getting them one way or another. From the screenshot in the article, the vbios version could be from around April or May 2021, which was just before LHR was introduced. So this was somewhat early on in the life cycle of the product. We were already well into the mining cycle at the time. Any GPU was a good GPU for miners, even if the extra VRAM was of no benefit to them.



22 minutes ago, porina said:

Ok, from the link it looks like they had a production test run with MSI. I still think it wasn't targeted at miners, but that doesn't rule out miners ending up getting them one way or another. From the screenshot in the article, the vbios version could be from around April or May 2021, which was just before LHR was introduced. So this was somewhat early on in the life cycle of the product. We were already well into the mining cycle at the time. Any GPU was a good GPU for miners, even if the extra VRAM was of no benefit to them.

I don't think they were made for miners. I think that, as you said, any GPU was valuable to miners at the time, so they happily paid big bucks for them, even without proper driver support.

 

My thought is that they were made for gamers because Nvidia saw that the 3070 and 3080 could benefit from extra VRAM - which they both obviously can - and that giving them double the VRAM was the easiest way to do that from a technical perspective. I also think that they were originally going to be the Ti models, and that Nvidia was going to unlock additional CUDA cores on those models to justify the Ti naming.

 

A 3070 with 16GB would be an excellent card, and there would be little that the RX 6800 would have on it. Especially if Nvidia gave the GPU itself a bit of a boost by unlocking the rest of the die and upping the power limit, like they did with the actual RTX 3070 Ti. Yes, the memory wouldn't have been faster, but the die itself would have been, and it would have twice the VRAM. Even if that halved the performance uplift to just 4%, I think people today would rather have a 3070 Ti with 16GB of GDDR6 non-X that ran 3% slower than the model with the faster memory. As evidenced by the 3070 Ti itself, the 3070 didn't really need faster memory to get more performance, but I think it does need more of it.

 

Whether the 3080 12GB or 3080 20GB would be better is more a matter of debate. The 12GB model opened up the bus to equal the full bus that the 3090 has, and that has given it more performance, but it still didn't make it all the way to the Ti model. I think that giving it 20GB, but the CUDA count of the 3080 Ti would result in a card that performs as well as the 3080 12GB, but has way better longevity thanks to a 20GB buffer.


1 hour ago, YoungBlade said:

Whether the 3080 12GB or 3080 20GB would be better is more a matter of debate.

I have to wonder if Nvidia was planning for a regular sales cycle, not one where shortages dominated. We might never know, but if VRAM was as much of a constraint as GPUs were, or more, then not moving to higher VRAM models at the time in order to get more GPUs out was the business move to make. The 3080 12GB might have taken over what would have been a 3080 20GB in an alternate reality.

 

From the vbios version earlier, the 3080 12GB released much later than when those 20GB samples were made.


