
More bandwidth than your storage capacity! Flagship RTX 50 Series card rumoured to have GDDR7 memory and 384-bit bus

filpo

Looks like the generation after Blackwell, with the 24Gb memory chips, will be when we get a nice free boost in VRAM capacity without being starved for bandwidth.

Too bad Blackwell itself will still be 16GB 😞

Also, not topic-relevant, but notice the power band: we're moving to CAMM, baby, the Dell standard is real.


11 hours ago, LAwLz said:

It's important to remember that memory bandwidth isn't everything, and neither is memory bus width.

 

The HD 2900 XT might have had a 512-bit bus, but it could only achieve 106GB/s of bandwidth.

This card is rumored to be able to push over 1500GB/s, so it's roughly 15 times faster despite having a narrower bus. And that's before counting modern compression techniques that increase the effective memory bandwidth in applications like games. For example, Nvidia upgraded their delta color compression in the RTX 20 series so that a game like GTA 5 needed roughly 20-25% less memory bandwidth.

 

We can't just look at the memory bus width and think that's enough to make judgment calls on. That's a bit like looking at the latest Ryzen 7000 processor and going "huh, still only 64-bit? Guess CPUs haven't progressed in the last 30 years, since my Nintendo 64 also had a 64-bit processor".
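For reference, a rough sketch of the arithmetic behind that comparison. The per-pin data rates are my own assumptions (roughly GDDR3-era speed for the HD 2900 XT, ~32 Gbps for GDDR7), not figures from the rumour:

```python
# Peak bandwidth is bus width times per-pin data rate, so a narrower bus
# on much faster memory still ends up far ahead.

def bandwidth_gbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s."""
    return bus_width_bits / 8 * pin_rate_gbps

# HD 2900 XT: 512-bit bus, ~1.66 Gbps per pin (assumed)
print(bandwidth_gbs(512, 1.66))        # ~106 GB/s

# Rumoured flagship: 384-bit bus, ~32 Gbps GDDR7 per pin (assumed)
print(bandwidth_gbs(384, 32.0))        # 1536 GB/s

# Compression sits on top of the raw figure; with the ~20-25% saving
# mentioned above, effective bandwidth would be roughly:
print(bandwidth_gbs(384, 32.0) * 1.25) # ~1920 GB/s (illustrative only)
```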

I totally agree that I can't just read it literally, see the 384-bit bus on the new GPU, and say my old card already had 512-bit. It's just how the title framed it. If it had said something like "Next Gen RTX 50 rumored to use GDDR7", it would have been fine, but presenting the 384-bit bus as if it were the next big thing felt boring to me and didn't capture my excitement. Now, if it had been something like "Next Gen RTX 50 to use GDDR7 with upgradeable VRAM via M.2", that would definitely grab my attention and get me excited for what the new card has to offer.


35 minutes ago, starsmine said:

Also, not topic-relevant, but notice the power band: we're moving to CAMM, baby, the Dell standard is real.

To avoid dragging this thread off topic, I've started a new thread to reply to this part.

 


17 hours ago, TetraSky said:

And it can be ours, for the extraordinarily low price of $2499.99 for an RTX 5080.

If you were to use the card for around 6 years, then it's about $1 per day, so that's really cheap!  /s


No way the next gen is coming in 2025.

They'll have something else out in 2024 to try and milk people dry before then.... then they'll release another 20+ Super and Ti models, some with more/less memory/performance, so nobody knows what the fk they're actually buying at all, whilst still trying to ram RT down our throats and charging us over and above for that tech whilst nobody is even turning it on 😛


13 minutes ago, SADS said:

No way the next gen is coming in 2025.

They'll have something else out in 2024 to try and milk people dry before then.... then they'll release another 20+ Super and Ti models, some with more/less memory/performance, so nobody knows what the fk they're actually buying at all, whilst still trying to ram RT down our throats and charging us over and above for that tech whilst nobody is even turning it on 😛

Yes... 2024 is Lovelace refreshes, i.e. the Supers.


8 hours ago, SADS said:

They'll have something else out in 2024 to try and milk people dry before then.... then they'll release another 20+ Super and Ti models

yup, check my other thread 


14 hours ago, SADS said:

No way the next gen is coming in 2025.

They'll have something else out in 2024 to try and milk people dry before then.... then they'll release another 20+ Super and Ti models, some with more/less memory/performance, so nobody knows what the fk they're actually buying at all, whilst still trying to ram RT down our throats and charging us over and above for that tech whilst nobody is even turning it on 😛

When they say 2025, that probably means the end of 2025, like Oct or Nov... Happy Xmas! 🎄


5 hours ago, Mark Kaine said:

When they say 2025, that probably means the end of 2025, like Oct or Nov... Happy Xmas! 🎄

If a company gives a time frame, it's usual to expect delivery towards the end of that time frame. Have Nvidia themselves stated 2025? Most of the talk has been rumours. A two-year cycle would put the next gen at the end of 2024, and a small slip into early 2025 wouldn't be significant. I've seen one slide, allegedly from Nvidia, showing Ada-next sitting closer to 2025 than 2026 on the timeline.


  • 2 weeks later...
On 11/17/2023 at 8:39 PM, porina said:

If a company gives a time frame, it's usual to expect delivery towards the end of that time frame. Have Nvidia themselves stated 2025? Most of the talk has been rumours. A two-year cycle would put the next gen at the end of 2024, and a small slip into early 2025 wouldn't be significant. I've seen one slide, allegedly from Nvidia, showing Ada-next sitting closer to 2025 than 2026 on the timeline.

Ah yeah, I mean, a two-year cycle is to be expected. But generally, yeah, faster RAM and a "384-bit" bus aren't all that exciting *if* it's still just in the 16GB (or worse) range again... I know, "but it'll be faster", but capacity will still be a problem, as it traditionally has been for Nvidia.


1 hour ago, 05032-Mendicant-Bias said:

3080 was 384 bit

4080 was 256 bit

 

So.... is Nvidia going to give us 192 bit on the 5080 this time around since it's so fast? With a magic cache that does nothing to address the starved bandwidth in most applications?

1) The bus width is only one part of the equation. Don't get too caught up on those numbers, which are essentially meaningless in the grand scheme of things.

 

2) Do we actually know whether most applications are bandwidth-starved?

 

3) Why are you mad at Nvidia for using a "magic cache" when it's currently AMD that has gone that route?


1 hour ago, LAwLz said:

2) Do we actually know whether most applications are bandwidth-starved?

The large L2 absolutely works. Comparing architectures on the same GDDR memory generation, the newer architecture's dies, whether configured with fewer CUDA cores and a narrower bus or with more CUDA cores and a narrower bus, both outperform the previous generation. If bandwidth starvation were broadly a thing, one or both of those results would have to not be true. The cache only becomes an issue when the working set is larger than the L2 and starts requiring more trips out to memory; at that point the reduced memory bandwidth matters, otherwise it doesn't.

 

A 1-bit bus with 16GB of memory combined with a 16GB L2 cache would perform "better" than a 384-bit bus with 16GB of memory combined with a 1-byte L2 cache; in the first scenario new asset loads would just suck until they finished, but most things are pre-loaded anyway (DirectStorage isn't really a "thing" yet).

 

If you can only have one, then a big L2 cache is "better"; if you want both, that's the 4090. The choice is there 😉
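A toy model of that trade-off (every bandwidth figure and hit rate below is an invented, illustrative number, not a measurement of any real card): while the working set fits in L2, the bus width barely shows up in the average; once the hit rate drops, the DRAM figure dominates.

```python
# Average bandwidth seen by the shaders, as a hit-rate-weighted mix of
# on-die L2 bandwidth and off-chip DRAM bandwidth (all numbers invented).

def effective_bandwidth(hit_rate: float, l2_bw: float, dram_bw: float) -> float:
    return hit_rate * l2_bw + (1 - hit_rate) * dram_bw

L2_BW = 5000        # assumed on-die L2 bandwidth, GB/s
NARROW_BUS = 500    # assumed ~192-bit-class DRAM bandwidth, GB/s
WIDE_BUS = 1000     # assumed ~384-bit-class DRAM bandwidth, GB/s

# Working set mostly fits in L2: narrow vs wide bus barely matters.
print(effective_bandwidth(0.9, L2_BW, NARROW_BUS))  # 4550 GB/s
print(effective_bandwidth(0.9, L2_BW, WIDE_BUS))    # 4600 GB/s

# Working set spills well past L2: the memory bus starts to dominate.
print(effective_bandwidth(0.3, L2_BW, NARROW_BUS))  # 1850 GB/s
print(effective_bandwidth(0.3, L2_BW, WIDE_BUS))    # 2200 GB/s
```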


3 hours ago, 05032-Mendicant-Bias said:

3080 was 384 bit

4080 was 256 bit

 

So.... is Nvidia going to give us 192 bit on the 5080 this time around since it's so fast? With a magic cache that does nothing to address the starved bandwidth in most applications?

Exactly my thought 😄 And cheap 5060 cards with 128 bit bus... *cough* 64 bit...


Haven't we gone through the arguments on bus width already? But the thread got bumped so we go through it all again?


2 hours ago, LAwLz said:

Why are you mad at Nvidia for using a "magic cache" when it's currently AMD that has gone that route?

Nvidia did a similar thing with Ada. 


12 hours ago, 05032-Mendicant-Bias said:

3080 was 384 bit

4080 was 256 bit

 

So.... is Nvidia going to give us 192 bit on the 5080 this time around since it's so fast? With a magic cache that does nothing to address the starved bandwidth in most applications?

As Linus always says: there is no such thing as a bad product, just a bad price.

 

Arguing that Nvidia would pair a die with an inadequate bus width, hampering its performance and wasting expensive die area, is arguing that Nvidia doesn't like money. We both... we ALL know that ain't the case...


Doesn't matter, I'm happy with my 4090. Looking at the cards after that. I have way more than 2TB of storage.


11 hours ago, HenrySalayne said:

Arguing Nvidia would pair a die with an inadequate bus width hampering its performance and wasting expensive die area

My claim is not that Nvidia is losing money.

My claim is that Nvidia is upselling the lower-tier product one tier up, and raising the price of each tier, making more money.

 

At its core, Nvidia's gross margin is in the GPU die: the difference between what TSMC charges and what Nvidia sells the GPU to board partners for. If Nvidia makes the GPU die more expensive while keeping performance parity through a cheaper PCB, with fewer memory dies and a smaller bus, Nvidia captures a bigger share of the price of the card.

 

We consumers could have gotten a 4080 with a 384-bit bus, likely in 12GB and 24GB memory configurations. It would have been faster. Nvidia correctly estimated they could upsell what would otherwise have been a 4070 with a 256-bit bus and 16GB of VRAM as a 4080.


10 hours ago, 05032-Mendicant-Bias said:

At its core, Nvidia's gross margin is in the GPU die: the difference between what TSMC charges and what Nvidia sells the GPU to board partners for. If Nvidia makes the GPU die more expensive while keeping performance parity through a cheaper PCB, with fewer memory dies and a smaller bus, Nvidia captures a bigger share of the price of the card.

That's not actually how it works, or how Nvidia does it either. Nvidia supplies the GPU package and the memory together; anything beyond that is a cost for the board maker/graphics card vendor. Even the Founders cards are manufactured under contract, so the cost reduction of one or two memory packages isn't really a factor for Nvidia directly either.

 

There are only two costs that really matter, since they are the lion's share: TSMC's per-wafer cost and Nvidia's R&D cost. Everything else is small fry.

 

But the main problem with what you're thinking is the memory, which Nvidia is the supplier of. This is why you don't see mid-generation variants of cut-down cards using bigger dies coming out with more memory or wider buses: Nvidia simply does not allow it.

 


 

If we were talking about the former RTX 4080 12GB variant then I would partially agree, since it would have used a smaller die with a narrower bus than the RTX 4080 16GB while still being sold as an RTX 4080, at a price matching that model name.

 

Also, Nvidia is not going to risk its competitiveness against AMD by making nonsensical memory bus configurations. Why would Nvidia want to be stuck with a die that's more costly to manufacture and be unable to make price adjustments because of it? There is only so low Nvidia can, and would, actually sell a GPU package for.


I agree that the big non-recurring cost for Nvidia is R&D. Recurring costs are still significant, though. I recall Nvidia's consumer GPUs being in the ballpark of a 60% profit margin, and with GH100 it could be as high as 100% or maybe even more now.

 

I always assumed Nvidia doesn't make a margin on the memory, even with GDDR6X. Nvidia bundles GPU and memory just because that's the memory they validated the design with, and resells memory from other suppliers under an ironclad deal. (I don't actually know whether Nvidia makes a margin on that.)

 

1 hour ago, leadeater said:

Also, Nvidia is not going to risk its competitiveness against AMD by making nonsensical memory bus configurations. Why would Nvidia want to be stuck with a die that's more costly to manufacture and be unable to make price adjustments because of it? There is only so low Nvidia can, and would, actually sell a GPU package for.

Because of money.

 

Nvidia got fond of selling absurdly overpriced GPUs during the pandemic and doesn't want to reset consumer expectations. Nvidia is not competitive with AMD in price to performance, and Nvidia doesn't care; it cares more about keeping margins high than keeping market share.

 

AMD competes with Nvidia on price in the consumer GPU segment, but on ML accelerators it's not even close. It makes business sense, because Nvidia can use the same wafers to manufacture GH100 instead of AD102 through AD106, at an incredible premium. Nvidia knows it can lose consumer GPU shipments, and the AI accelerators paid for by VC money will not only make up for it but push profitability way up.

 

Nvidia doesn't care that I built three computers for my friends, all with AMD parts. Nvidia cares that Elon Musk was FOMOed into buying AI accelerators and still didn't have enough money to secure supply ahead of all the other VC-funded startups.


From a gaming perspective, in the long term GPUs will be more sensitive to memory than to their own design, UNLESS they can get the whole "AI" frame generation and upscaling thing down.

If AMD wanted to brute-force their way to raster performance wins, they could just make a somewhat bigger die and throw HBM at the problem. Not cost-effective, but it checks a lot of boxes. DLSS/FSR makes that less of a viable strategy, though.


3 hours ago, 05032-Mendicant-Bias said:

Nvidia is not competitive with AMD in price to performance, and Nvidia doesn't care; it cares more about keeping margins high than keeping market share.

Perf/price is only one metric to judge a GPU on. AMD only really has an advantage in raster, and even there they're adjusting to be just slightly better than Nvidia, and no more. The decision factors are much more varied: RT, AI, encoding, power efficiency, software support, VRAM, and probably others if you look harder. Not everyone will weigh each part equally, and it's impossible to put this on a neat chart.

 

Also on market share: (Steam Hardware Survey)


If anything, I think when AMD gets around to adding ROCm support to more consumer-tier GPUs, that will probably help their sales more than their gaming efforts. Edit: I may be out of date on this; I just had another look and support does seem to cover most current/previous-gen cards on Windows now.
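As a toy illustration of why this doesn't fit on one neat chart (all scores and weights below are invented): two buyers weighting the same spec sheet differently can rank the same two cards in opposite orders.

```python
# Invented 0-10 scores for two hypothetical cards across a few decision factors.
cards = {
    "Card A": {"raster": 9, "rt": 6, "encoding": 6, "efficiency": 7, "vram": 9},
    "Card B": {"raster": 8, "rt": 9, "encoding": 9, "efficiency": 8, "vram": 6},
}

# Two buyers with different (also invented) priorities; weights sum to 1.
buyers = {
    "raster-first gamer":  {"raster": 0.6, "rt": 0.1, "encoding": 0.0, "efficiency": 0.1, "vram": 0.2},
    "RT + streaming user": {"raster": 0.2, "rt": 0.4, "encoding": 0.3, "efficiency": 0.1, "vram": 0.0},
}

for buyer, weights in buyers.items():
    # Pick the card with the highest weighted score for this buyer.
    best = max(cards, key=lambda c: sum(weights[k] * cards[c][k] for k in weights))
    print(f"{buyer} prefers {best}")

# raster-first gamer prefers Card A; RT + streaming user prefers Card B.
```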


6 minutes ago, porina said:

Perf/price is only one metric to judge a GPU on. AMD only really has an advantage in raster, and even there they're adjusting to be just slightly better than Nvidia, and no more. The decision factors are much more varied: RT, AI, encoding, power efficiency, software support, VRAM, and probably others if you look harder. Not everyone will weigh each part equally, and it's impossible to put this on a neat chart.

Also on market share: (Steam Hardware Survey)

If anything, I think when AMD gets around to adding ROCm support to more consumer-tier GPUs, that will probably help their sales more than their gaming efforts. Edit: I may be out of date on this; I just had another look and support does seem to cover most current/previous-gen cards on Windows now.

The latest Steam hardware survey has some obviously false data. 10% of PCs running Steam are not suddenly running 3060s, a gain of 4% from the previous survey. I'm not sure what happened, but it makes me distrust the rest of the data.
