
NVIDIA 'Ada Lovelace' GeForce RTX 4090 rumored to have 2520 MHz Boost Clocks and over 2750 MHz Max Clocks - more than 90 TFLOPs single-precision compute

On 7/4/2022 at 8:07 AM, tikker said:

Damn. These rumours make the 4000 series sound rather attractive. Let's hope for some reasonable pricing.

Do they, though? We've been at a point for quite a few years now where clocks are being pushed at the expense of power draw.


5 hours ago, leadeater said:

The Ti models/refreshes were a bit messed up in the RTX 30 series; I suspect things are the way they are to give room for more sensible and meaningful Ti models across the product lineup.

 

This mess, I suspect, will get ironed out in the RTX 40 series.

I don't get what you're trying to show with the partial tables. Given a later post, I might go and look back more generations for more context on how they've moved.

 

5 hours ago, leadeater said:

Also, two notes: I believe the caches are getting huge increases, so they'll have a similar benefit to Infinity Cache

This is the logical conclusion if compute potential goes up but memory bandwidth goes down and you don't want to choke performance. I haven't been following the rumours closely, so I don't recall seeing anything to that effect.
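As a rough sketch of why a big cache can stand in for raw bandwidth: with hit rate h, only (1 - h) of requests reach DRAM, so effective bandwidth scales roughly as raw/(1 - h). The 480 GB/s below is purely an illustrative figure (a 160-bit bus at an assumed 24 Gbps), not a leaked spec:

```python
def effective_bw(raw_gbs, hit_rate):
    # With cache hit rate h, only (1 - h) of traffic reaches DRAM,
    # so sustainable throughput looks like raw / (1 - h).
    return raw_gbs / (1.0 - hit_rate)

# Purely illustrative: a 160-bit bus at 24 Gbps = 480 GB/s raw.
for h in (0.0, 0.3, 0.5):
    print(f"hit rate {h:.0%}: ~{effective_bw(480, h):.0f} GB/s effective")
```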

 

5 hours ago, leadeater said:

and the other note is, if Nvidia really is looking to reduce its TSMC 5nm allocation, then to me that means the RTX 40 series is not as good as rumored. If the RTX 40 series were as good as rumored, Nvidia would not be looking to reduce allocation, because practically everyone would want them, and I doubt a recession would change that much. RTX 30 -> RTX 40 may not be as big a jump as RTX 20 -> RTX 30; a node shrink should mean that it would be, but who knows. It's a very odd move from Nvidia, in my opinion.

Can't say I'd make that conclusion. If the economy tanks, luxury spending will be the first thing that gets cut: make do with what you have for longer, until things look better.

 

Also not sure where I was going when I looked at the following:

Steam Hardware Survey, Feb 2022, relative share of the latest three NV generations:

Pascal 36%

Turing 45% (of which 16-series 26%, 20-series 19%)

Ampere 19%

 

Keep in mind some 20-series models were/are still in production in parallel with the 30 series.

 

As much as people dumped on Turing, that's still a lot of them in use out there, and a lot of people are still holding on to Pascal. Ampere adoption may have been hindered by the shortages. With the supposed mining sell-off and falling prices, it might have gone up since February.

 

If you're wondering why I looked at the Feb results when we should have June's by now: I had manually entered them into a sheet previously, so they were easy to access. I'm not in the mood for more manual data entry, and their web page format meant I couldn't automate an import.

 

 

Also played a bit with a silicon calculator, which shows 88 potential GA102-size dies per wafer, ignoring yield and binning for now. If we assume Nvidia's requested reduction is bigger than AMD's actual reduction and use AMD's figure of 20k wafers, that's still over 1.7M potential GA102-size GPU equivalents. If they make smaller dies, those numbers would of course be much higher. Total market (including AMD) dGPU sales since around the time of the Ampere launch have averaged somewhere over 10M units per quarter.
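For anyone who wants to reproduce the napkin math without the calculator, here's a minimal sketch using the common dies-per-wafer approximation (wafer area over die area, minus an edge-loss term). Exact counts vary with scribe lines and edge exclusion, which is why this lands near, rather than exactly on, the 88:

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300.0):
    # Gross dies ~ wafer area / die area, minus an edge-loss term
    # for partial dies around the rim (a common approximation).
    d, a = wafer_diameter_mm, die_area_mm2
    return int(math.pi * d**2 / (4 * a) - math.pi * d / math.sqrt(2 * a))

ga102_area = 628.4        # mm^2, GA102 die size
wafers = 20_000           # AMD's reported cut, used as a stand-in for Nvidia's

dpw = dies_per_wafer(ga102_area)
print(dpw, dpw * wafers)  # ~85 dies/wafer -> ~1.7M GPU-size candidates
```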

 

We have a published defect rate indication for N5, and using GA102 dimensions we get around 56% yield based on defects alone. That may vary a bit due to binning, and cut-down models will increase the effective yield. Again, smaller GPUs will have higher yield.
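For reference, here's the simplest defect-limited estimate, the Poisson yield model. The 0.09 defects/cm² below is an assumed value chosen to land near the ~56% figure, not an official TSMC number:

```python
import math

def poisson_yield(d0_per_cm2, die_area_cm2):
    # Poisson model: yield = probability a die has zero defects.
    return math.exp(-d0_per_cm2 * die_area_cm2)

die_area = 6.284  # cm^2, GA102-size die
d0 = 0.09         # defects/cm^2 -- assumed, picked to land near ~56%
print(f"{poisson_yield(d0, die_area):.0%}")  # ~57% defect-limited yield
```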


36 minutes ago, porina said:

I don't get what you're trying to show with the partial tables. Given a later post, I might go and look back more generations for more context on how they've moved.

The odd ram sizes and bus widths, and having to move to GDDR6X for the RTX 3070 Ti.

 

I suspect the ram sizes will make more sense along with the memory bus widths, as that'll be how the RTX 40 series increases ram between Ti and non-Ti models.

 

36 minutes ago, porina said:

Can't say I'd make that conclusion. If the economy tanks, luxury spending will be the first thing that gets cut: make do with what you have for longer, until things look better.

It's dipped before without much impact on GPU sales.


8 minutes ago, leadeater said:

The odd ram sizes and bus widths, and having to move to GDDR6X for the RTX 3070 Ti.

 

I suspect the ram sizes will make more sense along with the memory bus widths, as that'll be how the RTX 40 series increases ram between Ti and non-Ti models.

Ok, gotcha.

 

To reformat the values from the tweet for better readability:

Model  GPU    Mem width  VRAM
4090   AD102  384-bit    24 GB
4080   AD103  256-bit    16 GB
4070   AD104  160-bit    10 GB

A possible 4080 Ti could be built with a 320-bit interface, allowing 20 GB max.

Likewise, a 4070 Ti might go with a 192-bit interface for 12 GB.
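To make that mapping explicit, here's a tiny sketch of the usual relationship, assuming 16 Gb (2 GB) GDDR6X chips, one per 32-bit channel, and no clamshell mode:

```python
def max_vram_gb(bus_width_bits, gb_per_chip=2):
    # One GDDR6/6X chip per 32-bit channel; 16 Gb chips = 2 GB each,
    # single-sided (no clamshell).
    return bus_width_bits // 32 * gb_per_chip

for bits in (384, 320, 256, 192, 160):
    print(bits, max_vram_gb(bits))  # 384->24, 320->20, 256->16, 192->12, 160->10
```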

 

Whether they stick to the same tier of GPU will be a question. Is there memory connectivity in reserve, or would it require moving up to the next GPU? Or perhaps specifically designed models will take up that space when the time comes.


On 7/5/2022 at 9:02 AM, Rauten said:

So the alleged 4080, with 10240 CUDA cores and 16GB of VRAM, has a TDP of 420W.

And the alleged 4090, with 16384 CUDA cores and 24GB VRAM, has a TDP of 450W.

Whoot, the 3090 had like 20% more CUDA cores than the 3080, and now all of a sudden it's 60%? That would be huge, and an actual reason to get the 90 model.
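Running the quick numbers (known Ampere core counts, the rumoured Ada ones from the quote above):

```python
ampere = 10496 / 8704  # 3090 vs 3080: ~1.21x, about 20% more cores
ada = 16384 / 10240    # rumoured 4090 vs 4080: 1.60x, 60% more cores
print(f"{ampere:.2f}x {ada:.2f}x")
```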


5 hours ago, AlwaysFSX said:

Do they, though? We've been at a point for quite a few years now where clocks are being pushed at the expense of power draw.

That's inevitable. You can't keep increasing clocks or making more powerful chips without increasing power draw, and a higher power draw doesn't immediately mean less efficient cards. According to this rumour, the 4090 would have 52% more CUDA cores and 30-60% higher clocks. The 450 W TDP (I presume) is only a 29% increase from the 350 W of a 3090 and equal to the 450 W of the 3090 Ti (taking the figures from Nvidia's web page). Similarly, the 4080's 420 W is 30% higher than the 3080's 320 W.

 

IF these numbers are true and performance scales with the clock increases, then the 4000 series is equally efficient at worst and more efficient in better scenarios. At the end of the day I still take this with a boulder of salt until they actually launch, but to me they do seem attractive.
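As a back-of-envelope check on that, here's a naive perf-per-watt comparison using cores × boost clock as the performance proxy. Real workloads won't scale this cleanly, so treat it as the optimistic end of what the rumour implies:

```python
# Naive proxy: perf ~ CUDA cores x boost clock (GHz); efficiency = proxy / TDP.
cards = {
    "RTX 3090":    (10496, 1.70, 350),
    "RTX 3090 Ti": (10752, 1.86, 450),
    "RTX 4090":    (16384, 2.52, 450),  # rumoured cores, boost clock, TDP
}
for name, (cores, ghz, tdp) in cards.items():
    print(f"{name}: {cores * ghz / tdp:.0f} core-GHz/W")
# Prints ~51, ~44, ~92: equal or better efficiency if the proxy holds.
```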

  

On 7/4/2022 at 4:31 PM, WereCat said:

Every time new cards are about to release, all rumours tend to sound too good to be true.

 

60% higher BASE clocks vs the RTX 3090 only sounds impressive because of how low the base clock on the 3090 is. In reality most 3090 cards actually run at 1800 MHz+ anyway, even the ones with bad coolers.

Actually, since the 1000 series, GPU Boost on NVIDIA cards has done such a great job that the base clock is basically a meaningless number for the most part.

 

This will only be impressive if we see the same behaviour on the 4000 series and the cards boost themselves close to 3000 MHz. But given the massive power increase, I believe NVIDIA has started pushing the silicon closer to its limit rather than leaving massive headroom like they did for the last three generations.

It does mention 49% higher boost and 31% higher max clocks as well, but I agree. We'll have to see how far GPU Boost will let these cards actually go when they come out.

