
NVIDIA Project Beyond GTC Keynote with CEO Jensen Huang: RTX 4090 + RTX 4080 Revealed

5 minutes ago, eGetin said:

If I understood correctly, DLSS 3 takes it one step further from there and basically creates a new frame out of thin air based on predicted changes in pixels. So in this case you are able to see a greater value in the FPS counter...

Correct, via interpolation.

 

6 minutes ago, eGetin said:

but it doesn't exactly feel like it.

If you're VSync-ed, you probably wouldn't notice anything other than a buttery-smooth FPS experience. DLSS 3 would step in to interpolate even if the CPU is too busy to send it updated commands. So if there's input lag, it would be due to the CPU, not the GPU.
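To make the interpolation idea concrete, here's a toy sketch of the timing. This is not Nvidia's actual pipeline (frame generation uses motion vectors and the optical flow hardware); it's just the general shape of frame interpolation: a generated frame gets slotted between each pair of rendered frames, so the FPS counter goes up even though the game simulation isn't running any faster.

```python
# Toy illustration of frame interpolation (not DLSS 3 itself): for every two
# rendered frames, an extra frame is synthesized halfway between them in time.
# The displayed frame rate roughly doubles, but each generated frame can only
# exist once the *next* rendered frame is available.

def interpolate_presentation(rendered_times_ms):
    """Return (time_ms, kind) pairs for rendered + generated frames."""
    shown = []
    for a, b in zip(rendered_times_ms, rendered_times_ms[1:]):
        shown.append((a, "rendered"))
        shown.append(((a + b) / 2, "generated"))  # midpoint frame
    shown.append((rendered_times_ms[-1], "rendered"))
    return shown

# Game renders at ~50 fps (20 ms apart); the display sees ~100 fps worth of frames.
rendered = [0, 20, 40, 60, 80]
for t, kind in interpolate_presentation(rendered):
    print(f"{t:5.1f} ms  {kind}")
```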


7 minutes ago, yolosnail said:

 

My point was that when they're using the same die you can pretty much calculate the FPS you'd get by increasing/decreasing the CUDA cores, but if they're different dies you can't.

 

On average, GA102 dies need 75 CUDA cores for 1 fps, so if I try to calculate what a 3070 would get from that, I come out with 76 fps, but in reality the 3070 gets 106 fps because GA104 dies only need 55 CUDA cores for the same 1 fps.

 

The reason for that difference is a moot point IMO since I was basing it on real world data.

No, I'm telling you that's wrong, and I gave you the REASON why it is wrong by explaining what you are seeing, and I gave you real-world data to back that up with the 2060 KO. It's not moot; it's the explanation for why the ratio happens, and why it isn't because of them being different dies.

GA102 75:1
GA104 55:1

is misleading. CUT DOWN the 102's core count to that of a 104 and they perform the same; the ratio will be similar.

Heck, they did exactly that with the 104 being cut down to match the 106 for a 3060 variant, to use up more of the lower-binned 104s.
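To put rough numbers on why a single cores-per-fps ratio doesn't transfer between SKUs, here's a quick back-of-the-envelope in Python using the ratios quoted above. The 5,888 CUDA core count for the RTX 3070 is the published spec, not something stated in this thread, and the results only roughly reproduce the 76/106 fps figures from the quote; the point is just that the same card looks very different depending on which die's ratio you apply.

```python
# Back-of-the-envelope: estimate fps by dividing a card's CUDA core count by a
# "cores needed per 1 fps" ratio. The ratios are the ones quoted in the thread;
# the RTX 3070's 5,888 CUDA cores is the published spec (an assumption here,
# not a figure from the thread).

CORES_PER_FPS = {
    "GA102": 75,  # ratio claimed for the big-die cards (3080/3090 class)
    "GA104": 55,  # ratio claimed for the 3070 class
}

def estimate_fps(cuda_cores: int, die: str) -> float:
    return cuda_cores / CORES_PER_FPS[die]

rtx_3070_cores = 5888
print(f"3070 estimated with the GA102 ratio: {estimate_fps(rtx_3070_cores, 'GA102'):.0f} fps")
print(f"3070 estimated with the GA104 ratio: {estimate_fps(rtx_3070_cores, 'GA104'):.0f} fps")
# ~79 fps vs ~107 fps: one fixed cores-per-fps number can't describe SKUs with
# very different core counts, which is the per-core scaling effect being argued
# about here.
```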
 


1 minute ago, StDragon said:

If you're VSync-ed, you probably wouldn't notice anything other than a buttery-smooth FPS experience. DLSS 3 would step in to interpolate even if the CPU is too busy to send it updated commands. So if there's input lag, it would be due to the CPU, not the GPU.

Ah yes, so you mean that DLSS 3 would be able to iron out fluctuating FPS? So if the target is for example 120 fps and for some reason without DLSS you would see FPS dropping to like 100 fps every now and then, DLSS 3 could then step in and bump FPS back up to 120 with its trickery? That actually sounds like a valid use case, and I could see it being a nice-to-have feature for those that would own a 40-series GPU. I wouldn't value it too highly though, thanks to VRR smoothing the image anyway when FPS is jumping around a bit. And it's also not how Nvidia themselves used it to market the new graphics cards 😄


So other than the typical "NvIdiots will buy anything even though AMD is better!" and people bitching about the names and prices (welcome to the real world, where we've got a recession), what do people think?

 

I think the next generation sounds great. 

I am really excited for their new NVENC. Performance in that is going to be insane, and I am looking forward to seeing AV1. Intel's encoder isn't getting that great of results, so my guess is that they are missing some crucial features.

I really like that they are working with developers of third-party programs to implement new functions and optimize for older cards as well.

 

The reordering function sounds like a game changer (for GPU designers) if it works as advertised. Probably something all GPUs will have in the future. 

 

Efficiency sounds good, but I am wondering if those measurements were in RT tasks or rasterization tasks. I am sure that TSMC's node is far more efficient than Samsung's 8nm node, but since the clocks seem rather high I am wondering if they might be running them quite far from the efficiency peak.
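To illustrate why pushing clocks works against efficiency, here's a rough back-of-the-envelope. It assumes the classic dynamic-power approximation (power roughly proportional to V² times frequency) and that voltage has to rise with clock; the voltage/frequency points below are made up for illustration, not real Ada figures.

```python
# Rough illustration of why clocking past the efficiency peak hurts perf/W.
# Assumes dynamic power ~ V^2 * f and that voltage must scale up with frequency.
# The V/F points are hypothetical placeholders, not actual Ada silicon data.

points = [
    # (clock GHz, voltage V)
    (1.8, 0.80),
    (2.2, 0.90),
    (2.6, 1.00),
    (3.0, 1.10),
]

base_clock, base_volt = points[0]
base_power = base_volt**2 * base_clock

for clock, volt in points:
    rel_perf = clock / base_clock                # performance roughly tracks clock
    rel_power = (volt**2 * clock) / base_power   # P ~ V^2 * f
    print(f"{clock:.1f} GHz: {rel_perf:.2f}x perf, "
          f"{rel_power:.2f}x power, {rel_perf / rel_power:.2f}x perf/W")
```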

 

Performance seems good as well, but we should wait for third-party reviews. Last time Nvidia advertised double rasterization performance it was in cherry-picked benchmarks, and the reality was more along the lines of 70% higher performance.

 

The other stuff they showed like the AI remastering seems fun but I wouldn't be surprised if it ends up being a gimmick that only works well on a handful of games. 

DLSS 3.0 sounds fantastic, but we'll have to see how well it works in real life once it's released. 


13 minutes ago, LAwLz said:

So other than the typical "NvIdiots will buy anything even though AMD is better!" and people bitching about the names and prices (welcome to the real world, where we've got a recession), what do people think?

 

I think the next generation sounds great. 

I am really excited for their new NVENC. Performance in that is going to be insane, and I am looking forward to seeing AV1. Intel's encoder isn't getting that great of results, so my guess is that they are missing some crucial features.

I really like that they are working with developers of third-party programs to implement new functions and optimize for older cards as well.

 

The reordering function sounds like a game changer (for GPU designers) if it works as advertised. Probably something all GPUs will have in the future. 

 

Efficiency sounds good, but I am wondering if those measurements were in RT tasks or rasterization tasks. I am sure that TSMC's node is far more efficient than Samsung's 8nm node, but since the clocks seem rather high I am wondering if they might be running them quite far from the efficiency peak.

 

Performance seems good as well, but we should wait for third-party reviews. Last time Nvidia advertised double rasterization performance it was in cherry-picked benchmarks, and the reality was more along the lines of 70% higher performance.

 

The other stuff they showed like the AI remastering seems fun but I wouldn't be surprised if it ends up being a gimmick that only works well on a handful of games. 

DLSS 3.0 sounds fantastic, but we'll have to see how well it works in real life once it's released. 

I would be happy if the 3DMark leaks were real and it was just around 80% faster. That would still be awesome.

And I did see it has double encoders now, that is legit awesome. Says it can do 8K 60fps recording in real time. Very impressive.


What's a good real ATX 3.0 PSU that's at least 1300-1600 watts for the 4090?

 

So far I've only been able to find this one.

https://www.thermaltakeusa.com/toughpower-gf3-1650w-gold-tt-premium-edition.html?___store=us


3 hours ago, xAcid9 said:

AMD will price RDNA3 according to 4000 series performance imo; they stopped caring about market share a long time ago, because you guys will buy Nvidia no matter how high the price is anyway.

At first glance RDNA2 might seem roughly aligned with Ampere. AMD aimed for around parity in raster performance in their models, but if you look around the edges they were worse in RT and had a worse video encoder. Probably more, but those two were the most noticeable to me. They're getting more feature complete, but now they need to significantly improve the performance of those wider features to match the competition. For me to consider buying any RDNA3 card, at a given street price, AMD would have to offer comparable or better performance in both raster and RT games (native rendering), and have a video encoder that works well and is widely supported in software. I can't see them doing that.

 

Also, for the RDNA2 generation it didn't feel like GPU production was their priority. They were capacity-constrained and it felt like the much more profitable enterprise products were their focus. So they made enough GPUs to have a presence and had no reason to try for share, which generally requires offering significantly more value than the competition, or the competition making a massive blunder. Despite the posts in this thread, I don't see anything we know so far that counts as that.

 

 


13 hours ago, yolosnail said:

My point was that when they're using the same die you can pretty much calculate the FPS you'd get by increasing/decreasing the CUDA cores, but if they're different dies you can't.

As @starsmine explained, the die itself doesn't have anything to do with it. The two main factors are thread scaling inefficiencies and power scaling per SM/CUDA core.

 

A quite simple demonstration of the flaw in your reasoning is the RTX 3060 (GA104 variant) vs the RTX 3070 Ti. There is a ~55% performance difference using the same die, which translates to quite a big difference in performance per SM/CUDA core. Yet the performance is identical for an RTX 3060 (GA106) vs an RTX 3060 (GA104), so if you used a ratio like you said, you'd have a mathematical error versus reality.

 

It really is as simple as fewer cores = more efficient in terms of achieved performance per core. Nvidia uses this along with power limits to create product segmentation that makes logical sense in terms of performance difference and cost difference. Power limits are a major tool Nvidia uses to create mid-lifecycle Ti variants of products on the same GPU dies, often mistaken for node or product maturity, or for getting enough binned stock to create them. Every GPU die in the main consumer product line could have been produced and offered on day 1 if Nvidia so chose.


Hmmm... To wait this generation out and stick with my 3080 laptop or build an SFX desktop with this upcoming gen? Tis the question. 

 

Anyway, that 12GB 4080 should be a 4070. It is just insane to call it a 4080 when it's a completely different die from the 16GB model, with fewer cores.

 

I wonder how AMD is going to go with RDNA 3 now; if they are going chiplet like with Zen, they could undercut Nvidia on efficiency and cost.


6 hours ago, HenrySalayne said:

AD104: 4080 12G
AD103: 4080 16G

Wait, they are using different chips between the "same models"? People will feel like that is a scam if they are not clear about how it is marketed.

Such a sh*tty move, and not unexpected. I think there were some laws not too long ago focused on parts of the chip market, and I hope some new ones come.


3 hours ago, porina said:

At first glance RDNA2 might seem roughly aligned with Ampere. AMD aimed for around parity in raster performance in their models, but if you look around the edges they were worse in RT and had a worse video encoder. Probably more, but those two were the most noticeable to me. They're getting more feature complete, but now they need to significantly improve the performance of those wider features to match the competition. For me to consider buying any RDNA3 card, at a given street price, AMD would have to offer comparable or better performance in both raster and RT games (native rendering), and have a video encoder that works well and is widely supported in software. I can't see them doing that.

 

Also, for the RDNA2 generation it didn't feel like GPU production was their priority. They were capacity-constrained and it felt like the much more profitable enterprise products were their focus. So they made enough GPUs to have a presence and had no reason to try for share, which generally requires offering significantly more value than the competition, or the competition making a massive blunder. Despite the posts in this thread, I don't see anything we know so far that counts as that.

 

 

Luckily for me, none of those edge cases are relevant in my case, so I can happily consider RDNA2 a good competitor to Ampere. If AMD can get a better price-to-performance ratio in rasterization for RDNA3, I'm sold.


20 hours ago, starsmine said:

As pointed out earlier, the die is 2x the cost, with more power consumption and everything that goes along with that.

If the newer process is so much more expensive that it would be cheaper to make a bigger die on an old one - that's Nvidia's problem, not mine, and they shouldn't (and likely wouldn't) have launched these cards.


5 hours ago, eGetin said:

Ah yes, so you mean that DLSS 3 would be able to iron out fluctuating FPS?

Depends on the situation and what makes the FPS fluctuate; otherwise it doesn't matter, or it's just less noticeable fluctuation in some cases.

5 hours ago, eGetin said:

So if the target is for example 120 fps and for some reason without DLSS you would see FPS dropping to like 100 fps every now and then, DLSS 3 could then step in and bump FPS back up to 120 with its trickery? That actually sounds like a valid use case, and I could see it being a nice-to-have feature for those that would own a 40-series GPU. I wouldn't value it too highly though, thanks to VRR smoothing the image anyway when FPS is jumping around a bit. And it's also not how Nvidia themselves used it to market the new graphics cards

I thought VRR was more of a monitor thing, i.e. the syncing between the GPU spitting out frames and what frames the monitor can take and show to you.

Like G-Sync, where the module can buffer frames and use the ones it needs if the GPU is too fast.

 

DLSS 3 is like every other such feature (FSR, XeSS and previous DLSS versions): it will increase your FPS, but quality can be reduced, while in some areas it may increase quality or reduce accuracy (if doing full reconstruction, depending on how it works).


1 hour ago, Ydfhlx said:

If the newer process is so much more expensive that it would be cheaper to make a bigger die on an old one - that's Nvidia's problem, not mine, and they shouldn't (and likely wouldn't) have launched these cards.

I didn't say it would be cheaper to make a bigger die on an old node. On TSMC N7 the chip would have been 1000-1100 mm² and would exceed 600W, assuming it could even scale the frequency, making for a more expensive card due to yields dropping off a cliff when going that large.
On Samsung 8N the chip would have been in the ballpark of 1500-1700 mm², which, damn.
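To put the "yields dropping off a cliff" point in rough numbers: with a simple Poisson defect model (yield of defect-free dies ≈ e^(-D·A)), yield falls exponentially with die area, so a 1000+ mm² die at the same defect density is dramatically worse than a ~600 mm² one. The defect density below is a made-up placeholder, not a real TSMC figure.

```python
import math

# Simple Poisson yield model: fraction of defect-free dies = exp(-D * A),
# where D is defect density (defects per mm^2) and A is die area (mm^2).
# D is purely illustrative, not an actual foundry number.

DEFECT_DENSITY = 0.001  # defects per mm^2 (placeholder)

def poisson_yield(area_mm2: float, d: float = DEFECT_DENSITY) -> float:
    return math.exp(-d * area_mm2)

for area in (300, 600, 1100, 1600):
    print(f"{area:>5} mm^2 die: ~{poisson_yield(area) * 100:.0f}% defect-free")
```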

Yes, I agree that Nvidia probably should not have been designing a 605 mm² die for N4 consumer chips, especially as they knew the costs over a year ago when they locked in the designs. But they would have been making AD103/AD102/AD100 for professional-level cards anyway. So is there that much harm in letting consumers purchase the AD103 for a flagship?

Consumers are not entitled to owning flagship-class cards. There will still be AD104/AD106/AD108 cards down the stack that will improve price to performance next year. I just hope they don't make the same mistakes as with the 3060, which was literally impossible to manufacture at cost as soon as RAM prices went up a whole percent, and which just left AIBs holding the bag. The 3060 never needed 12GB of RAM. Or the 3050, where they set an MSRP under BOM at launch so no AIBs ever made it.

So long as Nvidia avoids those situations next year, this gen will be just fine.


8 hours ago, Spotty said:

Australian prices are out. 

RTX 4080 12GB: $1659 AUD ($1100 USD)

RTX 4080 16GB: $2219 AUD ($1500 USD)

RTX 4090: $2959 AUD ($2000 USD)

Fuck me....

 

That's more than I have ever spent on an entire PC.


12 minutes ago, Arika S said:

Fuck me....

 

That's more than I have ever spent on an entire PC.

20% inflation, 100% price increase 🤣 I was planning to spend quite a bit on my upcoming upgrade, but at this point I think I've swung to the other side and am going to build the best-value build I can think of. If Zen 4 comes out with the 7600X at 370 euros like the current listings show, I might just buy a 12400F with DDR4, the entire platform for the same price... My 1060 will have to last for a bit longer, it seems.

 


22 hours ago, BiG StroOnZ said:

 


UP TO 4X PERFORMANCE

 

if you use this specific hardware feature that kills your performance anyway


30 minutes ago, Sauron said:

UP TO 4X PERFORMANCE

 

if you use this specific hardware feature that kills your performance anyway

And you use DLSS 3, which interpolates frames by actually delaying showing the frame that comes next. Gamers love latency!

 

 

On their promotional images they showed this (Nvidia-DLSS-3-Cyberpunk-2077.jpg),

which made me raise my eyebrow a little bit. 58 ms of input lag is absolutely insane; it will feel like you're playing a 20 fps game even if it shows 98 frames.


7 minutes ago, ZetZet said:

And you use DLSS 3, which interpolates frames by actually delaying showing the frame that comes next. Gamers love latency!

 

 

On their promotional images they showed this (Nvidia-DLSS-3-Cyberpunk-2077.jpg),

which made me raise my eyebrow a little bit. 58 ms of input lag is absolutely insane; it will feel like you're playing a 20 fps game even if it shows 98 frames.

No... 166 ms of input lag feels like playing a 20 fps game.

58 ms is a whole lot better. Like, yes, it's not the same input lag as someone playing at 98 fps for real, but dude.


9 minutes ago, starsmine said:

No... 166 ms of input lag feels like playing a 20 fps game.

58 ms is a whole lot better. Like, yes, it's not the same input lag as someone playing at 98 fps for real, but dude.

But DLSS 2 decreased latency; DLSS 3 introduces it. If you get to choose between 60 and 80-90 fps, but 60 has less latency, wouldn't you pick 60 most of the time? I know from my own experience that almost all games with VSync are unplayable to me.

 

 

Anyway, Digital Foundry is releasing a video about DLSS 3 next week, so we will get to see. If it actually does increase latency it's a completely dead feature to me; hopefully the games let you switch to DLSS 2 if that's the case.


There is never an excuse for deceptive marketing. The best value here is the 4090, but even then I'm hesitant, not wanting to give Nvidia my money. I hope they get burned big time by overstocked 3000 series cards and masses of mining cards going onto the used market. It would have been better for Nvidia to just not have released the card than to make a whole lie out of it.

 

Defending it is almost as bad as Nvidia themselves. Companies will not learn if people just keep taking it and making excuses for them.

 

NOTE: The real 4080 isn't worth $1200 either, but I know why they did it: they expected mining to still be relevant when they designed the card, and prices are way up. Fortunately, AMD has way better costs; let's see if they want to screw over the customer as much as Nvidia does. I'll be fine with a 3080 for a while if need be.


5 minutes ago, ZetZet said:

But DLSS 2 decreased latency; DLSS 3 introduces it. If you get to choose between 60 and 80-90 fps, but 60 has less latency, wouldn't you pick 60 most of the time? I know from my own experience that almost all games with VSync are unplayable to me.

Are we not choosing between 22 and 98 fps here?

Also note, those latencies are not 1/22 or 1/98; they are measuring whole-pipeline latency.
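For reference, the raw frame intervals alone would be 1/22 s ≈ 45 ms and 1/98 s ≈ 10 ms, so the 166 ms and 58 ms figures clearly include the rest of the input-to-photon pipeline (input sampling, simulation, render queue, display), not just the frame interval. A quick sanity check using the numbers from this thread:

```python
# Frame interval alone vs. the end-to-end latencies quoted from the slide.
# "Other pipeline" is just slide latency minus frame interval; it lumps together
# input sampling, simulation, render queue, and display, so it's a rough
# decomposition, not an official breakdown.

slide = {
    "native (22 fps)": (22, 166),   # (fps, end-to-end latency in ms)
    "DLSS 3 (98 fps)": (98, 58),
}

for name, (fps, total_ms) in slide.items():
    frame_ms = 1000 / fps
    print(f"{name}: frame interval {frame_ms:.1f} ms, "
          f"slide latency {total_ms} ms, "
          f"other pipeline ~{total_ms - frame_ms:.0f} ms")
```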


24 minutes ago, ZetZet said:

On their promotional images they showed this (Nvidia-DLSS-3-Cyberpunk-2077.jpg)

 

Also, now that I look at it, the image on the left has the "RTX OFF" label, but in fact both have ray tracing ON; the only difference is DLSS... so what's up with that?


4 minutes ago, starsmine said:

Are we not choosing between 22 and 98 fps here?

Also note, those latencies are not 1/22 or 1/98; they are measuring whole-pipeline latency.

We are choosing between native, DLSS 2 and DLSS 3. Like I said, 2 decreased latency while increasing frames, so if DLSS 3 increases frames further but adds latency, it has no point in existing, in my view.


17 hours ago, starsmine said:

Is it wrong that both the 2.4 and the 5.0 Mustang are called Mustangs?
I mean, one has a 2.4L 4-banger, the other a 5L V8?

Yes it is. A Mustang without a V8 is not a Mustang, and everyone who drives the 2.4 turbo is a commie. And don't even get me started on the Mach-E; the people who thought of that name should be put in jail.

 

/s (or is it?)

