
RTX 4090 is a monster (Official Benchmarks)

Fasterthannothing

I'm worried the price will be far from MSRP but we'll see.

 

Other than that it's nice. It's quite far from the 2x performance Nvidia advertised (obviously), but still a great improvement. The 4080 is significantly cut down from this, so it might not be 60% faster than the 3080.

Don't ask to ask, just ask... please 🤨

sudo chmod -R 000 /*


2 hours ago, LAwLz said:

I don't think it's because they are scared. It's because they can. As long as the power consumption isn't unreasonable (which it isn't, it's lower than previous gen), why not let the card stretch its legs as much as possible?

Because they've spent the last 2 years being criticised for the power consumption figures of the 3000 series? Because of the outrage we've seen from the community over the last few months at the "leaked" power consumption figures of this generation? (Which I'm sure were all released just to make this 500W look "reasonable" - 500W is still a fuck load of power).

 

They have a ~70% improvement in performance per watt here, and they've chosen to spend basically all of that on performance, despite this consumer pushback. Dropping performance by 9% wouldn't change any of the reviews - it would still be received as a massive generational improvement. But it would also allow them to boast massive efficiency gains at a time when power consumption is very much at the forefront of a lot of consumers' minds. Rather than a section in each of these review videos saying "oh, but look, this card is still really power hungry", the reviewers could instead be praising Nvidia for such a huge increase in efficiency.

 

What's more impressive as a marketing statement right now: being able to claim a 59% generational performance uplift? Or being able to claim a 50% performance uplift at the same time as a 33% decrease in power consumption? I know which one I'd prefer.
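Purely as a back-of-the-envelope sketch of that comparison (the 59% / 50% / one-third figures are the ones above; the 450W reference board power is an assumption for illustration, not a measured number):

# Rough illustration of the two marketing options discussed above (Python).
# Assumption: 450 W as the last-gen reference board power.
reference_power_w = 450.0

# Option A: spend the gains on raw performance (+59% at the same power)
# Option B: smaller uplift (+50%) but cut power by a third
options = {
    "A: +59% perf, same power": (1.59, reference_power_w),
    "B: +50% perf, one-third less power": (1.50, reference_power_w * 2 / 3),
}

for label, (perf, power_w) in options.items():
    perf_per_watt_gain = perf / (power_w / reference_power_w)
    print(f"{label}: {perf_per_watt_gain:.2f}x perf/W vs last gen at {power_w:.0f} W")

Either way the silicon is the same; the only question is which number ends up in the headline.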

 

And I don't think the 4090 is the "because we can" card for this gen. I very much expect a 4090Ti to appear at some point that draws 600+W for 5% more performance or something ridiculous like that.

CPU: i7 4790k, RAM: 16GB DDR3, GPU: GTX 1060 6GB


33 minutes ago, LAwLz said:

I feel kind of bad for the RTX 3070 I am about to pick up at the post office now...

But on the other hand, this card is expensive as balls and not something I would get anyway.

You shouldn't feel bad at all. The thing is, the 4090 is expensive enough that the value slider doesn't change at all, which I think was the main intention of the $1,600 price tag. If anything this makes used cards like 3070s and RX 6800s even more amazing, unless RDNA 3 actually does bring increased performance in the $600-800 range.

Location: Kaunas, Lithuania, Europe, Earth, Solar System, Local Interstellar Cloud, Local Bubble, Gould Belt, Orion Arm, Milky Way, Milky Way subgroup, Local Group, Virgo Supercluster, Laniakea, Pisces–Cetus Supercluster Complex, Observable universe, Universe.


12700, B660M Mortar DDR4, 32GB 3200C16 Viper Steel, 2TB SN570, EVGA Supernova G6 850W, be quiet! 500FX, EVGA 3070Ti FTW3 Ultra.

 


Quote

“Ada (Lovelace) provides the largest generational performance upgrade in the history of Nvidia.”

 

The 8800 GTX would like a word. At least 130% rasterization generational improvement over the 7800 GTX. If I recall, the 6800 Ultra was something like 110-120% over the FX 5800/5900 Ultra too. Again, rasterization generational improvement.

 

Sure, if we are counting RT and DLSS the 4090's improvements are huge, but raster is not 130% faster than the 3090 Ti. They made this same claim with Ampere too, and that was definitely NOT "the biggest generational leap ever". More like one of the biggest leaps of the past decade?

 

Not trying to cast shade on the performance, the 4090 looks like a beast. Just calling out marketing BS.

Zen 3 Daily Rig (2022 - Present): AMD Ryzen 9 5900X + Optimus Foundations AM4 | Nvidia RTX 3080 Ti FE + Alphacool Eisblock 3080 FE | G.Skill Trident Z Neo 32GB DDR4-3600 (@3733 c14) | ASUS Crosshair VIII Dark Hero | 2x Samsung 970 Evo Plus 2TB | Crucial MX500 1TB | Corsair RM1000x | Lian Li O11 Dynamic | LG 48" C1 | EK Quantum Kinetic TBE 200 w/ D5 | HWLabs GTX360 and GTS360 | Bitspower True Brass 14mm | Corsair 14mm White PMMA | ModMyMods Mod Water Clear | 9x BeQuiet Silent Wings 3 120mm PWM High Speed | Aquacomputer Highflow NEXT | Aquacomputer Octo

 

Test Bench: 

CPUs: Intel Core 2 Duo E8400, Core i5-2400, Core i7-4790K, Core i9-10900K, Core i3-13100, Core i9-13900KS

Motherboards: ASUS Z97-Deluxe, EVGA Z490 Dark, EVGA Z790 Dark Kingpin

GPUs: GTX 275 (RIP), 2x GTX 560, GTX 570, 2x GTX 650 Ti Boost, GTX 980, Titan X (Maxwell), x2 HD 6850

Bench: Cooler Master Masterframe 700 (bench mode)

Cooling: Heatkiller IV Pro Pure Copper | Koolance GPU-210 | HWLabs L-Series 360 | XSPC EX360 | Aquacomputer D5 | Bitspower Water Tank Z-Multi 250 | Monsoon Free Center Compressions | Mayhems UltraClear | 9x Arctic P12 120mm PWM PST


57 minutes ago, jaslion said:

Yeah, this is power dumping like the P4 days again.

 

Saw the same thing on my 3090. Small undervolt, lost 1% performance, but power draw dropped to 280W 😛

Forgot to reply to this before, but I don't see this as a P4 moment. The P4 was just not a clock efficient architecture, but it also wasn't run far into its (in)efficiency curve. Those still had tens of % of overclocking headroom without resorting to extreme cooling.

 

I don't know if we have the data to see exactly where the architecture's native rendering performance is relative to previous generations along multiple points of its performance curve, but it is implied it is set towards the highest power end. I'll have a poke around existing reviews like the one I linked earlier and see if there's enough to show the delta vs Ampere.

 

I suspect the problem is fundamentally people like us on forums like this. Look at any review, and the main takeaway will be fps. Maybe it isn't as bad as the old Top Gun saying "no points for 2nd place", but certainly if you launch a high-end product, you want to be at the top.

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


1 hour ago, jaslion said:

Whilst yes, this is impressive, the power consumption is beyond ridiculous. Like, my slightly power-optimized dual 3090s consume less whilst rendering.

Doesn't a single 3090 consume 350W while a 4090 consumes 450W?


1 minute ago, Ydfhlx said:

Doesn't a single 3090 consume 350W while a 4090 consumes 450W?

Yeah, yet the 4090 seems to consume less than the 3090 Ti btw:

 

I edit my posts more often than not


9 minutes ago, Ydfhlx said:

Doesn't a single 3090 consume 350W while a 4090 consumes 450W?

Yeah, too much info, too little time.

 

But an optimized 3090 can go down to 250W. Still, the 4090's performance per watt is great.


I always have to scratch my head whenever the topic of power consumption comes up and there are always those people who (willfully?) ignore whether they're looking at TDP, total system power, or GPU/board-only power.


16 minutes ago, jaslion said:

But an optimized 3090 can go down to 250W. Still, the 4090's performance per watt is great.

Earlier link I posted showed that at equal power limits, 4090 was far more efficient than 3090 Ti. They did focus more on RT titles though, and I'm not familiar with their test settings. It would be interesting to see if that trend continues at lower power limits if they can be set, and in other (non-RT) games. 

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


Looking at the reviews I am happy that I got the 3090 instead of waiting.

It is really impressive, don't get me wrong, but it is a step backwards in price to performance.

In Europe I paid about 36% of the price of the 4090 for the 3090, but I am getting about 60% of the performance, which is not too shabby. The 4080 12GB, which would have cost me about 50% more than the 3090 (1.5x the price, 800€ vs 1200€), is really cut down compared to the 4090, so I am sure it would deliver similar or lower performance than the 3090.
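For what it's worth, here is that value comparison worked through with the numbers quoted above (the 4090 price is backed out from the "about 36%" figure, so treat it as an estimate):

# Price-to-performance from the figures in this post (Python).
# 800 € is the price paid for the 3090; the 4090 price is implied by
# "about 36% of the price of the 4090", so it is an estimate.
price_3090_eur = 800
price_4090_eur = price_3090_eur / 0.36   # ~2222 €
perf = {"3090": 0.60, "4090": 1.00}      # relative performance, 4090 = 1.0

for card, price in (("3090", price_3090_eur), ("4090", price_4090_eur)):
    print(f"{card}: {perf[card] / price * 1000:.2f} relative perf per 1000 €")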

I only see your reply if you @ me.

This reply/comment was generated by AI.


6 minutes ago, porina said:

Earlier link I posted showed that at equal power limits, 4090 was far more efficient than 3090 Ti. They did focus more on RT titles though, and I'm not familiar with their test settings. It would be interesting to see if that trend continues at lower power limits if they can be set, and in other (non-RT) games. 

And productivity. That's my main focus.


35 minutes ago, Tan3l6 said:

Yeah, yet the 4090 seems to consume less than the 3090 Ti btw:

 

I love Anthony. Great review.


5 minutes ago, jaslion said:

And productivity. That's my main focus.

Going back at least to Pascal (relative to Maxwell), each generation has usually improved more for compute: when compared at equal gaming performance, the newer card typically wins in compute. I don't know if that's been tested for the 40 series. I'm using compute in a general sense (think BOINC), not so much productivity, so it could vary depending on the exact use case.

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


9 minutes ago, porina said:

Going back at least to Pascal (relative to Maxwell), each generation has usually improved more for compute: when compared at equal gaming performance, the newer card typically wins in compute. I don't know if that's been tested for the 40 series. I'm using compute in a general sense (think BOINC), not so much productivity, so it could vary depending on the exact use case.

I've seen tests but it's kinda all over the place rn.


Is it just me or do the reviews seem to be getting crap views compared to other launches?

 

The power efficiency is what interests me the most; I wonder if AMD will come anywhere near close. With Zen 4 they seem to have abandoned efficiency for performance (at stock at least), so let's hope they don't do the same with RDNA 3.

Laptop:


HP OMEN 15 - Intel Core i7 9750H, 16GB DDR4, 512GB NVMe SSD, Nvidia RTX 2060, 15.6" 1080p 144Hz IPS display

PC:


Vacancy - Looking for applicants, please send CV

Mac:


2009 Mac Pro 8 Core - 2 x Xeon E5520, 16GB DDR3 1333 ECC, 120GB SATA SSD, AMD Radeon 7850. Soon to be upgraded to 2 x 6 Core Xeons

Phones:


LG G6 - Platinum (The best colour of any phone, period)

LG G7 - Moroccan Blue

 


5 minutes ago, yolosnail said:

Is it just me or do the reviews seem to be getting crap views compared to other launches?

 

The power efficiency is what interests me the most; I wonder if AMD will come anywhere near close. With Zen 4 they seem to have abandoned efficiency for performance (at stock at least), so let's hope they don't do the same with RDNA 3.

The launch just didn't happen during peak viewership hours. Most of Europe is only just now getting home from work, while in the US it's still the middle of the day. Give it another 12 hours and I imagine those numbers will look quite different.

 

I do however wonder how many people just skip to the graphs they're interested in. If they don't watch for very long, it doesn't count as a view and the algorithm punishes the video for low viewer retention. Unfortunately YouTube isn't showing me the little viewership graph on these videos; the peak around "power consumption" on recent GPU/CPU reviews always makes me laugh.

CPU: i7 4790k, RAM: 16GB DDR3, GPU: GTX 1060 6GB


2 hours ago, porina said:

I saw this linked elsewhere. It's not a site I'm familiar with, but the suggestion there seems to be: drop the power limit by 100W and lose 3% average fps; drop it by 150W and lose 9% average fps. Of course it can vary from game to game.

 

[Chart: RTX 4090 performance at reduced power limits]

https://www.computerbase.de/2022-10/nvidia-geforce-rtx-4090-review-test/5/

 

That is just power limiting too; with a decent card you can probably get it to 300W with less than that 3% loss if you also do a custom voltage curve. How much that helps depends on the quality of your GPU though.

AMD 7950x / Asus Strix B650E / 64GB @ 6000c30 / 2TB Samsung 980 Pro Heatsink 4.0x4 / 7.68TB Samsung PM9A3 / 3.84TB Samsung PM983 / 44TB Synology 1522+ / MSI Gaming Trio 4090 / EVGA G6 1000w /Thermaltake View71 / LG C1 48in OLED

Custom water loop EK Vector AM4, D5 pump, Coolstream 420 radiator


3 minutes ago, ewitte said:

That is just power limiting too; with a decent card you can probably get it to 300W with less than that 3% loss if you also do a custom voltage curve. How much that helps depends on the quality of your GPU though.

Adjusting the power limit is normally safe. By that, I mean that unless someone messed up the implementation, it will not put the card into an unstable state. Once you start touching the voltage curve, I'd put that in the same class as overclocking: gains may be possible by tuning your particular sample, but you will have to do careful stability testing.
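If anyone wants to try the power-limit route without touching voltage, the stock nvidia-smi tool can do it (setting a limit needs admin/root, and the driver only accepts values inside the card's supported min/max range). A minimal sketch, not tied to any particular card:

import subprocess

def query_power(gpu: int = 0) -> str:
    """Read current draw plus the active/min/max power limits."""
    result = subprocess.run(
        ["nvidia-smi", "-i", str(gpu),
         "--query-gpu=power.draw,power.limit,power.min_limit,power.max_limit",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True)
    return result.stdout.strip()

def set_power_limit(watts: int, gpu: int = 0) -> None:
    """Set the board power limit in watts (needs admin/root).
    Values outside the card's allowed range are rejected by the driver,
    which is why this is 'safe' compared to editing the voltage curve."""
    subprocess.run(["nvidia-smi", "-i", str(gpu), "-pl", str(watts)], check=True)

if __name__ == "__main__":
    print(query_power())
    # Example: cap an assumed 450 W card to 300 W
    # set_power_limit(300)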

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


5 hours ago, Kisai said:

I'd like to see the 4090 capped to the power limit of the 3090 and 2080 to see how much the performance gets nerfed, because I suspect the 30% increase in power plays a larger part here than we are willing to admit.

Actually, der8auer did something like that. He checked performance vs power target in 10% steps. It turns out that at a 60% power target it still beats everything else by a mile while consuming a third less power.

 

[Chart: der8auer's RTX 4090 performance vs. power target]

 

Imho that chart indicates that NVIDIA initially planned around the much less efficient Samsung node and designed their coolers and power delivery for it. After the switch to TSMC they ended up with a far more efficient process, massively oversized cooling solutions, and an unnecessary power plug.

 

At a 60% power target you'd decrease performance by 10% (which is irrelevant considering that at 100% the card is 40-80% faster than anything else on the market) but you'd save a third of the power. That kind of increase in power consumption for relatively little extra performance is what you'd usually try to squeeze out with overclocking.

 

The 4090 could easily still have been the fastest GPU by a mile with just 2x 8-pin PCIe plugs and a little bit of juice from the slot. In theory that cap would sit at 375W (comparing with the chart, that's roughly an 80% power target).
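Putting rough numbers on that 375W figure (the connector budgets are the usual PCIe spec values; the 450W stock limit is assumed from the reference 4090):

# Sanity check of the "2x 8-pin + slot = 375 W" argument above (Python).
EIGHT_PIN_W = 150          # PCIe spec budget per 8-pin connector
SLOT_W = 75                # PCIe CEM slot budget
STOCK_LIMIT_W = 450        # assumed 4090 reference power limit

cap_w = 2 * EIGHT_PIN_W + SLOT_W
print(f"2x 8-pin + slot   = {cap_w} W")
print(f"as a power target = {cap_w / STOCK_LIMIT_W:.0%} of {STOCK_LIMIT_W} W")
# -> 375 W, i.e. roughly the 80% power-target step in der8auer's chart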

 

Edit: This is der8auer's English video: 

 

 

Use the quote function when answering! Mark people directly if you want an answer from them!


4 hours ago, Kisai said:

I'd like to see the 4090 capped to the power limit of the 3090 and 2080 to see how much the performance gets nerfed, because I suspect the 30% increase in power plays a larger part here than we are willing to admit.

der8auer already did. At -35% power you lose 2-10% performance. It kills it, period.

CPU:                       Motherboard:                Graphics:                                 Ram:                            Screen:

i9-13900KS   Asus z790 HERO      ASUS TUF 4090 OC    GSkill 7600 DDR5       ASUS 48" OLED 138hz


19 minutes ago, bowrilla said:

At a 60% power target you'd decrease performance by 10% (which is irrelevant considering that at 100% the card is 40-80% faster than anything else on the market) but you'd save a third of the power. That kind of increase in power consumption for relatively little extra performance is what you'd usually try to squeeze out with overclocking.

Keep in mind that different workloads may scale a bit differently depending on the mix of execution resources used. Still, that is an interesting chart. Roughly between 30% and 60% power target, it looks like it is power scaling. Only above 60% power target does it go into some mix of limiting other than just power.

 

21 minutes ago, bowrilla said:

The 4090 could easily still have been the fastest GPU by a mile with just 2x 8-pin PCIe plugs and a little bit of juice from the slot. In theory that cap would sit at 375W (comparing with the chart, that's roughly an 80% power target).

Not sure, but I think modern Nvidia cards only draw from the 12V rail. Total PCIe slot power is the 12V rail at up to 5.5A (66W) combined with the 3.3V rail at up to 3A making up the remainder. Still, would any AIB want to make a high-end offering without some overclocking headroom? Unless they set the limit to, for example, 320W to allow some uplift to 360W.
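For reference, the slot budget above adds up as follows (the 5.5A and 3A figures are the ones quoted in this post; the ~75W total is the familiar nominal PCIe slot limit):

# Quick check of the PCIe slot power figures quoted above (Python).
rail_12v_w = 12.0 * 5.5    # 12 V rail at up to 5.5 A
rail_3v3_w = 3.3 * 3.0     # 3.3 V rail at up to 3 A
print(f"12 V contribution : {rail_12v_w:.1f} W")
print(f"3.3 V contribution: {rail_3v3_w:.1f} W")
print(f"total slot budget : {rail_12v_w + rail_3v3_w:.1f} W")   # ~75.9 W, i.e. the nominal 75 W slot limit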

 

Like it or not, this is a "best for now" product so lower power limiting isn't really warranted as default. It could be interesting if nvidia were to offer an easier to access power limit function somehow so those who don't need to run flat out all the time and turn it down a bit without resorting to overclocking tools.

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


I can't believe that just one generation later there is already a card that is 90-100% faster than my 3080 (10G), with an even bigger margin in RT. This is huge. If I weren't looking at building a house at the moment, I would probably get one to fully unlock my C2's 4K 120Hz potential.

 

Just started replaying Cyberpunk a week back. Seeing this card BREEZE through 4K High / RT Ultra / DLSS Quality (especially with DLSS 3.0) looks completely insane to me (I'm sitting here at around 70-80 fps at 4K High / DLSS Quality / RT off). Don't get me wrong, it's still a great experience, but the 4090's performance in RT games is just mind-blowing imo.

If someone did not use reason to reach their conclusion in the first place, you cannot use reason to convince them otherwise.

