
NVIDIA Project Beyond GTC Keynote with CEO Jensen Huang: RTX 4090 + RTX 4080 Revealed

13 hours ago, BiG StroOnZ said:

RTX 4090 is $1599, RTX 4080 16GB is $1199, and RTX 4080 12GB is $899.

Australian prices are out. 

RTX 4080 12GB: $1659 AUD ($1100 USD)

RTX 4080 16GB: $2219 AUD ($1500 USD)

RTX 4090: $2959 AUD ($2000 USD)

 

I think I'll be keeping my 1080ti a little longer. 

CPU: Intel i7 6700k  | Motherboard: Gigabyte Z170x Gaming 5 | RAM: 2x16GB 3000MHz Corsair Vengeance LPX | GPU: Gigabyte Aorus GTX 1080ti | PSU: Corsair RM750x (2018) | Case: BeQuiet SilentBase 800 | Cooler: Arctic Freezer 34 eSports | SSD: Samsung 970 Evo 500GB + Samsung 840 500GB + Crucial MX500 2TB | Monitor: Acer Predator XB271HU + Samsung BX2450


3 hours ago, yolosnail said:

I think people are focusing too much on the specs without knowing how the performance really stacks up. We don't know if they're using the same die just cut down, or whether they're all using different dies.

 

From the last gen, performance vs CUDA cores varied depending on what die was being used, even with the same architecture

 

Just from a few quick calculations at 1080p, the GA102 (3080/3080 Ti/3090/3090 Ti) averages about 70-75 CUDA cores per frame, whereas the GA104 (3060 Ti/3070/3070 Ti) averages around 50-55 CUDA cores per frame.

 

So, if the 4080 12GB is really just a '4070' in disguise, then it would use the next die down the stack.

For argument's sake, let's say they have exactly the same performance per CUDA core as the 30 series. That could actually put the 12GB 7% ahead of the 16GB in performance despite having fewer CUDA cores!

 

Obviously that won't be the case, but it's exactly why you should never just look at the specs and think more=better!

 

And just to add to the argument: IMO, it's not that the 4080 16GB is the 'true' 4080 and the 4080 12GB is really a 4070. To me, the 4080 16GB is the 4080 Ti and the 4080 12GB is the 'true' 4080.

I think you misunderstand why the smaller core count chips get more frames per core.
It's simply that game engines and drivers fail to keep all the cores busy at once. It's a highly parallel workload, sure, but it isn't using n threads.

It's like a 1000-lane highway where, at say 4 am, there are only 100 cars on it, while at rush hour every lane is used,
versus a 500-lane highway where at 4 am there are also only 100 cars and at rush hour every lane is used.

In that 4 am scenario, one GPU is giving every car 10 lanes while the other gives every car 5 lanes.

The die being used has minimal effect on CUDA performance. A GA102 cut down to GA104 specs will perform similarly. You saw a good example of this with the 2060 KO, where a TU104 was cut down to the equivalent core count of the TU106 and the two performed equivalently in gaming benchmarks, with the KO only pulling ahead in certain compute tasks that used resources available only on the TU104.

It is genuinely difficult to keep 10k cores fed with data, and in many situations a game simply doesn't; there are a bunch of bubbles.
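To put the 4 am picture in very rough numbers, here's a toy Python sketch; every figure in it is made up purely for illustration, not measured. When the work a game exposes per frame is the limit, the wider GPU just idles more cores, and that shows up as a worse "CUDA cores per frame" ratio on paper.

```python
# Toy occupancy model: a frame only exposes so much parallel work, so a wider GPU
# leaves more cores idle. All numbers here are made up for illustration.

def busy_cores(total_cores: int, work_items: int) -> int:
    """Cores that actually get work in a given frame."""
    return min(total_cores, work_items)

WORK_ITEMS_PER_FRAME = 5_000   # hypothetical parallel work exposed per frame
FPS = 100.0                    # hypothetical, identical on both GPUs because the
                               # work per frame (not core count) is the bottleneck

for name, cores in [("big die", 10_496), ("smaller die", 5_888)]:
    idle = cores - busy_cores(cores, WORK_ITEMS_PER_FRAME)
    print(f"{name}: {cores} cores total, {idle} idle, "
          f"{cores / FPS:.0f} 'CUDA cores per frame' on a spec sheet")
# Both render the same FPS, but the big die looks far worse by the cores/frame metric.
```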

But yes, the performance difference between the 4080 12G and 4080 16G is being exaggerated here... and it also feels like a weak point when they are in completely different price brackets.

Most generations have had one or more cards in the lineup that share a name but differ in RAM amount and perform differently. It's not as misleading here as it has been in the past (the 460 SE, 460, and 460 OEM, for example; some 460s also had twice the RAM).

 

Like, guys:

[attached image]
Basically, wait a bit for the benchmark suites to see how big the spread is, and recognize that it's a $300 difference between $900 and $1200.
Also, there isn't much slack to make that card cost much less than $900. Even with a 'the names don't mean anything' mentality, I'm not sure Nvidia really wants to be going around with an xx70-tier card costing $900.

 

1 hour ago, Spotty said:

Australian prices are out. 

RTX 4080 12GB: $1659 AUD ($1100 USD)

RTX 4080 16GB: $2219 AUD ($1500 USD)

RTX 4090: $2959 AUD ($2000 USD)

 

I think I'll be keeping my 1080ti a little longer. 

get a 4060 and it will still be 2x faster or something, idk. 
 


6 minutes ago, eGetin said:

I'd say DLSS 3 is a gimmick and with the information we got in the keynote I feel like it's only going to work in cases where the frame rate was already good enough. You can't fix input lag by generating completely new frames out of thin air.

Actually, yes, yes you can! That's the proven ability of machine learning. While it doesn't seem like much, smoothing out the 1% and 0.1% lows would be a substantial improvement in eliminating micro-stutter.


6 minutes ago, StDragon said:

Actually, yes, yes you can! That's the proven ability of machine learning. While it doesn't seem like much, smoothing out the 1% and 0.1% lows would be a substantial improvement in eliminating micro-stutter.

But in this case the frames don't come from the game engine but from the GPU or driver or whatever. DLSS 1 and 2 were huge improvements because they were essentially fancy upscaling: your "underpowered" GPU could render frames at a much lower resolution, which meant higher FPS. If I understood correctly, DLSS 3 takes this one step further and basically creates a new frame out of thin air based on predicted changes in pixels. So you see a higher value on the FPS counter, but it doesn't exactly feel like it.

 

The real question is how high a base FPS you need so that the game doesn't feel "off". Is 40 enough? 60? 100? 20 for sure isn't, and getting to 30 or 40 fps just by generating "fake" frames in between the real ones is not going to improve your input lag much. It remains to be seen for now.


19 minutes ago, Spotty said:

Australian prices are out. 

RTX 4080 12GB: $1659 AUD ($1100 USD)

RTX 4080 16GB: $2219 AUD ($1500 USD)

RTX 4090: $2959 AUD ($2000 USD)

 

I think I'll be keeping my 1080ti a little longer. 

Is ordering from the US not a thing?  At that price difference how bad could shipping be on a used 3000 series?

Workstation:  14700nonk || Asus Z790 ProArt Creator || MSI Gaming Trio 4090 Shunt || Crucial Pro Overclocking 32GB @ 5600 || Corsair AX1600i@240V || whole-house loop.

LANRig/GuestGamingBox: 9900nonK || Gigabyte Z390 Master || ASUS TUF 3090 650W shunt || Corsair SF600 || CPU+GPU watercooled 280 rad pull only || whole-house loop.

Server Router (Untangle): 13600k @ Stock || ASRock Z690 ITX || All 10Gbe || 2x8GB 3200 || PicoPSU 150W 24pin + AX1200i on CPU|| whole-house loop

Server Compute/Storage: 10850K @ 5.1Ghz || Gigabyte Z490 Ultra || EVGA FTW3 3090 1000W || LSI 9280i-24 port || 4TB Samsung 860 Evo, 5x10TB Seagate Enterprise Raid 6, 4x8TB Seagate Archive Backup ||  whole-house loop.

Laptop: HP Elitebook 840 G8 (Intel 1185G7) + 3080Ti Thunderbolt Dock, Razer Blade Stealth 13" 2017 (Intel 8550U)


1 minute ago, starsmine said:

I think you misunderstand why the smaller core count chips get more frames per core.
It's simply that game engines and drivers fail to keep all the cores busy at once. It's a highly parallel workload, sure, but it isn't using n threads.

It's like a 1000-lane highway where, at say 4 am, there are only 100 cars on it, while at rush hour every lane is used,
versus a 500-lane highway where at 4 am there are also only 100 cars and at rush hour every lane is used.

In that 4 am scenario, one GPU is giving every car 10 lanes while the other gives every car 5 lanes.

The die being used has minimal effect on CUDA performance. A GA102 cut down to GA104 specs will perform similarly. You saw a good example of this with the 2060 KO, where a TU104 was cut down to the equivalent core count of the TU106 and the two performed equivalently in gaming benchmarks, with the KO only pulling ahead in certain compute tasks that used resources available only on the TU104.

It is genuinely difficult to keep 10k cores fed with data, and in many situations a game simply doesn't; there are a bunch of bubbles.

But yes, the performance difference between the 4080 12G and 4080 16G is being exaggerated here... and it also feels like a weak point when they are in completely different price brackets.

Most generations have had one or more cards in the lineup that share a name but differ in RAM amount and perform differently. It's not as misleading here as it has been in the past (the 460 SE, 460, and 460 OEM, for example; some 460s also had twice the RAM).

It's simply a 4080 12G and a 4080 16G.

 

My point was that when they're using the same die you can pretty much calculate the FPS you'd get by scaling the CUDA core count up or down, but if they're different dies you can't.

 

On average, GA102 dies need about 75 CUDA cores per 1 fps. So if I tried to calculate what a 3070 would get from that, I'd come out with around 76 fps, but in reality the 3070 gets around 106 fps, because GA104 dies only need about 55 CUDA cores for the same 1 fps.
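For anyone who wants to redo that arithmetic, here's a minimal Python sketch. The core count is the published RTX 3070 spec; the two cores-per-frame ratios are the rough figures quoted above, not my own measurements.

```python
# Back-of-the-envelope estimate: FPS ≈ CUDA cores / (cores per frame).
CORES_3070 = 5888   # published RTX 3070 CUDA core count

for label, cores_per_frame in [("GA102 ratio", 75), ("GA104 ratio", 55)]:
    print(f"3070 estimated with the {label}: {CORES_3070 / cores_per_frame:.0f} fps")
# ~79 fps with the GA102 ratio vs ~107 fps with the GA104 ratio, in line with the
# rough 76 vs 106 figures above.
```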

 

The reason for that difference is a moot point IMO, since I was basing it on real-world data.

Laptop:

Spoiler

HP OMEN 15 - Intel Core i7 9750H, 16GB DDR4, 512GB NVMe SSD, Nvidia RTX 2060, 15.6" 1080p 144Hz IPS display

PC:

Spoiler

Vacancy - Looking for applicants, please send CV

Mac:

Spoiler

2009 Mac Pro 8 Core - 2 x Xeon E5520, 16GB DDR3 1333 ECC, 120GB SATA SSD, AMD Radeon 7850. Soon to be upgraded to 2 x 6 Core Xeons

Phones:

Spoiler

LG G6 - Platinum (The best colour of any phone, period)

LG G7 - Moroccan Blue

 


5 minutes ago, eGetin said:

If I understood correctly, DLSS 3 takes this one step further and basically creates a new frame out of thin air based on predicted changes in pixels. So you see a higher value on the FPS counter...

Correct, via interpolation.

 

6 minutes ago, eGetin said:

but it doesn't exactly feel like it.

If you're VSynced, you probably wouldn't notice anything other than a buttery-smooth FPS experience. DLSS 3 would step in and interpolate even if the CPU is too busy to send updated commands. So if there's input lag, it would be due to the CPU, not the GPU.
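To make the interpolation trade-off concrete, here's a minimal sketch of a generic frame-interpolation pipeline. This is a simplified model and an assumption on my part, not how Nvidia has documented DLSS 3 internally: the FPS counter goes up, but some latency is unavoidably added because the in-between frame needs the following real frame to exist first.

```python
# Generic interpolation model (not Nvidia's actual pipeline): one generated frame
# is inserted between each pair of rendered frames. Displayed FPS doubles, but a
# rendered frame can only be shown after the *next* rendered frame exists, so at
# least one render time of latency is added.

def interpolated(rendered_fps: float) -> tuple[float, float]:
    render_time_ms = 1000.0 / rendered_fps
    return rendered_fps * 2, render_time_ms   # (displayed fps, extra latency in ms)

for fps in (30, 60, 120):
    shown, extra = interpolated(fps)
    print(f"rendered {fps:>3} fps -> shown ~{shown:.0f} fps, >= ~{extra:.1f} ms added latency")
```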


7 minutes ago, yolosnail said:

 

My point was that when they're using the same die you can pretty much calculate the FPS you'd get by scaling the CUDA core count up or down, but if they're different dies you can't.

 

On average, GA102 dies need about 75 CUDA cores per 1 fps. So if I tried to calculate what a 3070 would get from that, I'd come out with around 76 fps, but in reality the 3070 gets around 106 fps, because GA104 dies only need about 55 CUDA cores for the same 1 fps.

 

The reason for that difference is a moot point IMO, since I was basing it on real-world data.

No, I'm telling you that's wrong, and I gave you the REASON why by explaining what you are seeing, and I gave you real-world data to back that up with the 2060 KO. It's not moot; it's the explanation for why the ratio happens, and it isn't because they are different dies.

GA102 75:1
GA104 55:1

is misleading. CUT DOWN the GA102's core count to that of a GA104 and they perform the same; the ratio will be similar.

Heck, they did exactly that, cutting a GA104 down to match the GA106 for a 3060 variant to use up more lower-binned GA104s.
 


1 minute ago, StDragon said:

If you're VSynced, you probably wouldn't notice anything other than a buttery-smooth FPS experience. DLSS 3 would step in and interpolate even if the CPU is too busy to send updated commands. So if there's input lag, it would be due to the CPU, not the GPU.

Ah yes, so you mean that DLSS 3 would be able to iron out fluctuating FPS? So if the target is, for example, 120 fps and for some reason without DLSS you would see FPS dropping to around 100 fps every now and then, DLSS 3 could step in and bump the FPS back up to 120 with its trickery? That actually sounds like a valid use case, and I could see it being a nice-to-have feature for those who own a 40-series GPU. I wouldn't value it too highly, though, since VRR smooths things out anyway when the FPS jumps around a bit. And it's also not how Nvidia themselves marketed the new graphics cards 😄


So other than the typical "NvIdiots will buy anything even though AMD is better!" and people bitching about the names and prices (welcome to the real world, where we've got a recession), what do people think? 

 

I think the next generation sounds great. 

I am really excited for their new NVENC. Performance there is going to be insane, and I am looking forward to seeing AV1. Intel's encoder isn't getting great results, so my guess is that they are missing some crucial features. 

I really like that they are working with developers of third-party programs to implement new functions, and optimizing for older cards as well. 

 

The reordering function sounds like a game changer (for GPU designers) if it works as advertised. Probably something all GPUs will have in the future. 

 

Efficiency sounds good, but I am wondering whether those measurements were in RT tasks or rasterization tasks. I am sure that TSMC's node is far more efficient than Samsung's 8nm node, but since the clocks seem rather high, I am wondering if they might be running them quite far from the efficiency peak. 

 

Performance seems good as well, but we should wait for third party reviews. Last time Nvidia advertised double rasterization performance it was in cherry picked benchmarks, and the reality was more along the lines of 70% higher performance.

 

The other stuff they showed like the AI remastering seems fun but I wouldn't be surprised if it ends up being a gimmick that only works well on a handful of games. 

DLSS 3.0 sounds fantastic, but we'll have to see how well it works in real life once it's released. 


13 minutes ago, LAwLz said:

So other than the typical "NvIdiots will buy anything even though AMD is better!" and people bitching about the names and prices (welcome to the real world, where we've got a recession), what do people think? 

 

I think the next generation sounds great. 

I am really excited for their new NVENC. Performance there is going to be insane, and I am looking forward to seeing AV1. Intel's encoder isn't getting great results, so my guess is that they are missing some crucial features. 

I really like that they are working with developers of third-party programs to implement new functions, and optimizing for older cards as well. 

 

The reordering function sounds like a game changer (for GPU designers) if it works as advertised. Probably something all GPUs will have in the future. 

 

Efficiency sounds good, but I am wondering whether those measurements were in RT tasks or rasterization tasks. I am sure that TSMC's node is far more efficient than Samsung's 8nm node, but since the clocks seem rather high, I am wondering if they might be running them quite far from the efficiency peak. 

 

Performance seems good as well, but we should wait for third party reviews. Last time Nvidia advertised double rasterization performance it was in cherry picked benchmarks, and the reality was more along the lines of 70% higher performance.

 

The other stuff they showed like the AI remastering seems fun but I wouldn't be surprised if it ends up being a gimmick that only works well on a handful of games. 

DLSS 3.0 sounds fantastic, but we'll have to see how well it works in real life once it's released. 

I would be happy if the 3DMark leaks were real and it was just around 80% faster. That would still be awesome.

And I did see it has dual encoders now, which is legit awesome. It says it can do 8K 60fps recording in real time. Very impressive.

CPU: i9-13900KS | Motherboard: Asus Z790 HERO | Graphics: ASUS TUF 4090 OC | RAM: GSkill 7600 DDR5 | Screen: ASUS 48" OLED 138Hz


What's a good, real ATX 3.0 PSU that's at least 1300-1600 watts for the 4090?

 

So far I've only been able to find this one.

https://www.thermaltakeusa.com/toughpower-gf3-1650w-gold-tt-premium-edition.html?___store=us

CPU: i9-13900KS | Motherboard: Asus Z790 HERO | Graphics: ASUS TUF 4090 OC | RAM: GSkill 7600 DDR5 | Screen: ASUS 48" OLED 138Hz


3 hours ago, xAcid9 said:

AMD will price RDNA3 according to 4000 series performance IMO; they stopped caring about market share a long time ago, because you guys will buy Nvidia no matter how high the price is anyway.

At first glance RDNA2 might seem roughly aligned with Ampere. AMD aimed for rough parity in raster performance across their models, but if you look around the edges they were worse in RT and had a worse video encoder. There's probably more, but those two were the most noticeable to me. They're getting more feature complete, but now they need to significantly improve performance across those wider features to match the competition. For me to consider buying any RDNA3 card at a given street price, AMD would have to offer comparable or better performance in both raster and RT games (native rendering) and have a video encoder that works well and is widely supported in software. I can't see them doing that.

 

Also, for the RDNA2 generation it didn't feel like GPU production was their priority. They were capacity constrained, and it felt like the much more profitable enterprise products were their focus. So they made enough GPUs to have a presence and had no reason to push for share, which generally requires either offering significantly more value than the competition or the competition making a massive blunder. Despite the posts in this thread, I don't see anything we know so far that counts as that.

 

 

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


13 hours ago, yolosnail said:

My point was that when they're using the same die you can pretty much calculate the FPS you'd get by scaling the CUDA core count up or down, but if they're different dies you can't.

As @starsmine explained, the die itself doesn't have anything to do with it. The two main factors are thread scaling inefficiencies and power scaling per SM/CUDA core.

 

A quite simple demonstration of the flaw in your reasoning is the RTX 3060 (GA104 variant) vs the RTX 3070 Ti. There is a ~55% performance difference on the same die, which translates into quite a big difference in performance per SM/CUDA core. Yet performance is identical for an RTX 3060 (GA106) vs an RTX 3060 (GA104), so if you used a ratio the way you described you'd have a mathematical error versus reality.
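A quick worked version of that point, as a small Python sketch. The core counts are the published specs; the ~55% figure is the rough gap mentioned above, not my own measurement.

```python
# Worked version of the 3060 vs 3070 Ti example above.
cores_3060 = 3584        # RTX 3060 (same count whether GA106 or the GA104 variant)
cores_3070ti = 6144      # RTX 3070 Ti

extra_cores = cores_3070ti / cores_3060 - 1   # ~71% more cores
extra_perf = 0.55                             # ~55% more performance (rough figure quoted above)

per_core = (1 + extra_perf) / (1 + extra_cores)
print(f"{extra_cores:.0%} more cores -> {extra_perf:.0%} more performance")
print(f"achieved performance per core: ~{per_core:.0%} of the smaller configuration")
# More cores on the same die still help, but each core contributes less.
```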

 

It really is as simple as fewer cores = higher achieved performance per core. Nvidia uses this, along with power limits, to create product segmentation that makes logical sense in terms of performance and cost differences. Power limits are a major tool Nvidia uses to create mid-lifecycle Ti variants on the same GPU dies, something often mistaken for node/product maturity or for waiting to accumulate enough binned stock. Every GPU die in the main consumer product line could have been produced and offered on day 1 if Nvidia so chose.


Hmmm... Wait this generation out and stick with my 3080 laptop, or build an SFX desktop with this upcoming gen? 'Tis the question.

Anyway, that 12GB 4080 should be a 4070. It is just insane to call it a 4080 when it's a completely different die to the 16GB model with fewer cores.

I wonder how AMD is going to go with RDNA 3 now; if they go chiplet like they did with Zen, they could undercut Nvidia on efficiency and cost.


6 hours ago, HenrySalayne said:

AD104

4080 12G

 

AD103

4080 16G

Wait, they are using different chips for the "same model"? People will feel like that's a scam if Nvidia isn't clear about how it's marketed.

Such a sh*tty move, and not unexpected. I think there were some laws not too long ago focused on parts of the chip market, and I hope some new ones come.


3 hours ago, porina said:

At first glance RDNA2 might seem roughly aligned with Ampere. AMD aimed for rough parity in raster performance across their models, but if you look around the edges they were worse in RT and had a worse video encoder. There's probably more, but those two were the most noticeable to me. They're getting more feature complete, but now they need to significantly improve performance across those wider features to match the competition. For me to consider buying any RDNA3 card at a given street price, AMD would have to offer comparable or better performance in both raster and RT games (native rendering) and have a video encoder that works well and is widely supported in software. I can't see them doing that.

Also, for the RDNA2 generation it didn't feel like GPU production was their priority. They were capacity constrained, and it felt like the much more profitable enterprise products were their focus. So they made enough GPUs to have a presence and had no reason to push for share, which generally requires either offering significantly more value than the competition or the competition making a massive blunder. Despite the posts in this thread, I don't see anything we know so far that counts as that.

 

 

Luckily for me, none of those edge cases are relevant in my case, so I can happily consider RDNA2 a good competitor to Ampere. If AMD can get a better price-to-performance ratio in rasterization for RDNA3, I'm sold.


20 hours ago, starsmine said:

As pointed out earlier, the die is 2x the cost, with more power consumption and everything that goes along with that.

If the newer process is so much more expensive that it would be cheaper to make a bigger die on an old one - that's Nvidia's problem, not mine, and they shouldn't (and likely wouldn't) have launched these cards.


5 hours ago, eGetin said:

Ah yes, so you mean that DLSS 3 would be able to iron out fluctuating FPS?

It depends on the situation and what makes the FPS fluctuate; otherwise it doesn't matter, or it's just a less noticeable fluctuation in some cases.

5 hours ago, eGetin said:

So if the target is, for example, 120 fps and for some reason without DLSS you would see FPS dropping to around 100 fps every now and then, DLSS 3 could step in and bump the FPS back up to 120 with its trickery? That actually sounds like a valid use case, and I could see it being a nice-to-have feature for those who own a 40-series GPU. I wouldn't value it too highly, though, since VRR smooths things out anyway when the FPS jumps around a bit. And it's also not how Nvidia themselves marketed the new graphics cards

I thought VRR was more of a monitor thing, i.e. the syncing between the GPU spitting out frames and which frames the monitor can take and show to you.

Like G-Sync, where the module can buffer frames and use the ones it needs if the GPU is too fast.

DLSS 3 is like every other feature of this kind, FSR, XeSS and the previous DLSS versions: it will increase your FPS, but it can reduce quality, while in some areas it can increase quality or reduce accuracy (if it's full reconstruction, depending on how it works).


1 hour ago, Ydfhlx said:

If the newer process is so much more expensive that it would be cheaper to make a bigger die on an old one - that's Nvidia's problem, not mine, and they shouldn't (and likely wouldn't) have launched these cards.

I didn't say it would be cheaper to make a bigger die on an old node. On TSMC N7 the chip would have been 1000-1100 mm² and would exceed 600W, assuming it could scale the frequency, making for a more expensive card because yields drop off a cliff when dies get that large.
On Samsung 8N the chip would have been in the ballpark of 1500-1700 mm², which... damn.
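For a feel of why yields fall off like that, here's a minimal sketch using a simple Poisson defect model. The defect density is an assumed illustrative value, not TSMC or Samsung data.

```python
# Illustrative Poisson yield model for why yields "drop off a cliff" at large die sizes.
import math

D0 = 0.001   # assumed defects per mm^2 (i.e. 0.1 per cm^2), made up for illustration

def defect_free_fraction(area_mm2: float, d0: float = D0) -> float:
    """Fraction of dies with zero defects under a simple Poisson model."""
    return math.exp(-d0 * area_mm2)

for area in (300, 605, 1100, 1600):   # die sizes in mm^2 from the discussion above
    print(f"{area:>4} mm^2 -> ~{defect_free_fraction(area):.0%} of dies defect-free")
```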

Yes, I agree that Nvidia probably should not have been designing a 605 mm² die for N4 consumer chips, especially as they knew the costs over a year ago when they locked in the designs. But they would have been making AD103/AD102/AD100 for professional-level cards anyway. So is there that much harm in letting consumers purchase the AD102 for a flagship?

Consumers are not entitled to own flagship-class cards. There will still be AD104/AD106/AD108 cards down the stack that will improve price-to-performance next year. I just hope they don't make the same mistakes as with the 3060, where it was literally impossible to manufacture at cost as soon as RAM prices went up a whole percent, which just left AIBs holding the bag. The 3060 never needed 12GB of RAM. Or the 3050, where they set an MSRP under the BOM at launch, so no AIBs ever made it.

So long as Nvidia avoids those situations next year, this gen will be just fine.


8 hours ago, Spotty said:

Australian prices are out. 

RTX 4080 12GB: $1659 AUD ($1100 USD)

RTX 4080 16GB: $2219 AUD ($1500 USD)

RTX 4090: $2959 AUD ($2000 USD)

Fuck me....

 

that's more than i have ever spent on an entire PC

🌲🌲🌲

 

 

 

◒ ◒ 


12 minutes ago, Arika S said:

Fuck me....

 

that's more than i have ever spent on an entire PC

20% inflation, 100% price increase 🤣 I was planning to spend quite a bit on my upcoming upgrade, but at this point I think I've swung to the other side and am going to build the best-value build I can think of. If Zen 4 launches with the 7600X at 370 euros like the current listings show, I might just buy a 12400F with DDR4 instead, the entire platform for the same price... My 1060 will have to last a bit longer, it seems. 

 

Location: Kaunas, Lithuania, Europe, Earth, Solar System, Local Interstellar Cloud, Local Bubble, Gould Belt, Orion Arm, Milky Way, Milky Way subgroup, Local Group, Virgo Supercluster, Laniakea, Pisces–Cetus Supercluster Complex, Observable universe, Universe.

Spoiler

12700, B660M Mortar DDR4, 32GB 3200C16 Viper Steel, 2TB SN570, EVGA Supernova G6 850W, be quiet! 500FX, EVGA 3070Ti FTW3 Ultra.

 


22 hours ago, BiG StroOnZ said:

 

[slide image]

UP TO 4X PERFORMANCE

 

if you use this specific hardware feature that kills your performance anyway

Don't ask to ask, just ask... please 🤨

sudo chmod -R 000 /*


30 minutes ago, Sauron said:

UP TO 4X PERFORMANCE

 

if you use this specific hardware feature that kills your performance anyway

And you use DLSS 3, which interpolates frames by actually delaying the display of the next real frame. Gamers love latency!

 

 

On their promotional images they showed this (Nvidia-DLSS-3-Cyberpunk-2077.jpg),

which made me raise my eyebrow a little. 58 ms of input lag is absolutely insane; it will feel like you're playing a 20 fps game even if it shows 98 frames. 

Location: Kaunas, Lithuania, Europe, Earth, Solar System, Local Interstellar Cloud, Local Bubble, Gould Belt, Orion Arm, Milky Way, Milky Way subgroup, Local Group, Virgo Supercluster, Laniakea, Pisces–Cetus Supercluster Complex, Observable universe, Universe.

Spoiler

12700, B660M Mortar DDR4, 32GB 3200C16 Viper Steel, 2TB SN570, EVGA Supernova G6 850W, be quiet! 500FX, EVGA 3070Ti FTW3 Ultra.

 


7 minutes ago, ZetZet said:

And you use DLSS 3, which interpolates frames by actually delaying the display of the next real frame. Gamers love latency!

 

 

On their promotional images they showed this (Nvidia-DLSS-3-Cyberpunk-2077.jpg),

which made me raise my eyebrow a little. 58 ms of input lag is absolutely insane; it will feel like you're playing a 20 fps game even if it shows 98 frames. 

No... 166 ms of input lag feels like playing a 20 fps game.

58 ms is a whole lot better. Like, yes, it's not the same input lag as someone playing at 98 fps for real, but dude. 
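Rough frame-time math for that comparison, as a sketch with an assumed three-frame input-to-photon pipeline; actual pipeline depth varies per game and per settings.

```python
# Frame-time arithmetic behind the comparison above. End-to-end input lag spans
# several frames (input sampling, CPU, render queue, display), so the 3-frame
# pipeline below is an assumed illustration, not a measured value.
PIPELINE_FRAMES = 3

for fps in (20, 60, 98):
    frame_ms = 1000.0 / fps
    print(f"{fps:>3} fps -> {frame_ms:.1f} ms per frame, "
          f"~{PIPELINE_FRAMES * frame_ms:.0f} ms end-to-end with a {PIPELINE_FRAMES}-frame pipeline")
# 20 fps with ~3 frames of pipeline lands in the ~150 ms region; 58 ms is far below that.
```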

