
NVIDIA Project Beyond GTC Keynote with CEO Jensen Huang: RTX 4090 + RTX 4080 Revealed

8 hours ago, Kilrah said:

DLSS3 isn't going to "increase input lag", it's just not going to decrease it / not as much compared to real fully calculated frames. 

 

Sure it's not going to be able to guess a player is suddenly appearing in a corner so for this type of game it's going to be pretty pointless, but there are lots of slower games where it'll likely be just fine.

Tell me if I'm remembering wrong, but I thought Nvidia themselves said that DLSS 3 (and/or the frame interpolation portion of it) will cause some increased input lag, which is offset by an improved Nvidia Reflex?


1 hour ago, thechinchinsong said:

Tell me if I'm remembering wrong, but I thought Nvidia themselves said that DLSS 3 (and/or the frame interpolation portion of it) will cause some increased input lag, which is offset by an improved Nvidia Reflex?

Yes, it's offset but not negated. 


Just became aware of the Twitter thread below by an NV VP. Some takeaway points:

 

DLSS 3 relies on an Optical Flow Accelerator (OFA) hardware unit. An OFA is also present in Turing and Ampere, but it is significantly improved in Ada. DLSS 3 could run on older RTX GPUs, but it wouldn't give the expected performance (in both fps and quality) that Ada will. (A rough sketch of the flow-based interpolation idea is at the end of this post.)

 

System latency with DLSS 3 is comparable to running without it. So not the same, but sounds like it'll be close enough for all but the most demanding twitch gamers.
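To make the OFA point above more concrete, here's a rough CPU-side sketch of what flow-based frame interpolation looks like in principle, using OpenCV's Farneback dense optical flow. This is only an illustration of the general idea, not NVIDIA's DLSS 3 pipeline; the file names are hypothetical and the half-way backward warp is a crude stand-in for what dedicated hardware plus a trained network would do far better.

import cv2
import numpy as np

# Illustration only: estimate dense optical flow between two rendered frames,
# then warp the first frame halfway along the flow to synthesize a mid-frame.
prev_bgr = cv2.imread("frame0.png")   # hypothetical captured frames
next_bgr = cv2.imread("frame1.png")
prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
next_gray = cv2.cvtColor(next_bgr, cv2.COLOR_BGR2GRAY)

# Per-pixel motion vectors from frame0 to frame1 (Farneback dense flow)
flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

# Backward-warp approximation: sample frame0 at positions shifted by half the flow
h, w = prev_gray.shape
grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
map_x = (grid_x - 0.5 * flow[..., 0]).astype(np.float32)
map_y = (grid_y - 0.5 * flow[..., 1]).astype(np.float32)
mid_frame = cv2.remap(prev_bgr, map_x, map_y, cv2.INTER_LINEAR)
cv2.imwrite("frame0_5.png", mid_frame)

The disocclusion and HUD artifacts you get from something this naive are exactly why quality depends so heavily on how good the flow estimate is, which is presumably why the improved OFA in Ada matters.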

 

 


3 hours ago, porina said:

sounds like it'll be close enough for all but the most demanding twitch gamers.

So possibly useless for esports shooters, great 😞 


4 minutes ago, Dogzilla07 said:

So possibly useless for esports shooters, great 😞 

I don't have a personal interest in that niche of gaming, but they're already running insane fps. Even without DLSS 3, the 40 series should offer some raw performance uplift if that's wanted. If it's all about speed and precision, I'd guess native rendering will remain the meta for that area. DLSS 3 seems more useful as an RT-heavy quality-of-life upgrade.


On 9/22/2022 at 9:34 AM, porina said:

Q: Why isn’t DisplayPort 2.0 listed on the spec sheet?

 

The current DisplayPort 1.4 standard already supports 8K at 60Hz. Support for DisplayPort 2.0 in consumer gaming displays is still a ways away in the future.

This is just straight-up misinformation. DisplayPort 1.4a is limited to ~26 Gbit/s, which corresponds to 2160p120 at 8-bit or 4320p30 at 8-bit. Anything above that is a compressed signal via DSC. And even with DSC, the limits of DP 1.4a will be reached if the "up to 4 times performance" claims hold true.
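For anyone who wants to sanity-check those numbers, here's a quick back-of-envelope script. The figures are approximate: it uses DP 1.4a's ~25.9 Gbit/s payload after 8b/10b encoding and ignores blanking overhead, so the real limits are slightly tighter.

# Rough uncompressed bandwidth check against DP 1.4a (HBR3)
DP14A_PAYLOAD_GBIT = 25.92  # after 8b/10b encoding, blanking ignored

def gbit_per_s(width, height, hz, bits_per_channel, channels=3):
    return width * height * hz * bits_per_channel * channels / 1e9

modes = [
    ("2160p120 8-bit", 3840, 2160, 120, 8),
    ("2160p240 8-bit", 3840, 2160, 240, 8),
    ("4320p30 8-bit",  7680, 4320, 30,  8),
    ("4320p60 10-bit", 7680, 4320, 60, 10),
]
for name, w, h, hz, bpc in modes:
    need = gbit_per_s(w, h, hz, bpc)
    verdict = "fits uncompressed" if need <= DP14A_PAYLOAD_GBIT else "needs DSC"
    print(f"{name}: {need:.1f} Gbit/s -> {verdict}")

2160p120 and 4320p30 both come out to roughly 24 Gbit/s, which is where the ~26 Gbit/s ceiling above bites; anything faster has to go through DSC.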

 

And then using the lack of DP2.0 displays as the reason for not putting DP2.0 on the 40 series is just ridiculous. The reason we haven't seen DP2.0 displays is the absence of DP2.0 on graphics cards.


I still cannot believe nThiefia has the audacity to call the 4070 a 4080/12. For real, literally everyone who buys the 4080/12 is basically fucked.

One of the most disgusting tactics ever. Unfortunately they can get away with it...


23 minutes ago, CTR640 said:

I still cannot believe nThiefia has the audacity to call the 4070 a 4080/12. For real, literally everyone who buys the 4080/12 is basically fucked.

One of the most disgusting tactics ever. Unfortunately they can get away with it...

Why would someone buying that be fucked? Are you implying that anyone currently running a 3070 or below is fucked?

 

I keep coming back to this and I really have to ask: Why does the name bother you? You're told what you're getting, never mind that doing your research on what you're buying is imperative regardless of the name on the box. This may be speculation, but to me, it sure seems like a lot of you complaining about the name are under the impression that calling a card an "80-class card" means something. As in, a 4080 necessarily has to be strictly comparable to a 3080 and all the previous cards under the same xx80-moniker. Why?

 

In the end, it's still as I pointed out from the beginning: Names don't matter and if you put great stock into what something is called instead of what performance you're buying, you just look silly. "Butbutbut the bus width", so what? the 3070 also had a larger bus width than the 4080 12GB. As did all the cards down to the 3060 Ti. "Butbutbut the number of CUDA cores", so what? The 30-series core clocks are almost a GHz lower than the 40-series. There are so many differences and variations that comparing any card to the previous one in any "class" is pointless based on specs alone, let alone the fucking name on the box. Compare performance and buy accordingly, not based on what name you'd like to put into your forum signature.
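To illustrate why raw spec lines mislead across generations, here's a toy calculation of theoretical FP32 throughput (2 FLOPs per CUDA core per clock) using approximate published boost clocks. Theoretical throughput is of course not game performance; this is just to show how much the clock difference changes the picture.

# Theoretical FP32 throughput: 2 FLOPs per CUDA core per clock (approximate specs)
cards = {
    "RTX 3070":      (5888, 1.73),  # CUDA cores, boost clock in GHz
    "RTX 3080 10GB": (8704, 1.71),
    "RTX 4080 12GB": (7680, 2.61),
}
for name, (cores, boost_ghz) in cards.items():
    tflops = 2 * cores * boost_ghz / 1000
    print(f"{name}: {cores} cores @ {boost_ghz} GHz -> ~{tflops:.1f} TFLOPS")

The 3080 has more cores than the 4080 12GB yet lands around 30 TFLOPS versus around 40, which is the point: core counts and bus widths only mean something within a single architecture.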


52 minutes ago, CTR640 said:

I still cannot believe nThiefia has the audacity to call the 4070 a 4080/12. For real, literally everyone who buys the 4080/12 is basically fucked.

One of the most disgusting tactics ever. Unfortunately they can get away with it...

And Nvidia priced a 4070/4070 Ti-class card at $900, $200 more than the previous 3080 10GB and $400 more than the 3070. I find the pricing rather egregious after the 30 series was already more expensive, and prices went up again after the 30 series launch.

Nvidia also gets away with not allowing Turing and Ampere card users to use DLSS 3; the hardware is present, but Nvidia wants people to buy RTX 40 series cards to use the latest DLSS, and there are no complaints about this because it's Nvidia.

30 minutes ago, Avocado Diaboli said:

Why would someone buying that be fucked? Are you implying that anyone currently running a 3070 or below is fucked?

 

I keep coming back to this and I really have to ask: Why does the name bother you? You're told what you're getting, never mind that doing your research on what you're buying is imperative regardless of the name on the box. This may be speculation, but to me, it sure seems like a lot of you complaining about the name are under the impression that calling a card an "80-class card" means something. As in, a 4080 necessarily has to be strictly comparable to a 3080 and all the previous cards under the same xx80-moniker. Why?

 

In the end, it's still as I pointed out from the beginning: Names don't matter and if you put great stock into what something is called instead of what performance you're buying, you just look silly. "Butbutbut the bus width", so what? the 3070 also had a larger bus width than the 4080 12GB. As did all the cards down to the 3060 Ti. "Butbutbut the number of CUDA cores", so what? The 30-series core clocks are almost a GHz lower than the 40-series. There are so many differences and variations that comparing any card to the previous one in any "class" is pointless based on specs alone, let alone the fucking name on the box. Compare performance and buy accordingly, not based on what name you'd like to put into your forum signature.

Anyone buying the 4080 12GB without knowing the difference got f*cked by Nvidia's deceptive marketing.

By asking that question, anyone running a 3070 or below is admitting the naming is important. The 4080 12GB should be called a 4070 Ti, and I think even calling it a Ti would be generous, as the real 4070 might get cut down a lot, like the 3070 was. The 3070 felt like "oh, you're poor, so you only get 8GB of VRAM": a card that won't last nearly as long without having to turn things way down so VRAM isn't a limiter.

The naming is important because nothing in the names of the 4080 12GB and the 4080 16GB makes the significant difference in specifications clear, like the gap in CUDA cores or bus bandwidth. The problem is that most people looking at graphics cards on a shelf or at an online retailer aren't going to know the difference, and maybe won't care; they'll just see the 4080 12GB being $300 less and buy that, without knowing they're getting screwed over paying $900 for a cut-down card that uses a completely different die from the real 4080.


25 minutes ago, Avocado Diaboli said:

Why would someone buying that be fucked? Are you implying that anyone currently running a 3070 or below is fucked?

 

I keep coming back to this and I really have to ask: Why does the name bother you? You're told what you're getting, never mind that doing your research on what you're buying is imperative regardless of the name on the box. This may be speculation, but to me, it sure seems like a lot of you complaining about the name are under the impression that calling a card an "80-class card" means something. As in, a 4080 necessarily has to be strictly comparable to a 3080 and all the previous cards under the same xx80-moniker. Why?

 

In the end, it's still as I pointed out from the beginning: Names don't matter and if you put great stock into what something is called instead of what performance you're buying, you just look silly. "Butbutbut the bus width", so what? the 3070 also had a larger bus width than the 4080 12GB. As did all the cards down to the 3060 Ti. "Butbutbut the number of CUDA cores", so what? The 30-series core clocks are almost a GHz lower than the 40-series. There are so many differences and variations that comparing any card to the previous one in any "class" is pointless based on specs alone, let alone the fucking name on the box. Compare performance and buy accordingly, not based on what name you'd like to put into your forum signature.

Great, another Jensen worshipper. I'm talking about the 4080, not the 3070, so why are you even bringing that up in the first place?

The 3080/10 and 3080/12 were already a lame choice, but at least they do not differ in CUDA cores and speed? If they do, I would like to be corrected.

But this 4080 nonsense is on a different level. The performance will highly likely not be the same between the 4080/12 and the 4080/16, so yeah, consumers get fucked by getting the nerfed 4080 instead of getting the 4080, 4080 and/or 4080...


53 minutes ago, CTR640 said:

I still cannot believe nThiefia got the audacity to call the 4070 4080/12. For real, literally everyone who buys the 4080/12 is basically fucked.

One of the most digusting tactic ever. Unfortunately they can get away with that...

They can and they will so long as consumers continue to enable their fuckery by paying for it.  Boycott the shit out of their products that cost over $500, and eventually their bottom line will cause sanity to return.


1 minute ago, IPD said:

They can and they will so long as consumers continue to enable their fuckery by paying for it.  Boycott the shit out of their products that cost over $500, and eventually their bottom line will cause sanity to return.

Exactly. But unfortunately most have got no spine.


2 minutes ago, IPD said:

Boycott the shit out of their products

Good luck

 

Can't believe after 12 pages, we're still talking about the product name lmao


What's happening with the specs? Have they changed or been updated? The 4080/16 has over 10k CUDA cores?

The differences between the 12GB and 16GB versions are big.

 

https://videocardz.com/newz/nvidia-confirms-ada-102-103-104-gpu-specs-ad104-has-more-transistors-than-ga102


23 minutes ago, CTR640 said:

Great, another Jensen worshipper. I'm talking about the 4080, not the 3070, so why are you even bringing that up in the first place?

The 3080/10 and 3080/12 were already a lame choice, but at least they do not differ in CUDA cores and speed? If they do, I would like to be corrected.

But this 4080 nonsense is on a different level. The performance will highly likely not be the same between the 4080/12 and the 4080/16, so yeah, consumers get fucked by getting the nerfed 4080 instead of getting the 4080, 4080 and/or 4080...

The 3080 12GB has 8960 CUDA cores and 280 TMUs, compared to the 3080 10GB with 8704 CUDA cores and 272 TMUs; according to TechPowerUp, the 3080 12GB is 3% faster than the 3080 10GB in relative performance. With the 3080 10GB you got the same GA102 die as the 12GB version.

https://www.techpowerup.com/gpu-specs/geforce-rtx-3080-12-gb.c3834

The difference with the 4080s is that there's no clear naming to indicate the 4080 12GB isn't on the same die as the 4080 16GB. At the least, Nvidia could've called it the 4080 (192) 12GB to clearly show the card isn't just the same card with less VRAM.

16 minutes ago, IPD said:

They can and they will so long as consumers continue to enable their fuckery by paying for it.  Boycott the shit out of their products that cost over $500, and eventually their bottom line will cause sanity to return.

And Nvidia has so many influencers supporting them and promoting their products that average consumers, who are easily influenced, think Nvidia is the only option for gaming. I recall when the 30 series came out, LTT made it seem as if you need the software features the 30 series cards come with, even though most people are just gaming, and laughed off the Radeon 6000 cards as if they were pointless, even though the 6800 XT offered better value in rasterization performance.


1 hour ago, Blademaster91 said:

The naming is important because nothing in the names of the 4080 12GB and the 4080 16GB makes the significant difference in specifications clear, like the gap in CUDA cores or bus bandwidth. The problem is that most people looking at graphics cards on a shelf or at an online retailer aren't going to know the difference, and maybe won't care; they'll just see the 4080 12GB being $300 less and buy that, without knowing they're getting screwed over paying $900 for a cut-down card that uses a completely different die from the real 4080.

Nothing in the naming of any graphics card gives any indication of the distinction between the models, so that problem is still the same regardless of how many cards there are with 4080 in their name. Heck, the fact that the 4080s list their memory in the product name actually gives more indication as to what the difference between the models is than previous generations did, where there was just clinical numbering and nothing else.

 

1 hour ago, CTR640 said:

Great, another Jensen worshipper. I'm talking about the 4080, not the 3070, so why are you even bringing that up in the first place?

The 3080/10 and 3080/12 were already a lame choice, but at least they do not differ in CUDA cores and speed? If they do, I would like to be corrected.

But this 4080 nonsense is on a different level. The performance will highly likely not be the same between the 4080/12 and the 4080/16, so yeah, consumers get fucked by getting the nerfed 4080 instead of getting the 4080, 4080 and/or 4080...

I brought up the 3070 because had Nvidia called the 4080 12GB a 4070, what would've changed? Then you'd be complaining about paying $900 for a 70-class card and we'd be having the same discussion again where I point out repeatedly that names don't matter, that "70-class" means nothing and you should do your research and you guys would be claiming "but it's confusing to anyone picking up a 4070 card for that price, they'd be fucked and overpaying compared to previous generations with the xx70 moniker".

 

And if someone chooses a 4080 12GB knowing what they're getting, how would they end up being fucked? This is why I'm having issues with your argument.

 

And again, here's something I brought up a few pages ago that nobody bothered to comment on (I have my suspicions why) and it's this: If you think the naming is confusing to people who don't bother researching the differences, what makes you think the hitherto used naming convention isn't confusing already? Let's say you get to choose between buying a 2080 and a 3050. You don't do any research at all. Which of the two would you conclude was the better one? And now let's add another wrinkle to the argument: Let's also say you're completely tech-illiterate and you don't know that the first two numbers of that card represent the generation. Any random consumer would therefore conclude that the 3050 was a strictly better card all around, because the number is higher, right? Therefore anyone buying a 3050 for any reason whatsoever is fucked.

 

And this is why I have problems accepting that these complaints about the naming are as altruistic as you people like to make it seem. Fact of the matter is, neither of the cards will just be called a "4080"; the memory designator is part of the name, and if you look up the specs of that name, you'll get the specs of what you're buying, same as you would if they had named the lower-tier card a 4070 or whatever.


1 hour ago, Blademaster91 said:

Nvidia also gets away with not allowing Turing and Ampere card users to use DLSS 3; the hardware is present, but Nvidia wants people to buy RTX 40 series cards to use the latest DLSS, and there are no complaints about this because it's Nvidia.

See my earlier post in this thread where I went over this point. Earlier RTX cards have an implementation of the functionality used, but apparently not of a level to give the benefit the updated version in Ada brings. They're not ruling out bringing DLSS 3 to older RTX cards, but with the current implementation, it would not provide a benefit. If it gets backported, it would be a bonus. DLSS 2 will still work on all regardless.


2 hours ago, HenrySalayne said:

This is just straight-up misinformation. DisplayPort 1.4a is limited to ~26 Gbit/s, which corresponds to 2160p120 at 8-bit or 4320p30 at 8-bit. Anything above that is a compressed signal via DSC. And even with DSC, the limits of DP 1.4a will be reached if the "up to 4 times performance" claims hold true.

DSC is supposed to be perceptually lossless, so 8K60 can be run today, if you can render anything fast enough. I suppose a 3080+ class GPU with DLSS 2 performance mode could probably do that. The 40 series with DLSS 3 potentially raising that might start to justify 8K120.
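A rough sense of how DSC changes the picture, assuming a nominal 3:1 compression ratio here (DSC can go somewhat higher and the exact ratio depends on the mode, so treat this as illustrative only):

# Does 8K fit over DP 1.4a once DSC is in play? (approximate, blanking ignored)
DP14A_PAYLOAD_GBIT = 25.92
DSC_RATIO = 3.0  # assumed nominal compression ratio

def gbit_per_s(width, height, hz, bpc, channels=3):
    return width * height * hz * bpc * channels / 1e9

for name, hz in [("8K60", 60), ("8K120", 120)]:
    raw = gbit_per_s(7680, 4320, hz, 10)  # 10-bit per channel
    compressed = raw / DSC_RATIO
    ok = compressed <= DP14A_PAYLOAD_GBIT
    print(f"{name} 10-bit: {raw:.0f} Gbit/s raw, ~{compressed:.0f} Gbit/s with DSC"
          f" -> {'fits DP 1.4a' if ok else 'still too much'}")

Which lines up with the posts above: 8K60 is doable today over DP 1.4a with DSC, while 8K120 is where the interface itself starts to become the wall.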

 

I feel this is arguing over a detail. 8K is probably going to remain the tiniest of niches within the lifespan of the 40 series. I see 8K TVs exist, and without checking, aren't they going to be all HDMI anyway?

 

2 hours ago, HenrySalayne said:

And then using the lack of DP2.0 displays as the reason for not putting DP2.0 on the 40 series is just ridiculous. The reason we haven't seen DP2.0 displays is the absence of DP2.0 on graphics cards.

Chicken and egg. Who says it has to be Nvidia that's the first mover? I'm seeing talk that RDNA 3 will apparently support DP 2.0, so if that really is a must-have feature, go team red.


3 hours ago, Moonzy said:

Good luck

 

Can't believe after 12 pages, we're still talking about the product name lmao

I always take it as a good sign when the only "controversy" surrounding a product is something so trivial and unimportant as the name. 


6 hours ago, porina said:

but they're already running insane fps. Even without DLSS 3, the 40 series should offer some raw performance uplift if that's wanted. If it's all about speed and precision, I'd guess native rendering will remain the meta for that area. DLSS 3 seems more useful as an RT-heavy quality-of-life upgrade.

There is never enough fps for twitch shooters and other esports shooters.

 

And additionally, the faster we get to 1000 FPS (in whatever way), the faster we get Ready Player One-style VR/AR; there is only one useful speed/cadence of technology progress: faster.


21 minutes ago, Dogzilla07 said:

There is never enough fps for twitch shooters and other esports shooters.

Something like 200 fps is more than enough to make it impossible to tell; the problem is that 0.1% and 1% lows and other spikes are also present, so they always try to push for 500 to reduce the spikes.
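For anyone unfamiliar with how those "lows" are usually reported, here's a tiny sketch using synthetic frame times. One common method is to take the 99th / 99.9th percentile frame time and convert it back to fps; some reviewers instead average the slowest 1% of frames, so exact figures differ between outlets.

import numpy as np

# Synthetic frame-time trace: ~5 ms average (~200 fps) with occasional spikes
rng = np.random.default_rng(0)
frame_times_ms = rng.normal(5.0, 0.8, 20000).clip(2.0, None)
frame_times_ms[::1000] += 15.0   # inject a stutter every 1000 frames

avg_fps = 1000.0 / frame_times_ms.mean()
low_1   = 1000.0 / np.percentile(frame_times_ms, 99)    # "1% low"
low_01  = 1000.0 / np.percentile(frame_times_ms, 99.9)  # "0.1% low"
print(f"avg {avg_fps:.0f} fps, 1% low {low_1:.0f} fps, 0.1% low {low_01:.0f} fps")

The average barely moves when a few stutters are injected, but the 0.1% low collapses, which is why chasing 500 fps averages is really about pulling those worst frames up.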


1 hour ago, ZetZet said:

Something like 200 fps is more than enough to make it impossible to tell; the problem is that 0.1% and 1% lows and other spikes are also present, so they always try to push for 500 to reduce the spikes.

There were tests done for military purposes, and people can detect/notice changes up to 1000 Hz / 1000 FPS. And, a bit separately but in a similar vein, whether based on that research or separate research, Palmer Luckey (the father of modern VR) and others have mentioned 1000 FPS being the minimum for Ready Player One-style AR/VR.


5 hours ago, Avocado Diaboli said:

Why would someone buying that be fucked? Are you implying that anyone currently running a 3070 or below is fucked?

 

I keep coming back to this and I really have to ask: Why does the name bother you? You're told what you're getting, never mind that doing your research on what you're buying is imperative regardless of the name on the box. This may be speculation, but to me, it sure seems like a lot of you complaining about the name are under the impression that calling a card an "80-class card" means something. As in, a 4080 necessarily has to be strictly comparable to a 3080 and all the previous cards under the same xx80-moniker. Why?

 

In the end, it's still as I pointed out from the beginning: Names don't matter and if you put great stock into what something is called instead of what performance you're buying, you just look silly. "Butbutbut the bus width", so what? the 3070 also had a larger bus width than the 4080 12GB. As did all the cards down to the 3060 Ti. "Butbutbut the number of CUDA cores", so what? The 30-series core clocks are almost a GHz lower than the 40-series. There are so many differences and variations that comparing any card to the previous one in any "class" is pointless based on specs alone, let alone the fucking name on the box. Compare performance and buy accordingly, not based on what name you'd like to put into your forum signature.

I think the better term would be that they would be ripped off.

 

The name bothers me because it's a marketing attempt to mislead the general consumer, who is only going to think the difference between it and the 4080 16GB is 4GB of VRAM and $300. Yes, I can also make the argument that everyone who is buying a graphics card should properly educate themselves on what they are buying, of course. However, that doesn't excuse anti-consumer practices that seek to confuse, and ultimately extract more money from, the many who you and I both know won't know the difference, aren't on tech forums, or otherwise.

 

Ampere is functionally the anomaly among the generations of GeForce since the 600 series and Kepler. See, prior to then, the 80-class card was the top card with the full, big chip (except for some instances where they had to cut it down, like the GTX 480). There was no Ti or Titan or 90-class single-GPU card (there were dual-GPU cards that were 90s). Nvidia then shifted their 60-class chip up into the 80-class product for the 600 series, while retaining said 80-class branding and pricing, and thus started the trend of giving the consumer less for more while making the consumer think they were still getting what they did before.

 

If anything, Ada is just a return to business as usual for Nvidia since Kepler, but like Kepler it's another attempt to shift tiers and pricing. You keep saying "hurdur why does it matter"; well, if the past decade has told us anything about what happens when we let Nvidia get away with it, it should tell you we should not let it happen again.


30 minutes ago, Dogzilla07 said:

There were tests done for military purposes, and people can detect/notice changes up to 1000 Hz / 1000 FPS. And, a bit separately but in a similar vein, whether based on that research or separate research, Palmer Luckey (the father of modern VR) and others have mentioned 1000 FPS being the minimum for Ready Player One-style AR/VR.

That military research was about spotting one frame that is wildly different from the rest, not actually spotting differences between frames. And for VR I think input lag is far more important than just raw framerate. Like sure, more is always better, but how much better is it? Human reaction time is in the HUNDREDS of milliseconds.


21 minutes ago, Sir Beregond said:

I think the better term would be that they would be ripped off.

 

The name bothers me because it's a marketing attempt to mislead the general consumer, who is only going to think the difference between it and the 4080 16GB is 4GB of VRAM and $300. Yes, I can also make the argument that everyone who is buying a graphics card should properly educate themselves on what they are buying, of course. However, that doesn't excuse anti-consumer practices that seek to confuse, and ultimately extract more money from, the many who you and I both know won't know the difference, aren't on tech forums, or otherwise.

 

Ampere is functionally the anomaly among the generations of GeForce since the 600 series and Kepler. See, prior to then, the 80-class card was the top card with the full, big chip (except for some instances where they had to cut it down, like the GTX 480). There was no Ti or Titan or 90-class single-GPU card (there were dual-GPU cards that were 90s). Nvidia then shifted their 60-class chip up into the 80-class product for the 600 series, while retaining said 80-class branding and pricing, and thus started the trend of giving the consumer less for more while making the consumer think they were still getting what they did before.

 

If anything, Ada is just a return to business as usual for Nvidia since Kepler, but like Kepler it's another attempt to shift tiers and pricing. You keep saying "hurdur why does it matter"; well, if the past decade has told us anything about what happens when we let Nvidia get away with it, it should tell you we should not let it happen again.

I think you, and others who have replied to me, seem to be under the impression that I condone tricking consumers. I don't. But I just see it from a different perspective: You seem to cling to a designator and think that it should stick for all time to mean the same thing out of some sense that an 80-class GPU represents something, that it means something, anything at all and that that alone is already enough to ensure clarity. I just come forward and accept that this is all just marketing and implore everybody to never fall for that and instead look at the cold hard numbers. The more confusing Nvidia make it for you, the harder you have to look at the specs and be aware of what you're paying for. I'm not oblivious to the fact that Nvidia are once again playing the long game. You can bet your ass that next time around, there'll be only a single 5080, but priced like the 4080 16GB, because hey, there's precedent that the 80-class GPU is worth $1200+. But that's just that, names. The specs don't lie.

 

Also, I find it hilarious that you're trying to claim you're getting less for more. This is still rooted in the mindset that an 80-class GPU is an 80-class GPU is an 80-class GPU, regardless of generation. These names mean nothing. As I've stated a few pages prior, I have friends and coworkers who were convinced that the relevant part of an Intel processor is whether it's an i3, i5 or i7, not any of the numbers after that, and that any i7 will always be superior to any i5 or i3 across generations. It doesn't matter how simple or clear you make this, there will always be someone who will not get it, and the more complicated it is, the likelier it is that people will actually try and double-check that what they're getting is actually what they intend to get. Heck, in my previous comment that you neatly didn't respond to, I once again asked why you consider the current naming convention to be totally clear and not confusing at all. And seemingly nobody can tell me why the way it was before is totally clear to non-techies who don't frequent forums like these, but having two 4080s is now such a problem. To some random consumer, do you honestly think seeing a 3080 or 3080 Ti registers with them? That it represents a meaningful difference?

