
NVIDIA Project Beyond GTC Keynote with CEO Jensen Huang: RTX 4090 + RTX 4080 Revealed

53 minutes ago, CTR640 said:

I still cannot believe nThiefia has the audacity to call the 4070 a 4080/12. For real, literally everyone who buys the 4080/12 is basically fucked.

One of the most disgusting tactics ever. Unfortunately they can get away with it...

They can and they will so long as consumers continue to enable their fuckery by paying for it.  Boycott the shit out of their products that cost over $500, and eventually their bottom line will cause sanity to return.


1 minute ago, IPD said:

They can and they will so long as consumers continue to enable their fuckery by paying for it.  Boycott the shit out of their products that cost over $500, and eventually their bottom line will cause sanity to return.

Exactly. But unfortunately most people have no spine.



2 minutes ago, IPD said:

Boycott the shit out of their products

Good luck

 

Can't believe after 12 pages, we're still talking about the product name lmao



What's happening with the specs? Were they changed or updated? The 4080/16 has over 10k CUDA cores?

The differences between the 12GB and 16GB versions are big.

 

https://videocardz.com/newz/nvidia-confirms-ada-102-103-104-gpu-specs-ad104-has-more-transistors-than-ga102



23 minutes ago, CTR640 said:

Great, another Jensen worshipper. I'm talking about the 4080, not the 3070, so why are you even talking about that in the first place?

The 3080/10 and 3080/12 were already a lame choice, but at least they don't differ much in CUDA cores and speed? If they do, I'd like to be corrected.

But this 4080 nonsense is on a different level. The performance will very likely not be the same between the 4080/12 and the 4080/16, so yeah, consumers get fucked by getting the nerfed 4080 instead of getting the 4080, the 4080, and/or the 4080...

The 3080 12GB has 8960 CUDA cores and 280 TMUs, compared to the 3080 10GB with 8704 CUDA cores and 272 TMUs; according to TechPowerUp, the 3080 12GB is 3% faster than the 3080 10GB in relative performance. With the 3080 10GB you got the same GA102 die as the 12GB version.

https://www.techpowerup.com/gpu-specs/geforce-rtx-3080-12-gb.c3834

The difference with the two 4080s is that there's nothing in the naming to indicate the 4080 12GB isn't on the same die as the 4080 16GB. At the very least Nvidia could've called it the 4080 (192-bit) 12GB to clearly show the card isn't just the same card with less VRAM.
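As a rough sanity check on those figures (a back-of-the-envelope sketch only, assuming performance scales roughly with core count at similar clocks; the counts are the ones quoted above):

```python
# RTX 3080 12GB vs RTX 3080 10GB: how big is the on-paper gap?
cores_12gb, cores_10gb = 8960, 8704
tmus_12gb, tmus_10gb = 280, 272

core_gain = cores_12gb / cores_10gb - 1
tmu_gain = tmus_12gb / tmus_10gb - 1

print(f"CUDA core advantage: {core_gain:.1%}")  # ~2.9%
print(f"TMU advantage:       {tmu_gain:.1%}")   # ~2.9%
# Both land right around the ~3% relative-performance figure TechPowerUp reports.
```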

16 minutes ago, IPD said:

They can and they will so long as consumers continue to enable their fuckery by paying for it.  Boycott the shit out of their products that cost over $500, and eventually their bottom line will cause sanity to return.

And Nvidia has so many influencers supporting them and promoting their products; average consumers are so easily influenced that they think Nvidia is the only option for gaming. I recall when the 30 series came out, LTT made it seem as if you needed the software features the 30-series cards come with, even though most people are just gaming, and laughed off the Radeon 6000 cards as if they were pointless, even though the 6800 XT offered better value in rasterization performance.


1 hour ago, Blademaster91 said:

The naming is important because nothing in the names "4080 12GB" and "4080 16GB" makes the significant differences in specifications clear, like the large gap in CUDA cores or bus width. The problem is that most people looking at graphics cards on a shelf or at an online retailer aren't going to know the difference and maybe won't care; they'll just see the 4080 12GB being $300 less and buy that, without knowing they're getting screwed over paying $900 for a cut-down card that uses a completely different die from the real 4080.

Nothing in the naming of any graphics card gives any indication of the distinction between models, so that problem stays the same regardless of how many cards there are with 4080 in their name. Heck, the fact that the 4080s list their memory in the product name actually gives more of an indication of the difference between the models than previous generations, where there was just clinical numbering and nothing else.

 

1 hour ago, CTR640 said:

Great, another Jensen worshipper. I'm talking about the 4080, not the 3070, so why are you even talking about that in the first place?

The 3080/10 and 3080/12 were already a lame choice, but at least they don't differ much in CUDA cores and speed? If they do, I'd like to be corrected.

But this 4080 nonsense is on a different level. The performance will very likely not be the same between the 4080/12 and the 4080/16, so yeah, consumers get fucked by getting the nerfed 4080 instead of getting the 4080, the 4080, and/or the 4080...

I brought up the 3070 because, had Nvidia called the 4080 12GB a 4070, what would've changed? Then you'd be complaining about paying $900 for a 70-class card and we'd be having the same discussion again, where I point out repeatedly that names don't matter, that "70-class" means nothing and you should do your research, and you guys would be claiming "but it's confusing to anyone picking up a 4070 card for that price, they'd be fucked and overpaying compared to previous generations with the xx70 moniker".

 

And if someone chooses a 4080 12GB knowing what they're getting, how would they end up being fucked? This is why I'm having issues with your argument.

 

And again, here's something I brought up a few pages ago that nobody bothered to comment on (I have my suspicions why), and it's this: if you think the naming is confusing to people who don't bother researching the differences, what makes you think the naming convention used up to now isn't confusing already? Let's say you get to choose between buying a 2080 and a 3050. You don't do any research at all. Which of the two would you conclude is the better one? And now let's add another wrinkle to the argument: let's also say you're completely tech-illiterate and don't know that the first two digits of the name represent the generation. Any random consumer would therefore conclude that the 3050 is a strictly better card all around, because the number is higher, right? Therefore anyone buying a 3050 for any reason whatsoever is fucked.

 

And this is why I have problems accepting that these complaints about the naming are as altruistic as you people like to make them seem. The fact of the matter is that neither card will just be called a "4080"; the memory designator is part of the name, and if you look up the specs of that name, you'll get the specs of what you're buying, same as you would if they had named the lower-tier card a 4070 or whatever.



1 hour ago, Blademaster91 said:

Nvidia also gets away with not allowing Turing and Ampere card users to use DLSS 3; the hardware is present, but Nvidia wants people to buy RTX 40 series cards to use the latest DLSS. No complaints about this, because Nvidia.

See my earlier post in this thread where I went over this point. Earlier RTX cards have an implementation of the hardware used, but apparently not at a level that gives the benefit the updated version in Ada brings. They're not ruling out bringing DLSS 3 to older RTX cards, but with the current implementation it would not provide a benefit. If it gets backported, it would be a bonus. DLSS 2 will still work on all of them regardless.



2 hours ago, HenrySalayne said:

This is just straight-up misinformation. DisplayPort 1.4a is limited to ~26 Gbit/s, i.e. 2160p120 @ 8-bit or 4320p30 @ 8-bit. Anything above that is a compressed signal via DSC. And even with DSC the limits of DP 1.4a will be reached if the "up to 4 times performance" claims hold true.

DSC is supposed to be perceptually lossless, so 8K60 can be run today, if you can render anything fast enough. I'd guess a 3080+ class GPU with DLSS 2 performance mode could probably do that. The 40 series with DLSS 3 potentially raising that might start to justify 8K120.
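For anyone who wants to check the bandwidth math, here's a minimal sketch (it assumes DP 1.4a HBR3, 4 lanes at 8.1 Gbit/s with 8b/10b encoding for ~25.92 Gbit/s of payload, and ignores blanking overhead, which costs a bit more in practice):

```python
# Uncompressed video data rate vs the effective DP 1.4a link budget.
DP14A_PAYLOAD_GBPS = 4 * 8.1 * 8 / 10  # 4 lanes x 8.1 Gbit/s, 8b/10b -> 25.92 Gbit/s

def raw_gbps(width, height, fps, bits_per_channel=8, channels=3):
    """Pixel data rate in Gbit/s, ignoring blanking and protocol overhead."""
    return width * height * fps * bits_per_channel * channels / 1e9

modes = {
    "2160p120 8-bit": (3840, 2160, 120),
    "4320p30 8-bit":  (7680, 4320, 30),
    "4320p60 8-bit":  (7680, 4320, 60),
}
for name, (w, h, fps) in modes.items():
    rate = raw_gbps(w, h, fps)
    verdict = "fits" if rate <= DP14A_PAYLOAD_GBPS else "needs DSC"
    print(f"{name}: {rate:.1f} Gbit/s -> {verdict}")
```

The first two modes come out around 23.9 Gbit/s, right at the uncompressed ceiling quoted above; 8K60 and anything faster only works over DP 1.4a with DSC.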

 

I feel this is arguing over a detail. 8K is probably going to remain the tiniest of niches within the lifespan of the 40 series. I see 8K TVs exist, but without checking, aren't they going to be all HDMI anyway?

 

2 hours ago, HenrySalayne said:

And then using the lack of DP2.0 displays as the reason for not putting DP2.0 on the 40 series is just ridiculous. The reason we haven't seen DP2.0 displays is the absence of DP2.0 on graphics cards.

Chicken and egg. Who says it has to be Nvidia that's the first mover? I'm seeing talk that RDNA 3 will apparently support DP 2.0, so if that really is a must-have feature, go team red.



3 hours ago, Moonzy said:

Good luck

 

Can't believe after 12 pages, we're still talking about the product name lmao

I always take it as a good sign when the only "controversy" surrounding a product is something so trivial and unimportant as the name. 


6 hours ago, porina said:

but they're already running insane fps. Even without DLSS 3, the 40 series should offer some raw performance uplift if it is wanted. If it is all about speed and precision, I'd guess native rendering will remain the meta for that area. DLSS 3.0 seems more useful as a quality-of-life upgrade for RT-heavy games.

There is never enough fps for twitch shooters and other esports shooters.

 

And additionally, the faster we get to 1000 fps (in whatever way), the faster we get Ready Player One-style VR/AR. There is only one useful speed/cadence of technology progress: faster.


21 minutes ago, Dogzilla07 said:

There is never enough fps for twitch shooters and other esports shooters.

Something like 200 fps is more than enough to make it impossible to tell; the problem is that 0.1% and 1% lows and other spikes are also present, so they always try to push for 500 to reduce the spikes.
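To put rough numbers on why averages hide that (an illustrative sketch with made-up frame times, not measured data):

```python
# Average fps looks fine; a single slow frame is what you actually notice.
frame_times_ms = [5.0] * 99 + [15.0]  # 99 smooth frames at 5 ms, one 15 ms hitch

avg_fps = 1000 / (sum(frame_times_ms) / len(frame_times_ms))
low_1pct_fps = 1000 / max(frame_times_ms)  # crude "1% low" for a 100-frame sample

print(f"average fps: {avg_fps:.0f}")      # ~196 fps
print(f"1% low fps:  {low_1pct_fps:.0f}")  # ~67 fps, the spike you feel
```

Pushing the average toward 500 fps mostly matters because it drags those worst frames down with it.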



1 hour ago, ZetZet said:

Something like 200 fps is more than enough to make it impossible to tell; the problem is that 0.1% and 1% lows and other spikes are also present, so they always try to push for 500 to reduce the spikes.

There were tests done for military purposes, and people can detect/notice changes up to 1000 Hz / 1000 fps. And in a similar vein, whether based on that research or separate research, Palmer Luckey (the father of modern VR) and others have mentioned 1000 fps being the minimum for Ready Player One-style AR/VR.


5 hours ago, Avocado Diaboli said:

Why would someone buying that be fucked? Are you implying that anyone currently running a 3070 or below is fucked?

 

I keep coming back to this and I really have to ask: Why does the name bother you? You're told what you're getting, never mind that doing your research on what you're buying is imperative regardless of the name on the box. This may be speculation, but to me, it sure seems like a lot of you complaining about the name are under the impression that calling a card an "80-class card" means something. As in, a 4080 necessarily has to be strictly comparable to a 3080 and all the previous cards under the same xx80-moniker. Why?

 

In the end, it's still as I pointed out from the beginning: names don't matter, and if you put great stock into what something is called instead of what performance you're buying, you just look silly. "Butbutbut the bus width", so what? The 3070 also had a larger bus width than the 4080 12GB, as did all the cards down to the 3060 Ti. "Butbutbut the number of CUDA cores", so what? The 30-series core clocks are almost a GHz lower than the 40-series. There are so many differences and variations that comparing any card to the previous one in any "class" is pointless based on specs alone, let alone the fucking name on the box. Compare performance and buy accordingly, not based on what name you'd like to put in your forum signature.

I think the better term would be that they would be ripped off.

 

The name bothers me because it's a marketing attempt to mislead the general consumer, who is only going to think the difference between it and the 4080 16GB is 4GB of VRAM and $300. Yes, I can also make the argument that everyone buying a graphics card should properly educate themselves on what they are buying, of course. However, that doesn't excuse anti-consumer practices that seek to confuse, and ultimately extract more money from, the many who you and I both know won't know the difference and aren't on tech forums.

 

Ampere is functionally the anomaly among GeForce generations since the 600 series and Kepler. See, prior to then, the 80-class card was the top card with the full, big chip (except for some instances where they had to cut it down, like the GTX 480). There was no Ti or Titan or 90-class single-GPU card (there were dual-GPU cards that were 90s). Nvidia then shifted their 60-class chip up into the 80-class product for the 600 series, while retaining the 80-class branding and pricing, and thus started the trend of giving the consumer less for more while making the consumer think they were still getting what they did before.

 

If anything, Ada is just a return to business as usual for Nvidia since Kepler, but like Kepler it's another attempt to shift tiers and pricing. You keep saying "hurdur, why does it matter"; well, if the past decade has told us anything about what happens when we let Nvidia get away with it, it should tell you we should not let it happen again.



30 minutes ago, Dogzilla07 said:

There were tests done for military purposes, and people can detect/notice changes up to 1000 Hz / 1000 fps. And in a similar vein, whether based on that research or separate research, Palmer Luckey (the father of modern VR) and others have mentioned 1000 fps being the minimum for Ready Player One-style AR/VR.

That military research was about spotting one frame that is wildly different from the rest, not actually spotting differences between frames. And for VR I think input lag is far more important than just raw framerate. Sure, more is always better, but how much better is it? Human reaction time is in the HUNDREDS of milliseconds.



21 minutes ago, Sir Beregond said:

I think the better term would be that they would be ripped off.

 

The name bothers me because it's a marketing attempt to mislead the general consumer, who is only going to think the difference between it and the 4080 16GB is 4GB of VRAM and $300. Yes, I can also make the argument that everyone buying a graphics card should properly educate themselves on what they are buying, of course. However, that doesn't excuse anti-consumer practices that seek to confuse, and ultimately extract more money from, the many who you and I both know won't know the difference and aren't on tech forums.

 

Ampere is functionally the anomaly among GeForce generations since the 600 series and Kepler. See, prior to then, the 80-class card was the top card with the full, big chip (except for some instances where they had to cut it down, like the GTX 480). There was no Ti or Titan or 90-class single-GPU card (there were dual-GPU cards that were 90s). Nvidia then shifted their 60-class chip up into the 80-class product for the 600 series, while retaining the 80-class branding and pricing, and thus started the trend of giving the consumer less for more while making the consumer think they were still getting what they did before.

 

If anything, Ada is just a return to business as usual for Nvidia since Kepler, but like Kepler it's another attempt to shift tiers and pricing. You keep saying "hurdur, why does it matter"; well, if the past decade has told us anything about what happens when we let Nvidia get away with it, it should tell you we should not let it happen again.

I think you, and others who have replied to me, seem to be under the impression that I condone tricking consumers. I don't. I just see it from a different perspective: you seem to cling to a designator and think that it should mean the same thing for all time, out of some sense that an 80-class GPU represents something, that it means anything at all, and that that alone is already enough to ensure clarity. I just come out and accept that this is all just marketing, and implore everybody to never fall for it and instead look at the cold hard numbers. The more confusing Nvidia make it for you, the harder you have to look at the specs and be aware of what you're paying for. I'm not oblivious to the fact that Nvidia are once again playing the long game. You can bet your ass that next time around there'll be only a single 5080, but priced like the 4080 16GB, because hey, there's precedent that the 80-class GPU is worth $1200+. But that's just that: names. The specs don't lie.

 

Also, I find it hilarious that you're trying to claim you're getting less for more. This is still rooted in the mindset that an 80-class GPU is an 80-class GPU is an 80-class GPU, regardless of generation. These names mean nothing. As I've stated a few pages prior, I have friends and coworkers who were convinced that the relevant part of an Intel processor is whether it's an i3, i5 or i7, not any of the numbers after that, and that any i7 will always be superior to any i5 or i3 across generations. It doesn't matter how simple or clear you make this, there will always be someone who will not get it, and the more complicated it is, the likelier it is that people will actually double-check that what they're getting is what they intend to get. Heck, in my previous comment that you neatly didn't respond to, I once again asked why you think the current naming convention is totally clear and not confusing at all. And seemingly nobody can tell me why the way it was before is totally clear to non-techies who don't frequent forums like these, but having two 4080s is now such a problem. To some random consumer, do you honestly think seeing a 3080 or a 3080 Ti registers with them? That it represents a meaningful difference?



3 minutes ago, Avocado Diaboli said:

I think you, and others who have replied to me, seem to be under the impression that I condone tricking consumers. I don't. I just see it from a different perspective: you seem to cling to a designator and think that it should mean the same thing for all time, out of some sense that an 80-class GPU represents something, that it means anything at all, and that that alone is already enough to ensure clarity. I just come out and accept that this is all just marketing, and implore everybody to never fall for it and instead look at the cold hard numbers. The more confusing Nvidia make it for you, the harder you have to look at the specs and be aware of what you're paying for. I'm not oblivious to the fact that Nvidia are once again playing the long game. You can bet your ass that next time around there'll be only a single 5080, but priced like the 4080 16GB, because hey, there's precedent that the 80-class GPU is worth $1200+. But that's just that: names. The specs don't lie.

 

Also, I find it hilarious that you're trying to claim you're getting less for more. This is still rooted in the mindset that an 80-class GPU is an 80-class GPU is an 80-class GPU, regardless of generation. These names mean nothing. As I've stated a few pages prior, I have friends and coworkers who were convinced that the relevant part of an Intel processor is whether it's an i3, i5 or i7, not any of the numbers after that, and that any i7 will always be superior to any i5 or i3 across generations. It doesn't matter how simple or clear you make this, there will always be someone who will not get it, and the more complicated it is, the likelier it is that people will actually double-check that what they're getting is what they intend to get. Heck, in my previous comment that you neatly didn't respond to, I once again asked why you think the current naming convention is totally clear and not confusing at all. And seemingly nobody can tell me why the way it was before is totally clear to non-techies who don't frequent forums like these, but having two 4080s is now such a problem.

No, I am not clinging to the idea that 80-class in and of itself means anything, because clearly it has ranged from flagship to mid-range to high-end. I 100% agree that in the end one needs to look at the cold hard numbers of what a product is, regardless of its branding. On that level I don't care that they made the presumed 4070 into the 4080 12GB (I do care from the stance that it will confuse the average consumer). What I do care about is that, looking at the cold, hard numbers combined with pricing (chip class, core counts, memory, and memory bus width), it is yet again Nvidia attempting to shift tiers and pricing, and that is bad for the consumer.



1 minute ago, Sir Beregond said:

No, I am not clinging to the idea that 80-class in and of itself means anything, because clearly it has ranged from flagship to mid-range to high-end. I 100% agree that in the end one needs to look at the cold hard numbers of what a product is, regardless of its branding. On that level I don't care that they made the presumed 4070 into the 4080 12GB (I do care from the stance that it will confuse the average consumer). What I do care about is that, looking at the cold, hard numbers combined with pricing (chip class, core counts, memory, and memory bus width), it is yet again Nvidia attempting to shift tiers and pricing, and that is bad for the consumer.

Sure, but then why complain about the name? I agree, the announced cards are too expensive and not worth it for any gamer who doesn't also dabble in GPU compute; you're better off getting a 30-series card, especially a used one now that prices are falling. If the problem here is price, then let's focus on the actual problem: I'm not willing to pay over a grand for a GPU. And neither should you, under any circumstances. But this has nothing to do with the name.



2 hours ago, Dogzilla07 said:

There is never enough fps for twitch shooters and other esports shooters.

 

And additionally, the faster we get to 1000 fps (in whatever way), the faster we get Ready Player One-style VR/AR. There is only one useful speed/cadence of technology progress: faster.

Interesting. You must be a machine, and way better than shroud, who seems to notice zero difference between 144 and 240.

 



1 hour ago, Sir Beregond said:

I think the better term would be that they would be ripped off.

 

The name bothers me because it's a marketing attempt to mislead the general consumer, who is only going to think the difference between it and the 4080 16GB is 4GB of VRAM and $300. Yes, I can also make the argument that everyone buying a graphics card should properly educate themselves on what they are buying, of course. However, that doesn't excuse anti-consumer practices that seek to confuse, and ultimately extract more money from, the many who you and I both know won't know the difference and aren't on tech forums.

 

Ampere is functionally the anomaly among GeForce generations since the 600 series and Kepler. See, prior to then, the 80-class card was the top card with the full, big chip (except for some instances where they had to cut it down, like the GTX 480). There was no Ti or Titan or 90-class single-GPU card (there were dual-GPU cards that were 90s). Nvidia then shifted their 60-class chip up into the 80-class product for the 600 series, while retaining the 80-class branding and pricing, and thus started the trend of giving the consumer less for more while making the consumer think they were still getting what they did before.

 

If anything, Ada is just a return to business as usual for Nvidia since Kepler, but like Kepler it's another attempt to shift tiers and pricing. You keep saying "hurdur, why does it matter"; well, if the past decade has told us anything about what happens when we let Nvidia get away with it, it should tell you we should not let it happen again.

 

1 hour ago, Sir Beregond said:

No, I am not clinging to the idea that 80-class in and of itself means anything, because clearly it has ranged from flagship to mid-range to high-end. I 100% agree that in the end one needs to look at the cold hard numbers of what a product is, regardless of its branding. On that level I don't care that they made the presumed 4070 into the 4080 12GB (I do care from the stance that it will confuse the average consumer). What I do care about is that, looking at the cold, hard numbers combined with pricing (chip class, core counts, memory, and memory bus width), it is yet again Nvidia attempting to shift tiers and pricing, and that is bad for the consumer.

You are not being ripped off. You are told the model of card, "4080 12G", you look up benchmarks for that model, and you find the performance it delivers.


Just like no one was ripped off with the 970 RAM thing. EVERYONE bought the card based off the benchmarks. When it came out that the last 500MB of RAM was slower than advertised, guess what: it still performed exactly the same as the benchmarks always showed.

Let's compare die cost between the 4080 12GB and the 3070:

4080 12GB: AD104, 295 mm², TSMC N4 => ~$120 per die at a 0.07 defect density, ~$18k per wafer (an underestimate; it should be ~$17k x 1.15)
RAM price at launch: $156 (12 x $13)
Just chip and RAM for Nvidia = $276
Just chip and RAM for an AIB = $396
4080 12GB TDP => 285W
Cooler costs exceed $50

3070: GA104, 392 mm², Samsung 8nm => ~$54 per die at a 0.05 defect density, ~$6k per wafer (about 2/3 the price of TSMC N7; someone correct this)
RAM price at launch: $96 (8 x $12)
Just chip and RAM = $150
Just chip and RAM for an AIB = $182
3070 TDP => 220W
The cooler for this card cost just under $50... back in 2020.
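For context, those per-die prices look like the usual dies-per-wafer plus defect-density yield math. Here's a minimal sketch of that model (Poisson yield on a 300 mm wafer, defect density taken as defects per cm²; the die sizes, defect densities, and wafer prices are the post's assumptions above, not confirmed foundry figures):

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Classic gross dies-per-wafer estimate with a simple edge-loss term."""
    r = wafer_diameter_mm / 2
    return math.floor(math.pi * r**2 / die_area_mm2
                      - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def die_yield(die_area_mm2, defects_per_cm2):
    """Poisson yield model: probability a die has zero killer defects."""
    return math.exp(-(die_area_mm2 / 100) * defects_per_cm2)

def cost_per_good_die(die_area_mm2, defects_per_cm2, wafer_cost_usd):
    good_dies = dies_per_wafer(die_area_mm2) * die_yield(die_area_mm2, defects_per_cm2)
    return wafer_cost_usd / good_dies

# Assumed inputs from the comparison above (illustrative only):
print(f"AD104, 295 mm^2, 0.07/cm^2, $18k wafer: ~${cost_per_good_die(295, 0.07, 18000):.0f} per good die")
print(f"GA104, 392 mm^2, 0.05/cm^2, $6k wafer:  ~${cost_per_good_die(392, 0.05, 6000):.0f} per good die")
```

That lands in the same ballpark as the ~$120 and ~$54 figures above; dies with recoverable defects that get sold as cut-down SKUs would lower the effective cost further.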


Remember, a higher TDP means a bigger cooler, more expensive VRMs, and more weight, which means more in shipping. Metal prices have also gone up compared to pre-COVID, so that bigger cooler doesn't scale linearly in price. Shipping cost per pound is higher than it was when Ampere launched, so again, not a linear scale. Retail shelf space is limited too, because the boxes are bigger and take up more of the back room as well.

The PRICE to manufacture and sell the 4080 12GB is DOUBLE that of the 3070.


You guys are mad about the wrong things. Be mad at the focus on RTX, which is why die sizes are so massive relative to their shader gains gen over gen ever since the jump from Pascal to Turing, or something, idk.

AIBs make 5% margins. 

MSRP for these cards is not nearly as flexible as some of you people think it is. Additional competition will not force Nvidia to tank pricing.
Honestly, Nvidia needs to cut margins back to 2010 levels so AIBs can breathe again, but that won't produce a wild swing in price either, so it won't help the consumer's wallet; it would just let AIBs offer better support, like EVGA once did for us.


What are your predictions for 4090 performance at 4K without DLSS and RT? Let's say in the new COD?


[attached chart from the Cybenetics document linked below]

https://www.cybenetics.com/attachs/52.pdf

(Page 27 also has additional details including the above chart)

Am I reading this page wrong? If ATX 3.0 cards do NOT receive anything on the sense pins (i.e. you're using an ATX 2.0 PSU with adapters), they're basically supposed to be limited to 150 watts max after boot-up per the spec. So are RTX 4000 series cards ignoring the spec, given that connectors were pulling over 150W in some cases and causing issues?

 

Edit:

This is of course assuming the adapter cables don't properly short the right pins to ground, or don't have the extra sense pins at all in the case of cheaper cables.
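For reference, here's how I read the 12VHPWR sideband logic being discussed. Treat the exact middle-row encodings as an assumption and check the spec or the linked document; the "both sense pins open = 150 W" case is the one at issue here:

```python
# Sketch of the ATX 3.0 / PCIe 5.0 12VHPWR sense-pin decode (my reading of the spec).
# Key: (sense0_grounded, sense1_grounded) -> maximum sustained power in watts.
SENSE_POWER_LIMIT_W = {
    (True,  True):  600,  # both pins grounded
    (True,  False): 450,  # middle-row ordering is an assumption, verify against the spec
    (False, True):  300,
    (False, False): 150,  # both pins open, e.g. an adapter with no sense wires
}

def allowed_power_w(sense0_grounded: bool, sense1_grounded: bool) -> int:
    return SENSE_POWER_LIMIT_W[(sense0_grounded, sense1_grounded)]

# ATX 2.0 PSU through a cheap adapter that leaves the sense pins floating:
print(allowed_power_w(False, False))  # 150 -> a card sustaining more than this would be ignoring the limit
```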



11 hours ago, IPD said:

They can and they will so long as consumers continue to enable their fuckery by paying for it.  Boycott the shit out of their products that cost over $500, and eventually their bottom line will cause sanity to return.


I doubt they are even making that much NOW, with how the economy is. And you want them to sell cards at a loss so they go out of business and we're all screwed?

 

WHO CARES what the name is; they are PRICED per their performance, PERIOD.



5 hours ago, Vibora said:

What are your predictions for 4090 performance at 4K without DLSS and RT? Let's say in the new COD?

The CEO guy said rasterization performance is around 2x and DLSS around 4x. 3DMark benchmark leaks show about 90% faster than a 3090 on liquid nitrogen, so that's pretty awesome if real.



45 minutes ago, Shzzit said:


I doubt they are even making that much NOW, with how the economy is. And you want them to sell cards at a loss so they go out of business and we're all screwed?

 

WHO CARES what the name is; they are PRICED per their performance, PERIOD.

That is an impressive number of assumptions, without any factual data, in only three sentences.


2 hours ago, Shzzit said:


I doubt they are even making that much NOW, with how the economy is. And you want them to sell cards at a loss so they go out of business and we're all screwed?

 

WHO CARES what the name is; they are PRICED per their performance, PERIOD.

Actually, the issue is that Nvidia isn't giving a rebate to its board partners. A GPU chip, even a fancy model, is like $50 to produce; make that $35-40 before COVID. Even if you double the cost, Nvidia's profit margins are really high. Yes, I know, operating costs and the billions in R&D per architecture, but still. If Nvidia were a type of co-op with a policy of 5-10% profit, the 4090 would be like $600-700 US.

