
NVIDIA Project Beyond GTC Keynote with CEO Jensen Huang: RTX 4090 + RTX 4080 Revealed

3 minutes ago, Avocado Diaboli said:

I think you, and others who have replied to me, seem to be under the impression that I condone tricking consumers. I don't. But I just see it from a different perspective: You seem to cling to a designator and think that it should stick for all time to mean the same thing out of some sense that an 80-class GPU represents something, that it means something, anything at all and that that alone is already enough to ensure clarity. I just come forward and accept that this is all just marketing and implore everybody to never fall for that and instead look at the cold hard numbers. The more confusing Nvidia make it for you, the harder you have to look at the specs and be aware of what you're paying for. I'm not oblivious to the fact that Nvidia are once again playing the long game. You can bet your ass that next time around, there'll be only a single 5080, but priced like the 4080 16GB, because hey, there's precedent that the 80-class GPU is worth $1200+. But that's just that, names. The specs don't lie.

 

Also, I find this notion hilarious that you're trying to claim that you're getting less for more. This is still rooted in the mindset that an 80-class GPU is an 80-class GPU is an 80-class GPU, regardless of generation. These names mean nothing. As I've stated a few pages prior, I have friends and coworkers who were convinced that the relevant part of an Intel processor is whether it's an i3, i5 or i7, not any of the numbers after that, and that any i7 will always be superior to any i5 or i3 across generations. It doesn't matter how simple or clear you make this, there will always be someone who doesn't get it, and the more complicated it is, the likelier it is that people will actually double-check that what they're getting is what they intend to get. Heck, in my previous comment, which you neatly didn't respond to, I once again asked why you consider the current naming convention to be totally clear and not confusing at all. And seemingly nobody can tell me why the way it was before is totally clear to non-techies who don't frequent forums like these, but having two 4080s is now such a problem.

No, I am not clinging to the idea that an 80-class card in and of itself means anything; clearly it has ranged from flagship to mid-range to high-end. I 100% agree that in the end one needs to look at the cold, hard numbers of what a product is, regardless of its branding. On that basis I don't care that they made the presumed 4070 into the 4080 12GB (though I do care from the stance that it will confuse the average consumer). What I do care about is that the cold, hard numbers, combined with the pricing, show Nvidia yet again attempting to shift tiers and pricing: just look at the chip class, core counts, memory, and memory bus width. That is bad for the consumer.

Zen 3 Daily Rig (2022 - Present): AMD Ryzen 9 5900X + Optimus Foundations AM4 | Nvidia RTX 3080 Ti FE + Alphacool Eisblock 3080 FE | G.Skill Trident Z Neo 32GB DDR4-3600 (@3733 c14) | ASUS Crosshair VIII Dark Hero | 2x Samsung 970 Evo Plus 2TB | Crucial MX500 1TB | Corsair RM1000x | Lian Li O11 Dynamic | LG 48" C1 | EK Quantum Kinetic TBE 200 w/ D5 | HWLabs GTX360 and GTS360 | Bitspower True Brass 14mm | Corsair 14mm White PMMA | ModMyMods Mod Water Clear | 9x BeQuiet Silent Wings 3 120mm PWM High Speed | Aquacomputer Highflow NEXT | Aquacomputer Octo

 

Test Bench: 

CPUs: Intel Core 2 Duo E8400, Core i5-2400, Core i7-4790K, Core i9-10900K, Core i3-13100, Core i9-13900KS

Motherboards: ASUS Z97-Deluxe, EVGA Z490 Dark, EVGA Z790 Dark Kingpin

GPUs: GTX 275 (RIP), 2x GTX 560, GTX 570, 2x GTX 650 Ti Boost, GTX 980, Titan X (Maxwell), 2x HD 6850

Bench: Cooler Master Masterframe 700 (bench mode)

Cooling: Heatkiller IV Pro Pure Copper | Koolance GPU-210 | HWLabs L-Series 360 | XSPC EX360 | Aquacomputer D5 | Bitspower Water Tank Z-Multi 250 | Monsoon Free Center Compressions | Mayhems UltraClear | 9x Arctic P12 120mm PWM PST


1 minute ago, Sir Beregond said:

No, I am not clinging to the idea that an 80-class card in and of itself means anything; clearly it has ranged from flagship to mid-range to high-end. I 100% agree that in the end one needs to look at the cold, hard numbers of what a product is, regardless of its branding. On that basis I don't care that they made the presumed 4070 into the 4080 12GB (though I do care from the stance that it will confuse the average consumer). What I do care about is that the cold, hard numbers, combined with the pricing, show Nvidia yet again attempting to shift tiers and pricing: just look at the chip class, core counts, memory, and memory bus width. That is bad for the consumer.

Sure, but then why complain about the name? I agree the announced cards are too expensive and not worth it for any gamer who doesn't also dabble in GPU compute; you're better off getting a 30-series card, especially a used one now that prices are falling. If the problem here is price, then let's focus on the actual problem: I'm not willing to pay over a grand for a GPU, and neither should you, under any circumstances. But that has nothing to do with the name.



2 hours ago, Dogzilla07 said:

There is never enough FPS for twitch shooters and other esports shooters.

 

And additionally, the faster we get to 1000 FPS (in whatever way), the faster we get Ready Player One-style VR/AR. There is only one useful speed/cadence of technology progress: faster.

Interesting. You must be a machine, and way better than Shroud, who seems to notice zero difference between 144 Hz and 240 Hz.

 

I'm not actually trying to be as grumpy as it seems.

I will find your mentions of Ikea or Gnome and I will /s post. 

Project Hot Box

CPU 13900K, Motherboard Gigabyte Aorus Elite AX, RAM CORSAIR Vengeance 4x16GB 5200 MHz, GPU Zotac RTX 4090 Trinity OC, Case Fractal Pop Air XL, Storage Sabrent Rocket Q4 2TB, CORSAIR Force Series MP510 1920GB NVMe, CORSAIR Force Series MP510 960GB NVMe, PSU CORSAIR HX1000i, Cooling Corsair XC8 CPU block, Bykski GPU block, 360mm and 280mm radiators, Displays Odyssey G9, LG 34UC98-W 34-inch, Keyboard Mountain Everest Max, Mouse Mountain Makalu 67, Sound AT2035, Massdrop 6XX headphones, GoXLR

Oppbevaring

CPU i9-9900K, Motherboard ASUS ROG Maximus Code XI, RAM 48GB Corsair Vengeance LPX 3200 MHz (2x16GB + 2x8GB), GPUs Asus ROG Strix 2070 8GB, PNY 1080, Nvidia 1080, Case Mining Frame, Storage 2x Samsung 860 Evo 500GB, PSU Corsair RM1000x and RM850x, Cooling Asus ROG Ryuo 240 with Noctua NF-12 fans

 

Why is the 5800x so hot?

 

 


1 hour ago, Sir Beregond said:

I think the better term would be that they would be ripped off.

 

The name bothers me because it's a marketing attempt to mislead the general consumer, who is only going to think the difference between it and the 4080 16GB is 4GB of VRAM and $300. Yes, I can also make the argument that everyone buying a graphics card should properly educate themselves on what they are buying, of course. However, that doesn't excuse anti-consumer practices that seek to confuse, and ultimately extract more money from, the many who you and I both know won't know the difference and aren't on tech forums.

 

Ampere is functionally the anomaly among GeForce generations since the 600-series and Kepler. Prior to then, the 80-class card was the top card with the full, big chip (except for some instances where they had to cut it down, like the GTX 480). There was no Ti, Titan, or 90-class single-GPU card (the 90s that existed were dual-GPU cards). Nvidia then shifted their 60-class chip up into the 80-class product for the 600-series, while retaining the 80-class branding and pricing, and thus started the trend of giving the consumer less for more while making them think they were still getting what they got before.

 

If anything, Ada is just a return to business as usual for Nvidia since Kepler, but, like Kepler, it is another attempt to shift tiers and pricing. You keep asking why it matters; if the past decade has told us anything about what happens when we let Nvidia get away with it, it's that we should not let it happen again.

 

1 hour ago, Sir Beregond said:

No, I am not clinging to the idea that an 80-class card in and of itself means anything; clearly it has ranged from flagship to mid-range to high-end. I 100% agree that in the end one needs to look at the cold, hard numbers of what a product is, regardless of its branding. On that basis I don't care that they made the presumed 4070 into the 4080 12GB (though I do care from the stance that it will confuse the average consumer). What I do care about is that the cold, hard numbers, combined with the pricing, show Nvidia yet again attempting to shift tiers and pricing: just look at the chip class, core counts, memory, and memory bus width. That is bad for the consumer.

You are not being ripped off: you are told the model of the card, "4080 12G", you look up benchmarks for that model, and you see where its performance lands.


Just like no one was ripped off by the 970 RAM thing.
EVERYONE bought the card BASED on the benchmarks. When it came out that the last 500MB of RAM was slower than advertised, guess what: it still performed exactly the same as the benchmarks had always shown.

Let's compare die cost between the 4080 12G and the 3070.

4080 12G: AD104, 295 mm², TSMC N4 => roughly $120 per die (0.07 defect density, $18k wafer, an underestimate; it should be about $17k x 1.15)
RAM price at launch: $156 (12 x $13)
Chip and RAM cost to Nvidia = $276
Chip and RAM cost to an AIB = $396
4080 12G TDP => 285 W
Cooler cost exceeds $50


3070: GA104, 392 mm², Samsung 8 nm => roughly $54 per die from Samsung (0.05 defect density, $6k wafer, about 2/3 the price of TSMC N7; someone correct this)
RAM price at launch: $96 (8 x $12)
Chip and RAM cost = $150
Chip and RAM cost to an AIB = $182
3070 TDP => 220 W
The cooler for this card cost just under $50... back in 2020.


Remember, higher TDP means a bigger cooler, more expensive VRMs, and more weight, which means more in shipping. Metal prices have also gone up compared to pre-COVID, so that bigger cooler doesn't scale linearly in price. Shipping cost per pound is higher than it was when Ampere launched, so again, not a linear scale. Retail shelf space is also limited because the boxes are bigger, taking up more of the back room as well.

The PRICE to manufacture and sell the 4080 12G is DOUBLE the price to manufacture and sell the 3070.
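For anyone who wants to sanity-check those per-die numbers, here is a rough Python sketch of the estimate being made: gross dies per wafer from the usual geometric approximation plus a simple Poisson yield model. The wafer prices and defect densities are the ones assumed above (interpreted as defects per cm², the usual convention), not disclosed figures, and salvaging partially defective dies is ignored, so treat the output as ballpark only.

```python
import math

WAFER_DIAMETER_MM = 300.0

def dies_per_wafer(die_area_mm2: float) -> int:
    """Gross die candidates per 300 mm wafer (common approximation incl. edge loss)."""
    radius = WAFER_DIAMETER_MM / 2
    return int(math.pi * radius ** 2 / die_area_mm2
               - math.pi * WAFER_DIAMETER_MM / math.sqrt(2 * die_area_mm2))

def poisson_yield(die_area_mm2: float, defects_per_cm2: float) -> float:
    """Fraction of dies expected to be defect-free (simple Poisson yield model)."""
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)

def cost_per_good_die(die_area_mm2: float, defects_per_cm2: float, wafer_cost_usd: float) -> float:
    good_dies = dies_per_wafer(die_area_mm2) * poisson_yield(die_area_mm2, defects_per_cm2)
    return wafer_cost_usd / good_dies

# Inputs taken from the post above; these are estimates, not disclosed figures.
ad104 = cost_per_good_die(295, 0.07, 18_000)  # "4080 12G" die on TSMC N4
ga104 = cost_per_good_die(392, 0.05, 6_000)   # 3070 die on Samsung 8 nm

print(f"AD104: ~${ad104:.0f} per good die")   # ~$111, in line with the ~$120 quoted
print(f"GA104: ~${ga104:.0f} per good die")   # ~$50, in line with the ~$54 quoted
```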


You guys are mad about the wrong things. Be mad at the focus on RTX, which is why die sizes are so massive relative to their shader gains generation over generation ever since the jump from Pascal to Turing, or something, I don't know.

AIBs make 5% margins. 

MSRP for these cards is not nearly as flexible as some of you think it is. Additional competition will not force Nvidia to tank pricing.
Honestly, Nvidia needs to cut its margins back to 2010 levels so AIBs can breathe again, but that won't swing prices wildly either, so it won't help the consumer's wallet; it would just let AIBs offer better support, like EVGA once gave us.


What are your predictions for the 4090's performance at 4K without DLSS and RT? Let's say in the new COD?


[Chart from the Cybenetics report linked below]

https://www.cybenetics.com/attachs/52.pdf

(Page 27 also has additional details including the above chart)

Am I reading this page wrong? If ATX 3.0 cards do NOT receive anything on the sense pins (i.e. you're using an ATX 2.0 PSU with adapters), they're basically supposed to be limited to 150 W max after boot-up per the spec. So are the RTX 4000-series cards ignoring the spec, given that connectors were pulling over 150 W in some cases and causing issues?

 

Edit:

This is of course assuming the adapter cable doesn't properly short the right pins to ground, or simply lacks the extra sense pins in the case of cheaper cables.
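For context, here is a small sketch of how the 12VHPWR sideband signalling is commonly summarized. The headline value (both pins open means 150 W sustained) matches what the post above describes; the wattages for the other pin combinations are my recollection of the ATX 3.0 / PCIe CEM 5.0 sense-pin table and should be verified against the linked PDF.

```python
# (SENSE0, SENSE1): True = pulled to ground by the PSU/cable, False = left open.
# The mapping of the two middle rows to specific pin combinations is from memory;
# check the table reproduced in the Cybenetics report before relying on it.
SENSE_TABLE = {
    (True,  True):  {"initial_w": 375, "sustained_w": 600},
    (False, True):  {"initial_w": 225, "sustained_w": 450},
    (True,  False): {"initial_w": 150, "sustained_w": 300},
    (False, False): {"initial_w": 100, "sustained_w": 150},  # no sideband wired at all
}

def advertised_power(sense0_grounded: bool, sense1_grounded: bool) -> dict:
    """What the card should allow itself to draw from the connector."""
    return SENSE_TABLE[(sense0_grounded, sense1_grounded)]

# An ATX 2.0 adapter that leaves both sense pins open should cap the card at
# 150 W sustained, which is exactly the concern raised above if cards ignore it.
print(advertised_power(False, False))  # {'initial_w': 100, 'sustained_w': 150}
```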



11 hours ago, IPD said:

They can and they will so long as consumers continue to enable their fuckery by paying for it.  Boycott the shit out of their products that cost over $500, and eventually their bottom line will cause sanity to return.


I doubt they are even making that much NOW with how the economy is. And you want them to sell cards at a loss so they go out of business and we're all screwed?

 

WHO CARES what their name is, they are PRICED per their performance, PERIOD.

CPU: i9-13900KS | Motherboard: Asus Z790 HERO | Graphics: ASUS TUF 4090 OC | RAM: G.Skill 7600 DDR5 | Screen: ASUS 48" OLED 138 Hz


5 hours ago, Vibora said:

What are your predictions for the 4090's performance at 4K without DLSS and RT? Let's say in the new COD?

The CEO said rasterization performance is around 2x and DLSS performance around 4x. 3DMark benchmark leaks show it about 90% faster than a 3090, on liquid nitrogen, so that's pretty awesome if real.

CPU: i9-13900KS | Motherboard: Asus Z790 HERO | Graphics: ASUS TUF 4090 OC | RAM: G.Skill 7600 DDR5 | Screen: ASUS 48" OLED 138 Hz


45 minutes ago, Shzzit said:


I doubt they are even making that much NOW with how the economy is. And you want them to sell cards at a loss so they go out of business and we're all screwed?

 

WHO CARES what their name is, they are PRICED per their performance, PERIOD.

That is an impressive number of assumptions stated as factual data in only three sentences.


2 hours ago, Shzzit said:


I doubt they are even making that much NOW with how the economy is. And you want them to sell cards at a loss so they go out of business and we're all screwed?

 

WHO CARES what their name is, they are PRICED per their performance, PERIOD.

Actually, the issue is that Nvidia isn't giving a rebate to its board partners. A GPU chip, even a fancy model, is like $50 to produce; make that $35-40 before COVID. Even if you double that cost, Nvidia's profit margins are really high. Yes, I know about operating costs and the billions in R&D per architecture, but still. If Nvidia were a kind of co-op with a policy of 5-10% profit, the 4090 would be like $600-700 US.


35 minutes ago, GoodBytes said:

Actually, the issue is that Nvidia isn't giving a rebate to its board partners. A GPU chip, even a fancy model, is like $50 to produce; make that $35-40 before COVID. Even if you double that cost, Nvidia's profit margins are really high. Yes, I know about operating costs and the billions in R&D per architecture, but still. If Nvidia were a kind of co-op with a policy of 5-10% profit, the 4090 would be like $600-700 US.

I love when I give hard numbers and it just gets ignored like this.

The 3070's die, pre-COVID, was over $50 from Samsung.
The 4080 12G's die (the AD104) is $120 from TSMC.


Nvidia's profit margin is 65% (gross, which means it ignores costs like R&D).
[Chart: Nvidia gross margins]

Even if they had 10% gross profit margins, the 4090 would never be $600-700.

The RAM by itself for the 4090 is over $300 today.
The die for the 4090 FROM TSMC, so zero profit for Nvidia at this point, is about $330 with the underestimated $18k wafer cost (it's more, but I don't know Nvidia's bulk discount, if any). You are already over $630 right there just to manufacture the 4090.
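A quick back-of-the-envelope check of that, using only the two figures quoted in this post (the gross-margin formula is the standard one; the dollar inputs are this post's estimates, not a confirmed BOM):

```python
# Sanity check of the "co-op pricing" idea, using only the two BOM items quoted
# in this post (die ~$330, memory ~$300). PCB, VRM, cooler, assembly, shipping,
# AIB margin and retail margin are all ignored, so the real floor is higher.

def price_at_gross_margin(cost: float, gross_margin: float) -> float:
    """Selling price such that (price - cost) / price == gross_margin."""
    return cost / (1.0 - gross_margin)

die_and_ram = 330 + 300  # figures from the post, not a confirmed BOM

for margin in (0.10, 0.35, 0.65):
    price = price_at_gross_margin(die_and_ram, margin)
    print(f"{margin:.0%} gross margin on die + RAM alone -> ${price:.0f}")

# 10% -> ~$700, 65% -> ~$1800: even at "co-op" margins, the silicon and memory
# by themselves already eat the suggested $600-700 price point.
```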

You are very correct about the rebate aspect of it. Nvidia is not doing enough and is just tithing the AIBs at this point, turning them into zombie husks.


16 hours ago, porina said:

DSC is supposed to be perceptually lossless, so 8K60 can be run today, if you can render anything fast enough. I suppose a 3080+ class GPU with DLSS 2 performance mode could probably do that. The 40 series, with DLSS 3 potentially raising that, might start to justify 8K120.

 

The definition of visually lossless might be different than you expect:

[Screenshot: definition of "visually lossless"]

The developers of DSC tested their compression with an ABX test in the early 2010s (yes, that's how old it is). They used high-end monitors of that period, and the subjects had to decide which picture was the compressed one. The limit they set to call it "visually lossless" was an accuracy of 75% (the compressed picture could be identified in "only" 3 out of 4 cases).

It would be quite interesting to know whether this still holds true. They set an upper limit of 3:1 compression, and something like 4320p60 at 12-bit would need to be compressed by roughly a factor of three to not exceed the available bandwidth.
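For reference, here is roughly where that "factor of three" comes from, counting active pixels only and ignoring blanking and protocol overhead beyond line coding, so the real requirement lands a little higher:

```python
def video_bitrate_gbps(width: int, height: int, fps: float,
                       bits_per_component: int, components: int = 3) -> float:
    """Uncompressed RGB bit rate for the active pixels only, in Gbit/s."""
    return width * height * fps * bits_per_component * components / 1e9

# Usable payload after line coding (8b/10b for DP 1.4 HBR3, 16b/18b for HDMI 2.1 FRL).
DP14_HBR3_GBPS = 32.4 * 8 / 10    # ~25.9 Gbit/s
HDMI21_FRL_GBPS = 48.0 * 16 / 18  # ~42.7 Gbit/s

needed = video_bitrate_gbps(7680, 4320, 60, 12)  # 8K60 at 12 bpc: ~71.7 Gbit/s
print(f"8K60 12-bit needs ~{needed:.1f} Gbit/s uncompressed")
print(f"DSC ratio over DP 1.4:   {needed / DP14_HBR3_GBPS:.2f}x")   # ~2.8x, ~3x once blanking is added
print(f"DSC ratio over HDMI 2.1: {needed / HDMI21_FRL_GBPS:.2f}x")  # ~1.7x
```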

 

Latency, on the other hand, is really not an issue. The algorithm compresses each horizontal line individually, resulting in about a two-line lag compared to an uncompressed signal.

16 hours ago, porina said:

I feel this is arguing over a detail. 8k is probably going to remain the tiniest of niches within the lifespan of the 40 series. I see 8k TVs exist, and without checking, are they going to be all HDMI anyway?

When the 1070 launched in 2016, probably nobody thought it would ever drive a 2160p120 display, yet in 2019 that was my monitor upgrade. And it handled it just fine.

16 hours ago, porina said:

Chicken and egg. Who says it has to be Nvidia that's the first mover? I'm seeing talk that RDNA 3 will apparently support DP 2.0, so if that really is a must-have feature, go team red.

Because they were the first to launch a new GPU generation in 2022. We are still waiting for Intel Arc and AMD's RDNA 3.

Intel's Arc has DP2.0 UHBR10, which is the "slowest" DP2.0 standard with 40 GBit/s (around 38 GBit/s usable), but it is still a bump allowing 2160p120@12bit without compression. The middle child (UHBR 13.5) would lift this to 54 GBit/s (around 50 GBit/s usable).

This is not cutting-edge technology. HDMI 2.1 has been around for almost 4 years. It's beyond me why DP2.0 is not found on a $1600 high-end graphics card in 2022.

 


I feel like this thread is once again showing that this forum is full of kids who don't have any idea about the world outside of their small interest bubble.

 

It's like when there was a shortage of GPUs and some people said that nobody was affected by COVID except the GPU market, so clearly it had to be Nvidia's fault. Yet almost every industry had some degree of shortage.

 

Now, the prices of everything are rising dramatically, and yet some people only see their hobby getting more expensive and then come to the conclusion that it's Nvidia's fault.

 

Inflation is crazy everywhere in the world and we are heading towards a recession, yet kids on this forum are mad that they can't afford to buy the latest and greatest graphics card so they can play the latest flavor-of-the-month game at 4K 120 FPS with everything maxed out.

 

Maybe people on this forum need to think about their lives a little bit and realize that maxing out Call of Duty or getting 1000 FPS in CS isn't the most important thing. If you can't afford the latest graphics card, then using the previous generation isn't such a bad thing. Maybe turn some settings down from "ultra" to "high" instead, and you'll get good FPS with much cheaper cards. I doubt many people will notice a difference when actually playing the game and enjoying it for what it is.

 

Don't take this post as a "stop attacking Nvidia". Feel free to complain about the 4080 name for another 13 pages if you want. As I said earlier, I just think it's a funny and good sign when something as trivial as the name gets all the attention. To me it really is an indication that the product is good, because the name doesn't really matter at all. It's just a sticker on the box. If the only negative thing people talk about is what the sticker on the box says, then there can't be that much negative to say about the rest, right?

By the way, I agree that it should have been called the 4070 or something along those lines. I just find it funny that it is such a big issue to some, and seemingly the biggest "issue" people have. 

 

What I want people to stop doing is feeling entitled to the latest and greatest all the time, and then blaming Nvidia, or some other company, when they can't get it. Open your eyes and see that this is not just something happening in the GPU industry. The price of goods in the US has gone up over 13% in the last 12 months and is still rising. Electricity prices are up about 16%. Pretty much everything is getting way more expensive, and when that happens people need to focus on what really matters.


4 minutes ago, HenrySalayne said:

The definition of visually lossless might be different than you expect:

[Screenshot: definition of "visually lossless"]

The developers of DSC tested their compression with an ABX test in the early 2010s (yes, that's how old it is). They used high-end monitors of that period, and the subjects had to decide which picture was the compressed one. The limit they set to call it "visually lossless" was an accuracy of 75% (the compressed picture could be identified in "only" 3 out of 4 cases).

It would be quite interesting to know whether this still holds true. They set an upper limit of 3:1 compression, and something like 4320p60 at 12-bit would need to be compressed by roughly a factor of three to not exceed the available bandwidth.

Please note that the participants would get 2 out of 4 right just by guessing. That's why the threshold is set as high as 75%.

50% is the floor since there is always a 50% chance the subject guesses right, even if they were blindfolded. 


5 minutes ago, LAwLz said:

Please note that the participants would get 2 out of 4 right just by guessing. That's why the threshold is set as high as 75%.

50% is the floor since there is always a 50% chance the subject guesses right, even if they were blindfolded. 

It's a little bit more complicated than that. Yes, simply guessing would result in a 50% success rate. But correctly identifying 1/4 of the compressed images and guessing on the rest would not result in 75%, but 62.5%. To reach 75%, participants would have to correctly identify 50% of the compressed images while simply guessing on the rest.
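Put as a formula: if a fraction p of the compressed images is genuinely detected and the rest are 50/50 guesses, the observed ABX accuracy is p + (1 - p)/2. A tiny sketch of that arithmetic:

```python
def observed_accuracy(true_detection_rate: float) -> float:
    """ABX score when a fraction of images is truly detected and the rest are 50/50 guesses."""
    return true_detection_rate + (1 - true_detection_rate) * 0.5

def implied_detection_rate(accuracy: float) -> float:
    """Inverse: the true detection rate implied by a measured ABX accuracy."""
    return 2 * accuracy - 1

print(observed_accuracy(0.25))       # 0.625 -> spotting 1 in 4 only scores 62.5%
print(observed_accuracy(0.50))       # 0.75  -> the 75% threshold corresponds to
                                     #          genuinely spotting half the images
print(implied_detection_rate(0.75))  # 0.5
```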


21 minutes ago, HenrySalayne said:

The developers of DSC tested their compression with an ABX test in the early 2010s (yes, that's how old it is). They used high-end monitors of that period, and the subjects had to decide which picture was the compressed one. The limit they set to call it "visually lossless" was an accuracy of 75% (the compressed picture could be identified in "only" 3 out of 4 cases).

It would be quite interesting to know whether this still holds true. They set an upper limit of 3:1 compression, and something like 4320p60 at 12-bit would need to be compressed by roughly a factor of three to not exceed the available bandwidth.

A better way to visualise it is that they set the threshold midway between "100% can't tell the difference" and "100% can always tell the difference".

 

This is probably a more strenuous test than is practically needed, since you are presented with the original and compressed images for direct comparison. A more practical scale might be an absolute quality rating: show the sample images individually, without a reference, and obtain a subjective quality score. Does the compressed image result in a lower perceptual score?

 

As a parallel, look at JPEG vs uncompressed. If you set JPEG to its highest quality, from memory the resulting data size is roughly 1/3 to 1/2 that of the uncompressed image (depending on content), but viewed on its own I doubt many could tell it has been compressed at all without analysing and manipulating the image.

 

Also at 8k, the error is going to be finer than ever. Does every pixel need to be perfect? I've kinda argued it doesn't need to be in photography but maybe it could also start to apply to displays at higher resolutions. That's not to say I want obviously wrong pixels, but small level offsets may become more tolerable as pixel density increases.

 

21 minutes ago, HenrySalayne said:

When the 1070 launched in 2016, probably nobody thought it would ever drive a 2160p120 display, yet in 2019 that was my monitor upgrade. And it handled it just fine.

Use it how you like, but I'm going to guess your use case is uncommon. Are you mainly using it as a video out? Does it really benefit from 120 Hz vs 60 Hz? Without knowing what you're doing, I'm going to guess it is a niche use case.

 

21 minutes ago, HenrySalayne said:

Because they were the first to launch a new GPU generation in 2022.

Still see no reason why that mandates DP2 support.

 

19 minutes ago, LAwLz said:

If you can't afford the latest graphics card, then using the previous generation isn't such a bad thing. Maybe turn some settings down from "ultra" to "high" instead, and you'll get good FPS with much cheaper cards. I doubt many people will notice a difference when actually playing the game and enjoying it for what it is.

About two years ago I wanted a 3080 for 4K, but all I could get was a 3070, which I still have and use; I balance settings for quality vs frame rate. New or used, even with the price drops, I'm not feeling a need to get a higher-end GPU. Today a 3070 is more than sufficient at the 1440p level; going higher than that mainly gets justified in the 4K region, or by those who just want to see insane FPS. The vast majority of the market is not represented by a few noisy people on a forum thread.

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


1 hour ago, porina said:

About two years ago I wanted a 3080 for 4K, but all I could get was a 3070, which I still have and use; I balance settings for quality vs frame rate. New or used, even with the price drops, I'm not feeling a need to get a higher-end GPU. Today a 3070 is more than sufficient at the 1440p level; going higher than that mainly gets justified in the 4K region, or by those who just want to see insane FPS. The vast majority of the market is not represented by a few noisy people on a forum thread.

People are just depressed that after two years the price/performance floor was not lifted, or barely lifted. Basically everyone who somehow lucked out and bought a 3000-series GPU near MSRP is still going to be sitting pretty for another year (or two?). Some people, like me, are still hoping RDNA 3 improves things a little bit.

 

 

Edit: Or even worse, if you still don't have a GPU, buying something at MSRP TWO YEARS later is just super depressing.

Location: Kaunas, Lithuania, Europe, Earth, Solar System, Local Interstellar Cloud, Local Bubble, Gould Belt, Orion Arm, Milky Way, Milky Way subgroup, Local Group, Virgo Supercluster, Laniakea, Pisces–Cetus Supercluster Complex, Observable universe, Universe.


12700, B660M Mortar DDR4, 32GB 3200C16 Viper Steel, 2TB SN570, EVGA Supernova G6 850W, be quiet! 500FX, EVGA 3070Ti FTW3 Ultra.

 


10 hours ago, Shzzit said:

The CEO said rasterization performance is around 2x and DLSS performance around 4x. 3DMark benchmark leaks show it about 90% faster than a 3090, on liquid nitrogen, so that's pretty awesome if real.

I mean, I'm relying on his statement... I'd really like to play at 4K 120+ FPS. That's my whole point of upgrading the system.


17 hours ago, starsmine said:

Just like no one was ripped off by the 970 RAM thing.
EVERYONE bought the card BASED on the benchmarks. When it came out that the last 500MB of RAM was slower than advertised, guess what: it still performed exactly the same as the benchmarks had always shown.
 

It wasn't a huge issue until games started needing more than 3.5GB.  Most benchmarks at launch likely didn't show an issue.

AMD 7950x / Asus Strix B650E / 64GB @ 6000c30 / 2TB Samsung 980 Pro Heatsink 4.0x4 / 7.68TB Samsung PM9A3 / 3.84TB Samsung PM983 / 44TB Synology 1522+ / MSI Gaming Trio 4090 / EVGA G6 1000w /Thermaltake View71 / LG C1 48in OLED

Custom water loop EK Vector AM4, D5 pump, Coolstream 420 radiator


4 hours ago, LAwLz said:

 

Now, the prices of everything are rising dramatically, and yet some people only see their hobby getting more expensive and then come to the conclusion that it's Nvidia's fault.

 

A lot of the fault for the prices isn't Nvidia's, BUT:

- Being deceptive is on Nvidia.

- AMD found a way to keep costs down; if Nvidia can't innovate, that is also on them.

AMD 7950x / Asus Strix B650E / 64GB @ 6000c30 / 2TB Samsung 980 Pro Heatsink 4.0x4 / 7.68TB Samsung PM9A3 / 3.84TB Samsung PM983 / 44TB Synology 1522+ / MSI Gaming Trio 4090 / EVGA G6 1000w /Thermaltake View71 / LG C1 48in OLED

Custom water loop EK Vector AM4, D5 pump, Coolstream 420 radiator


17 hours ago, starsmine said:

 

You are not being ripped off: you are told the model of the card, "4080 12G", you look up benchmarks for that model, and you see where its performance lands.

The name itself isn't the problem; the problem is naming two completely different things the same thing and making it look like the only difference is the VRAM. You can say all day long that you can just read the specifications, but the average consumer doesn't know what they are looking at. That is what Nvidia is going after: naive consumers. It is NOT the consumer's fault for being deceived; it is the deceiver's fault. Stop blaming the victim.

AMD 7950x / Asus Strix B650E / 64GB @ 6000c30 / 2TB Samsung 980 Pro Heatsink 4.0x4 / 7.68TB Samsung PM9A3 / 3.84TB Samsung PM983 / 44TB Synology 1522+ / MSI Gaming Trio 4090 / EVGA G6 1000w /Thermaltake View71 / LG C1 48in OLED

Custom water loop EK Vector AM4, D5 pump, Coolstream 420 radiator


4 hours ago, LAwLz said:

Now, the prices of everything are rising dramatically, and yet some people only see their hobby getting more expensive and then come to the conclusion that it's Nvidia's fault.

That is debatable. If you look at other parts of the PC industry, they have come down in price a ton. PSUs, cases, CPU coolers, and storage (both HDD and SSD) have all dropped in price and keep going down.

 

Nvidia just designed their cards for the wrong market. They finalized a design that pushes top-tier performance right as the crash happened, so they moved to a new, smaller, more expensive node and kept the same die size, which costs them a lot of margin. I think a lot of people would be happier with a return to 10-20% performance jumps but a price DECREASE, instead of what Nvidia is doing now.

Location: Kaunas, Lithuania, Europe, Earth, Solar System, Local Interstellar Cloud, Local Bubble, Gould Belt, Orion Arm, Milky Way, Milky Way subgroup, Local Group, Virgo Supercluster, Laniakea, Pisces–Cetus Supercluster Complex, Observable universe, Universe.


12700, B660M Mortar DDR4, 32GB 3200C16 Viper Steel, 2TB SN570, EVGA Supernova G6 850W, be quiet! 500FX, EVGA 3070Ti FTW3 Ultra.

 


Nvidia deserves a colossal amount of blame here, as their "solution" to the shortage was cockamamie re-offerings of previous GPUs. And not even with "OK, we realize this is old tech" pricing.

 

I get that R&D costs something, but that doesn't mean you can market a GPU as if it will run the latest games at ULTRA settings in 2035. You can and do price yourself out of the market. Sure, inflation. Sure, supply-chain issues. But if consumers are already feeling the wallet pinch everywhere, why are you continuing to increase segment prices like it's the dot-com bubble all over again?

 

Hell, offering rehashes of the 30 series at a 30-50% price reduction would go over better than all of the "new hotness that we're going to charge enough of a premium on to push you, unintentionally, up into the next segment".


25 minutes ago, ewitte said:

The name itself isn't the problem; the problem is naming two completely different things the same thing and making it look like the only difference is the VRAM. You can say all day long that you can just read the specifications, but the average consumer doesn't know what they are looking at. That is what Nvidia is going after: naive consumers. It is NOT the consumer's fault for being deceived; it is the deceiver's fault. Stop blaming the victim.

They weren't doing that before either. You still haven't given me an answer to my question of why the current naming scheme is any better. To an uninformed consumer, what makes a 2080 superior to a 3050? Clearly the higher number must mean it's better. Please finally acknowledge this, or stop pretending that the name featuring the VRAM amount somehow suddenly means people won't bother looking up the rest of the specs. The people who don't look them up now already didn't look them up before. Nothing changes.



5 hours ago, LAwLz said:

Now, the prices of everything are rising dramatically, and yet some people only see their hobby getting more expensive and then come to the conclusion that it's Nvidia's fault.

This is very debatable. As others mention, prices on CPUs, cases, mice, keyboards, and monitors have stayed the same or gotten cheaper, while features and performance have increased.

And Nvidia is using the inflation excuse while putting it on the consumer. Nvidia still wants their mining-boom margins but can't have them and is left with a lot of 30-series cards, so the consumer is forced to pay higher prices if they want a graphics card that isn't from two years ago. It is very clear that Nvidia designed these cards for a market in which miners were still buying up every GPU; 40-series pricing and power consumption make very little sense in the current market and with rising power costs.

5 hours ago, LAwLz said:

Don't take this post as a "stop attacking Nvidia". Feel free to complain about the 4080 name for another 13 pages if you want. As I said earlier, I just think it's a funny and good sign when something as trivial as the name gets all the attention.

If you have to say you aren't defending Nvidia, then you're definitely defending Nvidia. The issue isn't just the naming, and I find it interesting that 13 pages later people are still defending Nvidia on the naming while missing the point. It isn't the naming alone: Nvidia is being deceptive in naming two completely different cards as the same product tier. The excuse has been "just look at the specs", except most people aren't going to do that and will buy the cheaper thing. People are going to get screwed buying, for $900, a cut-down 4080 that should be an x70 Ti-tier card at $600-700, and then will have to spend another $200 on a new PSU to use it.

5 hours ago, LAwLz said:

What I want people to stop doing is feeling entitled to the latest and greatest all the time, and then blaming Nvidia, or some other company, when they can't get it. Open your eyes and see that this is not just something happening in the GPU industry. The price of goods in the US has gone up over 13% in the last 12 months and is still rising. Electricity prices are up about 16%. Pretty much everything is getting way more expensive, and when that happens people need to focus on what really matters.

You want to call people entitled, except miners have ruined the market over the past two years, and a lot of people have waited to buy a GPU, yet after two years price to performance hasn't improved, and power consumption went up, which means more expensive coolers and needing a larger case and a new PSU. I think that is on Nvidia; they could have focused on cards more reasonable for a market where people aren't spending $1000+ on GPUs and don't want a card drawing 400 W+ of power. AIBs also take a hit when they have to design, manufacture, and ship cards with larger coolers, but Nvidia makes very high profit margins, so of course they don't care about the board partners.

And inflation really isn't a valid excuse: 13% inflation doesn't explain Nvidia raising the price of the x80 tier by 42% compared to the launch price of the 3080 10GB, if you want to compare x80-tier cards, even though the 12GB version isn't a real x80-class card in CUDA cores or bus width compared to the 16GB version.

