
NVIDIA Project Beyond GTC Keynote with CEO Jensen Huang: RTX 4090 + RTX 4080 Revealed

I for one am not going to feed the unrealistic Wall Street greed this time - I'm out and will not be buying the 40 series.


8 hours ago, HenrySalayne said:

The definition of visually lossless might be different from what you expect:


The developers of DSC tested their compression with an ABX test in the early 2010s (yes, that's how old it is). They used high-end monitors of the period, and the subjects had to decide which picture was the compressed one. The threshold they set for calling it "visually lossless" was an identification accuracy of 75% (the compressed picture could be picked out in "only" 3 out of 4 cases).

It would be quite interesting to know whether this still holds true. They set an upper limit of 3x compression, and something like 4320p60@12bit would need to be compressed by about a factor of three to not exceed the available bandwidth.
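As a rough sanity check of that factor-of-three figure, here is a small back-of-the-envelope calculation. The ~6% blanking overhead and the usable-bandwidth figures are assumptions on my part (reduced-blanking-style timings, 8b/10b coding for DP 1.4, 16b/18b for HDMI 2.1 FRL), so treat the output as ballpark only:

```python
# Rough estimate of the DSC ratio needed for 4320p60 @ 12 bpc (RGB / 4:4:4).
# Blanking overhead is approximated; real EDID timings will differ slightly.

def uncompressed_gbps(h, v, hz, bits_per_component, blanking=1.06):
    """Approximate uncompressed video data rate in Gbit/s."""
    pixel_clock = h * v * hz * blanking            # pixels per second incl. blanking
    return pixel_clock * bits_per_component * 3 / 1e9

rate = uncompressed_gbps(7680, 4320, 60, 12)       # roughly 76 Gbit/s with these assumptions
dp14_usable   = 32.4 * 8 / 10                      # HBR3 x4 after 8b/10b, ~25.9 Gbit/s
hdmi21_usable = 48 * 16 / 18                       # FRL 48G after 16b/18b, ~42.7 Gbit/s

print(f"4320p60@12bit uncompressed: {rate:.0f} Gbit/s")
print(f"needed DSC ratio over DP 1.4:   {rate / dp14_usable:.1f}x")
print(f"needed DSC ratio over HDMI 2.1: {rate / hdmi21_usable:.1f}x")
```

With those assumptions, 4320p60@12bit lands just under the 3x DSC ceiling over DP 1.4, while HDMI 2.1 would only need a bit under 2x.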

 

Latency on the other hand is really not an issue. The algorithm compresses each H-line individually, resulting in a 2 H-line lag compared to an uncompressed signal.

When the 1070 launched in 2016, probably nobody thought it would ever drive a 2160p120 display, yet in 2019 that was my monitor upgrade. And it could do it just fine.

Because they were the first to launch a new GPU generation in 2022. We are still waiting for Intel Arc and AMD's RDNA 3.

Intel's Arc has DP 2.0 UHBR10, which is the "slowest" DP 2.0 standard at 40 GBit/s (around 38 GBit/s usable), but it is still a bump that allows 2160p120@12bit without compression. The middle child (UHBR 13.5) would lift this to 54 GBit/s (around 50 GBit/s usable).

This is not cutting-edge technology. HDMI 2.1 has been around for almost four years. It's beyond me why DP 2.0 is not found on a $1600 high-end graphics card in 2022.

 

So with a 4K 120 Hz monitor, using HDMI 2.1 I could get 120 Hz with HDR at 10-bit 4:4:4 with no compression, right? If I overclock the screen to 138 Hz, would it use compression?

 

Would DP 1.4a use compression no matter what for 4K 120 Hz HDR 10-bit 4:4:4?
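The same arithmetic can be applied to those questions. Again, the blanking overhead and usable link rates below are my assumptions, and the exact answer depends on the timings the monitor's EDID actually advertises:

```python
# Ballpark uncompressed data rates for 4K @ 10 bpc 4:4:4 vs. usable link bandwidth.

def gbps(h, v, hz, bpc, blanking=1.06):
    """Approximate uncompressed data rate in Gbit/s, assuming ~6% blanking overhead."""
    return h * v * hz * blanking * bpc * 3 / 1e9

hdmi21 = 48 * 16 / 18        # ~42.7 Gbit/s usable (FRL, 16b/18b coding)
dp14   = 32.4 * 8 / 10       # ~25.9 Gbit/s usable (HBR3 x4, 8b/10b coding)

for hz in (120, 138):
    r = gbps(3840, 2160, hz, 10)
    print(f"2160p{hz} 10-bit 4:4:4: {r:.1f} Gbit/s "
          f"(fits HDMI 2.1: {r < hdmi21}, fits DP 1.4a: {r < dp14})")
```

With these numbers, 2160p120 at 10-bit 4:4:4 fits HDMI 2.1 uncompressed but not DP 1.4a (so yes, DSC on DP 1.4a). At 138 Hz it sits close enough to the HDMI 2.1 ceiling that whether DSC kicks in depends on the actual timings used.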



Here is what I think:

  • Nvidia promised its shareholders that the 4000 series would be released this year, so they can't delay the release to next year; otherwise shareholders will start to speculate, the Nvidia stock will go down, and investors will be scared off.
  • Nvidia wants to empty out the 3000-series stock. Pricing the 4000 series at the 3000 series' launch MSRPs (which would also mean calling the GeForce 4080 12GB a 4070, at the very most) would force the 3000 series to be sold below its current price to clear that stock.
  • Nvidia is a publicly traded company, and its role and promise to shareholders and investors is to maximize profits. So giving a rebate to board partners is the last resort; they will push as hard as possible to avoid losing out on profits.
  • You can always reduce the price later, and with greater ease, if needed. So if everyone buys AMD's offering and the 3000 series, then once the 3000-series stock runs out, Nvidia will have to reduce the price of the 4000 series... after its board partners can't take the hit anymore (rebates sent out to board partners).

In other words, from Nvidia's perspective, it's all a game of chess.

 

What I expect to happen:

  • The 4090 and both 4080s will sell out like hotcakes. Who will buy them?
    • People who use GPUs for work and need the best of the best, or near it. Most of these purchases are made by companies for their employees.
    • People who want the best of the best all the time.
    • People who are desperate and can't bring themselves to get the 3000 series.
  • Once that wave is done, probably after the holidays, I expect prices to drop by March/April.

AMD will definitely join Nvidia's party. Don't be fooled into thinking they'll sell their most premium card at $700 while having close to the same performance as Nvidia's.
 

AMD is also a publicly traded company. They'll have to answer to their shareholders if they pull something like this and explain why they were not maximizing profits. Selling something really powerful at a deeply discounted price only happens when a company is trying to get its foot in the door. Examples: when OnePlus started, or when Sony launched the PS1. Intel can pull it off, if they have a compelling GPU.

 

AMD did it with the Ryzen 1000, 2000 and, to some extent, 3000 series because they had no choice. AMD CPUs were out of people's minds when Ryzen 1000 was first released; it had been years since they had released anything remotely interesting. They needed to get their foot in the door once again and shake up the market, both to get motherboard manufacturers to actually start caring about them and to get OEMs' attention so their CPUs would show up in pre-builts.

 

Sadly for us consumers, AMD's last-gen GPUs sold very well, so there is no justification for them to come in and pull such a move.

 

The best move consumers can make is to hold. Yes, hold some more... This is the only way to make Nvidia and AMD go, "Hmmm, I guess $999 is the highest price we can charge for a GPU." And only then will AMD and Nvidia have a justification to their shareholders for a price reduction that limits their profit gains.


23 hours ago, Avocado Diaboli said:

Sure, but then why complain about the name? I agree that the announced cards are too expensive and not worth it for any gamer who doesn't also dabble in GPU compute; you're better off getting a 30-series card, especially a used one now that prices are falling. If the problem here is price, then let's focus on the actual problem: I'm not willing to pay over a grand for a GPU. And neither should you, under any circumstances. But this has nothing to do with the name.

Like I said, it's less about the name itself. I complained about the price, the memory spec, and the core count; my only complaint with the name is Nvidia leveraging the collective thinking among their customer base about what an 80-class card is and using that to shift pricing tiers yet again. It's a valid point. You and I may know that 80-class literally doesn't mean anything; the average Joe dropping into Best Buy or Micro Center might not: "Oh, it's an xx80, must be the best."

 

So yeah, I will absolutely complain about marketing tactics where branding is used simply to trick the customer into paying more for less while thinking they got more. I'm not sure why that aspect of this is controversial.

 

The thing is, we've been through this before, in 2012. "Why complain" is a bad question to ask.



I think this story is pretty controversial ~

 

8-pin PCIe to ATX 12VHPWR Adapter Included with RTX 40-series Graphics Cards Has a Limited Service-Life of 30 Connect-Disconnect Cycles:

 

Quote


 

The product page of the ZOTAC AMP Extreme has an interesting sentence describing this in-box adapter: "Limited service life with up to 30 connect / disconnects."

Apparently the adapter is only safe for up to 30 connect/disconnect cycles before you'll need another one.

 

https://www.techpowerup.com/299162/8-pin-pcie-to-atx-12vhpwr-adapter-included-with-rtx-40-series-graphics-cards-has-a-limited-service-life-of-30-connect-disconnect-cycles

 

Jay covers this topic too, in a recent video:

 

 


32 minutes ago, BiG StroOnZ said:

8-pin PCIe to ATX 12VHPWR Adapter Included with RTX 40-series Graphics Cards Has a Limited Service-Life of 30 Connect-Disconnect Cycles:

I can't find the reference now, but I saw it some days ago when this was first reported: apparently other commonly used power connectors in PCs have similarly low contact ratings. I recall when Intel moved to LGA, the socket back then was also rated for only tens of insertions. That doesn't mean it'll fail as soon as you cross that number, similar to flash SSD endurance, but these connectors aren't designed for repeated wear and tear the way a USB connector is, for example. With it being a high-power connector, I guess the worst case is that one contact doesn't do its share of the work and the others get a bit warm. Maybe it is a problem if you tinker a lot with cabling, but that's kind of outside its intended use case, which is to set it up and largely forget about it.



26 minutes ago, porina said:

I can't find the reference now, but I saw it some days ago when this was first reported: apparently other commonly used power connectors in PCs have similarly low contact ratings.

This perhaps: https://www.molex.com/webdocs/datasheets/pdf/en-us/0455860005_PCB_HEADERS.pdf



2 hours ago, BiG StroOnZ said:

I think this story is pretty controversial ~

 

8-pin PCIe to ATX 12VHPWR Adapter Included with RTX 40-series Graphics Cards Has a Limited Service-Life of 30 Connect-Disconnect Cycles:

 

 

https://www.techpowerup.com/299162/8-pin-pcie-to-atx-12vhpwr-adapter-included-with-rtx-40-series-graphics-cards-has-a-limited-service-life-of-30-connect-disconnect-cycles

 

Jay covers this topic too, in a recent video:

 

 

This brings up a thought. What if EVGA knew about this and didn't want the negative publicity around their standard-fare GPU + PSU bundling, because the backwards incompatibility would come across to some as a shrewd business move?

 

I doubt that would have been enough on its own, but in conjunction with all the Nvidia fuckery, I don't think it would have helped convince EVGA to stay an indentured servant.


3 hours ago, BiG StroOnZ said:

I think this story is pretty controversial ~

 

8-pin PCIe to ATX 12VHPWR Adapter Included with RTX 40-series Graphics Cards Has a Limited Service-Life of 30 Connect-Disconnect Cycles:

 

 

https://www.techpowerup.com/299162/8-pin-pcie-to-atx-12vhpwr-adapter-included-with-rtx-40-series-graphics-cards-has-a-limited-service-life-of-30-connect-disconnect-cycles

 

Jay covers this topic too, in a recent video:

 

 



What Jay didn't tell you is that the PCIe cable ALSO has the same 30-use count. Jay loves clickbait BS videos.



28 minutes ago, Shzzit said:

What Jay didn't tell you is that the PCIe cable ALSO has the same 30-use count. Jay loves clickbait BS videos.

 

Can you elaborate? Are you claiming that standard PCIe cables suffer from the same issue, or are you talking about the new 12VHPWR cable included with ATX 3.0 PSUs?


3 hours ago, porina said:

I can't find the reference now, but I saw it some days ago when this was first reported: apparently other commonly used power connectors in PCs have similarly low contact ratings. I recall when Intel moved to LGA, the socket back then was also rated for only tens of insertions. That doesn't mean it'll fail as soon as you cross that number, similar to flash SSD endurance, but these connectors aren't designed for repeated wear and tear the way a USB connector is, for example. With it being a high-power connector, I guess the worst case is that one contact doesn't do its share of the work and the others get a bit warm. Maybe it is a problem if you tinker a lot with cabling, but that's kind of outside its intended use case, which is to set it up and largely forget about it.

No, what's happened is that the "low cycle count" is due to the thermal stress of pushing 600 W through it. So people like Linus, Steve and Jay will end up destroying their ATX 3.0 power supplies quite early, because they frequently remove the cables. Hell, we may end up in a situation where modular power supply vendors say "no" to this funny business, because they know it will cause a lot of returns, and instead put a different, proprietary physical connector on their power supplies that then has the 12VHPWR connector on a pigtail, so that system builders can put a heatsink on it.

 


 

I think what's going to happen here is that people will simply skip out on the 40-series cards if they use the 12VHPWR connector because the connector itself has the potential to catch fire if the wires leading into it are bent due to aggressive cable management.

 


26 minutes ago, BiG StroOnZ said:

 

Can you elaborate? Are you claiming that standard PCIe cables suffer from the same issue, or are you talking about the new 12VHPWR cable included with ATX 3.0 PSUs?

OK, this is what I found.

"We have confirmed with Nvidia that the 30-cycle spec for the 16-pin connector is the same as it has been for the past 20-plus years. The same 30-cycle spec exists for the standard PCIe/ATX 8-pin connector. The same connector is used by AMD and all other GPU vendors too. So in short, NOTHING has changed."

 

As for the cable catching fire: that is due to the cord being YANKED on, dislodging some pins and causing a short. Nothing really to do with the connector, I don't think.



6 minutes ago, Kisai said:

No, what's happened is that the "low cycle count" is due to the thermal stress of pushing 600 W through it. So people like Linus, Steve and Jay will end up destroying their ATX 3.0 power supplies quite early, because they frequently remove the cables. Hell, we may end up in a situation where modular power supply vendors say "no" to this funny business, because they know it will cause a lot of returns, and instead put a different, proprietary physical connector on their power supplies that then has the 12VHPWR connector on a pigtail, so that system builders can put a heatsink on it.

 

I think what's going to happen here is that people will simply skip out on the 40-series cards if they use the 12VHPWR connector because the connector itself has the potential to catch fire if the wires leading into it are bent due to aggressive cable management.

 

Lol, I'm sorry, but just NO.



2 minutes ago, Shzzit said:

OK, this is what I found.

"We have confirmed with Nvidia that the 30-cycle spec for the 16-pin connector is the same as it has been for the past 20-plus years. The same 30-cycle spec exists for the standard PCIe/ATX 8-pin connector. The same connector is used by AMD and all other GPU vendors too. So in short, NOTHING has changed."

 

As for the cable catching fire: that is due to the cord being YANKED on, dislodging some pins and causing a short. Nothing really to do with the connector, I don't think.

 

I'm assuming you got that quote from here:

 

https://wccftech.com/nvidia-geforce-rtx-40-series-pcie-gen-5-power-adapters-limited-connect-disconnect-life-of-30-cycles/

 

Interesting info to say the least. Guess we will find out what is factual over the course of the next 6-12 months. 


Just now, BiG StroOnZ said:

 

I'm assuming you got that quote from here:

 

https://wccftech.com/nvidia-geforce-rtx-40-series-pcie-gen-5-power-adapters-limited-connect-disconnect-life-of-30-cycles/

 

Interesting info to say the least. Guess we will find out what is factual over the course of the next 6-12 months. 

Yeah, I got it from there. Not sure if it's real, but he did say he got it directly from Nvidia; who knows if they're just telling BS.

It's kinda like that Corsair GPU block that leaked, but only when someone yanked the shit out of it in the wrong direction. I think in most cases the connector will work just fine. But yeah, like you said, we will see; it's all new tech.

 

I'm personally excited about it; I preordered a Thermaltake GF3 1650-watt PSU. It has two of the new 600-watt connectors.

I really didn't want 4+ cables going to my GPU, lol.



10 hours ago, GoodBytes said:

.

They were forced to take whatever they ordered from TSMC for months, and they likely went for the models with the most VRAM for mining.

 

I think AD102 will sell out regardless, as it is the fastest card and decent value; individuals holding out on the 4090 won't change that.

 

What I'm hoping is that people will hold out on AD103 and AD104 plus Ampere. Chances are the sad, small die that is the 4080 16GB will sell out too, but I actually believe that people will not buy the 12GB 🙏


 


1 hour ago, Shzzit said:

As for the cable catching fire: that is due to the cord being YANKED on, dislodging some pins and causing a short. Nothing really to do with the connector, I don't think.

That's not a short, that's an open circuit. There are two ways an over-bent or stressed cable or connector can cause problems: the first is pins becoming open circuit, and the second is an increase in cable and/or connector resistance.

 

The specific issue here is that it increases the current in the other pin(s), making them exceed the rated design current of the cable conductor, the connector pin, or both.

 

There are 6 12V+ pins in the new connector; at 600 W that means 100 W per pin/conductor. The plug's current rating per pin is 9.2 A, which means about 110 W, so there is little safety margin in the connector beyond the margin built into the spec itself (the pins can do more than 9.2 A, but you never run at the absolute maximum of a spec like this). If just one 12V+ pin becomes open circuit at a 600 W load, its 8.33 A gets spread across the remaining 5 pins, about 1.67 A extra per pin, so each pin is now carrying 10 A at a 600 W load, which is above the 9.2 A rating. As you can see, if more than one pin becomes open circuit, things get real bad real fast.

 

Basically, due to the size of the connector, its pins and its conductors, it's fairly easy to over-bend and stress the connector, causing a resistance imbalance across the 6 power conductors. That is actually worse than an open circuit, because the PSU, GPU and system can detect an open circuit and will either refuse to POST or reduce the maximum allowed power. If 1 or 2 pins end up at a much lower resistance than the other 4, you'll get something like 60%-70% of the total power going through them, i.e. 360 W (30 A) to 420 W (35 A), which will lead to excessive heat melting things.
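To put numbers on the open-circuit case, here is a quick sketch of the same arithmetic, assuming the 600 W load splits evenly across whichever 12V+ pins still make good contact (the idealized case; a resistance imbalance would be worse, as noted above):

```python
# Per-pin current on the 12VHPWR 12V+ side at a 600 W load,
# assuming the current splits evenly across the pins still making contact.

LOAD_W = 600.0
VOLTAGE = 12.0
PIN_RATING_A = 9.2                         # per-pin rating cited above

total_current = LOAD_W / VOLTAGE           # 50 A total
for good_pins in (6, 5, 4):
    per_pin = total_current / good_pins
    status = "OK" if per_pin <= PIN_RATING_A else "over rating"
    print(f"{good_pins} pins carrying the load: {per_pin:.2f} A per pin ({status})")
```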


11 hours ago, GoodBytes said:

AMD will definitely join Nvidia's party. Don't be fooled into thinking they'll sell their most premium card at $700 and have close to the same performance as Nvidia's. AMD is also a publicly traded company. They'll have to answer to their shareholders if they do, on why they were not maximizing profits. They'll need justification.

They have a justification: taking market share away from Nvidia. Whether they actually use it this time is the real question that remains unanswered.



Forgive me if this article has been posted already. I haven't followed the controversy, but found this analysis concise and interesting.

 

We've run the numbers and Nvidia's RTX 4080 cards don't add up

Quote

In terms of its relationship with the RTX 4090, the new RTX 4080 12GB is more akin to the RTX 3060 Ti with its 4,864 shaders. Except the RTX 3060 Ti at least had a 256-bit memory bus. The RTX 4080 12GB only has a 192-bit bus. Oh, and the RTX 4080 is $900.

 

Seriously? A $900 card with a 192-bit bus? The RTX 4080 16GB admittedly is a bit better, what with its 256-bit bus and based on the AD103 chip rather than AD104. But it's still miles off what the RTX 3080 was to the RTX 3090.

 

Quote

But perhaps the most damning indictment of what Nvidia is doing comes in the shape of value for money. At $1,600, the new RTX 4090 looks expensive enough to be irrelevant to the vast majority of gamers. But the fact that it looks like good value compared to the RTX 4080 12GB is completely crazy.

 

Put it this way, the cost in dollars per shader, per GB of VRAM, and almost certainly per ROP, per texture unit, and per everything once the full specs are released, is lower for the RTX 4090 than the RTX 4080.

 

Since when did a top-tier GPU beat a lower-tier model for pure bang for buck? Since never.
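The article's bang-for-buck point is easy to reproduce from the launch MSRPs and the announced shader counts; this is just an illustration, since street prices and final specs can shift:

```python
# Launch MSRP divided by shader count and by VRAM capacity,
# using the specs announced at the keynote.

cards = {
    "RTX 4090":      {"price": 1599, "shaders": 16384, "vram_gb": 24},
    "RTX 4080 16GB": {"price": 1199, "shaders": 9728,  "vram_gb": 16},
    "RTX 4080 12GB": {"price": 899,  "shaders": 7680,  "vram_gb": 12},
}

for name, c in cards.items():
    per_shader = c["price"] / c["shaders"]
    per_gb = c["price"] / c["vram_gb"]
    print(f"{name}: ${per_shader:.3f} per shader, ${per_gb:.0f} per GB of VRAM")
```

By both measures the 4090 comes out cheaper per unit of hardware than either 4080, which is exactly the inversion the article is pointing at.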

 



 

680: 104 die
780: 110 die (honestly, weird)
980: 204 die
1080: 104 die
2080: 104 die
3080: 102 die
4080 12GB: 104 die
4080 16GB: 103 die

The xx80 has been a 104 die for a decade, sans the weird 780. Yes, the 480 and 580 were 100/110, a.k.a. THE BIG DIE, but that big die was ~500 mm² back then. To say the 4080 12GB is really a 4070 because it's a 104 die and the 3070 is a 104 is a bad argument.

100-class dies today are not even seen on consumer-class cards. GA102 is already ~630 mm².

[image: 100-class dies today]

People who complain about the 192-bit bus are also fooling themselves into thinking it matters in ways it doesn't. A 580 with a 512-bit bus performs worse than a 3050 with its 128-bit bus.
If it's able to match the rasterization performance of a 3090 ($1500 MSRP, street price irrelevant) with just a 192-bit bus, that's the point of it.
I don't know what that article is on about with the 4090 being better performance per dollar; it straight up won't be. Just because you get more CUDA cores per dollar does not mean those CUDA cores are being fed.

Someone else brought that point up earlier in the thread: the lower-end cards got a frame for every 50 cores, versus the 3090, which took 75 cores per frame.
It's because there are so many cores that they can't all be fed every clock.
Just like how, for 99% of applications, a 16-core chip is not twice as fast as an 8-core chip: it literally can't feed all the cores at the same time.
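That "can't feed all the cores" point is essentially Amdahl's law. As a toy illustration only (a generic scaling model with an assumed parallel fraction, not a claim about Ada's actual architecture):

```python
# Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n),
# where p is the fraction of the work that actually scales with core count.

def speedup(parallel_fraction: float, cores: int) -> float:
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

p = 0.85  # assumed parallel fraction, purely illustrative
s8, s16 = speedup(p, 8), speedup(p, 16)
print(f"8 cores:  {s8:.2f}x")
print(f"16 cores: {s16:.2f}x  (only {s16 / s8:.2f}x faster than 8 cores)")
```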

Like I hate "defending" Nvidia; it just feels like all the attacks are massive misfires: people are angry about the situation but have no idea what they are actually angry at, and are venting it weirdly.
 


32 minutes ago, Delicieuxz said:

Forgive me if this article has been posted already. I haven't followed the controversy, but found this analysis concise and interesting.

 

We've run the numbers and Nvidia's RTX 4080 cards don't add up

Ada seems to be following RDNA2's memory strategy, or something close to it. Ada has much more L2 cache than Ampere; they just didn't give it a cheesy name like Infinity Cache. That cache could offset bandwidth demands.

 

Edit: For comparison, top GA102 (3080 Ti, 3090) had 6MB L2 cache.

 

 



4 hours ago, starsmine said:

The xx80 has been a 104 die for a decade, sans the weird 780. Yes, the 480 and 580 were 100/110, a.k.a. THE BIG DIE, but that big die was ~500 mm² back then. To say the 4080 12GB is really a 4070 because it's a 104 die and the 3070 is a 104 is a bad argument.

Pre-900 series there was no x100 die; 102 was the largest and 104 (or a refresh of it) was the second step down. I wouldn't compare back past the 900 series though, it's too long ago and too different a GPU die development process, but to nitpick, that means it's only been 8 years, not 10 (a decade) 🙃

 

4 hours ago, starsmine said:

People who complain about the 192-bit bus are also fooling themselves into thinking it matters in ways it doesn't. A 580 with a 512-bit bus performs worse than a 3050 with its 128-bit bus.

Except you are comparing different generations of GDDR with widely different bandwidths. RTX 30 and RTX 40 use the same GDDR6X with only a modest increase in package speeds, so the RTX 4080 12GB at 504 GB/s is a massive step down from the RTX 3080's 760 GB/s.

 

You can try and argue bus bit widths all you like: 504 is less than 760, and also less than 736, which just happens to be the RTX 4080 16GB's figure.
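For reference, those bandwidth figures fall straight out of bus width times per-pin data rate; the memory speeds below (19, 23 and 21 Gbps) are the publicly listed rates consistent with the numbers above:

```python
# Memory bandwidth in GB/s = (bus width in bits / 8) * per-pin data rate in Gbps.

def bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits / 8 * gbps_per_pin

cards = [
    ("RTX 3080",      320, 19.0),   # GDDR6X
    ("RTX 4080 16GB", 256, 23.0),   # GDDR6X
    ("RTX 4080 12GB", 192, 21.0),   # GDDR6X
]

for name, bus, rate in cards:
    print(f"{name}: {bus}-bit @ {rate} Gbps -> {bandwidth_gbs(bus, rate):.0f} GB/s")
```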

 

Will this really matter? Who knows; in bandwidth-sensitive games most assuredly yes, but I cannot say by how much, as there are other mitigating factors like cache sizes.

 

The issue is performance variance: the RTX 4080 12GB vs the RTX 4080 16GB will have, at least in my opinion, far too great a potential performance gap to be carrying the same model name. VRAM capacity does not, never has, and never should indicate performance. This is NOT a trend we want or should be setting.

 

This was never acceptable in the past and it should not be now. Even bloody BMW does the courtesy of calling a 2-litre 3 Series a 320i and a 2.5-litre a 325i, rather than simply calling them both a BMW 3 Series and then good damn luck figuring it out from there. No, 12GB and 16GB are not the same kind of distinction.

 

4 hours ago, starsmine said:

Like I hate "defending" Nvidia; it just feels like all the attacks are massive misfires: people are angry about the situation but have no idea what they are actually angry at, and are venting it weirdly.

Then simply don't. When something is so obviously anti-consumer, it doesn't matter if certain people do not fully understand the situation; it changes nothing at all. Shitty business practices that compromise consumer transparency deserve all the criticism they get, well-founded or not.


OK, I just got wind of the new 4000-series Nvidia GPUs and have been hearing tons of anger about the pricing. Is this one of those "overblown outrage" moments, or is the outrage actually legitimate?



2 minutes ago, Theminecraftaddict555 said:

OK, I just got wind of the new 4000-series Nvidia GPUs and have been hearing tons of anger about the pricing. Is this one of those "overblown outrage" moments, or is the outrage actually legitimate?

Wait for benchmarks, really.

 

Memory bandwidth means a LOT for 3D workstations and compute. PCIe bus width tends to be more meaningful for texture-heavy games. It absolutely sucks to play a game that spends a minute loading because it has to transfer everything to the GPU, and then you spend hardly any time in the area before hitting another loading screen.

 


15 minutes ago, Theminecraftaddict555 said:

OK, I just got wind of the new 4000-series Nvidia GPUs and have been hearing tons of anger about the pricing. Is this one of those "overblown outrage" moments, or is the outrage actually legitimate?

The 80-class cards have always used the 102 die along with the 90-class cards, except this time: the 16GB is AD103 and the 12GB is AD104. As for pricing and supply/demand, I believe the 12GB is the worst-priced non-low-end card ever relative to supply and demand; the 4090 and the 16GB will sell out at launch but will be available above demand in 2022.

 

The 12-pin adapters that come with the cards are rushed; I would not use one in my house, as it is a fire hazard, and the new ATX 3.0 PSUs required to properly run these cards are not ready yet. This is the main reason I'm not getting a 4090, and while I wait I can see reviews and what AMD has to offer.


 

