
Wonder why RTX 4000 series feels like poor value?

2 hours ago, Paul17 said:

What's my point? I think it was fairly obvious. I guess there are a lot of AMD fanboys on here who can't discuss this objectively.

The 'majority' are kind of dumb - you and the other guy, Wallace, both made that point for me - they go with whatever is popular, or with reputation, or with the 'underdog' AMD. None of that has anything to do with a good card or a good price. Yes, price gets brought up, but a lot of it is sentiment and emotion.

 

I'm arguing about practicality and getting the most from your dollar. If I have a card that is good enough for gaming but is also highly functional and performs well in productivity work, then it will probably hold its value better than a card that is strictly good for gaming - hence why Nvidia cards might hold more value on the second-hand market than AMD cards. There might be some bias or belief in 'better drivers' or whatever, but what really suggests 'more value' is the versatility of the card and what it can offer other customers, be they gamers or content creators. So yes, this is a 'marginalized customer' - your words. If AMD wants to ignore such people, they do so at their own peril. Obviously, Nvidia cards are still being purchased more often than AMD cards - so if you want to champion the system of capitalism, be careful. I think it would be better if both (hey, why not Intel GPUs, too?) gave both gamers and content creators good performance - and then you would have more competition, too, maybe?

 

Ironically, AMD's anti-consumer behavior is being brought up right now, too - both of these companies are not admirable, and like I said, there's no underdog here.

Completely agree.

1 hour ago, MeDownYou said:

Honestly, the performance is in line with what should be expected. 

The issue is simply the price. Pascal was $600 for a 1080 at launch (with AIB cards instantly at $550); today the MSRP for a 4080 is $1199. That's literally double. I understand things creep up a bit over time, but a 100% increase over 7 years is quite steep. Realistically, $650-700 should be the price, the 4090 should have been $999, and the Titan should have been $1500. Maybe we'll see prices come down to earth next gen, but I'm not buying until I feel like I'm getting decent value for a top-tier GPU. At the prices I listed I would have bought a 4090; instead, I'm sticking with a Pascal card until this sorts itself out.

I think you're forgetting that as the feature size decreases, the difficulty and, thus, the cost go up in a way that doesn't match the scale you're using. I'm not saying the prices are correct - far from it!

I've been using computers since around 1978, started learning programming in 1980 on Apple IIs, started learning about hardware in 1990, ran a BBS from 1990-95, built my first Windows PC around 2000, taught myself malware removal starting in 2005 (also learned on Bleeping Computer), learned web dev starting in 2017, and I think I can fill a thimble with all that knowledge. 😉 I'm not an expert, which is why I keep investigating the answers that others give to try and improve my knowledge, so feel free to double-check the advice I give.

My phone's auto-correct is named Otto Rong.🤪😂


3 minutes ago, RevGAM said:

Completely agree.

I think you're forgetting that as the feature size decreases, the difficulty and, thus, the cost go up in a way that doesn't match the scale you're using. I'm not saying the prices are correct - far from it!

As a counterpoint, the time between generations has increased too, which offsets some of that.

The Vinyl Decal guy.

Celestial-Uprising  A Work In-Progress


12 minutes ago, Sir Beregond said:

How so? 4090 maybe...maybe even 4080 (price aside), but the rest of the stack? No way.

 

The 3070 was about 2080 Ti performance. The 2070 was about 1080 Ti performance. The 1070 was about 980 Ti performance. Etc. The 4070? Barely, maybe, meeting a 3080. So the 4070 is higher in price AND doesn't perform as well as it should for its class. The 4070 Ti should be the "4070", as it meets the performance characteristic of the base non-Ti "70" cards of previous gens: matching the previous generation's flagship.

 

Now let's look at the 4060 Ti. It is just an embarrassment of a card for the price, and it has no business even being a 60-class card given both the die used and the performance characteristics.

 

And then let's go back and look at the 4070 Ti. It absolutely matches the performance characteristics of a "70" card - notice, not a Ti. Yet it's $800? Absurd.

 

No, this gen absolutely does not match what should be expected, performance-wise, for any given "class" or "branding" of card below the very top end.

Remember how the RTX 4070 Ti was originally going to be called the RTX 4080 12GB? Pepperidge Farm remembers. In that sense, it's practically in the same boat as the 4060 Ti.

Ryzen 7950x3D Direct Die NH-D15

RTX 4090 @133%/+230/+500

Builder/Enthusiast/Overclocker since 2012  //  Professional since 2017


On 6/21/2023 at 6:19 AM, Paul17 said:

AMD has professional cards - but does anyone use them? 

 

 

Yes, they're used extensively in medical imaging. For display I usually see them as BARCO cards; for capture and processing it's often just COTS Radeon Pro. Radeon Pros have also been used quite a bit in broadcasting equipment from Harris/Imagine Communications, though it's been several years since I've had my hands in those.


Updated the RTX 4060's core count and %core since previous reports were grossly off.

Ryzen 7950x3D Direct Die NH-D15

RTX 4090 @133%/+230/+500

Builder/Enthusiast/Overclocker since 2012  //  Professional since 2017


On 6/19/2023 at 3:41 PM, porina said:

You only looked at Ampere. Also look at Turing and Pascal; maybe Maxwell is stretching it a bit, as I think their die naming was different then. Basically, Ampere is the exception and offered far more than historic trends. Ada is more in line with Turing and Pascal, at least based on die class. I didn't look at it from a core-count perspective.

To reiterate the point you made: what I've come to conclude is that the RTX 3000 series was a '+1' generation in its binning scheme and the RTX 4000 series is a '-1' generation, relative to an arbitrary baseline drawn from previous generations.

 

The gap between the two generations is therefore +2, which is made worse by the nearly 2x prices when comparing similar silicon bins.

Ryzen 7950x3D Direct Die NH-D15

RTX 4090 @133%/+230/+500

Builder/Enthusiast/Overclocker since 2012  //  Professional since 2017


34 minutes ago, Agall said:

To reiterate the point you made: what I've come to conclude is that the RTX 3000 series was a '+1' generation in its binning scheme and the RTX 4000 series is a '-1' generation, relative to an arbitrary baseline drawn from previous generations.

Without agreeing or disagreeing with your assessment for now, this gave me a thought. Did both nvidia and AMD make a mistake by advancing process technology this generation? Both Ada and RDNA3 are on variations of TSMC 5 process. Ada's is probably more expensive as it is supposed to be a custom 4, which is a refinement of 5. What if they held back a node and stuck with TSMC N7, or N6 at most? It is quite a bit cheaper per area.

 

It must have been in a different thread where I tried to estimate the cost per die of recent generations for both sides. Putting aside the specific dies, the cost per area went up significantly for both. We may never know the real numbers, but as best I can make out, AMD's would still be close to 2x, and nvidia's even more.

 

Ampere looked like it was disadvantaged by its process, being less power efficient than RDNA2. But was that efficiency edge actually the win it seemed? People didn't care much about power at the time. Lower cost mattered, and it still matters. Power might be more closely scrutinised now with the big increases in energy prices, but if you asked whether someone would accept, for a given level of performance, lower power efficiency in exchange for a lower up-front cost, I think that would still be very attractive. Of course, it would depend on the exact differences in power consumption and the potential cost saving.

 

I think the reality is that the process advancement was needed for the enterprise parts regardless, and gaming just went along for the ride. It probably saves them a lot of design work.

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


1 hour ago, Agall said:

To reiterate the point you made: what I've come to conclude is that the RTX 3000 series was a '+1' generation in its binning scheme and the RTX 4000 series is a '-1' generation, relative to an arbitrary baseline drawn from previous generations.

 

The gap between the two generations is therefore +2, which is made worse by the nearly 2x prices when comparing similar silicon bins.

Great way to put it. Turing was more like -2 once you got to the 2070, which they bumped all the way down to TU106! Yet they raised the price. It was only corrected once the Super series released.

35 minutes ago, porina said:

Without agreeing or disagreeing with your assessment for now, this gave me a thought. Did both nvidia and AMD make a mistake by advancing process technology this generation? Both Ada and RDNA3 are on variations of TSMC 5 process. Ada's is probably more expensive as it is supposed to be a custom 4, which is a refinement of 5. What if they held back a node and stuck with TSMC N7, or N6 at most? It is quite a bit cheaper per area.

 

It must have been in a different thread where I tried to estimate the cost per die of recent generations for both sides. Putting aside the specific dies, the cost per area went up significantly for both. We may never know the real numbers, but as best I can make out, AMD's would still be close to 2x, and nvidia's even more.

 

Ampere looked like it was disadvantaged by its process, being less power efficient than RDNA2. But was that efficiency edge actually the win it seemed? People didn't care much about power at the time. Lower cost mattered, and it still matters. Power might be more closely scrutinised now with the big increases in energy prices, but if you asked whether someone would accept, for a given level of performance, lower power efficiency in exchange for a lower up-front cost, I think that would still be very attractive. Of course, it would depend on the exact differences in power consumption and the potential cost saving.

 

I think the reality is that the process advancement was needed for the enterprise parts regardless, and gaming just went along for the ride. It probably saves them a lot of design work.

I would bet that AMD gets better pricing from TSMC, since they make all their products there and are probably considered a premier partner. Wafer costs may have gone up, but they always come down over time, and I'd bet AMD is not paying full price. Nvidia, on the other hand, probably is paying whatever full price is for its wafers.

 

That said, I don't think it would have been a good idea to skip going to 5nm. RDNA2 was already a 7nm product, and presumably a lot of RDNA3's improvements came from the node shrink. Likewise, most of Ada's improvement in performance per watt is directly related to the node shrink from Ampere, since aside from newer-gen cores, a bigger L2 cache, etc., its architecture is largely the same as Ampere's.

 

But really, at a certain point the calculation is yields per wafer. If yields are good, that reduces costs. A smaller die means more dies per wafer, which again reduces costs. So this raises a lot of questions as to why AD103-, AD104-, and AD106-based products have risen so much in price, considering the small-to-mid-sized dies they are. I suspect that while some manufacturing costs have increased, the other portion is that Nvidia just wants high margins. That's really the only good way to explain how the RTX 4080, on a 379mm² die, is $1200 vs. the RTX 3080, on a 628mm² die, which had a $699 MSRP. If the only cost increases were manufacturing and inflation, I think at worst you'd see maybe $799-$899, not $1199.

 

The proof in that pudding is the fact that the RTX 3090 was $1499 and the RTX 4090 is $1599. AD102 is going to be the most expensive die to produce, yet its MSRP only increased $100. Everywhere else down the stack, the dies shrank, the memory controllers were cut down, and yet they asked for ridiculous prices.
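To put rough per-area numbers on that (using the die sizes above): the 4080 works out to about $1199 / 379 mm² ≈ $3.16 per mm² of die versus $699 / 628 mm² ≈ $1.11 for the 3080 - nearly a 3x jump - while at the top, $1599 / ~608 mm² ≈ $2.63 for the 4090 against $1499 / 628 mm² ≈ $2.39 for the 3090 is barely a 10% increase.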

Zen 3 Daily Rig (2022 - Present): AMD Ryzen 9 5900X + Optimus Foundations AM4 | Nvidia RTX 3080 Ti FE + Alphacool Eisblock 3080 FE | G.Skill Trident Z Neo 32GB DDR4-3600 (@3733 c14) | ASUS Crosshair VIII Dark Hero | 2x Samsung 970 Evo Plus 2TB | Crucial MX500 1TB | Corsair RM1000x | Lian Li O11 Dynamic | LG 48" C1 | EK Quantum Kinetic TBE 200 w/ D5 | HWLabs GTX360 and GTS360 | Bitspower True Brass 14mm | Corsair 14mm White PMMA | ModMyMods Mod Water Clear | 9x BeQuiet Silent Wings 3 120mm PWM High Speed | Aquacomputer Highflow NEXT | Aquacomputer Octo

 

Test Bench: 

CPUs: Intel Core 2 Duo E8400, Core i5-2400, Core i7-4790K, Core i9-10900K, Core i3-13100, Core i9-13900KS

Motherboards: ASUS Z97-Deluxe, EVGA Z490 Dark, EVGA Z790 Dark Kingpin

GPUs: GTX 275 (RIP), 2x GTX 560, GTX 570, 2x GTX 650 Ti Boost, GTX 980, Titan X (Maxwell), x2 HD 6850

Bench: Cooler Master Masterframe 700 (bench mode)

Cooling: Heatkiller IV Pro Pure Copper | Koolance GPU-210 | HWLabs L-Series 360 | XSPC EX360 | Aquacomputer D5 | Bitspower Water Tank Z-Multi 250 | Monsoon Free Center Compressions | Mayhems UltraClear | 9x Arctic P12 120mm PWM PST


29 minutes ago, Sir Beregond said:

I would bet that AMD gets better pricing from TSMC, since they make all their products there and are probably considered a premier partner. Wafer costs may have gone up, but they always come down over time, and I'd bet AMD is not paying full price. Nvidia, on the other hand, probably is paying whatever full price is for its wafers.

They don't necessarily come down. Maybe there is some premium when a node is new, but even when mature there can be rises, which TSMC has been known to do. Again, we don't know the real numbers, so the numbers I found could be very wrong.

 

Even if AMD has a discount, it's quite a price differential from N7 to N5 - nearly a doubling. Nvidia going from Samsung 8nm to TSMC custom 4N could be nearer 3x the cost. Based on my estimations, a defect-free GA102 die might be $114, and a defect-free AD102 might be $378. I go into how I got to these numbers at the link below. Please note this is wafer cost divided by predicted defect-free dies; it does not consider partially working dies that can still be used, which would benefit the bigger dies. I assumed they would all meet binning. Do not take these numbers seriously - they will be wrong, but I tried to make them as un-wrong as I can. If anyone has better data than I used, let me know and I'll update.

 

[Attached image: estimated GPU die cost chart]
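For anyone who wants to play with this, here is a minimal Python sketch of that general approach (not the actual spreadsheet behind the chart): gross die candidates from the usual analytic approximation, Murphy's yield model at a 0.1 defects/cm² density, and wafer cost divided by good dies. The wafer prices here are placeholder assumptions, not real figures.

```python
import math

WAFER_DIAMETER_MM = 300
DEFECT_DENSITY = 0.1  # defects per cm^2 -- the ballpark figure for N7/N5

def die_candidates(area_mm2, d=WAFER_DIAMETER_MM):
    # Standard analytic approximation of gross dies on a round wafer:
    # wafer area / die area, minus an edge-loss term.
    return math.pi * (d / 2) ** 2 / area_mm2 - math.pi * d / math.sqrt(2 * area_mm2)

def murphy_yield(area_mm2, d0=DEFECT_DENSITY):
    # Murphy's yield model; die area converted from mm^2 to cm^2.
    ad = (area_mm2 / 100.0) * d0
    return ((1 - math.exp(-ad)) / ad) ** 2

def cost_per_good_die(area_mm2, wafer_cost):
    return wafer_cost / (die_candidates(area_mm2) * murphy_yield(area_mm2))

# Die areas are public; the wafer prices below are PLACEHOLDER guesses.
print(f"GA102: ${cost_per_good_die(628.4, 6000):.0f}")   # Samsung 8nm wafer guess
print(f"AD102: ${cost_per_good_die(608.5, 17000):.0f}")  # TSMC 4N wafer guess
```

Note the analytic candidate formula is optimistic (it gives ~89 AD102 candidates where a placement calculator lands nearer 80), so real per-die costs come out somewhat higher; for instance, ~80 candidates at 56% yield on a hypothetical $17,000 wafer works out to roughly $378 per good die, in line with the figure above.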

 

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


1 hour ago, porina said:

Without agreeing or disagreeing with your assessment for now, this gave me a thought. Did both nvidia and AMD make a mistake by advancing process technology this generation? Both Ada and RDNA3 are on variations of TSMC 5 process. Ada's is probably more expensive as it is supposed to be a custom 4, which is a refinement of 5. What if they held back a node and stuck with TSMC N7, or N6 at most? It is quite a bit cheaper per area.

 

It must have been in a different thread where I tried to estimate the cost per die of recent generations for both sides. Putting aside the specific dies, the cost per area went up significantly for both. We may never know the real numbers, but as best I can make out, AMD's would still be close to 2x, and nvidia's even more.

 

Ampere looked like it was disadvantaged by its process, being less power efficient than RDNA2. But was that efficiency edge actually the win it seemed? People didn't care much about power at the time. Lower cost mattered, and it still matters. Power might be more closely scrutinised now with the big increases in energy prices, but if you asked whether someone would accept, for a given level of performance, lower power efficiency in exchange for a lower up-front cost, I think that would still be very attractive. Of course, it would depend on the exact differences in power consumption and the potential cost saving.

 

I think the reality is that the process advancement was needed for the enterprise parts regardless, and gaming just went along for the ride. It probably saves them a lot of design work.

To give these companies a break, I imagine the binning and testing processes are relatively expensive and/or time-consuming, but that's just speculation. The actual cost of making the GPU should be cheap once manufacturing is set up, with material costs being low.

 

I think of it like the AKM: extremely expensive to produce at small scale but extremely cheap to mass-produce with full tooling. Quality control for accuracy in mass production, however, is a different story - that would be the analogue of binning.

Ryzen 7950x3D Direct Die NH-D15

RTX 4090 @133%/+230/+500

Builder/Enthusiast/Overclocker since 2012  //  Professional since 2017


3 minutes ago, porina said:

They don't necessarily come down. Maybe there is some premium when a node is new, but even when mature there can be rises, which TSMC has been known to do. Again, we don't know the real numbers, so the numbers I found could be very wrong.

 

Even if AMD has a discount, it's quite a price differential from N7 to N5 - nearly a doubling. Nvidia going from Samsung 8nm to TSMC custom 4N could be nearer 3x the cost. Based on my estimations, a defect-free GA102 die might be $114, and a defect-free AD102 might be $378. I go into how I got to these numbers at the link below. Please note this is wafer cost divided by predicted defect-free dies; it does not consider partially working dies that can still be used, which would benefit the bigger dies. I assumed they would all meet binning. Do not take these numbers seriously - they will be wrong, but I tried to make them as un-wrong as I can. If anyone has better data than I used, let me know and I'll update.

 

[Attached image: estimated GPU die cost chart]

 

MCM with RDNA3 is an amazing innovation and likely easier to scale than MCM CPUs, and the yields of the GCDs should be far better given how small those dies are compared to the absolute monster that is AD102. RDNA3 should probably cost a lot less than it currently does, which is why, IMO, the 7900 XT/XTX were a bit disappointing - but maybe the expectation of lower prices with MCM was just wishful thinking.

Ryzen 7950x3D Direct Die NH-D15

RTX 4090 @133%/+230/+500

Builder/Enthusiast/Overclocker since 2012  //  Professional since 2017


53 minutes ago, Sir Beregond said:

Great way to put it. Turing was more like -2 once you got to the 2070, which they bumped all the way down to TU106! Yet they raised the price. It was only corrected once the Super series released.

I would bet that AMD gets better pricing from TSMC, since they make all their products there and are probably considered a premier partner. Wafer costs may have gone up, but they always come down over time, and I'd bet AMD is not paying full price. Nvidia, on the other hand, probably is paying whatever full price is for its wafers.

 

That said, I don't think it would have been a good idea to skip going to 5nm. RDNA2 was already a 7nm product, and presumably a lot of RDNA3's improvements came from the node shrink. Likewise, most of Ada's improvement in performance per watt is directly related to the node shrink from Ampere, since aside from newer-gen cores, a bigger L2 cache, etc., its architecture is largely the same as Ampere's.

 

But really, at a certain point the calculation is yields per wafer. If yields are good, that reduces costs. A smaller die means more dies per wafer, which again reduces costs. So this raises a lot of questions as to why AD103-, AD104-, and AD106-based products have risen so much in price, considering the small-to-mid-sized dies they are. I suspect that while some manufacturing costs have increased, the other portion is that Nvidia just wants high margins. That's really the only good way to explain how the RTX 4080, on a 379mm² die, is $1200 vs. the RTX 3080, on a 628mm² die, which had a $699 MSRP. If the only cost increases were manufacturing and inflation, I think at worst you'd see maybe $799-$899, not $1199.

 

The proof in that pudding is the fact that the RTX 3090 was $1499 and the RTX 4090 is $1599. AD102 is going to be the most expensive die to produce, yet its MSRP only increased $100. Everywhere else down the stack, the dies shrank, the memory controllers were cut down, and yet they asked for ridiculous prices.

Also consider that the RTX 4090 isn't even equivalent to the RTX 3090 in terms of binning - it's basically in between the RTX 3080 12GB and the RTX 3080 Ti. The RTX 3000 series was stacked with GA102 variants at the top end, though; by contrast, the 4090 is currently the only AD102 GPU in the RTX 4000 series.

Ryzen 7950x3D Direct Die NH-D15

RTX 4090 @133%/+230/+500

Builder/Enthusiast/Overclocker since 2012  //  Professional since 2017


21 minutes ago, Agall said:

Also consider that the RTX 4090 isn't even equivalent to the RTX 3090 in terms of binning - it's basically in between the RTX 3080 12GB and the RTX 3080 Ti. The RTX 3000 series was stacked with GA102 variants at the top end, though; by contrast, the 4090 is currently the only AD102 GPU in the RTX 4000 series.

Kinda had to. Samsung 8nm was ancient news by that point, functionally being a 10nm-class process. Could you imagine if they had made the 3080 a GA104 product? 😂

Zen 3 Daily Rig (2022 - Present): AMD Ryzen 9 5900X + Optimus Foundations AM4 | Nvidia RTX 3080 Ti FE + Alphacool Eisblock 3080 FE | G.Skill Trident Z Neo 32GB DDR4-3600 (@3733 c14) | ASUS Crosshair VIII Dark Hero | 2x Samsung 970 Evo Plus 2TB | Crucial MX500 1TB | Corsair RM1000x | Lian Li O11 Dynamic | LG 48" C1 | EK Quantum Kinetic TBE 200 w/ D5 | HWLabs GTX360 and GTS360 | Bitspower True Brass 14mm | Corsair 14mm White PMMA | ModMyMods Mod Water Clear | 9x BeQuiet Silent Wings 3 120mm PWM High Speed | Aquacomputer Highflow NEXT | Aquacomputer Octo

 

Test Bench: 

CPUs: Intel Core 2 Duo E8400, Core i5-2400, Core i7-4790K, Core i9-10900K, Core i3-13100, Core i9-13900KS

Motherboards: ASUS Z97-Deluxe, EVGA Z490 Dark, EVGA Z790 Dark Kingpin

GPUs: GTX 275 (RIP), 2x GTX 560, GTX 570, 2x GTX 650 Ti Boost, GTX 980, Titan X (Maxwell), x2 HD 6850

Bench: Cooler Master Masterframe 700 (bench mode)

Cooling: Heatkiller IV Pro Pure Copper | Koolance GPU-210 | HWLabs L-Series 360 | XSPC EX360 | Aquacomputer D5 | Bitspower Water Tank Z-Multi 250 | Monsoon Free Center Compressions | Mayhems UltraClear | 9x Arctic P12 120mm PWM PST


19 minutes ago, Agall said:

MCM with RDNA3 is an amazing innovation and likely easier to scale than MCM CPUs, and the yields of the GCDs should be far better given how small those dies are compared to the absolute monster that is AD102. RDNA3 should probably cost a lot less than it currently does, which is why, IMO, the 7900 XT/XTX were a bit disappointing - but maybe the expectation of lower prices with MCM was just wishful thinking.

Using the same assumptions as before, I can estimate the cost of a monolithic NAVI31 versus the actual NAVI31 GCD + 6x MCDs, which I previously put at $166. To estimate the monolithic NAVI31, I first take the area of the six MCDs. But these are N6, so their area can't be added directly to the N5 NAVI31; the link below suggests about a 35% area reduction going from N6 to N5, so that's what I used. With the shrunk MCDs folded in to make a monolithic NAVI31 die, the area increases and the yield drops. I make the cost of that $218, or about 32% more than the MCM arrangement. This is silicon cost only; it doesn't include possible extra packaging costs for MCM, which would eat into that potential saving. Again, this does not take into consideration the use of partially working dies, which would lower the effective cost of bigger dies significantly.

 

https://www.anandtech.com/show/14228/tsmc-reveals-6-nm-process-technology-7-nm-with-higher-transistor-density
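To make the yield side of that arithmetic concrete, here is a minimal sketch under the same assumptions (Murphy's yield model at 0.1 defects/cm², die areas from public specs, and the ~35% N6-to-N5 shrink from the link above); the wafer-price side is left out:

```python
import math

def murphy_yield(area_mm2, d0=0.1):  # d0 = defect density in defects/cm^2
    ad = (area_mm2 / 100.0) * d0
    return ((1 - math.exp(-ad)) / ad) ** 2

GCD_MM2, MCD_MM2 = 304.4, 37.5  # Navi 31 GCD (N5) and a single MCD (N6)

# Hypothetical monolithic Navi 31: the GCD plus six MCDs shrunk by the
# ~35% N6 -> N5 area reduction from the AnandTech link above.
mono_mm2 = GCD_MM2 + 6 * MCD_MM2 * 0.65

print(f"GCD yield:  {murphy_yield(GCD_MM2):.0%}")   # ~74%
print(f"MCD yield:  {murphy_yield(MCD_MM2):.0%}")   # ~96%
print(f"monolithic: {mono_mm2:.0f} mm^2, {murphy_yield(mono_mm2):.0%}")  # ~451 mm^2, ~65%
```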

 

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


12 minutes ago, porina said:

Using the same assumptions as before, I can estimate the cost of a monolithic NAVI31 versus the actual NAVI31 GCD + 6x MCDs, which I previously put at $166. To estimate the monolithic NAVI31, I first take the area of the six MCDs. But these are N6, so their area can't be added directly to the N5 NAVI31; the link below suggests about a 35% area reduction going from N6 to N5, so that's what I used. With the shrunk MCDs folded in to make a monolithic NAVI31 die, the area increases and the yield drops. I make the cost of that $218, or about 32% more than the MCM arrangement. This is silicon cost only; it doesn't include possible extra packaging costs for MCM, which would eat into that potential saving. Again, this does not take into consideration the use of partially working dies, which would lower the effective cost of bigger dies significantly.

 

https://www.anandtech.com/show/14228/tsmc-reveals-6-nm-process-technology-7-nm-with-higher-transistor-density

 

From my understanding of fabrication yields, the smaller the die the better: the number of defects per wafer is roughly constant, so cutting the die size gives you more dies per wafer while each individual die is less likely to catch a defect, which means more good dies at a similar wafer-level defect rate.

 

So larger dies have a higher rejection rate than smaller ones, which is a selling point of MCM. Those GCDs, being quite small compared to a normal high-end die, should yield in higher quantities, though I imagine the total number of rejects per wafer can still be higher simply because there are so many more dies - so who knows but AMD.
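As a rough illustration of the exposure: at 0.1 defects/cm², a ~600 mm² die like AD102 averages about 0.6 defects, while a 37.5 mm² MCD averages under 0.04, so the big die is far more likely to catch at least one.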

Ryzen 7950x3D Direct Die NH-D15

RTX 4090 @133%/+230/+500

Builder/Enthusiast/Overclocker since 2012  //  Professional since 2017


23 minutes ago, Agall said:

From my understanding of fabrication yields, the smaller the die the better: the number of defects per wafer is roughly constant, so cutting the die size gives you more dies per wafer while each individual die is less likely to catch a defect, which means more good dies at a similar wafer-level defect rate.

 

So larger dies have a higher rejection rate than smaller ones, which is a selling point of MCM. Those GCDs, being quite small compared to a normal high-end die, should yield in higher quantities, though I imagine the total number of rejects per wafer can still be higher simply because there are so many more dies - so who knows but AMD.

Yep, and given how large AD102 is, several rejected dies per wafer is a lot of loss when you can only fit so many dies on a wafer to begin with. A GCD, on the other hand...

 

EDIT: If I recall, for AD102 you have something like ~90 die candidates per wafer. AMD's would be more like ~190 candidates per wafer for the GCDs. The MCDs? Around ~1600 candidates on a single TSMC 6nm wafer. Figure 5 or 6 per 7900-series card, and that's what, ~260 cards' worth per wafer (if all 7900 XTXs)?
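For a back-of-the-envelope check on those counts: a 300 mm wafer has about 70,700 mm² of area, so AD102 at ~608 mm² caps out around 116 placements before edge loss; the standard approximation π(d/2)²/A − πd/√(2A) gives ~89, and a real placement map with scribe lanes lands close to the figures here.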

Zen 3 Daily Rig (2022 - Present): AMD Ryzen 9 5900X + Optimus Foundations AM4 | Nvidia RTX 3080 Ti FE + Alphacool Eisblock 3080 FE | G.Skill Trident Z Neo 32GB DDR4-3600 (@3733 c14) | ASUS Crosshair VIII Dark Hero | 2x Samsung 970 Evo Plus 2TB | Crucial MX500 1TB | Corsair RM1000x | Lian Li O11 Dynamic | LG 48" C1 | EK Quantum Kinetic TBE 200 w/ D5 | HWLabs GTX360 and GTS360 | Bitspower True Brass 14mm | Corsair 14mm White PMMA | ModMyMods Mod Water Clear | 9x BeQuiet Silent Wings 3 120mm PWM High Speed | Aquacomputer Highflow NEXT | Aquacomputer Octo

 

Test Bench: 

CPUs: Intel Core 2 Duo E8400, Core i5-2400, Core i7-4790K, Core i9-10900K, Core i3-13100, Core i9-13900KS

Motherboards: ASUS Z97-Deluxe, EVGA Z490 Dark, EVGA Z790 Dark Kingpin

GPUs: GTX 275 (RIP), 2x GTX 560, GTX 570, 2x GTX 650 Ti Boost, GTX 980, Titan X (Maxwell), x2 HD 6850

Bench: Cooler Master Masterframe 700 (bench mode)

Cooling: Heatkiller IV Pro Pure Copper | Koolance GPU-210 | HWLabs L-Series 360 | XSPC EX360 | Aquacomputer D5 | Bitspower Water Tank Z-Multi 250 | Monsoon Free Center Compressions | Mayhems UltraClear | 9x Arctic P12 120mm PWM PST


1 minute ago, Sir Beregond said:

Yep, and given how large AD102 is, several rejected dies per wafer is a lot of loss when you can only fit so many dies on a wafer to begin with. A GCD, on the other hand...

Nvidia should still have millions of AD102 dies that are at least the equivalent of the 3080's bin and aren't being used in AI cards. It seems like most of the fully unlocked AD102s are being used for those.

 

NVIDIA AD102 GPU Specs | TechPowerUp GPU Database

 

I just think Nvidia hasn't had the market pressure to justify releasing them - something that might change if Radeon releases a 7950 XTXXXXXX^2 or whatever at $1200, forcing Nvidia to cut the RTX 4080's price and release a 4080 Ti in its place at $1200 to compete.

Ryzen 7950x3D Direct Die NH-D15

RTX 4090 @133%/+230/+500

Builder/Enthusiast/Overclocker since 2012  //  Professional since 2017


39 minutes ago, Agall said:

So larger dies have a higher rejection rate than smaller ones, which is a selling point of MCM. Those GCDs, being quite small compared to a normal high-end die, should yield in higher quantities, though I imagine the total number of rejects per wafer can still be higher simply because there are so many more dies - so who knows but AMD.

From my yield estimations previously:

AD102 56%

GA102 55%

MCD 96%

NAVI31 74%

Hypothetical monolithic NAVI31 65%

 

Particularly with the bigger dies, there is potential to salvage dies with defects as cut-down models, so the effective yield won't be as bad as it looks.

 

22 minutes ago, Sir Beregond said:

EDIT: If I recall, for AD102 you have something like ~90 die candidates per wafer. AMD's would be more like ~190 candidates per wafer for the GCDs. The MCDs? Around ~1600 candidates on a single TSMC 6nm wafer. Figure 5 or 6 per 7900-series card, and that's what, ~260 cards' worth per wafer (if all 7900 XTXs)?

The calculator I used with defaults gives 80 complete AD102 candidates, but playing a little with shifting the pattern I could get it up to 84.

MCD 1576

NAVI31 177 to 181 with shifting

 

http://cloud.mooreelite.com/tools/die-yield-calculator/index.html

The calculator I used is above. Pick the 300mm / 12" wafer. I used a defect density of 0.1/cm² throughout, which is the known ballpark for N7 and N5. You can look up the die areas. Lock the x-y dimensions and use the square root of the area; shape doesn't matter within reason, area does, so there's no need to look for the exact x and y sizes.
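(For reference, those percentages are consistent with Murphy's classic model, Y = ((1 − e^(−A·D)) / (A·D))², at D = 0.1/cm²: AD102 at ~608 mm² gives A·D ≈ 0.61 and Y ≈ 56%; the ~304 mm² GCD gives ≈ 74%; and the 37.5 mm² MCD gives ≈ 96%, matching the list above.)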

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


53 minutes ago, porina said:

From my yield estimations previously:

AD102 56%

GA102 55%

MCD 96%

NAVI31 74%

Hypothetical monolithic NAVI31 65%

 

Particularly with the bigger dies, there is potential to salvage dies with defects as cut-down models, so the effective yield won't be as bad as it looks.

 

The calculator I used with defaults gives 80 complete AD102 candidates, but playing a little with shifting the pattern I could get it up to 84.

MCD 1576

NAVI31 177 to 181 with shifting

 

http://cloud.mooreelite.com/tools/die-yield-calculator/index.html

The calculator I used is above. Pick the 300mm / 12" wafer. I used a defect density of 0.1/cm² throughout, which is the known ballpark for N7 and N5. You can look up the die areas. Lock the x-y dimensions and use the square root of the area; shape doesn't matter within reason, area does, so there's no need to look for the exact x and y sizes.

Is that with the yields figured in? I did specify die candidates, not that you'd get that many.

 

Oh OK, I see how that works.

Zen 3 Daily Rig (2022 - Present): AMD Ryzen 9 5900X + Optimus Foundations AM4 | Nvidia RTX 3080 Ti FE + Alphacool Eisblock 3080 FE | G.Skill Trident Z Neo 32GB DDR4-3600 (@3733 c14) | ASUS Crosshair VIII Dark Hero | 2x Samsung 970 Evo Plus 2TB | Crucial MX500 1TB | Corsair RM1000x | Lian Li O11 Dynamic | LG 48" C1 | EK Quantum Kinetic TBE 200 w/ D5 | HWLabs GTX360 and GTS360 | Bitspower True Brass 14mm | Corsair 14mm White PMMA | ModMyMods Mod Water Clear | 9x BeQuiet Silent Wings 3 120mm PWM High Speed | Aquacomputer Highflow NEXT | Aquacomputer Octo

 

Test Bench: 

CPUs: Intel Core 2 Duo E8400, Core i5-2400, Core i7-4790K, Core i9-10900K, Core i3-13100, Core i9-13900KS

Motherboards: ASUS Z97-Deluxe, EVGA Z490 Dark, EVGA Z790 Dark Kingpin

GPUs: GTX 275 (RIP), 2x GTX 560, GTX 570, 2x GTX 650 Ti Boost, GTX 980, Titan X (Maxwell), x2 HD 6850

Bench: Cooler Master Masterframe 700 (bench mode)

Cooling: Heatkiller IV Pro Pure Copper | Koolance GPU-210 | HWLabs L-Series 360 | XSPC EX360 | Aquacomputer D5 | Bitspower Water Tank Z-Multi 250 | Monsoon Free Center Compressions | Mayhems UltraClear | 9x Arctic P12 120mm PWM PST


  • 6 months later...

Updated chart with RTX 4000 Super and 4090D specs.

Ryzen 7950x3D Direct Die NH-D15

RTX 4090 @133%/+230/+500

Builder/Enthusiast/Overclocker since 2012  //  Professional since 2017


16 hours ago, Agall said:

Updated chart with RTX 4000 Super and 4090D specs.

So you mean the chart in your OP?

I've been using computers since around 1978, started learning programming in 1980 on Apple IIs, started learning about hardware in 1990, ran a BBS from 1990-95, built my first Windows PC around 2000, taught myself malware removal starting in 2005 (also learned on Bleeping Computer), learned web dev starting in 2017, and I think I can fill a thimble with all that knowledge. 😉 I'm not an expert, which is why I keep investigating the answers that others give to try and improve my knowledge, so feel free to double-check the advice I give.

My phone's auto-correct is named Otto Rong.🤪😂


On 1/8/2024 at 12:42 PM, Agall said:

Updated chart with RTX 4000 Super and 4090D specs.

Is the right-hand column the 4000 series? Seems like a header got lost... or my eyes are messing up again... or my brain. 😆 

I've been using computers since around 1978, started learning programming in 1980 on Apple IIs, started learning about hardware in 1990, ran a BBS from 1990-95, built my first Windows PC around 2000, taught myself malware removal starting in 2005 (also learned on Bleeping Computer), learned web dev starting in 2017, and I think I can fill a thimble with all that knowledge. 😉 I'm not an expert, which is why I keep investigating the answers that others give to try and improve my knowledge, so feel free to double-check the advice I give.

My phone's auto-correct is named Otto Rong.🤪😂


3 hours ago, RevGAM said:

Is the right-hand column the 4000 series? Seems like a header got lost... or my eyes are messing up again... or my brain. 😆 

I'm seeing it

Ryzen 7950x3D Direct Die NH-D15

RTX 4090 @133%/+230/+500

Builder/Enthusiast/Overclocker since 2012  //  Professional since 2017

