One of the biggest consumer graphics chips ever produced by AMD? YouTuber claims Big Navi's die size is 536mm².

Summary

 

 

We don't yet know Big Navi's specs, though a lot of speculation and rumours have been popping up recently, with the YouTube channel Not an Apple Fan sharing information claiming the top Big Navi SKU will have a boost clock of 2.2 GHz, 80 compute units and 16GB of GDDR6 on a 256-bit bus. Now the YouTube channel Coreteks has shown a picture of the die of what he believes to be the XT/XL variant of the Navi 21 die (presumably the RX 6800 XT/6900 XT, if we go by AMD's previous naming schemes for the 5700 series and the 5600 XT). This leaves the other variants, the XTX and the XL/XT, which were detailed by rogame on Twitter months ago. The die pictured in the image is a whopping 536mm², which would be the second biggest consumer graphics chip AMD have ever made, behind Fiji, though the actual die may be slightly bigger or smaller due to the masking Coreteks has done on the image. When the size of the die is combined with the efficiency gains of TSMC's N7+ process, which this GPU is rumoured to be manufactured on (though this is unconfirmed, at least for the Navi cards; the upcoming consoles are on this node, however), Coreteks calculates the transistor budget to be 26.3 billion transistors, slightly shy of the 28.3 billion found on the 3090/3080.
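As a rough sanity check on that transistor figure, here's a quick back-of-envelope calculation. This is my own sketch, not Coreteks' exact method: it scales Navi 10's known density (10.3 billion transistors in 251mm² on N7) up to 536mm², with an assumed ~20% density uplift for N7+ EUV.

```python
# Back-of-envelope transistor estimate (my assumptions, not Coreteks' method).
die_area_mm2 = 536.0             # die size estimated from the leaked photo
navi10_density = 10_300 / 251    # Navi 10: 10.3B transistors in 251 mm^2, ~41 Mtr/mm^2
euv_density_gain = 1.2           # assumed ~20% density uplift for N7+ EUV

est_billions = die_area_mm2 * navi10_density * euv_density_gain / 1000
print(f"~{est_billions:.1f}B transistors")  # ~26.4B, close to the quoted 26.3B
```

With those assumptions the estimate lands right next to Coreteks' 26.3 billion, so his number is at least internally consistent with an N7+ density uplift over Navi 10.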

 

Quotes
WCCFTech's Summary Article

Quote

An alleged picture of AMD's Big Navi "Navi 21" GPU which would power the next-generation flagship, Radeon RX 6900 XT, graphics card has been leaked out by Coreteks. Powered by the RDNA2 architecture, the AMD Big Navi GPU looks to be massive in size when compared to past AMD flagship GPU offerings.

Quote

Coreteks has removed all references and labels over the GPU die to mask his sources but he did state that the GPU we are looking at is indeed the Big Navi "Navi 21" GPU which is going to be featured on the flagship Radeon RX 6900 XT graphics card. The GPU seems fairly large & is said to measure at around 536mm2 though those are just rough calculations and the final die size could be higher or lower. The die size also seems to be close to the previous rumors which pointed out a 505mm2 die size for the flagship GPU.

 

Coreteks' Video

Quote

So I used a few elements on the rest of the board to get an accurate measurement of the die, so for instance I took measurements based on the die's relative size to the PCIe connector on the board. Putting them side by side, the die is roughly 29.5 PCIe lanes tall by 18.5 PCIe lanes wide. I did use other elements as well, such as the GDDR6 modules on the PCB, to get a similar approximation of the scale. So anyway, after much measuring, the die comes in at 29mm tall and 18.5 millimeters wide, or 536mm² roughly.
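The arithmetic behind that scale-reference method is easy to reproduce. The sketch below assumes a 1.0mm contact pitch on the PCIe edge connector; Coreteks doesn't state the exact pitch he used, so treat the numbers as illustrative.

```python
# Scale-reference measurement sketch; the 1.0 mm PCIe contact pitch is my assumption.
pcie_contact_pitch_mm = 1.0
die_height_mm = 29.0 * pcie_contact_pitch_mm  # ~29.5 contacts tall, rounded down as in the video
die_width_mm = 18.5 * pcie_contact_pitch_mm   # ~18.5 contacts wide

die_area_mm2 = die_height_mm * die_width_mm
print(f"{die_area_mm2:.1f} mm^2")  # 536.5 mm^2, i.e. "roughly 536mm²"
```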

 

Quote

Again I should note that this is the XL and XT; this is NOT the XTX variant. Very few people have access to that, even inside AMD. So for those who were hoping to see HBM chips somewhere they are not there at least on the XT and XL.

 

My thoughts

Though the transistor count is lower, this could be offset by the higher clock speeds detailed by Not an Apple Fan in his leak: a card that boosts up to 500MHz faster than the 3080, not counting overclocking, could put AMD very close to NVIDIA in terms of performance, I think. Ampere's problems with crashing when going above 2GHz might also contribute. The only mystery is the 256-bit bus, though there has been some talk by the channel RedGamingTech about a large pool of cache that can offset the disadvantages of the narrow memory bus. Recent trademarks for an "Infinity Cache" filed by AMD also seem to point this way (the name would not be out of place, for sure), though it could instead be for Zen 3, given that the cache will be shared between all CCXs on AMD's next-gen CPUs.

At the end of the day, though, the only concrete information we have is the picture of the die itself, plus other images and details which have leaked previously. Performance extrapolation is not something I'd advise doing here, as we don't have concrete confirmation of the specs of the die, or of what the transistors are being used for, apart from the rough die size now. Though I'd definitely expect AMD to be using the N7+ process, which includes EUV, to maximise their performance this generation. Coreteks didn't do any performance extrapolations either, but we now have a clearer picture of what AMD are targeting with this generation. They are launching a high-end product and are targeting NVIDIA's high-end RTX 3080, possibly with a card that fits into the 3090's class, though I doubt AMD will be able to match or come anywhere close to the 3090. I'm hoping AMD can bring the fight to NVIDIA, because at the end of the day competition in the GPU market brings better prices for us all.

 

Sources
 

https://wccftech.com/amd-big-navi-21-gpu-for-flagship-radeon-r9-6900-xt-graphics-card-allegedly-pictured/
https://coreteks.tech/articles/index.php/2020/10/01/radeon-6900xt-specs-leak-competes-with-rtx-3090/

 


 

Edited by Albal_156
Source was incorrectly stating the die/graphics chip size for the Fury X. Title was changed.

My Rigs | CPU: Ryzen 9 5900X | Motherboard: ASRock X570 Taichi | CPU Cooler: NZXT Kraken X62 | GPU: AMD Radeon Powercolor 7800XT Hellhound | RAM: 32GB of G.Skill Trident Z Neo @3600MHz | PSU: EVGA SuperNova 750W G+ | Case: Fractal Design Define R6 USB-C TG | SSDs: WD BLACK SN850X 2TB, Samsung 970 EVO 1TB, Samsung 860 EVO 1TB | SSHD: Seagate FireCuda 2TB (Backup) | HDD: Seagate IronWolf 4TB (Backup of Other PCs) | Capture Card: AVerMedia Live Gamer HD 2 | Monitors: AOC G2590PX & Acer XV272U Pbmiiprzx | UPS: APC BR1500GI Back-UPS Pro | Keyboard: Razer BlackWidow Chroma V2 | Mouse: Razer Naga Pro | OS: Windows 10 Pro 64bit

First System: Dell Dimension E521 with AMD Athlon 64 X2 3800+, 3GB DDR2 RAM

 

PSU Tier List          AMD Motherboard Tier List          SSD Tier List


Yes, but if you don't count the HBM on the die then it is. Coreteks said so himself, actually. Plus, the graphics die is 536mm²; if you added HBM2e to that it would be bigger. Since this card uses GDDR6, I'm comparing the graphics chip to the Fury X's graphics chip.



Ah, for a moment there I thought it was the "biggest ever made", by anyone. But nah, Nvidia's die for the 3090 is 628mm².

 

I'm surprised it took AMD this long to make a beefy GPU die. I wonder why they limited themselves to that size, though, instead of going ham on it and equaling or surpassing Nvidia's die size for more transistors?

CPU: AMD Ryzen 3700x / GPU: Asus Radeon RX 6750XT OC 12GB / RAM: Corsair Vengeance LPX 2x8GB DDR4-3200
MOBO: MSI B450m Gaming Plus / NVME: Corsair MP510 240GB / Case: TT Core v21 / PSU: Seasonic 750W / OS: Win 10 Pro


45 minutes ago, Albal_156 said:

The die pictured in the image is a whopping 536mm2, which would be the biggest consumer graphics chip AMD have ever made

Um, to be the bearer of bad news, no, this is in fact wrong. The R9 Nano/Fury/Fury X GCN3 die size was 596mm².


42 minutes ago, Albal_156 said:

Yes, but if you don't count the HBM on the die then it is. Coreteks said so himself, actually. Plus, the graphics die is 536mm²; if you added HBM2e to that it would be bigger. Since this card uses GDDR6, I'm comparing the graphics chip to the Fury X's graphics chip.

No, the GPU die is 596mm²; this is not including the HBM.

 

Quote

Finally, as large as the Fiji GPU is, the silicon interposer it sits on is even larger. The interposer measures 1011mm², nearly twice the size of Fiji. Since Fiji and its HBM stacks need to fit on top of it, the interposer must be very large to do its job, and in the process it pushes its own limits. The actual interposer die is believed to exceed the reticle limit of the 65nm process AMD is using to have it built, and as a result the interposer is carefully constructed so that only the areas that need connectivity receive metal layers.

 


You want me to correct it to chip then? The Fury X isn't consumer though, is it? It's like saying the Titan RTX is a consumer card.



2 minutes ago, Albal_156 said:

You want me to correct it to chip then? That's definitely correct. HBM is not graphics silicon.

Read my above post; the GPU die, and only the GPU die, is 596mm². Your source is just wrong. Not saying you are, it's not your fault.


1 hour ago, Albal_156 said:

The Fury X isn't consumer though is it

Of course it is consumer; just because it used HBM doesn't make it not. It was the best AMD could do at the time and wasn't priced any higher than any other market option. It definitely was not a Titan equivalent.


7 minutes ago, leadeater said:

Of course it is consumer; just because it used HBM doesn't make it not. It was the best AMD could do at the time and wasn't priced any higher than any other market option. It definitely was not a Titan equivalent.

Ahh, I'm totally wrong on that then. Fixed in the title and text. It's gonna be really interesting to see if this GPU has a 256-bit bus. You'd maybe think 512-bit would be appropriate. Though I don't think AMD would intentionally gimp their GPUs like that, so I'm guessing they have something that negates it? For sure it's gonna be interesting.



*yawn*. We'll know soon enough. When GPUs become available, they become available. Right now they're not, so it doesn't matter. There's a saying: "a bird in the hand is worth two in the bush". These things aren't even in the bush. They're "I thought I saw a bird".

Not a pro, not even very good.  I’m just old and have time currently.  Assuming I know a lot about computers can be a mistake.

 

Life is like a bowl of chocolates: there are all these little crinkly paper cups everywhere.


1 hour ago, Albal_156 said:

Ahh, I'm totally wrong on that then. Fixed in the title and text. It's gonna be really interesting to see if this GPU has a 256-bit bus. You'd maybe think 512-bit would be appropriate. Though I don't think AMD would intentionally gimp their GPUs like that, so I'm guessing they have something that negates it? For sure it's gonna be interesting.

Using a very wide memory bus means huge cost increases. There's no real need for it, though. Thanks to framebuffer compression, we can use a much narrower memory bus and still get the same performance, as is evident from the RTX 3000 series: massive performance leaps with basically the same bus width, but higher memory speed. It's similar to dual- and quad-channel memory on motherboards. You can make the interface wider by adding channels, or you can just make the memory faster. Generally, it's cheaper to bolt on faster VRAM than to make a core with a wider memory interface.


One day soon the consoles and Big Navis will be released, and they will be no more hidden than the 3080 is. Someone will take them apart and shoot the die, and we will know.

Not a pro, not even very good.  I’m just old and have time currently.  Assuming I know a lot about computers can be a mistake.

 

Life is like a bowl of chocolates: there are all these little crinkly paper cups everywhere.


46 minutes ago, RejZoR said:

It's similar to dual and quad channel memory on motherboards. You can make it wider by adding channels. Or you can just make memory faster. Generally, it's cheaper to bolt on faster VRAM than making core with wider memory interface.

Each memory chip is essentially a channel. So to go wider, you need more chips, and the capacity that goes with them: 8/16GB and 256-bit width implies 8 modules, or multiples thereof. Faster isn't easy. Nvidia are using 19 Gbps chips on the 3080, with the top grade expected around 21 Gbps; that's not a lot of headroom for AMD if they choose to go that way. You'd get more bandwidth putting an extra channel on than by using the faster grade.
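To put numbers on that trade-off, here's a quick sketch using the figures above (GDDR6 bandwidth is simply bus width times per-pin rate; the 288-bit option is a hypothetical one-extra-channel configuration, not a rumoured SKU):

```python
# GDDR6 bandwidth in GB/s = bus width (bits) * per-pin rate (Gbps) / 8 bits per byte.
def bandwidth_gb_s(bus_width_bits: int, gbps_per_pin: float) -> float:
    return bus_width_bits * gbps_per_pin / 8

base = bandwidth_gb_s(256, 19)    # 608.0 GB/s: 256-bit at 19 Gbps, the 3080's grade
faster = bandwidth_gb_s(256, 21)  # 672.0 GB/s: same bus, top 21 Gbps grade
wider = bandwidth_gb_s(288, 19)   # 684.0 GB/s: one extra 32-bit channel at 19 Gbps
print(base, faster, wider)
```

The extra channel (684 GB/s) edges out the faster grade (672 GB/s), which is the point being made, though it also drags in another memory chip and the capacity that comes with it.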

Main system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, Corsair Vengeance Pro 3200 3x 16GB 2R, RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


2 hours ago, Albal_156 said:

Im guessing they have something that negates it?

Supposedly some sort of cache; if that's in the core, then a bit of that die size is occupied by memory. The Moore's Law Is Dead channel has some info that his sources are quite certain of, part of it being that AMD made some magic with the cache to require much lower memory throughput.

Please beware, if you're like me, this video might make you really hopeful for RDNA2, which might really hurt 3 months from now.


2 hours ago, porina said:

Each memory chip is essentially a channel. So to go wider, you need more chips, and the capacity that goes with them: 8/16GB and 256-bit width implies 8 modules, or multiples thereof. Faster isn't easy. Nvidia are using 19 Gbps chips on the 3080, with the top grade expected around 21 Gbps; that's not a lot of headroom for AMD if they choose to go that way. You'd get more bandwidth putting an extra channel on than by using the faster grade.

The reason for that is that you need to run PCB traces from all those chips to the GPU, and the GPU itself needs a memory controller with that many channels so you can actually connect them together. That significantly ups the complexity, and with it the cost.


Well, we'll see, but it would seem about right really, just looking at the RX 5700 XT die and doubling upon it.

| Ryzen 7 7800X3D | AM5 B650 Aorus Elite AX | G.Skill Trident Z5 Neo RGB DDR5 32GB 6000MHz C30 | Sapphire PULSE Radeon RX 7900 XTX | Samsung 990 PRO 1TB with heatsink | Arctic Liquid Freezer II 360 | Seasonic Focus GX-850 | Lian Li Lanccool III | Mousepad: Skypad 3.0 XL / Zowie GTF-X | Mouse: Zowie S1-C | Keyboard: Ducky One 3 TKL (Cherry MX-Speed-Silver)Beyerdynamic MMX 300 (2nd Gen) | Acer XV272U | OS: Windows 11 |


13 hours ago, TetraSky said:

Ah, for a moment there I thought it was the "biggest ever made", by anyone. But nah, Nvidia's die for the 3090 is 628mm².

One of Nvidia's Volta or Turing dies was north of 800mm².

 

13 hours ago, TetraSky said:

I'm surprised it took AMD this long to make a beefy GPU die. I wonder why they limited themselves to that size, though, instead of going ham on it and equaling or surpassing Nvidia's die size for more transistors?

Big dies are more expensive. It's harder to sell them for cheap.

 

If AMD really wanted to, it could have sold the RX 5700 and 5700 XT at $199 because of how tiny the die was. Lisa Su insisted on 40% profit margins, so we got bigger price tags to match.

 

Nvidia has little to no room to discount the RTX 3080 because of the high bill of materials and large die size.
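The "big dies are more expensive" point compounds quickly, because yield falls as die area grows. A toy Poisson yield model makes this concrete; the defect density is an illustrative assumption of mine, and the gross-die count ignores wafer edge loss:

```python
import math

# Toy cost-of-big-dies model: Poisson yield with an assumed defect density.
def good_dies_per_wafer(die_mm2: float, wafer_diameter_mm: float = 300,
                        defects_per_cm2: float = 0.1) -> float:
    wafer_area_mm2 = math.pi * (wafer_diameter_mm / 2) ** 2
    gross_dies = wafer_area_mm2 / die_mm2          # ignores edge loss for simplicity
    yield_fraction = math.exp(-defects_per_cm2 * die_mm2 / 100)
    return gross_dies * yield_fraction

small = good_dies_per_wafer(251)  # a Navi 10-sized die
big = good_dies_per_wafer(536)    # a Big Navi-sized die
print(f"{small:.0f} vs {big:.0f} good dies per wafer")
```

Under these assumptions, roughly 2.1x the area costs closer to 2.8x per good die, before edge loss is even counted, which is part of why a flagship-sized die isn't a free decision.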

Judge a product on its own merits AND the company that made it.

How to setup MSI Afterburner OSD | How to make your AMD Radeon GPU more efficient with Radeon Chill | (Probably) Why LMG Merch shipping to the EU is expensive

Oneplus 6 (Early 2023 to present) | HP Envy 15" x360 R7 5700U (Mid 2021 to present) | Steam Deck (Late 2022 to present)

 

Mid 2023 AlTech Desktop Refresh - AMD R7 5800X (Mid 2023), XFX Radeon RX 6700XT MBA (Mid 2021), MSI X370 Gaming Pro Carbon (Early 2018), 32GB DDR4-3200 (16GB x2) (Mid 2022

Noctua NH-D15 (Early 2021), Corsair MP510 1.92TB NVMe SSD (Mid 2020), beQuiet Pure Wings 2 140mm x2 & 120mm x1 (Mid 2023),


8 hours ago, LAwLz said:

Just remember guys, size isn't everything. 

But a bigger package never hurt either 😉.

Judge a product on its own merits AND the company that made it.

How to setup MSI Afterburner OSD | How to make your AMD Radeon GPU more efficient with Radeon Chill | (Probably) Why LMG Merch shipping to the EU is expensive

Oneplus 6 (Early 2023 to present) | HP Envy 15" x360 R7 5700U (Mid 2021 to present) | Steam Deck (Late 2022 to present)

 

Mid 2023 AlTech Desktop Refresh - AMD R7 5800X (Mid 2023), XFX Radeon RX 6700XT MBA (Mid 2021), MSI X370 Gaming Pro Carbon (Early 2018), 32GB DDR4-3200 (16GB x2) (Mid 2022

Noctua NH-D15 (Early 2021), Corsair MP510 1.92TB NVMe SSD (Mid 2020), beQuiet Pure Wings 2 140mm x2 & 120mm x1 (Mid 2023),


13 minutes ago, AluminiumTech said:

But a bigger package never hurt either 😉.

Noo! It's how they use it that matters!

Just look a couple of generations ago with the AMD Fury: Nvidia's die was smaller but way better.


9 hours ago, Loote said:

Supposedly some sort of cache; if that's in the core, then a bit of that die size is occupied by memory. The Moore's Law Is Dead channel has some info that his sources are quite certain of, part of it being that AMD made some magic with the cache to require much lower memory throughput.

Please beware, if you're like me, this video might make you really hopeful for RDNA2, which might really hurt 3 months from now.

Also beware: this guy was more wrong than right regarding Ampere leaks. Believe him at your own risk.

CPU: i7 4790k, RAM: 16GB DDR3, GPU: GTX 1060 6GB


1 hour ago, AluminiumTech said:

One of Nvidia's Volta or Turing dies was north of 800mm².

 

Big dies are more expensive. It's harder to sell them for cheap.

 

If AMD really wanted to, it could have sold the RX 5700 and 5700 XT at $199 because of how tiny the die was. Lisa Su insisted on 40% profit margins, so we got bigger price tags to match.

 

Nvidia has little to no room to discount the RTX 3080 because of the high bill of materials and large die size.

This is a bit I am extremely vague on: actual costs. I don't know what the real manufacturing costs are for GPUs. One thing that seems to be relevant is silicon die size, but I don't know what the numbers are. I get the impression that margins are pretty dang healthy for both companies, though. "Little to no room" assumes that the GA chips are being sold near cost.

Not a pro, not even very good.  I’m just old and have time currently.  Assuming I know a lot about computers can be a mistake.

 

Life is like a bowl of chocolates: there are all these little crinkly paper cups everywhere.


1 hour ago, Bombastinator said:

This is a bit I am extremely vague on: actual costs. I don't know what the real manufacturing costs are for GPUs. One thing that seems to be relevant is silicon die size, but I don't know what the numbers are. I get the impression that margins are pretty dang healthy for both companies, though. "Little to no room" assumes that the GA chips are being sold near cost.

For the 3080 specifically, the cooler required costs between $150 and $200. MLID says the total bill of materials for 3080 cards is around $500-600.

Judge a product on its own merits AND the company that made it.

How to setup MSI Afterburner OSD | How to make your AMD Radeon GPU more efficient with Radeon Chill | (Probably) Why LMG Merch shipping to the EU is expensive

Oneplus 6 (Early 2023 to present) | HP Envy 15" x360 R7 5700U (Mid 2021 to present) | Steam Deck (Late 2022 to present)

 

Mid 2023 AlTech Desktop Refresh - AMD R7 5800X (Mid 2023), XFX Radeon RX 6700XT MBA (Mid 2021), MSI X370 Gaming Pro Carbon (Early 2018), 32GB DDR4-3200 (16GB x2) (Mid 2022

Noctua NH-D15 (Early 2021), Corsair MP510 1.92TB NVMe SSD (Mid 2020), beQuiet Pure Wings 2 140mm x2 & 120mm x1 (Mid 2023),


4 minutes ago, AluminiumTech said:

For the 3080 specifically, the cooler required costs between $150 and $200. MLID says the total bill of materials for 3080 cards is around $500-600.

And MLID was wrong on pretty much everything else he said about the 30 series, so I wouldn't bet on him getting the price right either.

