
AMD rumoured to have cancelled top-end die Navi 4C due to complications; Navi 4C diagram also leaked

AlTech

Summary

AMD is rumoured to have cancelled Navi 4C (aka Navi 41), and a diagram of Navi 4C has also been leaked. The alleged cancellation is blamed on complications caused by Navi 4C's new architecture, which places CUs inside Shader Engine Dies stacked on top of Active Interposer Dies, which in turn sit on the package substrate, and on the massive scaling-up of chiplet usage in the design.

 

AMD is believed to have diverted employees working on Navi 4C towards Navi 5 so that Navi 4 can be released in 2024 with Navi 5 launching in 2025.

 

Image from MLID via Videocardz


 

Quotes

Quote

The leaked diagram showcases a large package substrate that accommodates four dies: three AIDs (Active Interposer Dies) and one MID (Multimedia and I/O die). It appears that each AID would house as many as 3 SEDs (Shader Engine Dies). This complex configuration represents the alleged RDNA4 architecture, or at least a segment of the GPU that was intended for future release. Notably, the diagram only presents one side of the design, omitting the complete picture. MLID notes that there should also be memory controller dies on each side, although their exact number remains unknown.

The proposed Navi 4C GPU would have incorporated 13 to 20 chiplets, marking a substantial increase in complexity compared to RDNA3 multi-die designs such as Navi 31 or the upcoming Navi 32. Interestingly, a similar design was identified in a patent titled “Die stacking for modular parallel processors” discovered by a subscriber of MLID, which showcased ‘Virtual Compute Die’ interconnected through a Bridge Chip

 

My thoughts

Not gonna lie, looking at this insanely expanded complexity I am not surprised it got cancelled. I do hope that the rumours are correct and that AMD will be back making high end GPUs for Navi 5.
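For a rough sense of where the quoted 13-to-20 chiplet figure comes from, here is a minimal tally based only on what the leaked diagram shows; the memory controller die count is my assumption, since the leak only shows one side of the package.

```python
# Rough chiplet tally for the rumoured Navi 4C package, based on the leaked diagram.
# The memory controller die (MCD) count is an assumption: the leak only shows one side.

aids = 3                 # Active Interposer Dies visible in the diagram
seds_per_aid = 3         # up to 3 Shader Engine Dies on top of each AID
mids = 1                 # Multimedia and I/O die

base = aids + aids * seds_per_aid + mids          # 3 + 9 + 1 = 13 chiplets
print(f"Minimum shown in the diagram: {base} chiplets")

# Hypothetical MCD counts that would land in the rumoured 13-20 chiplet range.
for mcds in (4, 6, 7):
    print(f"With {mcds} memory controller dies: {base + mcds} chiplets")
```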

 

 

Sources

https://videocardz.com/newz/amds-canceled-radeon-rx-8000-navi-4c-gpu-diagram-has-been-partially-leaked


Just now, WereCat said:

MLID and a water mark? Must be legit ... 😄

AMD's patent filings back it up and news sources are writing about it.


1 hour ago, AlTech said:

Not gonna lie, looking at this insanely expanded complexity I am not surprised it got cancelled. I do hope that the rumours are correct and that AMD will be back making high end GPUs for Navi 5.

For me, that's gonna suck, as I'll probably upgrade to Blackwell, or towards the end of Blackwell when Nvidia is about to launch their next lineup and RDNA 5 (possibly??).


I mean, I'm all for chiplets but this doesn't seem to be worth it. Active interposers are expensive chips, the microsoldering is expensive too.
They are probably spending more on assembly than saving with higher yields/more configurability. 

 

I guess they were going that route because the current layout just doesn't scale well.
I had a link to their presentation on how they managed to pull off chiplets on a graphics card, and it wasn't beautiful.

Going to look for it.

 

EDIT: Here it is.

 


2 minutes ago, Forbidden Wafer said:

I mean, I'm all for chiplets but this doesn't seem to be worth it. Active interposers are expensive chips, the microsoldering is expensive too.
They are probably spending more on assembly than saving with higher yields/more configurability. 

It's not just about money. It's about performance. At a certain point you run into the reticle limit where making a bigger GPU is no longer possible.

 

Nvidia keeps inching towards it. AMD's steering well clear of it and wants to figure out how to scale GPUs to be a lot bigger without costing a lot more or running into the laws of physics.

2 minutes ago, Forbidden Wafer said:

I guess they were going that route because the current layout just doesn't scale well.

The problem right now is that the MCDs are separate but the GCDs are still really big dies that are effectively monolithic dies glued to the interposer layer with MCDs connected to it.

 

If AMD can break down GCDs then their GPU designs can become much more scalable and much cheaper.
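For reference, the current split looks roughly like the first entry below (areas are the commonly reported Navi 31 figures); the second entry is purely a hypothetical illustration of what "breaking down the GCD" could mean, not a leaked spec.

```python
# Commonly reported Navi 31 die areas (approximate, mm^2): one large, effectively
# monolithic GCD plus six small memory cache dies (MCDs).
navi31 = {"GCD": [304], "MCD": [37] * 6}

# Hypothetical illustration only (not a leaked spec): splitting the graphics die
# itself into several smaller shader dies, the way Zen splits cores into CCDs.
split_design = {"SED": [100] * 3, "MCD": [37] * 6}

def total_area(chip):
    return sum(area for dies in chip.values() for area in dies)

print("Navi 31 silicon:", total_area(navi31), "mm^2")        # ~526 mm^2
print("Split design:   ", total_area(split_design), "mm^2")  # same ballpark, smaller pieces
```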


8 minutes ago, AlTech said:

The problem right now is that the MCDs are separate but the GCDs are still really big dies that are effectively monolithic dies glued to the interposer layer with MCDs connected to it.

Yeah, but why? That's the answer you get in the video I added in the previous post. I think that is why they are thinking about stacking: adding 3D layers for easier routing.


Just now, Forbidden Wafer said:

Yeah, but why? That's the answer you get in the video I added in the previous post. I think that is why they are thinking about stacking: adding 3D layers for easier routing.

True, but it poses a thermal challenge, which is why you can't stack them vertically directly on top of each other.


Which is likely why the Navi 4C design called for spreading them out a bit horizontally and vertically.


2 hours ago, Forbidden Wafer said:

I mean, I'm all for chiplets but this doesn't seem to be worth it. Active interposers are expensive chips, the microsoldering is expensive too.
They are probably spending more on assembly than saving with higher yields/more configurability. 

 

I guess they were going that route because the current layout just doesn't scale well.
I had a link to their presentation on how they managed to pull off chiplets on a graphics card, and it wasn't beautiful.

Going to look for it.

 

EDIT: Here it is.

 

There is no future in not going chiplet. TSMC 3nm is the LAST hurrah for large monolithic dies.
The reticle limit of ASML machines capable of doing GAAFET is 400-something mm². As in, you CAN'T go bigger (unless you want to do redundant stuff like Cerebras).

It's not just that the current layouts don't scale well.

Better to learn now than when you have to.
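For context on where that "400-something" figure comes from: a standard EUV scanner exposes a 26 mm × 33 mm field, and the announced High-NA EUV tools halve one dimension of it, so the largest die you can print in a single exposure drops to roughly 429 mm². A quick check:

```python
# Exposure field sizes, in mm. The standard EUV reticle field is 26 x 33 mm;
# High-NA EUV scanners halve one dimension, which is where the
# "400-something mm^2" hard limit for a single die comes from.
standard_field = 26 * 33        # 858 mm^2
high_na_field = 26 * 16.5       # 429 mm^2

print(f"Standard EUV reticle limit: {standard_field} mm^2")
print(f"High-NA EUV reticle limit:  {high_na_field} mm^2")
```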


1 minute ago, starsmine said:

There is no future in not going chiplet. TSMC 3nm is the LAST hurrah for large monolithic dies.

I know, but I mean that they are going to chiplets of chiplets, which seems way too aggressive for now.

There need to be a lot of improvements for that to be really worth the additional trouble.


I've seen the rumours of no high-end AMD GPU next round, but not the why. If this is the "why", it would be an example that not everything AMD tries works out.

 

Just on the screenshot alone that's 10 logic pieces and 6 connectivity above substrate, which by itself is already in EPYC territory. It isn't clear how many more repetitions of this there might be in the other dimension. Even if you treat a loaded AID as a subunit, there must be quite some manufacturing (yield) risk to this complexity. The only other example like this I'm aware of might be Ponte Vecchio, at a count of 47 pieces of silicon. But that's HPC, not consumer tier offering.

 

For lower-tier Navi 4, I assume they take the more conventional monolithic route?

 

6 hours ago, starsmine said:

The reticle limit of ASML machines capable of doing GAAFET is 400-something mm². As in, you CAN'T go bigger (unless you want to do redundant stuff like Cerebras).

For indication, 400mm2 would be comparable to AD103 (4080) at 379, GA104 (3070) at 392, and there isn't a recent AMD GPU in that ball park. NAVI31 (without MCD) is smaller at 304, and NAVI22 (6700) at 335. NAVI21 (6800+) is much bigger at 520. DG2-512 (ARC A700) is also there at 406.

 

It is also interesting seeing the different approaches the various companies use to get to bigger effective sizes.

 

Apple M2 Ultra is two dies of ~155 each.

Intel Sapphire Rapids is 4 dies of 400 each, a total of 1600. It isn't clear to me if EMIB used would count as extra silicon but there are 10 connections.

Genoa is up to 12 CCDs of 72 mm² each + a 397 mm² IOD, totalling 1261 mm².

 

AMD certainly is chopping things down to smaller pieces. 


I think it's more likely an experimental vision of a card that may or may not release, like a Threadripper among Radeon GPUs.


9 hours ago, AlTech said:

It's not just about money. It's about performance. At a certain point you run into the reticle limit where making a bigger GPU is no longer possible.

Usually you hit yield issues long before you hit issues with what fits on a reticle. Additionally, GPUs are not the largest chips out there; that honour still goes to high-end FPGAs as far as I know. But these FPGAs have the distinct advantage of being extremely repetitive and only seeing industrial and military use, meaning they get better yields than random digital logic, and they can cost an arm and a leg.

 

9 hours ago, AlTech said:

Nvidia keeps inching towards it. AMD's steering well clear of it and wants to figure out how to scale GPUs to be a lot bigger without costing a lot more or running into the laws of physics.

AMD is slowly edging towards the interconnect density problem, and the diagrams shown in the recent leaks and patent filings show they're already struggling with it to some degree. I sincerely hope they back off and start investing in alternative approaches again before they hit the brick wall multiple of their competitors did in the late 80s/early 90s.

 

9 hours ago, Forbidden Wafer said:

I mean, I'm all for chiplets but this doesn't seem to be worth it. Active interposers are expensive chips, the microsoldering is expensive too.
They are probably spending more on assembly than saving with higher yields/more configurability. 

Building up that stack is not too expensive in itself as a process; however, the yield issues that come with it are. But when I see this I get really worried about long-term reliability: the package substrate will most likely be a typical PCB material, while that CoW-L silicon bridge is (as you might have guessed) made out of silicon. The difference in LTCE will strain the solder balls and is quite likely to cause cracking over repeated cooling/heating cycles. So you're most likely in a situation where you'll want to keep the temperature of that GPU constant. Throw in the fact that you're now also putting mechanical strain on the backside of a thinned chip, where you tend to have chipping from the dicing operation, and you're in for an interesting reliability conundrum.
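To put a rough number on that CTE mismatch (using typical handbook values, nothing specific to this package): silicon expands at about 2.6 ppm/K while organic substrates sit around 14-17 ppm/K, so across a large package even a modest temperature swing asks the solder joints to absorb tens of microns of relative movement.

```python
# Back-of-the-envelope CTE mismatch across a large package.
# Typical handbook values, not anything specific to this design.
cte_silicon   = 2.6e-6   # per K
cte_substrate = 15e-6    # per K, organic laminate (roughly 14-17 ppm/K)
span_mm       = 30       # distance from package centre to an outer solder joint
delta_t       = 60       # K, e.g. an idle-to-load temperature swing

mismatch_um = span_mm * 1000 * (cte_substrate - cte_silicon) * delta_t
print(f"Relative displacement at the joint: ~{mismatch_um:.0f} um")   # ~22 um
```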

 

9 hours ago, Forbidden Wafer said:

I guess they were going that route because the current layout just doesn't scale well.
I had a link to their presentation on how they managed to pull off chiplets on a graphics card, and it wasn't beautiful.

Going to look for it.

The bigger issue I see is that they seem to be going for stacking, but this can seriously limit the power density you can achieve, making the same amount of die area less valuable.


12 hours ago, WereCat said:

MLID and a water mark? Must be legit ... 😄

The approach is certainly legit; the timeline is probably a bit of an "I don't think so". It could always have just been R&D engineering and not at all related to any soon-to-be-announced architectures. What is shown will happen; when is the key.


4 minutes ago, leadeater said:

The approach is certainly legit; the timeline is probably a bit of an "I don't think so". It could always have just been R&D engineering and not at all related to any soon-to-be-announced architectures. What is shown will happen; when is the key.

I've seen this concept around Zen 1 or Zen 1+ times and it has been a long time coming. But I don't know if we will see it so soon, despite Apple doing something similar and beating everyone else to the punch. 

I think it makes less sense to do on entry-level to mid-range cards than on high-end ones as well. 


27 minutes ago, WereCat said:

despite Apple doing something similar and beating everyone else to the punch. 

Apple's really isn't similar to this. It's actually less complicated packaging-wise than this, but still really expensive. Mx Ultra is a single stack on top of a passive interposer.

 

[Image: M1 Ultra package cross-section]

 

Apple is doing the left side below


Passive interposer (left) and active interposer (right).


1 minute ago, leadeater said:

Apple's really isn't similar to this. It's actually less complicated packaging-wise than this, but still really expensive. Mx Ultra is a single stack on top of a passive interposer.

 


 

Apple is doing the left side below


Sorry, my bad. I thought they were based on a similar design, without checking. 


Just now, WereCat said:

Sorry, my bad. I thought they were based on a similar design, without checking. 

Doesn't matter, any stacking and chip bonding is cool and we need more of it so we can find new and better ways. Apple's is still the outright best in use right now.


2 hours ago, porina said:

I've seen the rumours of no high-end AMD GPU next round, but not the why. If this is the "why", it would be an example that not everything AMD tries works out.

 

Just on the screenshot alone that's 10 logic pieces and 6 connectivity above substrate, which by itself is already in EPYC territory. It isn't clear how many more repetitions of this there might be in the other dimension. Even if you treat a loaded AID as a subunit, there must be quite some manufacturing (yield) risk to this complexity. The only other example like this I'm aware of might be Ponte Vecchio, at a count of 47 pieces of silicon. But that's HPC, not consumer tier offering.

 

For lower-tier Navi 4, I assume they take the more conventional monolithic route?

Yes. Navi 44 (replacement for N24 and N33) and N43 (N32 replacement) are expected to be monolithic dies.

 

There's no N42 or N41 because of this cancellation.

 

Speculation is that N44 will be a 6nm die of around 200 mm², and that N43 will be 3nm and slightly better than 7900 XTX performance at significantly reduced power, but neither has actually been leaked or rumoured.

 

2 hours ago, porina said:

For indication, 400mm2 would be comparable to AD103 (4080) at 379, GA104 (3070) at 392, and there isn't a recent AMD GPU in that ball park. NAVI31 (without MCD) is smaller at 304, and NAVI22 (6700) at 335. NAVI21 (6800+) is much bigger at 520. DG2-512 (ARC A700) is also there at 406.

AMD does have big chips but they're for datacenter and AI stuff.

 

The aim eventually for AMD is to be able to make Radeon GPUs the way they make Ryzen CPUs: a truly chiplet-based arch instead of the current GPU chiplet approach.

2 hours ago, porina said:

It is also interesting seeing the different approaches the various companies use to get to bigger effective sizes.

 

Apple M2 Ultra is two dies of ~155 each.

Intel Sapphire Rapids is 4 dies of 400 each, a total of 1600. It isn't clear to me if EMIB used would count as extra silicon but there are 10 connections.

Genoa is up to 12 CCDs of 72 mm² each + a 397 mm² IOD, totalling 1261 mm².

 

AMD certainly is chopping things down to smaller pieces. 

Because it is cheaper and more efficient to do so. Going bigger is more expensive on monolithic and in the future it will be technically challenging as well.

 

The reticle limit of a node isn't infinite, and it's easier to package small dies into a number of different configurations, e.g. using Zen CCDs from Ryzen all the way to Threadripper and Epyc whilst using mostly or exactly the same core design (except for Zen 4c and Zen 5c, which are different).
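The Zen reuse in concrete terms, using the well-known retail configurations (figures are illustrative):

```python
# The same 8-core Zen 4 CCD reused across product lines; only the package
# and I/O die change. Core counts are the well-known retail configurations.
CORES_PER_CCD = 8

products = {
    "Ryzen 9 7950X (desktop)":      2,
    "Threadripper 7980X (HEDT)":    8,
    "EPYC 9654 'Genoa' (server)":  12,
}

for name, ccds in products.items():
    print(f"{name}: {ccds} CCDs -> {ccds * CORES_PER_CCD} cores")
```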


1 hour ago, leadeater said:

Doesn't matter, any stacking and chip bonding is cool and we need more of it so we can find new and better ways. Apple's is still the outright best in use right now.

Might be a topic for another thread: how does it compare to EMIB, for example? 

 

1 hour ago, AlTech said:

The aim eventually for AMD is to be able to make Radeon GPUs the way they make Ryzen CPUs: a truly chiplet-based arch instead of the current GPU chiplet approach.

Because it is cheaper and more efficient to do so. Going bigger is more expensive on monolithic and in the future it will be technically challenging as well.

I think many have long wondered when we might get GPU core based chiplets as a way to scale higher, although being realistic I think end user cost considerations have overtaken that. Even if we can make effectively bigger GPUs, I don't think most would ever pay for it. For the moment, IMO chiplet based GPU makes most sense at the high end only, nv 80 tier equivalent or higher. Once the packaging matures it could push down the stack further.

 

The Apple Ultra/Intel Sapphire Rapids way of doing it with fewer, bigger dies feels, to me, like the more intuitive solution. Current and previous __102 dies have been just over 600 mm². Breaking that down into two 300+ mm² dies should make it much more comfortable while not significantly increasing packaging complexity. The same die used singly could serve the mid-range mainstream too (up to nv 60, maybe 70 tier). For TSMC N5 class I estimate the yields to be 56% at 600 mm², vs 75% at 300 mm². This only considers defects, and not binning. Usage of cut-down dies would effectively increase the yield, so it won't be as bad as it looks.
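Those percentages are consistent with a simple Poisson defect model, Y = exp(-A·D0), assuming a defect density of roughly 0.1 defects/cm² for an N5-class node (my assumption, chosen to match the estimates above):

```python
from math import exp

# Simple Poisson defect-yield model: Y = exp(-A * D0).
# D0 ~ 0.1 defects/cm^2 is an assumed figure for a mature N5-class node;
# it reproduces the 56% / 75% estimates above to within a couple of points.
D0 = 0.1  # defects per cm^2

for area_mm2 in (600, 300, 37):          # big die, split die, a Navi 31 MCD
    yield_est = exp(-(area_mm2 / 100) * D0)
    print(f"{area_mm2:4d} mm^2 -> {yield_est:.0%} defect-limited yield")
```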

 

The solution shown in OP feels to me like breaking it down too far. Individual silicon pieces might be close to 100% yield (current MCD I estimate at >96%), but the packaging complexity is beyond anything we've seen so far, at least for consumer space offerings and even most enterprise.


1 hour ago, porina said:

Might be a topic for another thread: how does it compare to EMIB, for example? 

Would be another discussion, yeah; I honestly don't know enough. I just spent a little time looking, and they seem to be very similar, with TSMC's/Apple's CoWoS being the largest and highest-bandwidth application actually in use. I think EMIB might actually have the edge in total possible bandwidth, but both are updated every year or so and getting better, so which is best might change based on the point in time you go looking. Intel is investing the most money in the technology, if only just a little bit more than TSMC.


47 minutes ago, leadeater said:

Would be another discussion, yeah; I honestly don't know enough. I just spent a little time looking, and they seem to be very similar, with TSMC's/Apple's CoWoS being the largest and highest-bandwidth application actually in use. I think EMIB might actually have the edge in total possible bandwidth, but both are updated every year or so and getting better, so which is best might change based on the point in time you go looking. Intel is investing the most money in the technology, if only just a little bit more than TSMC.

The question is what do you understand as EMIB at this point? I've seen a few different diagrams that were labelled with EMIB, but few of them actually contain dimensions and materials. The latter matters a lot, because if you are working on silicon substrates and are placing silicon chips the issues become relatively easy (see Apple), and it does introduce interesting evolutionary paths (e.g. plasma-activated low-T wafer-to-wafer bonding with annealing steps to bond the metal pads together). Meanwhile, if you use something else (like I think AMD was suggesting?) the problems start to compound, but you can avoid using additional wafers and avoid figuring out TSV processes - much to the disappointment of your favourite DRIE/bosch process equipment manufacturer.

 

But in terms of EMIB, if the diagrams are correct in terms of layer thickness versus thickness of the copper (and assuming standard conductor thicknesses), EMIB would easily place the die in the 100 µm thickness range, which involves very aggressive die thinning with the associated yield and process design problems, which typically mean you could just as well have gone for TSVs. So I'm a bit curious as to what the actual situation is right now.


Not exactly surprising; Nvidia's been doing the same thing for years. A chiplet GPU would be a fascinating idea, though.


18 hours ago, TempestCatto said:

It's funny how once you've built yourself a competent gaming pc that nothing new excites you as much until years later when you can financially justify spending that kind of money again. (holy run on sentence batman ._. )

That's where I am also. Combined with doing less and less gaming, I can pretty easily keep upgrading my CPU and GPU as they become necessary until bottlenecks force me to change platforms.

