General inquiry: if CPUs used HBM as system RAM without the ability to upgrade, would you buy it?

Desktop platforms with Integrated RAM, would you buy it if there was a substantial benefit to it?  

13 members have voted

  1. Desktop platforms with Integrated RAM, would you buy it if there was a substantial benefit to it?

    • Yes
      6
    • No
      7


Assume a usable amount of RAM, if that even translates 1:1; let's say a minimum of 16GB.

 

If AMD or Intel figured out a way to effectively make 3D v-cache obsolete by just having the system RAM integrated onto the substrate, creating an extreme form of L4 cache, would you buy it, knowing you're locked into that amount of RAM for the life of the CPU?

 

The basic idea I had would be a larger substrate, something closer to TR4 than AM5, that would also house the DRAM. That would eliminate the need for DIMM slots entirely.

 

For some speculation, let's say the performance gains were on par with the gap between non-3D and 3D, so upwards of double the performance in some scenarios.

 

Personally, I'd be willing to compromise on RAM upgradeability for performance. It's something we've seen with laptops over the last few years, and it's also apparent in the handheld market. Some of the issues with capped system RAM could, in theory, be mitigated by having storage physically close on the same substrate too, with how fast PCIe storage is nowadays.

 

Ryzen 7950x3D Direct Die NH-D15, CCD1 disabled

RTX 4090 @133%/+230/+500

Builder/Enthusiast/Overclocker since 2012  //  Professional IT since 2017


6 minutes ago, Agall said:

flash NAND

Huh?

 

Guess you mean DRAM... so essentially an Apple Mx.

Would depend on the performance benefit, since I strongly doubt it'd be 2x in most applications, and most importantly on the cost.

 

I rarely upgrade my machines, my initial RAM amount tends to be sufficient - but the entire thing would revolve around pricing. If it's the same price as current RAM that'd be easier. If it's "pay 5-10 times the going price of sticks" à la Apple, go to hell.

F@H
Desktop: i9-13900K, ASUS Z790-E, 64GB DDR5-6000 CL36, RTX3080, 2TB MP600 Pro XT, 2TB SX8200Pro, 2x16TB Ironwolf RAID0, Corsair HX1200, Antec Vortex 360 AIO, Thermaltake Versa H25 TG, Samsung 4K curved 49" TV, 23" secondary, Mountain Everest Max

Mobile SFF rig: i9-9900K, Noctua NH-L9i, Asrock Z390 Phantom ITX-AC, 32GB, GTX1070, 2x1TB SX8200Pro RAID0, 2x5TB 2.5" HDD RAID0, Athena 500W Flex (Noctua fan), Custom 4.7l 3D printed case

 

Asus Zenbook UM325UA, Ryzen 7 5700u, 16GB, 1TB, OLED

 

GPD Win 2


Why not just make a massive SoC at that point and plop the GPU chip there too?

 

This makes sense for some very specific workloads. LLMs I believe would benefit massively from the setup I mentioned above.

mY sYsTeM iS Not pErfoRmInG aS gOOd As I sAW oN yOuTuBe. WhA t IS a GoOd FaN CuRVe??!!? wHat aRe tEh GoOd OvERclok SeTTinGS FoR My CaRd??  HoW CaN I foRcE my GpU to uSe 1o0%? BuT WiLL i HaVE Bo0tllEnEcKs? RyZEN dOeS NoT peRfORm BetTer wItH HiGhER sPEED RaM!!dId i WiN teH SiLiCON LotTerrYyOu ShoUlD dEsHrOuD uR GPUmy SYstEm iS UNDerPerforMiNg iN WarzONEcan mY Pc Run WiNdOwS 11 ?woUld BaKInG MY GRaPHics card fIX it? MultimETeR TeSTiNG!! aMd'S GpU DrIvErS aRe as goOD aS NviDia's YOU SHoUlD oVERCloCk yOUR ramS To 5000C18


Apple has shown you can do some pretty incredible things with their 512-bit and 1024-bit wide memory bus chips at crazy power efficiencies. It's not the same as traditional cache and 3D v-cache, but there are multi-generational uplifts to be had in certain applications just by having insane memory bandwidth.
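For a rough sense of scale, here's a back-of-envelope sketch; the bus widths and transfer rates below are commonly cited figures, and sustained real-world bandwidth always lands below these theoretical peaks:

```python
# Peak memory bandwidth = bus width (bytes) x transfer rate (MT/s).
# Theoretical peaks only; sustained bandwidth is always lower.

def peak_bandwidth_gbs(bus_width_bits: int, transfer_rate_mts: int) -> float:
    return (bus_width_bits / 8) * transfer_rate_mts / 1000

configs = {
    "Dual-channel DDR5-6000 (desktop)":      (128, 6000),
    "512-bit LPDDR5-6400 (M1 Max class)":    (512, 6400),
    "1024-bit LPDDR5-6400 (M1 Ultra class)": (1024, 6400),
}

for name, (width, rate) in configs.items():
    print(f"{name}: {peak_bandwidth_gbs(width, rate):.0f} GB/s")
# -> ~96, ~410, ~819 GB/s respectively
```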

 

AMD's Strix Halo is rumoured to come with a 256-bit memory bus, and I can totally see myself replacing a bulky desktop system with a sufficiently strong APU, provided the RAM isn't sold at a crazy premium when the platform launches.

 

I'm also hoping to see some interesting things with Intel's Lunar Lake MX.


I feel like this may become a thing for embedded systems / mini computers at some point, but SKU count really makes this infeasible for the higher-power options.


24 minutes ago, CyberneticTitan said:

Apple has shown you can do some pretty incredible things with their 512-bit and 1024-bit wide memory bus chips at crazy power efficiencies. It's not the same as traditional cache and 3D v-cache, but there are multi-generational uplifts to be had in certain applications just by having insane memory bandwidth.

Part of how they achieve that is by NOT having the RAM socketed. When you're trying to keep latency low and efficiency high, the traces to the socket and the resistance of the socket itself become a factor.

Router:  Intel N100 (pfSense) WiFi6: Zyxel NWA210AX (1.7Gbit peak at 160Mhz)
WiFi5: Ubiquiti NanoHD OpenWRT (~500Mbit at 80Mhz) Switches: Netgear MS510TXUP, MS510TXPP, GS110EMX
ISPs: Zen Full Fibre 900 (~930Mbit down, 115Mbit up) + Three 5G (~800Mbit down, 115Mbit up)
Upgrading Laptop/Desktop CNVIo WiFi 5 cards to PCIe WiFi6e/7


Wouldn't be surprised if we see LPDDR5 with CAMM2 modules for high-end desktop motherboards this year or next.

People never go out of business.


39 minutes ago, Kilrah said:

Huh?

 

Guess you mean DRAM... so essentially an Apple Mx.

Would depend on the performance benefit, since I strongly doubt it'd be 2x in most applications, and most importantly on the cost.

 

I rarely upgrade my machines, my initial RAM amount tends to be sufficient - but the entire thing would revolve around pricing. If it's the same price as current RAM that'd be easier. If it's "pay 5-10 times the going price of sticks" à la Apple, go to hell.

Yeah, you're right, I corrected it, typo on my end.

 

Same idea too, having a monolithic platform, since it's not like drop-in CPU upgrades are common outside of AM4, and there mostly because of 3D v-cache or jumping from Ryzen 1st/2nd gen to 5th.

 

Intel's NUC got close to this, with a chassis designed for a dGPU. They were only one step away from doing this, since they were already selling the motherboard+CPU+RAM as a single package.

 

34 minutes ago, manikyath said:

I feel like this may become a thing for embedded systems / mini computers at some point, but SKU count really makes this infeasible for the higher-power options.

If HBM wasn't so niche, we might see this. Otherwise, soldering normal DRAM onto the substrate of a CPU right next to a high-power die probably wouldn't end very well.

 

37 minutes ago, CyberneticTitan said:

Apple has shown you can do some pretty incredible things with their 512-bit and 1024-bit wide memory bus chips at crazy power efficiencies. It's not the same as traditional cache and 3D v-cache, but there are multi-generational uplifts to be had in certain applications just by having insane memory bandwidth.

 

AMD's Strix Halo is rumoured to come with a 256-bit memory bus, and I can totally see myself replacing a bulky desktop system with a sufficiently strong APU, provided the RAM isn't sold at a crazy premium when the platform launches.

 

I'm also hoping to see some interesting things with Intel's Lunar Lake MX.

Imagine a console-style APU but with HBM. RDNA3 opened up the rabbit hole of MCM architecture for an SoC, where there could be discrete GPU dies on the same substrate as the CPU.

 

I think it's a viable option, but a GPU is far more useful to have upgradeable than the CPU or RAM, which is where my question mostly lies. I don't imagine I'm alone in the idea, so I'm testing the waters on that assumption.

Ryzen 7950x3D Direct Die NH-D15, CCD1 disabled

RTX 4090 @133%/+230/+500

Builder/Enthusiast/Overclocker since 2012  //  Professional IT since 2017


I'd need to see it, but my concern is that Intel or AMD would make configurations that just don't make any sense in order to upsell - just like what's happening with VRAM for Nvidia cards.

 

Imagine that the Ryzen 5 10400 only comes with 8GB of this "HBM" or whatever we call it, so even though the Ryzen 5 10400 has the performance you want when it isn't running out of RAM, you'll need to spend extra on the Ryzen 5 10600X to get the full 16GB you need. And then the Ryzen 7 10700X still only has 16GB. So if you want 32GB, you need to step up to the Ryzen 9 10900X. And if you want 64GB, you have to go to the Ryzen 9 10950X. And then there's the Ryzen 9 10950XE that you need to buy if you want 128GB.

 

I just don't trust companies not to play these kinds of games. I think it's better if AMD and Intel have to rely on third parties to produce RAM for their chips - for the sake of consumers.


11 minutes ago, Agall said:

Imagine a console-style APU but with HBM. RDNA3 opened up the rabbit hole of MCM architecture for an SoC, where there could be discrete GPU dies on the same substrate as the CPU.

 

I think it's a viable option, but a GPU is far more useful to have upgradeable than the CPU or RAM, which is where my question mostly lies. I don't imagine I'm alone in the idea, so I'm testing the waters on that assumption.

Beyond a certain point it's not as useful as you'd think, as cooling both with a single heatsink means neither can be clocked as high as when they're cooled independently. The same applies to running enough power to a single point on the motherboard: you start to run out of space for the data lines because the power plane has to be so huge.

I'm really not interested in needing a 360-480mm AIO to cool my PC when having two air coolers, one for the CPU and one for the GPU, is far more reliable and flexible.

Router:  Intel N100 (pfSense) WiFi6: Zyxel NWA210AX (1.7Gbit peak at 160Mhz)
WiFi5: Ubiquiti NanoHD OpenWRT (~500Mbit at 80Mhz) Switches: Netgear MS510TXUP, MS510TXPP, GS110EMX
ISPs: Zen Full Fibre 900 (~930Mbit down, 115Mbit up) + Three 5G (~800Mbit down, 115Mbit up)
Upgrading Laptop/Desktop CNVIo WiFi 5 cards to PCIe WiFi6e/7


I would like to see processors with HBM near the die, but still with at least ONE DDR4 or DDR5 memory controller to add slower (compared to HBM) memory.

 

HBM has the benefit of a WIDE bus: you get 1024 bits per HBM stack, so one or two stacks could work great for instructions like AVX-512, and some things could be optimized to work with 512/1024/2048-bit chunks, filling cache lines in multiples of those widths from HBM instead of slower DDR4/DDR5.
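To make those chunk sizes concrete, a minimal sketch, assuming a 64-byte x86 cache line and HBM2E-class pin speeds (illustrative figures, not a specific product):

```python
# Illustrative figures: 64-byte x86 cache lines, HBM2E-class 3.2 GT/s pins.
CACHE_LINE_BYTES = 64    # 512 bits, i.e. exactly one AVX-512 register
STACK_BUS_BITS = 1024    # interface width per HBM stack
PIN_RATE_GTS = 3.2       # transfers per second per pin (HBM2E-class)

bytes_per_beat = STACK_BUS_BITS // 8
print(f"{bytes_per_beat} B per beat = "
      f"{bytes_per_beat // CACHE_LINE_BYTES} cache lines filled at once")  # 2

# Peak bandwidth per stack: bus width in bytes x pin rate.
print(f"~{bytes_per_beat * PIN_RATE_GTS:.0f} GB/s per stack")  # ~410 GB/s
```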

 

V-cache would not be obsoleted by HBM or other near-die RAM, because the 64 MB of extra cache AMD adds is basically super-fast SRAM, considerably faster than memory... and it's tightly integrated with lots of things inside the CPU that require very low latency. HBM would not be able to replace it.


I've long wondered about two scenarios:

Console-like designs with a pool of unified high-bandwidth memory feeding both CPU and GPU. Kinda like a super APU that can challenge mid-range dGPUs, not just the lowest end like current APUs do.

Large L4 cache design. By large, think GB scale. It won't replace system RAM. To me this is for when AMD's 3D cache is still way too small. Latency might be higher than L3, but it's more about preventing trips to RAM and keeping data on the socket, hopefully providing a unified view for multiple CCDs.
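For intuition on where such an L4 would sit, a rough sketch with order-of-magnitude figures; these are ballpark numbers, not vendor specs, and the hypothetical L4 row is my own guess:

```python
# Ballpark latencies/sizes only, to show where a GB-scale L4 would slot in.
hierarchy = [
    ("L1",                 "32-64 KB/core", "~1 ns"),
    ("L2",                 "~1 MB/core",    "~3-5 ns"),
    ("L3 (w/ 3D V-Cache)", "~96-128 MB",    "~10-15 ns"),
    ("hypothetical L4",    "1-4 GB",        "~30-50 ns (guess)"),
    ("DRAM",               "tens of GB",    "~70-100 ns"),
]
for level, size, latency in hierarchy:
    print(f"{level:>20}  {size:<14} {latency}")
```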

 

1 hour ago, mariushm said:

HBM has the benefit of a WIDE bus: you get 1024 bits per HBM stack, so one or two stacks could work great for instructions like AVX-512, and some things could be optimized to work with 512/1024/2048-bit chunks, filling cache lines in multiples of those widths from HBM instead of slower DDR4/DDR5.

AMD would have to do something radical to enable this though. Infinity Fabric bandwidth can't support it. I think they'd have to bin IF and go more direct chip-to-chip, similar to what Apple does with their Ultra CPUs, and whatever NV is doing with the biggest Blackwell.

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Alienware AW3225QF (32" 240 Hz OLED)
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, iiyama ProLite XU2793QSU-B6 (27" 1440p 100 Hz)
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


22 hours ago, Agall said:

If AMD or Intel figured out a way to effectively make 3D v-cache obsolete by just having the system RAM integrated onto the substrate, creating an extreme form of L4 cache, would you buy it, knowing you're locked into that amount of RAM for the life of the CPU?

Hell yeah, the current bandwidth in consumer systems is ridiculous. Keeping 16+ cores fed with just a dual-channel setup is stupid; give me large HBM or just regular DRAM with a wider bus, in a reasonable amount (256~512GB) for a reasonable price, and I'm sold.

 

I don't care about upgradeability since I'm often stuck at the max the platform offers. Currently still at 128GB with my AM4 setup; I might make the jump to DDR5 when 64GB sticks become a thing so I can double that amount.

22 hours ago, Levent said:

Why not just make a massive SoC at that point and plop the GPU chip there too?

 

This makes sense for some very specific workloads. LLMs I believe would benefit massively from the setup I mentioned above.

Just give me a mini Grace Hopper workstation and I'd be a happy guy.

21 hours ago, mariushm said:

I would like to see processors with HBM near the die, but still with at least ONE DDR4 or DDR5 memory controller to add slower (compared to HBM) memory.

That's literally the Xeon MAX lineup lol

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga


On 5/17/2024 at 4:14 PM, porina said:

Large L4 cache design. By large, think GB scale. It won't replace system RAM.

Sort of what Intel tried to do on the consumer side with Optane using PCIe storage.

 

I wouldn't be surprised if we started getting soldered DRAM on motherboards once speeds get high enough that the extra distance of a modular slot just isn't acceptable anymore, aka the same rationale as soldering DRAM in laptops. Then they're just one step from eliminating the socket and going direct-die cooling with a sufficient air cooler.

 

On 5/18/2024 at 11:57 AM, igormp said:

large HBM or just regular DRAM with a wider bus, in a reasonable amount (256~512GB) for a reasonable price, and I'm sold.

I don't care about upgradeability since I'm often stuck at the max the platform offers. Currently still at 128GB with my AM4 setup; I might make the jump to DDR5 when 64GB sticks become a thing so I can double that amount.

Just give me a mini Grace Hopper workstation and I'd be a happy guy.

That's literally the Xeon MAX lineup lol

Not sure if you've seen it yet, but the Strix Point APU render:

 

AMD Strix Halo Render Reveals Powerful Ryzen APU Design: 16 Zen 5 Cores, 40 RDNA 3+ GPU Cores, 64 MB L3 Cache

 

AMD Strix Point "Ryzen" APU With 12 Zen 5 Cores On Par With 8-Core Ryzen 7 7700X "Zen 4" In Early Blender Leak (wccftech.com)

 

MCM APUs are a little HBM away from being absurdly good. This is what I hoped AMD would do with APUs: make a larger I/O die with a full-fledged dGPU.

 

As good as AMD has been, I wouldn't be surprised if their board won't consider this an option until a sufficient drop in dGPU sales, given that they're still a publicly traded company.

Ryzen 7950x3D Direct Die NH-D15, CCD1 disabled

RTX 4090 @133%/+230/+500

Builder/Enthusiast/Overclocker since 2012  //  Professional IT since 2017


On 5/17/2024 at 2:27 PM, mariushm said:

I would like to see processors with HBM near the die, but still with at least ONE DDR4 or DDR5 memory controller to add slower (compared to HBM) memory.

 

HBM has the benefit of a WIDE bus: you get 1024 bits per HBM stack, so one or two stacks could work great for instructions like AVX-512, and some things could be optimized to work with 512/1024/2048-bit chunks, filling cache lines in multiples of those widths from HBM instead of slower DDR4/DDR5.

 

V-cache would not be obsoleted by HBM or other near-die RAM, because the 64 MB of extra cache AMD adds is basically super-fast SRAM, considerably faster than memory... and it's tightly integrated with lots of things inside the CPU that require very low latency. HBM would not be able to replace it.

I've sort of wondered about the purpose of 3D v-cache, since they could otherwise just make the die larger to add that cache. I've assumed it's because a larger die requires a larger substrate to make all the connections to the motherboard, so stacking the dies is worth the effective clock speed reduction that comes with adding a +25C insulator on top of the CPU cores. Or it's simply a cost calculation with the yield rates.

 

It's not like there isn't physically more space on AM5's substrate to accommodate a few extra millimeters of silicon. Ryzen CCDs are already quite small in comparison to Intel's monolithic dies, but I imagine there's a lot of math and estimation involved in calculating failure rates and impurities relative to die size.

Ryzen 7950x3D Direct Die NH-D15, CCD1 disabled

RTX 4090 @133%/+230/+500

Builder/Enthusiast/Overclocker since 2012  //  Professional IT since 2017


1 hour ago, Agall said:

Sort of what Intel tried to do on the consumer side with Optane using PCIe storage.

No. There's a tradeoff between speed and size, and consumer Optane Memory is too far toward the size end and too low on speed. Conversely, I feel AMD's 3D cache is faster and smaller than I'd like.

 

The nearest consumer product example to what I'm thinking of would be Broadwell-C. At the time you had up to quad-core CPUs with 4MB of L3 cache, but slapped next to the CPU on the substrate was 128MB of eDRAM as L4. Best-case official RAM bandwidth at the time would be ~25GB/s with dual-channel DDR3-1600. The eDRAM was nominally 50GB/s, and latency was likely much lower. Given that today we're reaching low hundreds of MB of L3 per CPU, an order of magnitude bigger would take us into the GB range. Bandwidth would have to be faster than RAM, so nothing PCIe-based would meet that.
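A quick sanity check on those Broadwell-C figures, using the numbers from the paragraph above:

```python
# Dual-channel DDR3-1600: 2 channels x 8 bytes per transfer x 1600 MT/s.
ddr3_dual_gbs = 2 * 8 * 1600 / 1000
print(f"DDR3-1600 dual channel: {ddr3_dual_gbs:.1f} GB/s")       # ~25.6

# The 128 MB eDRAM L4 was nominally ~50 GB/s:
print(f"eDRAM bandwidth advantage: ~{50 / ddr3_dual_gbs:.1f}x")  # ~2x
```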

 

For consumer-tier CPUs, I don't feel HBM would be a good mix, as it is low-clocked but very wide and thus would likely come with a latency penalty. It makes more sense for server CPUs with many more cores, and for GPUs, which are inherently very wide.

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Alienware AW3225QF (32" 240 Hz OLED)
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, iiyama ProLite XU2793QSU-B6 (27" 1440p 100 Hz)
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


2 hours ago, Agall said:

Not sure if you've seen it yet, but the Strix Point APU render:

 

I had not seen that, but I was aware of Strix Halo. I don't think it will come with large enough RAM amounts though, so it's not interesting for me desktop-wise. However, if it's available with 32~64GB of RAM in an ultrathin format, it'd make for a really great laptop IMO.

 

2 hours ago, Agall said:

what I hoped AMD would do with APUs: make a larger I/O die with a full-fledged dGPU.

That's something I don't care much about if it's not from Nvidia (because of work), and for my laptop I'd rather have something that's good enough and gives me more battery life instead.

 

2 hours ago, Agall said:

since they could otherwise just make the die larger to add that cache.

You'd be increasing latency and power consumption that way. Adding that cache on the regular 2D plane would increase the die area by quite a lot.

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga


3 hours ago, Agall said:

I've sort of wondered about the purpose of 3D v-cache, since they could otherwise just make the die larger to add that cache. I've assumed it's because a larger die requires a larger substrate to make all the connections to the motherboard, so stacking the dies is worth the effective clock speed reduction that comes with adding a +25C insulator on top of the CPU cores. Or it's simply a cost calculation with the yield rates.

 

It's not like there isn't physically more space on AM5's substrate to accommodate a few extra millimeters of silicon. Ryzen CCDs are already quite small in comparison to Intel's monolithic dies, but I imagine there's a lot of math and estimation involved in calculating failure rates and impurities relative to die size.

The v-cache die is quite large; a quick Google search lists it as 6mm by 6mm (36mm²), while the Zen 3 CCD is approximately 83.7mm² (11.270 x 7.430mm). So it's almost half the area of the actual CPU die, and it would increase the size of the CPU die significantly if incorporated.

 

See the picture below, which shows the internals of a Zen 3 CPU: the magenta/pink area in the middle is the 32 MB L3 cache, and the yellow area between the L3 and the cores is the 4 MB of L2 cache. That's how much die space is used by SRAM cache.

 

When they make silicon chips on wafers (most commonly 300mm / ~12" in diameter), the process is not perfect; you get a number of flaws per square inch. The processes at TSMC are quite good and the number of flaws is relatively small, but it's still better for AMD to have smaller dies, because you'll get more fully working dies out of a wafer. And even on dies that are hit by a flaw, some can be recovered by disabling the flawed cores, if the flaw is in an area occupied by a core (the edges of the rectangle in the picture below). If the flaw is in the cache memory, there's not much you can do, other than maybe trying to sell a CPU with only half the cache enabled.

 

For example, you could have 11 x 7 mm CCDs now and get 400 CPUs out of a wafer with 20 of them broken due to random flaws... or you could have 20 x 10 mm dies that incorporate the v-cache and get maybe 250 CPUs, of which probably 40-50 would have flaws. So you end up with only 200 working chips, and that could be less profitable.
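That example maps neatly onto the classic Poisson yield model; here's a minimal sketch calibrated to the numbers above (the defect density is implied by the example, not a real TSMC figure):

```python
import math

# Poisson yield model: fraction of defect-free dies ~= exp(-D * A),
# where D = defect density (defects/cm^2) and A = die area (cm^2).

# Calibrate D from the example above: 400 dies, 20 flawed -> 95% yield.
small_area = 1.1 * 0.7                 # 11 x 7 mm CCD, in cm^2
D = -math.log(0.95) / small_area       # implied defect density
print(f"Implied defect density: {D:.3f}/cm^2")

# Same D on a hypothetical 20 x 10 mm die that folds in the v-cache:
big_area = 2.0 * 1.0
print(f"Big-die yield: {math.exp(-D * big_area):.1%}")  # ~87%, down from 95%
```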

 

AMD also reuses the CCDs for the Threadripper and Epyc lines of processors; they bin them and the best get used there as well, and those are more profitable.

 

You may want to watch this video as well to understand why it makes more sense to focus on small chiplets.

[Image: Zen 3 CCD die shot, with the 32 MB L3 cache in the middle and the 4 MB L2 cache between it and the cores]


in short: Yes, give me unified anything.

 

of course that would mean major design challenges and go against the pc enthusiast mantra of "but i wanna upgrade my pc every 2 months" so it's probably not gonna happen.

The direction tells you... the direction

-Scott Manley, 2021

 

Software used:

Corsair Link (Anime Edition) 

MSI Afterburner 

OpenRGB

Lively Wallpaper 

OBS Studio

Shutter Encoder

Avidemux

FSResizer

Audacity 

VLC

WMP

GIMP

HWiNFO64

Paint

3D Paint

GitHub Desktop 

Superposition 

Prime95

Aida64

GPUZ

CPUZ

Generic Logviewer

 

 

 


14 hours ago, Mark Kaine said:

in short: Yes, give me unified anything.

 

of course that would mean major design challenges and go against the pc enthusiast mantra of "but i wanna upgrade my pc every 2 months" so it's probably not gonna happen.

The only major disadvantage or objection I can think of is for troubleshooting and repair, something that modularity assists with. That's assuming one has the spare parts or capital to take advantage of that modularity, which I don't think is the case for most people.

 

21 hours ago, mariushm said:

If the flaw is in the cache memory, there's not much you can do, other than maybe trying to sell a CPU with only half the cache enabled.

 

Something Intel probably does, given the way their binning scheme works on 12th-14th gen: each P-core or E-core cluster disabled also foregoes 3MB of L3.
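That lines up with the published 13th-gen specs; a minimal sketch, using the well-known 13600K/13700K/13900K core configurations:

```python
# On Raptor Lake, each P-core and each 4x E-core cluster carries a 3 MB L3
# slice, so disabling one for binning also foregoes its share of L3.
L3_SLICE_MB = 3

def l3_mb(p_cores: int, e_cores: int) -> int:
    return (p_cores + e_cores // 4) * L3_SLICE_MB

for name, p, e in [("i9-13900K", 8, 16), ("i7-13700K", 8, 8), ("i5-13600K", 6, 8)]:
    print(f"{name}: {l3_mb(p, e)} MB L3")
# -> 36, 30, 24 MB, matching the spec sheets
```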

 

I agree with the rest, and it's only getting smaller with Zen 4c's chiplet size.

 

There are concerns I have with 3D v-cache though, since it's effectively a +25C insulator on the die. Considering we know Intel doesn't do long-term heavy-load testing on their CPUs (from GN's interview with an Intel thermodynamics engineer), I imagine AMD hasn't with 3D v-cache CPUs either. The bonding process for the 3D v-cache die is at least seamless enough that I couldn't see or feel any difference between the CCDs, nor did I damage it when delidding my 7950X3D.

 

The 5800X3D has only been around for a little over two years too, so if we were going to start seeing failures in the design, I imagine we'd be seeing them soon. The indium solder is a lot more ductile than I expected, which I imagine does a good job of handling the CCD's thermal cycling over time without excessive cracking.

 

The thermal limitation of 3D v-cache is real though, but at least with a direct-die mod, I now run into a TDP limit before anything else, since the 7950X3D will draw ~155W at most. More than enough performance, especially by perf/watt metrics, but far off the 250W capability of the 7950X. Although the 7950X I also have is TDP-locked to 105W, because you're not getting much performance past that at stock.

Ryzen 7950x3D Direct Die NH-D15, CCD1 disabled

RTX 4090 @133%/+230/+500

Builder/Enthusiast/Overclocker since 2012  //  Professional IT since 2017

