M3 MacBook Pro reviews: 8GB of RAM on a $1600 laptop is criticised heavily

44 minutes ago, igormp said:

I'd say it's likely to happen within 5~7 years, which is relevant given how long people keep their Mac devices.

There are a gazillion Macs with soldered SSDs and 8GB of RAM out there that are significantly older than 7 years.

All 8GB Touch Bar models, which started back in 2016, and almost the full Retina family from the end of 2013 onwards, so 10 years at this point.


14 minutes ago, Dracarris said:

There are a gazillion Macs with soldered SSDs and 8GB of RAM out there that are significantly older than 7 years.

All 8GB Touch Bar models, which started back in 2016, and almost the full Retina family from the end of 2013 onwards, so 10 years at this point.

And I did see some fail, although that's not a really common failure (Touch Bar models have far worse problems that happen first, like flexgate or a plain dead logic board).


7 hours ago, Dracarris said:

So a MacBook, which can be carried anywhere and away from a power source, all of a sudden becomes a desktop PC if that favors your price comparison? Interesting take indeed!

You obviously have never used a MacBook and don't know what differentiates them from laptops that are a bit above garbage-tier Acer, and why "professionals" keep using and paying for them. Just because a machine can run a browser fluently in no way means that it can provide a good overall experience and the same productivity boost as a much more expensive machine, whether that is a MacBook or an expensive machine from Windows land with comparable properties.

It's really, really stupid to take a "better" laptop over an ergonomic workspace with a dedicated keyboard, mouse and monitor. If your MacBook sits on the very same desk at home for 80% of the time and you did not buy peripherals for this workspace, you made an extremely poor buying decision.


1 hour ago, HenrySalayne said:

It's really, really stupid to take a "better" laptop over an ergonomic workspace with a dedicated keyboard, mouse and monitor. If your MacBook sits on the very same desk at home for 80% of the time and you did not buy peripherals for this workspace, you made an extremely poor buying decision.

You could achieve both: buy a MacBook Air. Unless we're going to start saying a MacBook Air is not objectively good in all the metrics being compared; you can get it with 16GB of memory as well as the extra peripherals you mention. We can stay within the Apple product family for laptops in this discussion and still easily show why 8GB is simply bad value for such a product.

 

Never ceases to amaze me how willing people can be to accept being nickel-and-dimed so needlessly. For all the arguments about how Apple devices are premium, high-quality devices befitting a higher asking price and value, with high finish quality on the chassis, large batteries despite being power efficient, premium screens, blah blah, it seems to stop at system memory. Why? Are Apple customers not deserving of premium system RAM capacities on their premium laptops with their premium features, which they are often so insistent cannot be found anywhere else, or at least not for cheaper? Weird I say, weird.


2 hours ago, HenrySalayne said:

It's really, really stupid to take a "better" laptop over an ergonomic workspace with a dedicated keyboard, mouse and monitor. If your MacBook sits on the very same desk at home for 80% of the time and you did not buy peripherals for this workspace, you made an extremely poor buying decision.

Yes, because everyone buying a MacBook is actually stupid and uses it 80% at home, but wants it for the logo on the back and not at all for use while commuting/travelling.

1 hour ago, leadeater said:

Are Apple customers not deserving of premium system RAM capacities on their premium laptops with their premium features, which they are often so insistent cannot be found anywhere else, or at least not for cheaper? Weird I say, weird.

Apple customers deserve a premium user experience, which in so many cases, as demonstrated again and again by user reports, including in this very thread, apparently does not depend on having more than 8GB of system memory available. Weird, right? Yes, for people stuck in Windows-land, where user experience seems to be determined solely by big numbers on the spec sheet/box, and where the OS apparently requires an ever-growing amount of resources as time goes on, as you otherwise end up with an unresponsive potato. And where a different OS that manages resources a bit more efficiently, and therefore can do with less, is outside the realm of imagination, considered a glitch in the matrix. Weird I say, weird.

 

My M1 Pro has 16GB of RAM and I paid over 2k for it. What a loser I must be. Do I say that 8GB is sufficient for everyone and for all the future, do I say it is ample? No. Do I say that an 8GB machine is unusable and "trash", as so many are parroting here? No, because that's utter horseshiet.


20 hours ago, Dracarris said:

There are a gazillion Macs with soldered SSDs and 8GB of RAM out there that are significantly older than 7 years.

All 8GB Touch Bar models, which started back in 2016, and almost the full Retina family from the end of 2013 onwards, so 10 years at this point.

Exactly, and those 2016 machines are still viable precisely because they have 8GB of RAM instead of 4GB. Also note that 8GB was the standard amount of RAM 7 years ago...


19 hours ago, leadeater said:

Never ceases to amaze me how willing people can be to accept being nickel-and-dimed so needlessly. For all the arguments about how Apple devices are premium, high-quality devices befitting a higher asking price and value, with high finish quality on the chassis, large batteries despite being power efficient, premium screens, blah blah, it seems to stop at system memory. Why?

 

One of the reasons Apple doesn't have a greater market share is that their offerings are consistently "offensive" to people who want gaming machines and people who want workstations, both of which require something upgradable that Apple doesn't offer at all now. It never has to be upgradable in the sense that you need to upgrade the CPU and RAM separately, but parts do need to be replaceable without throwing out the rest of the system. Computer screens FAR outlast the laptops and AIO iMac desktops Apple sells, and it's a bloody shame to pay so much money for a good screen only to have to discard the entire thing because Apple refuses to sell a machine with upgradable RAM or disk space.

 

For laptops we accept the compromise for portability reasons, but why 8GB? If the goal is to reduce e-waste there should only be one SKU, with the largest RAM and disk available at the time, if it's not going to be upgradeable. If someone wants more RAM at a later date, they can wait for the next year's model and hope that Apple actually upgrades it.

 

[Chart: system memory distribution on Mac, Steam Hardware Survey]

45% have 8GB, 37% have 16GB. That kind of tells me the baseline should have been 16GB.

 

For completeness:

[Chart: Mac models used for gaming, Steam Hardware Survey]

41% of the Macs people are playing games on are MacBook Pros; the next largest segment is the MacBook Air. So 65% of Mac owners are using laptops.

 

Meanwhile:

[Chart: system memory distribution on Windows, Steam Hardware Survey]

48% have 16GB, and the next largest is 32GB on Windows. In addition...

[Chart: VRAM distribution on Windows, Steam Hardware Survey]

The most common VRAM sizes on Windows are 8GB, 6GB and 12GB. The 1GB entry can only be assumed to be the Steam Hardware Survey also counting iGPUs. So, ignoring that, a Mac touted as a gaming device should really have 24GB to have parity with a Windows machine (16GB system, 8GB VRAM).

 

If that is not the use case, then the VRAM on the GPU is largely going unused in Windows. On macOS, that theoretically means the RAM can be used for more general purposes, but again, it also means an 8GB M-series laptop is equal to a Windows desktop with 6GB of RAM and 2GB of VRAM, or some variation that adds up to 8GB, and that is well underpowered to be marketed as a gaming device.

 


On 11/6/2023 at 6:28 PM, Agall said:

In the testing I've done with several M1 Mac Minis and an M1 Air, even in a professional environment with Adobe CC, I didn't run into issues with 8GB.

So why exactly would you pay for the new "pro" model if the entry-level models from 2 years ago are plenty for your use case?

 

My ThinkPad from 2011 has 8GB of RAM. Having the same amount in a top-of-the-line "pro" machine released over 12 years later is comical.

On 11/6/2023 at 8:31 PM, Kisai said:

That said, I have 96GB in my desktop, and it's rarely using less than 32GB.

To be fair, this is a consequence of caching more than a real "need" for all that memory. Unused memory is wasted memory, so Windows rightfully fills it with temporary data to avoid loading it again later.
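To illustrate the free-versus-available distinction, here is a minimal sketch using the cross-platform psutil package (my example, not something from this thread): "free" is memory sitting truly idle, while "available" also counts cache the OS can reclaim on demand.

# pip install psutil
import psutil

vm = psutil.virtual_memory()
print(f"total:     {vm.total / 2**30:5.1f} GiB")
print(f"used:      {vm.used / 2**30:5.1f} GiB")
print(f"free:      {vm.free / 2**30:5.1f} GiB")       # truly untouched memory
print(f"available: {vm.available / 2**30:5.1f} GiB")  # free + reclaimable cache

# Linux reports the page cache separately; Windows keeps a similar
# "standby list" that is likewise dropped the moment a program needs it.
cached = getattr(vm, "cached", None)
if cached is not None:
    print(f"cached:    {cached / 2**30:5.1f} GiB")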

On 11/6/2023 at 8:31 PM, Kisai said:

The problem I see with the Mac laptops (not the Minis or iMacs) is that this sends them to e-waste faster when they have anemic soldered-on RAM and SSDs. As much as the soldered RAM has an advantage in making the laptop thinner, there are so many downsides that it defies the conventional logic of "buy what you need", when all it takes is your web browser or main applications being cloud-updated to a new version that then tanks the laptop's performance.

I also don't buy that the laptop needs to be this thin. I'm sure if it had been 1mm thicker in exchange for even a single SO-DIMM slot for expansion, nobody would have complained. M.2 SSDs are also thin enough that this is just not a believable excuse for soldering them on.

On 11/6/2023 at 8:06 PM, Alex Atkin UK said:

I suspect the speed of the SSD helps greatly; it could swap a fair bit and people would never really notice.

...until the constant write access kills it.


15 hours ago, Sauron said:

...until the constant write access kills it.

Yawn... this again. 10 years after soldered-down SSDs appeared in MacBooks, I am still waiting for this cruel disaster to happen and for MacBooks to die en masse from swap-killed SSDs.

15 hours ago, Sauron said:

I also don't buy that the laptop needs to be this thin. I'm sure if it had been 1mm thicker in exchange for even a single SO-DIMM slot for expansion, nobody would have complained.

This has nothing to do with device thickness. Maybe do some reading about LPDDR5 signaling and the impact of longer traces and connectors. There are good technical (performance and power) reasons to include the memory dies in the same package as the CPU/GPU/SoC.


15 minutes ago, Dracarris said:

Yawn... this again. 10 years after soldered-down SSDs appeared in MacBooks, I am still waiting for this cruel disaster to happen and for MacBooks to die en masse from swap-killed SSDs.

This isn't just about the SSD being soldered; it's about the system being designed to aggressively swap to SSD due to insufficient memory, with no option to replace the drive or even boot from an external one. It's just a fact that the more, and the more often, you write to your SSD, the sooner it will die; this doesn't necessarily mean they're all going to die 3 years from purchase, just that the potential lifespan is reduced, and once the drive dies the machine is a paperweight. Flash quality varies from drive to drive, and if you're unlucky you may get to that stage earlier than you'd like.

21 minutes ago, Dracarris said:

This has nothing to do with device thickness. Maybe do some reading about LPDDR5 signaling and the impact of longer traces and connectors. There are good technical (performance and power) reasons to include the memory dies in the same package as the CPU/GPU/SoC.

But there is no good technical reason not to also provide an expansion slot, which is all I'm asking for. They're doing tiered memory anyway, so there's absolutely no reason they couldn't just add another tier.


18 minutes ago, Sauron said:

This isn't just about the SSD being soldered; it's about the system being designed to aggressively swap to SSD due to insufficient memory, with no option to replace the drive or even boot from an external one. It's just a fact that the more, and the more often, you write to your SSD, the sooner it will die; this doesn't necessarily mean they're all going to die 3 years from purchase, just that the potential lifespan is reduced, and once the drive dies the machine is a paperweight. Flash quality varies from drive to drive, and if you're unlucky you may get to that stage earlier than you'd like.

The potential lifespan, even of devices that are close to a decade old, does not currently seem to be reduced by the SSD; at least there are no widespread reports about it. All the drastic warnings about aggressive swapping that we have heard for years did not change anything about that.

20 minutes ago, Sauron said:

But there is no good technical reason not to also provide an expansion slot, which is all I'm asking for. They're doing tiered memory anyway, so there's absolutely no reason they couldn't just add another tier.

So let me get this straight: in addition to the LPDDR5 in the SoC package, you want them to add a RAM expansion slot and then have the OS manage two different tiers/speeds of RAM?


3 hours ago, Dracarris said:

The potential lifespan, even of devices that are close to a decade old, does not currently seem to be reduced by the SSD; at least there are no widespread reports about it. All the drastic warnings about aggressive swapping that we have heard for years did not change anything about that.

It's really hard to see a pattern for something like this. The best you could do would probably be to compare failure rates of models with more RAM against models with less RAM, but who has access to sufficient data on this?

3 hours ago, Dracarris said:

So let me get this straight: in addition to the LPDDR5 in the SoC package, you want them to add a RAM expansion slot and then have the OS manage two different tiers/speeds of RAM?

It probably should be done by the memory controller rather than the OS but in theory you could do it through the OS, sure. Tiered memory is nothing new or especially difficult to implement. I mean, swapping is itself a tiered memory system.


Every time anything Apple goes wrong it's national news. If SSDs were failing as machines aged, we'd know about it; pretty confidently, that's not an issue.

 

That said, I think we've reached the point where 8GB is not enough. Certainly not on the $1600 Pro, and IMO not on the Air either. The internet continues to get worse (i.e. need more RAM), and almost certainly will continue to do so going forward. Even if it's totally fine for your use case today, in 8 years it just won't be.

 

It also really makes me question the point of the entire M3-powered MacBook Pro: once you add the (IMO mandatory) extra 8GB of RAM, you're only $200 away from an M3 Pro MacBook Pro. Who in their right mind wouldn't make the jump at that point? $200 to go from 8GB to 16GB of RAM? Huge rip-off. $200 more for 2GB more RAM (18GB total), more P-cores, more E-cores, more GPU cores, support for more displays, and another TB port? Super good value.

 

It's also dramatically undermining Apple's messaging on these computers. By all benchmarks, the M3 line looks to be a big jump forward, especially on the GPU side. And what is the predominant conversation about these computers? 8GB of RAM. Just look at this thread 😛

 

First-ever 3nm computer? New GPU memory technology? Ray tracing on an iGPU? AI upscaling that happens on the (otherwise unused in games) Neural Engine, leaving the entire GPU and CPU for the game? Nah, fuck it, let's talk about base RAM spec.


7 hours ago, Sauron said:

It probably should be done by the memory controller rather than the OS but in theory you could do it through the OS, sure. Tiered memory is nothing new or especially difficult to implement. I mean, swapping is itself a tiered memory system.

If Intel can do, and already has done, this on two x86 platforms and sockets, I'm sure Apple could also do it. Compressed and uncompressed memory is also technically a memory tier, OS-managed.
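As a toy illustration of that compressed tier (a sketch of the general idea only; this is not how macOS's actual compressor pool is implemented), cold pages can be squeezed down in RAM and inflated again on the next access:

# Toy model of OS-managed memory compression as a tier: cold pages are
# compressed in RAM instead of being written out to swap. Illustrative
# sketch only; names and the 16 KiB page size are assumptions.
import zlib

PAGE_SIZE = 16 * 1024

hot_pages: dict[int, bytes] = {}        # uncompressed, directly usable
compressed_pool: dict[int, bytes] = {}  # cold pages, compressed in RAM

def demote(page_no: int) -> None:
    """Move a cold page into the compressed pool, freeing most of its RAM."""
    compressed_pool[page_no] = zlib.compress(hot_pages.pop(page_no))

def access(page_no: int) -> bytes:
    """Fault a page back in, decompressing it if it was demoted."""
    if page_no in compressed_pool:
        hot_pages[page_no] = zlib.decompress(compressed_pool.pop(page_no))
    return hot_pages[page_no]

# Repetitive data compresses well, so demoting the page frees real memory.
hot_pages[0] = b"hello world " * (PAGE_SIZE // 12)
demote(0)
print(len(compressed_pool[0]), "bytes compressed vs", PAGE_SIZE, "raw")
access(0)  # transparent to the caller, just slower than a hot hit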

 

7 hours ago, Sauron said:

It's really hard to see a pattern for something like this. The best you could do would probably be to compare failure rates of models with more RAM against models with less RAM, but who has access to sufficient data on this?

It doesn't even matter whether or not it has largely happened yet. The issue is that it will happen, 100% guaranteed. NAND has a finite write life and it will fail, unlike an HDD, which can in principle operate indefinitely.

 

On the statistical side, how many people are actually going to know, or be told, that their Apple device failed due to write wear? I don't think Apple is going to say that; I wouldn't say that to a general consumer. To most people there isn't a lot of difference between "your SSD/storage device failed" and "your SSD wore out", unless you work in IT and the cause of failure matters (database server, wrong grade of SSD).

 

It's also not just some boogeyman fear thing; write-wear failures do happen within usable life spans:

[Screenshot: SMART wear data for a 240GB server SSD]

 

This server is 6 years old and those 240GB SSDs are just standard read-optimized server SSDs, with about the same endurance as the higher-grade TLC SSDs people are buying for desktops and laptops. It's not as good endurance-wise as a Samsung Pro, to give a rough measure of where something like this sits. Best case, this SSD will last another 14 years; you'd have to be very optimistic to believe it will actually last that long. The usage profile has been a Windows server with lots of RAM, no interactive users and no usage of the system drive other than Windows updates. So this 240GB SSD has at best a 20-year life under the lightest usage possible; anything less demanding would be the SSD not being used at all.
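For anyone wanting to sanity-check that arithmetic, here is the extrapolation as a small sketch; the 30%-of-endurance-used figure is an assumption standing in for the SMART value in the screenshot above:

# Linear wear extrapolation from SMART data. The inputs are illustrative
# assumptions (a drive that burned ~30% of its rated endurance in 6 years),
# not exact figures from the screenshot.
def remaining_years(age_years: float, wear_used_fraction: float) -> float:
    """Years of life left if the historical write rate continues unchanged."""
    rate_per_year = wear_used_fraction / age_years  # endurance burned per year
    return (1.0 - wear_used_fraction) / rate_per_year

print(remaining_years(age_years=6, wear_used_fraction=0.30))  # -> 14.0 more years, ~20 total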


32 minutes ago, leadeater said:

It doesn't even matter whether or not it has largely happened yet. The issue is that it will happen, 100% guaranteed. NAND has a finite write life and it will fail, unlike an HDD, which can in principle operate indefinitely.

Like every SSD, the ones in MacBooks will eventually fail. The question is when they fail (on a large scale): after 5, 10, or 20 years? In the latter two cases it's beyond the usual lifespan of the device anyway.

And as I said, we're past 10 years already, and since it didn't make national news, and Louis Rossmann hasn't done 19 clickbait drama videos about it like he does about every other fart Tim Apple rips out, we can be rather certain that this is, as a fucking fact, not an actual relevant issue, no matter how many people wish the opposite were true in order to have a laugh at Apple and their sheep customers.

 

And I'm sure you know that these self-reported lifetime figures are conservative estimates, with devices in fact lasting much longer most of the time.

32 minutes ago, leadeater said:

If Intel can do, and already has done, this on two x86 platforms and sockets, I'm sure Apple could also do it. Compressed and uncompressed memory is also technically a memory tier, OS-managed.

So which consumer laptop platform exactly has two hardware-separated tiers of "fast" RAM?

 

Out of curiosity, what are those platforms you refer to and how does the OS manage/support/know about these different tiers?


33 minutes ago, Dracarris said:

Like every SSD, the ones in MacBooks will eventually fail. The question is when they fail (on a large scale): after 5, 10, or 20 years? In the latter two cases it's beyond the usual lifespan of the device anyway.

Without anything causing excessive writes to them, 10 years and more is not just achievable, it's what anyone should expect. SSD controller failure is more common, but you shouldn't see that on Macs, with the controller in the SoC, so the more common failure point is ruled out for Macs. SSD controller failure also typically happens in years 1 through 3, and if you get past that, it's probably not going to happen.

 

Personally, I have had three 240GB SATA SSDs wear out from writes in my home server within 2 to 3 years; those were being used as write-back cache for my HDDs. It's not difficult to wear out a low-capacity SSD if you aren't careful about how many writes you are incurring on it. I knew that, though; I bought cheap ones with the expectation that they would fail. This is why I say be careful: swap thrashing is functionally the same as something like write-back caching, lots of writes to the SSD. Swap thrashing isn't as bad as using an SSD for write-back cache, though; you'd have to have some wildly bad swapping, ALL the time and not just under higher system load.

 

33 minutes ago, Dracarris said:

So which consumer laptop platform exactly has two hardware-separated tiers of "fast" RAM?

None, but that isn't the point. It can be done. It's not an unknown engineering challenge on the platform side to design a memory controller and firmware/microcode to do it. It just really doesn't make a heck of a lot of sense on anything other than, say, a Mac Pro or Mac Studio (even then 🤷‍♂️). It makes no sense cost-wise to do this while offering large capacities on-package with the SoC. It would only make sense if the on-package memory were only for, and sized to be, a cache, and then it probably wouldn't be DRAM either.

 

There is no complexity in figuring out how to do it; it's been done. It isn't an issue to figure out a system board layout that achieves it. The issue is the cost of doing it for the extremely little benefit in a laptop. DIMM slots in laptops benefit the system builder/OEM/ODM most, not the consumer, since the builder doesn't have to project memory capacity demand before manufacturing the mainboard (or, for Apple, the SoC).

 

Sure, I have upgraded memory in laptops before; twice, I think, three times at most.


8 hours ago, Sauron said:

It probably should be done by the memory controller rather than the OS but in theory you could do it through the OS, sure. Tiered memory is nothing new or especially difficult to implement. I mean, swapping is itself a tiered memory system.

How would the controller know which data to put into which tier? It could do some self-managed LRU along with address translation/mapping, but even then it would have to know about section start addresses and sizes. All in all, that's quite a lot of management work for a memory controller, for probably decidedly sub-par QoS.

Also, tiered memory is all nice and fine, but having two fast RAM tiers isn't exactly the same as fast RAM plus an SSD that is a lot slower, or a compressed section that first has to pass through the CPU.

 

So I don't really see this making a lot of sense without direct OS support, which again makes me curious which OSes support it and how.


10 minutes ago, Dracarris said:

How would the controller know which data to put into which tier? It could do some self-managed LRU along with address translation/mapping, but even then it would have to know about section start addresses and sizes. All in all, that's quite a lot of management work for a memory controller, for probably decidedly sub-par QoS.

Also, tiered memory is all nice and fine, but having two fast RAM tiers isn't exactly the same as fast RAM plus an SSD that is a lot slower, or a compressed section that first has to pass through the CPU.

 

So I don't really see this making a lot of sense without direct OS support, which again makes me curious which OSes support it and how.

Linux supports it without issue; Intel has had that kind of use case for a long time with their Optane offerings, and now with their on-package HBM offerings.

The strategies used are pretty similar to multi-level cache systems, which we have plenty of, and can be either totally transparent to the OS (you'd be flabbergasted if you knew how advanced memory controllers are) or OS-aware, to allow some custom strategies.


44 minutes ago, igormp said:

Linux supports it without issue; Intel has had that kind of use case for a long time with their Optane offerings, and now with their on-package HBM offerings.

The strategies used are pretty similar to multi-level cache systems, which we have plenty of, and can be either totally transparent to the OS (you'd be flabbergasted if you knew how advanced memory controllers are) or OS-aware, to allow some custom strategies.

Still does not really answer my question. To keep track of every allocated section in memory, the controller itself needs a sizeable amount of memory.

How is it informed about what is allocated where, and when it is freed?

And apart from access patterns, it by definition has no clue about when to move what; if it moves at the wrong moment, performance could even be adversely affected.

 

Do you by any chance have some info about how Linux does it?


8 minutes ago, Dracarris said:

Still does not really answer my question.

Your question was how, and it is a variation of LRU, like most cache stuff.

2 hours ago, Dracarris said:

To keep track of every allocated section in memory, the controller itself needs a sizeable amount of memory.

Not really; you don't see that happening for all the tiered levels of cache we have, or for virtual memory. That's why we have the likes of the TLB.

2 hours ago, Dracarris said:

How is it informed about what is allocated where, and when it is freed?

The OS gets the syscalls and then sends the controller commands to allocate and free stuff.

2 hours ago, Dracarris said:

And apart from access patterns, it by definition has no clue about when to move what; if it moves at the wrong moment, performance could even be adversely affected.

 

Hence there are different algorithms to keep track of that. As an example, Intel's HBM offerings let you choose whether to run it as part of the memory (so your first handful of RAM is really fast), as a cache (totally transparent to the OS), or as standalone RAM (so no other RAM in the system).

2 hours ago, Dracarris said:

Do you by any chance have some info about how Linux does it?

It currently uses the same idea as NUMA nodes. You can already think of memory attached to another CPU as tiered memory; they apply that same logic to other structures. However, there has been discussion about improving this idea:

https://lore.kernel.org/lkml/CAAPL-u9sVx94ACSuCVN8V0tKp+AMxiY89cro0japtyB=xNfNBw@mail.gmail.com/

That's especially relevant given things like CXL and other HMM stuff like Grace Hopper and whatnot.

 

Anyhow, here you have the specifics for the kernel impl:

NUMA stuff - https://github.com/torvalds/linux/blob/3ca112b71f35dd5d99fc4571a56b5fc6f0c15814/include/linux/numa.h

General memory management - https://github.com/torvalds/linux/blob/3ca112b71f35dd5d99fc4571a56b5fc6f0c15814/include/linux/mm.h

Code responsible for moving the pages up and down the hierarchy (only cares how to move stuff, not when) - https://github.com/torvalds/linux/blob/3ca112b71f35dd5d99fc4571a56b5fc6f0c15814/include/linux/migrate.h

Stuff related to HMM - https://github.com/torvalds/linux/blob/master/mm/hmm.c

 

Here are the relevant docs for the above code:

https://docs.kernel.org/mm/hmm.html

https://docs.kernel.org/mm/numa.html

https://docs.kernel.org/mm/page_migration.html

 

This also goes a bit into Linux's memory-tiering stuff:

https://stevescargall.com/blog/2022/06/10/using-linux-kernel-memory-tiering/
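As a quick hands-on companion to those docs, here is a small Linux-only sketch (it assumes the standard sysfs NUMA interface) that lists each node's SLIT distance vector; on a tiered-memory system such as CXL or HBM flat mode, the slow tier shows up as a CPU-less, memory-only node at a larger distance:

# List NUMA nodes and their distances via sysfs (Linux only).
from pathlib import Path

node_root = Path("/sys/devices/system/node")
nodes = sorted(node_root.glob("node[0-9]*"), key=lambda p: int(p.name[4:]))
for node in nodes:
    distances = (node / "distance").read_text().split()
    cpulist = (node / "cpulist").read_text().strip()
    kind = f"cpus {cpulist}" if cpulist else "memory-only (slow tier?)"
    print(f"{node.name}: distances={distances} {kind}")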


3 hours ago, leadeater said:

Without anything causing excessive writes to them, 10 years and more is not just achievable, it's what anyone should expect. SSD controller failure is more common, but you shouldn't see that on Macs, with the controller in the SoC, so the more common failure point is ruled out for Macs. SSD controller failure also typically happens in years 1 through 3, and if you get past that, it's probably not going to happen.

Personally, I have had three 240GB SATA SSDs wear out from writes in my home server within 2 to 3 years; those were being used as write-back cache for my HDDs. It's not difficult to wear out a low-capacity SSD if you aren't careful about how many writes you are incurring on it. I knew that, though; I bought cheap ones with the expectation that they would fail. This is why I say be careful: swap thrashing is functionally the same as something like write-back caching, lots of writes to the SSD. Swap thrashing isn't as bad as using an SSD for write-back cache, though; you'd have to have some wildly bad swapping, ALL the time and not just under higher system load.


My previous laptop was a MacBook Air: 4GB of RAM, 128GB of SSD. I used it from 2011 until I got my M1 MacBook, so a solid decade.

 

In that time, I used it probably 10 hours a day, every weekday. Some of that was loading a different (HD) movie onto it every day to watch on the train. With only 4GB of RAM, I was using a TON of swap by the end of that decade. It was not an easy SSD life, and, with only 128GB, there were not many cells to spread the wear over.

 

When I finished using that computer, it still had 55% SSD life remaining. 


6 hours ago, Dracarris said:

So I don't really see this making a lot of sense without direct OS support, which again makes me curious which OSes support it and how.

Every OS other than Windows, officially. At a minimum, HBM-only mode would work with Windows, but Intel just doesn't offer official support yet, and there's no specific reason HBM cache mode wouldn't work either. VMware ESXi does have official support, and you can run a Windows VM on an ESXi host with an Intel Xeon Max for an officially supported configuration.

 

It's not really what Intel Xeon Max CPUs are for though, but you can. We aren't prevented from doing dumb things 🙃

 

6 hours ago, Dracarris said:

How would the controller know which data to put into which tier? It could do some self-managed LRU along with address translation/mapping, but even then it would have to know about section start addresses and sizes. All in all, that's quite a lot of management work for a memory controller, for probably decidedly sub-par QoS.

Architecturally, it's different memory controller groups on a CPU die or tile, so you can have defined memory pools and choose how to treat them. They could both be DDR memory controllers, but they have to actually be logically separate.

 

 

3 hours ago, igormp said:

It currently uses the same idea as NUMA nodes. You can already think of memory attached to another CPU as tiered memory; they apply that same logic to other structures. However, there has been discussion about improving this idea:

https://lore.kernel.org/lkml/CAAPL-u9sVx94ACSuCVN8V0tKp+AMxiY89cro0japtyB=xNfNBw@mail.gmail.com/

That's especially relevant given things like CXL and other HMM stuff like Grace Hopper and whatnot.

 

Quote

Since HBM is invisible for OS under Cache Mode, there are only two NUMA nodes for DDR RAM of each CPU when a dual-socket system is under Quadrant Clustering.

[Figure: NUMA topology of a dual-socket system in Cache Mode under Quadrant Clustering]

 

Quote

Since HBM is invisible for OS under Cache Mode, there are only four NUMA nodes for DDR RAM of each CPU when a dual-socket system is under SNC4 Clustering. The following figure shows NUMA architecture of Cache Mode under SNC4 Clustering.

[Figure: numactl output showing four NUMA nodes per CPU in Cache Mode under SNC4 Clustering]

 

https://lenovopress.lenovo.com/lp1738-implementing-intel-high-bandwidth-memory

 

Quote

Here is an example of the chip split into quadrants with 16GB of HBM2e and 32GB of DDR5 (2x 16GB DDR5-4800 DIMMs) per quadrant. One can see that our total memory capacity is 128GB even though we have 64GB of HBM2e and 128GB of DDR5.

[Figure: Xeon Max cache-mode topology, 16GB HBM2e and 32GB DDR5 per quadrant]

https://www.servethehome.com/intel-xeon-max-9480-deep-dive-intel-has-64gb-hbm2e-onboard-like-a-gpu-or-ai-accelerator/3/

 

P.S. Windows has been NUMA-aware since Windows 7 / Server 2008 R2.


5 hours ago, leadeater said:

Every OS other than Windows, officially. At a minimum, HBM-only mode would work with Windows, but Intel just doesn't offer official support yet, and there's no specific reason HBM cache mode wouldn't work either. VMware ESXi does have official support, and you can run a Windows VM on an ESXi host with an Intel Xeon Max for an officially supported configuration.

I wish I could understand what any of this means as a first-year computer engineering student.


On 11/10/2023 at 9:34 AM, Sauron said:

it's about the system being designed to aggressively swap to SSD due to insufficient memory, with no option to replace the drive or even boot from an external one

You can boot Macs from an external drive. Always could, still can. Whether a write-worn SSD would somehow stop that from working, though, I couldn't say.


16 hours ago, Dracarris said:

How would the controller know which data to put into which tier?

Oh dear, are we really about to question the whole concept of tiered memory? This is established computer science; it has been done and is being done, and swap and cache are just two of the ways it's widely used.

 

There are plenty of algorithms to choose from, ranging from simply always prioritizing fast memory and only falling back to slow memory when the fast memory is full (which is more or less how swap works), to smarter predictive algorithms like the ones used to populate caches. This is a solved problem.
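For the simple "fill fast first, spill to slow" end of that range, a minimal sketch (illustrative names and capacities; not any vendor's implementation) fits in a few lines, using an ordered dict as the LRU bookkeeping:

# Two-tier memory with LRU demotion: fast tier fills first, the
# least-recently-used page spills to the slow tier, and touched
# pages get promoted back. Capacities here are toy values.
from collections import OrderedDict

class TieredMemory:
    def __init__(self, fast_capacity: int):
        self.fast = OrderedDict()   # hot pages, most recently used last
        self.slow = {}              # overflow tier (e.g. off-package RAM)
        self.fast_capacity = fast_capacity

    def access(self, page: int, data: bytes) -> None:
        if page in self.slow:                     # promote on access
            self.slow.pop(page)
        self.fast[page] = data
        self.fast.move_to_end(page)               # mark most recently used
        while len(self.fast) > self.fast_capacity:
            victim, vdata = self.fast.popitem(last=False)  # evict LRU page
            self.slow[victim] = vdata             # demote to slow tier

mem = TieredMemory(fast_capacity=2)
for p in (1, 2, 3, 1):                            # page 2 ends up the LRU victim
    mem.access(p, b"...")
print(sorted(mem.fast), sorted(mem.slow))         # [1, 3] [2]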

3 hours ago, Paul Thexton said:

You can boot Macs from an external drive. Always could, still can. Whether a write-worn SSD would somehow stop that from working though I couldn’t say.

Fair enough, I thought I remembered you couldn't.

16 hours ago, leadeater said:

The issue is the cost of doing it for the extremely little benefit in a laptop. DIMM slots in laptops benefit the system builder/OEM/ODM most, not the consumer, since the builder doesn't have to project memory capacity demand before manufacturing the mainboard (or, for Apple, the SoC).

That might be the case... if the systems didn't come out of the factory with extremely limited capacity, and if the OEM capacity upgrades didn't cost multiple times the market value of consumer RAM. As a consumer it's also good to at least have the option; it's not always easy to project what your memory needs will be 5 years from now.

