
Apple ARM is superior?

Hawkeye27

I found this on Quora. Is it accurate?

"ARM, when Apple first decided to start using it, had really crappy memory bandwidth, and no ability to engage in out of order execution of instructions, and a very bad GPU.

Apple bought PA Semi, which had been founded by people from NetScaler. The people they bought were very, very good at chip design. This is the path they followed, in reverse order:

  • NetScaler was acquired by Citrix
    • People left to found PA Semi, because they didn’t want to work for Citrix
  • DEC was acquired by Compaq
    • People left to found NetScaler, when Compaq cancelled the DEC Alpha

So PA Semi had a lot of NetBSD people and the original DEC Alpha chip designers, and they were freakishly good chip designers.

Apple had them fix the chip design flaws in the ARM platform. Specifically, the out of order execution, which required Power Architecture style instruction and data pipeline sync instructions be added, memory bandwidth, and memory access latency issues.

Most recently, they had them tackle 64 bit, and GPU design issues.

So your answer?

Apple ARM chips are about 10–12 years ahead of everyone else’s chip.

And no one else can buy them, because Apple isn’t selling them into the mass market. After all… Apple isn’t a chip company."


The 10-12 years ahead thing is complete speculation. The M1 is merely comparable to the x86 competition. But it is true that Apple isn't selling the M1 to the mass market outside of the products that contain it.

Main: AMD Ryzen 7 5800X3D, Nvidia GTX 1080 Ti, 16 GB 4400 MHz DDR4 Fedora 38 x86_64

Secondary: AMD Ryzen 5 5600G, 16 GB 2667 MHz DDR4, Fedora 38 x86_64

Server: AMD Athlon PRO 3125GE, 32 GB 2667 MHz DDR4 ECC, TrueNAS Core 13.0-U5.1

Home Laptop: Intel Core i5-L16G7, 8 GB 4267 MHz LPDDR4x, Windows 11 Home 22H2 x86_64

Work Laptop: Intel Core i7-10510U, NVIDIA Quadro P520, 8 GB 2667 MHz DDR4, Windows 10 Pro 22H2 x86_64


6 minutes ago, Hawkeye27 said:

Apple ARM chips are about 10–12 years ahead of everyone else’s chip.

I'd say this bit is debatable. Regardless of method, if the systems aren't much faster in similar tasks, I wouldn't consider them that far ahead.

Speed relative to power consumption is another factor, but in the desktop industry people don't care much for efficiency over speed.


They optimize that CPU for their OS and programs, so a lot of it is just optimization. I don't care how fast Macs get, I will never use a closed system that reeks of Apple DRM. Eventually desktops will move on and maybe do ARM or something else, and as long as it has backwards compatibility with old instruction sets I'm all for it.


Is XYZ superior?

 

What kind of question is that.

 

 

Apple ARM CPUs simply have next gen disruptive performance per watt and room for unprecedented workflow-driven customization year after year. No biggie. 


Btw, T minus 30 days for the first desktop CPU from Apple.

 

The mobile M1 is 15-30W TDP and rivals some desktop CPUs already. 

 

The desktop M1T at 65-95W will melt faces. 


1 hour ago, Hawkeye27 said:

Apple ARM chips are about 10–12 years ahead of everyone else’s chip.

citation needed.

 

People seem to think ARM only exists in smartphones and tablets. I guess Fugaku can't hold a candle to a MacBook Pro...

🌲🌲🌲

 

 

 

◒ ◒ 


It's only superior if it fits your exact needs/workflow. Otherwise, it's irrelevant. Is it a step forward for faster tech? Totally. But not everyone needs an M-series Mac. Some people only need a machine that lets them browse the web; others need Quadros paired with Threadrippers for extreme workloads. It's not one size fits all.


2 hours ago, Hawkeye27 said:

I found this on Quora. It is accurate?

"ARM, when Apple first decided to start using it, had really crappy memory bandwidth, and no ability to engage in out of order execution of instructions, and a very bad GPU.

Apple bought PA Semi, which had been founded by people from NetScaler. The people they bought were very, very good at chip design. This is the path they followed, in reverse order:

  • NetScaler was acquired by Citrix
    • People left to found PA Semi, because they didn’t want to work for Citrix
  • DEC was acquired by Compaq
    • People left to found NetScaler, when Compaq cancelled the DEC Alpha

So PA Semi had a lot of NetBSD people and the original DEC Alpha chip designers, and they were freakishly good chip designers.

Apple had them fix the chip design flaws in the ARM platform. Specifically, the out of order execution, which required Power Architecture style instruction and data pipeline sync instructions be added, memory bandwidth, and memory access latency issues.

Most recently, they had them tackle 64 bit, and GPU design issues.

So your answer?

Apple ARM chips are about 10–12 years ahead of everyone else’s chip.

And no one else can buy them, because Apple isn’t selling them into the mass market. After all… Apple isn’t a chip company."

Conclusion not supported by description

Not a pro, not even very good.  I’m just old and have time currently.  Assuming I know a lot about computers can be a mistake.

 

Life is like a bowl of chocolates: there are all these little crinkly paper cups everywhere.


3 hours ago, TempestCatto said:

It's only superior if it fits your exact needs/workflow. Otherwise, it's irrelevant. Is it a step forward for faster tech? Totally. But not everyone needs an M-series Mac. Some people only need a machine that lets them browse the web; others need Quadros paired with Threadrippers for extreme workloads. It's not one size fits all.

I do feel that this leads to a much bigger, broader and more important point: "the best setup" for anyone is always going to be something that works for their needs. I could never find a use for an ARM Mac - or any Mac, for that matter. They're just purposeless for the analog video recording and game asset ripping that I do. That's why my desktop has an AMD Ryzen 7 3800XT and not an Intel Core i7 or i9 of some sort. 

But for others, Macs are optimal for their needs or even complete monsters at the shit they wanna do. Applying a one-size-fits-all motif to someone's needs is very hard to do because it's not rational. Don't get me wrong, I'd love a crazy ass Threadripper setup - but it'd go down the drain unless I constantly hammered my setup with 2160p60 recording. But I don't; the vast majority of things I record are at 984x720. My needs aren't yours, your needs aren't anyone else's, anyone else's needs aren't mine, etc.

Check out my guide on how to scan cover art here!

Local asshole and 6th generation console enthusiast.


5 minutes ago, PlayStation 2 said:

I do feel that this leads to a much bigger, broader and more important point: "the best setup" for anyone is always going to be something that works for their needs. I could never find a use for an ARM Mac - or any Mac, for that matter. They're just purposeless for the analog video recording and game asset ripping that I do. That's why my desktop has an AMD Ryzen 7 3800XT and not an Intel Core i7 or i9 of some sort. 

But for others, Macs are optimal for their needs or even complete monsters at the shit they wanna do. Applying a one-size-fits-all motif to someone's needs is very hard to do because it's not rational. Don't get me wrong, I'd love a crazy ass Threadripper setup - but it'd go down the drain unless I constantly hammered my setup with 2160p60 recording. But I don't; the vast majority of things I record are at 984x720. My needs aren't yours, your needs aren't anyone else's, anyone else's needs aren't mine, etc.

I have a somewhat similar issue: the M1 is integrated-GPU only. My main use case is video games, which like extremely powerful graphics.

Not a pro, not even very good.  I’m just old and have time currently.  Assuming I know a lot about computers can be a mistake.

 

Life is like a bowl of chocolates: there are all these little crinkly paper cups everywhere.


8 hours ago, Hawkeye27 said:

I found this on Quora. It is accurate?

"ARM, when Apple first decided to start using it, had really crappy memory bandwidth, and no ability to engage in out of order execution of instructions, and a very bad GPU.

Apple bought PA Semi, which had been founded by people from NetScaler. The people they bought were very, very good at chip design. This is the path they followed, in reverse order:

  • NetScaler was acquired by Citrix
    • People left to found PA Semi, because they didn’t want to work for Citrix
  • DEC was acquired by Compaq
    • People left to found NetScaler, when Compaq cancelled the DEC Alpha

So PA Semi had a lot of NetBSD people and the original DEC Alpha chip designers, and they were freakishly good chip designers.

Apple had them fix the chip design flaws in the ARM platform. Specifically, the out of order execution, which required Power Architecture style instruction and data pipeline sync instructions be added, memory bandwidth, and memory access latency issues.

Most recently, they had them tackle 64 bit, and GPU design issues.

So your answer?

Apple ARM chips are about 10–12 years ahead of everyone else’s chip.

And no one else can buy them, because Apple isn’t selling them into the mass market. After all… Apple isn’t a chip company."

BS. The chip is good compared to other ARM chips and mediocre compared to x86 when it is tested with real workloads that use 100% of the cores.

 

 

In some synthetic benches where the CPU isn't even used to 10-20% for most of the bench's duration (e.g. Geekbench and some web benches), it gets a better score because those benches rely heavily on memory speed and latency. The M1 has about 4300 MHz low-latency RAM, and it's being compared against laptops that only have 2666 MHz RAM at best (roughly 1600 MHz slower than the M1's), or against desktops with that same RAM speed or 3200 MHz.
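To put rough numbers on that gap, here's a quick back-of-the-envelope sketch (assuming the commonly reported 128-bit LPDDR4X-4266 interface for the M1 and a typical dual-channel DDR4-2666 laptop; these are assumptions, not measured figures):

#include <cstdio>

// Rough theoretical peak bandwidth = transfer rate (MT/s) * bus width (bytes).
int main() {
    const double m1_mts     = 4266.0;  // assumed LPDDR4X-4266 on the M1
    const double laptop_mts = 2666.0;  // assumed dual-channel DDR4-2666 laptop
    const double bus_bytes  = 16.0;    // 128-bit wide interface in both cases

    const double m1_gbs     = m1_mts * bus_bytes / 1000.0;      // ~68 GB/s
    const double laptop_gbs = laptop_mts * bus_bytes / 1000.0;  // ~43 GB/s

    std::printf("M1 ~%.0f GB/s vs DDR4-2666 laptop ~%.0f GB/s (~%.0f%% more)\n",
                m1_gbs, laptop_gbs, (m1_gbs / laptop_gbs - 1.0) * 100.0);
}

So on paper that's roughly 60% more bandwidth than the DDR4-2666 machines it usually gets compared against, before latency even enters the picture.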

 

But even that huge advantage in these unfair comparisons doesn't help the ARM chip that much if you really compare head to head and don't just want to give Apple a promotion/advertisement.


Are you asking if the history is right? Probably... I'd have to research it, but that's not the kind of thing someone makes up. And yes.

🖥️ Motherboard: MSI A320M PRO-VH PLUS  ** Processor: AMD Ryzen 2600 3.4 GHz ** Video Card: Nvidia GeForce 1070 TI 8GB Zotac 1070ti 🖥️
🖥️ Memory: 32GB DDR4 2400  ** Power Supply: 650 Watts Power Supply Thermaltake +80 Bronze Thermaltake PSU 🖥️

🍎 2012 iMac i7 27";  2007 MBP 2.2 GHZ; Power Mac G5 Dual 2GHZ; B&W G3; Quadra 650; Mac SE 🍎

🍎 iPad Air2; iPhone SE 2020; iPhone 5s; AppleTV 4k 🍎


4 hours ago, papajo said:

BS. The chip is good compared to other ARM chips and mediocre compared to x86 when it is tested with real workloads that use 100% of the cores.

 

 

In some synthetic benches where the CPU isn't even used to 10-20% for most of the bench's duration (e.g. Geekbench and some web benches), it gets a better score because those benches rely heavily on memory speed and latency. The M1 has about 4300 MHz low-latency RAM, and it's being compared against laptops that only have 2666 MHz RAM at best (roughly 1600 MHz slower than the M1's), or against desktops with that same RAM speed or 3200 MHz.

 

But even that huge advantage in these unfair comparisons doesn't help the ARM chip that much if you really compare head to head and don't just want to give Apple a promotion/advertisement.

It's mediocre when running stuff through Rosetta. So the chip can get gud'nuff running stuff designed for x86 only. The supposition is that when it runs stuff actually designed to work with it, it may be fast. There was a lot of worry that it would be a lot worse than it is. What is amazing about the M1 is not that it's astoundingly fast but that it's not astoundingly slow.

Not a pro, not even very good.  I’m just old and have time currently.  Assuming I know a lot about computers can be a mistake.

 

Life is like a bowl of chocolates: there are all these little crinkly paper cups everywhere.


The M1 chip is OK but not the best CPU; only fanboys keep saying it's the best. It also has a lot of disadvantages: it only has a limited amount of memory, the amount the system has can't be upgraded, and there are I/O problems (e.g. it can't have a dedicated GPU). There's also the problem of having the GPU, cores, and memory all in one chip, which could mean it may not scale well.

 

Another problem is that some CISC instructions (e.g. in x86-64) are really useful and make programs more efficient, but as the M1 is RISC many of those instructions aren't there, so it takes more instructions just to do one thing, making the program run less efficiently.

 

 


2 minutes ago, A51UK said:

The M1 chip is OK but not the best CPU; only fanboys keep saying it's the best. It also has a lot of disadvantages: it only has a limited amount of memory, the amount the system has can't be upgraded, and there are I/O problems (e.g. it can't have a dedicated GPU). There's also the problem of having the GPU, cores, and memory all in one chip, which could mean it may not scale well.

 

Another problem is that some CISC instructions (e.g. in x86-64) are really useful and make programs more efficient, but as the M1 is RISC many of those instructions aren't there, so it takes more instructions just to do one thing, making the program run less efficiently.

 

 

This seems to boil down to "the M1 isn't good because CISC is better than RISC." The opposite was once considered true. My suspicion is they're pretty close to the same these days. M1s do have limitations, and if you require the things where the M1 is limited, it's not a good choice. If you don't, though, it has its points, and if your requirements dovetail with the strengths of the M1, it's actually better for that use case. What impresses people, it seems, is that the weaknesses aren't particularly weak, giving the parts where it does excel more limelight. It excels mostly, it seems, in low power consumption and portability.

Not a pro, not even very good.  I’m just old and have time currently.  Assuming I know a lot about computers can be a mistake.

 

Life is like a bowl of chocolates: there are all these little crinkly paper cups everywhere.


17 hours ago, Hawkeye27 said:

So your answer?

Apple ARM chips are about 10–12 years ahead of everyone else’s chip.

ARM is just an ISA. Their µArch is nice but nothing that impressive; there are two really nice features found on it: the wide-ass decoder (and the accompanying, really freaking huge reorder buffer, both of which are roughly twice as large as in other existing µArches), and the implementation of the x86 memory model in hardware (that's why the perf penalty from Rosetta isn't that huge).
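A minimal sketch of why that hardware memory-model point matters (a classic message-passing pattern in C++; just an illustration, not Apple's or Rosetta's actual code):

#include <atomic>
#include <cassert>
#include <thread>

int data = 0;                 // plain, non-atomic payload
std::atomic<int> flag{0};     // synchronization flag

void producer() {
    data = 42;
    flag.store(1, std::memory_order_release);  // publish the payload
}

void consumer() {
    while (flag.load(std::memory_order_acquire) == 0) { /* spin */ }
    assert(data == 42);       // guaranteed by the acquire/release pair
}

int main() {
    std::thread a(producer), b(consumer);
    a.join();
    b.join();
}

x86's TSO model already orders plain loads and stores strongly enough that x86 binaries get this kind of pattern right without explicit barriers, while ARM's weaker model normally needs them. A translator like Rosetta would otherwise have to sprinkle barriers everywhere, so a core that can run in a TSO-like mode sidesteps most of that penalty.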

 

Other than that, it managed to be good as a whole system, not only due to the CPU itself. You have tons of cache and tight memory integration on the SoC itself, allowing for really high memory bandwidth and low latency to keep the CPU fed (remember, keeping the CPU fed is how AMD gets most of its perf improvements gen after gen), along with fast NVMe storage that lets Apple abuse swap to keep the system feeling snappy.

 

And when it comes to efficiency, all of the above plus the fact that they're on the most bleeding-edge node currently available (5nm), while none of their competitors are, helps a lot.

 

3 hours ago, A51UK said:

Another problem is that some CISC instructions (e.g. in x86-64) are really useful and make programs more efficient, but as the M1 is RISC many of those instructions aren't there, so it takes more instructions just to do one thing, making the program run less efficiently.

That's irrelevant since you need to decode those anyway, and it only refers to the ISA, while what really matters is the µArch behind it.
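A toy example of why: take something as simple as bumping an array element. The instruction sequences in the comments are rough hand-written sketches, not actual compiler output:

#include <cstdint>

// Increment one element of an array in memory.
void bump(int32_t* a, int64_t i, int32_t x) {
    a[i] += x;
    // x86-64 can express this as a single read-modify-write instruction,
    // roughly:   add dword ptr [rdi + rsi*4], edx
    // A load/store ISA like AArch64 needs roughly three:
    //   ldr w8, [x0, x1, lsl #2]
    //   add w8, w8, w2
    //   str w8, [x0, x1, lsl #2]
    // But the x86 core cracks its single instruction into comparable
    // micro-ops internally, so the instruction count at the ISA level says
    // little about how fast the µArch actually gets the work done.
}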

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga


16 hours ago, saltycaramel said:

Btw, T minus 30 days for the first desktop CPU from Apple.

 

The mobile M1 is 15-30W TDP and rivals some desktop CPUs already. 

 

The desktop M1T at 65-95W will melt faces. 

power consumption does not scale equally with performance.

AMD blackout rig

 

cpu: ryzen 5 3600 @4.4ghz @1.35v

gpu: rx5700xt 2200mhz

ram: vengeance lpx c15 3200mhz

mobo: gigabyte b550 auros pro 

psu: cooler master mwe 650w

case: masterbox mbx520

fans:Noctua industrial 3000rpm x6

 

 


17 hours ago, svmlegacy said:

The 10-12 years ahead thing is complete speculation. The M1 is merely comparable to the x86 competition. But it is true that Apple isn't selling the M1 to the mass market outside of the products that contain it.

Nah, 10-12 years ahead is a stretch. Maybe 2-3 years ahead of everyone else. Which is still a lot in the tech world.


  

28 minutes ago, igormp said:

That's irrelevant since you need to decode those anyway, and it only refers to the ISA, while what really matters is the µArch behind it.

None of this makes sense.

 

The µArch of each individual CPU is built around its ISA. So there is no "only": the ISA is the whole deal; the µArch is just the practical approach to making the ISA work using circuitry/logic.

 

And @A51UK's argument is not irrelevant: you need to translate the source code in order to make it work on a different ISA. It's not as easy and trivial as converting MP3 to MP4 or something like that.

Just now, RejZoR said:

Nah, 10-12 years ahead is a stretch. Maybe 2-3 years ahead of everyone else. Which is still a lot in the tech world.

They are 0 years ahead of anyone else

 

They just have fast RAM; the cores are slower compared to x86 ones.

 

The thing that makes it snappy is Apple's closed ecosystem in addition to the fast RAM.

 

If they are ahead in anything, it's that they are a 2-trillion-dollar company and can produce stuff like that with nice polish, whereas e.g. ASUS, Toshiba, etc. couldn't just make an ARM chip with all the software development needed to polish it, and make it in huge quantities so they could sell it at a remotely viable price tag (not that the M1 is cheap, but it is far cheaper than what other competitors would have to charge for their ARM version).


18 minutes ago, Letgomyleghoe said:

power consumption does not scale equally with performance.

Does not scale linearly.

It does scale somehow tho. 

And it’s gonna be glorious if we extrapolate current performance-per-watt to a desktop thermal envelope. 


3 minutes ago, saltycaramel said:

Does not scale linearly.

It does scale somehow tho. 

And it’s gonna be glorious if we extrapolate current performance-per-watt to a desktop thermal envelope. 

It's an ARM chip, don't get so excited. Besides, it will probably be the same thing, just bigger with more cores.


3 minutes ago, saltycaramel said:

Does not scale linearly.

It does scale somehow tho. 

And it’s gonna be glorious if we extrapolate current performance-per-watt to a desktop thermal envelope. 

But scaling current performance-per-watt to a desktop chip assumes that it does scale linearly, which I can guarantee you it will not. Sure, it's gonna be a great chip, but it's not gonna be a 5950X or 10900K.
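Rough sketch of why the linear extrapolation is optimistic: dynamic power goes roughly as C·V²·f, and voltage usually has to rise along with frequency, so clocks bought with extra watts get expensive fast, while extra cores scale much closer to linearly. Illustrative numbers only, these aren't Apple's specs:

#include <cmath>
#include <cstdio>

int main() {
    const double mobile_watts  = 15.0;  // assumed mobile power budget
    const double desktop_watts = 90.0;  // assumed desktop power budget

    // Dynamic power ~ C * V^2 * f; if V rises roughly with f, power ~ f^3,
    // so frequency headroom from extra watts follows a cube-root law.
    const double freq_gain = std::cbrt(desktop_watts / mobile_watts);  // ~1.8x

    // Spending the same budget on more identical cores scales ~linearly
    // (for workloads that can actually use them).
    const double core_gain = desktop_watts / mobile_watts;             // ~6x

    std::printf("6x the power: ~%.1fx clocks, or ~%.0fx cores (idealized)\n",
                freq_gain, core_gain);
}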

AMD blackout rig

 

cpu: ryzen 5 3600 @4.4ghz @1.35v

gpu: rx5700xt 2200mhz

ram: vengeance lpx c15 3200mhz

mobo: gigabyte b550 auros pro 

psu: cooler master mwe 650w

case: masterbox mbx520

fans:Noctua industrial 3000rpm x6

 

 


This topic is now closed to further replies.

