Nvidia announces better ARM support and new ARM CPU - x86 not the main player anymore

3 hours ago, LAwLz said:

universal binaries are not at all like UWP

Why not? They are both solutions to the very same use case and requirement. UWP is much wider in scope, but one of its purposes is to achieve the same thing, so it's completely fair to compare them.

 

[Edit]

While I will acknowledge the technologies themselves are totally different, and a UWP app is not a fat binary containing all ISA builds but only delivers the binary for the target platform, UWP itself means you don't have to code for each platform separately and can target them all, or any combination you want.

 

So I as the developer don't have to juggle different toolchains for different platforms and can build for any I choose, just as you would when building a Mac universal binary; the builds simply end up in separate packages and the UWP platform handles which one to deliver to the user.

 

As a user I don't have to care how it is handled: I can use the same application on my ARM device or my x86 device and it runs natively on both, so long as it's been built for both.

 

So I would say they are two very different solutions to the same requirement, and therefore fine to compare on that particular aspect.

[/Edit]
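
(As a concrete aside for anyone unfamiliar with what "a fat binary containing all ISA builds" means on the Mac side: a universal binary is a single file whose small header lists each architecture slice. Below is a minimal sketch in Python that just reads that header; the path at the bottom is a made-up example, any universal binary would do.)

```python
import struct

# cputype values from <mach/machine.h>: CPU_ARCH_ABI64 (0x01000000) ORed with the base type.
CPU_TYPES = {0x01000007: "x86_64", 0x0100000C: "arm64"}

def fat_archs(path):
    """List the architecture slices in a Mach-O universal ("fat") binary."""
    with open(path, "rb") as f:
        magic, count = struct.unpack(">II", f.read(8))  # fat header fields are big-endian
        if magic != 0xCAFEBABE:                         # classic FAT_MAGIC; a thin (single-arch) binary won't match
            return []
        archs = []
        for _ in range(count):
            cputype, _subtype, _offset, _size, _align = struct.unpack(">iiIII", f.read(20))
            archs.append(CPU_TYPES.get(cputype, hex(cputype)))
        return archs

# Hypothetical path, for illustration only.
print(fat_archs("/Applications/SomeApp.app/Contents/MacOS/SomeApp"))
```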

 

3 hours ago, LAwLz said:

Again, what are you on about? This has nothing to do with Rosetta 2

Yes it does. May I ask what Rosetta fundamentally allows? x86 on ARM, correct? So ISA translation, correct? Then how is UWP allowing iOS and Android ARM apps to run on x86 not the same thing, just in the reverse direction to Rosetta, like I said?

 

3 hours ago, LAwLz said:

Sounds to me like you're just blaming users for Microsoft making a shitty product.

They are not blameless. If something isn't good enough you have the option to work with the vendor to improve it, if you see that it is seeking to solve a real problem. And there is your issue right there: the Windows developer community just does not have any interest in something like UWP letting their application run across PC, console, AR devices, ARM, x86 etc. Most of them are perfectly happy creating either Win32 apps or web apps, which can run on anything anyway, so they have little reason to care about UWP allowing their application, which can already run on an ARM device, to run on an ARM device (wait... a solution for a non-existent problem).

 

That is why UWP largely failed: it's a solution to a problem that barely exists and that few have any interest in, because the consumer space is largely going SaaS and web applications or web-container applications (the app is just a browser shell).

11 minutes ago, leadeater said:

-snip-

Single-core performance (int) (under all-core load):

Xeon 8380 - 3.84

EPYC 7763 - 3.43

Altra Q80-33 - 3.23

 

Single-core performance (fp) (under all-core load):

Xeon 8380 - 4.11

EPYC 7763 - 3.32

Altra Q80-33 - 2.37

 

Multi-core performance (int):

Altra Q80-33 - 258.3

EPYC 7763 - 255.0

Xeon 8380 - 153.5

 

Multi-core performance (fp):

EPYC 7763 - 212.6

Altra Q80-33 - 189.3

Xeon 8380 - 164.3

 

Power consumption:

Xeon 8380 - ~260W

EPYC 7763 - ~220W

Altra Q80-33 - ~200W

 

Price:

Xeon 8380 - $8099

EPYC 7763 - $7890

Altra Q80-33 - $4050

 

 

Compared to EPYC, Altra offers around 83% (ST) and 95% (MT) of the performance, with about 10% lower power consumption at almost half the price.

If you can run your workloads on the Altra processor, it's by far the better buy than Intel or AMD.

Also, it's depressing how far behind Intel is.
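
(If anyone wants to check those percentages, here is the arithmetic from the figures above. The ~83% ST and ~95% MT numbers appear to be the average of the int and fp results; that reading is my assumption, not something stated explicitly.)

```python
# Figures quoted above: (ST int, ST fp, MT int, MT fp, power W, price USD)
specs = {
    "Xeon 8380":    (3.84, 4.11, 153.5, 164.3, 260, 8099),
    "EPYC 7763":    (3.43, 3.32, 255.0, 212.6, 220, 7890),
    "Altra Q80-33": (3.23, 2.37, 258.3, 189.3, 200, 4050),
}

altra, epyc = specs["Altra Q80-33"], specs["EPYC 7763"]
print(f"ST avg vs EPYC: {(altra[0]/epyc[0] + altra[1]/epyc[1]) / 2:.0%}")  # ~83%
print(f"MT avg vs EPYC: {(altra[2]/epyc[2] + altra[3]/epyc[3]) / 2:.0%}")  # ~95%
print(f"Power vs EPYC:  {altra[4] / epyc[4]:.0%}")                         # ~91%, i.e. ~10% lower
print(f"Price vs EPYC:  {altra[5] / epyc[5]:.0%}")                         # ~51%, about half
```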

4 minutes ago, leadeater said:

Why not, they are both solutions that can solve the very same use case and requirement. UWP is a lot wider scope but one of it's purposes is to achieve the same thing so it's completely fair to compare them.

 

Yes it does, may I ask you what Rosetta fundamentally allows? x86 on ARM correct? So ISA translation correct? So how is UWP allowing iOS and Android ARM apps to run on x86 not the same thing, but like I said in the reverse direction to Rosetta.

 

They are not blameless. If something isn't good enough you have the option to work with the vendor to do improve it, if you see that it seeking to provide a solution to a real problem. And there is your issue right there, the Windows developer community just do not have any interest in something like UWP to be able to run their application across PC, console, AR Device, ARM, x86 etc. Most of them are perfectly happy either creating Win32 Apps or Web Apps which can run on anything anyway so have little reason to care about UWP allowing their application which already can run on an ARM device to run on an ARM device (wait... a solution for a non existent problem).

 

That is why UWP largely failed, it's a solution for a problem that barely exists and few have any interest in because the consumer space is largely going SaaS and web application or web container applications (the App is just a browser shell).

What I think the user ideally wants is the ability to run an x86 app directly, with no OS change, and have things just work. The conversion methodology that does that better will simply win. I think chips are often faster than they need to be right now for a lot of things (but not everything), so there is a bit of leeway for doing that. It used to be that a chip had to be able to emulate faster than a native chip could run, but I'm not sure that is still the case, so there may be some room. Not a whole lot, though. It can't be too slow or too hard to code for. Fast enough to work well enough is fast enough to work well enough. This means really old stuff is fast enough on everything, because there was just so much less available performance back then. GTA 5 runs fine on a phone. If it's just word processing, video and web browsing, a conversion app will easily do. Things get demanding quickly after that, though. The thing about games is there are quantum levels: can you break 40 fps? Can you break 60 fps? Can you break 120 fps? It could be a situation where, say, AAA RPGs could be played but not competitive shooters.
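
(A toy way to put numbers on that "quantum levels" point; the 0.7 translation-overhead factor below is purely an assumption for illustration, not a measured figure.)

```python
# Which frame-rate tiers does a game still clear once binary translation eats some performance?
TIERS = (40, 60, 120)

def reachable_tiers(native_fps, translation_factor=0.7):
    translated_fps = native_fps * translation_factor
    return [tier for tier in TIERS if translated_fps >= tier]

print(reachable_tiers(90))   # 90 fps native -> ~63 fps translated: clears 40 and 60, misses 120
print(reachable_tiers(150))  # 150 fps native -> ~105 fps translated: still misses the 120 tier
```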

5 minutes ago, LAwLz said:

Single-core performance (int) (under all-core load):

Xeon 8380 - 3.84

EPYC 7763 - 3.43

Altra Q80-33 - 3.23

 

Single-core performance (fp) (under all-core load):

Xeon 8380 - 4.11

EPYC 7763 - 3.32

Altra Q80-33 - 2.37

 

Multi-core performance (int):

Altra Q80-33 - 258.3

EPYC 7763 - 255.0

Xeon 8380 - 153.5

 

Multi-core performance (fp):

EPYC 7763 - 212.6

Altra Q80-33 - 189.3

Xeon 8380 - 164.3

 

Power consumption:

Xeon 8380 - ~260W

EPYC 7763 - ~220W

Altra Q80-33 - ~200W

 

Price:

Xeon 8380 - $8099

EPYC 7763 - $7890

Altra Q80-33 - $4050

 

 

Compared to EPYC, Altra offers around 83% (ST) and 95% (MT) of the performance, with about 10% lower power consumption at almost half the price.

If you can run your workloads on the Altra processor, it's by far the better buy than Intel or AMD.

Also, it's depressing how much behind Intel is.

Not sure Intel will stay behind though. I don't know how much FTM is involved with Intel, but they keep promising and it's been getting worse, not better. They've got to come up with a real wolf soon or people will just walk away.

10 hours ago, Vishera said:

But they are in everyday regular laptops; desktops are a different story.

If you want an ARM laptop with Windows, there is the Microsoft Surface Pro X for example; there are also Chromebooks, and more.

I don't think most people would categorize the Surface Pro X and *some* Chromebooks as everyday laptops. Stop grasping at straws here.

9 hours ago, Bombastinator said:

Having a platform supplant another platform in its space has happened, but when it’s happened it’s been a very slow process.  Multiple years. Arguably decades in some instances.  There is a potential shortcut if one system is able to run the software of the other though. 

Even if it's slow, it's definitely faster than if Apple hadn't done anything. People have lost count of Microsoft's attempts at Windows on ARM.

8 hours ago, leadeater said:

No, they aren't lazy; there isn't an economic or business reason driving them to change out an already working toolset, and it would also require a lot of professional development to upskill employees (which isn't free), so for them there would be no gain at all.

I am speaking on a real-world basis here (no, I don't care about paper specs of some random upcoming chips, etc. If it's not available on the market today, it doesn't count). The M1 chips currently on the market pretty much have the best performance per watt. And all applications being ported natively to the M1 are seeing improvements in the range of 30% in performance along with efficiency, sometimes even compared to the previous Intel generation. Just today Parallels' native version was announced:

https://www.macrumors.com/2021/04/14/parallels-desktop-native-m1-support/

8 hours ago, leadeater said:

You have a very simplistic view of a very complex situation and also not realize that for the software ecosystem side of the equation switching has no direct benefit to them at all. Windows ecosystem is in at least, very minimum, a 3 way standoff with nobody willing to shoot first. Apple on the other hand was at most in a 2 way standoff but also had the much bigger gun.

Remember, I am not talking about the Microsoft SQ1 chip here. If a processor similar to Apple's M1 makes its way to Windows and outperforms whatever Intel and AMD have at the time (while also being more efficient), developers definitely have an incentive to make their apps available on the ARM machine, as consumers would be swayed a little by the advantages of the ARM machines (and yes, I am perfectly aware that Microsoft tried this already, but it had too many drawbacks that Apple gracefully handled, so I'm expecting something similar from Microsoft). And it's only going to increase as adoption increases.

8 hours ago, leadeater said:

So you are going to ignore the fact that there actually isn't a direct performance gain switching to ARM at all? What's inaccurate is saying ARM has greater performance to x86.

My previous two answers cover the same damn thing. Why are you asking the same question? Clearly we saw how good Apple's ARM architecture is. Eventually Qualcomm and whoever else is looking into ARM processors will catch up. And in the long term I do believe an ARM architecture can be designed to be much more efficient than typical x86 architectures, so ARM will eventually encroach on the ultrabook segment at the very least.

8 hours ago, leadeater said:

They did and literally everyone yelled and screamed and tossed their toys out and refused to adopt it, UWP.

UWP was and is trash. I think others have explained why. From a pure user point of view, using a UWP app makes you want to pull your hair out.

8 hours ago, leadeater said:

These are, after all, chalk-and-cheese design requirements, but as to the point: high-performance ARM has existed for a long time, even before these, and anybody could have designed and manufactured a high-performance mobile or desktop class product during that time if they so wished, but nobody did because there wasn't a market for it.

So what you are claiming is that ARM was just a golden goose for years and everyone knew about it, but people were like 'yeah, nope'? That is quite the dumbest argument I've heard. Yeah, you can keep cherry-picking specifications to highlight the point, but I'm sure those chips had to give up something crucial to consumer desktops/laptops for them not to be viable. And the implication here is that Qualcomm, you know, the non-Apple ARM manufacturer, is grossly incompetent, when apparently Fujitsu could've easily entered the mobile space and wrecked Qualcomm.

1 minute ago, LAwLz said:

Compared to EPYC, Altra offers around 83% (ST) and 95% (MT) of the performance, with about 10% lower power consumption at almost half the price.

If you can run your workloads on the Altra processor, it's by far the better buy than Intel or AMD.

Also, it's depressing how much behind Intel is.

None of those three prices are actually what you'd pay though, unless you went to a pure retail chain and purchased a single server. I'm actually in the process right now of getting quotes for a bunch of 7763, 75F3 and 7543 parts for different use cases; it'll be interesting to see how different the pricing is compared to list, not that I could disclose our pricing. We have never paid anywhere near $8k for 8380-class Intel products though.

 

Intel is giving deep discounts right now: a $2k difference less than 12 months apart for the exact same server (literally buying another of the same).

 

So I would say Intel's list price is the furthest from reality, with EPYC and Altra probably quite close to theirs; I doubt the margin differs much between those two.

 

Also, the Altra Q80-33 should average out to better than 10% lower power due to EPYC's aggressive boost clocking.

6 minutes ago, LAwLz said:

Also, it's depressing how much behind Intel is.

I just want to add that, despite all of this, they still have the highest-priced chips, not just the most expensive but the HIGHEST-priced chips, and they can get away with it because enterprises will buy them.

17 minutes ago, leadeater said:

Why not, they are both solutions that can solve the very same use case and requirement. UWP is a lot wider scope but one of it's purposes is to achieve the same thing so it's completely fair to compare them.

It's not fair to compare them because, like you said, UWP is much wider in scope and it doesn't even guarantee ISA independence.

There are a ton of UWP applications that only work on x86. 

 

I barely have any UWP programs installed and yet I have four that are x86 only: Python, Messenger, My phone and Citrix Workspace.

 

 

I don't think you can compare universal binaries to UWP, because universal binaries are one specific thing that does one specific thing, and does it really well.

UWP is an entire toolkit that does a billion different things, one of which is universal binaries, but that part is optional.

Appx vs universal binaries is a far better comparison.

 

 

23 minutes ago, leadeater said:

Yes it does, may I ask you what Rosetta fundamentally allows? x86 on ARM correct? So ISA translation correct? So how is UWP allowing iOS and Android ARM apps to run on x86 not the same thing, but like I said in the reverse direction to Rosetta.

Because Rosetta is about running programs on different ISAs.

UWP bridges (which are not the same as UWP) are about running programs on different OSes.

 

One translates x86 instructions to ARM instructions. The other translates system APIs to other system APIs.

And as a side note, like I said earlier, they scrapped the Android bridge.
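
(A toy contrast of those two ideas, purely illustrative and not how Rosetta or the UWP bridges actually work; every name below is made up.)

```python
# ISA translation: rewrite instructions from one instruction set to another.
X86_TO_ARM = {"mov": "mov", "add": "add", "imul": "mul", "jmp": "b"}

def translate_instructions(x86_ops):
    # x86_ops is a list of (opcode, operands) tuples in the source ISA
    return [(X86_TO_ARM[op], operands) for op, operands in x86_ops]

# API bridging: keep the program's own code, but forward foreign OS calls to native ones.
def native_show_dialog(text):
    print(f"[host dialog] {text}")  # stand-in for a host OS API

ANDROID_TO_HOST = {"Toast.show": native_show_dialog}  # hypothetical mapping

def bridge_api_call(name, *args):
    return ANDROID_TO_HOST[name](*args)

print(translate_instructions([("mov", ("r0", 1)), ("imul", ("r0", "r1"))]))
bridge_api_call("Toast.show", "Hello from a bridged app")
```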

 

 

26 minutes ago, leadeater said:

That is why UWP largely failed, it's a solution for a problem that barely exists and few have any interest in because the consumer space is largely going SaaS and web application or web container applications (the App is just a browser shell).

That I can agree with.

UWP was a solution to a problem that didn't exist, on top of being garbage in many other ways: heavy restrictions on what programs could do, restrictions on which languages and APIs you could develop with, limiting you to the Store (at first) where Microsoft controlled distribution, etc.

1 hour ago, LAwLz said:

Grace will probably be based on N2 (not released yet). What I am interested in is how much better N2 is compared to N1.

ARM's own figures claim ~50% higher IPC, which would be fantastic, but that remains to be seen.

It'd be cool indeed, but Grace isn't meant to be a fast CPU anyway; it's just meant to be a CPU with tons of NVLink in order not to bottleneck the GPUs. In the scenarios where such a server would be deployed, the raw CPU performance will probably be meaningless.

35 minutes ago, RedRound2 said:

I am speaking on a real-world basis here (no, I don't care about paper specs of some random upcoming chips, etc. If it's not available on the market today, it doesn't count). The M1 chips currently on the market pretty much have the best performance per watt. And all applications being ported natively to the M1 are seeing improvements in the range of 30% in performance along with efficiency, sometimes even compared to the previous Intel generation. Just today Parallels' native version was announced:

https://www.macrumors.com/2021/04/14/parallels-desktop-native-m1-support/

I was exclusively talking about ARM CPUs on the market today; you can buy A64FX and Q80-33 servers right now if you want.

 

SPECint2017Rate N

7763: 255 (220W) 255/220 = 1.159

Q80-33: 258.3 (200W) 258.3/200 = 1.2915

8380: 167.6 (260W) 167.6/260 = 0.644

M1: 28.85 (26.8W) 28.85/26.8 = 1.076

 

The M1 is only third, behind the Altra Q80-33 and the AMD EPYC 7763.

 

SPECfp2017Rate N

7763: 212.6 (220W) 212.6/220 = 0.966

Q80-33: 189.3 (200W) 189.3/200 =  0.946

8380: 164.3 (260W) 164.3/260 =  0.632

M1: 38.71 (26.8W) 38.71/26.8 =  1.444

 

M1 is first here, by a significant amount, which makes sense as Apple went with a beefy FP architecture design and it's paying off well. Sadly I cannot get you the performance figures for the A64FX, but I can tell you its power is 188W and it gets 1.56x the performance of the Intel Xeon, so it's very similar to the M1 in efficiency but on TSMC 7nm, not 5nm.
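
(Same inputs as above, just the division spelled out as a throwaway sketch; it reproduces the SPECrate-per-watt ratios quoted in this post.)

```python
# name: (SPECint2017 Rate, SPECfp2017 Rate, package power in watts), as quoted above
results = {
    "EPYC 7763":    (255.0, 212.6, 220),
    "Altra Q80-33": (258.3, 189.3, 200),
    "Xeon 8380":    (167.6, 164.3, 260),
    "Apple M1":     (28.85, 38.71, 26.8),
}

for name, (sint, sfp, watts) in results.items():
    print(f"{name:12s}  int/W = {sint / watts:.3f}   fp/W = {sfp / watts:.3f}")
```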

 

Also, I'm not allowed to mention future upcoming CPUs? You know you can also talk about future Apple CPUs/SoCs, which will be even more efficient, with more performance cores built from the same elements that exist in the M1 right now.

26 minutes ago, LAwLz said:

I don't think you can compare universal binaries to UWP because universal binaries is one specific thing that only does one specific thing, and it does it really well.

Yet it can have the same failing you experienced: if the universal binary does not include a build for your architecture then, well... you're basically SOL without binary translation. You shouldn't encounter that, as Apple's developer guidance/policy is to include all architectures, correct? I can think of a solution to your issue on the Microsoft side then.

1 hour ago, RedRound2 said:

Why are you asking the same question.

Because when you quote performance increases like 30% from running under Rosetta 2 to native ARM, that has literally zero bearing on the performance of x86 CPUs on the market. That performance gain is irrelevant; you should be talking about native vs native, and on the macOS side you only had a selection of CPUs that were mostly poor performers or vastly inferior compared to AMD products, or even to different ones from Intel.

 

If you don't apply an unnecessary handicap to the comparison, the M1 is not as much better as you're thinking. It's still excellent, but that's not because it's ARM; it's because Apple has great engineers and access to TSMC 5nm, whereas others only have one of those two things.

 

Now I have shown you the performance of some of the best ARM CPUs on the market today and how similar they are to AMD EPYC 7003. There is no clear win for any of these ARM 7nm products other than the A64FX in FP workloads. ARM is merely competitive, and that simply is not enough to push the Windows ecosystem away from x86 for essentially nothing. ARM has its place even in the consumer space and even for Windows, but it's not the be-all and end-all and has no specific advantage in performance or efficiency; those all come from architectural design choices that you can equally apply to ARM or x86.

1 hour ago, RedRound2 said:

So what you are claiming is that ARM was just a golden goose for years and everyone knew about it, but people like were yeah, nope? That is quite the dumbest argument I've heard. Yeah you can keep cherry picking specifications to highlight the point but Im sure those chips had to give up something that was crucial to consumer desktop/laptop for it to not be viable. And the implication here is that Qualcomm, you know the non Apple ARM manufacturer, is grossly incompetent, when apparently Fujitsu could've easily entered the mobile space and wrecked Qualcomm.

I was talking mobile as in laptops etc.; the discussion at no point has been about phones. Zero cherry-picking used. If any company had wanted to make an ARM CPU competitive with Ryzen Mobile U or H/HS or Intel mobile parts then they could have, but like I said there simply wasn't a market or demand for it, so nobody did, or will until that changes.

 

Tell me again how good Windows on ARM is (the product)? Yea thought so.

 

Also I'm not the one trying to imply that being merely competitive is enough to drive a complete industry shift.

On 4/15/2021 at 9:36 PM, leadeater said:

Because when you quote performance increases like 30% from running under Rosetta 2 to native ARM that has literally zero baring on the performance of x86 CPUs on the market. This performance gain is irrelevant, you should be talking about native vs native and on the Mac OS side you only have a selection of mostly poor performance or vastly inferior compared to AMD products or even different ones from Intel.

 

If you don't apply an unnecessary handicap to the comparison the M1 is not as much better as you're thinking. It's still excellent but that's not because it's ARM it's because Apple has great engineers and access to TSMC 5nm where as others only have 1 of those two things.

 

Now I have shown you the performance of some of the best ARM CPUs on the market today  and how similar they are to AMD EPYC 7003. There is no clear win for any of these ARM 7nm products other than the A64FX in FP workloads. ARM is merely competitive and that simply is not enough to push the Windows ecosystem away from x86 for essentially nothing. ARM has it place even in the consumer space and even for Windows but it's not the be all and end all and has no specific advantage in performance or performance efficiency, these all come from architecture design choices that you can equally apply to ARM or x86.

I don't remember the context anymore, but I think it was quite evident that the M1 chips ran many applications faster under Rosetta 2 compared to the previous-gen MBA/MBP processors. Heck, most people on Twitter are still losing their minds over how the M1 can outperform a spec'd-out 16" MBP in some tasks. And the 5nm advantage isn't really as much as you make it out to be (15% speed improvement or 30% lower power consumption; Apple's implementation has to be somewhere between the two).

 

Let me spell out in simple terms why ARM is desirable to consumers (please don't bother bringing up server-grade CPUs and professional software that most people don't run as examples in your rebuttal). More types of form factors, thinner, lighter devices, the possibility of integrating a modem, all while remaining powerful. The only hurdle is software compatibility. Given most people do their stuff in a web browser, it isn't as big a hurdle as you make it out to be. But it still is one. And slowly, with time, the consumer space will migrate unless x86 can suddenly match the M1 in speed and efficiency.

On 4/15/2021 at 9:42 PM, leadeater said:

I was talking mobile as in laptops etc, the discussion at no point has been about phones. Zero cherry picking used. If any company wanted to make a competitive ARM CPU to Ryzen Mobile U or H/HS or Intel Mobile then they could have, but like I said there simply wasn't a market for it or a demand so nobody did it, or will until that changes.

 

Tell me again how good Windows on ARM is (the product)? Yea thought so.

 

Also I'm not the one trying to imply that being merely competitive is enough to drive a complete industry shift.

When did I talk about phones? I just asked you whether the whole industry knew ARM's potential. And clearly they didn't, because the M1's performance was a surprise to everybody. And in your reply you claim that anyone could do it. So why didn't they? Qualcomm and Microsoft did it before Apple, but their SQ1 was and is garbage. On one hand you praise Apple's engineers for developing the M1, and on the other hand you claim that anyone could make an ARM CPU as capable as the M1 but nobody did. Having such a technology, one that legitimately made people reconsider ARM's viability on the desktop (and for Apple users an obvious next step into next-gen computing), but deciding to leave it under the rug because there's no software support yet would probably be the most boneheaded decision one could make for their own company.

On 4/13/2021 at 9:55 AM, RedRound2 said:

It's funny how, like 2-3 years ago, people were swearing that ARM PCs were never going to be a thing. CPUs have just become really interesting, and it's gotten even more spiced up after the M1 along with ARM's big entrance. Pretty excited to see what's coming in the next 5 years in the CPU space.

This really has nothing to do with Apple; ARM in the server space is nothing new and it makes a lot of sense for Nvidia to make their own CPUs right now. They can't make them x86, sooooo...

1 hour ago, RedRound2 said:

I don't remember the context anymore, but I think it was quite evident that the M1 chips ran many applications faster under Rosetta 2 compared to the previous-gen MBA/MBP processors

Yes, that is correct. That doesn't change the issue I pointed out to you: those CPUs in those Apple products are vastly slower than options on the market today, and even back then. Though to be fair, some of those CPU products back then were too high a power class to really consider viable for those Apple devices, and in addition to that, Apple wasn't going to switch to AMD Ryzen Mobile either, with their own SoCs planned.

 

So I think you missed the point again: if you look at the actually competitive x86 mobile CPUs on the market today, the M1 is merely competitive, with some strong points that make it very appealing. That's not enough to drive any industry-wide change away from x86.

 

1 hour ago, RedRound2 said:

And in your reply you claim that anyone could do it. So why didn't they?

I told you why: no market for it. Yes, Qualcomm and Microsoft tried and it failed big time, and it wasn't solely because the performance of the SoC was bad; Windows' ARM implementation is outright horrible, even today it's still horrible, and that edition of Windows isn't even feature-comparable with, I think, even the Home edition. Nobody is going to spend a bunch of money developing hardware for a platform that has clearly demonstrated issues and no interest from consumers.

 

Developing an SoC is expensive; where there isn't a clear market for something you don't go spending any more than you need to. Apple knew how big their market was and knew the demand; they are in control of the entire ecosystem, so for them there is justification to spend as much as required to ensure success, as there is a proven market for it, their intention being to use it in their own products.

 

Apple's and Microsoft's ARM situations are quite different. If you rewound time and were Qualcomm, would you have invested as much as Apple did in a partnership with Microsoft? Would you trust that an ARM edition of Windows would be good? Would you trust that the Windows community and developers would get behind it? I wouldn't have, and I don't now.

 

1 hour ago, RedRound2 said:

on the other hand you claim that anyone could make an ARM CPU as capable as the M1 but nobody did

Yes, people have. I've literally shown you at least three of them, and I can think of two more and am likely missing some. They are all server CPUs, but they are still ARM, they are still high performance, and some came well before the M1. Your unawareness of them doesn't mean they didn't exist. They may not be applicable to the consumer market, but you said the M1 proved something, yet it was already proven by other ARM CPUs; it's just that you and others never noticed, and I certainly don't blame you for that either.

 

1 hour ago, RedRound2 said:

consumer space will migrate unless x86 can suddenly match up in speed and efficiency with the M1

Already happened: Ryzen Mobile 5000 U series. The M1 is still a bit more efficient, obviously, since it's TSMC 5nm vs 7nm, but that's got nothing to do with "x86".

 

I think you have typical Apple user syndrome, only looking within the Apple ecosystem, like your comparisons to the old Intel Mac products as if those were the best CPU products on the market. Like, I get it, that's what is important to you and what affects you, but there are other things outside of what Apple was/is using.

 

As far as consumers are concerned, in the wider meaning, none of them will care or be aware that the M1 is an ARM architecture. To them it's the Apple M1 and all they know is Apple made it themselves. Few are going to realize it shares heritage with the SoC in their phones (iOS, a bit more literally, or Android).

3 hours ago, leadeater said:

-snip-

I agree with most of what you said, but I disagree with this:

3 hours ago, leadeater said:

Already happened, Ryzen Mobile 5000 U series. M1 still a bit more efficient, obviously since TSMC 5nm vs 7nm but that's got nothing to do with "x86".

I am not sure if that's your intention or not, but it sounds like you're saying if Ryzen mobile was on TSMC 5nm, it would be as efficient as the M1.

I don't think that's the case. I would be surprised if a Zen 3 core could match the performance (core for core) of the M1 at the same power consumption, even if they were on the same node.

Not to mention the M1 has some very big benefits as well, such as ridiculously fast memory and stupidly fast execution of atomic operations.

I've seen numbers showing ARC being 5-8 times faster on the M1 than on the Intel Macs, and that probably relies on some clever tricks using ARM's more relaxed memory ordering.

2 hours ago, LAwLz said:

I am not sure if that's your intention or not, but it sounds like you're saying if Ryzen mobile was on 5000, it would be as efficient as the M1.

It is on 5000? Or do you mean TSMC 5nm? Not sure, since Ryzen Mobile 5000 H/HS has been out for a while and also U (reviews are just starting to show up now), I'm going to assume you mean TSMC 5nm.

 

No, the M1 would still be more efficient, both with respect to the CPU cores and the memory controller. Firestorm cores can do the same or a similar number of instructions per cycle at lower power (much more so in FP) than Zen 3 can, talking total vs total (what I mean is if you take a fixed number of instructions and compare, i.e. 100 vs 100). The statement itself is very rough, as operating frequency and power limits are part of it, but purely at an architectural and academic level the Firestorm cores are more efficient over the same amount of work, because the M1 only needs X cores whereas Zen 3 needs 2*X cores.

 

Then, since the M1 is big.LITTLE, it can leverage those Icestorm cores for lighter tasks/threads, which are very low power, much more so than a Zen 3 core is at a minimal light load, so it'll gain efficiency over a Zen 3 based SoC there too.

 

So if you compared on a common TSMC 5nm node, the M1 should be more efficient than Zen 3. Zen 4, who knows, but there will be a new generation of Firestorm and Icestorm by then anyway. Just theory, of course.

 

Ryzen Mobile 5000U has both 15W and 25W operating modes, and the 15W mode is, I think, a little less SoC package power than the M1, but at lower performance, even for the 5800U (8 cores vs 4+4). The 5800U in 25W mode is actually really good, but still less than H/HS obviously, so you can compare those higher-power parts with the M1, as has been done, to see that outside of really heavily parallel threaded workloads the M1 is faster (at 15W, 25W, 35W or 45W). There are cases where Ryzen Mobile 5000 U/H/HS does better, but overall the M1 is better and would stay better in the same areas if Zen 3 were on TSMC 5nm; all that would happen is that where Zen 3 is better it would get better, and where it's not, there would only be a slight improvement.

 

Plus the GPU in the M1 is killer, so yeah, probably 90% of the time you're better off with the M1. The only time I really load up all my CPU cores on my desktop is running BOINC, which is not exactly widely applicable.

2 hours ago, leadeater said:

Yes that is correct, that doesn't change the issue I pointed out to you that those CPUs in those Apple products are vastly slower than options on the market today and even back then. Though to be fair some of those back then CPU products were to high power class to really consider as viable for those Apple devices and additional to that Apple wasn't going to switch to AMD Ryzen Mobile either with their own SoC's planned.

They were slower by how much? The previous gen was obviously a lot slower, so that made the M1 impressive at the time. And even if you argue that Apple nuked performance on their laptops due to thermal constraints, it is not like the M1 requires more TDP; in fact it requires a lot less than the previous gen and the current gen. I think you keep forgetting that the M1 is the lowest-end chip, competing with the current generation of higher-end SKUs from Intel and AMD. I have yet to see a fanless laptop design with similar performance and battery life to the M1. Yeah, Intel probably wasn't making the best chips of yesteryear, but it is not like AMD is so far ahead that Intel is a joke.

2 hours ago, leadeater said:

So I think you missed the point again, if you look at the actually completive x86 mobile CPUs on the market today the M1 is only competitive in the market with some strong points that make it very appealing. That's not enough to drive any industry wide change away from x86.

 

I told you why, no market for it. Yes Qualcomm and Microsoft tried and it failed big time and it wasn't solely because the performance of the SoC was bad, Windows ARM implementation is outright horrible and even today it's still horrible and that edition of Windows isn't even feature comparable with I think even Home edition. Nobody is going to spend a bunch of money developing hardware for a platform that has clearly demonstrated issues and no interest from consumers.

 

Developing a SoC is expensive, where there isn't a clear market for something you don't going spending any more than you need to. Apple knew how big their market was and knew the demand, they are in control of the entire ecosystem so for them there is justification to spend as much as required to ensure success as there is a proven market for it, their intention to use it on their own products.

Nah, the consumer market is largely made up of people who do not give a shit about specs or architectures or x86 or ARM. If you actually provide them with a usable product, it will eventually find its way into the mainstream market. Case in point: Chromebooks. An actually competitive Windows tablet with a good ARM processor would be more than enough for Microsoft to pretty much destroy the Chromebook market.

 

And as I argued earlier, which you promptly ignored, M1 laptops so far have better battery life and thermals, with performance close to what you expect out of a real computer. And according to Qualcomm and Microsoft, the M1 proved ARM's viability in the consumer space, and they just couldn't do it before because they didn't know how to make a good processor.

 

Yes, Apple has the advantage, so that's why Apple can make sweeping changes like this quickly and smoothly. But where credit is due for them is the software, cohesiveness, ecosystem, tools etc., while anyone, including Apple, was equally capable of making a competitive ARM chip.

2 hours ago, leadeater said:

Apple and Microsoft ARM situations are quite different. If you rewound time and were Qualcomm would you have invested as much as Apple did with a partnership with Microsoft? Would you trust that an ARM editions of Windows would be good, would you trust that the Windows community and developers would get behind it? I wouldn't of and I don't now.

All these points are you reiterating the same thing. Chromebooks have pretty much zero software, yet they are the best-selling laptops now. Get rid of your notion that all PC users are extreme gamers or hardcore network admins, etc.

2 hours ago, leadeater said:

Yes people have, I've literally showed you at least 3 of them and I can think of 2 more and I'm likely missing some. They are all server CPUs but they are still ARM and they are still high performance and some came well before M1. Your unawareness of them doesn't mean they didn't exist. They may not be applicable to the consumer market but you said the M1 proved something, yet it was already proven by other ARM CPUs it's just you and other never noticed and I certainly don't blame you for that either.

No, I don't know enough about these server CPUs to verify anything, nor do I have the time to go into it, but weren't you one of the people who were extremely skeptical of the M1 in the week right after it was announced, before the reviews? If you weren't like the 99% of others in the thread claiming that the M1 is a nothingburger, then fine, I'll believe you. You knew what was coming. But if back then you took ARM as a joke and now you're all of a sudden claiming that "brr, these are nothing new or impressive and it has been done before", it would be a direct contradiction, and the CPUs you keep quoting and cherry-picking from have some severe drawbacks and limitations that make them unviable for the consumer space.

2 hours ago, leadeater said:

I think you have a typical Apple user syndrome, only looking within the Apple ecosystem like your comparisons to the old Intel Mac products as if those were the best CPU products on the market. Like I get it that's what is important to you and what affects you, but there are other things outside of what Apple was/is using.

Nobody said that. The way to measure performance objectively is to run the previous gen and the current gen under the same constraints. Saying "it had more potential" means nothing when freeing that potential just fundamentally breaks the end goal of the final product. And the M1-equipped Macs have even bigger design constraints than the previous-gen Macs, so I don't get your point.

2 hours ago, leadeater said:

As far as consumers are concerned, the wider meaning, none of them will care or be aware that the M1 is an ARM architecture. To them it's the Apple M1 and all they know is Apple made it themselves. Few are going to realize it shares heritage to the SoC in their phones (iOS (a bit more literally) or Android).

Didn't Apple make the A-series chips? The competition has lagged behind Apple for many years when it comes to phone SoCs, so again, I'm not sure how Qualcomm chips somehow contributed to the M1's existence. Yes, the M1 is a product of Apple, and everyone including Intel knows that Apple in the CPU business is a threat. So there is credit where credit is due.

8 hours ago, RedRound2 said:

They were slower by how much?

A lot. Have you even looked at any Ryzen Mobile reviews for the 3000 series and newer? As for Intel's own CPUs, the fastest ones were too high a power class to put in a MacBook, but that's on purpose from Intel as they are specifically for very high-end gaming laptops with OC capability. If you're going to talk about performance improvements and generalize them the way you have been, i.e. to all of x86, then you must actually do it on the basis of what is best on offer from that architecture, not just from the limited selection that Apple happens to use; doing that is invalid.

 

8 hours ago, RedRound2 said:

And even if you argue that Apple nuked performance on their laptops due to thermal constraints, it is not like the M1 requires more TDP, in fact a lot less than previous gen and current gen

I'm not saying that at all; something like the Intel 10980HK just really isn't for an Apple laptop. Intel CPUs over time were simply putting out more heat, and while I have argued, and still do, that Apple wasn't properly accommodating that by continuing to make their laptops thinner, which does affect cooling, this is unrelated to what x86 mobile CPUs are actually available on the market. The market is bigger than what Apple was using, but you're only making the comparison using that, which, as I'll continue to point out, is flawed.

 

If you only want to talk about generational improvements that affect Apple products, sure enough, but don't use that to generalize about all of x86.

 

8 hours ago, RedRound2 said:

And as I argued earlier, which you promptly ignored, M1 laptops so far have better battery life, thermals, with performance close to what you expect out a real computer. And according to Qualcomm and Microsoft, M1 proved ARM's viability in consumer space and they just couldn't do it before because they didnt know how to make a good processor.

I didn't ignore it; you barely even talked about it, and yet again Ryzen Mobile offers massive improvements in all those areas. This is looking a lot like you haven't ever looked at any Ryzen Mobile reviews, which is going to make this discussion extremely futile.

 

8 hours ago, RedRound2 said:

No, I don't know enough about these server CPUs to verify anything, nor do I have the time to go into it, but weren't you one of the people who were extremely skeptical of the M1 in the week right after it was announced, before the reviews?

The only disparaging comment I ever made before the official release of the M1 was that I'll only believe it when I see it (at least that I remember, anyway). I simply do not ever take rumors from places like WCCF, or any similar place, at face value. Rumors are rumors, not facts or real things. There is more to it than just the M1 SoC, so without any of the other things Apple did to make the M1 a success in their products, there wasn't anything more to say than "I'll believe it when I see it".

8 hours ago, RedRound2 said:

Nobody said that. The way to measure performance objectively is to run the previous gen and the current gen under the same constraints. Saying "it had more potential" means nothing when freeing that potential just fundamentally breaks the end goal of the final product. And the M1-equipped Macs have even bigger design constraints than the previous-gen Macs, so I don't get your point.

Well then, let me spell it out very clearly for you: you made comments about ALL of x86 by using generational improvements of ONLY MacBook products, which is TOTALLY invalid, as simply and succinctly as that. I don't know how to spell it out more simply for you.

 

How about you drop that sweeping of all of x86 into a specific product line from Apple, and then the issue goes away. MacBooks do not and have never represented the entire x86 mobile CPU market.

 

Apple M1 MacBook % improvement over Apple Intel MacBooks != how much better the M1 is than all x86 mobile CPUs ever. This is especially problematic when the best x86 mobile CPUs are made by AMD, not Intel.

7 hours ago, leadeater said:

A lot, have you even looked at any Ryzen Mobile reviews for 3000 series and newer? As for Intel's own CPUs the fastest ones were too high power class to put in a MacBook but that's on purpose from Intel as they are specifically for very high end gaming laptops with OC capability. If you're just going to talk about performance improvements and generalize it such as you have been i.e. your all x86 then you must actually do it on the basis of what is best on offer from that architecture not just from a limited selection that Apple just happens to use, doing that is invalid.

 

I'm not saying that at all, something like the Intel 10980HK just really isn't for an Apple laptop. Intel CPUs over time were just simply putting out more heat and while I have argued and still do that Apple wasn't properly accommodating that by continuing to make their laptops thinner which does affect cooling this is unrelated to what the actual possible x86 mobile CPUs are on the market. The market is bigger than what Apple was using but you're only making the comparison using that which as I'll continue to point out is flawed.

 

If you only want to talk about generational improvements that affect Apple product sure enough, but don't use that to generalize all of x86.

 

I didn't ignore it, you barely even talked about it and yet again Ryzen Mobile offers massive improvements in all those areas. This is looking a lot like you haven't looked at any Ryzen Mobile reviews ever which is going to make this discussion extremely futile.

 

7 hours ago, leadeater said:

Well then let me spell it out very clearly for you, you made comments directly about ALL of x86 by using generational improvements of ONLY MacBook products which is TOTALLY invalid, simply and succinctly as that. I don't know how to spell it out more simply for you.

 

How about you drop that sweeping of all x86 into a specific product line from Apple and then the issue goes away. MacBooks do not and have never represented the entire x86 mobile CPU market.

 

Apple M1 MacBook % improvements over Apple Intel MacBooks != how much better M1 is to all x86 mobile CPUs ever. This is especially problematic when the best x86 mobile CPUs are made by AMD not Intel.

The basic summary of all these paragraphs you repeated is that x86 is more powerful, and Apple just didn't use the powerful SKUs in their designs. Right?

 

Well, I don't know what kind of mental gymnastics you went through to ignore my point about the design constraints. I have never argued that current ARM chips are just a whole lot better than x86. Of course not. The M1 chip in the iPad Pro cannot compete with a Ryzen 5950X. I may have stated that the prospects for future chips, based on what we've seen, are very promising. Let me dumb this down for you: putting a 5950X in an iPad Pro is not possible in regards to power and thermal capacity. Similarly, you can't put an overclockable laptop chip into a thin chassis the way you do with the M1 in a MacBook Air. How difficult is this for you to comprehend? We always talk about similar-class performance. Nobody in their right mind compares an Intel U-series chip with H or HK series chips.

 

My whole argument is that the comparable x86 CPUs that could be placed in a chassis like the MBA (and even the iPad Pro now) without overheating to death were grossly uncompetitive with the M1 in terms of power and efficiency (don't keep bringing up TSMC 5nm as the only reason for the efficiency; it's so stupid if you actually think that). Even the current-gen ones competing with the M1 are high-end SKUs of ultrabook-class x86 processors that need a fan to operate, with nowhere near the battery life of the M1.

 

And about Ryzen: yes, they are better than Intel chips, I agree. But are they as good as the M1? I have yet to see a fanless Ryzen design performing about the same. And the M1 has already shown us that it doesn't need a fan unless it's running a sustained workload for more than 15 minutes (which doesn't really cover the use case of 90% of the general users these devices are targeted towards).

 

You have changed your whole argument at this point from whatever we started with. At first you were like "the M1 isn't anything impressive and we had server CPUs before, but nobody did anything because there wasn't a market for it". Clearly that is demonstrably false, since what people actually care about is usability, battery life and cost, as shown to us by the ever-popular Chromebook, all of which and more can be achieved with an ARM processor compared to a similar-grade x86 one.

All these server-grade CPU manufacturers could've easily partnered up with Google, Microsoft or Samsung to make an Apple-level device and crush Qualcomm's monopoly in the process, but they didn't. And the rabbit hole of "if they did this, then they'd do this and that" you keep describing to scale down a 200W chip (or whatever you keep talking about) to consumer devices just magnifies the unlikeliness, because you keep brushing away the technical hurdles of the process like it's a simple math problem. And seriously, own up to what you couldn't argue back on by acknowledging it, rather than promptly ignoring it and twisting my words to mean something convenient for you in an attempt to stretch our conversation past the point where it just starts being a waste of time.

1 hour ago, RedRound2 said:

Basic summary of all these paragraphs you repeated were that x86 is more powerful, and Apple just didn't use the powerful SKUs in their designs. Right?

Ahh no, not at all. Nowhere did I say it was "more powerful". I said your sweeping statements about how much better the M1 is, and about an ARM revolution, are wrong when you exclusively reference the performance increase from the last-generation Intel Mac products to the current-generation M1 Mac products.

 

The issue still stands that doing that is deeply flawed, flawed to the degree of comparing only basic sports cars to a Tesla Model S P100D and then saying that all petrol-powered cars are slower than the Tesla. There are hundreds of cars faster than the Tesla; there aren't hundreds of CPUs faster than the M1, but the number is non-zero. And for the mobile market, the Ryzen 5000 U series in both 15W and 25W modes is vastly better than the Intel CPUs in the previous-generation Intel Macs.

 

BS numbers here, because at this point I can't be bothered getting the real figures, but if the M1 is 2x better than the last-gen Intel MacBooks and the Ryzen Mobile 5000 series is 1.9x better, then how much better is the M1 than the Ryzen Mobile 5000? Is it 2x, like you've been saying? Nope.

 

1 hour ago, RedRound2 said:

At first you were like M1 isn't anything impressive and we had server CPUs before, but nobody did anything when "because there wasn't a market for it"

Sighhh, this is exceedingly tiring. Please stop making things up. Is it really that hard? You said the M1 proved ARM can deliver powerful CPUs; I said no, server ARM CPUs have already proven that. I didn't say it wasn't impressive; I said it isn't as impressive as some people, such as you, are trying to make out. "Not as" != "not". You seem to have consistent difficulty understanding the concept of degrees, the degree of how much better something is or isn't, for example.

 

1 hour ago, RedRound2 said:

ARM processor compared to a similar-grade x86

Ryzen Mobile 5000 U

 

Hopefully that was big enough for you to not miss it for like the 100th time.

 

1 hour ago, RedRound2 said:

if they did this, then they'd do this and that you keep describing to scale down a 200W chip (or whatever you keep talking about) to consumer devices just magnifies the unlikeliness, because you keep brushing away the technical hurdles of the process like it's a simple math problem.

Your assumptions about how hard it is to scale down a server CPU don't make you correct. These server CPUs are in fact scaled up from the reference ARM architecture, with the cores themselves largely unchanged; the one with significant changes is the A64FX, and those changes are now in the ARM reference architecture. They are powerful because they have 80 cores, the same cores as in your (well, not your) Android phone. Yeah, sure, they didn't put USB controllers or WiFi into the SoC, but that is only because they're not needed in a server, not because it's technically hard, not at all.

 

You're doing 1 + 1 = 3; your math is wrong, not mine.

 

You accept that you are not well versed in the server market and server CPUs, and yet I actually am, it's literally my job, and you want to treat your assumptions as more correct than my real knowledge? I sincerely hope you don't go to WebMD rather than to a doctor.

9 hours ago, RedRound2 said:

Well, I don't know what kind of mental gymnastics you went through to ignore my point about the design constraints. I have never argued that current ARM chips are just a whole lot better than x86. Of course not. The M1 chip on the iPad Pro cannot compete with a Ryzen 5950X. I may have stated that the prospects for future chips based on what we've seen is very promising.  Let me dumb this down for you, putting a 5950X on an iPad pro is not possible in regards to power and thermal capacity. So similarly, you can't put an overclockable laptop chips into a thin chassis like how you do with the M1 on a Macbook Air. How difficult is this for you to comprehend? We always talk about similar class performance. Nobody in their right mind compares and Intel U series chip with H or HK series of chips

So basically, TL;DR, you're still going to ignore the Ryzen Mobile 5000 U series and compare to desktop CPUs? Could you please actually listen to anything at all I've said and look at the exact series of CPUs I've pointed you to. Unless you talk about Ryzen Mobile, this conversation is over.

 

Because yes, you can do a fanless Ryzen Mobile U design, if one chooses to. The 15W TDP is lower than the M1's SoC package power is, or appears to be; measuring that as accurately as you can, the Ryzen is not as good.

 

If you think I've been changing my argument or twisting words, that's because I'm responding to things you bring up, unless you don't want me to actually address the points you make. So if you're done making illogical leaps about things I've said, the discussion will get a whole lot simpler.

11 hours ago, leadeater said:

 

You accept that you are not well versed in the server market and server CPUs and yet I actually am, literally my job, you want to treat your assumptions as more correct than my real knowledge? I sincerely hope you don't go to WebMD rather than to a doctor.

I think you got baited there. This is a phenomenon I see on nerd-oriented sites a lot, where two people will argue about some fact that neither has direct experience with, and instead try to use their existing position somewhere as proof of fact. You are not a chip engineer, Red is not a chip engineer, I am not a chip engineer.

 

We are all arguing from what our experience covers.

 

https://www.cpubenchmark.net/power_performance.html

 

There are 6 Ryzen U parts that perform better than the M1. Intel's Y parts are basically garbage, and laptops you find with them feel exactly like you expect an underpowered laptop to feel: long loading times even off an SSD, Windows Update taking literally 8 hours for minor updates, versus the H parts taking seconds to boot and load apps and several minutes to run Windows Update.

 

[screenshot of the cpubenchmark.net CPU comparison table]

That M1 ARM CPU is clocked 50% higher than the Ryzen part (which performs better than the M1 at the same TDP), has a higher single-thread rating than the highest-performing Ryzen part (shown above), and both the Ryzen and the M1 handily beat the Intel Y part into the ground. The Intel U part, which is also 15W, is 17th on the list, two places after the Y part.

 

What's the common denominator between the Ryzen and the M1? They are smaller-node designs, with that Ryzen part at nearly double the CPU Mark because it has double the cores.

 

From experience, Intel's Y parts are completely unusable, and I've only seen them in convertible tablets, which, fair enough, maybe is what they were intended for this time, but they are so lacking in performance that there is no point in redeploying them; nobody wants them, and they aren't even available to purchase. The least performant laptops people use at the office are U parts with 16GB of RAM, and I need to second-guess what they're being used for, otherwise some poor CAD engineer is going to take one to do a presentation to the client and have nothing but problems. But if it's just people going out and recording numbers from holes in the ground, that doesn't require a 15" laptop with a GPU. Heck, they could do that on an iPad if only their engineering tools could run on one.

 

All Intel 15W U parts have fans in them, at least the ones I've seen that haven't been pure rubbish. The M1 in the Air doesn't have a fan, but it also pays the price for it, the same as the iPad Pro and iPhone do. They get hot, likely unbearably hot. That also has consequences for other parts in the system. With 15W Intel parts with fans, the fans always wear out, sometimes in as little as 6 months. Intel H parts likewise need fans replaced after about a year.

 

These thin-and-light designs keep being made to be unserviceable, yet often require twice as much maintenance, so pursuing a fanless design is probably what Apple wants, but it isn't what Apple, or anyone else, will get. As soon as it gets warm, it will be throttled back if it's intended to last more than a few weeks.

 

But I think we've strayed well off the topic here.

 

Nvidia is very unlikely to sell a CPU+GPU designed for a laptop; many have tried, and all have failed other than Apple. Nvidia has been trying to do this for over a decade, and their only eventual win was the Nintendo Switch. Their Shield product is popular with internet pirates and nobody else, and even then Nvidia isn't the only player, just the only one producing a consistent product. It's also 6 years old and long in the tooth, and it's the same SoC hardware as in the Nintendo Switch. The fact we haven't seen a 4K Switch is likely down to the same reason we haven't seen a 4K Shield: the Shield can only play 4K video, not run software at 4K.

 

Qualcomm and Samsung have been trying to do ARM laptops by repurposing their smartphone CPUs too, and not just because Apple did it. Yet they need Microsoft to play ball here, because nobody wants, or has ever wanted, a Linux laptop. You're not going to build an SoC for the 1000 people who might tinker with it. Chromebook hardware doesn't even come with Linux most of the time.

 

So if Apple is going to come out and go "yeah, ARM as a mainstream laptop and desktop is viable", I'm pretty sure OEMs will sit back, watch what happens, and then come out with their own under-engineered copycats. They (e.g. Samsung) did that with the iPhone, iPad and Apple TV, and haven't had much success other than in the phone space. We've seen this repeatedly in the Raspberry Pi space, and there has never been any mass uptake there either. The Pi is a hobby platform, not a computer.

 
