Apple promises to support Thunderbolt on its new ARM Macs

16 hours ago, igormp said:

 Also, ARM is getting more complex by the day; I wouldn't dare to call it a RISC architecture anymore. Digi-Key even has a nice article about that.

I have heard that before; I read an article a few years back that basically said trying to label a CPU as either RISC or CISC is a bit erroneous. I find it interesting that so many people are surprised (to the point of debate) that either processor foundation can or cannot be scaled to suit any given end-use condition (all forms of processors have been doing that since the inception of silicon). It should not surprise anyone that ARM could easily be scaled to the desktop (all other things being equal) and perform on par with x86 (again, some variance will occur, but we are talking generally). It should not surprise anyone that software can be developed to run adequately on either, but also to run better on one, just by virtue of what it was designed to achieve and how that software maps onto the specific design of the CPU.

 

 

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


2 minutes ago, mr moose said:

I have heard that before; I read an article a few years back that basically said trying to label a CPU as either RISC or CISC is a bit erroneous. I find it interesting that so many people are surprised (to the point of debate) that either processor foundation can or cannot be scaled to suit any given end-use condition (all forms of processors have been doing that since the inception of silicon). It should not surprise anyone that ARM could easily be scaled to the desktop (all other things being equal) and perform on par with x86 (again, some variance will occur, but we are talking generally). It should not surprise anyone that software can be developed to run adequately on either, but also to run better on one, just by virtue of what it was designed to achieve and how that software maps onto the specific design of the CPU.

 

 

“Can” might be misleading. “Can” allows for inefficiency and edge cases. Some things scale better than others, and scaling generally has limits of some sort. NetBEUI worked great if there were fewer than (I think?) 8 things on the network, but it is famous as one of the worst network systems of all time because it scaled so poorly beyond that. The problem is the public’s lack of knowledge: where those limits are is unknown to laymen.

Not a pro, not even very good.  I’m just old and have time currently.  Assuming I know a lot about computers can be a mistake.

 

Life is like a bowl of chocolates: there are all these little crinkly paper cups everywhere.


5 minutes ago, Bombastinator said:

“Can” might be misleading. “Can” allows for inefficiency and edge cases.

There are always edge cases.  A quick look at anything we humans try to box/categorize will quickly highlight how many edge cases there are, especially in computing.

 

5 minutes ago, Bombastinator said:

Some things scale better than others, and scaling generally has limits of some sort. NetBEUI worked great if there were fewer than (I think?) 8 things on the network, but it is famous as one of the worst network systems of all time because it scaled so poorly beyond that. The problem is the public’s lack of knowledge: where those limits are is unknown to laymen.

This forum considers itself to not be the general public.

 

 



3 hours ago, RedRound2 said:

I am aware of ARM server processors, but they're designed with a specific use case in mind, not general desktop computing. Obviously that's what Apple is targeting here, and it's uncharted territory in this modern age, if you don't count the Qualcomm 8cx (which is a joke).

What's the difference between a Graviton core and a regular core when it comes to usage? Unless you're talking about peripherals such as media decoders and whatnot, a server CPU is just a general-purpose CPU like any other, since the underlying ISA is still the same.

Quote

Isn't that for video playback? There is usually a lot of CPU usage when you scrub in timelines.

I never said heterogeneous computing is exclusive to Apple. Rather, Apple Silicon itself can encompass a wide variety of different parts (for lack of a better term), like the Neural Engine and hardware specific to Metal, allowing them to make OS- and device-specific optimizations.

Yeah, Apple has the upper hand when it comes to extra peripherals, because they can just hide those behind some APIs and force everyone on their platform to use them, because Apple. But still, such dedicated hardware is available on other platforms too, and lots of software makes use of it.

 

Quote

CISC is more complex; it's there in the name.

I think you didn't understand what I said. Please read it again.

RISC is more complicated to program for due to its very granular control, while CISC has usually had tons of functionality packed into a single instruction, as you said. But with ARM, they can further optimize and drop the unnecessary computation that inevitably happens in x86. That's where I believe there are huge efficiency gains in a RISC-based architecture. And it turns out, so far at least, you can squeeze performance out of a RISC-based architecture comparable to current x86-64 processors. The iPad Pro's A12Z processor is already more powerful than the lower-end Intel processors, and they seem to have a clear roadmap to match the competition in a two-year timeframe. Let's just wait and see what happens.

I read it, and my point still stands. You talk about architecture complexity but mention programming complexity; those are two different things. CISC is a complex implementation, but it allows you to write shorter code (not necessarily easier nor harder, just smaller), while RISC makes you do tons of redundant stuff.
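To make that concrete, here's a throwaway Python sketch. The mini-"ISA" is invented purely for illustration (it is not real x86 or ARM encoding): the same memory increment is one instruction on the CISC-style machine but a load/add/store sequence on the RISC-style one.

```python
# Toy model of one memory increment on a CISC-style vs RISC-style machine.
# The instruction names and semantics here are made up for illustration.

def run(program, mem):
    """Execute a list of (op, *args) instructions against mem, return mem."""
    regs = {}
    for op, *args in program:
        if op == "ADD_MEM":        # CISC-style: read-modify-write memory in one go
            addr, imm = args
            mem[addr] += imm
        elif op == "LOAD":         # RISC-style: memory is touched only via loads/stores
            reg, addr = args
            regs[reg] = mem[addr]
        elif op == "ADD":
            reg, imm = args
            regs[reg] += imm
        elif op == "STORE":
            reg, addr = args
            mem[addr] = regs[reg]
    return mem

cisc = [("ADD_MEM", 0, 1)]                                         # 1 instruction
risc = [("LOAD", "r1", 0), ("ADD", "r1", 1), ("STORE", "r1", 0)]   # 3 instructions

print(run(cisc, [41]))  # [42]
print(run(risc, [41]))  # [42]
```

Same result either way; the difference is purely how many instructions it takes to say it.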

 

What I find weird is that you vouch for heterogeneous computing, something that a modern x86 CPU comes armed to the teeth with by default thanks to all the extra extensions available, but at the same time you dismiss it because ARM cores are usually simpler by themselves and rely on extra IP blocks on the SoC.

 

Anyway, making claims about the performance of the first batch of ARM MacBooks is silly IMO, since the first released devices will probably be MacBook Airs that cater to people who don't need much more than a web browser, where raw performance doesn't matter as much as battery life, something those ARM devices will excel at. For their prosumer devices, they'll probably crank their current designs up to 11 and add a ton of extra peripherals to offload some tasks; otherwise I doubt they'd be able to match the performance of current high-end x86 parts.

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga


12 minutes ago, mr moose said:

 

This forum considers itself to not be the general public.

 

 

That’s... nontopical. It’s like you’re playing the Corrupt-a-Wish game by using alternate definitions of terms. I’m referring to the same group as was mentioned earlier. That’s a tactical response rather than a topical one. Same problem I had with some of @RedRound2’s stuff.



Just now, Bombastinator said:

That’s... nontopical. It’s like you’re playing the Corrupt-a-Wish game by using alternate definitions of terms. I’m referring to the same group as was mentioned earlier.

I'm not sure what the issue is with what I said then.

 

All I am saying is there is no reason to make any assumptions in this thread. ARM can scale and does scale; ARM has a variety of advantages and disadvantages, just like x86, and I don't think much of the debate is even necessary. It puts me in mind of people debating which is the best dinosaur. What is logical really depends on what you value and need out of your computer. For the majority (and I mean the vast majority) of end users, current mobile SoCs are adequate, so stepping that up just a teeny bit to achieve what is essentially an overpowered iPad with a keyboard and mouse and the ability to run more desktop-type software should not be an unreasonable proposition to accept. By the same token, it is not an unreasonable proposition that a lot of current software will, by its nature, still run better on x86. This is not a which-CPU-is-better debate, but a which-CPU-is-more-suited-to-a-specific-task discussion. Using edge cases to decree the advantage or disadvantage of ARM (especially the performance of chips that aren't even released yet) in general terms is pointless and doesn't adequately address the reality of what's possible.



2 minutes ago, mr moose said:

I'm not sure what the issue is with what I said then.

 

All I am saying is there is no reason to make any assumptions in this thread. ARM can scale and does scale; ARM has a variety of advantages and disadvantages, just like x86, and I don't think much of the debate is even necessary. It puts me in mind of people debating which is the best dinosaur. What is logical really depends on what you value and need out of your computer. For the majority (and I mean the vast majority) of end users, current mobile SoCs are adequate, so stepping that up just a teeny bit to achieve what is essentially an overpowered iPad with a keyboard and mouse and the ability to run more desktop-type software should not be an unreasonable proposition to accept. By the same token, it is not an unreasonable proposition that a lot of current software will, by its nature, still run better on x86. This is not a which-CPU-is-better debate, but a which-CPU-is-more-suited-to-a-specific-task discussion. Using edge cases to decree the advantage or disadvantage of ARM (especially the performance of chips that aren't even released yet) in general terms is pointless and doesn't adequately address the reality of what's possible.

I wasn’t actually disagreeing with you; it was a point of language use only. What I was saying is that the “general population” is not familiar with which limitations may apply to which design. “Can” is a vague term in this case: “can” is not “can usefully”, though it doesn’t preclude it. The general public has not seen high-performance ARM-based chips. The majority of ARM-based systems have been low-power specialty things, and thus the majority of development has been in that direction. That doesn’t mean it can’t scale up, but it doesn’t mean it can, either. There is the possibility that if it could, it would have already. It’s unknown, though.



18 hours ago, igormp said:

What's the difference between a Graviton core and a regular core when it comes to usage? Unless you're talking about peripherals such as media decoders and whatnot, a server CPU is just a general-purpose CPU like any other, since the underlying ISA is still the same.

Exactly. All CPUs do have a heterogeneous architecture that makes them optimized for specialized tasks. What I meant in reference to Apple, I'll come to in the reply to your next paragraph. But here, server ARM processors, I presume, wouldn't really run a general desktop computer well. Same reason a 64-core Epyc would suck at games compared to a Ryzen. Obviously, what Apple is trying to do is focus on the general computing experience. I'm not 100 percent sure, but I read that the ARM server processor has an 80-core variant, so it's safe to assume it would suck at single-threaded applications, an area where Apple has historically been strong with their A-series chips.

Quote

Yeah, Apple has the upper hand when it comes to extra peripherals, because they can just hide those behind some APIs and force everyone on their platform to use them, because Apple. But still, such dedicated hardware is available on other platforms too, and lots of software makes use of it.

You make that sound like it's a bad thing. Making developers actually use hardware to make things faster and better is always a good thing. And only Apple can do this sort of thing, since they have full control over hardware and software. This is also what I meant by heterogeneous compute. With this move, Apple can ensure all the newly ported ARM applications will eventually properly utilize all cores (both low- and high-powered, big.LITTLE), the Neural Engine, etc., since Apple can effectively guarantee that all Mac computers will have the same hardware. Something that just isn't possible with Windows due to the sheer number of configurations available. As a loose analogy, the amount of performance you can squeeze out of a PS4 or an Xbox One through optimization is something no PC config can ever hope to dream of. Similarly, Apple can add features and elements to their SoC that make it faster in certain workloads, and then design the OS to utilize that hardware fully. And developers wouldn't have to worry about pouring their time into developing their app for only a small number of computers.

Quote

I read it, and my point still stands. You talk about architecture complexity but mention programming complexity; those are two different things. CISC is a complex implementation, but it allows you to write shorter code (not necessarily easier nor harder, just smaller), while RISC makes you do tons of redundant stuff.

Yes, CISC architectures have generally had a lot of functions that do a lot of things. But from an optimizability standpoint, that's hardly efficient. Again, bringing back my previous example of C++ and Python: while Python is much easier to program with, it's a whole lot slower than the equivalent, much more complicated C++ program. Micro-level efficiency and optimizability are possible with RISC, and that's where a lot of the efficiency gains come from.

Quote

What I find weird is that you vouch for heterogeneous computing, something that a modern x86 CPU comes armed to the teeth with by default thanks to all the extra extensions available, but at the same time you dismiss it because ARM cores are usually simpler by themselves and rely on extra IP blocks on the SoC.

Replied to this above. The problem with the current market is that there are way too many configurations: Intel, AMD, and Nvidia, all with their own feature sets. In Macs, Apple will have direct control of the chip, can add the features they deem required, and put them under an API. And because it's Apple and its ecosystem, developers will actually take advantage of it.

Quote

Anyway, making claims about the performance of the first batch of ARM MacBooks is silly IMO, since the first released devices will probably be MacBook Airs that cater to people who don't need much more than a web browser, where raw performance doesn't matter as much as battery life, something those ARM devices will excel at. For their prosumer devices, they'll probably crank their current designs up to 11 and add a ton of extra peripherals to offload some tasks; otherwise I doubt they'd be able to match the performance of current high-end x86 parts.

I never said the first Apple Silicon Macs will crush x86 or anything along those lines. I said the A12Z is already competitive with the low-powered Intel parts, and the A12 is a two-year-old architecture. Plus, I'm making the assumption that since they have an aggressive two-year transition period, at the end of which even the Mac Pro should be ARM, they seem fairly confident that they can match or beat the x86 platforms. Whether it is by raw CPU performance or through optimizations with other units doesn't really matter, because only the end performance does. And they seem confident they can do so, and Apple has a good track record for this sort of thing.


8 hours ago, RedRound2 said:

Exactly. All CPUs do have a heterogeneous architecture that makes them optimized for specialized tasks. What I meant in reference to Apple, I'll come to in the reply to your next paragraph. But here, server ARM processors, I presume, wouldn't really run a general desktop computer well. Same reason a 64-core Epyc would suck at games compared to a Ryzen. Obviously, what Apple is trying to do is focus on the general computing experience. I'm not 100 percent sure, but I read that the ARM server processor has an 80-core variant, so it's safe to assume it would suck at single-threaded applications, an area where Apple has historically been strong with their A-series chips.

The only problem with server parts is their usually lower clock speeds compared to desktop parts, but their IPC numbers are still the same, and those clocks aren't usually much lower than a laptop CPU's, so it's valid to try to gauge how Apple's ARM CPUs will fare against x86 ones by looking at current high-end ARM offerings.
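As a rough sanity check on that: per-core throughput is approximately IPC times clock, so with IPC held equal, a server core's only deficit is its clock. A quick sketch (all the IPC and clock numbers below are invented for illustration, not measurements of any real part):

```python
# Per-core throughput ~ IPC * clock. With equal IPC, a server core at a
# server clock lands close to a laptop core at a laptop clock.
# All numbers are illustrative assumptions, not real CPU measurements.

def giga_ops(ipc, clock_ghz):
    """Very rough billions of instructions per second for one core."""
    return ipc * clock_ghz

server = giga_ops(ipc=3.0, clock_ghz=2.5)   # hypothetical server part: conservative clock
laptop = giga_ops(ipc=3.0, clock_ghz=3.0)   # hypothetical laptop part: slightly higher clock
desktop = giga_ops(ipc=3.0, clock_ghz=4.5)  # hypothetical desktop part: much higher clock

print(server, laptop, desktop)  # 7.5 9.0 13.5
```

The server core sits near the laptop core and well under the desktop core, which is why high-end server ARM parts are a reasonable proxy for laptop-class performance.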

Quote

You make that sound like it's a bad thing. Making developers actually use hardware to make things faster and better is always a good thing. And only Apple can do this sort of thing, since they have full control over hardware and software. This is also what I meant by heterogeneous compute. With this move, Apple can ensure all the newly ported ARM applications will eventually properly utilize all cores (both low- and high-powered, big.LITTLE), the Neural Engine, etc., since Apple can effectively guarantee that all Mac computers will have the same hardware. Something that just isn't possible with Windows due to the sheer number of configurations available. As a loose analogy, the amount of performance you can squeeze out of a PS4 or an Xbox One through optimization is something no PC config can ever hope to dream of. Similarly, Apple can add features and elements to their SoC that make it faster in certain workloads, and then design the OS to utilize that hardware fully. And developers wouldn't have to worry about pouring their time into developing their app for only a small number of computers.

I didn't mean to make it sound like a bad thing, my bad if so. But you seem to think that devs who target Apple stuff optimize like hell: they don't. Apple takes care to cover all of that under their pretty APIs (which they're not afraid to break overnight, forcing everyone to update), so using those extra chips is actually pretty seamless for the dev.

Also, current consoles are basically fancy x86 machines with APIs similar to what you have on a desktop. Most ports that come from a console aren't that hard to bring over to a PC, and they usually run better than on the console out of the box. Devs just don't go further because there's really no reason to: things run well enough on consoles thanks to their earlier optimization efforts, and even better on a PC just due to the more powerful hardware.

Quote

Yes, CISC architectures have generally had a lot of functions that do a lot of things. But from an optimizability standpoint, that's hardly efficient. Again, bringing back my previous example of C++ and Python: while Python is much easier to program with, it's a whole lot slower than the equivalent, much more complicated C++ program. Micro-level efficiency and optimizability are possible with RISC, and that's where a lot of the efficiency gains come from.

No, you can't compare RISC and CISC like that. Both are just concepts, and neither is better. A RISC CPU usually has higher IPC but needs to spend cycles on tons of redundant instructions, while a CISC CPU has lower IPC but can achieve more with less, so in the end it's basically a stalemate. Optimizing RISC the way you describe isn't that much easier either, much less so with ARM. If you wanted something truly capable of being optimized like that, you should have a look at VLIW or the Mill architecture, which are way too hard to optimize for and usually don't go far (remember Itanium?), with the exception being DSPs, which are really purpose-specific.
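That stalemate is just the classic "iron law" of CPU performance: time = instruction count × cycles per instruction ÷ clock. A throwaway sketch with invented numbers (not taken from any real CPU) shows how fewer fat instructions and more simple ones can come out exactly even:

```python
# "Iron law" of CPU performance: time = instruction_count * CPI / clock_hz.
# The workload numbers are invented purely to illustrate the stalemate.

def exec_time(instruction_count, cpi, clock_hz):
    """Seconds to run a workload on a core at the given clock."""
    return instruction_count * cpi / clock_hz

# The same hypothetical workload compiled two ways, both run at 3 GHz:
cisc_time = exec_time(instruction_count=1_000_000, cpi=2.0, clock_hz=3e9)  # fewer, fatter instructions
risc_time = exec_time(instruction_count=2_000_000, cpi=1.0, clock_hz=3e9)  # more, simpler instructions

print(cisc_time == risc_time)  # True
```

Real parts obviously don't land on exactly equal products, but the point stands: instruction count alone tells you nothing until you multiply it by CPI.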

 

Also, that analogy of yours isn't really good, since compilers take care of most ISA-related things for you, and your comparison doesn't really fit either architecture well: is Python meant to be CISC because the code is shorter to write, or RISC because each operation is way simpler? Should C++ represent CISC because it has a somewhat steeper learning curve, or RISC because it's way more verbose?

I do both C++ and Python for a living (embedded + ML), so I believe I have a good enough grasp of the languages, btw.

Quote

Replied to this above. The problem with the current market is that there are way too many configurations: Intel, AMD, and Nvidia, all with their own feature sets. In Macs, Apple will have direct control of the chip, can add the features they deem required, and put them under an API. And because it's Apple and its ecosystem, developers will actually take advantage of it.

Don't forget that Apple also has a somewhat large pool of specs to take care of: from 4th/5th-gen Mac Minis and MacBooks to new Xeon CPUs, AMD and Intel GPUs, etc. Most people don't try to optimize for a specific spec, but rather just focus on using a proper API that should handle all of them, and it's expected that such APIs are standardized across those devices (even though that sometimes isn't true, sadly).

Quote

I never said the first Apple Silicon Macs will crush x86 or anything along those lines. I said the A12Z is already competitive with the low-powered Intel parts, and the A12 is a two-year-old architecture. Plus, I'm making the assumption that since they have an aggressive two-year transition period, at the end of which even the Mac Pro should be ARM, they seem fairly confident that they can match or beat the x86 platforms. Whether it is by raw CPU performance or through optimizations with other units doesn't really matter, because only the end performance does. And they seem confident they can do so, and Apple has a good track record for this sort of thing.

I won't really comment much here since it's just speculation, but my guess is that it won't match x86 raw performance, and that this won't matter, since it'll have longer battery life and be good enough for web browsing.

