
Is ARM the future?

curiousmind34
2 hours ago, Zodiark1593 said:

Internally, high-performance “x86” CPUs break x86 instructions down into operations that are remarkably reminiscent of RISC. Intel has done this since the Pentium Pro.

Yes, that's the “complex” in the CISC name: a single instruction does many things internally, so the machine code doesn't need to spell out each step; the instruction set does it for you.
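As a toy illustration of that decode step (the mnemonics and micro-op names here are invented, not the actual micro-ops any Intel or AMD core emits), a memory-destination add can be modeled as splitting into RISC-like load/compute/store micro-ops:

```python
# Toy model of CISC-to-micro-op decoding. The mnemonics and the
# micro-op names are invented for illustration only.

def decode(instruction):
    """Break a complex instruction into simpler RISC-like micro-ops."""
    op, *operands = instruction.split()
    if op == "ADD" and operands[0].startswith("["):
        # One memory-destination ADD becomes load -> add -> store.
        addr = operands[0].strip("[],")
        reg = operands[1]
        return [f"LOAD tmp, {addr}", f"ADD tmp, {reg}", f"STORE {addr}, tmp"]
    return [instruction]  # already simple: pass through unchanged

print(decode("ADD [0x1000], eax"))
# → ['LOAD tmp, 0x1000', 'ADD tmp, eax', 'STORE 0x1000, tmp']
```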

 

2 hours ago, Zodiark1593 said:

On the other end, ARM has added instructions to its ISA over the years. Both ISAs implement similar techniques to improve performance (pipelining, superscalar execution, OoOE, caches, etc.). The performance of ARM’s SVE (the parallel to AVX) will likely be determined by the implementation, whether by ARM themselves or by those with an architecture license.

Unfortunately SVE hasn't really been implemented widely yet, unlike AVX, which is in pretty much every CPU made by Intel and AMD except for extremely low-power chips. That's partly because ARM focused on low-power uses for so long. Yes, SVE is a few years newer, but even the original AVX was more widespread at the equivalent point in time.
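For what it's worth, SVE's main selling point is that code is vector-length agnostic: one binary runs on anything from 128-bit to 2048-bit hardware, whereas AVX code is compiled for a fixed width. A rough Python analogy of that loop structure (my sketch, not real SVE semantics):

```python
def vla_sum(data, vector_len):
    """Vector-length-agnostic loop: the same code works for any
    hardware vector width, the way SVE binaries do. (AVX code, by
    contrast, is compiled for a fixed 128/256/512-bit width.)"""
    total = 0
    for i in range(0, len(data), vector_len):
        chunk = data[i:i + vector_len]  # one "vector" of work
        total += sum(chunk)             # stand-in for a SIMD add
    return total

# The result is identical whatever width the "hardware" provides.
assert vla_sum(list(range(10)), 4) == vla_sum(list(range(10)), 16) == 45
```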


 

Desktop: AMD R9 3900X | ASUS ROG Strix X570-F | Radeon RX 5700 XT | EVGA GTX 1080 SC | 32GB Trident Z Neo 3600MHz | 1TB 970 EVO | 256GB 840 EVO | 960GB Corsair Force LE | EVGA G2 850W | Phanteks P400S

Laptop: Intel M-5Y10c | Intel HD Graphics | 8GB RAM | 250GB Micron SSD | Asus UX305FA

Server 01: Intel Xeon D 1541 | ASRock Rack D1541D4I-2L2T | 32GB Hynix ECC DDR4 | 4x8TB Western Digital HDDs | 32TB Raw 16TB Usable

Server 02: Intel i7 7700K | Gigabyte Z170N Gaming5 | 16GB Trident Z 3200MHz


On 3/12/2021 at 12:28 PM, .Apex. said:

The M1 has demonstrated that ARM can achieve much higher IPC than even Zen 3 at low frequency. If an M1 equivalent reaches the desktop mass market (not just Apple devices), it could be clocked much higher. Currently Snapdragon's IPC is about the same as Zen 3's, going by rough numbers. I wouldn't want the switch to ARM to happen anytime soon, and hopefully not ever, mainly because of software backwards compatibility, but it's ignorant to say it has no potential in single-threaded performance.

Doesn’t Rosetta fix the x86 software issue, though? I heard about a 20% performance hit, so roughly 1.5 generations of CPU.
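The arithmetic behind the “20% hit ≈ 1.5 generations” estimate, assuming a roughly 15% single-thread uplift per generation (my illustrative number, not a measured figure):

```python
import math

# Back-of-envelope: how many CPU generations does a translation
# overhead "cost"? The ~15% per-generation uplift is an assumption
# for illustration only.
def generations_lost(overhead, uplift_per_gen=0.15):
    required_speedup = 1 / (1 - overhead)  # a 20% hit needs 1.25x back
    return math.log(required_speedup) / math.log(1 + uplift_per_gen)

print(round(generations_lost(0.20), 1))  # → 1.6
```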

Not a pro, not even very good.  I’m just old and have time currently.  Assuming I know a lot about computers can be a mistake.

 

Life is like a bowl of chocolates: there are all these little crinkly paper cups everywhere.


6 minutes ago, Bombastinator said:

Doesn’t Rosetta fix the x86 software issue, though? I heard about a 20% performance hit, so roughly 1.5 generations of CPU.

Yeah, but we are talking about Microsoft here. They have a compatibility layer for their ARM Surface tablet, and last I checked their implementation was lacking quite a bit. They could improve it, but even Rosetta 2 doesn't necessarily run everything flawlessly. Microsoft has to cater to enterprise users; Apple does not. Apple knows it has a cult following and can do what it wants, and people will still buy its products. Microsoft doesn't really have that. Sure, they have the market share, but mostly because businesses adopted their OS and software back in the early days. There are many Linux distros out there nowadays, and Microsoft's market share has been slipping; I'm sure they have noticed.

I just want to sit back and watch the world burn. 


1 hour ago, Bombastinator said:

Doesn’t Rosetta fix the x86 software issue, though? I heard about a 20% performance hit, so roughly 1.5 generations of CPU.

Apple's Rosetta 2 is very good, but as far as I know it isn't perfect. There are programs that have compatibility issues or don't run at all. Apple also seems to have included some form of hardware acceleration in their CPU, which is why its performance impact is as low as it is. Microsoft has no such control over the hardware, so their competing solution is software only, as far as I know.

 

Even worse, Microsoft's emulator only supported 32-bit applications for the longest time; 64-bit emulation has been available only as a preview since December 2020. Unless they can get it working perfectly, I don't see many businesses moving to emulation. To get them to move, it would not only have to work; the cost of new hardware and the running-cost savings from higher efficiency would have to be very compelling compared to x86 to offset the potential risks.

Remember to either quote or @mention others, so they are notified of your reply


What if we’re all missing the point and the “future” (for mainstream personal devices and some work devices) is not about this or that ISA per se but about the “appliance”-ization of the silicon inside a PC? 

 

Like extreme customization (year after year) of the SoC to achieve supreme efficiency and speed in doing what the software dictates. Like it’s a washing machine or a smart fridge. 

 

Apple is now in the position of “playing Lego” with blocks of the SoC year after year. It’s not just about the CPU cores and the ISA. 


5 hours ago, saltycaramel said:

What if we’re all missing the point and the “future” (for mainstream personal devices and some work devices) is not about this or that ISA per se but about the “appliance”-ization of the silicon inside a PC? 

 

Like extreme customization (year after year) of the SoC to achieve supreme efficiency and speed in doing what the software dictates. Like it’s a washing machine or a smart fridge. 

 

Apple is now in the position of “playing Lego” with blocks of the SoC year after year. It’s not just about the CPU cores and the ISA. 

You're describing IoT? I avoided “smart” stuff because it seems to have really severe security issues.

Not a pro, not even very good.  I’m just old and have time currently.  Assuming I know a lot about computers can be a mistake.

 

Life is like a bowl of chocolates: there are all these little crinkly paper cups everywhere.


1 minute ago, Bombastinator said:

You're describing IoT? I avoided “smart” stuff because it seems to have really severe security issues.

Not if you're in HomeKit and using a HomeKit Secure Router: you can deny the IoT devices internet access but still access them remotely through your Apple ID.


1 minute ago, Obioban said:

Not if you're in HomeKit and using a HomeKit Secure Router: you can deny the IoT devices internet access but still access them remotely through your Apple ID.

So, still security issues, but behind a firewalled trusted router. Not for me; I don't trust the trusted-systems concept. It's a lot like quarantine: if something gets behind the firewall, there are all sorts of problems. This thread isn't about IoT, though; that's a different thing entirely. The concept that generalized computing will be totally abandoned in favor of a series of smart devices strikes me as wildly unlikely.

Not a pro, not even very good.  I’m just old and have time currently.  Assuming I know a lot about computers can be a mistake.

 

Life is like a bowl of chocolates: there are all these little crinkly paper cups everywhere.


10 hours ago, Donut417 said:

Yeah, but we are talking about Microsoft here. They have a compatibility layer for their ARM Surface tablet, and last I checked their implementation was lacking quite a bit. They could improve it, but even Rosetta 2 doesn't necessarily run everything flawlessly. Microsoft has to cater to enterprise users; Apple does not. Apple knows it has a cult following and can do what it wants, and people will still buy its products. Microsoft doesn't really have that. Sure, they have the market share, but mostly because businesses adopted their OS and software back in the early days. There are many Linux distros out there nowadays, and Microsoft's market share has been slipping; I'm sure they have noticed.

Businesses usually have pretty specific individual requirements, though. A given business will need to run a particular set of programs, which generally isn't all that large but can be very obscure. If all those programs happen to run under Rosetta 2, they're good for Mac. They won't necessarily run, though, and each unusual program would likely need to be tested individually; I would think the big barrier there would be infrastructure. On the Windows side, it just isn't ready for prime time, and Microsoft's history of making desperately needed improvements is abjectly awful. Their foot-dragging refusal to create a decent multitasking operating system is more or less what created the multithreading environment we are in; the industry basically had to go around them because they refused to keep up.


Not a pro, not even very good.  I’m just old and have time currently.  Assuming I know a lot about computers can be a mistake.

 

Life is like a bowl of chocolates: there are all these little crinkly paper cups everywhere.


2 minutes ago, Bombastinator said:

If all those programs happen to run under Rosetta 2, they're good for Mac.

Not necessarily. Apple doesn't plan to support Rosetta 2 forever; they are even discontinuing it in select markets soon. Microsoft would be required to support the compatibility layer long term, because some businesses are never going to port their software to ARM, especially software they created internally for their own use.

 

 

I just want to sit back and watch the world burn. 


1 minute ago, Donut417 said:

Not necessarily. Apple doesn't plan to support Rosetta 2 forever; they are even discontinuing it in select markets soon. Microsoft would be required to support the compatibility layer long term, because some businesses are never going to port their software to ARM, especially software they created internally for their own use.

 

 

Hmm, I didn't know that. I can see that being a potentially massive problem; it doesn't strike me as a good idea. If Rosetta 2 is going away in future OS updates, buying a Mac may not be a good idea for me. I could be left with non-functional software and a computer that, while it still technically works, in practice doesn't work for me.

Not a pro, not even very good.  I’m just old and have time currently.  Assuming I know a lot about computers can be a mistake.

 

Life is like a bowl of chocolates: there are all these little crinkly paper cups everywhere.


2 hours ago, Bombastinator said:

You're describing IoT? I avoided “smart” stuff because it seems to have really severe security issues.

No, I meant: what if the speed of CPU cores plateaus or becomes a given, and the new way of making hardware better consists of adding custom modules at the chip-design level to serve the ever-changing needs of the OS and the software in a bespoke way?

 

Of course, to do this you need vertical control of:

- chip design

- OS 
- software (or have a fast-reacting dev community that quickly adopts the APIs/frameworks/etc. you introduce)

 

Like Apple. 
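A sketch of what that dispatch might look like in software, with entirely invented block names (real systems hide this behind OS frameworks rather than exposing it like this):

```python
# Sketch of the "appliance" idea: software dispatches work to whatever
# fixed-function blocks this particular SoC provides, falling back to
# general-purpose CPU code otherwise. Block names are invented.
AVAILABLE_BLOCKS = {"matrix_engine", "video_encoder"}  # varies per SoC

def cpu_matmul(a, b):
    # Plain nested-loop multiply: portable, runs anywhere.
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def offload_to_block(name, a, b):
    # Stand-in for a driver call into the accelerator; in this toy
    # model it just reuses the CPU path.
    return cpu_matmul(a, b)

def matmul(a, b):
    if "matrix_engine" in AVAILABLE_BLOCKS:
        return offload_to_block("matrix_engine", a, b)  # fast path
    return cpu_matmul(a, b)                             # fallback

print(matmul([[1, 0], [0, 1]], [[5, 6], [7, 8]]))  # → [[5, 6], [7, 8]]
```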


5 minutes ago, saltycaramel said:

No, I meant: what if the speed of CPU cores plateaus or becomes a given, and the new way of making hardware better consists of adding custom modules at the chip-design level to serve the ever-changing needs of the OS and the software in a bespoke way?

 

Of course, to do this you need vertical control of:

- chip design

- OS 
- software (or have a fast-reacting dev community that quickly adopts the APIs/frameworks/etc. you introduce)

 

Like Apple. 

Seems far-fetched. The speed of CPU cores plateaued, or nearly plateaued, some years ago. There are still tricks to pull; they were collecting them back before multi-core turned out to work. You're looking for a long chain of fairly unlikely things to happen.

Not a pro, not even very good.  I’m just old and have time currently.  Assuming I know a lot about computers can be a mistake.

 

Life is like a bowl of chocolates: there are all these little crinkly paper cups everywhere.


I heard LEG are the next big thing, once they get a foothold in the market 

i5 8600 - RX580 - Fractal Nano S - 1080p 144Hz


x86-64 is a powerful instruction set, and I don't see it going away anytime soon.

ARM is more useful for low-power applications.

A PC Enthusiast since 2011
AMD Ryzen 7 5700X@4.65GHz | GIGABYTE GTX 1660 GAMING OC @ Core 2085MHz Memory 5000MHz
Cinebench R23: 15669cb | Unigine Superposition 1080p Extreme: 3566

4 minutes ago, Vishera said:

x86-64 is a powerful instruction set, and I don't see it going away anytime soon.

ARM is more useful for low-power applications.

Part of the problem with this is that x86 has gotten more RISC-like and ARM has gotten more x86-like, so there's less of a gap than there appears. Neither remains the same. ARM has advantages in wait-state power use, but that's about it.


Not a pro, not even very good.  I’m just old and have time currently.  Assuming I know a lot about computers can be a mistake.

 

Life is like a bowl of chocolates: there are all these little crinkly paper cups everywhere.


17 minutes ago, Bombastinator said:

Part of the problem with this is that x86 has gotten more RISC-like and ARM has gotten more x86-like, so there's less of a gap than there appears. Neither remains the same. ARM has advantages in wait-state power use, but that's about it.

Pretty much this. I wouldn't call it a problem, though, so much as progression.

The only meaningful differences now lie in vendor and software implementation. Apple has shown that ARM-compatible cores are more than capable of delivering on compute-intensive tasks. There's absolutely no debate on that.

Legacy software support is a pretty massive hurdle, however. ARM cores with merely comparable performance to same-generation x86 cores (or even a moderate to solid advantage) won't cut it. For ARM-compatible cores to begin taking desktop share, they need a drastic enough performance advantage over a same-gen x86 core to translate x86-64 code fast enough to stay on even footing. Only Apple has really come close.
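That break-even condition can be sketched numerically (the overhead and advantage figures below are illustrative, not measurements):

```python
# Break-even sketch: for translated x86-64 code on an ARM core to
# match a native x86 core, the ARM core's raw advantage must cover
# the translation overhead. All figures here are illustrative.
def breaks_even(arm_advantage, translation_overhead):
    """arm_advantage: e.g. 1.10 means 10% faster on native code."""
    return arm_advantage * (1 - translation_overhead) >= 1.0

print(breaks_even(1.10, 0.20))  # → False (1.10 * 0.80 = 0.88)
print(breaks_even(1.30, 0.20))  # → True  (1.30 * 0.80 = 1.04)
```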

My eyes see the past…

My camera lens sees the present…


On 3/12/2021 at 11:57 AM, Amias said:

ARM is just better in almost every way.

What? Have you actually looked at real benchmark scores? In most non-synthetic workloads, the most common M1 chips have only about a quarter of the single- or multithreaded performance of the big boys (i7s and Threadrippers).

 

 

On 3/12/2021 at 11:36 AM, curiousmind34 said:

lagging behind a bit in the generation-to-generation performance uplift that Apple's ARM chips achieve

Just wait a bit on that one. The big limitation for x86-64 chips is particle physics; the big limitation for ARM is that no one has tried big, fast ARM chips before.

Let's just see what happens with the benchmarks over the next decade. Apple's ARM chips have HUGE ground to cover to catch up to current x86-64 offerings.

ENCRYPTION IS NOT A CRIME


26 minutes ago, Zodiark1593 said:

Pretty much this. I wouldn't call it a problem, though, so much as progression.

The only meaningful differences now lie in vendor and software implementation. Apple has shown that ARM-compatible cores are more than capable of delivering on compute-intensive tasks. There's absolutely no debate on that.

Legacy software support is a pretty massive hurdle, however. ARM cores with merely comparable performance to same-generation x86 cores (or even a moderate to solid advantage) won't cut it. For ARM-compatible cores to begin taking desktop share, they need a drastic enough performance advantage over a same-gen x86 core to translate x86-64 code fast enough to stay on even footing. Only Apple has really come close.

Close enough for me I suspect but I’m not a super demanding user. This whole “we’re only going to do Rosetta for a while though” thing implies that Apple is going to back off from that though. 

Not a pro, not even very good.  I’m just old and have time currently.  Assuming I know a lot about computers can be a mistake.

 

Life is like a bowl of chocolates: there are all these little crinkly paper cups everywhere.


4 minutes ago, straight_stewie said:

What? Have you actually looked at real benchmark scores? In most non-synthetic workloads, the most common M1 chips have only about a quarter of the single- or multithreaded performance of the big boys (i7s and Threadrippers).

 

 

Just wait a bit on that one. The big limitation for x86-64 chips is particle physics; the big limitation for ARM is that no one has tried big, fast ARM chips before.

Let's just see what happens with the benchmarks over the next decade. Apple's ARM chips have HUGE ground to cover to catch up to current x86-64 offerings.

Re: just better

While I agree, your example compares a thin-and-light laptop CPU to an enterprise-level CPU. Compared thin-and-light to thin-and-light, the M1 does really well. Scalability may be a factor, though; then again, enterprise-level RISC chips are being made, which implies it doesn't have to be.

 

Re: too early to say

Again, I agree, but while show-stopping bugbears may yet emerge, you're attempting to predict the life expectancy of x86. I would say that is unknown as well.

Not a pro, not even very good.  I’m just old and have time currently.  Assuming I know a lot about computers can be a mistake.

 

Life is like a bowl of chocolates: there are all these little crinkly paper cups everywhere.


41 minutes ago, Bombastinator said:

Re: just better

While I agree, your example compares a thin-and-light laptop CPU to an enterprise-level CPU. Compared thin-and-light to thin-and-light, the M1 does really well. Scalability may be a factor, though; then again, enterprise-level RISC chips are being made, which implies it doesn't have to be.

 

Re: too early to say

Again, I agree, but while show-stopping bugbears may yet emerge, you're attempting to predict the life expectancy of x86. I would say that is unknown as well.

Well, x86 lasted far longer than Intel expected. Not bad for a stopgap design. Everyone in the industry, even Intel, expected x86 to be replaced by RISC and VLIW ISA designs.

 

AMD gave the IA-32 ISA a new lifespan by extending it to 64 bits.


Most likely 

Phone 1 (Daily Driver): Samsung Galaxy Z Fold2 5G

Phone 2 (Work): Samsung Galaxy S21 Ultra 5G 256gb

Laptop 1 (Production): 16" MBP2019, i7, 5500M, 32GB DDR4, 2TB SSD

Laptop 2 (Gaming): Toshiba Qosmio X875, i7 3630QM, GTX 670M, 16GB DDR3


8 minutes ago, whm1974 said:

Not bad for a stopgap design.

What are you talking about?

A PC Enthusiast since 2011
AMD Ryzen 7 5700X@4.65GHz | GIGABYTE GTX 1660 GAMING OC @ Core 2085MHz Memory 5000MHz
Cinebench R23: 15669cb | Unigine Superposition 1080p Extreme: 3566

20 minutes ago, Vishera said:

What are you talking about?

Might be referring to AMD64. 

Not a pro, not even very good.  I’m just old and have time currently.  Assuming I know a lot about computers can be a mistake.

 

Life is like a bowl of chocolates: there are all these little crinkly paper cups everywhere.


38 minutes ago, Vishera said:

What are you talking about?

https://en.wikipedia.org/wiki/Intel_8086

Quote

The 8086 project started in May 1976 and was originally intended as a temporary substitute for the ambitious and delayed iAPX 432 project. It was an attempt to draw attention from the less-delayed 16- and 32-bit processors of other manufacturers (such as Motorola, Zilog, and National Semiconductor) and at the same time to counter the threat from the Zilog Z80 (designed by former Intel employees), which became very successful. Both the architecture and the physical chip were therefore developed rather quickly by a small group of people, and using the same basic microarchitecture elements and physical implementation techniques as employed for the slightly older 8085 (and for which the 8086 also would function as a continuation).

See? A stopgap...

