Future of x86

Pc6777
30 minutes ago, Commodus said:

That's not actually true. We've seen M1-based Macs trounce comparable x86 PCs, and even some ostensibly more powerful ones, in a variety of real-world tests. And those apps that don't run aren't limited by ARM, they're typically limited by having to run highly x86-specific code (like virtual machines).

 

Maybe you're just used to the shoddy state of Windows on ARM, where few things run natively and emulation is poor... but that doesn't represent ARM as a whole.

I was gonna point that out. ARM suits the MacBook since macOS is already more limited in applications, while Windows has been built on x86 and x64 software for a long time, so the transition might be hard.


6 minutes ago, ComputerBuilder said:

I was gonna point that out. ARM suits the MacBook since macOS is already more limited in applications, while Windows has been built on x86 and x64 software for a long time, so the transition might be hard.

To be clear, it's not about the number of apps available — it's about how well Apple, Microsoft and Qualcomm handle compatibility.

 

Apple both engineered the M1 to more elegantly handle non-native code and is much more experienced with platform transitions, having first released Rosetta and 'fat' binaries (where an app bundle runs on multiple chip architectures) back in 2005-2006. You can launch an app and expect it to run well unless you have explicit reason to believe otherwise.
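As an aside on those 'fat' binaries: the container format itself is simple enough to sketch. Here's a minimal, illustrative Python parser for the Mach-O universal-binary header (the constants follow Apple's <mach-o/fat.h>; the header bytes below are synthetic, built purely for the demo, not taken from a real app):

```python
import struct

# Mach-O "fat" (universal) binary constants, per Apple's <mach-o/fat.h>
FAT_MAGIC = 0xCAFEBABE          # big-endian magic for universal binaries
CPU_TYPE_NAMES = {
    0x01000007: "x86_64",       # CPU_TYPE_X86_64
    0x0100000C: "arm64",        # CPU_TYPE_ARM64
}

def fat_archs(data: bytes):
    """Return the architecture names listed in a fat-binary header."""
    magic, nfat = struct.unpack_from(">II", data, 0)
    if magic != FAT_MAGIC:
        raise ValueError("not a fat binary")
    archs = []
    for i in range(nfat):
        # Each fat_arch entry: cputype, cpusubtype, offset, size, align
        cputype, _sub, _off, _size, _align = struct.unpack_from(
            ">IIIII", data, 8 + i * 20)
        archs.append(CPU_TYPE_NAMES.get(cputype, hex(cputype)))
    return archs

# Build a synthetic two-slice header (just the table, no real code slices)
header = struct.pack(">II", FAT_MAGIC, 2)
header += struct.pack(">IIIII", 0x01000007, 3, 4096, 100, 12)  # x86_64 slice
header += struct.pack(">IIIII", 0x0100000C, 0, 8192, 100, 14)  # arm64 slice

print(fat_archs(header))  # ['x86_64', 'arm64']
```

The loader just picks whichever slice matches the machine's CPU, which is why one app bundle can run natively on both architectures.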

 

Qualcomm doesn't have that hardware element to improve non-native code handling, so there's a huge performance hit when running x86 code. And I'll be blunt: Microsoft's approach to emulating x86 on ARM sucks. It only just added 64-bit emulation at the very end of 2020, and even then only in preview form.

 

You see what I'm getting at?  The transition to ARM on Windows has floundered because Microsoft and Qualcomm keep screwing it up, more than anything. Legacy apps might keep some people on x86 for now, but if Microsoft and Qualcomm don't get their act together, that argument won't hold water for long. 


2 hours ago, Commodus said:

Qualcomm doesn't have that hardware element to improve non-native code handling, so there's a huge performance hit when running x86 code. 

They actually do. There have been some claims floating around (I think even LTT incorrectly said it) that the M1 has special hardware for handling the mismatched memory models between x86 and ARM, but that is actually a standard ARM ISA feature, implemented in newer ARM cores as well.
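To illustrate why that ISA feature matters for emulation: without hardware total-store-ordering, a translator has to emit ordering-enforcing memory accesses (ARMv8's ldar/stlr load-acquire/store-release instructions) in place of ordinary loads and stores to preserve x86 semantics. This toy Python sketch is made up purely for illustration (the x86 op names and the translator are not from any real emulator):

```python
# Toy sketch: why a hardware TSO mode helps an x86-on-ARM emulator.
# Without it, every x86 load/store must become an ordering instruction;
# with it, plain loads/stores preserve x86 memory-ordering semantics.
def translate(x86_ops, tso_mode: bool):
    out = []
    for op in x86_ops:
        if op == "mov_load":
            out.append("ldr" if tso_mode else "ldar")   # ldar = load-acquire
        elif op == "mov_store":
            out.append("str" if tso_mode else "stlr")   # stlr = store-release
        else:
            out.append(op)  # non-memory ops pass through unchanged
    return out

trace = ["mov_load", "add", "mov_store"]
print(translate(trace, tso_mode=True))    # ['ldr', 'add', 'str']
print(translate(trace, tso_mode=False))   # ['ldar', 'add', 'stlr']
```

The acquire/release forms are slower than plain accesses on most cores, which is where a chunk of the emulation overhead comes from.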

I would not be surprised if Microsoft doesn't use those instructions in Windows, though. They wrote their x86 compatibility layer for the Snapdragon 835, and since Windows on ARM hasn't really seen much success, I wouldn't be surprised if they never updated the compatibility layer to take advantage of newer instructions.

 

Plus, the cores Qualcomm uses suck ass.


The architecture is not going anywhere anytime soon. However, the market share of ARM-based chips is only going to rise. The caveat is that the best ARM desktop-class CPU (the Apple M1) still needs special x86 instruction accelerators to be a viable replacement.

 

Apple is currently the market leader in ARM processors, and I don't know of any challengers coming to market in the near future. But if Apple is operating alone in this market, the pressure to change the rest of the industry is very small... unless, of course, Apple somehow manages to capture a significant portion of laptop and desktop sales.


12 hours ago, DrMacintosh said:

The caveat is that the best ARM desktop-class CPU (the Apple M1) still needs special x86 instruction accelerators to be a viable replacement.

And I wouldn't call it a desktop-class CPU.

 

ARM's main advantage is power efficiency. 

 

x86 laptop CPUs are crippled in many ways (which reduces their performance) for the sole purpose of limiting their power consumption, and it is only against those CPUs (and under specific scenarios) that the M1 trades blows.

 

It is a very cool chip (no pun intended), with reduced thermals and huge battery life despite the small battery size! So for a laptop/tablet scenario it is ideal.

 

 

Against a modern desktop x86 CPU (where power consumption and sufficient cooling are not an issue), it simply can't compete, at least not in terms of performance. Maybe it could work as a cheaper alternative, trading less performance for a smaller price tag, if you are willing to live with whatever compatibility issues occur. For that reason alone, the best-case scenario I predict is ARM taking over next-gen consoles, since consoles are a closed environment where compatibility can be achieved 100%. But for desktops, I don't think so.

 

 


3 hours ago, papajo said:

Against a modern desktop x86 CPU (where power consumption and sufficient cooling are not an issue), it simply can't compete, at least not in terms of performance. Maybe it could work as a cheaper alternative, trading less performance for a smaller price tag, if you are willing to live with whatever compatibility issues occur. For that reason alone, the best-case scenario I predict is ARM taking over next-gen consoles, since consoles are a closed environment where compatibility can be achieved 100%. But for desktops, I don't think so.

What do you base that opinion on?

The M1 is able to compete with desktop x86 CPUs all the way up to 4-thread workloads. It's only when you start comparing something like 8 x86 cores to 4 Firestorm cores that x86 starts pulling ahead.

 

Core for core, Firestorm is more than capable of competing with even the best desktop CPUs from Intel and AMD.


1 hour ago, LAwLz said:

The M1 is able to compete with desktop x86 CPUs all the way up to 4-thread workloads.

So we are talking about CPUs that were made like half a decade ago 

 

1 hour ago, LAwLz said:

Core for core, Firestorm is more than capable to compete with even the best desktop CPUs from Intel and AMD

On what do you base that? 

 

1 hour ago, LAwLz said:

What do you base that opinion on?

On comparing it with a humble four-year-old Intel refresh, namely the i7-8700K.


8 minutes ago, papajo said:

So we are talking about CPUs that were made like half a decade ago 

No, we are talking about Zen 3, which was released 3 months ago.

 

9 minutes ago, papajo said:

On what do you base that? 

Pretty much every single benchmark that has been shown, such as SPEC, dav1d, CineBench, a variety of browser based benchmarks, GeekBench, Clang, HandBrake, SciMark, PyPerformance, SQLite, and several more.

 

12 minutes ago, papajo said:

On comparing it with a humble four-year-old Intel refresh, namely the i7-8700K.

Good thing other people have access to newer chips and can test them for us. Firestorm stacks up very well against Zen 3 cores as well.


9 minutes ago, LAwLz said:

Pretty much every single benchmark that has been shown, such as SPEC, dav1d, CineBench, a variety of browser based benchmarks, GeekBench, Clang, HandBrake, SciMark, PyPerformance, SQLite, and several more.

First of all, most synthetic benchmarks in which you don't use 100% of the CPU, and in which you can't see work done per unit of time, are worthless. Especially Geekbench, which nobody cared about and which only became prominent when the M1 launched...

 

Having said that, what are the results you speak of in SQLite, PyPerformance, Cinebench, and HandBrake? (Although HandBrake wouldn't be a fair comparison, since it would probably use the GPU's dedicated video encoders, I would still like to see it.)

 

9 minutes ago, LAwLz said:

Good thing other people have access to newer chips and can test it for us

Like who? YouTube is filled with "Apple" channels that praise the M1 chip by comparing it with older Apple laptops, or with laptops in general. I haven't seen a head-to-head comparison of real-life workloads against desktop CPUs. (By the way, I don't say this as an excuse, since I believe they will fare better, but AMD CPUs have lower IPC per core. And since you mention Zen 3: I think you mean Ryzen 3000, because I doubt it could compare with Zen 3, which is the 5000 series.)

 

 


3 hours ago, papajo said:

First of all, most synthetic benchmarks in which you don't use 100% of the CPU

It's usually the other way around. Real-world workloads are very mixed and can run into several bottlenecks that are not CPU related. Or they aren't heavily parallelized, and you run into issues there.

 

 

3 hours ago, papajo said:

and in which you can't see work done per unit of time, are worthless

What do you mean?

 

3 hours ago, papajo said:

especially Geekbench, which nobody cared about and which only became prominent when the M1 launched...

1) Just because you didn't know about Geekbench doesn't mean it wasn't prominent before, or that it is worthless. Here are a couple of posts from me where I talk about it, before the M1 came out, in case you think it is just some flavor of the month and therefore not good. Here is a comment from me in 2012, where I mention Geekbench because The Verge used it in a review. Here is a post where I mention it in 2013. And if you search on this forum you will find a lot of posts about it, going as far back as 2013.

2) Just because something becomes popular doesn't mean it is bad.

 

3 hours ago, papajo said:

Having said that, what are the results you speak of in SQLite, PyPerformance, Cinebench, and HandBrake? (Although HandBrake wouldn't be a fair comparison, since it would probably use the GPU's dedicated video encoders, I would still like to see it.)

I'll have to look up each individual benchmark if you want that. Or you can just type in "M1 benchmark" and look for it yourself if there are some specific programs you want to see.

Here are some benchmarks for real-world programming tasks.

Here are dav1d results.

A bunch of results, including a HandBrake result, although it is against a Zen 3 processor with twice the number of cores.

SPEC2006 and SPEC2017 results.

 

 

  

3 hours ago, papajo said:

Like who? YouTube is filled with "Apple" channels that praise the M1 chip by comparing it with older Apple laptops, or with laptops in general. I haven't seen a head-to-head comparison of real-life workloads against desktop CPUs. (By the way, I don't say this as an excuse, since I believe they will fare better, but AMD CPUs have lower IPC per core. And since you mention Zen 3: I think you mean Ryzen 3000, because I doubt it could compare with Zen 3, which is the 5000 series.)

No I mean zen3, as in Ryzen 5000.

A Firestorm core at ~3.2GHz (~7 watts) is competitive with a Zen 3 core at ~4.7GHz (~40 watts).


1 hour ago, LAwLz said:

It's usually the other way around. Real-world workloads are very mixed and can run into several bottlenecks that are not CPU related. Or they aren't heavily parallelized, and you run into issues there.

I think you're just desperately trying to defend your side of the argument here... Saying that synthetic benchmarks are more reliable than actual workload tests (the things people will actually use the CPUs for) is like saying exit polls are more reliable than actual elections 😛

 

Besides that, there is a technical aspect involved. If you want to find out which CPU is more performant, you have to utilize both at 100% and measure how much time each takes to complete the same task. Many synthetic benchmarks don't utilize the CPU at 100% (e.g. Geekbench), and their results can be heavily influenced by software optimization differences or by other hardware aspects such as RAM speed and latency. Plus, their scores are an arbitrary number, not something tangible like X calculations per second; it's more like "here is your score, 1212312, deal with it" 😛

 

1 hour ago, LAwLz said:

What do you mean?

 

As I said, the CPUs must both be used at 100%, and the result must be tangible: e.g. it rendered the picture in X seconds, exported the files in X seconds, or calculated X digits of pi in Y seconds, and so on and so forth.
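Something like this is what I mean by a tangible result; a minimal Python sketch (the workload here is an arbitrary stand-in for "calculate X digits of pi"; any fixed CPU-bound task works):

```python
import time

def stand_in_task(n: int) -> int:
    """Stand-in CPU-bound workload: sum of squares of 0..n-1."""
    return sum(i * i for i in range(n))

def benchmark(task, n: int):
    """Run a fixed amount of work at full utilization of one core,
    reporting both elapsed wall time and work done per unit of time."""
    start = time.perf_counter()
    result = task(n)
    elapsed = time.perf_counter() - start
    return {"result": result, "seconds": elapsed, "ops_per_sec": n / elapsed}

stats = benchmark(stand_in_task, 1_000_000)
print(f"{stats['seconds']:.3f} s, {stats['ops_per_sec']:.0f} ops/s")
```

The point being: the same fixed task on two CPUs gives you a directly comparable "X units of work in Y seconds" figure, rather than an opaque score.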

 

1 hour ago, LAwLz said:

Just because you didn't know about Geekbench doesn't mean it wasn't prominent before, or that it is worthless.

There were basically zero mentions of Geekbench in, for example, LTT videos, other YouTube channels, or forums before the advent of iPhone comparisons and the M1 chip.

 

And it doesn't utilize the CPU at 100% for the entirety of the test, plus it gives out a generic number.

 

1 hour ago, LAwLz said:

I'll have to look up each individual benchmark if you want that. Or you can just type in "M1 benchmark" and look for it yourself if there are some specific programs you want to see.

Here are some benchmarks for real-world programming tasks.

Here are dav1d results.

A bunch of results, including a HandBrake result, although it is against a Zen 3 processor with twice the number of cores.

SPEC2006 and SPEC2017 results.

 

So basically a bunch of obscure benchmarks that don't use 100% of the CPU and are probably affected by other factors as well: OS/software optimization, plus RAM latency and speed (the M1 has very fast, low-latency 4266MHz RAM, whereas the 3800X in your first link was paired with 3200MHz RAM of unknown latency). We are not talking about products here but about architectures and/or each CPU in itself.

 

And the only practical benchmarks are that dav1d one, shown in its best light (a single-core performance comparison against a laptop CPU, which the M1 still lost, albeit very narrowly), and the LTT Cinebench benchmarks, where it got crushed. Also, the "theoretical" chart that takes the M1's score and doubles it is just a blunder, because a) the score being doubled was produced using all 8 cores, not only the 4 fast cores, and b) it doesn't work like that: 8 Firestorm cores would not fit in the package (note that the die area of the Icestorm cores, had they not been present, is about a third of the die area of the Firestorm cores).

[Image: M1 die shot]

 

And because of the cache being shared between the extra cores.

 

And that HandBrake score, in which it again loses both when using the CPU cores and when using its GPU, at almost double the time compared to the Ryzen chip, which only has 6 cores (instead of 8). And no, you can't pick and choose: we are talking about architecture here, and this is how it's done. The 6 cores are inside the Ryzen's package, the 8 cores are inside the M1's package, and they couldn't fit more cores if they'd liked to unless they moved to a smaller lithography node. (There are other technical reasons too; e.g., that's why AMD can't fit all of a Threadripper's cores in one chip and breaks them down into multiple chips, introducing latency and performance loss by doing so.)

 

Edit: oh, I forgot the other real scenario tested in the LTT video, Blender, in which the M1 again loses, this time by almost double the time.

 

It's a nice chip and will disrupt the laptop market for sure due to its snappiness and power efficiency, but it's not a desktop-class CPU.

