
Apple M1 = the rest of us are living in the stone age!?

3 hours ago, leadeater said:

I'm not sure how Apple is going to handle development and release cycles of future CPUs, as it actually wouldn't be possible for them to support another product right now; doing so would just make all products supply-limited, which is something Apple wouldn't want.

That kind of resource and supply management was Tim Cook's trademark before he became CEO. I suspect all this M1 stuff was just so he could play with timetables again instead of another Apple Watch 🙂

🖥️ Motherboard: MSI A320M PRO-VH PLUS  ** Processor: AMD Ryzen 2600 3.4 GHz ** Video Card: Zotac Nvidia GeForce GTX 1070 Ti 8GB 🖥️
🖥️ Memory: 32GB DDR4 2400  ** Power Supply: Thermaltake 650 W 80+ Bronze 🖥️

🍎 2012 iMac i7 27";  2007 MBP 2.2 GHz; Power Mac G5 Dual 2 GHz; B&W G3; Quadra 650; Mac SE 🍎

🍎 iPad Air2; iPhone SE 2020; iPhone 5s; AppleTV 4k 🍎


4 hours ago, Letgomyleghoe said:

I feel like a lot of that "snappiness" is OS related; I felt that same feeling changing my DNS server to Cloudflare's 1.1.1.1 and using Linux.

My wife has a 2020 i7 16" MacBook Pro, on the same OS version. It does not have the M1 general snappiness. 

 

Whole house is on 1.1.1.1, at the router level. 


On 11/26/2020 at 4:07 PM, gal-m said:

I've really been impressed with the Apple M1 chip. It seemed too good to be true at first, but after researching it extensively it looks like the claims actually do hold up.

 

My question is, what's next? Can Apple Silicon wipe everything else out? I've recently been thinking about building a new high-end gaming system, but now it just seems like Apple might speed past everything in the years to come (in reference to single-thread performance at least). The whole thing is making me extremely discouraged to invest in any type of Windows machine at the moment... Yes, I DO know that Windows and Macs don't compare directly depending on a person's specific workload, but I think you get what I'm trying to say... it's just making x86 CPUs feel a bit old :(...

 

EDIT: I am NOT saying I want to play games on a Mac. I am simply stating that the potential of Apple's new chips to wipe every other Intel or AMD chip off the face of the planet might move manufacturers like Intel or AMD to start looking at developing an ARM-based chip as well, and the potential that would have in a Windows machine...

 

Anyways, what do you guys think?

There is no apples-to-apples comparison (no pun intended).

 

Apple's ARM chip is an ARM chip, totally different from an x86 chip.

 

It does less, provides less functionality and has a limited instruction set. Yes, that allows it to be faster sometimes, under specific scenarios, by using "simpler" programs or simplified ports for ARM (also take generic benchmarks here with a pinch of salt, especially Geekbench, which was a "nobody" and suddenly came to light because it gives high scores to ARM chips, and arbitrary ones at that, in the sense that it doesn't measure any performance unit per second but just gives a number as a score...).

 

The only reason for the industry to be taken over by ARM would be a financial one and not a technical one (e.g. more people start buying Apple and hence companies start making "me too" ARM laptops, etc.).

 

There is no way in the foreseeable future to have an ARM chip with all the capabilities of (and compatibility with) an x86 CPU.

 

Its use case is nice for power-critical applications though: because it is simpler (also cheaper to produce) it doesn't consume much power, and if it can run basic stuff snappily enough (<-- Apple's entire bet in making this chip) then it's a win-win.


8 hours ago, NunoLava1998 said:

And one reason for that is probably how well the M1 is optimized for web browsing. On Speedometer 2.0 the M1... well

image.thumb.png.d497e820153c6c32266136f407c2f4b9.png

image.png.d0f61d458e63e39c577e3d6136289ab1.png

This was exactly my point when I was mentioning

19 minutes ago, papajo said:

(also take generic benchmarks here with a pinch of salt, especially Geekbench, which was a "nobody" and suddenly came to light because it gives high scores to ARM chips, and arbitrary ones at that, in the sense that it doesn't measure any performance unit per second but just gives a number as a score...)

 

What is this test? What exactly does it test? It doesn't say.

 

I tried it 3 times on a stock 6-core 8700K with 3 different browsers:

 

Chrome scored 78.9 

Edge scored 137.5

Firefox scored 110 

 

So besides it being browser-dependent (seeing that my poor 6-core 8700K "outperformed" the 5900X and also its Core i9 bigger brother, which is obviously not the case)...

 

It surely is also internet-speed dependent, especially in terms of ping to that website's particular server...

 

So after the M1 was rumored to release, a ton of "nobody" synthetic tests have surfaced, which are all garbage IMHO.

 

They tell you nothing useful; they do "secret" stuff (you don't know exactly what) and calculate a number...

 

 


4 minutes ago, papajo said:

This was exactly my point when I was mentioning

 

What is this test? What exactly does it test? It doesn't say.

 

I tried it 3 times on a stock 6-core 8700K with 3 different browsers:

 

Chrome scored 78.9 

Edge scored 137.5

Firefox scored 110 

 

So besides it being browser-dependent (seeing that my poor 6-core 8700K "outperformed" the 5900X and also its Core i9 bigger brother, which is obviously not the case)...

 

It surely is also internet-speed dependent, especially in terms of ping to that website's particular server...

 

So after the M1 was rumored to release, a ton of "nobody" synthetic tests have surfaced, which are all garbage IMHO.

 

They tell you nothing useful; they do "secret" stuff (you don't know exactly what) and calculate a number...

 

 

There's an About section, which explains in a lot of detail what the benchmark is doing. It's quite a lot so I won't really try to explain it here, but it is pretty browser-dependent (which explains why those results vary so much). I tested mine on Safari but Chrome is close too.

I can also confirm that it loads webpages quite a lot faster than my Ryzen 5 1500X PC; it's very noticeable.

 

What I meant to say though is not that the 8700K, or the 5950X, or any other CPU really is slower than the M1; that's pretty wrong. I meant more that Apple has optimized this and their previous mobile chips (A14, A13 apparently get pretty high scores too) for web browsing, and it makes the overall experience feel very smooth when it comes to that.

Ryzen 7 3700X / 16GB RAM / Optane SSD / GTX 1650 / Solus Linux


1 minute ago, NunoLava1998 said:

There's an About section, which explains in a lot of detail

Nope, it doesn't. It just vaguely describes it.

 

If it were a gaming benchmark of a game you cannot see on screen, it would be like saying "we are using DirectX and lots of PhysX scripts" and so on and so forth... If you don't know exactly what it does (what the workload is, how it is executed, etc.) then it's useless.

 

This becomes evident because I reran the tests (again getting variance in the scores) but with Task Manager open, and the CPU usage was like 3 to 5%, so without knowing how many threads it invokes and what those threads do, it is meaningless.
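For reference, here is a minimal sketch of how one could log that instead of eyeballing Task Manager (assuming Python with the third-party psutil package installed; the 60-second window and the 50% threshold are arbitrary choices of mine, not anything the benchmark defines):

```python
import psutil  # pip install psutil

# Sample per-core CPU utilization once a second for ~60 seconds while the
# benchmark tab is running, to see how many cores it actually keeps busy.
for _ in range(60):
    per_core = psutil.cpu_percent(interval=1, percpu=True)
    busy = sum(1 for pct in per_core if pct > 50)
    print(f"avg {sum(per_core) / len(per_core):5.1f}%  "
          f"cores >50%: {busy:2d}  per-core: {per_core}")
```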

4 minutes ago, NunoLava1998 said:

I can also confirm that it loads webpages quite a lot faster than my Ryzen 5 1500X PC; it's very noticeable.

I doubt that; if that is the case then you have either an internet connection issue or slow RAM/HDD on that PC.

On a 50 Mbps connection (mine), everything loads in less than a second even on my potato laptop. But opening a new tab on my potato laptop is slower (due to RAM), and opening the browser, if it wasn't already open, is slower too (due to a slow SSD).

 


3 minutes ago, papajo said:

Nope, it doesn't. It just vaguely describes it.

 

If it were a gaming benchmark of a game you cannot see on screen, it would be like saying "we are using DirectX and lots of PhysX scripts" and so on and so forth... If you don't know exactly what it does (what the workload is, how it is executed, etc.) then it's useless.

 

This becomes evident because I reran the tests (again getting variance in the scores) but with Task Manager open, and the CPU usage was like 3 to 5%, so without knowing how many threads it invokes and what those threads do, it is meaningless.

Can't really say I disagree with you, I just find it pretty fast for web browsing. The Octane 2.0 score I got was around 62,000, if you find that more reliable.

3 minutes ago, papajo said:

I doubt that; if that is the case then you have either an internet connection issue or slow RAM/HDD on that PC.

On a 50 Mbps connection (mine), everything loads in less than a second even on my potato laptop. But opening a new tab on my potato laptop is slower (due to RAM), and opening the browser, if it wasn't already open, is slower too (due to a slow SSD).

Said PC has 16GB of RAM, a lightweight-ish Linux distro, an Optane 800P SSD, and a wired 500 Mbps connection. My MacBook Air has 8GB of RAM, a regular NVMe(?) SSD (around 2000-2500 MB/s) and a wireless connection (which yields around 90 Mbps).

The difference I'm talking about here is 0.5-1 s down to basically instant for smaller websites, and 1-2 s for, say, Twitter down to just under 1 s.

Ryzen 7 3700X / 16GB RAM / Optane SSD / GTX 1650 / Solus Linux


27 minutes ago, NunoLava1998 said:

Said PC has 16GB of RAM, a lightweight-ish Linux distro

Yeah, I did not account for that; when I was comparing differences in loading etc. I had Windows PCs in mind. Maybe the difference has to do with the different OS as well, Mac vs. your Linux distro.

 

28 minutes ago, NunoLava1998 said:

The Octane 2.0 score I got was around 62,000

Could you post a screenshot with the tiles that have the individual results? It takes a mean of all the results, and most of mine were over 60k, but a couple were well under.

 

I would like to see what the numbers are in your case, to see in which parts of this test (which is retired BTW, meaning its code doesn't get updated) it fares better. Thanks.


2 minutes ago, papajo said:

Could you post a screenshot with the tiles that have the individual results? It takes a mean of all the results, and most of mine were over 60k, but a couple were well under.

 

I would like to see what the numbers are in your case, to see in which parts of this test (which is retired BTW, meaning its code doesn't get updated) it fares better. Thanks.

Sure!

image.thumb.png.7e9c786a4ab3f65b0443c5bc74b78222.png

Ryzen 7 3700X / 16GB RAM / Optane SSD / GTX 1650 / Solus Linux


I'm not sure how those of us who are using x86-64-based modern computers are living in the Stone Age by not replacing them with Apple's M1 SoC...

 

Most people don't even own a Mac anyway.


I've noticed some Windows PC fans react to the M1 like Luke reacted to news Darth Vader was his dad... "no, that's not true! That's impossible!"

 

We do have to be careful about hyping up Apple Silicon beyond its actual capabilities, but there is this contingent of Windows supporters that seems almost pathologically averse to acknowledging that the M1 is better at some tasks. Like recognizing this would be tantamount to saying they backed the "wrong" computer platform. That's not necessarily true, certainly not if you're a gamer, but the discomfort is tangible.

 

You see the same kind of response when PC gamers insist computers are always better than consoles at everything, or even when a demagogue politician they've hyped up turns out to be worse than the status quo leader that demagogue replaced. It's that reluctance to accept your world view needs to change, that the person or thing you revere isn't all it's cracked up to be.


OK, so first things first: I tried this test with a variety of browsers, and even downloaded some I didn't have (Vivaldi, Opera, Firefox, Chrome, Chromium, the latest compatible version of Safari), and the results were all over the place (25k to 53k). Even rerunning the same test on the same browser, I get a delta of more than a couple of thousand.

 

All this while CPU utilization wasn't even close to 50%, usually hovering below 20%.

 

That leads me to believe that the scores are mostly dependent on the OS and software code efficiency rather than the hardware (namely the CPU) itself.

 

Now, to see the differences in the tiles, I paste here my best score with Chrome:

 

cf45d619950601e5faf928d8c32b6150.png

 

 

In which we see that it does take memory read/write latency into account, but without adjusting for it (since it takes the geometric mean, all values are just numbers without any other weight attached to them).

 

E.g. the SplayLatency and the MandreelLatency scores might be the result of a few ms of difference, which practically wouldn't mean much, yet the score generated for them can affect the overall mean drastically. On the other side, NavierStokes for example (which is floating-point math, an equation solver over double-precision arrays) might be more meaningful in terms of determining the calculating power of a CPU. Either way, we don't know what e.g. 43778 means... 43778 lines of code per second? 43778 rating stars given after measuring how much time it took to do the calculation? 😛
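To make the geometric-mean point concrete, here is a tiny illustration (the subscores are made up for the example, not Octane's real numbers; just Python's statistics module):

```python
from statistics import geometric_mean

# Hypothetical per-tile subscores, invented purely for illustration.
base = {
    "NavierStokes": 43000,
    "Crypto": 38000,
    "RayTrace": 90000,
    "SplayLatency": 30000,
    "MandreelLatency": 28000,
}
run_a = geometric_mean(base.values())

# Same run, but a few ms of extra jitter halves the two latency tiles;
# every compute tile is unchanged.
worse = dict(base, SplayLatency=15000, MandreelLatency=14000)
run_b = geometric_mean(worse.values())

print(f"run A: {run_a:,.0f}")
print(f"run B: {run_b:,.0f}")
print(f"overall drop: {100 * (1 - run_b / run_a):.1f}%")  # ~24% lower overall
```

Two latency tiles moved and the "overall" number dropped by roughly a quarter, even though the compute tiles didn't change at all.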

 

Hence these are meaningless comparisons. 

 

If you want to compare hardware in terms of performance, both need to be used at 100% and both need to do the same task, the result of which needs to return work (e.g. FPS, pixels, number of results from a calculation, etc.) per second as a unit/score.

 

Otherwise it just doesn't make sense.
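To illustrate the kind of work-per-second comparison I mean, here is a rough sketch (illustrative Python only, not a serious benchmark; the prime-counting task, the chunk size and the task count are arbitrary choices):

```python
import multiprocessing as mp
import os
import time

CHUNK = 200_000  # fixed amount of "work" per task; arbitrary for illustration

def task(_):
    # The same deterministic task on every machine: count primes below CHUNK
    # by trial division. Deliberately naive; the point is identical work.
    count = 0
    for n in range(2, CHUNK):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    workers = os.cpu_count()
    tasks = workers * 4              # enough tasks to keep every core pegged
    start = time.perf_counter()
    with mp.Pool(workers) as pool:
        pool.map(task, range(tasks))
    elapsed = time.perf_counter() - start
    # The unit is explicit: identical work items completed per second.
    print(f"{workers} workers: {tasks / elapsed:.2f} tasks/s "
          f"({elapsed:.1f} s for {tasks} tasks)")
```

Run the same script on both machines and compare the tasks/s figure; every core is loaded and the unit of work is identical on both sides.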


@NunoLava1998 Since, as I understand it, you have an M1 MacBook, what we could do to test the actual raw compute performance is run the same compute-intensive tests.

 

You could download Prime95 from here: https://www.mersenne.org/download/

 

Go to Options > Torture Test.

Then in the torture test threads field enter 8 and choose the 2nd option, "Small FFTs".

 

Here is my result 

 

rhgJtK1.png

(click to enlarge) 

 

Note that because you have 8 cores, but 4 of them are higher-performance than the other 4, check and expand some of the worker windows for the lower-performance workers as well to get a general idea (those will be the windows where "Self-test passed" took more time to appear). Don't do that while the test is running, so you don't skew the results by loading the PC with your input; just have one worker window open at the start to know when it completes the self-test as a reference, then go to the "Test" tab and click "Stop" to stop the test, and then you can expand the necessary windows (doing it in fullscreen will make things easier).

 

So for me, as seen in the screenshot, it took 7 minutes to finish 3 tests per worker (so 3*12 = 36 tests done in 7 minutes in total, since the 8700K has 12 threads in total) and 5 minutes to reach the "self-test passed" mark.
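If a matching Prime95 build isn't available on the other platform, a crude cross-platform stand-in for the same idea, not Prime95 itself, is to time a fixed-size FFT per worker. A sketch assuming Python with NumPy installed; the FFT length, iteration count and worker count are arbitrary picks:

```python
import multiprocessing as mp
import time

import numpy as np  # pip install numpy

FFT_LEN = 192 * 1024   # loosely mirrors the 192K "Small FFTs" size; arbitrary here
ITERATIONS = 2000      # fixed amount of work per worker, identical on every machine

def worker(seed):
    rng = np.random.default_rng(seed)
    data = rng.standard_normal(FFT_LEN)
    start = time.perf_counter()
    for _ in range(ITERATIONS):
        np.fft.rfft(data)          # same transform, same size, every iteration
    elapsed = time.perf_counter() - start
    return ITERATIONS / elapsed    # FFTs per second for this worker

if __name__ == "__main__":
    workers = 8                    # e.g. to match the 8 torture-test threads
    with mp.Pool(workers) as pool:
        rates = pool.map(worker, range(workers))
    for i, rate in enumerate(rates):
        print(f"worker {i}: {rate:,.1f} FFTs/s")
    print(f"aggregate: {sum(rates):,.1f} FFTs/s")
```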

 

 

Then what you could do is download Cinebench R20 (to have the same reference, so no other version) and run the benchmark (do the Prime95 one first please, so that your CPU will be kinda warm 😛).

 

here is mine 

 

k7n3wMF.png

 

Dunno why there is a slight deviation from the baked-in 8700K result, maybe it's because I have a liquid cooler 😛 but take whichever you like.

 

 

By doing these two tests we can get an idea of how it really performs in terms of "horsepower", doing calculations and rendering with the same code/program regardless of OS, while the CPUs in all cases reach 100% capacity. So, apples to Intels 😛


12 minutes ago, papajo said:

Note that because you have 8 cores, but 4 of them are higher-performance than the other 4, check and expand some of the worker windows for the lower-performance workers as well to get a general idea (those will be the windows where "Self-test passed" took more time to appear). Don't do that while the test is running, so you don't skew the results by loading the PC with your input; just have one worker window open at the start to know when it completes the self-test as a reference, then go to the "Test" tab and click "Stop" to stop the test, and then you can expand the necessary windows (doing it in fullscreen will make things easier).

 

So for me, as seen in the screenshot, it took 7 minutes to finish 3 tests per worker (so 3*12 = 36 tests done in 7 minutes in total, since the 8700K has 12 threads in total) and 5 minutes to reach the "self-test passed" mark.

I wasn't able to do Small FFTs, only Smallest FFTs, which gave me an FFT size of 4K, then 5K, etc.; that probably changes the results, as yours was 192K.

But it did run quite quickly (especially as it's using Rosetta, which means it's not even compiled for Apple Silicon)

Screenshot 2021-01-30 at 21.18.56.png

Ryzen 7 3700X / 16GB RAM / Optane SSD / GTX 1650 / Solus Linux


As for Cinebench R20, that doesn't have an Apple Silicon version, only R23 does. I can still run it, but the performance will be much worse.

I did take an R23 test when I first got my Mac and it was around 7500 multi-core and 1500 single-core, which is what most results online show.

1 minute ago, NunoLava1998 said:

(Previous reply)

 

Ryzen 7 3700X / 16GB RAM / Optane SSD / GTX 1650 / Solus Linux


3 minutes ago, NunoLava1998 said:

I wasn't able to do Small FFTs, only Smallest FFTs, which gave me an FFT size of 4K, then 5K, etc.; that probably changes the results, as yours was 192K.

But it did run quite quickly (especially as it's using Rosetta, which means it's not even compiled for Apple Silicon)

Screenshot 2021-01-30 at 21.18.56.png

Download the same file from the link so that we have the exact same version (you seem to have another version, or the command-line version).

 

Also, do not forget the Cinebench R20 one please 🙂


5 minutes ago, NunoLava1998 said:

As for Cinebench R20, that doesn't have an Apple Silicon version, only R23 does. I can still run it, but the performance will be much worse.

I did take an R23 test when I first got my Mac and it was around 7500 multi-core and 1500 single-core, which is what most results online show.

 

Well, that would serve its purpose. If you like, do the R23 as well and I will download it, but the R20 will be indicative of how the performance of an ARM chip will fare for actual programs, since another drawback with ARM is that it can't natively run normal programs; it only runs ARM-compatible programs natively.


OK, here is the R23 score. I didn't know how to make it run just 1 pass, so I set a custom time of 1 minute for it to run.

 

vZchIMN.png


5 hours ago, papajo said:

There is no apples-to-apples comparison (no pun intended).

 

Apple's ARM chip is an ARM chip, totally different from an x86 chip.

 

It does less, provides less functionality and has a limited instruction set. Yes, that allows it to be faster sometimes, under specific scenarios, by using "simpler" programs or simplified ports for ARM (also take generic benchmarks here with a pinch of salt, especially Geekbench, which was a "nobody" and suddenly came to light because it gives high scores to ARM chips, and arbitrary ones at that, in the sense that it doesn't measure any performance unit per second but just gives a number as a score...).

 

The only reason for the industry to be taken over by ARM would be a financial one and not a technical one (e.g. more people start buying Apple and hence companies start making "me too" ARM laptops, etc.).

 

There is no way in the foreseeable future to have an ARM chip with all the capabilities of (and compatibility with) an x86 CPU.

 

Its use case is nice for power-critical applications though: because it is simpler (also cheaper to produce) it doesn't consume much power, and if it can run basic stuff snappily enough (<-- Apple's entire bet in making this chip) then it's a win-win.

This is a naive understanding of the difference between RISC and CISC computers. I suggest you read “Instruction Sets and Beyond: Computers, Complexity, and Controversy” by Robert P. Colwell et al., published in the September 1985 issue of IEEE Computer. It's freely available on Google, just search the title (I'm not going to link it because I'm not 100% sure of the licensing of the article).

 

But essentially, RISC and CISC are just two different design philosophies for processors and represent the ends of a spectrum of design choices. 

 

15" MBP TB

AMD 5800X | Gigabyte Aorus Master | EVGA 2060 KO Ultra | Define 7 || Blade Server: Intel 3570k | GD65 | Corsair C70 | 13TB


  

26 minutes ago, Blade of Grass said:

published in the September 1985

Yeah, no, I will pass 😛

 

At that time those two architectures were actually competing with each other, and there is a reason x86 CISC CPUs became the norm. The Pentium 1 was released almost a decade after that publication.

 

Anyway, since NunoLava1998 hasn't updated his results for over an hour (I mean, I expected them to be slower, but not that slow 😛) I take it he just doesn't want to bother.

 

Which is a pity, because Google is filled mostly with best-case scenarios for the Apple M1, shadowing the downsides and highlighting the good points (e.g. no R20 scores, which is the industry standard; R23 was mainly released exactly to support ARM CPUs, etc.), and a more head-to-head comparison of raw compute power would be an ideal contribution to this thread.

 

 

So we don't know R20 scores, and we don't know actually-heavy Prime95 times (the few instances where Prime95 was used and published on the web were just to showcase the CPU throttling of the Apple x86 counterpart, without showing exact numbers, etc.)...

 

but we have R23 scores:

 

Apple_M1_Cinebench_R23_Benchmarks.jpg

 

So it scores about 3000 points less than an almost-4-year-old 6-core desktop CPU (the 8700K), which was itself a refresh of a refresh 😛

 

R20 scores would be less flattering for sure, and I believe the same test I proposed for Prime95 wouldn't fare that well either (it needs like a minute for 1 worker on the command-line version he uploaded to do the 4K stuff; I doubt it would finish 36 192K tests in 7 minutes as the 8700K did).

 

 

The conclusion is more or less as in my initial post above.

 

The M1 chip is a very power-efficient laptop chip that is snappy and will cost Apple less to produce, while for the layman it will provide everything, or almost everything, he needs in terms of compatibility and functionality.

 

Aside from that, everyone else doesn't need to worry: you don't live in the Stone Age, the M1 chip is just a little bit slower than a 4-year-old desktop CPU and was never meant to be faster.

 

It is just that Apple (a 2-trillion-dollar company) knows better than anybody else how to launch a marketing campaign and shape opinions, that's all; they did a great job at that (and, to be fair, they did a great job implementing compatibility for this chip: despite it lagging behind when one tries to run real programs with it, the emulation is the best that exists, or at least the best I have seen).

 

But it won't take over desktop CPUs and will never match them unless, as I already mentioned, the market shifts drastically and x86 chips stop being the focus of the industry in terms of R&D.

 

Magic does not exist; an ARM chip is an ARM chip. Only by buying ARM chips and not buying x86 anymore could things plausibly change; other than that, traditional CPUs have nothing to fear.


5 minutes ago, papajo said:

  

Yeah, no, I will pass 😛

 

At that time those two architectures were actually competing with each other, and there is a reason x86 CISC CPUs became the norm. The Pentium 1 was released almost a decade after that publication.

 

Anyway, since NunoLava1998 hasn't updated his results for over an hour (I mean, I expected them to be slower, but not that slow 😛) I take it he just doesn't want to bother.

 

Which is a pity, because Google is filled mostly with best-case scenarios for the Apple M1, shadowing the downsides and highlighting the good points (e.g. no R20 scores, which is the industry standard; R23 was mainly released exactly to support ARM CPUs, etc.), and a more head-to-head comparison of raw compute power would be an ideal contribution to this thread.

 

So we don't know R20 scores, and we don't know actually-heavy Prime95 times (the few instances where Prime95 was used and published on the web were just to showcase the CPU throttling of the Apple x86 counterpart, without showing exact numbers, etc.)...

 

but we have R23 scores:

 

Apple_M1_Cinebench_R23_Benchmarks.jpg

 

So it scores about 3000 points less than an almost-4-year-old 6-core desktop CPU (the 8700K), which was itself a refresh of a refresh 😛

 

R20 scores would be less flattering for sure, and I believe the same test I proposed for Prime95 wouldn't fare that well either (it needs like a minute for 1 worker on the command-line version he uploaded to do the 4K stuff; I doubt it would finish 36 192K tests in 7 minutes as the 8700K did).

 

The conclusion is more or less as in my initial post above.

 

The M1 chip is a very power-efficient laptop chip that is snappy and will cost Apple less to produce, while for the layman it will provide everything, or almost everything, he needs in terms of compatibility and functionality.

 

Aside from that, everyone else doesn't need to worry: you don't live in the Stone Age, the M1 chip is just a little bit slower than a 4-year-old desktop CPU and was never meant to be faster.

 

It is just that Apple (a 2-trillion-dollar company) knows better than anybody else how to launch a marketing campaign and shape opinions, that's all; they did a great job at that (and, to be fair, they did a great job implementing compatibility for this chip: despite it lagging behind when one tries to run real programs with it, the emulation is the best that exists, or at least the best I have seen).

 

But it won't take over desktop CPUs and will never match them unless, as I already mentioned, the market shifts drastically and x86 chips stop being the focus of the industry in terms of R&D.

 

Magic does not exist; an ARM chip is an ARM chip. Only by buying ARM chips and not buying x86 anymore could things plausibly change; other than that, traditional CPUs have nothing to fear.

Your post has a ton of text I’m going to skip over—the problem is your ignorance of fundamental computer architecture, which the article I mentioned addresses specifically. Feel free not to read it, I’ll feel free to point out how incorrect your statements about RISC are. 

15" MBP TB

AMD 5800X | Gigabyte Aorus Master | EVGA 2060 KO Ultra | Define 7 || Blade Server: Intel 3570k | GD65 | Corsair C70 | 13TB


7 minutes ago, Blade of Grass said:

Your post has a ton of text I’m going to skip over—the problem is your ignorance of fundamental computer architecture, which the article I mentioned addresses specifically. 

OK, let me and the rest of the industry be ignorant and have made the bad decision of going with x86 all those decades...

 

You can't change history and you can't change results.


43 minutes ago, papajo said:

OK, let me and the rest of the industry be ignorant and have made the bad decision of going with x86 all those decades...

 

You can't change history and you can't change results.

Funnily enough if you read the article it talks about why CISC computers are prevalent in industry and RISC is prevalent in academia (historically, but it is also something which remains to this day). 
 

It’s not like modern x86 computers use RISC architecture internally anyways, right?

15" MBP TB

AMD 5800X | Gigabyte Aorus Master | EVGA 2060 KO Ultra | Define 7 || Blade Server: Intel 3570k | GD65 | Corsair C70 | 13TB


On 11/26/2020 at 6:13 AM, Ankh Tech said:

No matter what, ARM will not, never, replace x86. ARM is optimised for low-end, efficient systems. High end will never be achieved by ARM, whatever the case. x86, however, does both. Get a new Windows system. If ARM gets more advanced than it is now, x86 would've probably reached the too-good-to-be-true stage.

This is false. The Cortex-A series of processors that everyone puts in smartphones is optimized for that purpose, but ARM also has technologies designed for server workloads and for HPC. We will see how long Apple (and others) take before they start implementing things like ARM's Neoverse V2 and N1 cores. The V2 is a refreshed design of the V1 core, a line aimed straight at maximum performance rather than performance per watt, while the N1 was designed for efficient server workloads.


1 hour ago, Blade of Grass said:

RISC is prevalent in academia

Same reason why Python is more prevalent in academia than C++: it's easier to teach because it is simpler. That doesn't make it better.

 

Also, the ARM/RISC architecture is more open; popular CISC architectures have a ton of parts that are secret to the public (a.k.a. patented).

 

RISC CPUs can't even handle floating-point calculations on their own; they need a separate unit for that. Their main advantage (a reduced, and also more primitive, instruction set) is also the source of their shortcomings.

 

Since certain functions which require only one instruction on a CISC machine require two, three, or more RISC instructions, the resulting program code will be longer. More memory will have to be allocated to RISC programs. The instruction traffic between the memory and the CPU will be increased. A RISC program is about 30% longer than a CISC program for the same function. Although the combined execution time of the multiple RISC instructions may be shorter than the single CISC instruction, the question of program size is nonetheless a valid one. Larger programs not only require more memory for storage but are also more susceptible to page faults in virtual memory systems, and they decrease the hit ratio of instruction caches. (The apparent amount of main memory is increased by storing "pages" in secondary storage and swapping them in and out as needed.) And since RISC uses a large number of registers, complicated register address decoding is involved.

 

Furthermore, one reason for RISC to exist back then is that CISC CPUs didn't have caches at the time and needed to load everything from memory directly, which is not the case nowadays; plus, modern x86 CPUs make use of the basic idea behind the RISC advantages by implementing microcode where needed.

 

Other disadvantages off the top of my head:

It has hard-wired control (no microprogramming unit).

Highly pipelined; instructions can take several clock cycles (hence code expansion may create problems), therefore suffering from heavy use of RAM (which can cause bottlenecks, especially if RAM is limited).

Limited addressing modes

 

Long story short, if you ever wondered why an "app" (not a program; programs run on x86 CPUs) on your phone, despite being well funded/paid/popular, is shitty, does weird stuff or has limited capabilities, it is because it runs on an ARM chip.

 

ARM/RISC chips are very good at doing simple stuff (depending heavily on software optimization, extra software optimization compared to CISC) while needing less power to do so, due to their simplicity.

 

+ They are also cheaper and easier to manufacture, since Intel, last time I checked, doesn't give out licenses like ARM does, and in general there are not many patents involved other than the main ARM license.

 

Hence phones, Chromebooks and small mobile devices are manufactured with such chips.


This topic is now closed to further replies.

