
Is ARM the future?

curiousmind34
37 minutes ago, pmle6 said:

AMD64 is more of an extension to IA-32 than anything. IA-32 is still alive inside AMD64.

I know that. I am referring to x86-32 CPUs.


This was an interesting thread to read... (A lot of Rosetta 2, Apple, and general back-and-forth about RISC/CISC misconceptions...)

 

First off, my opinion on whether ARM is the future.

ARM isn't really the future, and I will get into why.

x86 is largely going to remain in the desktop and laptop scene, and will most likely stay relevant in servers. (Not that ARM is the main competitor here...)

RISC-V is currently getting a stranglehold on ARM's embedded applications. (Change is exceptionally slow in the embedded world, though; I know of plenty of embedded applications that still use 30-year-old processors...)

 

The reason ARM isn't really the future is similar to why Linux isn't the go-to OS for most people.

ARM isn't really "an ISA", but rather a family of somewhat similar instruction set architectures, and their differences make it fairly hard to just port software from one to another.

For example, the SoC used on the early Raspberry Pis can't run Android, despite Android running on "ARM" and the Raspberry Pi having an "ARM" processor.

 

ARM is fairly fragmented, similar to how Linux is fragmented across many different distros, each with its own pros, cons, and compatibility quirks.

 

x86 on the other hand isn't. One can take code written for it in the late 70s and "just run it" on modern processors. (Not quite in practice, for "security" reasons among others.)

Similar story for PowerPC IIRC...

RISC-V hasn't been around long enough yet to show fragmentation, but given its open-source nature and generally scatterbrained approach to targeting applications and development, it will likely fragment like a fine vase falling off the Empire State Building. (Just my 2 cents, though. RISC-V is currently focusing on the embedded side with laser focus, worming its way in as a replacement for the ESP32, the IoT chip of choice.)

 

Next we have the RISC/CISC misconceptions.

Higher end ARM processors aren't really RISC in their implementations these days.

Though, some people think that microcode on x86 makes it a "RISC" architecture. That is like saying the Burj Khalifa is a tiny house (a building with less than 25 m^2 of floor area) just because one removed its antenna (the antenna makes up about 30% of its height). It wouldn't be unfair to say that x86 is the most instruction-rich architecture in common use; removing a bunch of instructions through microcode doesn't change that much. (Not that instruction count is the only factor differentiating RISC and CISC architectures; they are also defined by overall system complexity and how code is required to interact with various resources.)

 

In regard to power efficiency, RISC isn't a silver bullet, and neither is CISC; it depends greatly on the application being run and how well optimized it is for the architecture one runs it on. (The software needs to be equally optimized on both platforms, otherwise it is an unfair comparison for obvious reasons.)

In regards to performance, it is a similar story.

From a pure computer science standpoint, on the other hand, a well-optimized CISC architecture will always offer more performance for less energy than a RISC architecture. But there are plenty of practical reasons why this argument isn't usable in practice.

 

Then we have the "RISC architectures are more memory intensive" claim.
It depends a bit on the application; a RISC architecture does indeed have simpler instructions. But this doesn't prevent a RISC architecture from implementing instruction compression, commonly known as microcode. Since the microcode expansion is done in the core itself, the application can get away with less memory bandwidth, because it doesn't need to spell out each individual step of the larger procedure. Instruction compression is used in most architectures, be it ARM, RISC-V, PowerPC, or even x86. (Using microcode to break down more complicated instructions into a string of simpler ones is another topic, though technically the same thing.)
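To make the "expansion happens inside the core" point a bit more concrete, here is a rough sketch in C of how a decoder expands one compressed RISC-V instruction (c.addi, from the "C" extension) into its full 32-bit addi form. (This is instruction compression in the ISA sense rather than microcode proper, and the function below is just my own illustration, not code from any real decoder.)

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Expand a 16-bit RISC-V C.ADDI into the 32-bit ADDI it stands for.
 * The program in memory stays 16 bits wide; the widening only happens
 * inside the decoder, which is the bandwidth argument made above. */
static uint32_t expand_c_addi(uint16_t c)
{
    assert((c & 0x3) == 0x1 && (c >> 13) == 0x0);  /* quadrant 1, funct3 = 000 */

    uint32_t rd  = (c >> 7) & 0x1f;                           /* rd == rs1 in the CI format */
    uint32_t imm = (((c >> 12) & 1) << 5) | ((c >> 2) & 0x1f);
    if (imm & 0x20)
        imm |= 0xfc0;                                         /* sign-extend the 6-bit immediate */

    /* I-type ADDI: imm[11:0] | rs1 | funct3=000 | rd | opcode 0010011 */
    return (imm << 20) | (rd << 15) | (rd << 7) | 0x13;
}

int main(void)
{
    /* c.addi a0, 1 (0x0505) expands to addi a0, a0, 1 (0x00150513). */
    printf("0x%08x\n", expand_c_addi(0x0505));
    return 0;
}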

 

At least I haven't seen anyone in the thread say "RISC architectures can execute code with fewer instructions!", since that is incorrect in most cases, but not always. (RISC/CISC doesn't state much about an implementation's ability to handle out-of-order execution, how many ports it can issue instructions over in a given cycle, how long it takes for a task to complete, etc. In short, a RISC architecture has fewer different instructions to implement and can therefore usually spend more of its transistor budget on making more copies of the instructions it does have, i.e. more execution units for them, giving a performance increase for applications making heavy use of those simpler instructions. Though, a CISC architecture can usually outperform it if its more complicated instructions are actually used. In short, RISC is good if one doesn't optimize one's code much. It is also usually easier for a compiler to generate code for a RISC architecture than for a CISC one, but this post is already fairly long, so I will spare you the details.)
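As a tiny illustration of the instruction-count point (my own made-up example, with only roughly what typical compilers emit): the same one-line C statement can become a single read-modify-write instruction on x86 but a load/op/store sequence on a load/store ISA, and that difference alone says little about actual performance.

/* Hypothetical example: add x to one element of an array. */
void bump(int *a, long i, int x)
{
    a[i] += x;
    /* On x86-64 a compiler can often fold this into one read-modify-write
     * instruction, roughly:
     *     add dword ptr [rdi + rsi*4], edx
     * (the scaled index is part of the addressing mode).
     * On a load/store ISA such as RV64 the same statement lowers to roughly:
     *     slli t0, a1, 2       # scale the index
     *     add  t0, a0, t0      # form the address
     *     lw   t1, 0(t0)       # load
     *     addw t1, t1, a2      # add
     *     sw   t1, 0(t0)       # store
     * More instructions, but not necessarily more cycles or more energy. */
}

int main(void)
{
    int v[4] = {0};
    bump(v, 2, 5);
    return v[2] == 5 ? 0 : 1;
}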

 

For higher-performance applications, x86 will remain fairly uncontested, even if an ARM-based system is currently the leader on the Top500 list. (Here the PowerPC architecture is already competing heavily with x86 as is, but for reasons outside of the RISC/CISC debate; PowerPC has a lot of interesting HPC-oriented features that are a topic in and of themselves...)

 

That Microsoft has tons of software using x86 isn't actually all that major a reason for x86's dominance on the market. Microsoft products make up a sizable section of the x86 market, but very far from all of it, especially in the server segment.

 

I somewhat want to go into more detail, but I don't think a deeper dive into architectures is really needed here, since it would just go off topic and raise more questions than it answers about how nuanced the RISC/CISC topic is.

 

In the end.
I don't expect ARM to actually be all that competitive in the PC space.
One can't really look at overall market share as indicative of the future either. The system requirements for a phone/tablet or even a TV aren't the same as for a desktop computer. The same applies to both the embedded world (though that is a whole category of subcategories; saying "embedded" is about as specific as saying "food" when talking about recipes) and the HPC world. Different applications have different requirements. ARM isn't really the tool that the PC scene is looking for, especially when one pushes more compute-heavy applications.


4 minutes ago, Nystemy said:

This was an interesting thread to read... (A lot of Rosetta 2, Apple, and general back-and-forth about RISC/CISC misconceptions...)

ARM isn't really the future, and I will get into why. [...] ARM isn't really the tool that the PC scene is looking for, especially when one pushes more compute-heavy applications.

A great deal of this jibes with what I have read earlier. There are a few things I have read that are slightly different, though. For one, while some early x86 stuff will run on modern processors, not all of it will, and source for ARM can often be compiled to run on other ARM systems. So perhaps it is less clear-cut than written. I don't know whether that disqualifies the premise or not, though.



8 minutes ago, Bombastinator said:

A great deal of this jibes with what I have read earlier. There are a few things I have read that are slightly different, though. For one, while some early x86 stuff will run on modern processors, not all of it will, and source for ARM can often be compiled to run on other ARM systems. So perhaps it is less clear-cut than written. I don't know whether that disqualifies the premise or not, though.

There is a reason why I add additional comments in the parentheses.

 

15 minutes ago, Nystemy said:

x86 on the other hand isn't. One can take code written for it in the late 70s and "just run it" on modern processors. (Not quite in practice, for "security" reasons among others.)

Both the quotation marks and the parenthetical indicate that this isn't really the case in practice, for various reasons. In theory the code can still run; the instructions are still there and handled in the same way. But getting it to run isn't always trivial, since boot-up procedures and surrounding software tend to interfere.

A lot of code written in the pre-IBM-PC days doesn't expect a BIOS, among other things, to exist; i.e. the code uses the processor in a way that typically conflicts with other software and features found on modern systems.

Even software made for the IBM PC and later systems can stumble into issues when it tries to do things that modern systems either don't support or do differently. Compatibility at the software level isn't really there in most cases. (And sometimes an old piece of software can run head first into some memory-mapped resource that it didn't know was a thing...)


I think if anyone can do it next to Apple, it's actually Linux and its community. Which is weird, but it's probably much more ARM-ready than Windows ever will be. Oddly enough.

 

Pricing schemes aren't in ARM's favor either. The whole thing is still immature, yet everyone charges like it's the most polished thing ever. Which is weird. Who's going to buy an ARM-based laptop for 1200€ or 1500€? You have to be mad, or super into exotic stuff with no regard for performance, especially when you can get a Ryzen 5000 series powerhouse that will just unconditionally work.


Please refer to

If the question is about efficiency...

Otherwise then I see a world where all kinds of architectures (e.g. quantum, ARM, x86, etc.) will serve their own purposes in the future, with some more optimized for certain tasks (e.g. mobile vs desktop) than others.


37 minutes ago, Nystemy said:

Then we have the "RISC architectures are more memory intensive" claim.
It depends a bit on the application; a RISC architecture does indeed have simpler instructions. But this doesn't prevent a RISC architecture from implementing instruction compression, commonly known as microcode. Since the microcode expansion is done in the core itself, the application can get away with less memory bandwidth, because it doesn't need to spell out each individual step of the larger procedure. Instruction compression is used in most architectures, be it ARM, RISC-V, PowerPC, or even x86. (Using microcode to break down more complicated instructions into a string of simpler ones is another topic, though technically the same thing.)

I know what you meant, but I believe most people here used the term microcode in the sense of a ROM with your µOPs that can be updated later on, which is not common at all in most ARM/RISC-V designs due to their hardwired control units. It is a thing on POWER though, so it's not really something inherent to RISC or CISC designs.

 

Most people here only think about the ISA and ignore everything from the decoder and below.



15 minutes ago, igormp said:

I know what you meant, but I believe most people here used the term microcode in the sense of a ROM with your µOPs that can be updated later on, which is not common at all in most ARM/RISC-V designs due to their hardwired control units. It is a thing on POWER though, so it's not really something inherent to RISC or CISC designs.

 

Most people here only think about the ISA and ignore everything from the decoder and below.

Yes, I know a lot of people do consider "microcode" and a lot of things in general from a different point of view.


I have just studied computer architecture design for over a decade and tend to be subject to the Dunning-Kruger effect, overestimating what is considered common knowledge.

Though, the microcode doesn't have to be a hardwired matrix in the CPU itself.
It can be read in during system boot. I have seen some architectures do it a bit more on the fly, as if it were just program functions. (Though I don't want to consider the implications of thread switching in that case; it is already a can of worms as is...)
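For those curious, the "read in during boot" variant is easy to see on a Linux x86 machine, where the kernel exposes the currently loaded microcode revision in /proc/cpuinfo. A small sketch of my own (nothing official) that just prints that field:

#include <stdio.h>
#include <string.h>

/* Print the microcode revision line from /proc/cpuinfo (Linux on x86).
 * This revision comes from the update blob the BIOS or kernel loaded at
 * boot, i.e. the non-hardwired microcode discussed above. */
int main(void)
{
    FILE *f = fopen("/proc/cpuinfo", "r");
    if (!f) {
        perror("fopen");
        return 1;
    }

    char line[256];
    while (fgets(line, sizeof line, f)) {
        if (strncmp(line, "microcode", 9) == 0) {  /* e.g. "microcode : 0xca" */
            fputs(line, stdout);
            break;                                 /* one logical CPU is enough */
        }
    }
    fclose(f);
    return 0;
}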


19 minutes ago, Nystemy said:

Though, the microcode doesn't have to be a hardwired matrix in the CPU itself.
It can be read in during system boot. I have seen some architectures do it a bit more on the fly, as if it were just program functions

And that's the microcode people are referring to in this thread, the one Intel and AMD use, and the one IBM refers to as "firmware" for their POWER CPUs.



ARM is the future, honestly, and it's going to happen: Microsoft themselves have a project for a new OS that drops the x86 structure entirely, along with a lot of other legacy code. So once Microsoft gets closer to releasing that new OS, that's when ARM will probably take off in a major way.


7 hours ago, Cinterz said:

closer to releasing that new OS, that's when ARM will probably take off in a major way

What new OS? Windows 10 is the only OS Microsoft plans to use. They stated that at the beginning. They intend to do a rolling release. 

 

7 hours ago, Cinterz said:

x86 structure entirely, along with a lot of other legacy code.

Probably over their enterprise customers' dead bodies. Too many businesses use older software; that's the reason why Microsoft keeps legacy support in Windows. Big businesses pay Microsoft subscription fees, and that leads to lots of money. Forcing big businesses to change all that software might push them onto another platform; remember, macOS and Linux are also competing platforms. Microsoft was only successful because it catered to businesses back in the day.



I think it's a very real possibility; power efficiency is more important than a lot of people give it credit for. I think it's very well within the realm of possibility that we see a complete shift towards ARM in consumer PCs by the end of the decade--though enterprise and business use would be slower to change. Remember when Apple threw the mobile market into chaos by releasing the 64-bit iPhone 5S before anyone else was ready to go 64-bit?

 

A ColdFusion video made a good point in that the Apple M1 is showing the traits of disruptive innovation; disruptive innovation tends to start at the entry level before creeping up to power users, and eventually reaching its full potential through applications at the very forefront of the industry.

 

Of course we'll have to see, but right now I estimate there's a good 35% chance that we'll be seeing ARM as the dominant consumer PC architecture by 2030. The efficiency gains over x64, especially in laptops, are hard to ignore.



5 minutes ago, Sodapone said:

I think it's a very real possibility; power efficiency is more important than a lot of people give it credit for. I think it's very well within the realm of possibility that we see a complete shift towards ARM in consumer PCs by the end of the decade--though enterprise and business use would be slower to change. Remember when Apple threw the mobile market into chaos by releasing the 64-bit iPhone 5S before anyone else was ready to go 64-bit?

One thing: no one cares about power efficiency on desktop systems. They just don't. On laptops, on the other hand, ARM will have a bright future. The fact is there is not going to be ONE type of CPU; x86 and ARM will coexist, because ARM will be used for phones, tablets, and some laptops, while x86 will likely still be in use for desktops.

 

Also, enterprise users might be interested in ARM for power efficiency on servers, BUT it will come at the cost of software support. So in applications where it's easy to switch, ARM will likely succeed in the enterprise market. BUT, like I said, enterprise users are cheap and will likely continue using older software until it no longer makes business sense. Like the US Navy using Windows XP long after Microsoft ended support for normal people like you and me.

 

Even on the consumer side, where Microsoft has more play, they have had a hard time getting ARM and Windows where they need to be. I don't think many people buy their ARM Surface tablet, because they see what it CAN'T do, not what it could potentially do. Microsoft will need something comparable to Apple's Rosetta 2, but unlike Apple, they will need to support it for YEARS and YEARS.



8 hours ago, Cinterz said:

ARM is the future, honestly, and it's going to happen: Microsoft themselves have a project for a new OS that drops the x86 structure entirely, along with a lot of other legacy code. So once Microsoft gets closer to releasing that new OS, that's when ARM will probably take off in a major way.

ARM is not all that great, and to be fair it's no better than x86.

 

When you talk about legacy code, I don't think you really understand how an OS or software is made. There is no so-called "legacy code" in Windows at all. Just because code is old does not mean it isn't good or isn't used a lot. Software and OSes are built on top of the older code base used in previous versions; rewriting all that code, in the case of Windows, would take years, might end up worse than before, and would easily cost millions.

 

 


6 minutes ago, Donut417 said:

One thing: no one cares about power efficiency on desktop systems. They just don't.

The thing is that while people don't care about power efficiency per se, they do care about maximizing performance. A less power-efficient desktop processor is going to need more robust cooling, and will be prone to throttling if that requirement is not met, losing whatever performance gains it could make over an ARM processor. For enterprise and business use that's a big deal; look at how many enterprise computing solutions try to make the cooling as passive as possible, because replacing fans, reapplying thermal paste, and replacing pumps and coolant all get expensive at scale. Not to mention that power consumption matters in these applications too, since they want to minimize power bills when running these machines 24/7.

 

ARM then has an argument in that it can do more with less, producing less heat, and thus making these passive solutions easier to implement.

 

As for support, yes, that is the crux of ARM's barrier to adoption. But seeing how quickly software companies scrambled to make native ARM versions of their programs for M1 Macs, I think that could be resolved in due time.



26 minutes ago, Sodapone said:

I think it's a very real possibility; power efficiency is more important than a lot of people give it credit for.

Mentioning power efficiency.

 

ARM isn't inherently more power efficient than x86 in all workloads.

Same story for RISC and CISC architectures in general.


I'll refer to my post over here for more depth into the topic:

 


1 minute ago, Nystemy said:

Mentioning power efficiency.

 

ARM isn't inherently more power efficient than x86 in all workloads.

Same story for RISC and CISC architectures in general.


I'll refer to my post over here for more depth into the topic:

 

Huh... Point made. I stand corrected.



Just now, Sodapone said:

Huh... Point made. I stand corrected.

I wouldn't blame you.

Power efficiency of RISC is one of the biggest misconceptions in regard to computer architectures. It is highly application dependent, both in terms of what applications the architecture and implementation are targeting, and how far the software is optimized for the platform.

It is on par with "microcode turns x86 into a RISC architecture under the hood", which is also false in practice. Intel/AMD/others could make a RISC core with x86 virtualized on top, but the performance deficit on the more compute-heavy instructions would make it unattractive for most larger workloads.

 

Or the even more incorrect "RISC architectures execute code with fewer instructions." This is true in some extreme edge cases for ARM vs x86, but it doesn't hold for most real workloads. x86 is, in short, a bit flamboyant in how it desires one to talk to it, so at times one needs a few extra instructions to get simple stuff done. But this is an edge case and usually a sign of poorly optimized software (or a programmer that asks their compiler not to use most instruction set extensions, usually for backwards compatibility with older platforms). A well-optimized CISC architecture won't have this deficit. (x86 is, though, not the most well-optimized architecture. And ARM is mostly CISC in its higher-performance implementations, and to a degree a bit nicer in some regards, but it also has its own quirks.)

These three are the biggest misconceptions about RISC architectures that I see out in the wild.
Having studied and designed computer architectures for a long time myself, I do find these misconceptions a bit annoying.

 

RISC does have advantages for battery-powered operation, though, since it is usually easier to ensure low idle power consumption. RISC typically has less control logic to run, and more copies of each execution unit, allowing one to simply power down all but one copy of each when going into low-power operation. One can do the same with CISC, but CISC architectures typically have 30+ to hundreds of different instructions, where only a few have actual duplicated units for improved out-of-order performance, so there is less one can turn off. (One can, though, stop cycling the transistors of a given unit; this saves on dynamic power, but the leakage still remains since power is still applied.)
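To put some (entirely made-up) numbers on that last parenthesis, here is a toy first-order power model of a single execution unit, using the usual dynamic-power approximation P_dyn ≈ activity · C · V² · f plus a leakage term. Clock gating only removes the dynamic part; power gating removes the leakage as well:

#include <stdio.h>

/* Toy first-order power model for one execution unit:
 *   P = activity * C * V^2 * f   (dynamic)   +   I_leak * V   (static leakage)
 * Clock gating drives the activity factor to ~0 but leaves the leakage;
 * power gating cuts the supply, removing (most of) the leakage as well. */
int main(void)
{
    const double C      = 2e-9;  /* switched capacitance [F]  (made-up value) */
    const double V      = 0.9;   /* supply voltage [V]        (made-up value) */
    const double f      = 3e9;   /* clock frequency [Hz] */
    const double a      = 0.15;  /* activity factor when busy */
    const double i_leak = 0.02;  /* leakage current [A]       (made-up value) */

    double busy      = a   * C * V * V * f + i_leak * V;
    double clk_gated = 0.0 * C * V * V * f + i_leak * V;  /* dynamic ~0, leakage stays */
    double pwr_gated = 0.0;                               /* supply removed entirely */

    printf("busy: %.3f W, clock-gated: %.3f W, power-gated: %.3f W\n",
           busy, clk_gated, pwr_gated);
    return 0;
}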

 

ARM will likely keep eating its way into the mobile computing platforms as it has done for years. After all, there used to be x86-based PDAs (the smartphone equivalent) back in the day.

But as soon as low idle power consumption (battery life) isn't an issue, and power efficiency at high performance takes on more importance, x86 has major advantages.


10 hours ago, A51UK said:

ARM is not all that great, and to be fair it's no better than x86.

 

When you talk about legacy code, I don't think you really understand how an OS or software is made. There is no so-called "legacy code" in Windows at all. Just because code is old does not mean it isn't good or isn't used a lot. Software and OSes are built on top of the older code base used in previous versions; rewriting all that code, in the case of Windows, would take years, might end up worse than before, and would easily cost millions.

 

 

Actually, Windows 10 does use legacy code; hence why Windows 10 has the ability to run a program in compatibility mode for operating systems such as Windows 7, Windows 8.1, and so forth. Windows has always been known to contain legacy code. However, Microsoft is truly working on a project that dumps the x86 architecture, along with any previous code from old operating systems, which means compatibility mode won't exist when they finish the project and make a public announcement. And these aren't rumors either as to what Microsoft is working on. The thing with technology is that in order to move forward and progress to more efficient ways, one must let go of the old ways. I would say in the next 5 or maybe 6 years Microsoft will be ready to announce that new OS, though they have already said this big overhaul will be more of a Windows update. And of course the industry isn't ready for Microsoft to drop x86 anytime soon, hence why 5 to 6 years is my best guess for a realistic time window, though Microsoft does like to rush their designs, so who knows.


On 3/20/2021 at 11:38 PM, Cinterz said:

Actually, Windows 10 does use legacy code; hence why Windows 10 has the ability to run a program in compatibility mode for operating systems such as Windows 7, Windows 8.1, and so forth. Windows has always been known to contain legacy code. [...]

I think you may have missed the point of his statement. This seems to be about the definition of the term legacy code and its uses. Almost everything uses what might be termed legacy code. Linux's stability comes from the cleaning up of Unix. Sh became the shell, became the Korn shell, became the Bourne shell (Bourne was a person), became the Bourne-again shell, and even that is old. To a not-so-small extent, all Unixes benefit from work done on the 1960s and 70s implementations of various UNIXes. Old code tends to be cleaner not because it was better written, but because it has already undergone bug removal several times.
 

P.S. I may have the lineage of the Korn shell in that one wrong; it might have come later in that list. It would be checkable, though. There are a bunch of other shells as well; Seashell and C shell come to mind. I forget where they fit.



