
would "compatibility" cores be possible/usefull

Currently, x86 has over 1000 instructions, and many of them are not in use today, kept only for backward compatibility with old software. I want to know whether it would be feasible to cut many of the obsolete instructions from a CPU's main cores and have any programs that use the cut instructions moved to a couple of "compatibility" cores that keep the full x86 instruction set. This should be much faster than instruction set emulation, while still cutting down on die space and bringing some of ARM's advantages to x86.
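
For what it's worth, software that uses newer optional instructions already probes for them at runtime through CPUID, so a cut-down core could in theory just stop advertising those feature bits. A minimal sketch of such a probe (GCC/Clang-specific <cpuid.h>, with SSE4.2 picked arbitrarily as the example):

/* Sketch: how software already checks for optional x86 instructions.
   A reduced core could simply not set the feature bit, and well-behaved
   code would take the fallback path. */
#include <cpuid.h>
#include <stdio.h>

int main(void) {
    unsigned eax, ebx, ecx, edx;
    /* CPUID leaf 1: feature flags; SSE4.2 is bit 20 of ECX */
    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (ecx & bit_SSE4_2))
        puts("SSE4.2 available, take the fast path");
    else
        puts("SSE4.2 missing, take the fallback path");
    return 0;
}

The catch is that the really old instructions predate CPUID feature flags entirely, so there's no bit for software to check, which is part of why they're hard to remove.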


2 minutes ago, thermal compound drinker said:

I want to know whether it would be feasible to cut many of the obsolete instructions from a CPU's main cores and have any programs that use the cut instructions moved to a couple of "compatibility" cores that keep the full x86 instruction set.

I'm not going to say this isn't possible, but I have my doubts. The main issue I see is scheduling: a CPU like this would need its own special scheduler, and given how rough the introduction of big.LITTLE-style designs on Windows was, where all the cores do have the same instruction set and just run at different speeds, I see getting this working as an absolute nightmare.

 

I could be wrong, since I don't design CPUs for a living, but it is definitely something to consider.


10 minutes ago, RONOTHAN## said:

I'm not going to say this isn't possible, but I have my doubts. The main issue I see is scheduling: a CPU like this would need its own special scheduler, and given how rough the introduction of big.LITTLE-style designs on Windows was, where all the cores do have the same instruction set and just run at different speeds, I see getting this working as an absolute nightmare.

 

I could be wrong, since I don't design CPUs for a living, but it is definitely something to consider.

Agreed. The scheduler overhead would likely cause a decrease in responsiveness unless there were a dedicated scheduler IC that filtered all incoming instructions and sorted them to the correct core type, which would then go through another on-die scheduler for core preference, as in a big.LITTLE design.

 

Now, if there were a RISC island that handled all the non-legacy tasks, or maybe a RISC island that ONLY handled legacy instruction set tasks, maybe that could work?

But yeah, likely massive overhead for little real-world return.
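
The trap-and-route part could at least be prototyped in software: on Linux, executing an instruction the current core doesn't implement raises an invalid-opcode fault that the kernel delivers as SIGILL, and a handler could re-pin the thread and retry. A rough sketch, with the core numbering entirely made up:

/* Sketch of "trap and migrate" on the hypothetical chip: a thread that
   hits a removed instruction gets SIGILL, the handler pins it to the
   (made-up) full-ISA cores 0-1, and the instruction is retried there. */
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>

static void on_sigill(int sig) {
    (void)sig;
    cpu_set_t legacy;
    CPU_ZERO(&legacy);
    CPU_SET(0, &legacy);  /* assumption: cores 0 and 1 keep */
    CPU_SET(1, &legacy);  /* the full instruction set       */
    sched_setaffinity(0, sizeof legacy, &legacy);  /* 0 = this thread */
    /* returning from the handler retries the faulting instruction */
}

int main(void) {
    struct sigaction sa = {0};
    sa.sa_handler = on_sigill;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGILL, &sa, NULL);
    /* ... workload goes here; any "removed" instruction would now
       migrate the whole thread to the legacy cores ... */
    return 0;
}

Returning from the handler re-executes the faulting instruction, which would then succeed on a core with the full instruction set. Whether that fault-and-migrate latency beats just scheduling conservatively is exactly the overhead question above.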


1 hour ago, thermal compound drinker said:

Currently, x86 has over 1000 instructions, and many of them are not in use today, kept only for backward compatibility with old software. I want to know whether it would be feasible to cut many of the obsolete instructions from a CPU's main cores and have any programs that use the cut instructions moved to a couple of "compatibility" cores that keep the full x86 instruction set. This should be much faster than instruction set emulation, while still cutting down on die space and bringing some of ARM's advantages to x86.

Intel is already moving to ditch 32-bit OS compatibility with X86S, which still retains 32-bit software compatibility (on a 64-bit OS) because 64-bit mode keeps a layer of backwards compatibility.

It's basically not necessary to have full native x86 compatibility except in some very niche industrial cases where they're stuck on a legacy OS with legacy software. But those setups also need specific motherboard configurations, which won't use typical desktop CPUs. I believe there are still companies who make Intel-compatible CPUs based on the older designs for this very purpose, since some of that equipment needs specific clock timings to work correctly, so the chip has to function identically to the original.

 

There's also the whole legacy boot vs UEFI question. How the hardware itself is initialised differs between the two: legacy boot, as I understand it, sets up all the base hardware ready for the OS with the expectation that the OS will ask the BIOS to handle certain things (which AFAIK no modern OS does; they take over all duties on boot), while UEFI does the bare minimum and then hands control over to the OS. I'd imagine legacy boot may be the next thing to go once 32-bit booting is removed from the CPU.

 

At the end of the day, they only kept these old instructions because it's practically free to do so. Implementing a fallback for the rare situations where they're needed would be a huge expense compared to the die space they'd gain, so it makes sense that they waited until the instructions are basically not needed at all.



1 hour ago, BiotechBen said:

Agreed. The scheduler overhead would likely cause a decrease in responsiveness unless there were a dedicated scheduler IC that filtered all incoming instructions and sorted them to the correct core type, which would then go through another on-die scheduler for core preference, as in a big.LITTLE design.

You could probably keep the overhead fairly small by using something similar to core affinity. When a process is first started, it could indicate whether it requires legacy cores or not. If no indication is present, assume it's a legacy process and bind it to the legacy cores; otherwise, restrict it to the cores it indicates.
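
A minimal sketch of that launch-time pinning on Linux, assuming a made-up REDUCED_ISA_OK environment variable as the opt-in marker and a made-up split of two legacy cores (0-1) and six reduced cores (2-7):

/* Hypothetical launcher: pin the child to the right core set before exec.
   The marker name and core split are assumptions for illustration. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv) {
    if (argc < 2)
        return 1;

    cpu_set_t set;
    CPU_ZERO(&set);
    if (getenv("REDUCED_ISA_OK")) {
        /* marked as safe for the cut-down ISA: allow the reduced cores */
        for (int c = 2; c < 8; c++)
            CPU_SET(c, &set);
    } else {
        /* no marker: assume legacy and pin to the full-ISA cores */
        CPU_SET(0, &set);
        CPU_SET(1, &set);
    }
    sched_setaffinity(0, sizeof set, &set);  /* inherited across exec */
    execvp(argv[1], argv + 1);
    return 1;  /* only reached if exec failed */
}

You'd run it as REDUCED_ISA_OK=1 ./pin_launcher ./myapp for a marked program, and plain ./pin_launcher ./oldapp for anything legacy.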

 

But if you had, say, eight cores and two of them were also able to process legacy code, a lot of software would be restricted to only those two cores during the transition, while everything else would effectively be limited to the remaining six because the two legacy-capable cores are clogged with all the legacy stuff. Combined with the additional scheduler complexity, I doubt you'd gain much of anything.



18 hours ago, BiotechBen said:

Agreed. The scheduler overhead would likely cause a decrease in responsiveness unless there were a dedicated scheduler IC that filtered all incoming instructions and sorted them to the correct core type, which would then go through another on-die scheduler for core preference, as in a big.LITTLE design.

 

Now, if there were a RISC island that handled all the non-legacy tasks, or maybe a RISC island that ONLY handled legacy instruction set tasks, maybe that could work?

But yeah, likely massive overhead for little real-world return.

Thanks for the explanation. I guess if making x86 RISC-like were that easy, it would have already been done.

