
This time it'll definitely catch on - Intel publishes spec for 64-bit-only x86

BachChain
50 minutes ago, manikyath said:

if this finally scrapes some bloat out of the x86 architecture and makes it at least a bit better at competing with RISC (that's both ARM and RISC-V) on power efficiency, i'm all down for a half-emulated legacy 'support-ish' mode for 32-bit stuff. make it an OS-level software layer for all i care

This will surely remove some bloat, but I doubt it'll have any impact on power efficiency. And power efficiency isn't correlated with the ISA, so that's a moot point.

 

50 minutes ago, manikyath said:

i was recently delving into some docs for microcontrollers, and apparently the powerful stuff is all moving towards RISC-V because it's cheap to implement, and apparently gets 250MHz out of milliwatts.

It's cheap to implement indeed, since you don't have to pay royalties to ARM, but a Cortex-M0 can achieve the same, if not better, power efficiency.

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga


1 hour ago, manikyath said:

if this finally scrapes some bloat out of the x86 architecture and makes it at least a bit better at competing with RISC (that's both ARM and RISC-V) on power efficiency, i'm all down for a half-emulated legacy 'support-ish' mode for 32-bit stuff. make it an OS-level software layer for all i care.

 

i was recently delving into some docs for microcontrollers, and apparently the powerful stuff is all moving towards RISC-V because it's cheap to implement, and apparently gets 250MHz out of milliwatts.

That is not how silicon or instructions work at all.

The wattage required to hit a given clock speed comes down to how many transistors you are clocking, and to the process node. If you really wanted to make a version of the 80486 on 7nm, you would be able to clock it to multiple GHz in "milliwatts". And calling out watts-to-Hz is a misleading way to measure power efficiency (see AVX-512). An AVX-512 SIMD instruction may take more watts per clock than a jmp instruction, but it's doing a LOT in that clock (well, more than one clock, the pipeline from start to finish being whatever it is), not just moving to a new address; more transistors have to be clocked. That is the point of those SIMD instructions.
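To put a number on "doing a LOT in that clock", here's a minimal C sketch of the same sixteen float additions done with one AVX-512 instruction versus a scalar loop (my own example, not from Intel's spec; it assumes a CPU with AVX-512F and compiling with something like gcc -O2 -mavx512f):

#include <immintrin.h>   /* AVX-512F intrinsics */
#include <stdio.h>

int main(void) {
    float a[16], b[16], c[16];
    for (int i = 0; i < 16; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

    /* Vector version: one vaddps on zmm registers adds all sixteen
       floats in a single instruction. More transistors toggle, but
       far more work gets done per clock. */
    __m512 va = _mm512_loadu_ps(a);
    __m512 vb = _mm512_loadu_ps(b);
    _mm512_storeu_ps(c, _mm512_add_ps(va, vb));

    /* Scalar version: sixteen separate add instructions plus loop
       overhead to do the same job. */
    for (int i = 0; i < 16; i++)
        c[i] = a[i] + b[i];

    printf("%f\n", c[15]);   /* 15 + 30 = 45.000000 */
    return 0;
}

Watts per instruction goes up; joules per unit of work goes down, and the latter is the comparison that actually matters.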

Generally, 16-bit and 32-bit code still uses the 64-bit registers for most instructions; this change is about handling things on the control-logic side, not the execution-logic side.
Like when you ask to use register AX instead of EAX.
Guide to x86 Assembly
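Since the AX/EAX point is easy to see in code, here's a minimal sketch (my illustration, assuming GCC or Clang inline assembly on an x86-64 target): AX is just the low 16 bits of EAX, which is the low 32 bits of RAX, so a 16-bit write lands inside the same physical register.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint64_t out;
    /* Fill RAX with a 64-bit pattern, then write only AX (the low
       16 bits). AX, EAX and RAX are the same register viewed at
       different widths, so the upper 48 bits survive the write. */
    __asm__ volatile (
        "movabsq $0x1122334455667788, %%rax\n\t"
        "movw    $0xBEEF, %%ax\n\t"          /* 16-bit write */
        "movq    %%rax, %0\n\t"
        : "=r"(out)
        :
        : "rax");
    printf("%016" PRIx64 "\n", out);  /* prints 112233445566beef */
    return 0;
}

(One x86-64 quirk to keep in mind: a 32-bit write to EAX zero-extends into the top half of RAX, while 8-bit and 16-bit writes leave the upper bits alone.)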


11 minutes ago, starsmine said:

The wattage required to hit a given clock speed comes down to how many transistors you are clocking

exactly my point, though...

getting rid of obsolete hardware features <should imply> getting rid of transistors => improving efficiency.


2 minutes ago, manikyath said:

exactly my point, though...

getting rid of obsolete hardware features <should imply> getting rid of transistors => improving efficiency.

the bottleneck on clocks, and what uses up most of the power in an x86 CPU, is not the front end.


1 minute ago, starsmine said:

the bottleneck on clocks, and what uses up most of the power in an x86 CPU, is not the front end.

but anything that reduces the (idle) power consumption of x86 helps, and if this step ends up catching on, it might be a sign to intel/AMD that there is room for more cuts.

 

and if it has no benefit, this change will just go the way of the itanic and be forgotten.


19 hours ago, WolframaticAlpha said:

Most of these OSes have gone EOL, so I doubt there will be many clients using them. As for the legacy users, I suspect Intel won't necessarily stop selling some of their x86-64 processors to major legacy users.

You would be surprised at how much legacy shit is out there. I develop software/firmware for embedded devices (bare-metal MCUs and Linux systems), and the number of support calls I get from guys running versions of Windows that are 20-30 years old, trying to run our software, is astounding. My other favorite is companies that only permit Internet Explorer because they have some crusty web system that only works with IE. Mind you, these will be modern laptops running Win10 or Win11, but with only IE installed.

 

You can basically count on the entire industrial sector being affected by any breaking compatibility changes. All of your oil & gas, power generation, factories, utilities, you name it, run on systems that are 20-40+ years old.

 

I've been on O&G sites where the control system was a room full of cabinets of relays performing relay logic (Boolean logic implemented with relays) from the 1940s. Not exactly "computer" related (at least not in the modern sense), but it's a damn good example of legacy shit that will never be replaced. Computer systems suffer the same problem: something performs a business-critical function, but it is no longer maintained, so you can't run it with the latest technologies.

 

The only time any of it is replaced is when it truly gives up the ghost and there is no way to bring it back. I, for one, welcome getting rid of all this crusty garbage, but it would cause some serious problems for the industries that keep society as we know it from descending into darkness and anarchy.

CPU: Intel i7 - 5820k @ 4.5GHz, Cooler: Corsair H80i, Motherboard: MSI X99S Gaming 7, RAM: Corsair Vengeance LPX 32GB DDR4 2666MHz CL16,

GPU: ASUS GTX 980 Strix, Case: Corsair 900D, PSU: Corsair AX860i 860W, Keyboard: Logitech G19, Mouse: Corsair M95, Storage: Intel 730 Series 480GB SSD, WD 1.5TB Black

Display: BenQ XL2730Z 2560x1440 144Hz


Apple did it a long time ago, and it's for the better. People will make new software. You shouldn't love abandonware.

Laptop: 2019 16" MacBook Pro i7, 512GB, 5300M 4GB, 16GB DDR4 | Phone: iPhone 13 Pro Max 128GB | Wearables: Apple Watch SE | Car: 2007 Ford Taurus SE | CPU: R7 5700X | Mobo: ASRock B450M Pro4 | RAM: 32GB 3200 | GPU: ASRock RX 5700 8GB | Case: Apple PowerMac G5 | OS: Win 11 | Storage: 1TB Crucial P3 NVME SSD, 1TB PNY CS900, & 4TB WD Blue HDD | PSU: Be Quiet! Pure Power 11 600W | Display: LG 27GL83A-B 1440p @ 144Hz, Dell S2719DGF 1440p @144Hz | Cooling: Wraith Prism | Keyboard: G610 Orion Cherry MX Brown | Mouse: G305 | Audio: Audio Technica ATH-M50X & Blue Snowball | Server: 2018 Core i3 Mac mini, 128GB SSD, Intel UHD 630, 16GB DDR4 | Storage: OWC Mercury Elite Pro Quad (6TB WD Blue HDD, 12TB Seagate Barracuda, 1TB Crucial SSD, 2TB Seagate Barracuda HDD)

5 hours ago, manikyath said:

exactly my point, though...

getting rid of obsolete hardware features <should imply> getting rid of transistors => improving efficiency.

I can't imagine removing this "saves" very much die space; it's more about removing opcodes and microcode paths to improve security.

 

If you remove the 16-bit and 32-bit boot modes, the CPU goes directly to a 64-bit boot, with the security that entails, instead of carrying the weaknesses in the boot chain at the 16-bit level.

 

What this means is that older OSes that need real mode (e.g. DOS and Windows 3.1) or protected mode (Windows NT/2K/XP, Linux kernels before 4.19, FreeBSD before 10.1) will not be able to boot. Current OSes have all moved to 64-bit mode, but there is still a tonne of software made before 2019 that doesn't compile cleanly to 64-bit, or at all.
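A minimal illustration (hypothetical snippet, not from any real codebase) of why so much old code won't compile cleanly as 64-bit: a very common 32-bit-era habit was stashing pointers in plain ints, which only works while pointers are 32 bits wide.

#include <stdint.h>
#include <stdio.h>

int main(void) {
    int x = 42;

    /* Classic 32-bit-era pattern: a pointer stored in a plain int.
       On a 32-bit target (ILP32) this "works" because pointers are
       32 bits; on a 64-bit target the cast truncates the address,
       the compiler warns (-Wpointer-to-int-cast), and dereferencing
       the result is undefined, so it's left commented out here:

           int addr = (int)&x;
           printf("%d\n", *(int *)addr);
    */

    /* The portable rewrite uses an integer type guaranteed to be
       wide enough to hold a pointer on every target: */
    intptr_t addr = (intptr_t)&x;
    printf("%d\n", *(int *)addr);   /* prints 42 on 32- and 64-bit */
    return 0;
}

Multiply that pattern across a few hundred thousand lines and "doesn't compile cleanly" turns into a real porting project.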

 

A lot of this issue can be blamed on the inertia of never removing features, and on shipping the 32-bit/16-bit compatibility features by default. Those features need to go away and only be installed after the fact if needed. Apple led the charge by basically cutting off 32-bit support in iOS and macOS first. Windows is next; you'll probably see whatever comes after Windows 11 drop automatic 32-bit support. Then, when AMD/Intel decide to drop 32-bit support, it'll only be available via emulation, which will be easy to do on Intel/AMD CPUs since the actual registers are 1:1 already.

 


I blame Apple for this.

Specs: Motherboard: Asus X470-PLUS TUF gaming (Yes I know it's poor but I wasn't informed) RAM: Corsair VENGEANCE® LPX DDR4 3200Mhz CL16-18-18-36 2x8GB

            CPU: Ryzen 9 5900X          Case: Antec P8     PSU: Corsair RM850x                        Cooler: Antec K240 with two Noctua Industrial PPC 3000 PWM

            Drives: Samsung 970 EVO plus 250GB, Micron 1100 2TB, Seagate ST4000DM000/1F2168 GPU: EVGA RTX 2080 ti Black edition

