Nvidia CEO Says Intel's Test Chip Results For Next-Gen Process Are Good

There is still code in the architecture that is ancient and no longer relevant.

Here's an easy example of solutions to some of these minor issues:

https://stackoverflow.com/questions/30938318/assembly-why-some-x86-opcodes-are-invalid-in-x64
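The gist of that linked question: some opcode bytes that were valid instructions in 32-bit mode were repurposed in x86-64. A well-known case is bytes 0x40-0x4F, which were short-form `inc`/`dec` instructions in 32-bit mode but became REX prefixes in 64-bit mode. A toy Python sketch of that mode-dependent decoding (heavily simplified, not a real decoder):

```python
# Toy sketch: the same opcode byte means different things in 32-bit vs
# 64-bit x86 mode. Bytes 0x40-0x47 were short-form "inc r32" and
# 0x48-0x4F short-form "dec r32" in 32-bit mode; x86-64 repurposed the
# whole 0x40-0x4F range as REX prefixes.
REGS32 = ["eax", "ecx", "edx", "ebx", "esp", "ebp", "esi", "edi"]

def decode_byte(opcode: int, mode64: bool) -> str:
    """Decode a single opcode byte in the given CPU mode (very simplified)."""
    if 0x40 <= opcode <= 0x4F:
        if mode64:
            # REX byte layout is 0100WRXB; W is bit 3.
            return f"REX prefix (W={(opcode >> 3) & 1})"
        op = "inc" if opcode <= 0x47 else "dec"
        return f"{op} {REGS32[opcode & 0x7]}"
    raise NotImplementedError("only the 0x40-0x4F range is modelled here")

print(decode_byte(0x40, mode64=False))  # inc eax
print(decode_byte(0x40, mode64=True))   # REX prefix (W=0)
```

This is why old 32-bit assembly can be "invalid" on x64: the encoding space was reclaimed, not just deprecated.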

I also noted that Intel is about to drop 32-bit completely, which is a good move, but simply trying to run x86 without a major overhaul is eventually going to fail.

In my opinion, a complete overhaul of the code using x86 as a base is going to result in a different beast altogether. Simply patching and moving forward, as has been done for so many years, is not the solution going forward, hence my comment calling it unwieldy.

Tech does not stand still, but x86 architecture as we understand it today is not going to survive, in my humble opinion, unless major changes are made. I do not lean toward ARM as being better, but the manner in which ARM has been leveraged by Apple, and its ability to marry two chips and theoretically as many chips as they wish, is a clearer indication in my mind of where the future could be moving.

x86 cannot survive simply being updated along the way as faster chips, better architectures and designs are implemented. x86 needs a revamp from the ground up but is, in my humble opinion, being held back by the need to operate on the plethora of Intel products that exist in the world.

I would in no way be surprised if Intel at some future point diversifies and creates a new branch of the code to support future iterations of Intel silicon.

Please do not see my comments as negative; I am a very optimistic individual and have for many years enjoyed Intel, Apple and AMD offerings.

Today in my PC room I still have an 8700K system, a 27" iMac, a Mac mini, and a gaming rig with an AMD 5900X and an MSI 3080.

The future is bright, but I think Intel simply suffers from trying to maintain compatibility with generations of hardware and needs to make a new branch, which of course is not easy.


14 minutes ago, johnno23 said:

but the manner in which ARM has been leveraged by Apple and its ability to marry two chips

Which is possible on x86 too; it's called a "chiplet" design.

Doing what Apple did with the Mx Ultras has nothing to do with them being on ARM.



5 hours ago, leadeater said:

The above are labelled with Intel's pre-renaming node names. It just shows how much of a joke 7nm vs 7nm was going to be, and also how easily Intel may become the undisputed industry benchmark again.

I think the next 1-2 years will be very interesting. I really hope everything goes well for TSMC, Intel and Samsung. Specifically Intel and Samsung though, since they have been quite far behind recently. It would be fantastic for the industry if Intel's next process nodes arrive on time and are as good as estimated, and likewise for Samsung with their 3GAE process.

 

We really need more competition against TSMC, and the roadmaps from Intel and Samsung look promising.


4 hours ago, johnno23 said:

In my opinion, a complete overhaul of the code using x86 as a base is going to result in a different beast altogether. Simply patching and moving forward, as has been done for so many years, is not the solution going forward, hence my comment calling it unwieldy.

Sure, but you haven't actually stated in what way it is unwieldy and how it's actually a problem. ARM too has deprecated instructions and extensions still in it, still present in ARM-based products, so why don't we apply the same critiques equally there?

 

4 hours ago, johnno23 said:

marry two chips and theoretically as many chips as they wish is a clearer indication in my mind of where the future could be moving.

x86 did it before Apple ARM. You're treating one singular way to do Multi-Chip Modules (MCM) as the only way, the right way, or the best way, regardless of other factors like cost. AMD has gone through two iterations of MCM designs: full-chip MCM and chiplet MCM.

 

EPYC currently has up to 13 total chips in a single package, and Intel is looking at doing the same very soon in their own different ways (EMIB and Foveros).

 

So if you are basing this on being the future, then that future is already x86, and x86 did it first, not ARM.

 

Even then, I could bring up the distant past, like the Pentium Pro (1995) or the Core 2 Quad (2006), as other examples of multi-chip (MCM) CPUs. So I'm honestly not seeing your logic here, because it has happened with x86, it is current now, and there are upcoming products and architectures featuring this.

 

All I'm doing is critiquing your assessments and reasoning, because I think you aren't really looking at things very evenly; I'm just not hearing anything that is actually applicable or a real problem with x86.

 

As for what Apple did with the Ultra products, that was and is the most expensive way to bond chips, with the lowest yields, and it simply cannot be implemented across product families as wide as Intel's and AMD's without making it cheaper. Intel's Foveros is a next-generation method of this that is more cost effective. We got it from Apple because it was paired with an end product, and return on investment was gained through a holistic and complete product cycle. An $80 boxed retail CPU just couldn't feature such a costly technology.


44 minutes ago, leadeater said:

ARM too has deprecated instructions and extensions still in it, still present in ARM-based products, so why don't we apply the same critiques equally there?

Not as many, and Arm has actually started "fresh" and gutted a lot of backward compatibility in ways that x86 hasn't done. Try running old Arm assembly on new Arm processors like the X3 or A715. It won't work. Try running old x86 assembly on a new x86 processor and it will most likely work.

 

There are also some things that are better in Arm than in x86. The number of SIMD instructions in x86, for example, is kind of insane.

 

I don't think it is a coincidence that Arm processors are taking over everywhere except in scenarios where legacy code that can't easily be recompiled is needed (mostly Windows environments). 


22 minutes ago, LAwLz said:

I don't think it is a coincidence that Arm processors are taking over everywhere except in scenarios where legacy code that can't easily be recompiled is needed (mostly Windows environments). 

Well, I wouldn't phrase it as taking over, since most of the ARM market growth outside of mobile is new rather than replacement. If you actually try to name areas where ARM has taken what was x86, you'll get stuck very quickly trying to make that list. A lot of the areas where it has are actually custom ASICs, like in some switches, routers and firewalls that were never x86 at all. Then you have HPC and AI/ML, which are still almost entirely x86 on the platform side, with GPU or ASIC accelerators that aren't ARM; the few platforms that are ARM just as likely replaced PowerPC as x86. When something is mostly about the accelerators, the platform ISA is sort of superfluous, which makes ease of deployment a big factor, and that's a negative for ARM in this area.

 

Take the latest Top 500 list for example

 

[Image: Top500 list statistics chart]

https://www.top500.org/statistics/list/

 

Statistically ARM is insignificant. 

 

Basically, what I'm saying is that reality isn't actually matching the marketing and articles. Actual usage isn't anywhere near what some may be thinking, and it's certainly not taking away the traditional x86 market either. I can certainly see basic web servers being the first to go, but even that gets tricky quickly; "basic web server" can easily be a misnomer. It looks and sounds easy, but often is not. That's my hatred of having to support poorly maintained web servers and websites speaking, though.

 

Is the demise of x86 really signaled by areas that were never x86 in the first place? I'd say no.


1 hour ago, leadeater said:

EPYC currently has up to 13 total chips in a single package, and Intel is looking at doing the same very soon in their own different ways (EMIB and Foveros).

Intel have been doing it for decades:

Pentium D had two core chips on one package ~2005

 

Broadwell-C had a core and cache chip on same package ~2015 https://www.anandtech.com/show/9320/intel-broadwell-review-i7-5775c-i5-5675c/2

 

Lakefield was their first attempt at stacking and was productised ~2020 https://www.anandtech.com/show/15877/intel-hybrid-cpu-lakefield-all-you-need-to-know/3

 

Ponte Vecchio went absolutely crazy with tiles. I haven't turned up the official count but it is in the high tens. 

And if you prefer a current CPU, there's Sapphire Rapids in server and workstation offerings: https://www.anandtech.com/show/16921/intel-sapphire-rapids-nextgen-xeon-scalable-gets-a-tiling-upgrade I have confirmed that the workstation CPUs are openly sold and in stock right now. At a silicon level, I think this is closer to the Apple way of doing it than the AMD way.

 

For consumer tier CPUs, it is expected to arrive later this year with Meteor Lake.

 

 

33 minutes ago, LAwLz said:

I don't think it is a coincidence that Arm processors are taking over everywhere except in scenarios where legacy code that can't easily be recompiled is needed (mostly Windows environments). 

Depends on the use case. x86 is mostly a take-it-or-leave-it offering, outside of large-scale semi-custom like the console SoCs AMD makes. Arm has more flexibility if you want to customise it. At the lower end, RISC-V is making inroads that Arm probably won't compete with on cost. Longer term, I can see RISC-V eating into current ARM market share, while x86 continues to do its own thing.

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


1 minute ago, porina said:

Intel have been doing it for decades:

Pentium D had two core chips on one package ~2005

I hope you didn't miss two lines down from what you quoted 😃

 

But yeah, there are tons of Intel and non-Intel products that have done it, even as far back as the 70s, which is quite crazy to think about. Funny how old technology comes back, just better.


1 minute ago, leadeater said:

I hope you didn't miss two lines down from what you quoted 😃

I decline to answer that on the grounds it may incriminate me 😄 

 

Still gives some visual context.



11 hours ago, johnno23 said:

I also noted that Intel is about to drop 32-bit completely, which is a good move, but simply trying to run x86 without a major overhaul is eventually going to fail.

They are not; you got it wrong. They're only removing legacy boot stuff; your regular 32-bit software is still going to work fine.

 

11 hours ago, johnno23 said:

x86 cannot survive simply being updated along the way as faster chips, better architectures and designs are implemented

You know that the actual CPU architecture is decoupled from the ISA, right? The ISA is just a "protocol" for the CPU to read instructions, how it deals with those instructions is totally transparent. Apple could have gone with x86 and still have made an excellent CPU, the problem lies with the license to use x86.
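The ISA-as-contract point can be made concrete with a toy model (entirely made up, not real x86: the instruction names, register names and both "core" implementations below are illustrative only). Two wildly different implementations of the same instruction set must produce identical architectural state, which is exactly why the ISA says so little about how good the CPU underneath is:

```python
# Toy illustration: an ISA is a contract; the microarchitecture is
# whatever implementation honours that contract.
# A "program" is a list of (op, dst, src) tuples for a 2-register machine.
PROGRAM = [("mov", "a", 5), ("mov", "b", 7), ("add", "a", "b")]

def simple_core(program):
    """Naive implementation: decode and execute one instruction at a time."""
    regs = {"a": 0, "b": 0}
    for op, dst, src in program:
        val = regs[src] if isinstance(src, str) else src
        regs[dst] = val if op == "mov" else regs[dst] + val
    return regs

def microcoded_core(program):
    """Different implementation: pre-decode the whole program into
    micro-ops (closures) up front, then run them. The internals differ;
    the architectural result must not."""
    regs = {"a": 0, "b": 0}
    def uop(op, dst, src):
        def run():
            val = regs[src] if isinstance(src, str) else src
            regs[dst] = val if op == "mov" else regs[dst] + val
        return run
    for u in [uop(*insn) for insn in program]:
        u()
    return regs

assert simple_core(PROGRAM) == microcoded_core(PROGRAM) == {"a": 12, "b": 7}
```

Both cores satisfy the same "ISA", just as a Pentium 4 and a Zen 4 both satisfy x86 with completely different machinery inside.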

 

11 hours ago, johnno23 said:

I do not lean toward ARM as being better, but the manner in which ARM has been leveraged by Apple, and its ability to marry two chips and theoretically as many chips as they wish, is a clearer indication in my mind of where the future could be moving.

That's also doable by x86 and has nothing to do with anything specific to ARM.

11 hours ago, johnno23 said:

x86 needs a revamp from the ground up but is, in my humble opinion, being held back by the need to operate on the plethora of Intel products that exist in the world.

What would you change in the instruction set to make this possible? Add JS instructions like ARM did? lol

 

 

IMO it seems that you're just surprised by what Apple did with their µArch and are now crediting it to ARM instead. Apple made a good µArch, and which ISA they used is pretty much irrelevant. Apart from theirs, no other general-purpose µArch is able to beat the current ones from either Intel or AMD when it comes to performance; at best they can outperform in power efficiency for some tasks, like ARM's own server cores (Neoverse) and Ampere's One.

 

Manufacturers are going with ARM because they want to build their own CPUs, which is not doable with x86 due to licensing, whereas you can just pay a fee to either buy an existing ARM core design or get a license to build your own (like Apple and Nvidia did).

With RISC-V it's even better, since you don't need to pay anything to use the ISA, but it's still not mature enough for high-performance computing.

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga


 

8 hours ago, igormp said:

What would you change in the instruction set to make this possible? Add js instructions like ARM did? lol

The ISA is already 50 years old 

I get the feeling people think I am anti-Intel, which is so far from the truth that it makes me wonder what people read.

I also specifically stated that just updating and adding code is holding Intel back.

Intel is so strong due to 50 years of development and released products. The instruction set will only run on x86 processors, and yet Apple translates it on the fly, which is cool but far from perfect.

Intel has been thinking for many years about cutting non-64-bit support from x86 to create what might be named the x86S architecture. Would this not result in better performance and efficiency? Dropping 32-bit support would have been a bad idea many years ago; today almost all hardware is 64-bit.

The ISA needs to be revamped, and looking to the future I can see Intel doing this, as the power bill to run x86, at today's high electricity costs, is only going to become more important to people, not just businesses.

 


19 minutes ago, johnno23 said:

The ISA is already 50 years old 

ARM is only 7 years younger, so what's your point?

20 minutes ago, johnno23 said:

I get the feeling people think I am anti-Intel, which is so far from the truth that it makes me wonder what people read.

This has nothing to do with Intel. You're confusing what an ISA is with the actual µArch behind a CPU; that's why everyone is disagreeing with you. You simply don't have proper knowledge of what an ISA is and what a CPU architecture entails, and are solely blaming the ISA for stuff that has nothing to do with it.

20 minutes ago, johnno23 said:

Intel has been thinking for many years about cutting non-64-bit support from x86 to create what might be named the x86S architecture

As I said before, that's only related to boot-time stuff. 32-bit software would still work fine.

21 minutes ago, johnno23 said:

The ISA needs to be revamped

Repeat after me: "An ISA has only minor impact on your µArch".

 

What exactly would you change on that "revamp"? Simplify addressing modes? Make the instructions fixed length? Remove some extensions?



8 minutes ago, johnno23 said:

Would this not result in better performance and efficiency

Not really, no. Are you aware the only parts of a CPU that have anything to do with x86 are the decoders and instruction caches? They are a small part of the die. After that it's all decoded instructions, not related to x86 at all.

 

[Image: CPU die shot with the Golden Cove core and its decode area labelled]

See inside the labelled Golden Cove core, next to the Display Control on the right? That small pink area, one per CPU core. That's what you're talking about when you say changing it could increase power efficiency, when the execution units are probably greater than 80% of the total chip power, followed by the memory controller and then PCIe. (I've ignored the iGPU since a lot of people don't use it, but put it second if one does.)

 

Do you know what areas of a CPU die use power? Do you know how those do or do not relate to the ISA?

 

What we're asking for is specifics. Name actual things that are holding it back or causing a problem. Do the decoders use too much power? If so, how much do they use?

 

10 minutes ago, johnno23 said:

The ISA is already 50 years old 

As is ARM 🤷‍♂️

 

10 minutes ago, johnno23 said:

the instruction set will only run on x86 processors

x86 can be emulated on anything, even on x86. Current ARM has pretty much everything in it needed to do x86 translation; Apple just got around to doing it since they needed it. Either way, x86 has been emulated before, just for different reasons.
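The translation idea above can be sketched in a few lines (a toy model: the guest "instructions", register names and values are made up, and real translators like Rosetta 2 or QEMU are vastly more sophisticated). Guest instructions are translated once into host operations, then the translated code runs natively on the host:

```python
# Toy sketch of binary translation: each guest instruction is translated
# once into a host callable, then the translated sequence runs on the
# host with no further decoding of the guest ISA.
GUEST_CODE = [("mov", "eax", 40), ("add", "eax", 2)]

def translate(guest_code):
    """Translate guest instructions into one host function each."""
    host_ops = []
    for op, reg, imm in guest_code:
        if op == "mov":
            host_ops.append(lambda regs, r=reg, v=imm: regs.__setitem__(r, v))
        elif op == "add":
            host_ops.append(
                lambda regs, r=reg, v=imm: regs.__setitem__(r, regs[r] + v))
    return host_ops

regs = {"eax": 0}
for host_op in translate(GUEST_CODE):  # run the translated code
    host_op(regs)
print(regs["eax"])  # 42
```

The expensive part in practice is doing this fast and preserving corner cases like flags and memory ordering, which is where hardware assists (such as the ARM features mentioned above) come in.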

 

All I heard again was "it's old, so it must be crap, filled with rubbish and therefore terribly inefficient." Is that actually how it works? Do redundant, unused instruction extensions actually use power? Do they take up much die area? Bugs and security issues I can definitely see as a potential problem; you never know what can be abused, so if it's not used, get rid of it.

 


6 minutes ago, leadeater said:

Not really, no. Are you aware the only parts of a CPU that have anything to do with x86 are the decoders and instruction caches? They are a small part of the die. After that it's all decoded instructions, not related to x86 at all.

 

[Image: CPU die shot with the Golden Cove core and its decode area labelled]

See inside the labelled Golden Cove core, next to the Display Control on the right? That small pink area, one per CPU core. That's what you're talking about when you say changing it could increase power efficiency, when the execution units are probably greater than 80% of the total chip power, followed by the memory controller and then PCIe. (I've ignored the iGPU since a lot of people don't use it, but put it second if one does.)

Just a better quality image for Zen 3 and Zen 4:

[Image: Zen 3 and Zen 4 core die shots]

 

The area dedicated to the decoder is getting proportionally smaller as time goes on and more execution units and caches are added.

 

Also to be fair, the µCode ROM space should be taken into consideration, but it's not like it demands much power to begin with.

 



