
NVIDIA Made a CPU.. I’m Holding It. - Computex 2023

James

I'm a bit confused about how LTT handles the Computex stuff; why isn't more of it on the same channel?
I also wish they pointed out the negatives for consumers, instead of just writing something off as "just cool". But I guess there might be future videos that go into more depth, and maybe that's one of the reasons for the split?


21 hours ago, ImorallySourcedElectrons said:

The only thing Apple has demonstrated is that you can make a very locked-down platform with such a CPU. Apple does not have to contend with people running other operating systems or trying to hook up the latest, meanest graphics card to their SoC; they designed an SoC to go along with very specific hardware that they also designed. The same goes for NVidia's ARM applications: that's a locked-down system with zero modularity and no real third-party manufacturers involved. They have absolutely no need to compromise to support future or former extension options; folks will specifically design applications for that very system and nothing else. You're basically comparing an ASIC to a general-purpose chip and are now complaining that the general-purpose chip can't beat the ASIC's performance for specific tasks under conditions that favour the ASIC, so obviously everything must be solved with the ASIC. That is the part you're not grasping here.

This is a snarky, but basically correct version of what I'm saying. Except that you insist on pretending that I'm arguing these ARM workstations would replace x86 in everything, no matter how many times I repeat that I think x86 won't go away. I don't think ARM workstations need to completely replace x86 in order to succeed, just like I don't think ARM servers need to replace all x86 servers in order to succeed.

 

Obviously you're bothered by the locked down nature of these machines, but I think there's a big section of the market that won't care. The original Microsoft Surface didn't fail because it was locked down, it failed because it sucked.

 

I think that Nvidia should be looking at Apple and saying "hey, their ARM desktops and laptops are both powerful and profitable, let's get in on that", and the fact that the platform is super locked down won't hurt them.

 

21 hours ago, ImorallySourcedElectrons said:

Well, you already wanted to get rid of the LPC interface that supports adding legacy ISA devices. So, no more PS/2 keyboards and mice, bye-bye TPMs, and good luck if you want a serial link to configure something like an on-board data modem. Get rid of SMBus? Bye-bye temperature sensors, fan controllers, etc. Maybe we should just kick out the programmable interrupt controller, I mean it's basically an 8259 from the 70s, and while we're at it we can also throw out the legacy DMA controllers that are still in there and most definitely never used. And no, you cannot necessarily replace all these things with USB or PCIe. For example, USB does not support the same level of interrupts those legacy interfaces support. And yes, most of the ARM SoCs you're referring to lack all these features, because they never even try to support this much hardware. For example, does the M1 support folks randomly tacking fans and temperature sensors onto some management bus? Because that's the sort of stuff we've been doing to x86 CPUs.

Okay, so none of those standards are 1-5 years old, so that number was just exaggeration for effect on your part. Fair enough.

21 hours ago, ImorallySourcedElectrons said:

And you are now most definitely going to say Intel and AMD should then just remove these things from laptop CPUs for the sake of power efficiency, but that's removing core functionality that's often used for managing system internals you're often not even aware of, and then we're back at a major hardware redesign once more.

Literally, yes. AMD and Intel constantly redesign their laptop platforms. Just last gen, Intel redesigned their core layout and thread scheduler from the ground up in order to accommodate a big.LITTLE design. If they can make their laptop CPUs more efficient by dropping support for serial modems, fan controllers they're not using, or any other unused I/O, then their hardware design teams should be looking into that.

 

The whole point of having separate laptop SKUs is that they make different trade-offs from the desktop CPUs.

21 hours ago, ImorallySourcedElectrons said:

That does not consider the economics of the manufacturing and the volume market when you're manufacturing chips this complex.

This is absolutely something Nvidia would need to be looking at. Apple's plan was basically to design up from phone to workstation, and Nvidia would be looking to scale down from supercomputer to workstation. It's of course not a given that this will work, but there's probably a better chance of scaling down from Grace Hopper to a workstation than there ever was of scaling up from Tegra.

21 hours ago, ImorallySourcedElectrons said:

And this is by no means about computational resources and power usage alone, it's also about software compatibility. You think switching doesn't cause major issues, even with modern virtualisation and emulation technology? M1 came out in 2020, and it's safe to say Adobe most likely got half a year to a year of heads-up given how popular they are on the Mac platform. Years later they still haven't gotten SVG export to work in Photoshop: https://helpx.adobe.com/photoshop/kb/photoshop-for-apple-silicon.html  Those sorts of issues are why good legacy support is so important; it means you don't have your users and software developers chasing down odd bugs for several years.

"It might be hard, therefore we shouldn't do it" is a pretty weak argument. I'm glad that line of reasoning didn't win the day at Apple, or with Intel's 13th gen, and I hope that it wouldn't hold much sway with AMD and Nvidia.


2 hours ago, ImorallySourcedElectrons said:

Which is application-specific and high-volume, so worth customizing for. But what @maplepants is talking about would literally compete with tablets and Chromebooks, a saturated, low-margin market.

lol a theoretical 36-core Nvidia Grace 2 workstation wouldn't be competing with Chromebooks, my dude. Unless Chromebooks where you live are way different than they are in Germany.


20 hours ago, maplepants said:

This is a snarky, but basically correct version of what I'm saying. Except that you insist on pretending that I'm arguing these ARM workstations would replace x86 in everything, no matter how many times I repeat that I think x86 won't go away. I don't think ARM workstations need to completely replace x86 in order to succeed, just like I don't think ARM servers need to replace all x86 servers in order to succeed.

You have a solution that's looking for a problem. You are expecting there to be market demand for what's basically an over-powered tablet that's incompatible with most legacy software. You cannot do what Apple did: Apple succeeded in porting because they had a limited set of hardware to emulate on a limited set of hardware platforms with a very locked-down software ecosystem, and they modified their SoC to carry the necessary peripherals to make that emulation relatively painless. But for a workstation you'll want engineering and graphical design software, so you want Windows to run on that thing if you want it to catch on commercially for the portable workstation market, so now you also need those software vendors to go along with your plan. Alternatively, you carry an x86 instruction decoder along, at which point you're dealing with a multi-ISA CPU, which is a resource management nightmare all on its own from a design perspective.

 

20 hours ago, maplepants said:

Obviously you're bothered by the locked down nature of these machines, but I think there's a big section of the market that won't care. The original Microsoft Surface didn't fail because it was locked down, it failed because it sucked.

No, the locked-down nature is why they managed to do the port; your plan would entail moving from a relatively open system to a relatively locked-down one.

 

20 hours ago, maplepants said:

I think that Nvidia should be looking at Apple and saying "hey, their ARM desktops and laptops are both powerful and profitable, let's get in on that", and the fact that the platform is super locked down won't hurt them.

Now convince anyone to write software for that platform without getting NVidia to do the ports themselves.

 

20 hours ago, maplepants said:

Okay, so none of those standards are 1-5 years old, so that number was just exaggeration for effect on your part. Fair enough.

This is just malicious ignorance from my point of view... Those TPMs you need for Windows 11? Definitely quite new. 5G modems? We certainly had those in the 90s *cough*. And it's definitely not the case that pretty much all modem chips manufactured in the last two decades use fairly similar serial interfaces for configuration purposes (and no one is going to change those because you want to prune some functionality from your silicon; they'll just tell you to bugger off). Heck, want to know a little dirty secret? A surprising number of laptops still used PS/2 to hook up their keyboard until quite recently, and I'm fairly certain some still do.

And it's not just the hardware that'd have to be redesigned; you'd have to redesign a lot of default device drivers. How do you think the likes of Windows and Linux manage to talk to so much hardware using a default driver subset? It's because these interfaces became a de facto standard in industry, leading to devices that are compatible with generic software and each other. This generic interfacing also led to second-sourcing becoming very common, which is what enables competition and the current low prices for some spectacularly complicated hardware. Your proposed "let's just switch to ARM" kind of upsets all of that, so it just doesn't make sense from an engineering and manufacturing perspective. Those legacy modes and backwards compatibility are there for a reason; we wouldn't invest effort in those things if we didn't think we'd need them.

 

20 hours ago, maplepants said:

Literally, yes. AMD and Intel constantly redesign their laptop platforms. Just last gen, Intel redesigned their core layout and thread scheduler from the ground up in order to accommodate a big.LITTLE design. If they can make their laptop CPUs more efficient by dropping support for serial modems, fan controllers they're not using, or any other unused I/O, then their hardware design teams should be looking into that.

You misunderstand where the effort would come from. It wouldn't come from the CPU manufacturers; it's that the entire rest of the hardware and software industry would have to bend to the will of your little plan to make it work. Intel and AMD are by no means the high and mighty kings of the PC food chain; if they make major design decisions, they talk to major software and hardware vendors. Take APM: you can be pretty much guaranteed that Intel, AMD, and Microsoft got around a table and talked about how they'd go about implementing it. We do the same thing in other parts of the electronics industry; I've been to plenty of standardization meetings with both competitors and suppliers present to talk about interfacing mechanisms or measurement metrics.

 

21 hours ago, maplepants said:

"It might be hard, therefore we shouldn't do it" is a pretty weak argument. I'm glad that line of reasoning didn't win the day at Apple, or with Intel's 13th gen, and I hope that it wouldn't hold much sway with AMD and Nvidia.

See above, it was relatively easy for Apple: they had to deal with a very limited hardware and software subset, and they still ran into massive compatibility issues. Now try to do the same for x86 applications designed to run on a Windows system with random compatible hardware, and make them work on ARM. It isn't for lack of trying either: Microsoft invested a lot of effort into WOW64, hence why running 32-bit programs on modern Windows 10/11 isn't that troublesome, but it's by no means perfect. The other part, which takes care of the more difficult translation of x64 to ARM64, is far less capable, and I would even call it quite problematic for things like CAD software.

 

21 hours ago, maplepants said:

lol a theoretical 36-core Nvidia Grace 2 workstation wouldn't be competing with Chromebooks, my dude. Unless Chromebooks where you live are way different than they are in Germany.

It would be competing against Chromebooks unless you get the software vendors to work along, and they have absolutely no incentive to invest thousands of hours of work to port over their software. You might want to read up on what actually killed Itanium and the various other CPU architectures that popped up over the years; there's a reason we've mostly stuck with x86/x64, PowerPC, IBM Z (which stuck around because it's entirely backwards compatible with System/360), and SPARC (until Oracle killed it off): they had mature software stacks. Lest we forget the likes of Intel's 80860, AMD's Am29k, MIPS (which kicked the bucket virtually when SGI went the way of the dodo, and for real a couple of years ago), and DEC's PDP-11/PRISM/VAX/Alpha conglomerated mess (I ain't explaining that one, just remember: killed by HP because no one wanted to buy it). There were even some very admirable attempts to get around these things, like multiple attempts at software-defined instruction sets combined with a hybrid RISC/CISC core, enabling the CPU to run x86 natively alongside its own optimized instruction set with hardware-level virtualisation separating the two while permitting the exchange of data.

 

Heck, the only reason ARM is managing to make any inroads into the more general server market is because the Linux and BSD software support exists due to mobile phones, with that support moving upstream into the mainstream software stacks over time. For example, until the Android-era came along it took major effort to run NetBSD on an embedded ARM CPU. And I wouldn't be surprised if the only reason Windows on ARM exists is because Microsoft wanted to try and capture part of the tablet and phone market with Windows Mobile, so most of the work was already done. If none of that had been done by third parties, you wouldn't be seeing ARM in the datacentre.

 

So considering the history of the last forty to fifty years, please do tell me, what advantage does Grace have over any of those hundreds of challengers that came along? The entire RISC vs CISC thing was deemed pointless by most in the industry, because the optimum is somewhere in between for most applications; the debate is mostly kept going by outdated textbooks and enthusiasts. And the instruction length issue some people here focus on was solved quite quickly in the late 80s/early 90s, which led to some funky VLIW architectures in some applications (in fact, one of the most common DSP processor architectures is a VLIW RISC monstrosity that'd probably make Cray spin in his grave). The 80s RISC push was mostly down to fears about technology scaling becoming impossible; it led to some really wild CPU designs, and the proposed RISC architectures were the sanest of the bunch and stuck around a bit longer than most as a result, though most died out over the years. ARM got lucky pulling in a few big contracts in the 90s and managing to coast through the dot-com implosion; they then made some smart moves in terms of how they provided IP, beating out competitors with insane business-ghoul licensing strategies, but it could just as well have been MIPS or something else that became popular.

 

And maybe fun to know as well: Intel has another major CISC instruction set that's been in widespread use since the late 70s, the 8051 for microcontrollers and embedded systems. Pretty much every single peripheral controller (USB host, keyboard, interrupt, ...) has an 8051 clone onboard.


13 hours ago, ImorallySourcedElectrons said:

You have a solution that's looking for a problem

In the same way that an RTX 4070 is a solution looking for a problem, so would the workstation class Grace CPU be. 

 

13 hours ago, ImorallySourcedElectrons said:

for a workstation you'll want engineering and graphical design software, so you want Windows to run on that thing if you want it to catch on commercially for the portable workstation market, so now you also need those software vendors to go along with your plan.

I disagree that these things would have to run CAD or Photoshop on day 1 to be any good. But I do agree that if Nvidia is seriously looking into doing this, they should be talking to Microsoft. The reason why I think Microsoft would be listening is that Microsoft wants very badly to make their Surface X line a success, but the current best setup for developing Windows on ARM software is a Mac running Windows in a VM. 

 

The Microsoft Dev Kit 2023 isn't great, and if Unity truly is going to make a Windows on ARM version, they and any devs who use it will need better hardware.

 

13 hours ago, ImorallySourcedElectrons said:

This is just malicious ignorance from my point of view...

You claimed that a transition to ARM would require dropping 1-5 year old standards. I didn't make you claim that, dude, and it's not my fault it wasn't true.

13 hours ago, ImorallySourcedElectrons said:

So considering the history of the last forty to fifty years, please do tell me, what advantage does Grace have over any of those hundreds of challengers that came along?

Most of your post is just whinging that adopting ARM would be hard, and that hard things aren't worth doing. But I can say why I think that, at this moment, it's worth it for Nvidia to be looking into making an ARM workstation.

 

As you already pointed out, the work for porting Linux has already been done, and that's a huge advantage for these things. When Nvidia is testing internally what this workstation would look like, they can already install Ubuntu on it and have a working operating system. That's huge, and if it didn't exist it would probably change the calculus here substantially.

 

The other factors that make it worthwhile for Nvidia to look into this have to do with the rest of the industry. Microsoft is trying to make ARM tablets, the gaming market on Android is finally expanding beyond casinos for children, and AI / ML is becoming significantly more important to a lot of developers' workflows and apps.

 

Consider just Microsoft for a minute: if they want their Surface Pro X line to have any hope of competing against the iPad, then they need a better developer story than "buy a Mac and run a VM". Qualcomm wants to convince MS to do this by scaling up their Snapdragon line, but Nvidia might have a better shot at scaling down their Grace line. The best outcome for Microsoft would be if both turned out to work: scaling up Snapdragon to something like an i5 level of performance but with incredible battery life, and scaling down the Grace Hopper SoC to something like a Threadripper Pro in terms of price and performance.

 

Laptops are the largest market for "desktop" computers by far. Laptop buyers are a huge portion of Microsoft's Windows licenses and Office 365 subscriptions. Right now, any laptop buyer who cares about battery life should be looking at MacBooks and almost nothing else. The "people who care about battery life" segment of the laptop market isn't one that Microsoft should just completely cede to Apple due to the legacy requirements of x86 CPUs. And, if Microsoft wants to have any hope of convincing developers to port their software to their ARM tablets / laptops, then they've got to have an ARM workstation worth running.

 

Even when something sounds hard, it can still be worth doing.


22 hours ago, ImorallySourcedElectrons said:

How do you think the likes of Windows and Linux manage to talk to so much hardware using a default driver subset? It's because these interfaces became a de facto standard in industry, leading to devices that are compatible with generic software and each other. This generic interfacing also led to second-sourcing becoming very common, which is what enables competition and the current low prices for some spectacularly complicated hardware. Your proposed "let's just switch to ARM" kind of upsets all of that, so it just doesn't make sense from an engineering and manufacturing perspective.

ACPI my beloved.

And the already-existing workstation/server ARM CPUs do support ACPI in most cases, but that's a different use case anyway.
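As a rough illustration of what ACPI buys you there (a hedged sketch, not anything from the thread; it assumes a Linux system booted via ACPI rather than a device tree, and the sysfs path is the standard one but may need root to read):

```python
# Rough, hypothetical sketch: on an ACPI-booted Linux system (x86 or an
# ACPI-enabled ARM server), the firmware tables show up under sysfs and
# generic drivers use them for discovery and power management.
import os

ACPI_TABLES = "/sys/firmware/acpi/tables"  # standard sysfs path on Linux

try:
    names = sorted(os.listdir(ACPI_TABLES))
except (FileNotFoundError, PermissionError):
    names = []

if names:
    for name in names:
        path = os.path.join(ACPI_TABLES, name)
        if os.path.isfile(path):
            # table blobs like DSDT, MADT, MCFG, ... (reading them needs root)
            print(f"{name:10s} {os.path.getsize(path):6d} bytes")
else:
    print("No ACPI tables visible; likely a device-tree boot or missing permissions.")
```

The same generic kernel code paths that key off those tables on an x86 server work on an ACPI-enabled ARM server, which is the point being made here.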

23 hours ago, ImorallySourcedElectrons said:

But for a workstation you'll want engineering and graphical design software, so you want Windows to run on that thing if you want it to catch on commercially for the portable workstation market

For the kind of market Nvidia is targeting, they'd only need Linux support for a Grace workstation since all it'd be doing is ML-related stuff; they could just bolt on ACPI and build a really regular-ish workstation out of that without much issue.

 

9 hours ago, maplepants said:

But I do agree that if Nvidia is seriously looking into doing this

I don't think they are; the market that Grace targets is mostly Linux-based.

 

9 hours ago, maplepants said:

and scaling down the Grace Hopper SoC to something like a Threadripper Pro in terms of price and performance.

Grace Hopper is not meant to compete with a Threadripper in any way. The CPU alone would suck for that. It's just a glorified co-processor to the GPU with a blazing fast interconnect that's not achievable with any existing x86 CPU.

 

9 hours ago, maplepants said:

And, if Microsoft wants to have any hope of convincing developers to port their software to their ARM tablets / laptops, then they've got to have an ARM workstation worth running. 

lol Windows is a second-class citizen to Nvidia when it comes to ML. It actually takes ages for the drivers on Windows to catch up to the Linux ones when it comes to ML performance.


10 hours ago, maplepants said:

In the same way that an RTX 4070 is a solution looking for a problem, so would the workstation class Grace CPU be. 

No, this is an entirely different thing. You're talking about a plug-in card that's compatible with existing, already-deployed systems and lining it up against an entirely new architecture that needs to be developed for.

 

10 hours ago, maplepants said:

I disagree that these things would have to run CAD or Photoshop on day 1 to be any good. But I do agree that if Nvidia is seriously looking into doing this, they should be talking to Microsoft. The reason why I think Microsoft would be listening is that Microsoft wants very badly to make their Surface X line a success, but the current best setup for developing Windows on ARM software is a Mac running Windows in a VM. 

If you don't have software support on day one it ain't going to be adopted. This is literally one of the things that ended up killing Itanium.

 

And no, Microsoft has plenty of options if they want a powerful ARM processor; there are plenty of design houses and semiconductor companies that'd happily take them up on the offer if they put the money down. It's just that they don't have the volume to run an ASIC like Apple did; Apple managed to make M1/M2 because they were guaranteed to have a large market for it. How many Surface laptops do you think Microsoft stands to sell? If it's less than a couple of million, I wouldn't even bother looking at the ASIC route: your mask set for a chip like that in a modern process node is around €15 million; throw in design costs, yield on such a complex chip, baseline cost, fab time, etc., and we're looking at something that'll most likely cost more than sourcing a latest-generation chip from Intel or AMD. I fear you really misunderstand the economics of doing this. Apple most likely did it because they realised they could increase their profit margin significantly by merging their phone/tablet SoC design into their laptop CPUs; Microsoft does not have that incentive. Simultaneously, companies like TI and Qualcomm have little reason to make the economic investment to build a CPU with such a small prospective market. For example, Texas Instruments (TI) literally left the CPU market for the aforementioned economic reasons quite early on, and they had a moderately successful line of x86 CPUs that were in fact performance competitive.

 

10 hours ago, maplepants said:

The Microsoft Dev Kit 2023 isn't great and if Unity truly is going to make a Windows on ARM version they and any devs who use it will need better hardware. 

Alternatively, the Unity ARM compatibility is mostly derived from the fact that they're probably aiming for the portable console market.

 

10 hours ago, maplepants said:

You claimed that a transition to ARM would require dropping 1-5 year old standards. I didn't make you claim that, dude, and it's not my fault it wasn't true.

Stop twisting my words, I said this: "You would literally end up breaking support for hardware and software that's sometimes only a year old at the time of release."

 

10 hours ago, maplepants said:

Most of your post is just whinging that adopting ARM would be hard, and that hard things aren't worth doing. But I can say why I think that, at this moment, it's worth it for Nvidia to be looking into making an ARM workstation.

 

As you already pointed out, the work for porting Linux has already been done, and that's a huge advantage for these things. When Nvidia is testing internally what this workstation would look like, they can already install Ubuntu on it and have a working operating system. That's huge, and if it didn't exist it would probably change the calculus here substantially.

 

The other factors that make it worthwhile for Nvidia to look into this have to do with the rest of the industry. Microsoft is trying to make ARM tablets, the gaming market on Android is finally expanding beyond casinos for children, and AI / ML is becoming significantly more important to a lot of developers' workflows and apps.

 

Consider just Microsoft for a minute: if they want their Surface Pro X line to have any hope of competing against the iPad, then they need a better developer story than "buy a Mac and run a VM". Qualcomm wants to convince MS to do this by scaling up their Snapdragon line, but Nvidia might have a better shot at scaling down their Grace line. The best outcome for Microsoft would be if both turned out to work: scaling up Snapdragon to something like an i5 level of performance but with incredible battery life, and scaling down the Grace Hopper SoC to something like a Threadripper Pro in terms of price and performance.

 

Laptops are the largest market for "desktop" computers by far. Laptop buyers are a huge portion of Microsoft's Windows licenses and Office 365 subscriptions. Right now, any laptop buyer who cares about battery life should be looking at MacBooks and almost nothing else. The "people who care about battery life" segment of the laptop market isn't one that Microsoft should just completely cede to Apple due to the legacy requirements of x86 CPUs. And, if Microsoft wants to have any hope of convincing developers to port their software to their ARM tablets / laptops, then they've got to have an ARM workstation worth running.

 

Even when something sounds hard, it can still be worth doing.

You just see it as a cost for NVidia, but the actual cost falls on hundreds of software and hardware developers around the world; that's what killed all those alternative CPU architectures I listed. Some of them were way better than x86, but they required everyone to port over, even when major vendors like Microsoft were pushing things along. You know Microsoft had whole teams dedicated to rewriting parts of NT 4.0 for the Alpha ISA, with direct support from DEC/Compaq, until it got the axe?

 

And no, they can't just "install Ubuntu on it"; they're going to spend quite some time writing drivers and low-level code. Porting Linux or BSD to another flavour of ARM processor isn't as difficult as porting to an entirely new architecture, but it's by no means a snap of the finger job. And for the love of all that's holy, why would you run any machine learning task on a laptop? Look at the efficiency numbers of laptop CPUs; this thing isn't going to be that much better when facing the same constraints. This would be IBM's Cell processor line-up all over again... And NVidia doesn't have any economic incentive to throw what'd basically be €50 million down the drain with no guarantee of any successful outcome. And Qualcomm is dragging their feet for a reason; they've been part of enough idiotic ventures like this that almost killed the company to know when to cut their losses.

 

And since you're focusing so much on battery life: even my ThinkPad workstation with a 12th gen i7, a dedicated GPU, power-hungry DDR5, and a high-brightness screen manages upwards of ten hours on battery if I restrict myself to browsing, Word, and email.

 

1 hour ago, igormp said:

ACPI my beloved.

That's part of it, but ACPI only solves part of the issue (to grossly simplify what's going on: discovery and power management). It doesn't magically enable functional hardware support, nor does it magically let you talk to a device without the hardware interface. For things like modem chips (sorry, easy example) you have the quasi-standardized AT commands provided through a serial interface. Even if I have no clue which modem I'm talking to, I'll probably be able to send some data at a low data rate. Dump the serial interface and you're in trouble; that requires either a workaround or a redesign of the peripheral chip.
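For anyone who hasn't poked at that serial/AT pattern, here is a minimal hedged sketch (the /dev/ttyUSB2 device node, the baud rate, and the use of the third-party pyserial library are all assumptions for illustration):

```python
# Hypothetical sketch of the quasi-standardized AT-command flow over a plain
# serial interface; the device node and baud rate are made up, and "serial"
# is the third-party pyserial package (pip install pyserial).
import serial

PORT = "/dev/ttyUSB2"  # hypothetical modem device node; varies per modem/OS

ser = serial.Serial(PORT, baudrate=115200, timeout=2)
try:
    for cmd in (b"AT", b"ATI", b"AT+CSQ"):  # probe, identify, signal quality
        ser.write(cmd + b"\r\n")
        reply = ser.read(256)               # modem answers e.g. b"...OK\r\n"
        print(cmd.decode(), "->", reply.decode(errors="replace").strip())
finally:
    ser.close()
```

That little loop looks the same whether the chip behind the port is an old 2G part or a new 5G one, which is exactly why the legacy serial interface sticks around.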

 

1 hour ago, igormp said:

For the kind of market Nvidia is targeting, they'd only need Linux support for a Grace workstation since all it'd be doing is ML-related stuff; they could just bolt on ACPI and build a really regular-ish workstation out of that without much issue.

I just don't think it makes much sense to put it in a laptop or desktop. We rarely if ever run heavy computational loads on our workstations these days, it's not cost effective to give everyone the power to do it since it's only necessary for short time periods. So centralizing the (quite expensive) computational resources is significantly cheaper.


13 minutes ago, ImorallySourcedElectrons said:

Porting Linux or BSD to another flavour of ARM processor isn't as difficult as porting to an entirely new architecture, but it's by no means a snap of the finger job.

Tbqh, it is: just write the appropriate device tree (DT) for it and you should be good to go. The real problem is upstreaming that DT, which is where most vendors fall down, leaving you stuck on specific kernel versions.

 

14 minutes ago, ImorallySourcedElectrons said:

That's part of it, but ACPI only solves part of the issue (to grossly simplify what's going on: discovery and power management), it doesn't magically enable functional hardware support, nor does it magically enable you to talk to a device without the hardware interface. For things like modem chips (sorry, easy example) you have the quasi-standardized AT commands provided through a serial interface. Even if I have no clue which modem I'm talking to, I'll probably be able to send some data at a low data rate. Dump the serial interface and you're in trouble, requires either a work around or a redesign of the peripheral chip.

It was mostly a joke. Having PC-compatible ARM systems is not an issue, but it goes against what the other guy you're discussing with is preaching (and has nothing to do with the ISA to begin with).

15 minutes ago, ImorallySourcedElectrons said:

I just don't think it makes much sense to put it in a laptop or desktop. We rarely if ever run heavy computational loads on our workstations these days, it's not cost effective to give everyone the power to do it since it's only necessary for short time periods. So centralizing the (quite expensive) computational resources is significantly cheaper.

Laptop? Heck no, that'd be dumb af.

 

But as a workstation it kinda makes sense. Think of Nvidia's DGX Station, but meant for someone who is developing for a GH cluster. Having a small-scale version of your cluster to iterate and develop stuff on before deploying it to the full cluster is amazing.
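A rough sketch of that workflow, assuming a PyTorch stack (the layer sizes and batch are made up; the point is only that the same device-agnostic script runs on a small local box and, unchanged, on the cluster):

```python
# Hypothetical, minimal "develop locally, run on the cluster later" loop.
# Nothing Grace-specific: the same device-agnostic script runs on a local
# aarch64 or x86 box, with or without an NVIDIA GPU, and on a bigger node.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("running on:", device)

model = torch.nn.Sequential(
    torch.nn.Linear(1024, 1024),
    torch.nn.ReLU(),
    torch.nn.Linear(1024, 10),
).to(device)

x = torch.randn(64, 1024, device=device)  # toy batch
loss = model(x).square().mean()           # dummy objective
loss.backward()
print("backward pass OK, loss =", float(loss))
```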


12 hours ago, ImorallySourcedElectrons said:

NVidia doesn't have any economic incentive to throw what'd basically be €50 million down the drain with no guarantee of any successful outcome.

I think this is part of the divide here, and that's why I want to start with it. I look at Nvidia's profits and think that a €50 million R&D project like this is absolutely worthwhile. We're talking about a fraction of a percent of their yearly revenue for this project.

 

It's hard for me to imagine that Nvidia wouldn't look seriously at an investment this minor. 

13 hours ago, ImorallySourcedElectrons said:

And no, they can't just "install Ubuntu on it", they're going to spend quite some time writing drivers and low level code. Porting Linux or BSD to another flavour of ARM processor isn't as difficult as porting to an entirely new architecture, but it's by no means a snap of the finger job.

The workstation project would be taking advantage of the work that was already done to get this into the data centre. Remember, the reason this is even a thing is the potential to piggyback off of their servers, just like Apple did by piggybacking off of their phones.

13 hours ago, ImorallySourcedElectrons said:

Microsoft has plenty of options if they want a powerful ARM processor, there are plenty of design houses and semiconductor companies that'd happily take them up on the offer if they put the money down.

No argument here. The difference is how much Microsoft would need to contribute to different partners. We have to remember the context here: Nvidia's already got these supercomputer chips, which start at 72 cores. For other potential MS partners in this space, the question is whether they can build something powerful enough; here, the question is whether, by scaling down, Nvidia can build something cheap enough.

On 6/5/2023 at 8:52 PM, ImorallySourcedElectrons said:

 

And since you're focusing so much on battery life, even my ThinkPad workstation with a 12th gen i7, a dedicated GPU, power hungry DDR5, and a high brightness screen manages upwards of ten hours on battery if I restrict myself to browsing, word and emails

I added some emphasis here because surely you can see that "my laptop has great hardware, and as long as I never use the compute potential the battery is fine" is not a good argument. A ThinkPad like that won't even give you the full power of your CPU and GPU on battery because the system just can't handle it.

 

The M1 and M2 give you the exact same amount of power on battery as they do plugged in, and still get longer battery life than your ThinkPad.

 

This is what makes me think Microsoft should be doing something so that they don't cede nearly the entire "people who care about using their laptop on battery" portion of the market to Apple.

On 6/5/2023 at 8:52 PM, ImorallySourcedElectrons said:

Alternatively, the Unity ARM compatibility is mostly derived from the fact that they're probably aiming for the portable console market.

This is Unity *on* ARM, not just builds that target ARM. You've been able to target ARM for iOS and Android builds in Unity for ages. And Unity 3.0 already added support for Apple silicon. This announcement was about Unity adding support for building on Windows ARM and presumably Ubuntu ARM.

 

If they're actually going to end up shipping that, then there needs to be a non-Apple ARM workstation worth running, because I can't imagine they'd care about supporting people who have an Apple Silicon Mac but prefer to do their Unity work inside a Windows or Linux VM.

On 6/5/2023 at 7:27 PM, igormp said:

Grace Hopper is not meant to compete with a Threadripper in any way. The CPU alone would suck for that. It's just a glorified co-processor to the GPU with a blazing fast interconnect that's not achievable with any existing x86 CPU.

It's entirely possible that these CPUs are too specialized, and don't translate well to a general purpose computer. We'll have to see what actual reviews of the NVIDIA HGX and OVX servers are like. 

 

The part of the datasheet that set off my "a scaled down version could make a decent workstation" alarm bells was

Quote

The New Standard for Software Infrastructure
The NVIDIA Grace CPU follows mainstream CPU design principles, is programmed just like any other server CPU, and is backed by the full NVIDIA ecosystem. All major Linux distributions and the vast collections of software packages they provide work perfectly and without modification on the NVIDIA Grace CPU. To enable developers to jump-start their work, the NVIDIA Grace CPU Superchip is supported by the full NVIDIA software stack, including NVIDIA HPC, NVIDIA AI, and NVIDIA Omniverse™.

and the way they bragged about microservices, which are pretty generic workloads.

On 6/5/2023 at 7:27 PM, igormp said:

lol Windows is a second-class citizen to Nvidia when it comes to ML. It actually takes ages for the drivers on Windows to catch up to the Linux one when it comes to performance for ML.

No question. And they'll probably stay that way, since for the near to medium term, models will probably be trained exclusively on huge servers in the cloud.

 

But Microsoft is adding a bunch of AI / ML enhancements to their software stack so the possibility of selling developers workstations which are even better at AI assisted workloads is definitely something they're looking into.

 

 


7 hours ago, maplepants said:

The part of the datasheet that set off my "a scaled down version could make a decent workstation" alarm bells was

Quote

The New Standard for Software Infrastructure
The NVIDIA Grace CPU follows mainstream CPU design principles, is programmed just like any other server CPU, and is backed by the full NVIDIA ecosystem. All major Linux distributions and the vast collections of software packages they provide work perfectly and without modification on the NVIDIA Grace CPU. To enable developers to jump-start their work, the NVIDIA Grace CPU Superchip is supported by the full NVIDIA software stack, including NVIDIA HPC, NVIDIA AI, and NVIDIA Omniverse™.

and the way they bragged about microservices, which are pretty generic workloads.

They had to include this to make it clear that you can easily migrate from your previous A100 server (be it on the cloud or locally, x86 or ARM) without issues.

 

You don't even need to scale down anything; why not a single workstation with a single Grace Hopper unit? It should be priced similarly to the 4x A100 workstations they sell currently, and 72 cores is actually even less than current top-end workstation offerings, as is its maximum available RAM.


On 6/7/2023 at 10:20 AM, maplepants said:

I think this is part of the divide here, and that's why I want to start with it. I look at Nvidia's profits and think that a €50 million R&D project like this is absolutely worthwhile. We're talking about a fraction of a percent of their yearly revenue for this project.

 

It's hard for me to imagine that Nvidia wouldn't look seriously at an investment this minor. 

The workstation project would be taking advantage of the work that was already done to get this into the data centre. Remember, the reason this is even a thing is the potential to piggyback off of their servers, just like Apple did by piggybacking off of their phones.

No argument here. The difference is how much Microsoft would need to contribute to different partners. We have to remember the context here: Nvidia's already got these supercomputer chips, which start at 72 cores. For other potential MS partners in this space, the question is whether they can build something powerful enough; here, the question is whether, by scaling down, Nvidia can build something cheap enough.

I added some emphasis here because surely you can see that "my laptop has great hardware, and as long as I never use the compute potential the battery is fine" is not a good argument. A ThinkPad like that won't even give you the full power of your CPU and GPU on battery because the system just can't handle it.

 

The M1 and M2 give you the exact same amount of power on battery as they do plugged in, and still get longer battery life than your ThinkPad.

 

This is what makes me think Microsoft should be doing something so that they don't cede nearly the entire "people who care about using their laptop on battery" portion of the market to Apple.

This is Unity *on* ARM, not just builds that target ARM. You've been able to target ARM for iOS and Android builds in Unity for ages. And Unity 3.0 already added support for Apple silicon. This announcement was about Unity adding support for building on Windows ARM and presumably Ubuntu ARM.

 

If they're actually going to end up shipping that, then there needs to be a non-Apple ARM workstation worth running, because I can't imagine they'd care about supporting people who have an Apple Silicon Mac but prefer to do their Unity work inside a Windows or Linux VM.

It's entirely possible that these CPUs are too specialized, and don't translate well to a general purpose computer. We'll have to see what actual reviews of the NVIDIA HGX and OVX servers are like. 

 

The part of the datasheet that set off my "a scaled down version could make a decent workstation" alarm bells was

and the way they bragged about microservices, which are pretty generic workloads.

No question. And they'll probably stay that way, since for the near to medium term, models will probably be trained exclusively on huge servers in the cloud.

 

But Microsoft is adding a bunch of AI / ML enhancements to their software stack so the possibility of selling developers workstations which are even better at AI assisted workloads is definitely something they're looking into.

 

 

I think you really overestimate the cost-value proposition for this type of product; I just see massive costs at very high risk: gearing up for production, training all the personnel to provide support, keeping all those SKUs in stock for warranty replacements, rewriting large software stacks, etc., all for what's basically a product with a very limited market.

 

And what do you think people do on an M1/M2 laptop that's anything other than browsing/word processing/watching a video/...? For a lot of those activities I'd hazard a guess that the main power consumer is the screen, and that goes for both the MacBook and other laptops. The advantage Apple has is that they have a very limited hardware-software ecosystem, so they can optimize things far further than any non-Mac laptop vendor ever could.


10 hours ago, ImorallySourcedElectrons said:

And what do you think people do on an M1/M2 laptop that's anything other than browsing/word processing/watching a video/...? For a lot of those activities I'd hazard a guess that the main power consumer is the screen, and that goes for both the MacBook and other laptops. The advantage Apple has is that they have a very limited hardware-software ecosystem, so they can optimize things far further than any non-Mac laptop vendor ever could.

That was just as true before they switched to Apple Silicon as it is now. And yet, before the transition, their Intel laptops did not get substantially better battery life than other Intel laptops running the same CPUs. Optimization in their display driver or Wi-Fi chip was just as possible on the 2020 MacBook Pro line with Intel chips as it was on the 2021 MacBooks running M1 Pro and M1 Max.

 

If you were right about what drains a laptop battery, then we wouldn't have seen such a massive jump between the 13" 2020 MacBook Pro and the 14" 2021 MacBook Pro.

 

The truth is that the move to ARM had a huge impact on Apple's battery life. And if Microsoft wants Windows laptops to compete on battery life it's going to take some investment. 


3 hours ago, maplepants said:

That was just as true before they switched to Apple Silicon as it is now. And yet, before the transition, their Intel laptops did not get substantially better battery life than other Intel laptops running the same CPUs. Optimization in their display driver or Wi-Fi chip was just as possible on the 2020 MacBook Pro line with Intel chips as it was on the 2021 MacBooks running M1 Pro and M1 Max.

 

If you were right about what drains a laptop battery, then we wouldn't have seen such a massive jump between the 13" 2020 MacBook Pro and the 14" 2021 MacBook Pro.

 

The truth is that the move to ARM had a huge impact on Apple's battery life. And if Microsoft wants Windows laptops to compete on battery life it's going to take some investment. 

This has absolutely nothing to do with it being ARM or x86, please get that idea out of your head.

 

Can't find the Intel paper that goes into detail about it for their CPUs, but this one seems about right for an x86 CPU: https://www.usenix.org/system/files/conference/cooldc16/cooldc16-paper-hirki.pdf 

 

Some relevant images from said paper:
[figures from the paper]

 

They also (correctly) point out that the introduction of things like AVX will increase power consumption, but that's an acceptable trade-off given the performance gain per unit of energy that SIMD provides overall (see CPU resource allocation techniques to understand why long lists of SIMD instructions are a godsend), which is also why you're seeing such instruction set extensions in many mobile processors. And now, with machine learning becoming more prevalent, you're most likely going to see the bigger brother of AVX-512 pop up in the next couple of years. So you can't exactly hold that against the x86 architecture...
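To make the throughput side of that trade-off concrete, here's a rough, hypothetical Python/NumPy sketch (NumPy and the array sizes are my own choices; NumPy's compiled loops use SSE/AVX-style SIMD where the build supports it, so the measured gap mixes SIMD with plain interpreter overhead):

```python
# Rough illustration only: one-element-at-a-time arithmetic in the interpreter
# vs. a single vectorized call over whole arrays. The vectorized path runs as
# a compiled loop that modern NumPy builds implement with SIMD where available.
import timeit
import numpy as np

n = 1_000_000
xs = np.random.rand(n).astype(np.float32)
ys = np.random.rand(n).astype(np.float32)

def scalar_dot():
    total = 0.0
    for a, b in zip(xs, ys):      # one multiply-add per Python-level step
        total += float(a) * float(b)
    return total

def vectorized_dot():
    return float(np.dot(xs, ys))  # one call over the whole arrays

print("scalar:    ", timeit.timeit(scalar_dot, number=1), "s")
print("vectorized:", timeit.timeit(vectorized_dot, number=1), "s")
```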

 

Also, a fairly clear statement from the author's end, since you don't want to believe any of us here:

Quote

We conclude that the x86-64 instruction set is not a major hindrance in producing an energy-efficient processor architecture.

 

What Apple did is make an ASIC which minimizes the amount of dark silicon for their application and removes redundant hardware interfaces, which brings down power consumption, hoorah. If they had put a couple of billion on the table and told Intel to do it for them instead of doing it in-house, Intel would have gladly designed a just as capable x86 CPU for them. Apple had ulterior motives to take it in-house, but they can only afford to do that due to the volume they run. Meanwhile, the rest of the market doesn't work with a locked-down ecosystem. So the reduced power consumption has absolutely nothing to do with the ARM architecture, and everything to do with closing down the ecosystem and tight-knit integration between hardware and software.


28 minutes ago, ImorallySourcedElectrons said:

If they had put a couple of billion on the table and told Intel to do it for them instead of doing it in-house, Intel would have gladly designed a just as capable x86 CPU for them

In 2022 Intel made over $63 billion in revenue. Your argument is that all this time they were only ever one $2 or $3 billion project away from making laptop CPUs just as good as the M1 Max, but just weren't in the mood to do so?

 

I think if Intel or AMD could make laptop CPUs that are as efficient as Apple's, then we would have seen them by now.


5 hours ago, maplepants said:

I think if Intel or AMD could make laptop CPUs that are as efficient as Apple's, then we would have seen them by now.

Not sure about Intel, but AMD's 5nm CPUs are pretty much as efficient as Apple's now. The problem lies with peripherals and other stuff not really related to the ISA or CPU arch, but rather the entire system architecture.


5 hours ago, maplepants said:

In 2022 Intel made over $63 billion in revenue. Your argument is that all this time they were only ever one $2 or $3 billion project away from making laptop CPUs just as good as the M1 Max, but just weren't in the mood to do so?

 

I think if Intel or AMD could make laptop CPUs that are as efficient as Apple's, then we would have seen them by now.

You really fail to grasp the difference between general purpose hardware and ASICs.

 

And companies are beholden to their shareholders; you don't just get to spend millions on vanity projects because you can. If that were allowed, I'd have done some pretty wacky stuff by now at work.

 

45 minutes ago, igormp said:

Not sure about Intel, but AMD's 5nm CPUs are pretty much as efficient as Apple's now. The problem lies with peripherals and other stuff not really related to the ISA or CPU arch, but rather the entire system architecture.

Intel is at a major disadvantage: they use monolithic dies, which result in very high power densities, which cause an increase in leakage current, which causes more power usage, which... (you get the idea). Overall, AMD and Intel are quite evenly matched when it comes to the amount of work they can perform in a single clock cycle for a given amount of die area for practical applications. If you go for theoretical limits, Intel is usually a little bit ahead, but part of that is down to them being their own fab: PDKs under NDA terms are good, but never as good as the unrestricted access you get when your company owns the process node in the first place.


7 hours ago, igormp said:

Not sure about Intel, but AMD's 5nm CPUs are pretty much as efficient as Apple's now. The problem lies with peripherals and other stuff not really related to the ISA or CPU arch, but rather the entire system architecture.

From what I can remember, AMD still might have worse idle power and doesn't have the same deeplink* system as of yet.

7 hours ago, ImorallySourcedElectrons said:

Intel is at a major disadvantage, they use monolithic dies which result in very high power densities, which causes an increase in leakage current, which causes more power usage, which ... (You get the idea.) Overall, AMD and Intel are quite evenly matched when it comes to the amount of work they can perform in a single clock cycle for a given amount of die area for practical applications.

Seems like Intel might begin with PowerVia?

Quote

Intel is the first in the industry to implement backside power delivery on a product-like test chip, achieving the performance needed to propel the world into the next era of computing. PowerVia, which will be introduced on the Intel 20A process node in the first half of 2024, is Intel’s industry-leading backside power delivery solution. It solves the growing issue of interconnect bottlenecks in area scaling by moving power routing to the backside of a wafer.


1 minute ago, Quackers101 said:

AMD still might have worse idle power

For their chiplet-based products, yes, and that's mostly caused by the IO die. For their monolithic ones (such as the ones found in mobile devices), I believe that's not an issue.

 

2 minutes ago, Quackers101 said:

some hyperlink system as of yet.

WDYM by hyperlink? Something to link multiple dies together? That'd be their infinity fabric, but that's not really needed for their smaller chips found in mobile parts.


12 hours ago, ImorallySourcedElectrons said:

You really fail to grasp the difference between general purpose hardware and ASICs.

 

And companies are beholden to their shareholders; you don't just get to spend millions on vanity projects because you can. If that were allowed, I'd have done some pretty wacky stuff by now at work.

I know your argument depends on pretending the M1 is an ASIC and not a general-purpose SoC, but I just don't see it like that. Check out what some non-Apple people writing software for this thing have to say (https://asahilinux.org/2023/06/opengl-3-1-on-asahi-linux/); it's nothing like the guy a few years ago who ported Doom to his graphing calculator.

 

And if AMD and Intel consider making competitive laptop chips a "vanity project" then their shareholders should be pissed. It's the biggest section of the "desktop" market by a landslide and I wouldn't really call trying to compete in that sector "wacky".

13 hours ago, igormp said:

Not sure about Intel, but AMD's 5nm CPUs are pretty much as efficient as Apple's now. The problem lies with peripherals and other stuff not really related to the ISA or CPU arch, but rather the entire system architecture.

And that's the area they should be going after. I don't really think that the instruction set has much impact on battery life, as much as an architecture move gives you cover for dropping lots of old cruft. But for AMD and Intel, they should at least be looking into laptop CPUs without the cruft, like AMD did for the PlayStation.

 

Because even if AMD and Intel decide the data centre and high-end workstation / gaming PC are the most profitable slices of the market, Microsoft doesn't want to cede the entire laptop market to Apple. This is why Windows on ARM exists, so that they can explore options beyond AMD and Intel.


5 hours ago, Quackers101 said:

From what I can remember, AMD still might have worse idle power and doesn't have the same deeplink* system as of yet.

Yes, but what matters more on chips is power density. Using 1 W in a square centimetre is far less problematic than 1 W in a square millimetre. That being said, AMD is massively disadvantaged in this sense because they have to include line drivers on all the dies for their chip interconnects.

 

6 hours ago, Quackers101 said:

Seems like intel might begin with powervia?

That solves an entirely different problem: you've got to shove quite a lot of power into these chips, and the metal layers and bond pads being used for this are running into their limits. Just making the metal thicker or switching out aluminium for something more conductive (e.g., copper) has issues, either electrical, mechanical, or on a more basic physics level (e.g., electromigration). If you can route power over the back of the chip, you can get it to where it needs to be without interfering with your signal routing.

 

56 minutes ago, maplepants said:

I know your argument depends on pretending the M1 is an ASIC and not a general-purpose SoC, but I just don't see it like that. Check out what some non-Apple people writing software for this thing have to say (https://asahilinux.org/2023/06/opengl-3-1-on-asahi-linux/); it's nothing like the guy a few years ago who ported Doom to his graphing calculator.

"Pretending", it's literally the definition of an ASIC, they made their own chip for their own specific applications. It's not because a chip is programmable or can execute instructions that it ain't an ASIC. Meanwhile, Intel and AMD release chips without really knowing what they're going to be used in. I've seen plenty of Intel chips (never AMD surprisingly after the mid-2000s Geode fiasco 🤔) that were pushed into service on custom PCBs doing all sorts of tasks. You'd never see the M1 chip pushed into that role, because it misses the intentional design features that make it usable for such applications, while that's where Intel's Atom line really shines: NAS, media servers, etc.  And that's also where Intel is competing with the ARM SoCs, not on their main CPU line where only AMD and sort of IBM (if they can be arsed to work on Power PC in one of their flurries of activity) compete.

 

1 hour ago, maplepants said:

And if AMD and Intel consider making competitive laptop chips a "vanity project" then their shareholders should be pissed. It's the biggest section of the "desktop" market by a landslide and I wouldn't really call trying to compete in that sector "wacky".

And that's the area they should be going after. I don't really think that the instruction set has much impact on battery life, as much as an architecture move gives you cover for dropping lots of old cruft.

Yes, because you clearly know more about market demand than the companies who actually sell these things... The ones who tried what you propose are all in the business graveyard; I listed several of them a couple of posts ago, so check their history to see why what you're proposing is a very bad idea. I'd rather not see Intel or AMD end up in HP Enterprise's business graveyard like most of the others. But then again, you seem hell-bent on comparing the M1/M2 SoC with a general-purpose CPU, which it very much is not.

1 hour ago, maplepants said:

But for AMD and Intel, they should just be at least looking into laptop CPUs without the cruft like AMD did for the Playstation. 

Both also make SoCs for embedded applications, but there's no real market for a SoC with the performance of the M1/M2 unless you have a very specific customer (e.g., Sony) that's willing to purchase a high volume and fund the development cost upfront.

 

1 hour ago, maplepants said:

Because even if AMD and Intel decide the data centre and high end workstation / gaming PC are the most profitable slices of the market; 

Both are doing way more than that; please actually check which products they're selling for which applications before making such statements.

 

1 hour ago, maplepants said:

Microsoft doesn't want to cede the entire laptop market to Apple.

They don't have to; almost none of the major commercial software vendors are going to bother porting their software over to Mac, and compatibility layers have massive issues that enterprises don't want to deal with. And in the consumer market, both have to fight for market share against tablets at this point.

 

2 hours ago, maplepants said:

This is why Windows on ARM exists, so that they can explore options beyond AMD and Intel. 

lol, no. You have heard of Windows CE/Mobile/Embedded/Phone, right? How that grew is a messy, messy story, so I'll leave that to Wikipedia. But what I'm trying to say here is that Windows's support for ARM dates back to the late 90s and is nothing new. Microsoft then pushed it into mainstream development to try and compete with Android tablets and phones, and everyone kind of expects them to release some sort of mobile Xbox handheld at some point in the (near) future, based on rumours of who they've been working with.


24 minutes ago, ImorallySourcedElectrons said:

lol, no. You have heard about Windows CE/Mobile/Embedded/Phone, right? How that grew is a messy messy story, so I leave that to Wikipedia. But what I'm trying to say here is that, Windows's support for ARM dates back to the late 90s and is nothing new. Microsoft then pushed it into mainstream development to try and compete with Android tablets and phones, and everyone kind of expects them to release some sort of mobile XBox handheld at some point in the (near) future based on rumours of who they've been working together with.

I think this is the core of the disagreement. You're committed to the idea that modern Windows on ARM must exist for the same reasons as, and be in every important way the same as, Windows CE.

 

I just don't think that's the case. Sure, previous ARM versions of Windows were bad, and Windows Phone was bad. But I don't see how the existence of Windows CE dooms all future ARM versions of Windows to be terrible.

 

You just can't see beyond the failures of other companies. The fact that some companies have tried, and failed, to make power-efficient laptops is not really a reason to give up on the idea. There are lessons to learn from previous failed attempts, but your takeaway is basically this:

 


23 minutes ago, maplepants said:

I think this is the core of the disagreement. You're committed to the idea that modern Windows on ARM must exist for the same reasons as, and be in every important way the same as, Windows CE.

 

I just don't think that's the case. Sure previous ARM versions of Windows were bad, and Windows phone was bad. But I don't see how the existence of Windows CE dooms all future ARM versions of Windows to be terrible. 

 

You just can't see beyond the failures of other companies. The fact that some companies have tried, and failed, to make power efficient laptops is not really a reason to give up on the idea. There are lessons to learn from previous failed attempts, but your take away is basically this: 

 

No, you're grossly misunderstanding what I'm saying at every single turn, and I genuinely hope it ain't an elaborate trolling attempt.

 

What I'm saying is that Windows on ARM is nothing new; it dates back to Windows CE, and at some point support for it was merged into the mainstream, NT-kernel-based Windows. Windows CE was a conglomerate of multiple things, which then had multiple offshoots (e.g., CE for Embedded or whatever it was called, which was also used for some early smartphones), several of which were then sort of merged into what became Windows Phone, which was somewhat aligned with Windows 7/8 and eventually merged into 8/10. Windows for ARM literally grew out of the Windows CE heritage; it's a follow-up product with direct lineage! The mess is in the naming and the various kernel variants that covered the ARM platform over the years, and then we haven't even gotten into the binary executable compatibility issues many of those platforms had (which was part of their downfall in the phone and tablet market), which is also part of the reason why they extended WOW64 the way they did.

 

You're so focussed on energy-efficient laptops that you fail to grasp the market dynamics and the economics of what you're proposing. That degree of energy efficiency requires extremely tight integration between hardware and software, and unless it's made by the same manufacturer you're just not going to hit the numbers you're hoping for. This is also why Microsoft Surface laptops actually do hit pretty good battery life in some circumstances: Microsoft cherry-picked the hardware, and the Windows APM features know exactly what to do for those laptops. But a big part of what kills their battery life once you start using non-Microsoft applications is that they can't control what third-party software does. Apple, meanwhile, pretty much told third-party software developers to go screw themselves and comply with whatever changes they came up with, or risk having their software not work. If Microsoft took that approach with Windows, they'd very quickly lose the market dominance they've established.

 

No amount of repeating that energy efficient laptops are nice and that you like ARM will change the following facts:

  • Windows has market share because it's backwards compatible and runs on pretty much any hardware in common use with very little fuss. And the few times Microsoft broke backwards compatibility slightly, they did their utmost to make it as painless as possible. See Win 98 SE/Win ME -> XP, and XP/2003 -> XP 64-bit/Vista 64-bit initially; the compatibility modes and WOW received a lot of TLC to make everything work well. And to this very day Microsoft is still putting time into improving those things. Breaking that backwards compatibility is a high-risk endeavour for them.
  • x86 and x64 have market share because they're backwards compatible, in a somewhat sane manner, all the way back to the late 70s, while the competitors broke compatibility every few years. Everyone who tried various workarounds, or tried to simplify by removing features, died off or quit the market (e.g., TI) before they approached the sunk-cost-fallacy point; even the ones who were doing mostly the right things died off (see VIA, for example). The aforementioned AMD Geode is a good example of what happens if you prune a product too much: by doing so they broke a lot of compatibility in peculiar ways, and that's one of the (many) reasons it died.
  • The hardware ecosystem relies on a lot of legacy interfaces that have become quasi-standards.
  • Most ARM SoCs are a high-volume, low-profit market with cut-throat competition; x86/x64 CPUs are a high-volume market with a duopoly.
  • The development and production costs for high-performance CPUs are so prohibitive that no one in their right mind is going to make a highly specific one without a guarantee that they'll be able to sell enough units to offset the development cost.

Apple is an outlier because they sell a lot of units of a single SKU; pretty much no one else does that. The closest thing you'll find in Intel's catalogue is the NUC.


9 hours ago, maplepants said:

And that's the area they should be going after. I don't really think that the instruction set has much impact on battery life, as much as an architecture move gives you cover for dropping lots of old cruft. But for AMD and Intel, they should just be at least looking into laptop CPUs without the cruft like AMD did for the Playstation. 

I agree. And while we're at it, mobile devices, most of which already have soldered LPDDR RAM, should just go for a different memory controller with more channels in order to give more bandwidth to the CPU/iGPU. Having a 256~512-bit bus shouldn't be an issue at all in a laptop and would already provide a 2~4x increase in bandwidth.
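For anyone wondering where the 2~4x figure comes from, the peak-bandwidth arithmetic is simple; the sketch below assumes LPDDR5-6400 purely to have a concrete transfer rate (real parts and speeds vary):

# Peak memory bandwidth = (bus width in bytes) x (transfers per second).
# LPDDR5-6400 is assumed here only as a concrete example; other speeds scale linearly.

def peak_bandwidth_gb_s(bus_width_bits: int, transfer_rate_mt_s: int) -> float:
    """Theoretical peak bandwidth in GB/s."""
    return (bus_width_bits / 8) * transfer_rate_mt_s / 1000

for width_bits in (128, 256, 512):
    bw = peak_bandwidth_gb_s(width_bits, 6400)
    print(f"{width_bits:>3}-bit bus: {bw:6.1f} GB/s ({width_bits // 128}x a 128-bit baseline)")

So doubling or quadrupling the bus width maps directly onto the 2~4x claim, as long as the memory speed stays the same.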

 

However, it seems that neither manufacturer really cares much about this space. AMD has some really competitive chips now but can't seem to land a proper partnership that would give them an actual market impact.

9 hours ago, maplepants said:

Because even if AMD and Intel decide the data centre and high end workstation / gaming PC are the most profitable slices of the market; Microsoft doesn't want to cede the entire laptop market to Apple. This is why Windows on ARM exists, so that they can explore options beyond AMD and Intel.

Well, then I guess it's up to MS to make a worthwhile chip, or at least back the R&D for one, but there are no other really competitive chips apart from the M1 in that market. The second-best you can get, in both power consumption and actually being useful for daily usage, is an x86 chip.

6 hours ago, maplepants said:

You just can't see beyond the failures of other companies. The fact that some companies have tried, and failed, to make power efficient laptops is not really a reason to give up on the idea. There are lessons to learn from previous failed attempts, but your take away is basically this: 

A company could make a really good laptop-focused SoC, but then you have the problem that most users will want to run Windows, and Windows on ARM sucks hard. Linux wouldn't really be usable for most people.

ChromeOS is an actual working option with great support, and devices made for it have great battery life; however, most users see it as a toy and still want to use Windows anyway.

 

Apple could both make the proper adjustments to their OS and release great hardware to go with it. The only scenario where this could happen in the Windows world would be if MS themselves made a great laptop and got Windows to make proper use of the hardware found in said laptop, be it x86 or ARM.


14 hours ago, igormp said:

And while we're at it, mobile devices, most of which already have soldered LPDDR RAM, should just go for a different memory controller that has more channels in order to give more bandwidth to the CPU/iGPU. Having a 256~512-bit bus shouldn't be an issue at all in a laptop and would already provide a 2~4x increase in bandwidth.

Very wide memory busses make for large and expensive dies because you need the pads to get those signals out. So you kind of run into a limit there.
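A very rough pad-count sketch of what I mean; the per-channel control pad count and the power/ground ratio below are assumptions for illustration, not a real LPDDR pinout:

# Rough I/O pad count for a wide memory interface. The per-channel numbers
# and the power/ground ratio are assumed values, not from any actual spec.

DATA_BITS_PER_CHANNEL = 16   # one 16-bit sub-channel
CTRL_PADS_PER_CHANNEL = 12   # assumed command/address/clock/strobe pads
PWR_GND_PER_SIGNAL = 1.0     # assume roughly one power/ground pad per signal pad

def memory_io_pads(bus_width_bits: int) -> int:
    channels = bus_width_bits // DATA_BITS_PER_CHANNEL
    signal_pads = channels * (DATA_BITS_PER_CHANNEL + CTRL_PADS_PER_CHANNEL)
    return int(signal_pads * (1 + PWR_GND_PER_SIGNAL))

for width_bits in (128, 256, 512):
    print(f"{width_bits:>3}-bit bus: roughly {memory_io_pads(width_bits)} pads for the memory interface alone")

Whatever the exact numbers, the pad count scales linearly with bus width, and those pads set a floor on die size (or bump count), which is the cost I'm pointing at.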


11 hours ago, ImorallySourcedElectrons said:

Very wide memory busses make for large and expensive dies because you need the pads to get those signals out. So you kind of run into a limit there.

I'd understand if we were talking about a 5120-bit bus, but 256-512 is far from "very wide".

