NVIDIA Made a CPU.. I’m Holding It. - Computex 2023

James

I'm at the Gigabyte booth at Computex 2023 where they're showing off bonkers new hardware from Nvidia!

 

Immersion tank 1 A1P0-EB0 (rev. 100): https://www.gigabyte.com/Enterprise/Accessory/A1P0-EB0-rev-200#Specifications
Immersion tank 2 A1O3-CC0 (rev. 100): https://www.gigabyte.com/Enterprise/Accessory/A1O3-CC0-rev-100
Big AI server (H100) - G593-SD0 (rev. AAX1): https://www.gigabyte.com/Enterprise/GPU-Server/G593-SD0-rev-AAX1

 

 


What next? AMD have good launch drivers? The 4060 is actually gonna be competitive? Or is the 5090 gonna be the size of an air conditioner? (one of these is true)

Message me on discord (bread8669) for more help 

 

Current parts list

CPU: R5 5600 CPU Cooler: Stock

Mobo: Asrock B550M-ITX/ac

RAM: Vengeance LPX 2x8GB 3200MHz CL16

SSD: P5 Plus 500GB Secondary SSD: Kingston A400 960GB

GPU: MSI RTX 3060 Gaming X

Fans: 1x Noctua NF-P12 Redux, 1x Arctic P12, 1x Corsair LL120

PSU: NZXT SP-650M SFX-L PSU from H1

Monitor: Samsung WQHD 34 inch and 43 inch TV

Mouse: Logitech G203

Keyboard: Rii membrane keyboard

Damn this space can fit a 4090 (just kidding)


But can it run Crysis? Sad to hear MediaTek + Nvidia going all in on cars. Trendy green bills, not my kind of thing.

There's cool tech they might do, like what they do for stadiums and the like with the GPU setups, and hopefully not everything software locked.


So with Apple going ARM, high compute going ARM, mobile being ARM, etc., I think it's only a question of time before ARM starts displacing x86, especially since translation seems to be going really well in a lot of use cases, e.g. Intel Arc, Steam Proton and Apple's x86-to-ARM translation. I would guess there is more than one hungry company out there, including your Nvidias and Qualcomms, that is more than ready to break the x86 duopoly's hold on the PC and workstation market.


Boi he thicc. Those heatspreaders look massive and the die is also really large with no chiplets. I wonder how that affects the yield and how it compares to EPYC.


25 minutes ago, Anfros said:

I think it's only a question of time before ARM starts displacing x86

Well, both can still co-exist without problems.

25 minutes ago, Anfros said:

Steam Proton

Proton has nothing to do with x86 on ARM; it's only about running Windows stuff on Linux.

 

The thing with Nvidia building their own CPU is that POWER9 (the previous arch that had NVLink integrated) is not that efficient, and they don't have an x86 license to build their own CPUs with NVLink built in. The compute of those Grace chips is okay-ish; it's better to just think of them as a fast middleman between your storage servers and the actual compute units (GPUs), hence the extremely fast memory and interconnects. You just can't reach that bandwidth with the regular PCIe that's available on x86.

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga


16 minutes ago, Anfros said:

So with Apple going ARM, high compute going ARM, mobile being ARM, etc., I think it's only a question of time before ARM starts displacing x86, especially since translation seems to be going really well in a lot of use cases, e.g. Intel Arc, Steam Proton and Apple's x86-to-ARM translation. I would guess there is more than one hungry company out there, including your Nvidias and Qualcomms, that is more than ready to break the x86 duopoly's hold on the PC and workstation market.

There's a lot more nuance to this than most folks would lead you to believe, and there's not really much point in switching to ARM for general-purpose desktop processors, because what most of the marketing wank forgets to mention is just how much of that power consumption is due to the I/O and flexibility a general-purpose CPU has to provide. Additionally, Intel and AMD tune their x86 CPUs for performance, which comes at a heavy cost, since power consumption is exponentially related to the junction temperature for *reasons (tm)*. (Sorry, not going to get into descriptions of why this is, because this forum has too many idiots citing papers as evidence while they know jackshit about device architecture and physics.) But basically, if you were to build an ARM CPU with the same peripheral requirements as a modern x86 desktop CPU, and then clocked it at frequencies where it hit similar performance, it would use just as much power. Add to this that modern CPUs don't just take the instructions and literally execute them like the 80s-era CPUs that most books and articles are based on, and the picture gets a lot more nuanced.
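To put toy numbers on that frequency/temperature trade-off, here's a back-of-envelope sketch; every constant in it is an illustrative placeholder rather than measured silicon data:

```python
import math

def dynamic_power(c_eff_farads, v_core, f_hz):
    """Classic switching-power approximation: P_dyn ~ C_eff * V^2 * f."""
    return c_eff_farads * v_core ** 2 * f_hz

def leakage_power(p_leak_ref_watts, t_junction_c, t_ref_c=25.0, k=0.03):
    """Leakage rises roughly exponentially with junction temperature."""
    return p_leak_ref_watts * math.exp(k * (t_junction_c - t_ref_c))

# The same hypothetical core at a low 'sweet spot' and at a desktop-style operating point.
for v_core, f_hz, t_junction in [(0.80, 2.0e9, 55.0), (1.25, 4.5e9, 90.0)]:
    total = dynamic_power(2.5e-9, v_core, f_hz) + leakage_power(1.0, t_junction)
    print(f"V={v_core:.2f} V, f={f_hz/1e9:.1f} GHz, Tj={t_junction:.0f} C -> ~{total:.1f} W")
```

The point of the toy model is only that pushing voltage, frequency and temperature up together makes power climb much faster than performance, whatever the instruction set.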

 

Maybe also interesting to mention: networked RISC CPUs were framed as the replacement for monolithic CISC CPUs in the late 80s/early 90s, yet in 2023 we're still using monolithic CISC-instruction-set CPUs. The reasons are quite complicated, but whenever you try to network large quantities of computational elements you run into bandwidth or physical interconnect design issues. You either have a bus, which gets saturated, or you run point-to-point links, which require progressively more line drivers (expensive in both power and die area) and PCB/interposer/... layers, as the sketch below shows. Then there's latency to handle; this might seem simple, but the increased distance leads to waveguide effects, and you're dealing with flip-chipped devices where the pillar/bump/ball/spring/... cause variations that are unpredictable. As a result you're now crossing clock domains and you have to introduce additional buffering or even programmable delay line circuits, etc.
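A quick sketch of why the point-to-point option blows up: the number of dedicated links grows quadratically with the number of nodes, while a shared bus stays at one link (and saturates instead). The node counts are arbitrary examples:

```python
def point_to_point_links(n_nodes: int) -> int:
    """Dedicated links needed for a full mesh of n nodes: n*(n-1)/2."""
    return n_nodes * (n_nodes - 1) // 2

for n in (4, 8, 16, 64):
    print(f"{n:>3} nodes: shared bus = 1 link, full mesh = {point_to_point_links(n)} links")
```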

 

Basically, neither solution is ever going to work, and we should go back to hitting things with a hammer. But the reality is that both have their advantages for specific applications, and it's unlikely x86 will ever truly disappear.

 

Also for added fun, NVidia is one of the few companies that holds/held a license to the x86 instruction set, so they could make x86 CPUs and compete with both Intel and AMD. That'd be hilarious, but unlikely to happen I suspect.


28 minutes ago, Anfros said:

So with Apple going ARM, high compute going ARM, mobile being ARM, etc., I think it's only a question of time before ARM starts displacing x86, especially since translation seems to be going really well in a lot of use cases, e.g. Intel Arc, Steam Proton and Apple's x86-to-ARM translation. I would guess there is more than one hungry company out there, including your Nvidias and Qualcomms, that is more than ready to break the x86 duopoly's hold on the PC and workstation market.

Server Linux is huge: the web runs on Linux, our phones run on Linux, our network infrastructure runs Linux, even f*cking cow milkers run Linux... yet on the desktop it's just not happening.

 

Just because something works for one market doesn't mean it'll work for another.


3 minutes ago, manikyath said:

yet, on the desktop it's just not happening.

Desktop will move there; it will start out as kiosk and other thin client deployments in companies. Remember the consumer DIY market is a rounding error compared to enterprise supply. Many companies are moving to thin client deployments where the user's `work` runs in a VM and all they have at their desk is a dumb thin client that remote desktops into the Citrix etc. system. MS is also pushing this hard with Windows 365 (Windows running in a VM on Azure with Remote Desktop, using the same low-latency tech as Xbox Game Pass streaming).

Once the market for x86 desktops is just DIY consumers, for most companies it will not be worth it, and consumers who want to stick with x86 will need to pay a lot more to cover all of the R&D themselves.


Just now, hishnash said:

Desktop will move there; it will start out as kiosk and other thin client deployments in companies. Remember the consumer DIY market is a rounding error compared to enterprise supply. Many companies are moving to thin client deployments where the user's `work` runs in a VM and all they have at their desk is a dumb thin client that remote desktops into the Citrix etc. system. MS is also pushing this hard with Windows 365 (Windows running in a VM on Azure with Remote Desktop, using the same low-latency tech as Xbox Game Pass streaming).

 

And all of those TCs in the business space are running Windows, because that way you can deploy your RDP client to them, along with the security settings, from the same place the rest of the infrastructure gets managed.


18 minutes ago, ImorallySourcedElectrons said:

Additionally, Intel and AMD tune their x86 CPUs for performance

*on desktop.

 

There are embedded x86 options that are really power efficient and better than ARM; just look at some x86 router boards: really capable, with a really reasonable TDP, and with no ARM alternative.

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga


19 minutes ago, ImorallySourcedElectrons said:

if you were to build an ARM CPU with the same peripheral requirements as a modern x86 desktop CPU, and then clocked it at frequencies where it hit similar performance, it would use just as much power.

When that IO is all under load, the power draw will mostly be the IO; IO that is not in use does not use power, just die area (unless you screwed up your arch a LOT). Most of the time your system (even a server) is not using all the IO, running at less than 1% capacity with some short random spikes. When the IO is not in heavy use, the perf benefits of ARM do shine through: the fixed instruction width and explicit register usage by compilers make both decode and branch prediction easier and lower power.


3 minutes ago, igormp said:

really capable and with a really reasonable TDP with no ARM alternative.

banana pi would like a word.


4 minutes ago, manikyath said:

And all of those TCs in the business space are running Windows, because that way you can deploy your RDP client to them, along with the security settings, from the same place the rest of the infrastructure gets managed.

Many are running Windows, but some these days are running very locked-down Linux forks from the likes of HP and Citrix.


1 minute ago, hishnash said:

Many are running Windows, but some these days are running very locked-down Linux forks from the likes of HP and Citrix.

I haven't seen HP-specific OS TCs for like a decade...


6 minutes ago, igormp said:

There are embedded x86 options that are really power efficient and better than ARM; just look at some x86 router boards: really capable, with a really reasonable TDP, and with no ARM alternative.

The use of x86 in router boards is ironically a cost factor: it is cheaper to get last-gen low-end x86 chips than it is to get a similarly performing (thus high-end) ARM chip. The high-end ARM chips are either not sold to that market (Apple, NV etc.) or are very, very expensive semi-custom affairs that only make sense to deploy if you are putting in an order of 1mil units+.


In general, as Linus hinted at, large parts of the data centre are moving to ARM platforms.

Even if the VMs your company rents are x86, all the auxiliary services within the data centre that you depend upon are moving to ARM to save the data centre $$$$.

Be that managed databases, cloud storage, CDNs, video transcode services, API gateways, or any number of other services, they are all moving over to ARM, and this applies to all the major cloud providers (even vendors like IBM, who have their own arch and so are not doing this for licensing reasons).


2 minutes ago, igormp said:

*on desktop.

 

There are embedded x86 options that are really power efficient and better than ARM; just look at some x86 router boards: really capable, with a really reasonable TDP, and with no ARM alternative.

Yeah, and you can even get an x86 desktop CPU to behave quite well if you run it bare metal and clock it at the sweet spot and disable some of the features. When I was trying to write my own OS back in the day I even got a Pentium 4 to run at something ridiculously low. 

 

1 minute ago, hishnash said:

When that IO is all under load, the power draw will mostly be the IO; IO that is not in use does not use power, just die area (unless you screwed up your arch a LOT). Most of the time your system (even a server) is not using all the IO, running at less than 1% capacity with some short random spikes. When the IO is not in heavy use, the perf benefits of ARM do shine through: the fixed instruction width and explicit register usage by compilers make both decode and branch prediction easier and lower power.

I think you heavily underestimate the power cost of running something like DDR5 or a wide PCIe bus; it's truly a case of death by a thousand cuts, where each one of those drivers takes a small bite of power. Most of these ARM CPUs rarely have to deal with such things, and they're not that much more efficient. Please read up on how the instruction set is handled in modern CPUs; I think you're basing your conclusions on 90s-era textbooks. Everything you're mentioning is pretty much moot at this point in time: the actual hardware implementation of the CPU parts that run the computations is quite similar between CISC and RISC these days. What might differ between CISC and RISC is that you generally use statistical analysis of the likely tasks you'll be running to optimize the available resources for said tasks. And RISC instruction sets tend to have a massive disadvantage once you consider branching more than one level deep, but that's a whole other can of worms. In short, modern instruction decoding is really quite different from what most older books describe, and neither has any real particular advantage there.

 

But to get back to the original point, if you want to see the actual power cost for a PCIe or DDR5 interface in modern technologies, check FPGA documentation, but realise it's death by a thousand cuts.
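To make the "thousand cuts" idea concrete, here's a minimal sketch that just sums lots of small per-lane/per-pin driver costs. The counts and per-unit wattages are assumed round numbers for illustration only, not figures from any datasheet:

```python
# Per-lane / per-pin figures below are assumed placeholders, not datasheet values.
io_blocks = {
    "PCIe 5.0 lanes":     {"count": 20,  "watts_each": 0.25},
    "DDR5 data/cmd pins": {"count": 300, "watts_each": 0.01},
    "USB/SATA/misc PHYs": {"count": 10,  "watts_each": 0.20},
}

total = 0.0
for name, block in io_blocks.items():
    power = block["count"] * block["watts_each"]
    total += power
    print(f"{name:<20} ~{power:5.1f} W")
print(f"{'Total I/O estimate':<20} ~{total:5.1f} W")
```

No single driver looks expensive, but a desktop-class I/O complement adds up to a meaningful chunk of the package power.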

 

2 minutes ago, hishnash said:

The use of x86 in router boards is ironically a cost factor: it is cheaper to get last-gen low-end x86 chips than it is to get a similarly performing (thus high-end) ARM chip. The high-end ARM chips are either not sold to that market (Apple, NV etc.) or are very, very expensive semi-custom affairs that only make sense to deploy if you are putting in an order of 1mil units+.

While I'm somewhat tied by NDAs on this one (preventing me from citing actual numbers), I do believe I am allowed to tell you that you're horribly, horribly wrong here. There are plenty of generic ARM MCUs going around that are more than enough to build cheap routers, and supply ain't much of an issue either beyond the standard market issues. You most definitely do not need a high-end chip to build a router; you just go for whatever has the right peripherals and an existing design and code base you can reuse. I would imagine that they're going for x86 chips since the datacentre segment often runs a full-blown x86 CPU, meaning they can reuse parts of those designs and code bases with minimal effort.

 

6 minutes ago, hishnash said:

In general, as Linus hinted at, large parts of the data centre are moving to ARM platforms.

I know of moves going both ways within large organisations, no specific preferred direction.

6 minutes ago, hishnash said:

Even if the VMs your company rents are x86, all the auxiliary services within the data centre that you depend upon are moving to ARM to save the data centre $$$$.

Nope, they're moving to whatever is most cost effective, and that's not necessarily ARM.

8 minutes ago, hishnash said:

Be that managed databases, cloud storage, CDNs, video transcode services, API gateways, or any number of other services, they are all moving over to ARM, and this applies to all the major cloud providers (even vendors like IBM, who have their own arch and so are not doing this for licensing reasons).

For databases it heavily depends on the actual workload, for storage it depends on the I/O, CDNs sure, transcoding is moving towards dedicated hardware that is neither ARM nor x86, which APIs?, etc. You're grossly simplifying the actual choice of hardware and presenting ARM as some sort of one-size-fits-all solution, but it very much has a limited set of applications for which it works well.


Yeah, this is probably why Nvidia is pushing the smallest chips they can get away with for consumer cards, so as to sell more of these monsters for many thousands apiece.
 

As far as migration to ARM goes, realistically there's no sign as of now that ARM will move into the enthusiast or PC gaming segments, and given the use of x86 in the current consoles and recent advancements, it's still going strong.
 

That said, without highly accurate and fast x86 emulation available (games are performance sensitive, which is hard to deliver under emulation), a forcible migration could pretty much Thanos-snap PC gaming, as much of the back catalogue would be rendered inaccessible (without developer intervention, which is far from a given).

 

Most commonly used production software already has ARM-compatible binaries (courtesy of Apple Silicon), and other less performance-sensitive software (such as that used for highly expensive machines) can be emulated, potential compatibility issues notwithstanding. Really, PC gaming would probably be the segment worst hit, as its vast library is its greatest strength.

My eyes see the past…

My camera lens sees the present…


1 hour ago, manikyath said:

banana pi would like a word.

A banana pi gets heavily beaten by any N3050/J4125 or even some Ryzen embedded options at a similar power draw (~5W).

1 hour ago, hishnash said:

The use of x86 in router boards is ironically a cost factor: it is cheaper to get last-gen low-end x86 chips than it is to get a similarly performing (thus high-end) ARM chip. The high-end ARM chips are either not sold to that market (Apple, NV etc.) or are very, very expensive semi-custom affairs that only make sense to deploy if you are putting in an order of 1mil units+.

Many are not even last gen, just newer Alder Lake low-end chips; those are pretty good (not sure about low-power Raptor Lake cores being a thing yet).

 

You can't find any high-end ARM core that won't cost an arm and a leg. You either get crappy A72~A76 cores, or pay a heavy price for a Cortex X1/X2 that still gets outmatched by a similarly priced (or even cheaper) x86 CPU.

1 hour ago, hishnash said:

In general, as Linus hinted at, large parts of the data centre are moving to ARM platforms.

Even if the VMs your company rents are x86, all the auxiliary services within the data centre that you depend upon are moving to ARM to save the data centre $$$$.

Be that managed databases, cloud storage, CDNs, video transcode services, API gateways, or any number of other services, they are all moving over to ARM, and this applies to all the major cloud providers (even vendors like IBM, who have their own arch and so are not doing this for licensing reasons).

They are just going to co-exist. I don't get why people are so fixated on one thing replacing another. Google/Azure/Oracle all have Ampere offerings alongside their x86 ones, as does AWS with its Graviton chips. People like options; the more the merrier.

52 minutes ago, ImorallySourcedElectrons said:

Plenty of generic ARM MCUs

MCUs are not the topic of this discussion, but rather CPUs based on the A-cores.

53 minutes ago, ImorallySourcedElectrons said:

You most definitely do not need a high-end chip to build a router

Entirely dependent on what you're doing. Many things are moving to be software-defined, hence why we now have somewhat generic accelerators such as BlueField. Heck, even my crappy RK3568 (4x A55) router can barely handle my network at full load; anyone with more demanding needs won't be satisfied by the current low-end ARM offerings, nor will they likely afford the expensive ones, so an x86 solution ends up being the perfect middle ground, ironically.

 

9 minutes ago, Zodiark1593 said:

Most commonly used production software already has ARM-compatible binaries (courtesy of Apple Silicon)

This had been the case for years before the M1, due to servers. Clang and GCC already had amazing ARM support before the M1 became a thing, so Apple just took advantage of that.

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga


2 hours ago, filpo said:

one of these is true

I'm not certain what will be true but...

2 hours ago, filpo said:

AMD have good launch drivers?

I damn sure know it ain't this.

NOTE: I no longer frequent this site. If you really need help, PM/DM me and my e-mail will alert me.


1 hour ago, ImorallySourcedElectrons said:

I think you heavily underestimate the power cost of running something like DDR5 or a wide PCIe bus, it's truly a case of death by a thousand cuts.

Server ARM absolutely does; the domain of high-bandwidth DDR deployments does not belong to x86. Both POWER and ARM tend to ship the latest generation of DDR in servers 2 to 3 years before there are x86 options on the market.

 

 

1 hour ago, ImorallySourcedElectrons said:

please read up on how the instruction set is handled in modern CPUs, I think you're basing your conclusions on 90s era textbooks. Everything you're mentioning is pretty much moot at this point in time, the actual hardware implementation of the CPU parts that actually run the computations is quite similar between CISC and RISC these days.

I am well aware that post-decode it is very similar, but the decode cost of x86 is MUCH higher than a fixed width (you need to decode your instructions at runtime), and the widest decode stage on x86 currently is 6-wide, and it is very, very complex (and it only runs in 6-wide mode in optimal situations; most of the time it is more like 2-wide). Building an 8-wide ARM decode stage is trivial in comparison. If you want to build a very wide core (one that does lots at once) you need to feed it with instructions; an instruction cache etc. can help, but in the end (unless you have a very, very tight loop that fits entirely in cache, which is not at all representative of general-purpose real-world code) modern x86 CPU cores hit a limit: if you make them wider, in most cases they end up instruction-starved (waiting for the decode), so they get to a point where they must increase clock speed to match the performance (and increasing clock speed costs POWER).
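A toy illustration of that decode-width argument, with invented encodings rather than real ARM or x86 ones: fixed width means every instruction boundary is known up front, while variable length means each boundary is only known after the previous instruction has been inspected:

```python
def decode_fixed_width(blob: bytes, width: int = 4):
    # Every instruction is `width` bytes, so all start offsets are known up
    # front and could be handed to parallel decode lanes immediately.
    return [blob[i:i + width] for i in range(0, len(blob), width)]

def decode_variable_length(blob: bytes):
    # Here the low nibble of the first byte encodes the length (1-15), so each
    # boundary is only known after the previous instruction has been examined.
    decoded, i = [], 0
    while i < len(blob):
        length = blob[i] & 0x0F or 1
        decoded.append(blob[i:i + length])
        i += length
    return decoded

print(decode_fixed_width(bytes(range(16))))                                  # four 4-byte instructions
print(decode_variable_length(bytes([0x02, 0xAA, 0x03, 0xBB, 0xCC, 0x01])))  # lengths 2, 3, 1
```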

The other aspect where RISC-like instruction sets (like ARM) benefit the CPU design is in offloading some complexity to the compiler. Since the compiler has a LOT more registers it can directly address, a typical everyday application compiled to ARM will have a lot fewer store and load operations than the same application compiled to x86. Sure, the CPU-internal optimisations on both platforms attempt to be smart about stores and loads and skip them where they are not needed, but that smartness is not perfect and adds complexity (power draw). In the end the same code base compiled for x86 on a modern CPU will still end up with more writes to and reads from L1 than when running on ARM, and those reads and writes cost power.
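A minimal sketch of that register-pressure point, assuming a hypothetical hot loop with 24 simultaneously live values; the only real figures here are the architectural GPR counts (16 for x86-64, 31 for AArch64):

```python
def spills_needed(live_values: int, architectural_regs: int) -> int:
    """Values that cannot stay in registers and must round-trip through L1."""
    return max(0, live_values - architectural_regs)

live = 24  # simultaneously live temporaries in some hypothetical hot loop
for isa, regs in [("x86-64 (16 GPRs)", 16), ("AArch64 (31 GPRs)", 31)]:
    print(f"{isa}: ~{spills_needed(live, regs)} values spilled to memory")
```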
 

1 hour ago, ImorallySourcedElectrons said:

But to get back to the original point, if you want to see the actual power cost for a PCIe or DDR5 interface in modern technologies, check FPGA documentation, but realise it's death by a thousand cuts.

PCIe only draws power when under load, unless you screwed up massively! Same with DDR as well.

 

1 hour ago, ImorallySourcedElectrons said:

You most definitely do not need a high-end chip to build a router; you just go for whatever has the right peripherals and an existing design and code base you can reuse. I would imagine that they're going for x86 chips since the datacentre segment often runs a full-blown x86 CPU, meaning they can reuse parts of those designs and code bases with minimal effort.

Yes, most routers are ARM or MIPS etc.; those that are x86 are normally much higher-level routers that run a load of extra stuff like traffic analysis etc.
 

1 hour ago, ImorallySourcedElectrons said:

Nope, they're moving to whatever is most cost effective, and that's not necessarily ARM.

Yes, and for managed services all the current major cloud providers are moving to running these on ARM.
 

1 hour ago, ImorallySourcedElectrons said:

For databases it heavily depends on the actual workload, for storage it depends on the I/O, CDNs sure, transcoding is moving towards dedicated hardware that is neither ARM nor x86, which APIs?, etc. You're grossly simplifying the actual choice of hardware and presenting ARM as some sort of one-size-fits-all solution, but it very much has a limited set of applications for which it works well.

Yes, the services depend on dedicated hardware, be that high-bandwidth IO, GPUs or video encoding paths, but the CPUs that data centres are opting to use to manage this dedicated HW tend to be ARM (at least on AWS, Google and even MS Azure; I have not used IBM's offerings recently).


24 minutes ago, igormp said:

This had been the case for years before the M1, due to servers. Clang and GCC already had amazing ARM support before the M1 became a thing, so Apple just took advantage of that.

Yes, but Apple has been using ARM for a lot longer than the M1; ever since the Newton, Apple has been pushing ARM compiler support, and as one of the major LLVM contributors a good fraction of the ARM optimisations in there (also x86 optimisations) are from Apple.
 

25 minutes ago, igormp said:

They are just going to co-exist. I don't get why people are so fixated on one thing replacing another. Google/Azure/Oracle all have Ampere offerings alongside their x86 ones, as does AWS with its Graviton chips. People like options; the more the merrier.

Yes, as I said, customer-facing options (where you can rent the VM) are a mix and will stay as such; where data centres are moving to explicitly using ARM as the primary option is in the managed services, where you're not running your own code, just calling their services rather than running your own code.

 

 

38 minutes ago, Zodiark1593 said:

That said, without highly accurate and fast x86 emulation available (games are performance sensitive, which is hard to deliver under emulation), a forcible migration could pretty much Thanos-snap PC gaming, as much of the back catalogue would be rendered inaccessible (without developer intervention, which is far from a given).

 

For consoles this is not so much of an issue, as they only update them every 6+ years; a high-end ARM SoC will be fast enough, combined with an offline re-transcompiler for games (like Rosetta 2, which is not emulating x86 but re-compiles the x86 code to sub-optimal ARM and runs that).
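A conceptual sketch of that ahead-of-time translation idea, using two invented toy instruction lists rather than real x86 or ARM encodings; the whole program is translated once up front, so nothing is interpreted while it runs:

```python
# Two invented toy instruction sets; real encodings are far more involved.
X86_TO_ARM = {"mov": "mov", "add": "add", "imul": "mul", "ret": "ret"}

def translate(x86_program):
    # One-time offline pass: no per-instruction interpretation cost at runtime.
    return [(X86_TO_ARM[op], operands) for op, operands in x86_program]

x86_program = [("mov", ("r0", 6)), ("imul", ("r0", 7)), ("ret", ())]
arm_program = translate(x86_program)
print(arm_program)  # [('mov', ('r0', 6)), ('mul', ('r0', 7)), ('ret', ())]
```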


2 minutes ago, hishnash said:

Server ARM absolutely does; the domain of high-bandwidth DDR deployments does not belong to x86. Both POWER and ARM tend to ship the latest generation of DDR in servers 2 to 3 years before there are x86 options on the market.

>power 

lol, even POWER10 is still stuck with DDR4, and POWER11 isn't a thing yet. We are only seeing DDR5 servers this year, and Intel and AMD were the first ones to make use of it. The only place you'll see ARM using DDR5 first is the mobile space, with LPDDR5 devices.

6 minutes ago, hishnash said:

Yes, and for managed services all the current major cloud providers are moving to running these on ARM.

[citation needed]

13 minutes ago, hishnash said:

Yes, the services depend on dedicated hardware, be that high-bandwidth IO, GPUs or video encoding paths, but the CPUs that data centres are opting to use to manage this dedicated HW tend to be ARM (at least on AWS, Google and even MS Azure; I have not used IBM's offerings recently).

5 minutes ago, hishnash said:

Yes, as I said, customer-facing options (where you can rent the VM) are a mix and will stay as such; where data centres are moving to explicitly using ARM as the primary option is in the managed services, where you're not running your own code, just calling their services rather than running your own code.

You just repeat that without giving any source. Most of their serverless stuff still runs on x86; just spin up a Lambda or any other compute offering and it'll be x86. Even stuff like their non-compute services still runs on x86, at least for GCP and AWS (the only ones I have proper knowledge of; can't comment on other providers).
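For what it's worth, on AWS the ARM option is an explicit opt-in when creating a Lambda (the default architecture is x86_64). A minimal boto3 sketch, where the function name, role ARN and zip file are placeholders:

```python
import boto3

lambda_client = boto3.client("lambda")

with open("handler.zip", "rb") as f:  # placeholder deployment package
    zipped_code = f.read()

lambda_client.create_function(
    FunctionName="arch-demo",                                # placeholder name
    Runtime="python3.11",
    Role="arn:aws:iam::123456789012:role/lambda-exec-role",  # placeholder role ARN
    Handler="handler.main",
    Code={"ZipFile": zipped_code},
    Architectures=["arm64"],  # explicit opt-in to Graviton; omit for the x86_64 default
)
```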

 

As an example, Google's TPU and VCU stuff still runs on top of Intel x86 machines. For AWS, even though they are indeed using their Graviton chips a lot internally, it still pales in comparison to their x86 usage. Graviton's availability is far behind x86, and I say this as someone who worked at a company that exhausted an entire zone's worth of instances.

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga


I really don't understand why some companies like Apple and Nvidia make memory on CPUs so difficult... "Oh, I need to put faster memory on, but then you can't change it over time." Why isn't this faster memory just a layer before the slow RAM?

I don't even understand why nobody did this on graphics cards... like putting some DDR5 slots on a card and a layer of GDDR6X in between to cache the DDR5 before it reaches the GPU.

 

Memory is easy, companies are complicated. 🙄

Made In Brazil 🇧🇷

