Nvidia announces better ARM support and new ARM CPU - x86 not the main player anymore

13 hours ago, RedRound2 said:

It's funny how like 2-3 years ago, people were swearing that ARM PCs were never going to be a thing. CPUs have just become really interesting, and it's gotten even more spiced up after the M1 and ARM's big entrance. Pretty excited to see what's coming in the next 5 years in the CPU space

ARM will stay relegated to phones, the MacBook, and the Surface Pro X. You won't find an ARM-based PC short of some AIO unit.

 

Due to licensing, I'm thinking RISC-V will explode in adoption in China and India. That momentum will carry over, with Intel and AMD possibly creating RISC-V chips for the PC market.

 

PCs will jump from x86 to RISC-V over the next 10 years. That's my prediction.

1 hour ago, StDragon said:

ARM will stay relegated to phones, the MacBook, and the Surface Pro X. You won't find an ARM-based PC short of some AIO unit.

 

Due to licensing, I'm thinking RISC-V will explode in adoption in China and India. That momentum will carry over, with Intel and AMD possibly creating RISC-V chips for the PC market.

 

PCs will jump from x86 to RISC-V over the next 10 years. That's my prediction.

I doubt it. I predict that nothing substantial will change in the next 10 years (still 70+% x86), plus a couple of cool ARM CPUs for Linux/Mac users, simply because of backwards compatibility.

 

RISC-V is awesome because coding assembly for it is super easy compared to how horrible other ISAs are. And as you said, free licensing will probably make it explode in the deeply embedded market (IIRC Western Digital is already using it for their microcontrollers).

 

However, its simplicity will be the end of it. ARM has many vector/matrix extensions that speed up programs a lot (like the new SVE2). ARM also has memory tagging for protection against use-after-free and overflows. Windows will be able to use all of it simply by making an ARMv9 variant. RISC-V binaries for desktop use, on the other hand, will have to target the lowest common denominator and use very few of its non-standard extensions.

 

That's the saddest part of modern-day x86 operating systems. Even though we have all of these awesome modern instructions in our CPUs, we're not using them (on x86, everything gets compiled to match the instruction set of the first x86_64 processor). At least some Linux distros are waking up and raising their CPU baseline requirements, but that's not something Windows can do.
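The "x86-64 levels" scheme those distros are adopting can be sketched in a few lines. This is a simplified model (the flag sets are abbreviated; the authoritative lists are in the x86-64 psABI), but it shows how a package manager could decide which build of a package a CPU can run:

```python
# Simplified sketch of the x86-64 microarchitecture levels (v1..v4) some
# Linux distros now build packages against. The required-flag sets below
# are abbreviated; the full lists live in the x86-64 psABI document.
LEVELS = [
    ("x86-64-v2", {"cx16", "popcnt", "sse3", "ssse3", "sse4_1", "sse4_2"}),
    ("x86-64-v3", {"avx", "avx2", "bmi1", "bmi2", "f16c", "fma", "movbe"}),
    ("x86-64-v4", {"avx512f", "avx512bw", "avx512cd", "avx512dq", "avx512vl"}),
]

def microarch_level(flags):
    """Return the highest level whose required flags are all present."""
    level = "x86-64-v1"  # baseline: every x86_64 CPU qualifies
    for name, required in LEVELS:
        if not required <= flags:
            break  # levels are cumulative, so stop at the first miss
        level = name
    return level
```

For example, a Sandy Bridge-era CPU (AVX but no AVX2) lands on x86-64-v2, so it could install v2 packages but not v3 ones.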

1 hour ago, StDragon said:

Due to licensing, I'm thinking RISC-V will explode in adoption in China and India. That momentum will carry over, with Intel and AMD possibly creating RISC-V chips for the PC market.

I really doubt it. I can see RISC-V being a thing in embedded, as mentioned above, and that will push ARM further into the general-purpose CPU space.

AMD already had projects with ARM cores but shelved them due to the massive success of Ryzen, so they may come back to it someday.

 

3 minutes ago, kvuj said:

At least some Linux distros are waking up and raising their CPU baseline requirements, but that's not something Windows can do.

I can't wait to get x86_64v3 packages on my system :old-grin:

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga

1 hour ago, StDragon said:

ARM will stay relegated to phones, the MacBook, and the Surface Pro X. You won't find an ARM-based PC short of some AIO unit.

 

Due to licensing, I'm thinking RISC-V will explode in adoption in China and India. That momentum will carry over, with Intel and AMD possibly creating RISC-V chips for the PC market.

 

PCs will jump from x86 to RISC-V over the next 10 years. That's my prediction.

That makes some predictions about the future I'm not all that sure of. The weakest point, I think, is how far the China/India momentum would carry; it requires a reversal of momentum I haven't seen yet. I'm not saying you're necessarily wrong, but I'm less sure than you seem to be.

Life is like a bowl of chocolates: there are all these little crinkly paper cups everywhere.

7 hours ago, RedRound2 said:

but these people were talking about performance and how an ARM CPU could never match up to x86.

ARM CPUs as fast as or faster than Intel Xeons have existed for many years. Apple was just the first to do high-performance ARM for consumers, not the first to do high-performance ARM overall.

 

It's never been an issue of ARM not having, or not being capable of, the performance. Nobody wants a complete and utter software break in a proposed switch; not even Apple users want that, which is why Rosetta exists (and has existed before, for the same reason). So until the existing and ongoing software ecosystem is seen as viable on Windows, a switch simply won't happen.

15 hours ago, StDragon said:

ARM will stay relegated to phones, the MacBook, and the Surface Pro X. You won't find an ARM-based PC short of some AIO unit.

Wrong. Current high-performance ARMv8 chips are already encroaching on both the enterprise and consumer spaces. One of the reasons ARM will eventually replace x86 is power consumption. Not to mention, ARMv9 now supports SVE2 with vector lengths from 128 to 2048 bits, which is important for data scientists doing matrix calculations.


There is more that meets the eye
I see the soul that is inside

 

Making Windows Defender as good or even better than paid options

3 hours ago, kvuj said:

At least some Linux distros are waking up and raising their CPU baseline requirements, but that's not something Windows can do.

This is most certainly happening on Windows. I forget which software it was, but they announced recently that anything pre-dating Sandy Bridge (fairly sure it was that), meaning Core 2 etc., would no longer be supported and the software wouldn't run at all.

 

Also, a lot of software does make use of newer instruction sets on Windows, Linux, etc., as compilers put in code paths and detection at compile time. So if all your CPU supports is SSE2, then SSE2 will be used; if your CPU supports AVX2, then AVX2 will be used. Better software will not rely on the compiler for this and will do it itself. Either way, unless you explicitly disallow certain instruction sets at compile time, or your compiler doesn't support them, modern ones get utilized. And I say utilized in the sense of possibility, since not all code can make use of, say, AVX2, but I think you know that.
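The runtime-dispatch pattern described above can be sketched like this. The feature set is passed in by hand here; real code would query CPUID (e.g. GCC's `__builtin_cpu_supports`) instead, and the "fast" path would contain actual AVX2 intrinsics rather than the plain loop used as a stand-in:

```python
# Sketch of runtime dispatch: the binary ships several code paths and,
# once at startup, picks the best one the CPU supports. Both paths must
# produce the same result so the choice is invisible to callers.

def dot_baseline(a, b):
    # Fallback path: assumes nothing beyond the x86_64 baseline (SSE2).
    return sum(x * y for x, y in zip(a, b))

def dot_avx2(a, b):
    # Stand-in for a vectorized path; in C this would use AVX2 intrinsics.
    return sum(x * y for x, y in zip(a, b))

def select_dot(features):
    """Pick the fastest implementation the detected CPU can run."""
    if "avx2" in features:
        return dot_avx2
    return dot_baseline

# Selected once, then every call goes through the chosen function.
dot = select_dot({"sse2", "avx", "avx2"})
```

The point is that the dispatch cost is paid once, not per call, which is why both compilers (via function multi-versioning) and hand-rolled code use this shape.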

7 minutes ago, leadeater said:

ARM CPUs as fast as or faster than Intel Xeons have existed for many years. Apple was just the first to do high-performance ARM for consumers, not the first to do high-performance ARM overall.

 

It's never been an issue of ARM not having, or not being capable of, the performance. Nobody wants a complete and utter software break in a proposed switch; not even Apple users want that, which is why Rosetta exists (and has existed before, for the same reason). So until the existing and ongoing software ecosystem is seen as viable on Windows, a switch simply won't happen.

I don't know about Windows specifically, but in general, yes. x86 is currently living on software compatibility, and software compatibility is enough to keep even vastly outdated systems alive for a long while. The Apple IIe lived well beyond its sell-by date because of that, and x86 doesn't have many of the problems the Apple IIe did. Relying on that has its pitfalls though.

21 minutes ago, leadeater said:

This is most certainly happening on Windows. I forget which software it was, but they announced recently that anything pre-dating Sandy Bridge (fairly sure it was that), meaning Core 2 etc., would no longer be supported and the software wouldn't run at all.

 

Also, a lot of software does make use of newer instruction sets on Windows, Linux, etc., as compilers put in code paths and detection at compile time. So if all your CPU supports is SSE2, then SSE2 will be used; if your CPU supports AVX2, then AVX2 will be used. Better software will not rely on the compiler for this and will do it itself. Either way, unless you explicitly disallow certain instruction sets at compile time, or your compiler doesn't support them, modern ones get utilized. And I say utilized in the sense of possibility, since not all code can make use of, say, AVX2, but I think you know that.

That's not something that can be done for every piece of Windows software, since everything is compiled by individual developers.

 

I'm only aware of games, which need every last drop of performance, querying CPU capabilities with cpuid(); the majority of everyday software lets the compiler handle auto-vectorization.

 

Unless you're talking about the dynamic linker (like in glibc 2.33), but since that happens at runtime, I'm pretty sure it's not as fast and requires separate implementations for different CPUs. I also haven't heard Microsoft brag about a similar implementation, though it's possible.

 

EDIT: To be clear, I'm talking about C/C++. C# compiles to Microsoft bytecode and can use an optimizing JIT to make proper use of the CPU's instructions.

"...the second one, RyuJIT, is a JIT (just-in-time) compiler, which is dynamic and does on-the-fly optimization and compiles the IL into native code for the front-end of the CPU." (Wikipedia)
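For reference, the glibc-hwcaps mechanism mentioned above works by having the dynamic linker search per-level subdirectories before the generic path, so a distro can ship both a baseline and an optimized build of the same library. A rough sketch of the lookup (the directory layout is real; the function and `libfoo.so` are illustrative):

```python
# Rough model of how ld.so with glibc-hwcaps (glibc >= 2.33) resolves a
# shared library: per-level subdirectories are tried from the highest
# level the CPU supports down to the generic fallback.

HWCAPS_ORDER = ["x86-64-v4", "x86-64-v3", "x86-64-v2"]

def resolve_library(name, cpu_level, available):
    """Return the best build of `name` a CPU at `cpu_level` can use.

    `available` is the set of library paths that exist on disk.
    """
    if cpu_level in HWCAPS_ORDER:
        # Try every level at or below what the CPU supports.
        for level in HWCAPS_ORDER[HWCAPS_ORDER.index(cpu_level):]:
            candidate = f"/usr/lib/glibc-hwcaps/{level}/{name}"
            if candidate in available:
                return candidate
    return f"/usr/lib/{name}"  # generic build, always shipped
```

So a CPU reporting x86-64-v3 transparently loads the v3 build if the distro ships one, while older CPUs keep using the generic build from the same package.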

9 minutes ago, captain_to_fire said:

Wrong. Current high-performance ARMv8 chips are already encroaching on both the enterprise and consumer spaces. One of the reasons ARM will eventually replace x86 is power consumption. Not to mention, ARMv9 now supports SVE2 with vector lengths from 128 to 2048 bits, which is important for data scientists doing matrix calculations.

Lower power consumption matters more for portable than for non-portable stuff, consumer-wise. Also, the energy savings of RISC apply mostly to wait states; when both systems run flat out there are no energy savings. Wait states are pretty common though, and for portable stuff they can even be the vast majority of the time. As a result RISC has become very, very common in portable electronics. How far that will leverage is unknown. You could be right; it's not a sure thing yet though.

5 minutes ago, kvuj said:

That's not something that can be done for every piece of Windows software, since everything is compiled by individual developers.

 

I'm only aware of games, which need every last drop of performance, querying CPU capabilities with cpuid(); the majority of everyday software lets the compiler handle auto-vectorization.

 

Unless you're talking about the dynamic linker (like in glibc 2.33), but since that happens at runtime, I'm pretty sure it's not as fast and requires separate implementations for different CPUs. I also haven't heard Microsoft brag about a similar implementation, though it's possible.

One word: DOSBox.

25 minutes ago, captain_to_fire said:

One of the considerations as to why ARM will eventually replace x86 is power consumption

I'd be careful there. All the current ARM HPC CPUs use the same or a similar amount of power as Zen 3/Cascade Lake/Ice Lake and offer competing performance, not better performance. These are, after all, also TSMC 7nm products, so that's not surprising.

 

I think ARM has the biggest power and performance advantage at around 65W and below (maybe slightly higher 🤷‍♂️); past that it all becomes very much the same, with things like the silicon process being identical.

 

One of the more practical advantages current ARM products have when it comes to power is in workloads that don't require peak power. Since they don't have boost frequency, it's the power usage that fluctuates: an x86 CPU will boost clocks to fill its power target (until hitting maximum clocks), whereas on ARM the power draw simply drops.

 

[Graphs: Ampere Altra power consumption under varying workloads, from the AnandTech review]

 

Quote

Because frequency is essentially fixed under most workloads, what actually fluctuates between different types of workloads is the power consumption of the processor. The figure described as TDP by Ampere here is the maximum peak small-period average power consumption by the processor.

https://www.anandtech.com/show/16315/the-ampere-altra-review/2

 

If ARM ever implements frequency boosting, power usage will become even more similar to x86. Of course, even on x86 you can disable frequency boosting, and power will then fluctuate like it does on ARM. It's more of a design and architecture advantage than a pure ISA advantage.


@kvuj Not sure how I forgot who it was, but it's Google Chrome. Plus I was wrong: it's pre-Core 2.

 

Quote

In a policy document, the Chromium development team has announced that they are dropping support for all x86 CPUs which do not have a minimum of SSE3 (Supplemental Streaming SIMD Extensions 3) support, starting in Chrome 89.

https://mspoweruser.com/google-chrome-is-dropping-support-for-some-really-old-cpus/

https://www.techradar.com/news/google-chrome-will-no-longer-support-some-older-processors

5 minutes ago, leadeater said:

Pentium 4? I'm astounded they haven't done that long ago. Wikipedia says it was introduced in 2004, so less than 20 years ago, but still a long time.

5 hours ago, igormp said:

Having an ARM CPU on Linux with an Nvidia GPU is a non-issue for most ML/science tools; in fact, it's even better than Windows for that specific scenario.

Well, I can attest that all our science departments exclusively use Linux for their computation, with the exception of geology/geography data visualization. Mathematics is a bit more of a mix.

1 minute ago, gabrielcarvfer said:

Sidetracking here: Jensen seems to have aged quite a lot.

 

[Spoiler: photos of Jensen Huang at GTC 2020 and GTC 2021]

 

Time is like that. I am repeatedly horrified when I look in the mirror.

11 hours ago, igormp said:

I wonder if they're not targeting the regular "gamer" market, but the scientific/prosumer one.

Having an ARM CPU on Linux with an Nvidia GPU is a non-issue for most ML/science tools; in fact, it's even better than Windows for that specific scenario.

Well, that's a very specific scenario for which Linux is probably already being used anyway.

AMD Ryzen 7 5800X | ASUS Strix X570-E | G.Skill 32GB 3733MHz CL16 | PALIT RTX 3080 10GB GamingPro | Samsung 850 Pro 2TB | Seagate Barracuda 8TB | Sound Blaster AE-9 MUSES Edition | Altec Lansing MX5021 Nichicon/MUSES Edition

5 hours ago, gabrielcarvfer said:

Sidetracking here: Jensen seems to have aged quite a lot.

 

[Spoiler: photos of Jensen Huang at GTC 2020 and GTC 2021]

 

I felt he looked older, but wow, what a difference. Either the pandemic (I mean the social effect, not him getting COVID-19) or the ARM acquisition got to him hard somehow; most likely the acquisition though.

this is one of the greatest thing that has happened to me recently, and it happened on this forum, those involved have my eternal gratitude http://linustechtips.com/main/topic/198850-update-alex-got-his-moto-g2-lets-get-a-moto-g-for-alexgoeshigh-unofficial/ :')

i use to have the second best link in the world here, but it died ;_; its a 404 now but it will always be here

 

2 hours ago, RejZoR said:

Well, that's a very specific scenario for which Linux is probably already being used anyway.

Correct, and that's exactly what this whole thing is targeted at. It's a bit like saying an A100 GPU isn't very good for gaming and costs too much; both parts of that statement are true, but the A100 is not a gaming GPU.

 

Nvidia's announcement is for this and only this; it's got nothing to do with general server workloads or consumer desktops/laptops, and even where those are mentioned, scientific computing users are the intended audience.

 

Nvidia is much deeper into this market than most people outside of it are aware. Nvidia is the ODM/OEM for all the server vendors that use the 8-GPU and 16-GPU NVLink configurations. So, for example, if you look at an HPE Apollo 6500, the board inside it that carries the SXM GPUs is designed, made, and supplied by Nvidia, and everyone designs around that. Only PCIe systems are fully designed by the server vendors.

 

HPE Apollo 6500 GPU tray


 

Supermicro GPU tray


 

Note the actual board in the trays is exactly the same; that's Nvidia's.

 

So the long and short of it is that Nvidia already has the market power to push ARM and can require its adoption if they so wish; it's not a choice you currently have.

8 hours ago, Bombastinator said:

Also the energy savings of RISC apply mostly to wait states. When both systems run flat out there are no energy savings.

Where did you get this (incorrect) idea from?

2 hours ago, LAwLz said:

Where did you get this (incorrect) idea from?

This forum. It has been stated repeatedly and, as far as I know, never disagreed with before. I also got what I thought was independent confirmation from a programmer I know; I may have misconstrued his statement though.



Summary

Nvidia announced during the GPU Technology Conference that they are going to use their own ARM-based CPUs in their upcoming compute servers. Nvidia is licensing ARM's reference Neoverse core design, adding custom I/O and interconnects to GPUs, providing up to 10x the performance of their existing DGX platforms. The Swiss National Supercomputing Centre (CSCS) and Los Alamos National Laboratory said they'll be ordering some, which is an indication of the kind of customer this is targeted at. These systems are targeted to ship in 2023.

[Images: Nvidia Grace render, PCIe and NVLink system diagrams]

 

My thoughts

Nvidia seems to be drawing a lot of comparisons with x86-based systems, largely based on GPU and memory bandwidth. This makes sense, as the target applications of these systems are typically memory-bound.

 

They are probably doing this because on x86 systems the peripherals and I/O of the CPU are defined by AMD/Intel, whereas with an ARM core you can design your own interface around the cores to fit your specific needs (which only a large corporation with chip-design expertise can pull off, Nvidia being one of them).

 

This seems to be another example of the trend of large corporations using their own ARM-based designs in their systems.

 

Sources

[Anandtech article]

Me: Computer Engineer. Geek. Nerd.

[Educational] Computer Architecture: Computer Memory Hierarchy

[Educational] Computer Architecture:  What is SSE/AVX? (SIMD)

12 minutes ago, Wander Away said:

Nvidia is licensing the ARM reference Neoverse core design

It's actually been known for quite some time now.

19 minutes ago, Wander Away said:

they are going to be using their own ARM-based CPUs for their upcoming compute servers.

From industry standard to proprietary stuff. Yeah, these CPU-GPU-RAM-VRAM cards are probably gonna cost more than top-tier x86 or PowerPC solutions combined with PCIe graphics cards.

 

You will probably need to buy more cards in order to add RAM, and pay for GPUs and CPUs you don't need.

A PC Enthusiast since 2011
AMD Ryzen 5 2600@3.9GHz | GIGABYTE GTX 1660 GAMING OC @ Core 2085MHz Memory 5000MHz
Cinebench R15: 1382cb | Unigine Superposition 1080p Extreme: 3439
On 4/13/2021 at 7:55 AM, RedRound2 said:

It's funny how like 2-3 years ago, people were swearing that ARM PCs were never going to be a thing. CPUs have just become really interesting, and it's gotten even more spiced up after the M1 and ARM's big entrance. Pretty excited to see what's coming in the next 5 years in the CPU space

ARM CPUs are not new, even in the consumer space...

