
Habana AI processors challenging Nvidia

Summary

Israeli chipmaker Habana Labs (currently owned by Intel, which acquired it a year ago for $2 billion) has received an order from Amazon AWS, which plans to offer Habana's Gaudi AI processors to its customers as an alternative to Nvidia's AI chips. Could this challenge Nvidia's position as the AI compute king?


Quotes


AWS says that Habana's Gaudi AI processors deliver up to 40% better price performance than current graphics processing chips (in other words, Nvidia's). This is a dramatic improvement in the fast-growing AI compute resources consumption market, in which every percentage improvement translates into a great deal of money.

 

My thoughts

Given that AWS controls 50% of the data center market, this is an interesting move and a possible sign that maybe... just maybe, Team green might not be the only option when it comes to the world of AI computing. If there's one thing AWS does well, it's cost efficiency.
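AWS's claimed 40% price-performance edge compounds at cloud scale. Here's a back-of-the-envelope sketch (all figures are hypothetical, not from the article) of what that claim means for the cost of a fixed workload:

```python
# Hypothetical back-of-the-envelope comparison: none of these figures
# come from the article; they only illustrate the arithmetic.
def price_performance(throughput_per_hour, cost_per_hour):
    """Work done per dollar spent."""
    return throughput_per_hour / cost_per_hour

gpu = price_performance(throughput_per_hour=1000, cost_per_hour=3.00)
# "Up to 40% better price performance" means 1.4x the work per dollar.
gaudi = gpu * 1.4

# Cost to complete the same fixed workload on each platform:
workload = 1_000_000               # arbitrary units of work
gpu_cost = workload / gpu          # dollars
gaudi_cost = workload / gaudi      # dollars
savings = 1 - gaudi_cost / gpu_cost
print(f"Savings on the same workload: {savings:.1%}")  # ~28.6%
```

Note that 40% better price performance translates to roughly 28.6% (1 - 1/1.4) lower cost for the same amount of work, which at AWS's scale is an enormous amount of money.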

 

Sources

 https://en.globes.co.il/en/article-aws-to-offer-israeli-made-habana-ai-processors-1001351972

 

40 minutes ago, Furiku said:

Team green might not be the only option when it comes to the world of AI computing.

There are other options available, such as TPUs on GCP, which also boast better cost efficiency. The problem comes down to ease of use: even though they are meant to work with the famous frameworks (TF, PyTorch, etc.), there are many underlying problems that are way harder to solve than on your regular, well-known GPU.

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga

2 hours ago, igormp said:

There are other options available, such as TPUs on GCP, which also boast better cost efficiency. The problem comes down to ease of use: even though they are meant to work with the famous frameworks (TF, PyTorch, etc.), there are many underlying problems that are way harder to solve than on your regular, well-known GPU.

The AI/NN stuff tends not to fit squarely into general-purpose computing, so while TF and PyTorch seem to be establishing themselves as middleware between the actual software and the hardware, a lot of the software is still designed to run on CUDA/Quadro systems with 24 GB of video memory. Try running some of those NNs on an 8 GB GeForce part and they will usually just roll over and die after the first sample. I've actually had very little success getting any of the training examples to work; most of the inference examples will run, maybe once if they need more than 6 GB, but they won't work if anything else uses the GPU, like a web browser window.
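Rough arithmetic shows why 24 GB vs. 8 GB is such a hard wall for training. A sketch with purely illustrative numbers (the parameter count, optimizer overhead, and activation size below are assumptions, not measurements of any real model):

```python
# Rough, illustrative VRAM estimate for training; real frameworks add
# workspace memory and fragmentation overhead on top of this.
def training_vram_gb(n_params, bytes_per_value=4, optimizer_states=2,
                     activations_gb=4.0):
    # Weights + gradients + optimizer states (e.g. Adam keeps two
    # extra values per parameter), plus activation memory that grows
    # with batch size.
    per_param = bytes_per_value * (1 + 1 + optimizer_states)
    return n_params * per_param / 1e9 + activations_gb

# A hypothetical 1-billion-parameter model:
needed = training_vram_gb(n_params=1_000_000_000)
print(f"~{needed:.0f} GB needed")  # ~20 GB: fits on a 24 GB card,
                                   # dead on arrival on an 8 GB one.
```

Inference only needs the weights (4 GB here), which is why the inference examples sometimes squeak through where training never does.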

 

And yeah, GPUs were never going to be the most efficient; an ASIC would be. However, we're nowhere near a stage where one can be implemented in hardware, and a ^8 increase in performance is still needed before much of this stuff can be portable and untethered from the cloud.

 


Oh yeah, there have always been a lot of small manufacturers who want to compete with the GPU manufacturers in the AI space.

 

The problem is, many large tech companies could just design their own NPU cards if they don't want to buy from someone else, which means the demand is not that big in many cases.

Specs: Motherboard: Asus X470-PLUS TUF Gaming (Yes I know it's poor but I wasn't informed) RAM: Corsair Vengeance LPX DDR4 3200MHz CL16-18-18-36 2x8GB
CPU: Ryzen 5 3600 @ 4.1GHz Case: Antec P8 PSU: G.Storm GS850 Cooler: Antec K240 with two Noctua Industrial PPC 3000 PWM
Drives: Samsung 970 EVO Plus 250GB, Micron 1100 2TB, Seagate ST4000DM000/1F2168 GPU: EVGA RTX 2080 Ti Black Edition @ 2GHz


Right now Nvidia is far ahead because of their software. CUDA has great integration with popular frameworks like PyTorch and TensorFlow. Until all these other hardware companies invest in their software and APIs, you can't really use their products unless you are Amazon and can develop those tools internally.

1 hour ago, Kisai said:

The AI/NN stuff tends not to fit squarely into general-purpose computing, so while TF and PyTorch seem to be establishing themselves as middleware between the actual software and the hardware, a lot of the software is still designed to run on CUDA/Quadro systems with 24 GB of video memory.

Huh, you do know that TF and PyTorch use CUDA underneath, right? That's why they don't work out of the box with AMD cards. In the end it's all about matrix FMAs.
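To make the "matrix FMAs" point concrete: a dense layer's forward pass is one matrix multiply-accumulate, i.e. a big pile of fused multiply-adds, and that is exactly the operation CUDA cores, Tensor Cores, and TPUs accelerate. A minimal NumPy sketch (shapes are arbitrary examples):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((32, 128))   # batch of 32 inputs, 128 features
W = rng.standard_normal((128, 64))   # layer weights
b = rng.standard_normal(64)          # layer bias

# A dense layer's forward pass is one matrix multiply-accumulate:
y = x @ W + b                        # shape (32, 64)

# ...which is just many scalar fused multiply-adds under the hood:
y00 = sum(x[0, k] * W[k, 0] for k in range(128)) + b[0]
assert np.isclose(y[0, 0], y00)
```

Convolutions, attention, and recurrent layers all reduce to the same primitive, which is why hardware that only does matrix FMAs well can still run nearly any network.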

1 hour ago, Kisai said:

Try running some of those NNs on an 8 GB GeForce part and they will usually just roll over and die after the first sample.

Most models will work just fine; you just need to tune your parameters properly. The only network I know for sure won't run at all with less than 20 GB or so is pix2pix, unless you use a batch size so ridiculously low that training wouldn't be worth it.
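One standard way to "tune your parameters" into a smaller memory budget is gradient accumulation: run several micro-batches and sum their scaled gradients, which for a plain gradient computation matches the full batch exactly. A small NumPy sketch (the linear least-squares model and the sizes are illustrative, not any real network):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((64, 8))   # full batch of 64 samples
y = rng.standard_normal(64)
w = np.zeros(8)                    # linear model weights

def grad(Xb, yb, w):
    # Gradient of mean squared error for a linear model.
    return 2 * Xb.T @ (Xb @ w - yb) / len(yb)

# Full-batch gradient (what a 24 GB card could afford in one go):
g_full = grad(X, y, w)

# Accumulated over 4 micro-batches of 16 (an 8 GB card's budget),
# each weighted by its share of the full batch:
g_acc = np.zeros(8)
for i in range(0, 64, 16):
    g_acc += grad(X[i:i+16], y[i:i+16], w) * (16 / 64)

assert np.allclose(g_full, g_acc)
```

The trade-off is wall-clock time, not correctness; it breaks down mainly for batch-dependent layers like batch norm, which is one reason a tiny batch size can make training "not worth it."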

 

1 hour ago, Kisai said:

And yeah, GPUs were never going to be the most efficient; an ASIC would be. However, we're nowhere near a stage where one can be implemented in hardware, and a ^8 increase in performance is still needed before much of this stuff can be portable and untethered from the cloud.

We already have TPUs and similar chips that are solely meant to accelerate ML tasks, and those can be classified as ASICs since they're not general-purpose chips. Amazon also has cloud FPGAs available.


