
Nvidia Buys Mellanox [Updated]

Lurick

Rumors/reports have started surfacing that Nvidia might be in the market to buy Mellanox, who are mostly known for their InfiniBand and Ethernet networking gear, after earlier reports discussed rumors that Intel was looking to buy the company. It appears that Nvidia has since swooped in with an offer of around $7 billion for the company; exact terms are still unknown, although one person has said the deal will be done in cash. In my opinion it will be interesting to see how Nvidia integrates a vendor focused mainly on networking gear into their existing data center portfolio, and it could also lead to some interesting solutions, with high-bandwidth GPUs coupled with low-latency networking gear.

 

Quote

The deal would be Nvidia’s biggest-ever acquisition and boost its business of making chips for data centers, allowing it to reduce its reliance on the video game industry, for which it is best known as a major technology vendor.

 

Nvidia has outbid Intel Corp in the auction for Mellanox and could announce a deal as early as Monday, the person said. The source asked not to be identified because the negotiations are confidential. Intel and Mellanox did not immediately respond to requests for comment. Nvidia declined to comment. Financial news website Calcalist had reported earlier on Sunday that Nvidia had outbid Intel for Mellanox.

 

Link: https://www.reuters.com/article/us-mellanox-m-a-nvidia/nvidia-nears-deal-to-acquire-mellanox-technologies-source-idUSKBN1QR0QP

 

 

Edit:

It looks like the deal has been finalized at $125/share, or around $6.8 billion, and is expected to close by the end of 2019. There are some concerns, however, as Mellanox has large customers in China and the US/China trade row is still ongoing.

 

New Link: https://www.reuters.com/article/us-mellanox-m-a-nvidia/nvidia-outbids-intel-to-buy-israels-mellanox-in-data-center-push-idUSKBN1QS197



Networking stuff on GPUs?

 



6 minutes ago, Lurick said:

-snip-

YAASSS NETWORKING ON GPUS


2 hours ago, Lurick said:

In my opinion it will be interesting to see how Nvidia integrates a vendor focused mainly on networking gear into their existing data center portfolio, and it could also lead to some interesting solutions, with high-bandwidth GPUs coupled with low-latency networking gear.

Makes sense thinking about it; systems like the DGX-2 are rather complicated, use lots of expensive PCIe/NVLink switches, and need lots of power. I'm thinking Nvidia is looking to offboard the GPUs and run NVLink across IB-type technology. I know PCIe rack switches also exist, but having not used those I don't know how well they work.

 

Would be even more interesting if Nvidia could make the GPUs work independently of a server system and you could send jobs to them over a network/fabric; that would be awesome.


39 minutes ago, leadeater said:

Would be even more interesting if Nvidia could make the GPUs work independently of a server system and you could send jobs to them over a network/fabric; that would be awesome.

Interestingly, that was my second thought. My first thought was that, with ARM getting better in the power compute space, maybe Nvidia sees some writing on the wall for traditional GPGPU and simply wants to diversify their product range.

 

 

 



I thought the title said Matrox at first and kinda panicked, tbh.



5 hours ago, leadeater said:

Would be even more interesting if Nvidia could make the GPUs work independently of a server system and you could send jobs to them over a network/fabric; that would be awesome.

Welcome to systems-on-chip, where the embedded space has been enjoying it for years. Considering that  Xilinx and Intel-Altera have been strapping ARM CPUs beside their FPGAs (Zynq and Cyclone SoCs come to mind), I don't see a conceptual issue strapping a Linux-capable CPU right beside a full nVidia GPU through some proprietary interconnect. Heck, nVidia's Tegra CPU might even play a role in such a possible development.

 

(I am in a uni course where we play around with such SoCs; check the Terasic DE1-SoC evaluation board for more info).
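To make that concrete: on SoCs like the DE1-SoC's Cyclone V, the ARM cores usually reach whatever you put in the FPGA fabric through a memory-mapped bridge, and user-space code can poke those registers directly. Here is a minimal sketch in C, assuming a hypothetical 32-bit register at offset 0 of the lightweight HPS-to-FPGA bridge (the 0xFF200000 base and the register itself are assumptions; check your board's address map):

```c
/* Sketch: talk to a hypothetical FPGA-side register from the ARM cores.
 * The bridge base address and register layout are assumptions; adjust
 * them to your own design. Requires root for /dev/mem access.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define LW_BRIDGE_BASE 0xFF200000UL  /* assumed lightweight HPS-to-FPGA base */
#define MAP_SPAN       0x1000UL      /* map a single page                    */

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    void *base = mmap(NULL, MAP_SPAN, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, LW_BRIDGE_BASE);
    if (base == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    volatile uint32_t *reg = (volatile uint32_t *)base; /* register at offset 0 */
    reg[0] = 0xDEADBEEF;                  /* write goes out over the bridge   */
    printf("readback: 0x%08x\n", reg[0]); /* read comes back from the fabric  */

    munmap(base, MAP_SPAN);
    close(fd);
    return 0;
}
```

The same pattern would apply if the block on the other side of the bridge were a GPU rather than FPGA logic; the CPU just sees device registers in its address space.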



19 minutes ago, FPSwithaWacomTablet said:

-snip-

Shouldn't actually need to pair the GPU with anything that high in the stack though; there's a huge amount of work going on in the industry to get around exactly that. Have a look at Gen-Z, CCIX, etc., or NVMe over Fabrics. Products implementing those could still fit under the term SoC, but you don't need a CPU or to run up an OS.

 

The only problem I can see for now is just how tied GPUs are to the operating system, driver- and API-wise. Getting them to the point where they are standalone addressable entities on a fabric is likely rather challenging, but then again it might not be, or it might not take too long to make it happen. I would love to see it in the next 3 years, but I wouldn't bet money on that time frame.
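To illustrate that driver/API coupling: today even getting a buffer into GPU memory means a host process calling into the vendor runtime, which in turn talks to a kernel driver on that same host. A minimal sketch in C against the CUDA runtime API (built against the CUDA runtime library; the buffer contents are just a stand-in):

```c
/* Today's host-mediated model: the CPU-side process, the CUDA runtime and
 * the kernel driver all sit between the data and the GPU's memory. A GPU
 * that is a standalone fabric endpoint would cut most of this path out.
 */
#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    char host_buf[64] = "job data staged in host RAM first";
    void *dev_buf = NULL;

    /* Every call below goes through the host-side driver stack. */
    if (cudaMalloc(&dev_buf, sizeof(host_buf)) != cudaSuccess) {
        fprintf(stderr, "no usable CUDA device/driver\n");
        return 1;
    }
    cudaMemcpy(dev_buf, host_buf, sizeof(host_buf), cudaMemcpyHostToDevice);

    /* ...kernel launches would likewise be issued from this host process... */

    cudaMemcpy(host_buf, dev_buf, sizeof(host_buf), cudaMemcpyDeviceToHost);
    cudaFree(dev_buf);
    printf("round trip through device memory: %s\n", host_buf);
    return 0;
}
```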


1 hour ago, leadeater said:

Would be even more interesting if Nvidia could make the GPUs work independently of a server system and you could send jobs to them over a network/fabric; that would be awesome.

 

I think that Nvidia has always been after something like that. If you look at some of their compute space marketing materials you'll see this idea of some blade servers with centrally located GPGPU units. With NVLink being embedded in some versions of PowerPC chips now, they may be looking to further sink their teeth into this space.

It may not even have anything to do with improving their GPU offerings, and just be about strengthening their position in the industry.

1 hour ago, mr moose said:

Interestingly, that was my second thought. My first thought was that, with ARM getting better in the power compute space, maybe Nvidia sees some writing on the wall for traditional GPGPU and simply wants to diversify their product range.

I would tend to think that on-the-fly reconfigurable FPGA technology would be the threat they are worried about. AFAIK ARM chips aren't encroaching on the GPGPU space; they are encroaching on the traditional processor space.



Nvidia did have their own networking chip back in the nForce 3 days.



1 hour ago, straight_stewie said:

 

AFAIK ARM chips aren't encroaching on the GPGPU space; they are encroaching on the traditional processor space.

Maybe not now, but I can see it in the future, and NVIDIA, like most companies, plans 30 years out.

 

I think if you had said 10 years ago that ARM would be encroaching on the desktop processor space, you would have been laughed off the forums for being ignorant, but now we are looking at 1,000-petaflop supercomputers being built around it.

 

https://www.datacenterdynamics.com/news/fujitsu-joins-linaro-to-develop-hpc-software-for-arm/

 

 



3 hours ago, leadeater said:

-snip-

Actually this would be an excellent move for the server space: imagine a server with just GPUs, a simple ARM CPU, and high-speed, low-latency networking.

Render farms built like this would be more efficient and faster, since the workload is mostly GPU-bound and doesn't require much CPU power, and the data to be processed can be sent directly to the GPUs' VRAM and/or cache without requiring the CPU or any other device to get involved, allowing fast delivery of data that needs to be processed quickly.

This would be a great move for Nvidia: acquire this tech and use it to improve server render farm technologies.



2 minutes ago, Salv8 (sam) said:

a simple ARM CPU

Why have the ARM CPU at all? Once you have the fabric controller on the device and it can communicate over the fabric itself, there is no longer any need for a CPU.


4 minutes ago, leadeater said:

Why have the ARM CPU at all? Once you have the fabric controller on the device and it can communicate over the fabric itself, there is no longer any need for a CPU.

I suggest ARM since some server software and OSes already exist for the architecture; due to its low power and software support it could be an excellent option for the CPU side. Other RISC architectures could also work, although some existing software may need to be ported. From what I have read, a fabric controller may not be applicable, since all I can find on it is Microsoft's implementation in Azure; it could work, but it would again require custom software that, as far as I know, doesn't exist yet and that can interface with these custom devices. With my idea, existing solutions keep working the same way: only the render node hardware would be changed, and the software would be modified to work with the new hardware and architecture.

Ya see where I'm getting at?



Just now, Salv8 (sam) said:

-snip-

Well, I'm more meaning that Nvidia can make the GPUs addressable resources for a server. By that I mean you could have a rack chassis of GPUs attached to an IB/NVLink fabric that servers also connect to, and you could send tasks directly to those GPUs; each piece can take care of itself and doesn't require any extra handling. That's the basis of Gen-Z: every device has a Gen-Z controller, they can all talk Gen-Z and pass information between each other, but nothing is reliant on anything else to function.
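Purely as a toy illustration of "send tasks directly to those GPUs": everything below is invented for the sketch (the port, the wire format, and the idea that a fabric-attached GPU enclosure would even speak TCP; real IB/NVLink/Gen-Z fabrics sit far below sockets). The shape is the point, though: the client addresses the accelerator itself, with no host server brokering the job.

```c
/* Toy sketch: hand a job descriptor straight to a (hypothetical)
 * network-addressable GPU endpoint. The protocol, port and address are
 * invented for illustration; real fabrics are not TCP.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

struct job_header {         /* hypothetical descriptor */
    uint32_t opcode;        /* e.g. 1 = "run render kernel" */
    uint32_t payload_bytes; /* bytes of input data to follow */
};

int main(void)
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    struct sockaddr_in gpu = {0};            /* the GPU chassis, not a server */
    gpu.sin_family = AF_INET;
    gpu.sin_port   = htons(9999);            /* assumed port                  */
    inet_pton(AF_INET, "192.0.2.10", &gpu.sin_addr); /* assumed address       */

    if (connect(sock, (struct sockaddr *)&gpu, sizeof(gpu)) < 0) {
        perror("connect"); close(sock); return 1;
    }

    float frame[4] = {0.1f, 0.2f, 0.3f, 0.4f};        /* stand-in payload */
    struct job_header hdr = {1, sizeof(frame)};

    write(sock, &hdr, sizeof(hdr));    /* job goes straight to the device */
    write(sock, frame, sizeof(frame));

    float result = 0.0f;
    if (read(sock, &result, sizeof(result)) == sizeof(result))
        printf("result from fabric-attached GPU: %f\n", result);

    close(sock);
    return 0;
}
```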


1 minute ago, leadeater said:

-snip-

Ah, I get ya now...

Anyways, both of our solutions work to solve the same problem; how the industry proceeds is up to them.

I wanna see how this goes.



1 minute ago, Salv8 (sam) said:

-snip-

Well, both Gen-Z and CCIX are being worked on, and there are actual working examples of both. I think CCIX is further along, and it's 'rumored' to be in EPYC 2.


55 minutes ago, leadeater said:

Well, both Gen-Z and CCIX are being worked on, and there are actual working examples of both. I think CCIX is further along, and it's 'rumored' to be in EPYC 2.

cool



One day it's a rumor about Intel, today Nvidia...

I guess that's good for the Mellanox share price!?

While NVLink over IB sounds nice, they should focus on porting codes to support NVLink first. I don't know of a single scientific software package using it (even HPL does not!).


We could see an NVS 2000?



The time change is really fucking with me; I thought the title was "Netflix to possibly buy Mellanox".

 

I am too tired :D

We should also not forget they partnered with Aquantia for autonomous car networking last year. They probably have fingers in multiple pies at the moment.

 

https://hothardware.com/news/nvidia-selects-aquantia-for-multi-gig-connectivity-on-drive-px-autonomous-vehicle-platforms


