DELL Precision 7920 Quadro GV100 rack form factor NVLink possibility

Hello guys!

 

Seems like you are my only, last hope. I've been searching for the answer for days, but nothing yet.

 

There is a Precision 7920 rack form factor workstation with 2 CPUs and 2 1600W PSUs, equipped with a Quadro GV100 graphics card.

The owner wants a second, identical card in the system, which is fine, but here comes the problem/question: these cards have NVLink connectors on top for greater performance. If you build this config as a tower, you just place the two cards above each other, pop in the NVLink bridge, and voilà, there is the doubled bandwidth.

But how can you connect them through their NVLink connectors in a server, or rack form factor?

 

Hope at least somebody has seen a configuration like this before and can help where the internet couldn't. 😞

 

Thank you in advance!


7 minutes ago, Fefo said:

Thank you in advance!

Based on the picture, the cards are on risers with only three slots, so there isn't room to stack the cards (and ergo, no NVLink bridge).

 

[Attached image: AJ9F_132125233258978459Wc4umhHsfB.jpg]

Main System (Byarlant): Ryzen 7 5800X | Asus B550-Creator ProArt | EK 240mm Basic AIO | 16GB G.Skill DDR4 3200MT/s CAS-14 | XFX Speedster SWFT 210 RX 6600 | Samsung 990 PRO 2TB / Samsung 960 PRO 512GB / 4× Crucial MX500 2TB (RAID-0) | Corsair RM750X | Mellanox ConnectX-3 10G NIC | Inateck USB 3.0 Card | Hyte Y60 Case | Dell U3415W Monitor | Keychron K4 Brown (white backlight)

 

Laptop (Narrative): Lenovo Flex 5 81X20005US | Ryzen 5 4500U | 16GB RAM (soldered) | Vega 6 Graphics | SKHynix P31 1TB NVMe SSD | Intel AX200 Wifi (all-around awesome machine)

 

Proxmox Server (Veda): Ryzen 7 3800XT | AsRock Rack X470D4U | Corsair H80i v2 | 64GB Micron DDR4 ECC 3200MT/s | 4x 10TB WD Whites / 4x 14TB Seagate Exos / 2× Samsung PM963a 960GB SSD | Seasonic Prime Fanless 500W | Intel X540-T2 10G NIC | LSI 9207-8i HBA | Fractal Design Node 804 Case (side panels swapped to show off drives) | VMs: TrueNAS Scale; Ubuntu Server (PiHole/PiVPN/NGINX?); Windows 10 Pro; Ubuntu Server (Apache/MySQL)


Media Center/Video Capture (Jesta Cannon): Ryzen 5 1600X | ASRock B450M Pro4 R2.0 | Noctua NH-L12S | 16GB Crucial DDR4 3200MT/s CAS-22 | EVGA GTX750Ti SC | UMIS NVMe SSD 256GB / Seagate 1.5TB HDD | Corsair CX450M | Viewcast Osprey 260e Video Capture | Mellanox ConnectX-2 10G NIC | LG UH12NS30 BD-ROM | Silverstone Sugo SG-11 Case | Sony XR65A80K

 

Camera: Sony ɑ7II w/ Meike Grip | Sony SEL24240 | Samyang 35mm ƒ/2.8 | Sony SEL50F18F | Sony SEL2870 (kit lens) | PNY Elite Perfomance 512GB SDXC card

 

Network:

Spoiler
                           ┌─────────────── Office/Rack ────────────────────────────────────────────────────────────────────────────┐
Google Fiber Webpass ────── UniFi Security Gateway ─── UniFi Switch 8-60W ─┬─ UniFi Switch Flex XG ═╦═ Veda (Proxmox Virtual Switch)
(500Mbps↑/500Mbps↓)                             UniFi CloudKey Gen2 (PoE) ─┴─ Veda (IPMI)           ╠═ Veda-NAS (HW Passthrough NIC)
╔═══════════════════════════════════════════════════════════════════════════════════════════════════╩═ Narrative (Asus USB 2.5G NIC)
║ ┌────── Closet ──────┐   ┌─────────────── Bedroom ──────────────────────────────────────────────────────┐
╚═ UniFi Switch Flex XG ═╤═ UniFi Switch Flex XG ═╦═ Byarlant
   (PoE)                 │                        ╠═ Narrative (Cable Matters USB-PD 2.5G Ethernet Dongle)
                         │                        ╚═ Jesta Cannon*
                         │ ┌─────────────── Media Center ──────────────────────────────────┐
Notes:                   └─ UniFi Switch 8 ─────────┬─ UniFi Access Point nanoHD (PoE)
═══ is Multi-Gigabit                                ├─ Sony Playstation 4 
─── is Gigabit                                      ├─ Pioneer VSX-S520
* = cable passed to Bedroom from Media Center       ├─ Sony XR65A80K (Google TV)
** = cable passed from Media Center to Bedroom      └─ Work Laptop** (Startech USB-PD Dock)

 

Retired/Other:

Spoiler

Laptop (Rozen-Zulu): Sony VAIO VPCF13WFX | Core i7-740QM | 8GB Patriot DDR3 | GT 425M | Samsung 850EVO 250GB SSD | Blu-ray Drive | Intel 7260 Wifi (lived a good life, retired with honor)

Testbed/Old Desktop (Kshatriya): Xeon X5470 @ 4.0GHz | ZALMAN CNPS9500 | Gigabyte EP45-UD3L | 8GB Nanya DDR2 400MHz | XFX HD6870 DD | OCZ Vertex 3 Max-IOPS 120GB | Corsair CX430M | HooToo USB 3.0 PCIe Card | Osprey 230 Video Capture | NZXT H230 Case

TrueNAS Server (La Vie en Rose): Xeon E3-1241v3 | Supermicro X10SLL-F | Corsair H60 | 32GB Micron DDR3L ECC 1600MHz | 1x Kingston 16GB SSD / Crucial MX500 500GB


34 minutes ago, Fefo said:

these cards have NVLink connectors on top for greater performance

Does he really need NVLink? Depending on the workload, the performance benefit would be around 3~10% tops.

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga


11 hours ago, AbydosOne said:

Based on the picture, the cards are on risers with only three slots, so there isn't room to stack the cards (and ergo, no NVLink bridge).

 

[Attached image: AJ9F_132125233258978459Wc4umhHsfB.jpg]

Yep, that is why I thought there was some special cable for servers to connect their NVLinks, because you can't do that due to their position in the rack... But it seems like there's no connector at all that replaces the NVLink bridge in racks.


10 hours ago, igormp said:

Does he really need NVLink? Depending on the workload, the performance benefit would be around 3~10% tops.

Well, in theory the memory bandwidth is doubled with the NVLink bridge, so I guess they would like to enjoy this feature.

 

I saw a few of Linus' videos about NVLink, and it does make a big difference there! But from what I've read, NVLink has to be supported by the application as well, so I have to find out what program/workload will be used with these cards.
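For reference, once the second card is in, a quick way to see whether the driver will actually let the two GPUs talk to each other directly (the capability an NVLink bridge, or plain PCIe P2P, exposes to applications) is a minimal CUDA query like the sketch below. This is just a sketch and assumes the CUDA toolkit is installed on the box and that the two GV100s enumerate as devices 0 and 1; nvidia-smi topo -m and nvidia-smi nvlink -s show related information from the command line.

// Minimal sketch (assumes a CUDA toolkit on the machine; compile with nvcc):
// asks the runtime whether GPU 0 and GPU 1 can access each other's memory
// directly. NVLink is one way that capability gets exposed; the bridge itself
// still has to be physically installed for the links to come up.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count < 2) {
        std::printf("Fewer than two CUDA devices visible.\n");
        return 1;
    }
    int can01 = 0, can10 = 0;
    cudaDeviceCanAccessPeer(&can01, 0, 1);  // can device 0 reach device 1?
    cudaDeviceCanAccessPeer(&can10, 1, 0);  // and the other direction
    std::printf("P2P 0->1: %s, 1->0: %s\n",
                can01 ? "yes" : "no", can10 ? "yes" : "no");
    return 0;
}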


5 hours ago, Fefo said:

Well, in theory the memory bandwidth is doubled with the NVLink bridge, so I guess they would like to enjoy this feature.

 

I saw a few of Linus' videos about NVLink, and it does make a big difference there! But from what I've read, NVLink has to be supported by the application as well, so I have to find out what program/workload will be used with these cards.

Did you contact Dell? Normally their support is reasonable for the business-grade stuff. Wouldn't be surprised if there is some weird adapter to make NVLink work.


6 hours ago, Fefo said:

Well, in theory the memory bandwidth is doubled with the NVLink bridge, so I guess they would like to enjoy this feature.

 

It allows P2P communication between the GPUs, bypassing PCIe (which is usually a bottleneck). What I'm asking is whether their application actually makes use of that bandwidth, because some don't.

 

As an example, machine learning only benefits from NVLink when you have over 8 GPUs in a single system; otherwise the gains are around the 3~10% I mentioned before.
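If anyone wants to put a number on that once both cards are in, a rough peer-to-peer copy probe like the sketch below shows roughly what the link between the cards delivers. The buffer size and iteration count are arbitrary, error checking is left out, and it assumes two visible CUDA devices; very roughly, PCIe 3.0 x16 tops out around 16 GB/s each way in theory (noticeably less in practice), so throughput stuck in that range would suggest the bridge isn't being used.

// Rough device-to-device copy probe (a sketch, not a real benchmark):
// copies a buffer from GPU 0 to GPU 1 a few times and prints achieved GB/s.
// cudaDeviceEnablePeerAccess simply returns an error (and the copy falls
// back to staging through the host) if P2P isn't available.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    const size_t bytes = 256ull << 20;  // 256 MiB test buffer (arbitrary size)
    void *src = nullptr, *dst = nullptr;

    cudaSetDevice(0);
    cudaMalloc(&src, bytes);
    cudaDeviceEnablePeerAccess(1, 0);   // let GPU 0 address GPU 1's memory

    cudaSetDevice(1);
    cudaMalloc(&dst, bytes);
    cudaDeviceEnablePeerAccess(0, 0);   // and the reverse direction

    cudaSetDevice(0);
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    for (int i = 0; i < 10; ++i)
        cudaMemcpyPeer(dst, 1, src, 0, bytes);   // GPU 0 -> GPU 1
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    std::printf("GPU0 -> GPU1: %.1f GB/s\n", 10.0 * bytes / (ms / 1000.0) / 1e9);
    return 0;
}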



15 hours ago, Electronics Wizardy said:

Did you contact Dell? Normally their support is reasonable for the business-grade stuff. Wouldn't be surprised if there is some weird adapter to make NVLink work.

Not yet, but that will be the next step on Monday, I guess. But it seems like there is no option for NVLink in these cases.


14 hours ago, igormp said:

It allows P2P communication between the GPUs, bypassing PCIe (which is usually a bottleneck). What I'm asking is whether their application actually makes use of that bandwidth, because some don't.

 

As an example, machine learning only benefits from NVLink when you have over 8 GPUs in a single system; otherwise the gains are around the 3~10% I mentioned before.

I don't know the type of workload yet (it's for a server at some kind of ministry of agriculture), but I think it doesn't matter that much; it seems like there is no option for this connection in rack form factors.


  • 2 weeks later...

So, I know that NVLink is supported on Dell rack servers. I've deployed this type of product several times in specific DC solutions like VxRail or standalone R940 servers, and also in some HPC solutions like the C4140.

 

I've never done it for a Dell Precision in rack format, but again, I normally don't deal with the client-side products, only enterprise. Your best bet would be to reach out to your sales exec; if they do a part drill-down of any of our NVLink-ready solutions, they could maybe figure out the bridge part number and sell one of those to you. Just for reference, here is a link you can provide to them so they know what to quote.

 

https://downloads.dell.com/manuals/common/dell-emc-dfd-nvidia-recommendation-servers-workloads.pdf

