
I got a bunch of ConnectX-3 RDMA 10GbE cards. Now what?

maxtch

I am trying to upgrade my home network to 10Gig using retired datacenter stuff. For whatever reason the cheapest single- and dual-port 10Gig Ethernet cards with SFP+ ports are the ConnectX-3 CX311A and CX312A at ~$25 each, and they also support RDMA over Ethernet (RoCE). At that price, all my desktops and servers, except that Hackintosh with its X520, will receive a ConnectX-3 card. The NAS gets a dual-port CX312A, and everyone else gets a single-port CX311A. Now that I have somehow ended up with an RDMA-ready home network, how can I make use of that feature?
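I assume the first step is to confirm the cards actually do RoCE end to end. Something like this should work as a quick check (a rough sketch: it assumes the in-kernel mlx4 driver or Mellanox OFED plus the rdma-core and perftest packages, and 192.168.1.10 is just a stand-in for the NAS's address):

    # list the RDMA devices and confirm the ports are active (rdma-core)
    ibv_devinfo

    # on the NAS: start a RoCE bandwidth test server (perftest)
    ib_send_bw

    # on a desktop: run the client against the NAS
    ib_send_bw 192.168.1.10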

 

I know SMB Direct is a thing, and for my virtualized NAS I obviously will use the virtualization features on that CX312A to give the storage node two RDMA-capable network interfaces, one from each of the two ports. The problem is how to make Linux/Samba work with SMB Direct. Also, since this is a NAS, what other common protocols support RDMA?
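From what I can tell, userspace Samba does not implement SMB Direct, while the in-kernel ksmbd server does, and NFS has an RDMA transport as well. For the NFS-over-RDMA route, my understanding is it looks roughly like this (a sketch only: it assumes the svcrdma/xprtrdma kernel modules are available, the share is already exported, and 192.168.1.10 and /srv/tank are placeholders):

    # on the NAS: load the server-side RDMA transport and listen on the standard NFS/RDMA port
    modprobe svcrdma
    echo "rdma 20049" > /proc/fs/nfsd/portlist

    # on a client: mount the export over RDMA instead of TCP
    modprobe xprtrdma
    mount -o rdma,port=20049 192.168.1.10:/srv/tank /mnt/tank

Is that roughly the right approach, or is there a better-supported path for SMB Direct specifically?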

 

Other than dumping files over the network, what else can I do with RDMA? Can I, for example, pull in GPU power over RDMA? I did nab a Tesla K20X in case such a card is necessary. Will RDMA also help with VM migration?
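On the migration question, I understand QEMU can do live migration over RDMA from its monitor, so something like the following should be possible (a sketch assuming Proxmox's QEMU build has RDMA migration enabled; 192.168.1.20:4444 is a placeholder destination, and the destination VM's other options are omitted):

    # on the destination host: start the receiving QEMU waiting for an RDMA migration
    qemu-system-x86_64 ... -incoming rdma:0.0.0.0:4444

    # in the source VM's QEMU monitor: optionally pin guest RAM, then migrate over RDMA
    migrate_set_capability rdma-pin-all on
    migrate -d rdma:192.168.1.20:4444

Whether Proxmox's own migration tooling exposes this is something I would still need to figure out.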

 

p.s. The 10Gig switch I got is a used Brocade 8000, a combined 24-port 10GbE + 8-port 8Gb Fibre Channel switch. What can the FC ports do for me?

The Fruit Pie: Core i7-9700K ~ 2x Team Force Vulkan 16GB DDR4-3200 ~ Gigabyte Z390 UD ~ XFX RX 480 Reference 8GB ~ WD Black NVMe 1TB ~ WD Black 2TB ~ macOS Monterey amd64

The Warship: Core i7-10700K ~ 2x G.Skill 16GB DDR4-3200 ~ Asus ROG Strix Z490-G Gaming Wi-Fi ~ PNY RTX 3060 12GB LHR ~ Samsung PM981 1.92TB ~ Windows 11 Education amd64
The ThreadStripper: 2x Xeon E5-2696v2 ~ 8x Kingston KVR 16GB DDR3-1600 Registered ECC ~ Asus Z9PE-D16 ~ Sapphire RX 480 Reference 8GB ~ WD Black NVMe 1TB ~ Ubuntu Linux 20.04 amd64

The Question Mark? Core i9-11900K ~ 2x Corsair Vengeance 16GB DDR4-3000 @ DDR4-2933 ~ MSI Z590-A Pro ~ Sapphire Nitro RX 580 8GB ~ Samsung PM981A 960GB ~ Windows 11 Education amd64
Home server: Xeon E3-1231v3 ~ 2x Samsung 8GB DDR3-1600 Unbuffered ECC ~ Asus P9D-M ~ nVidia Tesla K20X 6GB ~ Broadcom MegaRAID 9271-8iCC ~ Gigabyte 480GB SATA SSD ~ 8x Mixed HDD 2TB ~ 16x Mixed HDD 3TB ~ Proxmox VE amd64

Laptop 1: Dell Latitude 3500 ~ Core i7-8565U ~ NVS 130 ~ 2x Samsung 16GB DDR4-2400 SO-DIMM ~ Samsung 960 Pro 512GB ~ Samsung 850 Evo 1TB ~ Windows 11 Education amd64
Laptop 2: Apple MacBookPro9.2 ~ Core i5-3210M ~ 2x Samsung 8GB DDR3L-1600 SO-DIMM ~ Intel SSD 520 Series 480GB ~ macOS Catalina amd64


The SFP+ ports allow you to use DAC cables and/or fiber transceivers. These use less power than regular copper Ethernet (10GBASE-T), and fiber can run much longer than the 100m limit of copper cabling.

You can of course also get adapters (10GBASE-T SFP+ modules) to convert those into copper ports.

Router: Intel N100 (pfSense) WiFi6: Zyxel NWA210AX (1.7Gbit peak at 160MHz)
WiFi5: Ubiquiti NanoHD OpenWRT (~500Mbit at 80MHz) Switches: Netgear MS510TXUP, MS510TXPP, GS110EMX
ISPs: Zen Full Fibre 900 (~930Mbit down, 115Mbit up) + Three 5G (~800Mbit down, 115Mbit up)
Upgrading Laptop/Desktop CNVIo WiFi 5 cards to PCIe WiFi6e/7

