SFP28 VS QSFP+

reverseslayer

I'm currently running an InfiniBand QSFP+ link between 2 servers, each of which has an 8-drive 2TB NVMe RAID array.

 

QSFP+ is just 4 link-aggregated SFP+ connections, so when I do a file transfer over something like SMB or NFS I only get about 1.2 gigabytes per second max, which is the theoretical max of a single SFP+ link. I haven't tested SMB Multichannel over InfiniBand yet to see how much that helps, but I'm not expecting much unless it's hit from multiple angles.
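Before buying anything, it may be worth measuring whether extra parallel flows actually raise aggregate throughput on this link. Below is a rough Python sketch of that test; the address, port, and stream count are placeholders, and Python itself can bottleneck well below 10Gbit/s, so treat the numbers as a relative comparison (iperf3 with -P is the proper tool).

```python
# Minimal single-flow vs. multi-flow TCP throughput probe. Only a sketch:
# host/port/stream count are placeholders. Run with --server on one box,
# then point the client at it and compare --streams 1 against --streams 4.
import argparse, socket, threading, time

CHUNK = 1 << 20      # 1 MiB per send
DURATION = 10        # seconds per run

def drain(conn):
    buf = bytearray(CHUNK)
    while conn.recv_into(buf):   # read until the client disconnects
        pass

def serve(port):
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", port))
    srv.listen()
    while True:                  # Ctrl-C to stop
        conn, _ = srv.accept()
        threading.Thread(target=drain, args=(conn,), daemon=True).start()

def stream(host, port, totals, i):
    sock = socket.create_connection((host, port))
    payload = b"\0" * CHUNK
    deadline = time.monotonic() + DURATION
    sent = 0
    while time.monotonic() < deadline:
        sock.sendall(payload)
        sent += len(payload)
    sock.close()
    totals[i] = sent

if __name__ == "__main__":
    ap = argparse.ArgumentParser()
    ap.add_argument("--server", action="store_true")
    ap.add_argument("--host", default="192.0.2.1")   # placeholder address
    ap.add_argument("--port", type=int, default=5201)
    ap.add_argument("--streams", type=int, default=1)
    args = ap.parse_args()
    if args.server:
        serve(args.port)
    else:
        totals = [0] * args.streams
        workers = [threading.Thread(target=stream,
                                    args=(args.host, args.port, totals, i))
                   for i in range(args.streams)]
        for w in workers: w.start()
        for w in workers: w.join()
        gbps = sum(totals) * 8 / DURATION / 1e9
        print(f"{args.streams} stream(s): {gbps:.2f} Gbit/s aggregate")
```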

 

What I'm truly curious about is: should I upgrade to SFP28 InfiniBand adapters?

 

The overall link is slower at 25Gbit vs 56Gbit, but since it's a single channel I'm thinking it might help the SMB transfers; if I had to guess, a theoretical 2x on transfer speed. A big plus is that the network adapters, transceivers, and fiber are all cheaper. The only issue I see is that SFP28 is still new, so the switches haven't come down in price on the used market yet.

 

If anyone has the capability, could someone test whether SFP28 or QSFP+ is faster for single-machine file transfers (SMB, NFS, CrystalDiskMark)?


I don't have a way of testing, but I doubt you're going to get significantly more than 10Gbps out of SMB even on a 25Gbps link.

 

You could set up an FTP server and transfer with simultaneous connections to get multiple SFP+ links active.
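A minimal sketch of that approach, assuming anonymous FTP and a transfer split into several files (HOST and FILES below are placeholders):

```python
# Rough sketch of the parallel-FTP idea above. Each worker opens its own
# control and data connection, so every download rides a separate TCP flow.
from concurrent.futures import ThreadPoolExecutor
from ftplib import FTP

HOST = "192.0.2.1"                                # placeholder address
FILES = ["part1.bin", "part2.bin", "part3.bin", "part4.bin"]

def fetch(name):
    ftp = FTP(HOST)
    ftp.login()                                   # anonymous login
    with open(name, "wb") as out:
        ftp.retrbinary(f"RETR {name}", out.write)
    ftp.quit()
    return name

with ThreadPoolExecutor(max_workers=len(FILES)) as pool:
    for done in pool.map(fetch, FILES):
        print("finished", done)
```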


QSFP is NOT link agg of 4x SFP+ connections; it's 4x electrical connections from the SERDES at ~10Gbps each, but the interface does 40Gbps natively. If it were link agg you would usually never get over 10Gbps on a single flow, and that's not true with QSFP and friends. Same with QSFP28 and QSFP-DD: yes, they're electrically multiple lanes (4x 25Gbps for QSFP28; 8x 56 or 112Gbps for QSFP-DD), but each is still a single interface.

 

The only time the individual lanes come into play is for breakout and down-speeding.
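For reference, the standard 40GBASE lane arithmetic: four electrical lanes signaling at 10.3125Gbit/s each, with 64b/66b line coding, comes to exactly 40Gbit/s of usable bandwidth on the one interface.

```python
# 40GBASE lane math: 4 lanes x 10.3125 Gbit/s, 64b/66b coded.
LANES = 4
LANE_RATE = 10.3125          # Gbit/s signaling rate per electrical lane
CODING = 64 / 66             # 64b/66b line-coding efficiency

raw = LANES * LANE_RATE      # 41.25 Gbit/s on the wire
usable = raw * CODING        # 40.0 Gbit/s delivered to the MAC
print(f"raw {raw:.2f} Gbit/s -> usable {usable:.1f} Gbit/s")
```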


So then you're saying that going with SFP28 would not make any tangible difference to the speed of the network. I was also under the impression that serializers and deserializers weren't used in high-speed applications anymore, so that one is my fault for not looking into it deeply enough.

 

Do you think that not having the SERDES in the link would decrease latency? Or should I just run the QSFP+ network that I currently have?

 

I ask because I'm running the Mellanox IS5022, and anything that's SFP28 looks to be way more expensive, almost 10x more.

 

I also looked it up: on single-mode fiber QSFP+ is wavelength-muxed (four 10G channels on separate wavelengths over one fiber pair), and on multimode fiber it's parallel strands; the channels are never serialized into one lane. Each of the 4 channels is its own lane through and through.

 

For SMF:

[Image: the working principle of 40GBASE-LR4 transceivers]

 

For MMF it's parallel MTP connectors with 8 strands: 4 transmit and 4 receive.


SFP28 would make zero difference at this point in time; your issue is something else, likely on the server side.

 

SerDes is 100% still used; it's the heart of what gets data from the ASIC to the physical interface itself. SERDES is sometimes also called the PHY controller (or SERDES PHY). It's not used on the SFP module itself, though. Right now we're at 112Gbps SERDES x 512 lanes to drive 64x 800Gbps interfaces on a single switch. I'm no EE expert, but this should help explain it better if you want to dive deeper:

https://docs.broadcom.com/doc/56980-DG
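As a quick sanity check on those figures (assuming ~100Gbit/s of usable payload per 112G-class PAM4 lane):

```python
# 512 SerDes lanes x ~100 Gbit/s usable (112G-class PAM4) on one side,
# 64 front-panel ports x 800 Gbit/s on the other: both total 51.2 Tbit/s.
lanes, lane_gbps = 512, 100
ports, port_gbps = 64, 800
assert lanes * lane_gbps == ports * port_gbps == 51_200
print(lanes * lane_gbps / 1000, "Tbit/s of switching capacity")
```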

