
10GbE SFP+ Network Concerns


Hey guys,

 

I'm connecting two of my servers with a dedicated 10GbE SFP+ connection. Both NICs are detected in the OS and are transferring data successfully over the link at near "max speed", but the connection speed is not what it should be. I've enabled Jumbo Packets at the maximum size; picture below.

[screenshot]
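One way to take the disks out of the equation entirely is a memory-to-memory test; iperf3 is the usual tool, but a minimal Python sketch of the same idea looks like this (the port number and script roles are arbitrary placeholders):

```python
# Memory-to-memory TCP throughput test: a sketch of what iperf3 does, so no
# disk is involved on either end. Port 5201 is an arbitrary placeholder.
import socket
import sys
import time

PORT = 5201
CHUNK = b"\x00" * (1 << 20)  # 1 MiB buffer sent straight from RAM

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        total, start = 0, time.time()
        while data := conn.recv(1 << 20):
            total += len(data)
        secs = time.time() - start
        print(f"{total / secs / 1e6:.0f} MB/s ({total * 8 / secs / 1e9:.2f} Gb/s)")

def client(host, seconds=10):
    with socket.create_connection((host, PORT)) as conn:
        end = time.time() + seconds
        while time.time() < end:
            conn.sendall(CHUNK)  # closing the connection tells the server to stop

if __name__ == "__main__":
    server() if sys.argv[1] == "server" else client(sys.argv[2])
```

Run it with `server` on one box and `client <server-ip>` on the other; if this also tops out low, the problem is the link or NIC configuration rather than the disks.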

"45 ACP because shooting twice is silly!"


Have you tried sending something from one Linux machine to another with a less intensive protocol like SMB? This looks like a problem of overhead.


12 minutes ago, StarPunk said:

Have you tried sending something from one Linux machine to another with a less intensive protocol like SMB? This looks like a problem of overhead.

Overhead would be a couple hundred Mb/s off the top, not 9Gb/s off, lol.

 

Have you tried disabling all other adapters or unplugging all other cables to make sure it's going through the right adapter?

What model NIC?

What cable?
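For a sense of how small protocol overhead actually is on 10GbE, a rough sketch assuming standard Ethernet + IPv4 + TCP framing (illustrative figures only):

```python
# Rough upper bound on TCP payload throughput over 10GbE. Assumes standard
# Ethernet framing (8 B preamble + 14 B header + 4 B FCS + 12 B interframe
# gap) plus 20 B IPv4 and 20 B TCP headers. Illustrative figures only.
LINK_GBPS = 10
ETH_OVERHEAD = 8 + 14 + 4 + 12       # bytes on the wire outside the IP packet
IP_TCP_HEADERS = 20 + 20

for mtu in (1500, 9000):             # standard vs jumbo frames
    payload = mtu - IP_TCP_HEADERS
    wire = mtu + ETH_OVERHEAD
    print(f"MTU {mtu}: ~{LINK_GBPS * payload / wire:.2f} Gb/s usable")
# MTU 1500: ~9.49 Gb/s usable -> only ~0.5 Gb/s lost to headers
# MTU 9000: ~9.91 Gb/s usable
```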



1 minute ago, Lurick said:

Overhead would be a couple hundred Mb/s off the top, not 9Gb/s off, lol.

 

Have you tried disabling all other adapters or unplugging all other cables to make sure it's going through the right adapter?

What model NIC?

What cable?

My fault, I read 6.6 Gb/s... you are totally right.


15 minutes ago, StarPunk said:

Have you tried sending something from one Linux machine to another with a less intensive protocol like SMB? This looks like a problem of overhead.

No I have not; I don't think SMB is the issue.

3 minutes ago, Lurick said:

Overhead would be a couple hundred Mb/s off the top, not 9Gb/s off, lol.

 

Have you tried disabling all other adapters or unplugging all other cables to make sure it's going through the right adapter?

What model NIC?

What cable?

Mellanox ConnectX-2 SFP+ NICs, x2 in each server. I actually achieved a 678 MB/s transfer speed to my NAS with my RAM drive. At this point I would think it's a drive speed bottleneck. What do you guys think?

NAS drives: 3x WD 1TB Red 5400 RPM in RAID 5

Connecting server drives: 10K Seagate SAS Cheetah drives in RAID 1, OR a 4 GB RAM drive.
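A back-of-the-envelope comparison of the link against those arrays, using rough, commonly cited per-drive sequential rates (assumptions, not measurements of these exact drives):

```python
# Back-of-the-envelope throughput ceilings. Per-drive numbers are rough,
# commonly cited sequential rates, not measurements of these exact drives.
link_usable = 10_000 / 8 * 0.95      # ~1188 MB/s usable on 10GbE after overhead

wd_red_5400 = 150                    # MB/s per WD Red 5400 RPM drive (assumed)
cheetah_10k = 200                    # MB/s per 10K SAS Cheetah drive (assumed)

raid5_nas = 2 * wd_red_5400          # 3-drive RAID 5 stripes data over 2 disks
raid1_sas = cheetah_10k              # RAID 1 writes at roughly one drive's speed

print(f"10GbE link:     ~{link_usable:.0f} MB/s")
print(f"NAS RAID 5:     ~{raid5_nas} MB/s at best (parity writes are slower)")
print(f"Server RAID 1:  ~{raid1_sas} MB/s at best")
# Both arrays top out far below what the link can carry, so the disks,
# not the network, set the ceiling.
```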

"45 ACP because shooting twice is silly!"


3 minutes ago, StarPunk said:

My fault, I read 6.6 Gb/s... you are totally right.

Even then, I've had SMB file transfers right on 10 Gbps, very slightly less. SMB isn't SMB1 anymore; it's not horrific like it used to be.

 

43 minutes ago, Nikolithebear said:

Hey guys,

 

I'm connecting two of my servers with a dedicated 10GbE SFP+ connection. Both NICs are detected in the OS and are transferring data successfully over the link at near "max speed", but the connection speed is not what it should be. I've enabled Jumbo Packets at the maximum size; picture below.

Check your actual negotiated link speed; it looks like it's only at 1 Gbps.
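One quick way to check this programmatically is the cross-platform psutil package (an assumed dependency here; `Get-NetAdapter` on Windows or `ethtool` on Linux report the same thing):

```python
# Print the negotiated link speed and MTU for every active NIC; a 10GbE
# port that fell back to gigabit shows up as 1000 here.
# Requires: pip install psutil
import psutil

for name, stats in psutil.net_if_stats().items():
    if stats.isup:
        print(f"{name}: {stats.speed} Mb/s, MTU {stats.mtu}")
```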


1 minute ago, Nikolithebear said:

No I have not; I don't think SMB is the issue.

Mellanox ConnectX-2 SFP+ NICs, x2 in each server. I actually achieved a 678 MB/s transfer speed to my NAS with my RAM drive. At this point I would think it's a drive speed bottleneck. What do you guys think?

NAS drives: 3x WD 1TB Red 5400 RPM in RAID 5

Connecting server drives: 10K Seagate SAS Cheetah drives in RAID 1, OR a 4 GB RAM drive.

Yeah, sounds like a disk limit then. Open Resource Monitor, go to the Disk tab, and check your disk activity and queue length.
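For a scriptable version of the same check, a minimal sketch with psutil (again an assumed dependency) that samples per-disk throughput once a second:

```python
# Sample per-disk read/write throughput once per second, roughly what the
# Disk tab in Resource Monitor shows. Requires: pip install psutil
import time

import psutil

prev = psutil.disk_io_counters(perdisk=True)
while True:
    time.sleep(1)
    now = psutil.disk_io_counters(perdisk=True)
    for disk, c in now.items():
        rd = (c.read_bytes - prev[disk].read_bytes) / 1e6
        wr = (c.write_bytes - prev[disk].write_bytes) / 1e6
        if rd or wr:
            print(f"{disk}: read {rd:.1f} MB/s, write {wr:.1f} MB/s")
    prev = now
```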


4 minutes ago, leadeater said:

Yeah, sounds like a disk limit then. Open Resource Monitor, go to the Disk tab, and check your disk activity and queue length.

Yup! Looks like I've hit max disk usage. Ah well, now it's an excuse for me to get some SSDs ;) Thank you all for the help! I'm sure I'll be back soon with new issues, as it always seems. Networking is one letter away from not working. :)

"45 ACP because shooting twice is silly!"

