
10 Gig Ethernet issues

grayson612

So I have recently upgraded to 10Gbps Ethernet and I am having issues.  My main workstation is a Windows 10 machine with a 6800K, 64GB of RAM, an Intel 750 series PCIe NVMe SSD, and an Intel X540-T1 Ethernet adapter.  My server is Windows Server 2016 with a 6900K, 64GB of RAM, an Intel 750 series PCIe NVMe SSD, and an Intel X540-T2 in a bonded connection to my new Netgear switch, an XS708E-V2.  I have jumbo frames enabled on everything but the switch (I cannot find the option), and I can get the full 10 Gbps speed in something like iperf and nttcp, well over 1100 Mbps.  BUT the transfer of a file from Windows gets really strange.  See the attached photo for what happens.  I checked the temps on all the cards and SSDs and even tried changing the target drive for the test to an 850 Pro RAID 0 array, but the story is the same.  
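I assume I can at least sanity check whether jumbo frames actually make it through the switch with something like this from the workstation (the IP is just a stand-in for my server's address):

netsh interface ipv4 show subinterfaces
(shows the MTU Windows thinks each interface is using)

ping -f -l 8972 192.168.1.10
(8972 bytes of payload plus 28 bytes of ICMP/IP headers makes a 9000-byte packet; -f sets don't-fragment, so the ping fails if anything in the path is still limited to 1500)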

 

Any help is appreciated! 

Capture01.PNG

"If a man does his best, what else is there?"
- General George S. Patton (1885-1945)


First of all, what speeds are you getting in iperf? That is how we can rule out issues in your network. It also could be a buffer issue.
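If you don't have numbers handy, something like this should do it (the IP is just a placeholder for your server, and a few parallel streams help rule out a single-stream limit):

iperf3 -s
(on the server)

iperf3 -c 192.168.1.10 -P 4 -t 30
(on the workstation: 4 parallel streams for 30 seconds)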

My native language is C++


Create a ram disk on both computers and do a copy between those.

 

Also is the Windows Server 2016 running the Active Directory role?

 

Unplug one of the network cables on the server so there is only a single connection.
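For the RAM disk part, any tool should do; ImDisk is a free one. Roughly (run elevated; sizes, drive letters, and the share name are just examples):

imdisk -a -s 8G -m R: -p "/fs:ntfs /q /y"
(creates an 8GB RAM disk as R: and formats it NTFS)

fsutil file createnew R:\test.bin 4294967296
(makes a 4GB dummy file to copy)

Then share the server's R: drive and copy the file across while watching the speed - that takes the SSDs out of the equation entirely.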


3 minutes ago, tt2468 said:

First of all, what speeds are you getting in iperf? That is how we can rule out issues in your network. It also could be a buffer issue.

 

33 minutes ago, grayson612 said:

I can get the full 10 Gbps speed in something like iperf and nttcp, well over 1100 Mbps.

:)

 

Edit:

@grayson612 Think you meant to say 1100 MBps, not Mbps - 1100 MB/s works out to roughly 8.8 Gb/s, which lines up with a 10Gb link, whereas 1100 Mb/s would only be about 137 MB/s.


Just now, leadeater said:

 

:)

oops

 

Yeah @grayson your network is fucked up

My native language is C++


2 minutes ago, leadeater said:

Create a ram disk on both computers and do a copy between those.

 

Also is the Windows Server 2016 running the Active Directory role?

 

Unplug one of the network cables on the server so there is only a single connection.

I have not tried the RAM disk yet, so I will try that here in a bit and do a test copy.  I doubt the result will change, since both of the 750 SSDs should be able to handle 1+ GB/s writes, but I honestly am out of ideas at this point...

 

The server is not running Active Directory; it is just acting as a network file server.  

 

I did just re-run the nttcp test, and it showed almost identical numbers as before, so I really don't know what's up.  

 

8 minutes ago, tt2468 said:

oops

 

Yeah @grayson your network is fucked up

I'm beginning to agree...

"If a man does his best, what else is there?"
- General George S. Patton (1885-1945)


It probably won't make much of a difference, but have you tried a direct connection between the machines? Also, maybe try disabling the bond on the X540?
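If the bond was made with the built-in Windows NIC teaming (LBFO) rather than Intel's utility, it can be checked and broken from an elevated PowerShell prompt on the server - the team name below is just a placeholder:

Get-NetLbfoTeam
(lists any teams and their member adapters)

Remove-NetLbfoTeam -Name "Team1"
(removes the team so you're back to a single connection)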


The dips look like disk write-out on the target host; if the disk is overwhelmed and builds up a queue, the network speed will drop accordingly.


Run the test while you have perfmon running on each side of the 10GbE network, open the disk tab, and watch for disk queue.  It will give you some more information at least.
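If the perfmon GUI is a pain, the same counters can be watched from PowerShell while the copy runs (rough sketch; you may want the specific disk instance rather than _Total):

Get-Counter -Counter "\PhysicalDisk(_Total)\Current Disk Queue Length", "\PhysicalDisk(_Total)\Disk Write Bytes/sec" -SampleInterval 1 -Continuous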

Please quote or tag me if you need a reply


16 hours ago, speeddemon91 said:

It probably won't make much of a difference, but have you tried a direct connection between the machines? Also, maybe try disabling the bond on the X540?

I have, and it is the same result.  

 

12 hours ago, Falconevo said:

The dips look like disk write-out on the target host; if the disk is overwhelmed and builds up a queue, the network speed will drop accordingly.


Run the test while you have perfmon running on each side of the 10GbE network, open the disk tab, and watch for disk queue.  It will give you some more information at least.

I did, and nothing got even close to peak; both drives sat at around 65% while the transfer was actually occurring.  

"If a man does his best, what else is there?"
- General George S. Patton (1885-1945)

