Cyberpower678

Member
  • Content Count: 10
  • Joined
  • Last visited
About Cyberpower678

  • Title: Newbie

Posts

  1. Changing some adapter settings per that thread got me up to 6 Gbps receive and 3.2 Gbps send, so that's a definite improvement. I'm not going to use the tunables mentioned in that thread, though; they're simply ridiculous and would cause massive packet loss. (The usual suspects among those adapter settings are sketched after this list.)
  2. I found https://www.ixsystems.com/community/threads/10gbe-transfer-speed-issue-guide.55139/ Maybe it will help.
  3. I couldn't find anything closely related to VMQ, but I disabled Packet Priority and VLAN, with no effect (the VMQ and advanced-property checks are sketched after this list). I just don't get how Windows 10 can be so spectacularly misconfigured for something so widely used.
  4. So the Intel box is most definitely not being CPU throttled; I'm not noticing any change in the cores when I start iPerfing with it. Changing to 9K jumbo frames did nothing, as I expected, so something else is going on here. I'm not sure what's bogging the speed down. I'm using an Intel X540.
  5. I will turn it down a bit then, but aside from that, it seems I just managed to push to the server at 9.9 Gbps. Not sure what I changed to get it to work, but I'll take it. Now I need to fix the Intel box running the i9-9900K.
  6. You have pretty much identical numbers to mine from when I did my BSD-to-BSD testing and then Windows-to-BSD. What changes is the parallelization: no matter how many parallel streams I run, it bogs down at 6 Gbps max. After some additional tweaking I managed to get the receive stream up to 7 Gbps on a single stream from Windows, but send is still stuck at 3.5 Gbps, which is at least higher than it was before. The Aquantia controller supports 16K jumbo frames, so I cranked it all the way up. Windows claims the MTU for the adapter is now 16334, but when I view the network properties it still says an MTU of 1500. This part is confusing me (the MTU checks are sketched after this list).
  7. So I've made some very interesting observations. The thread you linked did help a bit, as I now get speeds of up to 6 Gbps on a single stream. It's funny how my FreeNAS box, running on a Celeron, barely breaks a sweat when transferring at high speeds; CPU usage barely breaks 15%. But my Aquantia box, which runs Windows 10 on an Intel i7-7700K, maxes out a single core when it tries to do a transfer. Talk about insane inefficiency compared to FreeBSD. I also changed the jumbo packet property on the adapter to 16K, but Windows is still negotiating an MTU of 1500. I'm wondering if fixing that might make it possible to finally max out the throughput.
  8. I have the MTU set to 9000, so it should be. I only have intermediate networking skills, so I might be missing something, but CPU usage is pretty minimal during the test. When I tested my two NASes with their Celeron CPUs, CPU usage was very low, no more than 10%, and they achieved maximum throughput on a single stream. I'm inclined to believe this may be an issue with Windows 10 and how it handles TCP windowing, but that can't be happening for everyone or Microsoft would be buried under a shitload of angry users complaining about it. So I'm wondering what I'm doing wrong. Happy to do any kind of testing and post the results.
  9. For some reason I wasn't notified of your response. I'm using iperf3 -c <server> -R, and the same command without -R. I also tried parallel connections, but it still caps out at 6 Gbps (the full set of test commands is sketched after this list). The connections between the servers easily hit 9.9 Gbps on a single thread, over the switch. Running iPerf on Windows, I got only 1.6 Gbps back and forth on a single thread, and up to 5 Gbps on 10 threads.
  10. Hey guys, so I have a (probably not unique) problem with getting my 10G link to my FreeNAS server saturated. I have two Windows 10 PCs, one with an Aquantia NIC and the other with an Intel NIC, and both have problems iPerfing to FreeNAS: I only get up to 3.5 Gbps back and forth to the NAS. The NAS and the PCs are connected over a 10G switch. I have eliminated the NICs and the switch as the culprits, because I built another crappy test NAS with a Celeron CPU and successfully pushed and pulled 9.9 Gbps between the two NASes with iPerf3. This leads me to believe that Windows is the problem here, and since the two machines use a 7th gen i7 and a 9th gen i9, I don't think the CPU is the issue either, seeing as two Celeron-driven servers manage it quite easily. If anyone has a clue what the problem might be, I'm open to all suggestions. Also, in case anyone is wondering, I'm using Cat 8 cables; all of them have been tested and can handle the bandwidth.
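
For reference, here is a minimal iperf3 test matrix covering the runs described above. It's only a sketch: <server> stands for the FreeNAS box's address, the 30-second duration and the eight parallel streams are illustrative, and iperf3 has to be installed on both ends.

    # On the FreeNAS box: start the listener
    iperf3 -s

    # On the Windows PC: single stream, PC sends to the NAS
    iperf3 -c <server> -t 30

    # Same test reversed (-R), so the NAS sends to the PC
    iperf3 -c <server> -t 30 -R

    # Parallel streams, to see whether the cap is per-stream or per-link
    iperf3 -c <server> -t 30 -P 8

If a single stream is capped but the parallel total keeps climbing, the bottleneck is per-flow (one CPU core, window size, offloads); if the total stalls at the same number regardless of stream count, the limit is more likely the link, the adapter, or the driver.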
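
On the MTU confusion above (adapter property set to 16K or 9K jumbo frames while Windows still reports 1500): the driver's jumbo-frame advanced property and the MTU the Windows stack actually uses are reported in different places and can disagree, and the netsh value is the one that matters for the transfer. A sketch of how to check both in PowerShell, assuming the adapter shows up as "Ethernet" (the real name and the display strings vary by driver, e.g. "Jumbo Packet" on Intel, "Jumbo Frame" on some Aquantia drivers):

    # What the driver thinks is configured (look for the Jumbo Packet / Jumbo Frame row)
    Get-NetAdapterAdvancedProperty -Name "Ethernet" | Format-Table DisplayName, DisplayValue

    # What Windows is actually using as the interface MTU
    netsh interface ipv4 show subinterfaces

    # Set the interface MTU explicitly (9000 as an example) if it is still 1500
    netsh interface ipv4 set subinterface "Ethernet" mtu=9000 store=persistent

    # Confirm jumbo frames survive the whole path: -f sets Don't Fragment,
    # and 8972 = 9000 minus 20 bytes of IP header and 8 bytes of ICMP header
    ping -f -l 8972 <server>

The FreeNAS interface and the switch ports have to allow the same frame size, otherwise the ping above fails even when both Windows values look right.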
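
On the VMQ question above: on a Windows 10 client that isn't running Hyper-V, VMQ usually isn't in play, but it's quick to rule out, and the same cmdlets list the Packet Priority and VLAN setting and the offloads that keep coming up. Again a sketch, with "Ethernet" standing in for the real adapter name:

    # Is VMQ supported/enabled on this adapter?
    Get-NetAdapterVmq -Name "Ethernet"

    # Turn it off if it is (nothing on a non-Hyper-V client depends on it)
    Disable-NetAdapterVmq -Name "Ethernet"

    # Every advanced property the driver exposes, including Packet Priority & VLAN
    Get-NetAdapterAdvancedProperty -Name "Ethernet" | Format-Table DisplayName, DisplayValue

    # Current offload state; LSO or RSC being disabled means more per-packet CPU work
    Get-NetAdapterLso -Name "Ethernet"
    Get-NetAdapterChecksumOffload -Name "Ethernet"
    Get-NetAdapterRsc -Name "Ethernet"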
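
Finally, on the adapter settings and the single core maxing out: each TCP flow is processed on one core, so single-stream speed is bounded by per-packet CPU cost, while RSS only spreads multiple flows across cores. The "tunables" in threads like the one linked above usually boil down to RSS, receive/transmit buffers, interrupt moderation, and TCP auto-tuning. This is a sketch of what to inspect, not specific values to set ("Ethernet" is again a placeholder, and the display names and allowed values depend on the Intel or Aquantia driver):

    # Is RSS enabled, and over how many queues/processors?
    Get-NetAdapterRss -Name "Ethernet"
    Enable-NetAdapterRss -Name "Ethernet"

    # Buffer sizes, interrupt moderation and related knobs live in the advanced properties
    Get-NetAdapterAdvancedProperty -Name "Ethernet" |
        Where-Object { $_.DisplayName -match "Buffer|Interrupt|RSS" }

    # Windows-side TCP behaviour; receive window auto-tuning should normally read "normal"
    netsh int tcp show global

Whatever gets changed, it's worth re-running the iperf3 matrix above in both directions, since a setting that helps receive (larger receive buffers, for example) may do nothing for send.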