
10Gig Networking in ESXi VMs

DrDrummer
Solved by DrDrummer


Hello there,

 

I am from Germany and I will try my best to write an understandable text here.

 

We are currently trying to virtualize our company servers with VMware vSphere. I have run into a problem where my virtual machines don't use the bandwidth of our 10Gig NICs. All VMs show a 10 Gbit/s connection but only transfer data at a maximum of around 110 MB/s. It's the same from VM to VM or from VM to the network.

 

Is there a configuration step I am missing?

We use Mellanox ConnectX-3 and Intel X540-T2 cards.

The servers run Windows Server 2016, 2012, and 2012 R2.


I hope you can understand my problem and have an idea to help me out.

 

Greetings from Germany,
Niklas


What kind of files are you trying to transfer, and what kind of storage media are we talking about?
Maybe there's a bottleneck somewhere because the VMs eat into the available storage resources. Just a thought... :s

Btw, Yaaaay for Germanaay! :P 

 

 

 

 


I am transferring 4 GB .iso and .rar files for testing.

 

The VM host is installed on a SATA SSD RAID 1 and uses a RAID 5 with six SAS drives for storage.

 

The client for testing is a Windows 10 PC with a SATA SSD.

 


Use iperf to test the network bandwidth between the VMs and between a VM and another client or physical server. Most often it's not actually network bandwidth that is the issue.
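For reference, here is a minimal sketch of how such a test could be scripted from one of the Windows guests, assuming iperf3 is available on both machines and the other end is running "iperf3 -s"; the executable path and server address below are placeholders, not values from this thread.

```python
import json
import subprocess

# Placeholders - adjust to your environment.
IPERF3 = r"C:\iperf3\iperf3.exe"   # assumed location of iperf3 on the client VM
SERVER = "192.168.1.20"            # VM or physical host running "iperf3 -s"

def measure_throughput(server: str, seconds: int = 10, streams: int = 4) -> float:
    """Run an iperf3 client test and return the received throughput in Gbit/s."""
    result = subprocess.run(
        [IPERF3, "-c", server, "-t", str(seconds), "-P", str(streams), "--json"],
        capture_output=True, text=True, check=True,
    )
    report = json.loads(result.stdout)
    bits_per_second = report["end"]["sum_received"]["bits_per_second"]
    return bits_per_second / 1e9

if __name__ == "__main__":
    gbps = measure_throughput(SERVER)
    print(f"iperf3 throughput to {SERVER}: {gbps:.2f} Gbit/s")
```

A single Windows VM often needs several parallel streams (-P 4 or more) before it gets anywhere near 10 Gbit/s, and because iperf3 only touches RAM it cleanly separates network limits from storage limits.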


30 minutes ago, DrDrummer said:

I am transferring 4 GB .iso and .rar files for testing.

The VM host is installed on a SATA SSD RAID 1 and uses a RAID 5 with six SAS drives for storage.

The client for testing is a Windows 10 PC with a SATA SSD.

 

It's weird that your network speed caps out at around 110 MB/s. That is exactly what I'd expect from a 1 Gbit/s network: 1 Gbit/s ÷ 8 ≈ 125 MB/s raw, which drops to roughly 110-118 MB/s after protocol overhead.

I think leadeater is correct, and testing your network bandwidth with iperf should help us identify whether it's a network issue or not.

 

 

 

 


Thanks for your replies so far. This is the iperf result between two of the VMs:

 

[Screenshot: iperf result between the two VMs]


1 hour ago, DrDrummer said:

Thanks for your replies so far. This is the iperf result between two of the VMs:

Which VM NIC type is being used? E1000 or VMXNET3?


3 hours ago, leadeater said:

Which VM NIC type is being used? E1000 or VMXNET3?

VMXNET3 for all of the virtual adapters.


1) (You surely have this already, but I'll mention it anyway.) Make sure you are running the latest virtual hardware version supported by your VMware release and have the latest VMware Tools installed.

2) You may need to disable LRO on the VMXNET3 network adapter. On Windows, the LRO technology is also referred to as Receive Segment Coalescing (RSC). For details, see the VMware article on enabling or disabling LRO on a VMXNET3 adapter in a Windows virtual machine: https://kb.vmware.com/s/article/2129176. A sketch of how to check and toggle RSC from inside the guest follows below.

Let me know how it goes. I've seen this before, mainly when dealing with SQL Server VMs.
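To illustrate point 2, here is a minimal sketch of checking and disabling RSC from inside a Windows guest using the standard NetAdapter PowerShell cmdlets; the adapter name "Ethernet0" is a placeholder, and the same change can of course be made by hand in the adapter's advanced properties.

```python
import subprocess

# Placeholder adapter name - run "Get-NetAdapter" in the guest to find the real one.
ADAPTER = "Ethernet0"

def powershell(command: str) -> str:
    """Run a PowerShell command in the Windows guest and return its output."""
    completed = subprocess.run(
        ["powershell", "-NoProfile", "-Command", command],
        capture_output=True, text=True, check=True,
    )
    return completed.stdout.strip()

# Show the current RSC (LRO) state of the VMXNET3 adapter.
print(powershell(f'Get-NetAdapterRsc -Name "{ADAPTER}"'))

# Disable RSC on that adapter (needs an elevated session).
powershell(f'Disable-NetAdapterRsc -Name "{ADAPTER}"')

# Verify that RSC is now reported as disabled for IPv4 and IPv6.
print(powershell(f'Get-NetAdapterRsc -Name "{ADAPTER}"'))
```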


I just get about 40 Mbit/s after disabling LRO?

Edit: I added a second virtual NIC for testing (also VMXNET3) and now we are talking about 1.08 to 1.39 Gbit/s. The maximum here was 162 MB/s.
So we are above the 1 Gbit mark but nowhere near 10 Gbit.

 

Edit 2: I just connected two clients via 10 Gbit to test the overall performance without any VM magic.
After setting the jumbo packets to the maximum value, I get a respectable 4.27 Gbit/s with iperf. Transferring my test files still ends up with a very throttled speed. This happens with every file I have tested so far.
You can check this in the following pictures.

 

Both clients have SSD storage and 10 Gbit network cards (Intel X540-T2 and ASUS XG-C100C).

 

Niklas

[Screenshots: iperf result and file-transfer speed between the two clients]


After reconfiguring all virtual network connections from scratch, it works (sort of...).
I'm getting 500 MB/s between the two SSD clients and about 350 MB/s to the RAID server.

 

Disabling RSC and configuring jumbo packets helps a lot.

Thanks for your support!
 


In my experience, the VM layer's limit is around 4 Gbit/s. We couldn't get past it no matter what we tried, so we gave the VM direct access to a 10 Gbit/s network card using PCI passthrough and it started flying at full speed.


If I understand this correctly, I would need a dedicated NIC for each VM then?


Yes, that's correct.

 

Our VMs couldn't even communicate with each other at 10 Gbit/s (also due to the 4 Gbit/s limit in the hypervisor), which was mandatory for our use case. We had to install two 2-port NICs in the machine, dedicate one to each VM, and connect them to each other. A funny setup, but it worked like a charm.


That would be an option, but I think the current numbers are OK for now. We don't have to transfer lots of files in our daily business.


4 hours ago, vano411 said:

Yes, that's correct.

 

Our VMs couldn't even communicate with each other at 10 Gbit/s (also due to the 4 Gbit/s limit in the hypervisor), which was mandatory for our use case. We had to install two 2-port NICs in the machine, dedicate one to each VM, and connect them to each other. A funny setup, but it worked like a charm.

Our VMs can all do 10 Gbit/s happily on ESXi. The NIC settings do need optimizing, though.


You might want to try enabling Receive Side Scaling (RSS), which balances the receive-side CPU load across multiple cores.

You need to do this in vSphere on the VMXNET3 adapter, as well as in the guest OS in the adapter properties.
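As a sketch of the guest-side part, assuming the standard NetAdapter cmdlets are available and the adapter is called "Ethernet0" (a placeholder): the vSphere-side setting still has to be applied to the VM's VMXNET3 adapter itself.

```python
import subprocess

# Placeholder adapter name - list adapters with "Get-NetAdapter" first.
ADAPTER = "Ethernet0"

def powershell(command: str) -> str:
    """Run a PowerShell command in the Windows guest and return its output."""
    completed = subprocess.run(
        ["powershell", "-NoProfile", "-Command", command],
        capture_output=True, text=True, check=True,
    )
    return completed.stdout.strip()

# Check whether Receive Side Scaling is currently enabled on the adapter.
print(powershell(f'Get-NetAdapterRss -Name "{ADAPTER}"'))

# Enable RSS so receive processing is spread over several vCPUs (elevated session required).
powershell(f'Enable-NetAdapterRss -Name "{ADAPTER}"')
```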



Just a "control" question: if you have two VMs on the same host assigned to the same virtual switch and you run an iperf3 test between them, what speed do you get? Is it higher than 10 Gbit/s? (Traffic between VMs on the same vSwitch never touches the physical NIC, so it can exceed the link speed.)

Oh, and another thing: you did configure jumbo frames on your physical switches and virtual switches, in addition to the guest OS, correct?
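A quick way to verify that jumbo frames really work end to end is a non-fragmenting ping with a jumbo-sized payload; below is a minimal sketch for a Windows client, where the target address is a placeholder. On the ESXi side, the MTU of a standard vSwitch can be raised with "esxcli network vswitch standard set -v vSwitch0 -m 9000" (vSwitch name as appropriate).

```python
import subprocess

# Placeholder target - any host on the far side of the path you want to check.
TARGET = "192.168.1.20"

# 9000-byte MTU minus 20 bytes IPv4 header and 8 bytes ICMP header.
PAYLOAD = 9000 - 28

# Windows ping: "-f" sets Don't Fragment, "-l" sets the payload size, "-n" the count.
result = subprocess.run(
    ["ping", "-f", "-l", str(PAYLOAD), "-n", "4", TARGET],
    capture_output=True, text=True,
)
print(result.stdout)

# If any switch port, vSwitch or NIC along the path is still at MTU 1500,
# Windows reports "Packet needs to be fragmented but DF set."
if "needs to be fragmented" in result.stdout.lower():
    print("Jumbo frames are NOT working end to end - check switch, vSwitch and NIC MTU settings.")
```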


I don't get this... today the maximum transfer rate between the two machines is up to 180 MB/s, but the iperf3 result is just 1.24 Gbit/s.

 

Correct, I did the jumbo frame configuration in the VMs and on the host itself.


1) What about your network switch? Was the jumbo frame configuration applied there as well?

 

2) The machines you used for testing: are they on the same host or on separate hosts? Would you care to draw a simple diagram of your infrastructure and post it? It would make troubleshooting easier.

 

