vCenter ESXi to FreeNAS 40Gb link

Hello, hope everyone is doing well. 
 

So I have been trying to redesign my current ESXi setup. I am currently running ESXi and its VMs on one server, with about 20 VMs running.
 

I want to move the VM data to a different server (FreeNAS) and run two ESXi hosts (active/standby) managed by vCenter, using FreeNAS as the shared storage for the VMs.
 

To help with this setup and to prevent bottlenecks, I am planning to upgrade the current 1Gb links between the three physical servers to 40Gb links.
 

I am planning to use NAS Pro hard drives on the FreeNAS server. The server will also have 144GB of ECC RAM.
 

I am not expecting to hit 40Gb link speed between them in a real-world scenario, as other components (the HDDs) will be the bottleneck.
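For a rough sense of scale, here is a quick back-of-envelope sketch; the per-drive throughput and overhead figures are assumptions for illustration, not benchmarks:

```python
# Back-of-envelope only: how many spinning drives would it take to keep a
# 40Gb link busy? The per-drive figure below is an assumption, not a benchmark.

LINK_GBPS = 40               # nominal link speed, gigabits per second
PROTOCOL_EFFICIENCY = 0.90   # assume ~10% lost to TCP/iSCSI/ZFS overhead
HDD_MBPS = 180               # optimistic sequential MB/s for one 7200rpm NAS drive

usable_mb_per_s = LINK_GBPS * 1000 / 8 * PROTOCOL_EFFICIENCY
drives_needed = usable_mb_per_s / HDD_MBPS

print(f"Usable link bandwidth: ~{usable_mb_per_s:.0f} MB/s")
print(f"Drives needed to saturate it (best-case sequential): ~{drives_needed:.0f}")
# Random I/O from ~20 VMs gets far less than 180MB/s per drive, so the real
# number of spindles needed is much higher than this.
```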
 

So my question is: what changes can I make to the FreeNAS system that will help get as close as possible to 40Gb transfer speeds?
 

Any advice will be appreciated. 
Thank you. 

When you're reading data from RAM you'll quite possibly see those speeds, but when reading/writing to the array there won't be much you can do to increase the performance besides using more HDDs or having some sort of SSD cache (which ZFS doesn't support in this use case).

Why do you need 144GB of RAM on the FreeNAS server? 🤔

I assume you're going to be creating iSCSI LUNs in FreeNAS to act as datastores for your ESXi hosts, and connecting them via the software iSCSI adapter?

I imagine that since your LUNs will be fairly large, they won't be cached in ARC. There would also be very little scrubbing since FreeNAS will only be managing them at the block level. So I think you'll just be wasting $$$ unless your FreeNAS server already has that much RAM, or unless you have a stack of other storage in this FreeNAS box that isn't just going to be iSCSI storage.

 

Make sure you set your MTU appropriately. A 40Gb QSFP connection, like with ConnectX-3s, should use jumbo frames (e.g. MTU 9000), which reduces overhead and increases throughput. I assume they'll be direct connections, but make sure you assign storage and vMotion to the vmkernel attached to the 40Gb vNICs going to your FreeNAS; then your primary network vmkernel will be your management.
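To put rough numbers on the jumbo frame point, here is a quick sketch of per-frame payload efficiency at MTU 1500 vs 9000, assuming plain IPv4/TCP headers with no options:

```python
# Rough payload efficiency per Ethernet frame at MTU 1500 vs 9000.
# Assumes plain IPv4 + TCP headers (20 bytes each, no options).

ETH_OVERHEAD = 14 + 4 + 8 + 12   # Ethernet header + FCS + preamble + inter-frame gap
IP_TCP_HEADERS = 20 + 20

def payload_efficiency(mtu: int) -> float:
    payload = mtu - IP_TCP_HEADERS   # TCP payload carried in one frame
    on_wire = mtu + ETH_OVERHEAD     # bytes the frame actually occupies on the wire
    return payload / on_wire

for mtu in (1500, 9000):
    print(f"MTU {mtu}: ~{payload_efficiency(mtu) * 100:.1f}% of wire bandwidth is payload")

# The raw header saving is only a few percent; the bigger win at 40Gb is far
# fewer frames (and interrupts) per second for the same throughput.
```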

8 minutes ago, Jarsky said:

I assume you're going to be creating iSCSI LUNs in FreeNAS to act as datastores for your ESXi hosts, and connecting them via the software iSCSI adapter?

Maybe using NFS instead might be better? NFS isn't exactly high performance, but if it's a better protocol to use with FreeNAS then maybe it's the better option. All our ESXi datastores at work are NFS, but we also use OS-mounted iSCSI disks for SQL etc.

3 minutes ago, leadeater said:

Maybe using NFS instead might be better? NFS isn't exactly high performance, but if it's a better protocol to use with FreeNAS then maybe it's the better option. All our ESXi datastores at work are NFS, but we also use OS-mounted iSCSI disks for SQL etc.

We typically only use NFS stores for holding things like ISOs or images.

I was just clarifying that he'd want to use iSCSI for its higher performance, given he's going 40Gb and wants advice around maximizing throughput.

 

19 minutes ago, Jarsky said:

Why do you need 144GB of RAM on the FreeNAS server? 🤔

I assume you're going to be creating iSCSI LUNs in FreeNAS to act as datastores for your ESXi hosts, and connecting them via the software iSCSI adapter?

I imagine that since your LUNs will be fairly large, they won't be cached in ARC. There would also be very little scrubbing since FreeNAS will only be managing them at the block level. So I think you'll just be wasting $$$ unless your FreeNAS server already has that much RAM, or unless you have a stack of other storage in this FreeNAS box that isn't just going to be iSCSI storage.

 

Make sure you set your MTU appropriately. A 40Gb QSFP connection, like with ConnectX-3s, should use jumbo frames (e.g. MTU 9000), which reduces overhead and increases throughput. I assume they'll be direct connections, but make sure you assign storage and vMotion to the vmkernel attached to the 40Gb vNICs going to your FreeNAS; then your primary network vmkernel will be your management.

I already have 144GB of RAM on that particular server, so I thought I'd keep it as is. I could move some of it elsewhere if needed.
 

Regarding NFS and iSCSI, this is something I have been debating as well. Not sure which would be the best to go for. The main reason behind this setup is to improve stability and to have good redundancy for some of the critical VMs.
 

Yes, in regards to the settings, I have taken note of those. I am planning to run it through a Cisco Nexus 9300 or Catalyst 9500 switch. That will also give me additional ports to work with if need be, for future projects.

14 minutes ago, leadeater said:

Maybe using NFS instead might be better? NFS isn't exactly high performance, but if it's a better protocol to use with FreeNAS then maybe it's the better option. All our ESXi datastores at work are NFS, but we also use OS-mounted iSCSI disks for SQL etc.

What sort of performance difference can we expect between NFS and iSCSI? The main objective here is stability.

38 minutes ago, Windows7ge said:

When you're reading data from RAM you'll quite possibly see those speeds, but when reading/writing to the array there won't be much you can do to increase the performance besides using more HDDs or having some sort of SSD cache (which ZFS doesn't support in this use case).

I was thinking the same. It would have been useful to run NVMe SSDs, but a complete move over to them would cost a lot. As with some of the other comments, I have also been debating the performance difference between iSCSI and NFS. Don't get me wrong, the whole idea behind this setup is to have good redundancy for some of the critical VMs.

4 minutes ago, HawkJizbel said:

I was thinking the same. It would have been useful to run NVMe SSDs, but a complete move over to them would cost a lot. As with some of the other comments, I have also been debating the performance difference between iSCSI and NFS. Don't get me wrong, the whole idea behind this setup is to have good redundancy for some of the critical VMs.

If you're inclined to use SAS drives, you could probably make this happen with HDDs in one server. It'd still cost quite a bit though.

46 minutes ago, Windows7ge said:

 or have some sort of SSD cache (which ZFS doesn't support in this use case).

 

Why do you say that ZFS doesn't support having an SSD as a cache?

1 minute ago, unijab said:

 

Why do you say that ZFS doesn't support having an SSD as a cache?

Note where I said "in this use case". ZFS does support the use of a ZIL. This, however, will not accelerate asynchronous writes to the array via means such as SMB or NFS. It won't help him in this use case.

4 minutes ago, Windows7ge said:

If you're inclined to use SAS drives, you could probably make this happen with HDDs in one server. It'd still cost quite a bit though.

True. I will post some results once I get the NICs. 

1 minute ago, Windows7ge said:

Note where I said "in this use case". ZFS does support the use of a ZIL. This, however, will not accelerate asynchronous writes to the array via means such as SMB or NFS. It won't help him in this use case.

I noticed that as well. SSD caching doesn’t give much of a performance boost on FreeNAS. 

15 minutes ago, HawkJizbel said:

I noticed that as well. SSD caching doesn’t give much of a performance boost on FreeNAS. 

My own experience trying to get very high performance out of ZFS was a rocky road that I had no choice but to blame on my choice of hardware. Through all my research into the software, though, that's one thing that makes me sad about ZFS.

 

A ZIL will accelerate synchronous writes. This is good for applications like databases and virtual machines. For network shares, no. Your only option is to beef up the pool itself.

One thing you could do is take all your drives, create a ZFS mirror for each drive pair, and then put all the mirror pairs in the one pool. It will act a bit like RAID 10 and should be better than RAID-Z, Z2, etc. in this use case.

 

Nothing really beats more spindles though, if it's still spinning drives.
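As a rough illustration of why striped mirrors tend to win for VM-style random I/O, here is a crude model; the per-disk IOPS figure is an assumption, not a measurement:

```python
# Crude model only; the per-disk IOPS number is an assumption.
# ZFS stripes across vdevs and random IOPS scale roughly with vdev count,
# which is why a pool of mirror pairs usually beats one wide RAID-Z2 vdev
# for VM-style random I/O.

DISKS = 8
DISK_IOPS = 120      # rough random IOPS for a single 7200rpm HDD

# Pool of striped mirrors: DISKS // 2 vdevs.
mirror_vdevs = DISKS // 2
mirror_read_iops = DISKS * DISK_IOPS           # either side of a mirror can serve reads
mirror_write_iops = mirror_vdevs * DISK_IOPS   # each write hits both disks in a pair
mirror_capacity_disks = DISKS // 2

# Single DISKS-wide RAID-Z2 vdev: roughly one disk's worth of random IOPS.
raidz2_iops = DISK_IOPS
raidz2_capacity_disks = DISKS - 2

print(f"Striped mirrors: ~{mirror_read_iops} read / ~{mirror_write_iops} write IOPS, "
      f"{mirror_capacity_disks} disks of usable capacity")
print(f"{DISKS}-wide RAID-Z2: ~{raidz2_iops} random IOPS, "
      f"{raidz2_capacity_disks} disks of usable capacity")
```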

1 hour ago, Windows7ge said:

This, however, will not accelerate asynchronous writes to the array via means such as SMB or NFS. It won't help him in this use case.

You can change the sync tunable to force writes through an SSD-based ZIL.

20 minutes ago, unijab said:

You can change the sync tunable to force writes through an SSD-based ZIL.

sync=always?

Correct

 

Just have a reliable and fast SSD, in a mirror if you must.

42 minutes ago, unijab said:

Correct

 

Just have a reliable and fast SSD, in a mirror if you must.

A long time ago I tried using an Intel 750 series SSD to speed up my 8-drive RAID6 array. After finding that the ZIL didn't work the way I wanted it to, I learned about sync=always. Trying it actually made the performance even worse. I'm all for giving it another shot, though, when I find the hardware to spare.

7 hours ago, HawkJizbel said:

What sort of performance difference can we expect between NFS and iSCSI? The main objective here is stability.

NFS is fine for large I/O sizes and sequential-throughput type stuff, but not so good for small I/O and IOPS-intensive workloads, basically not great for SQL's 64k block size. That said, we have plenty of SQL VMs that sit on NFS datastores just fine; it's just not the best case for performance.

 

Stability-wise I wouldn't say either is better; it's more a performance thing. Still, I haven't really had any problems getting 1.5GB/s-2GB/s of disk performance for a single VM on an NFS datastore backed by an all-flash array.

 

Try both and see which one performs better for you. I'm not a FreeNAS user, so I don't know which is best for ESXi.
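For a first-pass A/B comparison, something like this throwaway sequential-write timer run inside a test VM on each datastore type would do; the file path is a placeholder, and a proper tool like fio will give far more meaningful random-I/O numbers:

```python
# Throwaway sequential-write timer to compare, say, a VM on an NFS datastore
# against one on an iSCSI datastore. TEST_FILE is a placeholder path; a real
# benchmark tool like fio would give far more useful random-I/O numbers.

import os
import time

TEST_FILE = "/tmp/datastore_write_test.bin"   # hypothetical path inside the test VM
BLOCK = os.urandom(1024 * 1024)               # 1 MiB of random data (defeats compression)
TOTAL_MB = 4096                               # write 4 GiB so guest caches matter less

start = time.time()
with open(TEST_FILE, "wb") as f:
    for _ in range(TOTAL_MB):
        f.write(BLOCK)
    f.flush()
    os.fsync(f.fileno())    # make sure the data actually reached the datastore
elapsed = time.time() - start

print(f"~{TOTAL_MB / elapsed:.0f} MB/s sequential write")
os.remove(TEST_FILE)
# ZFS ARC and the ESXi/guest caches can still skew this, so treat it as a
# rough A/B comparison rather than an absolute number.
```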

6 hours ago, Windows7ge said:

A long time ago I tried using an Intel 750 series SSD to speed up my 8-drive RAID6 array. After finding that the ZIL didn't work the way I wanted it to, I learned about sync=always. Trying it actually made the performance even worse. I'm all for giving it another shot, though, when I find the hardware to spare.

Nothing will be as fast as RAM.

But setting sync to always will bind all your writes to the performance of the SSD that you have.

So unless you go crazy with your ZIL setup, most people won't like the performance they get.
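Roughly speaking, with sync=always every write has to land on the log device before it's acknowledged, so a crude ceiling looks like the sketch below; the latency and bandwidth figures are assumptions for illustration, not measurements:

```python
# Very rough ceiling for sync=always writes: every write is acknowledged only
# after it lands on the log device, so throughput is bounded by SLOG latency,
# outstanding I/O depth and the SLOG's own write bandwidth. All figures below
# are assumptions for illustration.

RECORD_KB = 128             # write size per commit (e.g. ZFS recordsize)
QUEUE_DEPTH = 4             # concurrent outstanding writes from the hosts
SLOG_LATENCY_MS = 0.5       # assumed sync-write latency of the log SSD
SLOG_BANDWIDTH_MBPS = 1000  # assumed sequential write bandwidth of that SSD

per_stream_mbps = (RECORD_KB / 1024) / (SLOG_LATENCY_MS / 1000)   # MB/s for one stream
ceiling_mbps = min(per_stream_mbps * QUEUE_DEPTH, SLOG_BANDWIDTH_MBPS)

print(f"One outstanding write: ~{per_stream_mbps:.0f} MB/s")
print(f"At queue depth {QUEUE_DEPTH}: ~{ceiling_mbps:.0f} MB/s ceiling")
# A slow or SATA log device (1ms+ latency) drags this into the low hundreds of
# MB/s, which is why sync=always often feels worse than the pool's async speed.
```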

2 hours ago, unijab said:

Nothing will be as fast as RAM.

But setting sync to always will bind all your writes to the performance of the SSD that you have.

So unless you go crazy with your ZIL setup, most people won't like the performance they get.

I was only seeing around 200-250MB/s forcing writes through the Intel 750 ZIL. I say I'm willing to give it another go because I was having unrelated performance issues with my choice of hardware, but still, what you say doesn't match up with what I experienced.

 

Getting back to the OP's topic: in all likelihood a ZIL will not help him achieve 40Gb writes. But as you said, yes, ZFS's ARC will make great use of the 144GB of RAM if he wants to read frequently accessed data very quickly. He should have no problems there.
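If he wants to check whether the ARC is actually earning its keep once the box is up, a quick hit-ratio check could look like this; the sysctl names assumed below are the usual FreeBSD kstat.zfs.misc.arcstats ones and may differ between FreeNAS versions:

```python
# Quick ARC hit-ratio check on the FreeNAS box, to see whether the 144GB of
# RAM is actually being used for caching. Assumes the FreeBSD sysctl names
# kstat.zfs.misc.arcstats.{size,hits,misses}; exact names can vary by version.

import subprocess

def arcstat(name: str) -> int:
    out = subprocess.run(
        ["sysctl", "-n", f"kstat.zfs.misc.arcstats.{name}"],
        capture_output=True, text=True, check=True,
    )
    return int(out.stdout.strip())

size, hits, misses = (arcstat(n) for n in ("size", "hits", "misses"))
total = hits + misses

print(f"ARC size: {size / 2**30:.1f} GiB")
if total:
    print(f"ARC hit ratio: {100 * hits / total:.1f}% over {total} lookups")
```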

8 hours ago, Windows7ge said:

I was only seeing around 200-250MB/s forcing writes through the Intel 750 ZIL. I say I'm willing to give it another go because I was having unrelated performance issues with my choice of hardware, but still, what you say doesn't match up with what I experienced.

 

Getting back to the OP's topic: in all likelihood a ZIL will not help him achieve 40Gb writes. But as you said, yes, ZFS's ARC will make great use of the 144GB of RAM if he wants to read frequently accessed data very quickly. He should have no problems there.

Thanks. I'll definitely have a look into it and post the results I get. I should have the setup up and running for testing in a week.

14 hours ago, leadeater said:

NFS is fine for large I/O sizes and sequential-throughput type stuff, but not so good for small I/O and IOPS-intensive workloads, basically not great for SQL's 64k block size. That said, we have plenty of SQL VMs that sit on NFS datastores just fine; it's just not the best case for performance.

 

Stability-wise I wouldn't say either is better; it's more a performance thing. Still, I haven't really had any problems getting 1.5GB/s-2GB/s of disk performance for a single VM on an NFS datastore backed by an all-flash array.

 

Try both and see which one performs better for you. I'm not a FreeNAS user, so I don't know which is best for ESXi.

Thanks. I am going to test both setups out and will let you know which works best. I think iSCSI is probably the best way to go. I'll post the results soon; that will give us a good idea of what to expect.
