
Using iSCSI from NAS for storage on desktop and a 10G virtual switch

So, this question is kind of long and technically two separate questions, but they're related so I'll keep them together.

 

At the minute I've got 4 systems on my wired network:

My desktop, which has an i7-4790K in it

An IBM x3650 M1 (2x Xeon E5450, 36GB RAM), which runs virtual desktops and virtual apps

An IBM x3650 M2 (2x Xeon X5650, 64GB RAM), which is my main lab server as well as running VMware vCenter

And my main server, which runs a router as well as Windows Server 2016 that serves as my NAS and Plex server. This machine is really underpowered, with only a dual-core AMD A4-6300 and 8GB of RAM. I really cheaped out on it, but it runs fine.

 

The three servers are all running VMware ESXi 6.5 and are connected to the vCenter Server on the x3650 M2.

The main server only has a dual-port gigabit card in it; one port is for WAN and the other goes to a 24-port gigabit switch, which has everything else connected to it.

So everything is running on a standard 1 gigabit connection to the rest of the network.

There is one exception: I've got single-port 10 gigabit SFP+ cards in both my desktop and my x3650 M2, with a DAC cable running between them.

 

What I want to do is buy another single-port card for my x3650 M1 and two dual-port cards to put into my main server. I can't do this with the current server as it's only got one PCIe slot (it's ITX), which has the dual gigabit card in it. I intend to upgrade my desktop to an i7-8700K in the coming months, so the 4790K system will become the main server and the POS A4-6300 will be retired. I'll be able to do dual 10G cards then.

 

For now, what I will do is get the dual-port 10G cards for the x3650 M2 instead.

I intend to run fibre from the two dual-port cards to each of the other systems with a 10G card, so there'll be one central system with 4 SFP+ ports connecting to everything else.

With this, what I hope to do is set up a VM on the server that will act as a 10G switch.

For the switch VM, I will use PCIe passthrough to give it direct access to the NICs, and I'll give it a 10G port on the vSwitch as well so the other VMs can communicate with the rest of the network.

I want to know if it is possible to set up a VM to act purely as a switch between essentially 5 different network ports. If this is possible with pfSense it'd make life easier, because I can just do it with the one VM instead of having a separate router and switch.

 

The other part of my question is this: when I build my 8700K system, I want to have only 2 NVMe SSDs in it, and no SATA drives at all.

My plan is to set up a 3TB iSCSI drive on my NAS and have my desktop connect to that for all my games and user files.

I also want to move the drives from the IBM servers to the NAS to run all the VMs from network storage (I might have to buy a SAN disk shelf for this and use a SAS expander, but that's not a problem).

I know it's technically possible, but are iSCSI and a 10G network actually stable enough to support this? And how much CPU horsepower will it use (if any)?

 

I'm sorry if I've confused you people but I like overly complicated things :)

Thanks for any help.


48 minutes ago, Ferny said:

I want to know if it is possible to set up a VM to act purely as a switch between essentially 5 different network ports. If this is possible with pfSense it'd make life easier, because I can just do it with the one VM instead of having a separate router and switch.

Short answer: no. It's not that it's impossible, but pfSense is not going to be able to switch multiple ports' worth of 10Gb without some very serious CPU power (it's a firewall/router, not a switch). Routing or firewalling iSCSI traffic is also not supported. There are virtual appliances and/or operating systems you can run for switch functionality, but not all of them are free and I don't even think it's necessary; see below.

 

Here is a proposed setup; let me know what you think of it.

 

IBM x3650 M1

Role: Storage server, NFS or iSCSI for ESXi (see the sketch after this list)

Reasoning: Older generation CPU with limited virtualization acceleration features.

Additional hardware: Single or Dual port 10Gb NIC, 2.5" HDDs to create 3TB storage

Potential expansion: External JBOD disk tray for 3.5" disks, SAS HBA to connect tray
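
For the ESXi side of that storage role, something like this rough PowerCLI sketch is all it takes to hook the host up to whatever the storage box ends up exporting over the direct 10Gb link. The vCenter/host names, IPs and export path are placeholders I've made up, not anything from your setup:

```powershell
# VMware PowerCLI sketch: attach the storage server's exports to the ESXi host
# over the direct 10Gb link. Host names, IPs and paths are made-up placeholders.
Connect-VIServer -Server vcenter.lab.local
$vmhost = Get-VMHost -Name "esxi-m2.lab.local"

# NFS route (natural if the storage OS ends up being FreeNAS/Linux):
New-Datastore -VMHost $vmhost -Nfs -Name "nas-10g" -NfsHost "10.10.10.1" -Path "/mnt/tank/esxi"

# iSCSI route instead: enable the software initiator, point it at the target, rescan.
# (Assumes a single iSCSI adapter on the host.)
Get-VMHostStorage -VMHost $vmhost | Set-VMHostStorage -SoftwareIScsiEnabled $true
$hba = Get-VMHostHba -VMHost $vmhost -Type IScsi
New-IScsiHbaTarget -IScsiHba $hba -Address "10.10.10.1"
Get-VMHostStorage -VMHost $vmhost -RescanAllHba | Out-Null
```

After the rescan you'd format the new LUN as a VMFS datastore from the vSphere client as normal.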

 

IBM x3650 M2

Role: Virtual Host, Virtual File Server (VM)

Reasoning: This generation of CPU had decent virtualization acceleration features

Additional hardware: Single or Dual port 10Gb NIC

Reused hardware: Single port 10Gb NIC

 

Desktop, i7-4790K

Role: Client (whatever you want)

Reasoning: N/A

Reused hardware: Single port 10Gb NIC

 

So the basic outline of the proposed solution is to use the older server as your main storage; you can use whatever OS you like for this, be it FreeNAS, Windows Server or Linux. On the storage server, configure the storage and present it to the main virtual host via the directly connected 10Gb link, no switching required. On the virtual host, configure this storage as datastore(s) that you can then create VMs on, one of which will be a storage VM that your desktop talks to.

For that storage VM I would recommend the Windows Server 2016 you already have: you can either install the iSCSI Target Server role, or do what I actually do, which is create a VHDX file, host it on a share, and mount that virtual disk on the desktop. Both methods work, but the second is slightly less complex in that it doesn't need iSCSI.
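
Very roughly, both routes look something like this in PowerShell. The paths, sizes, share name, IQN and the 10.10.10.x addresses below are just placeholders, adjust them to whatever you actually use:

```powershell
# --- Option A: iSCSI Target Server role (on the Windows Server 2016 storage VM) ---
Install-WindowsFeature -Name FS-iSCSITarget-Server

New-IscsiVirtualDisk -Path "D:\iSCSI\desktop-games.vhdx" -SizeBytes 3TB
New-IscsiServerTarget -TargetName "desktop" `
    -InitiatorIds "IQN:iqn.1991-05.com.microsoft:my-desktop"   # your desktop's IQN
Add-IscsiVirtualDiskTargetMapping -TargetName "desktop" -Path "D:\iSCSI\desktop-games.vhdx"

# ...and on the desktop (initiator side):
Set-Service MSiSCSI -StartupType Automatic; Start-Service MSiSCSI
New-IscsiTargetPortal -TargetPortalAddress "10.10.10.2"        # storage VM's 10Gb IP
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true

# --- Option B: plain VHDX hosted on an SMB share (the simpler route) ---
# On the storage VM (New-VHD ships with the Hyper-V PowerShell tools):
New-VHD -Path "D:\Shares\Desktop\games.vhdx" -SizeBytes 3TB -Dynamic

# On the desktop, mount it straight off the share:
Mount-DiskImage -ImagePath "\\storage-vm\desktop\games.vhdx"
```

Either way, on first use you initialise and format the new disk on the desktop like any local drive (Disk Management, or Initialize-Disk / New-Partition / Format-Volume).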

 

In future, or instead of that and do it immediately, you could reuse your current desktop as the file server instead of the IBM x3650 M1.

 

I run a similar setup myself, but with a single IBM x3500 M4. I pass a SAS HBA through to a Windows Server 2016 VM, use Storage Spaces in that VM to create the storage pools, and then present this storage back to the ESXi host. The storage VM in question lives on a hardware RAID of SAS disks, because you need to host it somewhere and I had that anyway. I have another server set up in exactly the same way, but that one uses a single SSD as the ESXi datastore to host the storage VM.
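
The Storage Spaces part of that is only a handful of cmdlets inside the storage VM. A rough sketch, assuming the passed-through HBA's disks show up as poolable, and with made-up pool/volume names:

```powershell
# Inside the Windows Server 2016 storage VM: pool the HBA's disks with Storage Spaces.
$disks = Get-PhysicalDisk -CanPool $true

New-StoragePool -FriendlyName "LabPool" `
    -StorageSubSystemFriendlyName "Windows Storage*" `
    -PhysicalDisks $disks

# Carve a mirrored virtual disk out of the pool, then initialise and format it.
New-VirtualDisk -StoragePoolFriendlyName "LabPool" -FriendlyName "VMStore" `
    -ResiliencySettingName Mirror -UseMaximumSize

Get-VirtualDisk -FriendlyName "VMStore" | Get-Disk |
    Initialize-Disk -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem ReFS -NewFileSystemLabel "VMStore"
```

The resulting volume is then what gets presented back to the ESXi host, for example via the iSCSI target role as in the earlier sketch.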

 

48 minutes ago, Ferny said:

For the switch VM, I will use PCIe passthrough to give it direct access to the NICs, and I'll give it a 10G port on the vSwitch as well so the other VMs can communicate with the rest of the network.

 

Avoid PCIe passthrough if you can; it makes VM management harder and disables features/functions.


Thanks for the reply.

That setup you proposed does sound much simpler than what I thought of, so I will probably go with it or something similar.

Most of the drives I have in my current file server are 3TB or 4TB drives, which I believe would not be usable in the x3650 M1, as it does not have UEFI and so only supports drives of 2TB or less. I may be wrong about that, so please correct me if I am. Maybe it's just a limitation of the on-board RAID card.

But that server would be convenient for an external SAS disk shelf, as there's already an SFF-8088 port on the back of it.

Also, that server is super loud; even though it's not in the same room as my desktop, I can still hear it, so I don't like running it 24/7. It's also much more effective than the M2 or the 4790K at heating the room.

 

 

So if I can't get 3TB+ disks to work, I'd have to use the 4790K build, which will have to wait a few months until I get the 8700K system.


2 hours ago, Ferny said:

Most of the drives I have in my current file server are 3TB or 4TB drives, which I believe would not be usable in the x3650 M1, as it does not have UEFI and so only supports drives of 2TB or less. I may be wrong about that, so please correct me if I am. Maybe it's just a limitation of the on-board RAID card.

It'll be a limitation of the on-board RAID controller; you can just get a newer add-in card from eBay, e.g. an M1015. Reusing your desktop as the storage server may still be better, but I'll leave that up to you as to what you want to do.


Just wanted to chime in about pfSense - even with top-tier hardware, the fastest recorded speeds I could find are ~4Gbit/s. Software-based routing just has outright limitations.

 

I second what leadeater suggested - the only thing you really need 10Gbit for is storage. Dedicate a machine to storage and just have 10Gbit connections to it. Actual virtual machines don't need that kind of throughput.

 

I have 10Gbit between my ESXi servers and a storage server, but my personal desktop is only 1Gbit. Movies I stream off the NAS itself, so there's no real need for that large a connection. The only advantage I could foresee is having a disk array that can do better than 200MByte/s and running games off it remotely...

 

Speaking of which, have you made sure your disk arrays (future or current) are even capable of speeds worth the 10Gbit investment? Your NVMe disks will make good use of it, but if the spinners in your NAS can't break 200MByte/s I don't foresee it being worth the headache.
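
A quick and dirty way to check that, without installing anything, is to time a big sequential read in PowerShell. Use a file well bigger than RAM so the OS cache doesn't flatter the result; the path below is a placeholder. A proper tool like diskspd or CrystalDiskMark gives better numbers, but this gets you a ballpark:

```powershell
# Rough sequential-read check against a large existing file on the array.
# D:\bigfile.bin is a placeholder path; pick something several times bigger than RAM.
$path  = "D:\bigfile.bin"
$sw    = [System.Diagnostics.Stopwatch]::StartNew()
$fs    = [System.IO.File]::OpenRead($path)
$buf   = New-Object byte[] (4MB)
$total = 0L
while (($n = $fs.Read($buf, 0, $buf.Length)) -gt 0) { $total += $n }
$fs.Close()
$sw.Stop()
"{0:N0} MByte/s" -f (($total / 1MB) / $sw.Elapsed.TotalSeconds)
```

If the array tops out around 200MByte/s, that's less than 2Gbit/s on the wire, so a 10Gbit link would mostly sit idle.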

