Help with hardware RAID

I recently got a Dell R720XD LFF server to start my homelab. It has a PERC H710P RAID controller. I have two 300GB SAS disks installed in the rear of the server, which I want to configure as my boot / host drives. I pressed Ctrl+R on startup, created a RAID 1 array with the two drives, and initialized it.
 
When I attempted to install Proxmox, I was dropped into the GRUB rescue screen, and when I typed "ls" it did not list the RAID array, but instead listed (hd0), (hd1), and (hd0,msdos1), which I believe is the USB disk. When I attempted to install to (hd0) and (hd1), it told me that the filesystem was unknown. When I selected (hd0,msdos1), it told me that the filesystem was FAT.
 
My end goal here is to configure the boot drives in a hardware RAID 1 with the H710P RAID controller, and pass all of the other drives through Proxmox to my FreeNAS VM so ZFS can properly control them.
 
I found this tutorial, although the comments say it's against ESXi best practices and should be avoided.
 
Is it possible to configure the H710P in such a "hybrid" mode, and how can I configure the RAID 1 array so Proxmox can install?

22 minutes ago, Teurce said:
I recently got a Dell R720XD LFF server to start my homelab. It has a PERC H710P RAID controller. I have two 300GB SAS disks installed in the rear of the server, which I want to configure as my boot / host drives. I pressed Ctrl+R on startup, created a RAID 1 array with the two drives, and initialized it.

When I attempted to install Proxmox, I was dropped into the GRUB rescue screen, and when I typed "ls" it did not list the RAID array, but instead listed (hd0), (hd1), and (hd0,msdos1), which I believe is the USB disk. When I attempted to install to (hd0) and (hd1), it told me that the filesystem was unknown. When I selected (hd0,msdos1), it told me that the filesystem was FAT.

My end goal here is to configure the boot drives in a hardware RAID 1 with the H710P RAID controller, and pass all of the other drives through Proxmox to my FreeNAS VM so ZFS can properly control them.

I found this tutorial, although the comments say it's against ESXi best practices and should be avoided.

Is it possible to configure the H710P in such a "hybrid" mode, and how can I configure the RAID 1 array so Proxmox can install?

So, I guess I have a few questions... I assume you have plans beyond just running FreeNAS on here, since you're attempting to virtualize it, yes? If not, I wouldn't virtualize it just to virtualize it. If yes, okay, I am with you; that is my use case as well.

 

How are you going to pass the drives through the hypervisor? I am not personally familiar with Proxmox, but I know you can pass hardware through it like in ESXi, which is what I run. I am confused as to why you're saying it's against ESXi best practice when you're not going to run ESXi...? But regardless, if you're using the RAID controller for your OS drives, do you have a secondary HBA to connect all the "data drives for ZFS" that will be passed through the hypervisor to FreeNAS? You can't pass individual drives through; you need to pass an entire HBA through.

 

Not sure why it won't just install on the RAID 1 array; theoretically, if the drives are actually in a RAID 1, it should just "work". That said, you may consider running an SSD (or 2) for your boot drives vs spinny disks. They are pretty dang cheap, and can make all of your VMs much snappier...

 

 



ls is for directory listings; do you mean you ran lsblk?

HD1 might be what you're after. Try running parted -l or fdisk -l, which will show you more info about the device. 
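For example, something along these lines from a live Linux environment or the installer's debug shell (device names here are placeholders; yours will differ):

  # list block devices with size, type and model
  lsblk -o NAME,SIZE,TYPE,MODEL

  # print the partition table (or lack of one) for every disk the kernel sees
  fdisk -l
  parted -l

If the controller is presenting the RAID 1 correctly, you should see a single virtual disk (e.g. /dev/sda) whose model reads something like "PERC H710P", rather than the two physical 300GB drives.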

 

43 minutes ago, Teurce said:
I found this tutorial, although the comments say it's against ESXi best practices and should be avoided.
Is it possible to configure the H710P in such a "hybrid" mode, and how can I configure the RAID 1 array so Proxmox can install?

Any install to local disk is against VMware's best practice; their supported method is to install to an SD card or USB drive, with logging mapped to a VMFS datastore / shared storage.
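Redirecting the logs is a one-liner once you have a datastore; a rough sketch (the datastore name and path are placeholders):

  # point the ESXi syslog directory at persistent storage, then reload the logger
  esxcli system syslog config set --logdir=/vmfs/volumes/datastore1/logs
  esxcli system syslog reload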

 

But back to Proxmox: it is built on a full Linux kernel, so it should be run on disk. Maybe just check in the BIOS, under Disk Configuration, that it's set to RAID and not SATA? It's been a while since I've been hands-on with configuring Dells. Perhaps try booting it in Legacy/BIOS instead of UEFI? The H710 should definitely be supported OOTB by Proxmox (IIRC it's built on Debian; personally I installed Proxmox VE onto Debian 10 and it already had my LSI controllers working in the Debian OS).


 


10 minutes ago, LIGISTX said:

So, I guess I have a few questions... I assume you have plans beyond just running FreeNAS on here, since you're attempting to virtualize it, yes? If not, I wouldn't virtualize it just to virtualize it. If yes, okay, I am with you; that is my use case as well.

This is my entire server for everything, so it's going to be Proxmox (or ESXi; I used ESXi on my previous basic box, but it's my understanding that you run into licensing issues with VMs with more than 8 assigned cores, which could potentially cause issues as I have 16 CPU cores). The plan right now is to run PiHole, Plex, FreeNAS (which will act as the media storage for Plex), a Ubiquiti controller, and whatever else I decide to put on there in the future.

14 minutes ago, LIGISTX said:

How are you going to pass the drives through the hypervisor? I am not personally familiar with Proxmox, but I know you can pass hardware through it like in ESXi, which is what I run. I am confused as to why you're saying it's against ESXi best practice when you're not going to run ESXi...?

I would just pass all of the drives through to the FreeNAS VM. The reason I'm concerned about it being against ESXi best practices is that it could possibly cause issues for Proxmox as well; I don't want to take a risk that I don't have to, but if the configuration suggested in the article is solid, I have no issues running it on Proxmox.

15 minutes ago, LIGISTX said:

But regardless, if you're using the RAID controller for your OS drives, do you have a secondary HBA to connect all the "data drives for ZFS" that will be passed through the hypervisor to FreeNAS? You can't pass individual drives through; you need to pass an entire HBA through.

That's exactly the question. I want to be able to run a hardware RAID of my boot drives (as you cannot install Proxmox or ESXi to a software RAID array), while at the same time being able to pass drives through to FreeNAS. On ESXi, I was able to create a datastore on my main SSD for general use and then just pass the disks through specifically to my FreeNAS VM.

18 minutes ago, LIGISTX said:

Not sure why it won't just install on the RAID 1 array; theoretically, if the drives are actually in a RAID 1, it should just "work". That said, you may consider running an SSD (or 2) for your boot drives vs spinny disks. They are pretty dang cheap, and can make all of your VMs much snappier...

That's also part of the question: for some reason Proxmox does not detect the RAID 1 array, which leads me to believe that I must have misconfigured it somehow. As for the speed, they are 15k RPM 2.5 inch disks, so along with RAID 1 (double the read speed) I don't see that being an issue for non-IO-sensitive VMs such as PiHole and the Ubiquiti controller. All of the speed-sensitive applications would use a section of the FreeNAS datastore (mount it as an SMB share or something on Plex to store the files, for example; still working that part out), and I can always add a PCIe cache SSD or buy more RAM if that becomes a problem.


6 minutes ago, Jarsky said:

ls is for directory listings; do you mean you ran lsblk?

HD1 might be what you're after. Try running parted -l or fdisk -l, which will show you more info about the device.

I just ran "ls" (in the GRUB rescue shell). The controller should be presenting the RAID array as a single disk rather than the individual disks, so I'd expect only one drive to show up (the array itself).

7 minutes ago, Jarsky said:

Any install to local disk is against VMware's best practice; their supported method is to install to an SD card or USB drive, with logging mapped to a VMFS datastore / shared storage.

Well, I could always have a USB stick / SD card in my server, but I thought that perhaps a hardware RAID 1 would be more secure. I'm new to the server side of things, so whatever would be best; I don't have a huge amount of experience in this.

8 minutes ago, Jarsky said:

But back to Proxmox: it is built on a full Linux kernel, so it should be run on disk. Maybe just check in the BIOS, under Disk Configuration, that it's set to RAID and not SATA? It's been a while since I've been hands-on with configuring Dells. Perhaps try booting it in Legacy/BIOS instead of UEFI? The H710 should definitely be supported OOTB by Proxmox (IIRC it's built on Debian; personally I installed Proxmox VE onto Debian 10 and it already had my LSI controllers working in the Debian OS).

I'll try a few different configuration settings, but the main question is whether I can run *both* a hardware RAID with my two boot disks *and* simply pass the other disks through to my FreeNAS VM (IT/HBA mode).


2 minutes ago, Teurce said:

I just ran "ls" (in the GRUB rescue shell). The controller should be presenting the RAID array as a single disk rather than the individual disks, so I'd expect only one drive to show up (the array itself).

When I talk about a disk I'm talking about the virtual disk created by your RAID controller, not the individual physical disks.

But the 'ls' command is for file & directory listings; lsblk is for listing block devices (disks, including virtual disks).

Can you try running lsblk or parted -l or fdisk -l and paste the output?

 

2 minutes ago, Teurce said:

Well, I could always have a USB stick / SD card in my server, but I thought that perhaps a hardware RAID 1 would be more secure. I'm new to the server side of things, so whatever would be best; I don't have a huge amount of experience in this.

As I said, that is best practice for VMware (ESXi); it is not best practice for Proxmox, as Proxmox runs on a full Linux kernel stack, which generates too much I/O and will wear out NAND (USB sticks / SD cards) very quickly (and on an SD card it would be slow).

2 minutes ago, Teurce said:

I'll try a few different configuration settings, but the main question is whether I can run *both* a hardware RAID with my two boot disks *and* simply pass the other disks through to my FreeNAS VM (IT/HBA mode).

No, you cannot run the controller in both modes; it's one or the other.



13 minutes ago, Jarsky said:

When I talk about a disk I'm talking about the virtual disk created by your RAID controller, not the individual physical disks.

But the 'ls' command is for file & directory listings; lsblk is for listing block devices (disks, including virtual disks).

Can you try running lsblk or parted -l or fdisk -l and paste the output?

I will try this and see what results I get.

13 minutes ago, Jarsky said:

No, you cannot run the controller in both modes; it's one or the other.

I linked this article in my first post; perhaps a direct link would be better.

https://www.mrvsan.com/configuring-the-dell-perc-h730-controller-for-passthrough-and-raid/

While this is for the H730 and not the H710p, would something similar apply?

If not, how should I configure the disks? My current plan is a RAID 1 of dual 15k 300GB SAS disks (which I already bought, and which are sitting in the server now) for Proxmox / the hypervisor / small VM stuff, and then highly expandable front-bay storage (12 LFF bays) that I can pass through entirely to FreeNAS for ZFS. I would really want to be able to add more storage without needing to dump the entire contents of the server and reformat the array, though, so I'm still working out what the best way to do this is. Any suggestions, please let me know.


35 minutes ago, Teurce said:

I linked this article in my first post; perhaps a direct link would be better.

https://www.mrvsan.com/configuring-the-dell-perc-h730-controller-for-passthrough-and-raid/

While this is for the H730 and not the H710p, would something similar apply?

That's an interesting edge case; haven't seen that before. I don't see why it wouldn't work, though, since it's a function of the hardware and not the OS (assuming there's no major difference between the two controllers that provides that functionality).

 

Quote

If not, how should I configure the disks? My current plan is a RAID 1 of dual 15k 300GB SAS disks (which I already bought, and which are sitting in the server now) for Proxmox / the hypervisor / small VM stuff, and then highly expandable front-bay storage (12 LFF bays) that I can pass through entirely to FreeNAS for ZFS. I would really want to be able to add more storage without needing to dump the entire contents of the server and reformat the array, though, so I'm still working out what the best way to do this is. Any suggestions, please let me know.

If it doesn't work, then as I said before, ZFS on Linux would work and you could get rid of FreeNAS altogether.

Proxmox natively supports ZFS through the CLI, but you can also use Cockpit with the ZFS Manager plugin for a server management GUI.
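From the CLI it's only a couple of commands to build a pool and register it as Proxmox storage; a rough sketch, where the pool name "tank", the storage ID and the disk paths are all placeholders:

  # create a RAIDZ2 pool from four disks (use stable /dev/disk/by-id/ paths in practice)
  zpool create -o ashift=12 tank raidz2 /dev/disk/by-id/scsi-DISK1 /dev/disk/by-id/scsi-DISK2 /dev/disk/by-id/scsi-DISK3 /dev/disk/by-id/scsi-DISK4
  zpool status tank

  # make the pool available to Proxmox for VM disks
  pvesm add zfspool tank-vmstore --pool tank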

 

e.g. my test box:

 

[screenshot: Cockpit ZFS Manager running on the test box]


 


1 hour ago, Teurce said:

This is my entire server for everything, so it's going to be Proxmox (or ESXi; I used ESXi on my previous basic box, but it's my understanding that you run into licensing issues with VMs with more than 8 assigned cores, which could potentially cause issues as I have 16 CPU cores). The plan right now is to run PiHole, Plex, FreeNAS (which will act as the media storage for Plex), a Ubiquiti controller, and whatever else I decide to put on there in the future.

That is my setup as well. I run ESXi with a few Ubuntu VMs under it: one runs Docker containers (one of which is Pi-hole), Plex runs in another, FreeNAS sits next to those, etc. There are ways around the ESXi limitations... but those ways are not allowed on this forum, so I will leave it at that.

 

1 hour ago, Teurce said:

I would just pass all of the drives through to the FreeNAS VM. The reason I'm concerned about it being against ESXi best practices is that it could possibly cause issues for Proxmox as well; I don't want to take a risk that I don't have to, but if the configuration suggested in the article is solid, I have no issues running it on Proxmox.

You can't just pass drives through in ESXi; you have to pass your entire HBA (RAID card) or SATA controller through. Unless I am mistaken, this is the only option with Intel VT-d. I certainly do not see a way to pass individual drives through on my ESXi box. I pass through the entire HBA; there is no option for individual drives...

[screenshot: the ESXi PCI passthrough device list, showing only the HBA]

1 hour ago, Teurce said:

That's also part of the question: for some reason Proxmox does not detect the RAID 1 array, which leads me to believe that I must have misconfigured it somehow. As for the speed, they are 15k RPM 2.5 inch disks, so along with RAID 1 (double the read speed) I don't see that being an issue for non-IO-sensitive VMs such as PiHole and the Ubiquiti controller. All of the speed-sensitive applications would use a section of the FreeNAS datastore (mount it as an SMB share or something on Plex to store the files, for example; still working that part out), and I can always add a PCIe cache SSD or buy more RAM if that becomes a problem.

So, sorta... I WOULD NOT use the FreeNAS datastore, set up as an NFS or SMB share, for the VMs or for the VMs' database data like in Plex. There is a lot of overhead in network protocols, and that will just... suck. Obviously the data Plex is serving will be stored and accessed this way, but I don't think you want to plan on having your Plex DB accessed this way. You can run iSCSI, but that has its own headaches and overhead needs. You want to run your VMs and have them hold their high-performance data on their host OS disk, at least IMO; thus the idea of an SSD for your boot drives. 15k RPM is not bad, but with multiple VMs doing multiple "OS-ey" things and running a Plex DB, etc., 15k's will be fine, but for ~50 bucks a 256 GB SSD may also be a good way to just solve the problem before it exists, or ~80 bucks for a 500 GB. I run a single 256 GB SSD for my ESXi boot + VMs, and it gets the job done. And I use Veeam to back up the VMs to an NFS point in FreeNAS, so if the boot drive does die, I can always recover my VMs (a bit of a PITA, since I will need FreeNAS up and running just to get the VMs' data off the ZFS array... but I keep a backup of the FreeNAS .xml stored somewhere else for this reason). So, I do not have RAID 1 boot, which I admit is not the best idea, but, eh, I am not super worried about it... SSDs are pretty resilient these days.

 

Anyways, my way is certainly not the best, but just sharing some of my knowledge and reasoning. If you can get it working in RAID 1, maybe consider popping SSDs in for boot... Also, you likely will never need an SSD cache drive for your use case. L2ARC or an SSD SLOG will likely not help you at all. You can EASILY saturate gigabit with just an HDD array. The issue comes in when you're trying to run a database off of an SMB-connected mount point. Can the array run the DB? 100% yes. Will it be fun doing it over SMB? No, it won't. Is the Plex DB that intensive? ...no, but, still. And RAM also isn't "as" important as the FreeNAS forums make it out to be, not for your use case. I run my FreeNAS VM on 16 GB; I have 10x4 TB in RAIDZ2 plus a single 10 TB drive just for "easily recoverable" data. Multiple Plex streams, photo editing over SMB, and really not much else is hitting the ZFS array, but 16 GB has never presented an issue in the slightest; my entire homelab is running on 28 GB of RAM (I do wish I had at least 40 GB, but once DDR5 is out I will build a new Zen-based machine as an upgrade). Hope that helps. Like I said, my way certainly isn't perfect, or best, or even good, lol, but it's just more data for you to make choices with.
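For the bulk media itself, an SMB mount inside the Plex VM is simple enough; a rough sketch of the fstab line (the FreeNAS IP, share name, mount point and credentials file are all placeholders), with the Plex database staying on the VM's local disk:

  # /etc/fstab inside the Plex VM - media library only, not the Plex DB
  //192.168.1.50/media  /mnt/media  cifs  credentials=/root/.smbcreds,uid=plex,gid=plex,iocharset=utf8,_netdev  0  0

  # one-time setup
  sudo apt install cifs-utils
  sudo mkdir -p /mnt/media
  sudo mount -a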



11 hours ago, LIGISTX said:

You can't just pass drives through in ESXi; you have to pass your entire HBA (RAID card) or SATA controller through. Unless I am mistaken, this is the only option with Intel VT-d. I certainly do not see a way to pass individual drives through on my ESXi box. I pass through the entire HBA; there is no option for individual drives...

Okay, so given that I went out and bought dual 300GB SAS drives ($35 shipped), how can I use them? If I can only pass the HBA through, it seems to me like it's either passing everything through, or nothing. The main reason I want to separate the main disk array from the VM boot stuff is that it would be much more easily organized, as I would be able to do all of my configuration just in Plex or whatever to add more storage when needed.

11 hours ago, LIGISTX said:

So, sorta... I WOULD NOT use the FreeNAS datastore, set up as an NFS or SMB share, for the VMs or for the VMs' database data like in Plex. There is a lot of overhead in network protocols, and that will just... suck. Obviously the data Plex is serving will be stored and accessed this way, but I don't think you want to plan on having your Plex DB accessed this way. You can run iSCSI, but that has its own headaches and overhead needs. You want to run your VMs and have them hold their high-performance data on their host OS disk, at least IMO; thus the idea of an SSD for your boot drives. 15k RPM is not bad, but with multiple VMs doing multiple "OS-ey" things and running a Plex DB, etc., 15k's will be fine, but for ~50 bucks a 256 GB SSD may also be a good way to just solve the problem before it exists, or ~80 bucks for a 500 GB. I run a single 256 GB SSD for my ESXi boot + VMs, and it gets the job done. And I use Veeam to back up the VMs to an NFS point in FreeNAS, so if the boot drive does die, I can always recover my VMs (a bit of a PITA, since I will need FreeNAS up and running just to get the VMs' data off the ZFS array... but I keep a backup of the FreeNAS .xml stored somewhere else for this reason). So, I do not have RAID 1 boot, which I admit is not the best idea, but, eh, I am not super worried about it... SSDs are pretty resilient these days.

Got it. I honestly don't have performance-sensitive things that I need to get done; this is a homelab, not exactly a prod environment. The reason I went for enterprise drives and RAID 1 is that I would be able to have redundancy, because if I get myself into a host-down situation, it's going to be a cluster**** to get it all working again. The only performance-sensitive thing that I would have to worry about here is simply booting up the VMs, which I won't be doing that much, and I don't mind if it takes 5 minutes versus 2 minutes, as everything should be stored on the main array, which as you said can easily saturate gigabit no problem.

12 hours ago, LIGISTX said:

The issue comes in when you're trying to run a database off of an SMB-connected mount point. Can the array run the DB? 100% yes. Will it be fun doing it over SMB? No, it won't. Is the Plex DB that intensive? ...no, but, still.

With that being said, what's the best way to organize Plex then? I would want a general-purpose NAS for backups of stuff, but also a Plex share. It would be great if I could simply say to Plex "Hey, see that disk array right there? Just make a folder and put your stuff in there". It sounds like it would have to be a lot more complicated than that, though, in addition to the fact that I can't pass through individual drives. Because you can only pass through HBAs, would I need to buy another HBA for each VM that I want to access the datastore, in order to not run into the SMB overhead issues? I would also have to configure multiple arrays, and I doubt the backplane supports that... just a bad situation all around.

 


3 minutes ago, Teurce said:

Okay, so given that I went out and bought dual 300GB SAS drives ($35 shipped), how can I use them? If I can only pass the HBA through, it seems to me like it's either passing everything through, or nothing. The main reason I want to separate the main disk array from the VM boot stuff is that it would be much more easily organized, as I would be able to do all of my configuration just in Plex or whatever to add more storage when needed.

Got it. I honestly don't have performance-sensitive things that I need to get done; this is a homelab, not exactly a prod environment. The reason I went for enterprise drives and RAID 1 is that I would be able to have redundancy, because if I get myself into a host-down situation, it's going to be a cluster**** to get it all working again. The only performance-sensitive thing that I would have to worry about here is simply booting up the VMs, which I won't be doing that much, and I don't mind if it takes 5 minutes versus 2 minutes, as everything should be stored on the main array, which as you said can easily saturate gigabit no problem.

With that being said, what's the best way to organize Plex then? I would want a general-purpose NAS for backups of stuff, but also a Plex share. It would be great if I could simply say to Plex "Hey, see that disk array right there? Just make a folder and put your stuff in there". It sounds like it would have to be a lot more complicated than that, though, in addition to the fact that I can't pass through individual drives. Because you can only pass through HBAs, would I need to buy another HBA for each VM that I want to access the datastore, in order to not run into the SMB overhead issues? I would also have to configure multiple arrays, and I doubt the backplane supports that... just a bad situation all around.

 

You wouldn't want to pass multiple HBAs to each VM; you want all of the "data" disks on an HBA passed to whatever OS is running your ZFS array (be it FreeNAS, ZFS on Linux, etc.). To avoid networking overhead you can try iSCSI, but for most of your needs you likely can just use SMB. It really depends on what the data is and how much needs to move around. Running programs over SMB is not the best though; that's why I said to install Plex on the boot drive for your host and let it live there. I suppose you could just create a datastore on the ZFS array and install your VMs to that, so you don't have any networking overhead going from the VM to the Plex database, since in this situation the ZFS datastore would host the OS itself and the VM would hold its own database. Hmm, this is easier to draw than to type.
 

Basically, you could create an NFS mount point in ZFS and point ESXi to that as a datastore. Then all your VMs would be installed to that NFS mount point. No network overhead, but your VMs would be running off a ZFS array. I suppose this would work; just be careful with your snapshot policy on that data. If you have snapshots turned on in a heavy way on the portion of the array that holds the OSes, that will likely turn into a way to exponentially use drive space, or possibly cause actual performance issues.
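Roughly, that hookup would look like this on the ESXi side (the FreeNAS address, export path and datastore name are placeholders):

  # mount an NFS export from the FreeNAS VM as an ESXi datastore
  esxcli storage nfs add --host=192.168.1.50 --share=/mnt/tank/vmstore --volume-name=zfs-vmstore
  esxcli storage nfs list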

 

I don't know, many ways to skin this cat… if you want ESXi you could just get a single SSD to set up as a datastore, no redundancy (it would only host the VMs), and then just use Veeam, which is free, to back them up nightly to an SMB mount point within the array, or even a USB hard drive, or whatever you want. That way that SSD is basically expendable. If it dies, you still have the ESXi host up, and you have Veeam backups to recover from easily. You just need to think out where things live and how to access them. This is why homelabs are fun: everyone will have a different setup, and every setup is wrong 😂



32 minutes ago, LIGISTX said:

This is why homelabs are fun: everyone will have a different setup, and every setup is wrong 😂

Totally agree.

 

It seems like the best thing to do in this situation is to ditch Proxmox, go for the internal RAID 1 SD card thingy to help avoid a host-down situation, and then pass the entire H710P with all of the storage through to a FreeNAS VM. Create a RAID 1 with my two rear 300GB SAS drives for my general VM use, and then create some sort of setup where I can easily add storage to my main array. The only real limitation here is the 8 vCPU limit, but assuming NUMA nodes become a problem, I wouldn't want to pass through more than an entire CPU at once anyway. Assuming the limit is threads, and not cores, I can just disable hyperthreading in the BIOS, as I have plenty of cores to go around (E5-2650).

 

Do I generally have the gist of it?


39 minutes ago, Teurce said:

Totally agree.

 

It seems like the best thing to do in this situation is to ditch Proxmox, go for the internal RAID 1 SD card thingy to help avoid a host-down situation, and then pass the entire H710P with all of the storage through to a FreeNAS VM. Create a RAID 1 with my two rear 300GB SAS drives for my general VM use, and then create some sort of setup where I can easily add storage to my main array. The only real limitation here is the 8 vCPU limit, but assuming NUMA nodes become a problem, I wouldn't want to pass through more than an entire CPU at once anyway. Assuming the limit is threads, and not cores, I can just disable hyperthreading in the BIOS, as I have plenty of cores to go around (E5-2650).

 

Do I generally have the gist of it?

Yea, that is the general gist. Also, remember, this is a homelab...... don't go in expecting to solve your problem right from the start. Things will evolve, and as you learn more and better understand your needs and abilities, things WILL change; that is part of the fun.

 

I started out with a bunch of drives in a wood case I made, running, shoot, open media server? That sounds wrong; I forget what the OS was called. But it was basically just a JBOD box. Then I got my new hardware and went to strictly FreeNAS with jails and really crappy VMs, since back then FreeNAS had horrible VM hypervisor stuff (a little better now, still not good). Then I evolved to ESXi and everything that entails, and have had many things under ESXi along the way. It's a journey; as you learn, you evolve.

 

Back to your point though, the ESXi limitation I believe is per VM, and you can assign how many threads each VM gets. I run my entire homelab on an i3 with 4 threads...... I have a feeling you will be fine for quite some time with only being able to assign 8 threads to a VM; that is a lot of power. I give my Plex VM 2 threads of my i3, and I can do 3-4 1080p to 720p transcodes at once, for reference. 99.9% of the time I direct stream anyways, which takes no CPU power, since there is no "work" being done by Plex besides just serving the data.

 

Also, like I said, there are ways around that limitation..... But also, you can still use Proxmox. I just personally don't know much about it, but LOTS of folks do. It may be a better option, or maybe even consider UnRAID. If I started over today, there is a solid chance I would go UnRAID. When I started this project, UnRAID was not nearly as good as it is now. A buddy of mine just built out his homelab using it, and he is enjoying it so far. So, really, there is no right or wrong; I would say just start doing things! The important thing to lock down from the start is how you want to manage your raw data. It's hard to switch from ZFS to UnRAID later on; you basically need to back up all your data and start over.

 

So I would say do a lot of thinking on UnRAID vs ZFS, and once you pick that definitively, you can change everything else after that. Like I said, I used to run FreeNAS on bare metal. Then I realized I could just go ESXi and install FreeNAS under it, pass the HBA through, and effectively only have ~4 hours of downtime while I figured all of that out and set it up. FreeNAS "doesn't know the difference", so I was able to just go ahead and virtualize it after it had been running bare metal for over a year, then start playing with VMs, etc.

 

Then going virtual necessitated a SAS expander.... Originally, on bare metal, all the SATA ports on the mobo were exposed to FreeNAS, so I used the HBA I had (I didn't have enough SAS cables at the time to get all my drives plugged in via SAS, but it didn't matter because it was bare metal) plus the mobo's SATA ports, and FreeNAS was happy. Once I went virtual and had to pass the HBA through, I got a SAS expander and more cables, booted ESXi, passed the HBA through, and what do you know, my pool showed up in FreeNAS just fine.

 

It's just fun and learning, and don't get stuck in the mentality of "host down" being a horrible problem. It's not a prod system; it's literally a "home-lab" "lab" "test". lol. Yea, it's frustrating when things go sideways, but ultimately that's part of the learning. Back up all your important data to Backblaze (I use B2 through FreeNAS, so I have about 4 TB of critical data backed up every few nights to Backblaze B2), so even if things go fully sideways, I have what is really important to me backed up offsite with a reputable company whose job it is to secure that data. Also make sure you encrypt your cloud backups; B2 has a very simple way to do this, and it's encrypted client-side, so nothing leaves your box that isn't encrypted.

 

So, yea. Some thoughts, some ideas, and ultimately lots of learning to do :). I am by all means a total noob at this, and others may have much better input, but sometimes it can be hard to listen to someone who has years and years of very in-depth experience and apply it to yourself. My advice: just start playing with it all 🙂



[screenshots: Plex dashboard and host CPU usage during two simultaneous transcodes]

 

For reference, Blindspot is a full Blu-ray 1080p rip, so it's being reported by Plex as:

  • Bitrate 23254 kbps

 

That is a HEAVY file to transcode, and that is like 80% of my CPU... the other 20% is SEAL Team ("only" ~8000 kbps), networking overhead, OS overhead, etc.

 

So, I wouldn't say my i3 can chunk through huge files and transcode them, but it typically can transcode much more than I ever need. This was just a test for fun; like I said, I don't typically transcode anyways, but if my i3 can do it, I think you will be fine with only assigning 8 threads to a single VM. Remember, my i3 is splitting its 4 threads across ESXi itself, FreeNAS, 3 Ubuntu VMs, a Windows LTSC VM, and a UPS management VM. Both transcodes started to struggle, but again, transcoding a full Blu-ray rip is pretty extreme. If that was a normal use case for someone, I would suggest storing the file in both a 1080p and a 720p version.



5 minutes ago, LIGISTX said:

[the transcode screenshots quoted from the post above]

Impressive! I know that sometimes Plex can use QuickSync, but because I don't pay for power and also 16 cores of Xeon go brrrrrr, I think I'll be just fine. What i3 is it?


7 minutes ago, Teurce said:

Impressive! I know that sometimes Plex can use QuickSync, but because I don't pay for power and also 16 cores of Xeon go brrrrrr, I think I'll be just fine. What i3 is it?

Yea, you could use QuickSync, or pass through a GPU to Plex (I think the Ubuntu version of Plex supports that, which is what I am using). And yes, Xeons go brrrr, especially compared to my i3 lol.

 

It's an i3-6100. Info in my sig.



8 hours ago, Teurce said:

It seems like the best thing to do in this situation is to ditch Proxmox, go for the internal RAID 1 SD card thingy to help avoid a host-down situation, and then pass the entire H710P with all of the storage through to a FreeNAS VM.

Maybe give the UnRAID free trial a try; it runs on USB, is easy to recover in case of major issues, and you can back up the USB easily. It has a very extensive KVM implementation and native Docker support. It's very easy to do IOMMU passthrough in as well, if you really want to run FreeNAS for the storage. Additionally, UnRAID also has *unofficial* ZFS support and can easily manage ZFS arrays, including snapshots, through the CLI. https://forum.level1techs.com/t/zfs-on-unraid-lets-do-it-bonus-shadowcopy-setup-guide-project/148764
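Snapshot management from the CLI is just the standard ZFS tooling; a rough sketch (pool, dataset and snapshot names are placeholders):

  # recursive snapshot of the whole pool, then list and roll back if needed
  zfs snapshot -r tank@nightly-2021-01-01
  zfs list -t snapshot
  zfs rollback tank/media@nightly-2021-01-01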

 

 

7 hours ago, Teurce said:

Impressive! I know that sometimes Plex can use QuickSync, but because I don't pay for power and also 16 cores of Xeon go brrrrrr, I think I'll be just fine. What i3 is it?

You can do that in Plex, but it requires a Plex Pass to unlock. I'm pretty sure, though, that E5 Xeons don't have QuickSync, so you wouldn't be able to use that function if you're using the Xeons. Additionally, CPUs prior to 6th Gen have pretty poor codec support, so you might have issues with some of your library, especially H.265/HEVC (x265) and HDR in particular. As far as hardware transcoding goes, a 10th Gen or newer i3 is pretty much ideal for modern encoding.


 

