
Can you pass through hard drives with Proxmox?


15 minutes ago, PurpleCodes said:

Yes! This is exactly the setup I use on my homelab. 

This quick reference tells you the basics of doing it. https://pve.proxmox.com/wiki/Physical_disk_to_kvm

You can also edit the QEMU config file for the VM (once created) and add the drives as SATA devices by serial number. 
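Roughly, it looks like this from the Proxmox shell (VM ID 100 and the disk ID below are just placeholders; list /dev/disk/by-id on your own host to get the real names, which include the serial number):

# Show the stable by-id names for your disks (the serial is part of the name)
ls -l /dev/disk/by-id/

# Attach a whole physical disk to VM 100 as a SATA device
qm set 100 -sata1 /dev/disk/by-id/ata-EXAMPLE_MODEL_SERIAL1234

# Or add the equivalent line to /etc/pve/qemu-server/100.conf by hand:
# sata1: /dev/disk/by-id/ata-EXAMPLE_MODEL_SERIAL1234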

Fine you want the PSU tier list? Have the PSU tier list: https://linustechtips.com/main/topic/1116640-psu-tier-list-40-rev-103/

 

Stille (Desktop)

Ryzen 9 3900XT @ 4.5GHz - Cryorig H7 Ultimate - 16GB Vengeance LPX 3000MHz - MSI RTX 3080 Ti Ventus 3X OC - SanDisk Plus 480GB - Crucial MX500 500GB - Intel 660p 1TB SSD - (2x) WD Red 2TB - EVGA G3 650W - Corsair 760T

Evoo Gaming 15"
i7-9750H - 16GB DDR4 - GTX 1660Ti - 480GB SSD M.2 - 1TB 2.5" BX500 SSD 

VM + NAS Server (Proxmox 6.3)

1x Xeon E5-2690 v2 - 92GB ECC DDR3 - Quadro 4000 - Dell H310 HBA (flashed with IT firmware) - 500GB Crucial MX500 (Proxmox host) - Kingston 128GB SSD (FreeNAS dev/ID passthrough) - 8x 4TB Toshiba N300 HDD

Toys: Ender 3 Pro, Oculus Rift CV1, Oculus Quest 2, about half a dozen Raspberry Pis (2B to 4), Arduino Uno, Arduino Mega, Arduino Nano (x3), Arduino Nano Pro, Atomic Pi. 


4 minutes ago, BrinkGG said:

Yes! This is exactly the setup I use on my homelab. 

This quick reference tells you the basics of doing it. https://pve.proxmox.com/wiki/Physical_disk_to_kvm

You can also edit the QEMU config file for the VM (once created) and add the drives as SATA devices by serial number. 

Thanks so much! You have no idea how good it feels to be right for once. I've been speccing out my server for days now and everyone is telling me different ways to do stuff. Just glad I'm on the right track.


2 minutes ago, PurpleCodes said:

Thanks so much! You have no idea how good it feels to be right for once. I've been speccing out my server for days now and everyone is telling me different ways to do stuff. Just glad I'm on the right track.

As long as you don't put the drives in hardware RAID (on a RAID card), you'll have no issue with the hardware you mentioned.  



34 minutes ago, BrinkGG said:

As long as you don't put the drives in hardware RAID (on a RAID card), you'll have no issue with the hardware you mentioned.  

If I wanted a RAID card, how would that work? Just curious.


2 hours ago, PurpleCodes said:

If I wanted a RAID card, how would that work? Just curious.

Depends on your goals. Normally the host will have a filesystem and give virtual disks to the VMs.

 

Also, why do you want to pass through disks? Normally it's better to set up RAID and storage on the Proxmox host, then pass virtual disks through to the VMs.
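As a rough sketch of that approach (the pool name "tank", the disk IDs, the VM ID, and the disk size are all placeholders):

# On the Proxmox host: build a RAID-Z2 pool out of the physical disks
zpool create tank raidz2 /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4

# Register the pool as a storage backend in Proxmox
pvesm add zfspool tank --pool tank

# Give VM 100 a new 500G virtual disk carved out of that pool
qm set 100 -scsi1 tank:500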


22 hours ago, Electronics Wizardy said:

Depends on your goals. Normally the host will have a filesystem and give virtual disks to the VMs.

Also, why do you want to pass through disks? Normally it's better to set up RAID and storage on the Proxmox host, then pass virtual disks through to the VMs.

From what I've read and heard, sending virtual drives to FreeNAS kills performance. In an ideal world I would have two servers and just have FreeNAS run on one of them and a hypervisor on the other, but I don't really have the budget for that.


29 minutes ago, PurpleCodes said:

From what I've read and heard, sending virtual drives to FreeNAS kills performance. In an ideal world I would have two servers and just have FreeNAS run on one of them and a hypervisor on the other, but I don't really have the budget for that.

Why run FreeNAS at all? You can set up a ZFS share on the host, so the main point of FreeNAS is removed. Then just run a container for stuff like SMB shares.
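As a rough sketch of that layout (pool/dataset names and the share path are placeholders; the Samba config is just a minimal example):

# On the Proxmox host: a dataset for the shared files
zfs create tank/media

# Inside a container running Samba, a minimal share in /etc/samba/smb.conf:
# [media]
#    path = /srv/media
#    read only = no

(Getting /tank/media on the host to show up at /srv/media inside the container is just a mount point, which comes up again later in the thread.)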


18 minutes ago, Electronics Wizardy said:

Why run FreeNAS at all? You can set up a ZFS share on the host, so the main point of FreeNAS is removed. Then just run a container for stuff like SMB shares.

What would be the difficulty of adding things like Plex and encrypted backup? FreeNAS makes it really easy to set that stuff up; how would I go about doing that with Proxmox and containers?


On 7/14/2020 at 9:09 PM, PurpleCodes said:

I've seen this done with HBAs and RAID cards, but can you do this with the SATA ports on the motherboard?

I am using an ASUS AM4 TUF Gaming X570-Plus with a Ryzen 2600.

I want to pass in the hard drives to use with FreeNAS, and I want to make sure it's possible.

You can, as already mentioned - but it might be better to pass through the whole controller if you must use FreeNAS, unless you have non-NAS disks on the same controller?
It's odd, I just had to do exactly what you wanted today, for a single disk - I'm waiting on a new SFF-8087 to SATA cable, so I temporarily passed a drive through from the motherboard controller while the rest were on half of a SAS card.

But for the most part, it is indeed better to set up the physical disks on the hypervisor, and allocate virtual ones to your VMs. Just be aware that ZFS on top of RAID does not play nice! If you go ZFS, put your controllers in IT mode and let ZFS handle it all in software.
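For reference, handing a whole controller to a VM is roughly this from the Proxmox shell (VM ID 100 and the PCI address are placeholders; it also needs IOMMU enabled, which comes up below):

# Find the HBA's PCI address (e.g. 01:00.0)
lspci -nn | grep -i sas

# Pass the whole controller through to VM 100
qm set 100 -hostpci0 01:00.0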


2 minutes ago, TehDwonz said:

You can, as already mentioned - but it might be better to pass through the whole controller if you must use FreeNAS, unless you have non-NAS disks on the same controller?
It's odd, I just had to do exactly what you wanted today, for a single disk - I'm waiting on a new SFF-8087 to SATA cable, so I temporarily passed a drive through from the motherboard controller while the rest were on half of a SAS card.

But for the most part, it is indeed better to set up the physical disks on the hypervisor, and allocate virtual ones to your VMs. Just be aware that ZFS on top of RAID does not play nice! If you go ZFS, put your controllers in IT mode and let ZFS handle it all in software.

So I just bought an HBA card (https://www.amazon.com/SAS9211-8I-8PORT-Int-Sata-Pcie/dp/B002RL8I7M/ref=as_li_ss_tl?keywords=SAS9211&qid=1577977693&sr=8-1&linkCode=sl1&tag=craftcomputin-20&linkId=ef0870a4aafb2f7b0e72e5765f068e62&language=en_US)

But I am not 100% sure that my mobo supports PCIe passthrough. I know the Ryzen CPU does, but I guess it's a gamble whether it's going to work. I have two other drives (M.2) which I am going to run in RAID 1 for all of my VMs.


2 minutes ago, PurpleCodes said:

So I just bought an HBA card (https://www.amazon.com/SAS9211-8I-8PORT-Int-Sata-Pcie/dp/B002RL8I7M/ref=as_li_ss_tl?keywords=SAS9211&qid=1577977693&sr=8-1&linkCode=sl1&tag=craftcomputin-20&linkId=ef0870a4aafb2f7b0e72e5765f068e62&language=en_US)

But I am not 100% sure that my mobo supports PCIe passthrough. I know the Ryzen CPU does, but I guess it's a gamble whether it's going to work. I have two other drives (M.2) which I am going to run in RAID 1 for all of my VMs.

The BIOS feature you are looking for is "IOMMU".
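A quick sanity check from the Proxmox host once it's switched on in the BIOS (a sketch; the exact kernel options depend on your setup):

# For AMD, boot the kernel with IOMMU enabled: add
#   amd_iommu=on iommu=pt
# to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then run update-grub and reboot.

# After rebooting, this should print IOMMU / AMD-Vi messages if it's active:
dmesg | grep -e IOMMU -e AMD-Vi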


Just now, TehDwonz said:

The BIOS feature you are looking for is "IOMMU".

Yeah, I looked for that. Just now found it actually, and it does support it, so it looks like I can pass my entire HBA into FreeNAS (assuming I still do FreeNAS).


Just now, PurpleCodes said:

Yeah, I looked for that. Just now found it actually, and it does support it, so it looks like I can pass my entire HBA into FreeNAS (assuming I still do FreeNAS).

Be aware that IOMMU "groupings" can mean passthrough is somewhat limited, in that you can't pass two devices in the same group to two different VMs.
There is a reasonable discussion on this on (spit) Reddit: https://www.reddit.com/r/VFIO/comments/dq5w2o/what_are_some_suitable_x570_motherboards/
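To see how your particular board groups things, this common snippet run on the host lists every PCI device per IOMMU group (nothing board-specific in it):

# List IOMMU groups and the devices inside each one
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        lspci -nns "${d##*/}"
    done
done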

 


Just now, TehDwonz said:

Be aware that IOMMU "groupings" can mean passthrough is somewhat limited, in that you can't pass two devices in the same group to two different VMs.
There is a reasonable discussion on this on (spit) Reddit: https://www.reddit.com/r/VFIO/comments/dq5w2o/what_are_some_suitable_x570_motherboards/

 

Well, I only plan to pass through the HBA card into FreeNAS and nothing else.


Just now, PurpleCodes said:

Well, I only plan to pass through the HBA card into FreeNAS and nothing else.

OK, but don't forget you have on-board PCIe/NVMe devices that might share groupings with certain slots. The top x16 is usually on its own, but not always. It's an annoyingly undocumented thing that griefs AMD chipsets in particular.


1 minute ago, TehDwonz said:

OK, but don't forget you have on-board PCIe/NVMe devices that might share groupings with certain slots. The top x16 is usually on its own, but not always. It's an annoyingly undocumented thing that griefs AMD chipsets in particular.

Well, I'll hope for the best. I saw a video on Craft Computing where he passed through the individual drives from the HBA card instead of the actual HBA card. Is this still using IOMMU, or is it something else? How do you know if a board does IOMMU or IOMMU groupings? This is my board: https://dlcdnets.asus.com/pub/ASUS/mb/SocketAM4/PRIME_X570-P/E15829_PRIME_PRO_TUF_GAMING_X570_Series_BIOS_EM_WEB.pdf


53 minutes ago, PurpleCodes said:

What would be the difficulty of adding things like Plex and encrypted backup? FreeNAS makes it really easy to set that stuff up; how would I go about doing that with Proxmox and containers?

Not really. You can do ZFS encryption on the host, and you can set up a container for Plex as well (it's nice to keep it separated).

 

I don't see a reason to run FreeNAS here at all, and this would make it much simpler and easier to work with.
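For the encrypted side of it, native ZFS encryption on the host is roughly this (dataset names are placeholders):

# Create an encrypted dataset on the existing pool; you'll be asked for a passphrase
zfs create -o encryption=on -o keyformat=passphrase tank/backups

# After a reboot, load the key and mount it before the container uses it
zfs load-key tank/backups
zfs mount tank/backups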


1 minute ago, Electronics Wizardy said:

Not really. You can do ZFS encryption on the host, and you can set up a container for Plex as well (it's nice to keep it separated).

I don't see a reason to run FreeNAS here at all, and this would make it much simpler and easier to work with.

As said earlier in the thread, because of IOMMU groupings it might not be possible to pass the drives into multiple VMs. Unless a container doesn't count; I'm not entirely sure.


1 minute ago, PurpleCodes said:

As said earlier in the thread, because of IOMMU groupings it might not be possible to pass the drives into multiple VMs. Unless a container doesn't count; I'm not entirely sure.

You're not passing the disks to the container, you're passing a mount point, and that will work on any system.

 

I just do the file sharing on my Proxmox system on the host, but that does take away some of the advantages of VMs, so that's why I suggest using a container for the file sharing.
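The mount point bit looks like this (container ID 101 and the paths are placeholders):

# Hand a host directory to container 101 as a mount point
pct set 101 -mp0 /tank/media,mp=/srv/media

# The container just sees a normal directory at /srv/media; no disks or controllers
# are passed through, so IOMMU groups never come into it.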


9 minutes ago, PurpleCodes said:

As said earlier in the thread, because of IOMMU groupings it might not be possible to pass the drives into multiple VMs. Unless a container doesn't count; I'm not entirely sure.

You can pass individual disks no problem - the grouping issue only comes up when passing through controllers/PCI devices. I don't think it'll be an issue for you tbh, as @Electronics Wizardy said. Let Proxmox handle the physical hardware, and pass virtual drives / mount points to VMs and containers. It makes the VMs hardware-agnostic, so they can more easily be moved should your host system fail. If you tie them to actual hardware, you'll have to fix it all manually.

 


  • 2 years later...

Good info in this thread. I am getting ready to set up a Proxmox server, and I see three methods to use HDDs.

 

1.) Set up ZFS on Proxmox and pass a mount point of the drive(s) to the VM

2.) Pass through individual drive(s) to the VM

3.) Pass through the SATA controller or HBA card to the VM

 

What are the pros and cons of each? It seems like #1 is the easiest, since the disks are set up in Proxmox. I plan on setting up a NAS VM or container to manage files (TurnKey Linux File Server, TrueNAS, or OpenMediaVault).

 

