
Hi, I'm currently running an UnRaid multi-headed gaming setup/workstation using Threadripper. 

 

System Specs:

Threadripper 1950X (soon to be 2990WX) 

Asus ROG Zenith Extreme

128 GB Trident Z RGB DDR4-2933

3x 960 EVO 1 TB

2x 1080 Ti

1x Titan V

1x 1050 Ti

 

Here's my problem: I'm currently running this configuration just fine; however, I want to add proper NAS functionality using a FreeNAS VM in UnRaid, but

a) I want to add 10 GbE networking, and

b) I want to add a SAS card, because I found https://www.pc-pitstop.com/sas_cables_enclosures/sas8bay.asp, which seems like a good option since I don't have anywhere to put drives internally and I want hot-swap drive bays.

c) You might wonder why I have the 1050 Ti in there: it's because I can't seem to pass through the GPU in the primary slot, so I put the cheapest GPU I could find there. The current configuration is:

    1. 1050 Ti

    2. Titan V

    3. 1080 Ti (water cooled, single slot) 

    4. 1080 Ti (water cooled, single slot)

 

So here's what I *wish* I could do. I wish I could split my first slot, the top PCIe 3.0 x16 slot, between the 1050 Ti (which I'm not really using, it's just taking up space), a 10 GbE card (most of those run on x4 or x8 slots, I think), and a SAS card that runs on x4 or x8 as well. However:

a) I've not seen anything that splits a PCIe x16 slot into what I need (other than mining cards, which I'm not sure will do the job).

b) My 1050 Ti runs off an x16 connection (even though I don't really need that bandwidth; I just need it to register and act as the primary GPU).

c) I have no mounting solution even if I do split it; I'd need to find a way to mount everything in the case (though I'm more optimistic about this part: I could use riser cables and maybe CNC something to mount it, so I'm less concerned about it).

 

Any advice? I really want to add 10 GbE and the SAS card, but I'm not sure how to do it, and I don't want to build a separate machine for a NAS, since I have plenty of cores and RAM to spare on this machine; I just need a way of connecting drives and better networking.


7 hours ago, amelius said:

1050 Ti (which I'm not really using, it's just taking up space),

Why not take it out?

 

7 hours ago, amelius said:

I want to add proper NAS functionality using a FreeNAS VM in UnRaid,

Why use a VM for this? UnRaid works well as a NAS.

 

If you want a ZFS NAS and much better VM support, look at Proxmox.

 

 


2 minutes ago, KarathKasun said:

Spend tons of money on an external PCIe expansion cage or riser that utilizes a PLX switch.

 

Alternatively, drop the 1050 Ti and use a DisplayLink USB -> HDMI dongle for admin purposes.

The problem isn't that the 1050 Ti is needed to administer things; I don't need that. The problem is that whatever the system sees as the "primary" GPU, it refuses to pass through. I put the 1050 Ti there so that I can pass through all the other GPUs to VMs, because whatever GPU sits in that slot is basically wasted. If I take it out, the "primary" GPU will become either the Titan V or one of the 1080 Tis, making that card useless since I won't be able to pass it through.

 

Also, just dropping the 1050 Ti, even if I could, wouldn't fix the issue of needing two slots, not just one. And I don't mind spending money on an external PCIe expansion cage; however, the only ones I found are limited to a PCIe x1 connection and hence wouldn't do the trick. If you know of an external expansion cage that can split a PCIe 3.0 x16 slot, that'd be perfect.


3 minutes ago, amelius said:

The problem isn't that the 1050 Ti is needed to administer things; I don't need that. The problem is that whatever the system sees as the "primary" GPU, it refuses to pass through. I put the 1050 Ti there so that I can pass through all the other GPUs to VMs, because whatever GPU sits in that slot is basically wasted. If I take it out, the "primary" GPU will become either the Titan V or one of the 1080 Tis, making that card useless since I won't be able to pass it through.

 

Also, just dropping the 1050 Ti, even if I could, wouldn't fix the issue of needing two slots, not just one. And I don't mind spending money on an external PCIe expansion cage; however, the only ones I found are limited to a PCIe x1 connection and hence wouldn't do the trick. If you know of an external expansion cage that can split a PCIe 3.0 x16 slot, that'd be perfect.

https://www.bhphotovideo.com/c/product/1065119-REG/cubix_xprm_g3_2viz_rckmnt_2_g3_2x_dual.html


5 minutes ago, Electronics Wizardy said:

Why not take it out?

Because if I do, either a 1080 Ti or the Titan V would become the primary GPU and couldn't be passed through, making one of the actually good GPUs essentially useless.

6 minutes ago, Electronics Wizardy said:

Why use a VM for this? UnRaid works well as a NAS.

It works as a NAS, but I'm using the primary array to provide high-speed drives for my VMs. I wish UnRaid supported multiple arrays, but it doesn't, so my best solution is a FreeNAS VM with passed-through drives.
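
For reference, one common way to hand whole disks to a KVM guest like that is to point the VM at each drive's stable /dev/disk/by-id path (passing through the SAS HBA itself is generally even nicer for ZFS, since FreeNAS then sees the raw drives). A minimal libvirt XML sketch, with a made-up serial as a placeholder:

    <disk type='block' device='disk'>
      <!-- placeholder serial; use the real entry from /dev/disk/by-id -->
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <source dev='/dev/disk/by-id/ata-EXAMPLE_SERIAL'/>
      <target dev='vdb' bus='virtio'/>
    </disk>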

 

As for Proxmox, I'm pretty happy with my existing setup, and I really don't want to spend a long time re-setting up all the VMs (and losing all of them).


5 minutes ago, KarathKasun said:

Yikes, you weren't kidding about how insanely expensive this is... I would not have guessed it'd cost as much as a Titan V... I assume there are no more reasonably priced options?


2 minutes ago, amelius said:

Yikes, you weren't kidding about how insanely expensive this is... I would not have guessed it'd cost as much as a Titan V... I assume there are no more reasonably priced options?

Not really; this is high-end custom data center/render cluster territory.

 

If you need the I/O, you need to build a system with a motherboard that has it to start with, which usually means upgrading to pro-workstation-class or server-class hardware.


5 minutes ago, KarathKasun said:

Not really; this is high-end custom data center/render cluster territory.

What's the difference between that and something like this? https://www.compsource.com/ttechnote.asp?part_no=RSCR2UT2E8R&vid=428&src=F


2 minutes ago, amelius said:

What's the difference between that and something like this? https://www.compsource.com/ttechnote.asp?part_no=RSCR2UT2E8R&vid=428&src=F

That only works on a motherboard that supports PCIe bifurcation. Again, that's a pro-workstation/server-level feature for the most part.

 

Also, that is a Supermicro part; it only works with Supermicro motherboards.


6 minutes ago, KarathKasun said:

That only works on a motherboard that supports PCIe bifurcation. Again, that's a pro-workstation/server-level feature for the most part.

 

Also, that is a Supermicro part; it only works with Supermicro motherboards.

According to 

all the X399 boards (including mine) support PCIe bifurcation. Does this give me any options?


5 minutes ago, amelius said:

According to 

all the X399 boards (including mine) support PCIe bifurcation. Does this give me any options?

You would need a PCIe x16 -> quad M.2 NVMe card (pic #1), plus four M.2-to-PCIe x4 adapters (pic #2), plus four PCIe x4 -> PCIe x16 ribbon risers (pic #3).

 

[Attached images: pic #1, a PCIe x16 to quad M.2 NVMe carrier card; pic #2, an M.2 to PCIe x4 adapter; pic #3, a PCIe ribbon riser]


19 minutes ago, KarathKasun said:

You would need a PCIe x16 -> quad M.2 NVMe card (pic #1), plus four M.2-to-PCIe x4 adapters (pic #2), plus four PCIe x4 -> PCIe x16 ribbon risers (pic #3).

 


This seems... oddly, fairly practical. The only question I've got is whether there are any PCIe 3.0 x4 10 GbE and SAS cards out there that you're aware of.


3 minutes ago, amelius said:

This seems... oddly, fairly practical. The only question I've got is whether there are any PCIe 3.0 x4 10 GbE and SAS cards out there that you're aware of.

Doesn't matter. Anything from x1 to x16 will work in those adapters; bandwidth will max out at x4 speeds, though.

 

Signal integrity may also be an issue, so you might need shielded ribbon risers, or to wrap the ribbons in aluminum foil grounded to the case.


7 hours ago, amelius said:

Because if I do, either a 1080 Ti or the Titan V would become the primary GPU and couldn't be passed through, making one of the actually good GPUs essentially useless.

You can pass through all the GPUs if you're a bit tricky in Linux. You don't need a GPU for the host to use; you have to tell the system not to use a GPU at boot.

 

7 hours ago, amelius said:

As for Proxmox, I'm pretty happy with my existing setup, and I really don't want to spend a long time re-setting up all the VMs (and losing all of them).

You won't have to re-set up your VMs; you just copy the VM images over and then start them in Proxmox.
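
As a rough sketch of that copy-and-start flow (the VM ID, image path, and local-lvm storage name are placeholders, not anything from this build):

    # create an empty VM shell (q35/OVMF since these are GPU-passthrough VMs),
    # pull the copied image into Proxmox storage, then attach it as the boot disk
    qm create 101 --name gaming-vm --memory 16384 --cores 8 --machine q35 --bios ovmf
    qm importdisk 101 /mnt/backups/gaming-vm.img local-lvm
    qm set 101 --scsi0 local-lvm:vm-101-disk-0 --bootdisk scsi0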

 

 


3 minutes ago, KarathKasun said:

Doesn't matter. Anything from x1 to x16 will work in those adapters; bandwidth will max out at x4 speeds, though.

So, given that the throughput per lane of PCIe 3.0 is ~985 MB/s, an x4 connection would yield me just short of 4 GB/s, which I think is actually pretty good: it'd be higher than my drives can achieve even in a striped config, and it'd be faster than a 10 GbE connection, which should only have a maximum throughput of 1.25 GB/s. That sounds pretty reasonable, since nothing but the GPU would be capped, and that's totally fine with me. Does that sound about right to you?
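
Spelling that arithmetic out with nominal figures (ignoring protocol overhead):

    PCIe 3.0 per lane:  ~985 MB/s
    PCIe 3.0 x4 link:   4 x 985 MB/s ~= 3.94 GB/s
    10 GbE:             10 Gb/s / 8   = 1.25 GB/s

So the x4 link has roughly three times the headroom of the 10 GbE port it would feed.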


Just now, amelius said:

So, given that the throughput per lane of PCIe 3.0 is ~985 MB/s, an x4 connection would yield me just short of 4 GB/s, which I think is actually pretty good: it'd be higher than my drives can achieve even in a striped config, and it'd be faster than a 10 GbE connection, which should only have a maximum throughput of 1.25 GB/s. That sounds pretty reasonable, since nothing but the GPU would be capped, and that's totally fine with me. Does that sound about right to you?

Those numbers sound pretty spot on.


1 minute ago, Electronics Wizardy said:

You can pass through all the GPUs if you're a bit tricky in Linux. You don't need a GPU for the host to use; you have to tell the system not to use a GPU at boot.

Really? Because I haven't found any setting to not use a GPU at boot, even when I run UnRaid in non-GUI mode, and when I blacklist the GPU, I still can't seem to pass it through; I end up with a blank screen.

 

3 minutes ago, Electronics Wizardy said:

You won't have to re-set up your VMs; you just copy the VM images over and then start them in Proxmox.

Is Proxmox also KVM-based? Wouldn't I have to re-set up all the annoyances with GPU and USB passthrough? It was a big enough headache the first time around...


7 hours ago, amelius said:

Really? Because I haven't found any setting to not use a GPU at boot, even when I run UnRaid in non-GUI mode, and when I blacklist the GPU, I still can't seem to pass it through; I end up with a blank screen.

It's not a GUI setting; I think you need to change the GRUB config files.

 

7 hours ago, amelius said:

Is Proxmox also KVM-based? Wouldn't I have to re-set up all the annoyances with GPU and USB passthrough? It was a big enough headache the first time around...

Yep, it's KVM-based, with ZFS as well. You do need to set up the PCIe and USB passthrough again, but you should be able to just use the same device IDs.
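
For a sense of the effort, re-attaching passthrough devices in Proxmox is roughly one qm command per device. A minimal sketch, where the VM ID and the PCI/USB addresses are placeholders you'd read off lspci -nn and lsusb, and pcie=1 assumes a q35 machine type:

    # GPU (all functions of that device), one more PCIe device, and a USB device for VM 101
    qm set 101 --hostpci0 0b:00,pcie=1,x-vga=1
    qm set 101 --hostpci1 0d:00.0,pcie=1
    qm set 101 --usb0 host=046d:c52b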


6 minutes ago, Electronics Wizardy said:

Yep, it's KVM-based, with ZFS as well. You do need to set up the PCIe and USB passthrough again, but you should be able to just use the same device IDs.

Does it support multiple drive arrays? What advantage does this have over just having a FreeNAS VM in UnRaid?

 

Also, what GRUB setting? Is blacklisting the GPU not enough?


7 hours ago, amelius said:

Does it support multiple drive arrays? What advantage does this have over just having a FreeNAS VM in UnRaid?

 

Also, what GRUB setting? Is blacklisting the GPU not enough?

GRUB is the boot manager. You need to add a setting in the boot manager to tell it not to use a GPU at all in the OS. That lets you pass through all GPUs.
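
In concrete terms, the usual approach is a kernel command-line tweak that binds the target GPU to vfio-pci and keeps the host off its framebuffer. A rough sketch, with 10de:xxxx,10de:yyyy standing in for the real vendor:device IDs from lspci -nn; note that UnRaid itself boots via syslinux, so the equivalent edit there is the append line in /boot/syslinux/syslinux.cfg rather than a GRUB file:

    # /etc/default/grub on a GRUB-based host such as Proxmox; run update-grub afterwards
    GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt vfio-pci.ids=10de:xxxx,10de:yyyy video=efifb:off"

    # UnRaid equivalent in /boot/syslinux/syslinux.cfg
    append initrd=/bzroot amd_iommu=on iommu=pt vfio-pci.ids=10de:xxxx,10de:yyyy video=efifb:off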

 

Proxmox supports as many zpools as you want. And you can make file-level pools too with something like mergerfs.
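
For a flavor of what a second, independent pool looks like (the pool name and by-id paths are placeholders for whatever drives end up on the SAS card):

    # two mirrored pairs in one pool, with compression and a dataset for the NAS share
    zpool create tank mirror /dev/disk/by-id/ata-DRIVE1 /dev/disk/by-id/ata-DRIVE2 \
                      mirror /dev/disk/by-id/ata-DRIVE3 /dev/disk/by-id/ata-DRIVE4
    zfs set compression=lz4 tank
    zfs create tank/share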

 

Compared to a FreeNAS VM, you get much better RAM sharing, and you can run the VMs on ZFS (which has lots of nice features like snapshots and compression).

 

Also, Proxmox just has much better VM features, like better CPU performance sharing, better RAM sharing, VM backups, clustering, and other features.

