How to remotely locate an Unraid server driving VMs?

I need to move my CPU tower. I'm thinking optical cables, one for each VM, but is there a Thunderbolt card that will work without headers? I would be using a mother/daughter card to expand a single PCIe slot into 3 slots. I have 12 lanes left; each card would use 4. I can't think of another way to do this. Maybe 6 Thunderbolt docks/extenders?


Does your motherboard support Thunderbolt at all? Not all boards support it.

You also need a board that supports PCIe bifurcation. That's especially rare on consumer-grade hardware. Otherwise you won't be able to split that slot into multiple others and have it work.
Even then, you'll need to be able to pass through each individual card to a VM.
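For what it's worth, a quick way to sanity-check the passthrough side on the Unraid host is to list the IOMMU groups and see whether each card sits in its own group. A minimal sketch, assuming IOMMU is already enabled in the BIOS and on the kernel command line:

#!/usr/bin/env python3
# List IOMMU groups and the PCI devices in each one.
# Each card you want to hand to a VM should be alone in its group
# (or share it only with its own functions, e.g. a GPU and its audio device).
from pathlib import Path

groups = Path("/sys/kernel/iommu_groups")
if not groups.exists():
    raise SystemExit("No IOMMU groups found - is IOMMU enabled?")

for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
    devices = sorted(d.name for d in (group / "devices").iterdir())
    print(f"Group {group.name}: {', '.join(devices)}")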

 

I’d go for what Linus did. No VMs, just connect the system to a TB dock over an optical cable and call it a day. 

PC Specs - AMD Ryzen 7 5800X3D - MSI B550M Mortar - 32GB Corsair Vengeance RGB DDR4-3600 @ CL16 - ASRock RX7800XT - 660p 1TB & Crucial P5 1TB - Fractal Define Mini C - CM V750 v2 - Windows 11 Pro

 


11 minutes ago, NelizMastr said:

Does your motherboard support Thunderbolt at all? Not all boards support it.

You also need a board that supports PCIe bifurcation. That's especially rare on consumer-grade hardware. Otherwise you won't be able to split that slot into multiple others and have it work.
Even then, you'll need to be able to pass through each individual card to a VM.

 

I’d go for what Linus did. No VMs, just connect the system to a TB dock over an optical cable and call it a day. 

I'm not sure on bifurcation. Good point, I'll check. I have to use the VMs. 3 active work seats.



13 minutes ago, JCBiggs said:

I'm not sure on bifurcation. Good point, I'll check. I have to use the VMs. 3 active work seats.


On a single 8-core system? That's not going to work very well. You also have 3 video cards for the different seats? You won't be able to pass through a single graphics card to 3 VMs, unless it's an Nvidia Tesla or GRID.

 

Probably best to think this through a bit before continuing.

PC Specs - AMD Ryzen 7 5800X3D - MSI B550M Mortar - 32GB Corsair Vengeance RGB DDR4-3600 @ CL16 - ASRock RX7800XT - 660p 1TB & Crucial P5 1TB - Fractal Define Mini C - CM V750 v2 - Windows 11 Pro

 


7 minutes ago, NelizMastr said:

On a single 8-core system? That's not going to work very well. You also have 3 video cards for the different seats? You won't be able to pass through a single graphics card to 3 VMs, unless it's an Nvidia Tesla or GRID.

 

Probably best to think this through a bit before continuing.

It works fine. It's a 10-core now; it was an 8-core running 2 workstations, now a 10-core running 3. I have 18 threads assigned across the VMs and the last 2 for Unraid. Works perfectly. Users can't tell they aren't on anything other than a fast computer. As I stated in the original post, each VM has its own GPU.
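As an aside, when splitting a CPU between VMs like this it can help to check which host threads are hyperthread siblings, so each VM is pinned to whole cores rather than halves of two different cores. A small sketch, run on the host (nothing Unraid-specific about it):

#!/usr/bin/env python3
# Print each physical core's set of hyperthread siblings,
# e.g. "0,10" on a 10-core/20-thread chip. Pin complete pairs to one VM.
from pathlib import Path

pairs = set()
for cpu in Path("/sys/devices/system/cpu").glob("cpu[0-9]*"):
    siblings = cpu / "topology" / "thread_siblings_list"
    if siblings.exists():
        pairs.add(siblings.read_text().strip())

for pair in sorted(pairs, key=lambda s: int(s.replace("-", ",").split(",")[0])):
    print(pair)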


4 minutes ago, JCBiggs said:

It works fine. It's a 10-core now; it was an 8-core running 2 workstations, now a 10-core running 3. I have 18 threads assigned across the VMs and the last 2 for Unraid. Works perfectly. Users can't tell they aren't on anything other than a fast computer. As I stated in the original post, each VM has its own GPU.

What are the current system specs?
 

Do you even need VMs? I'd try multi-seat software so you can have 3 users on one GPU, and resource sharing works better.


26 minutes ago, Electronics Wizardy said:

What are the current system specs?
 

Do you even need VMs? I'd try multi-seat software so you can have 3 users on one GPU, and resource sharing works better.

The PC in my signature, only upgraded to a 6950X and 3 Quadros (and a USB card). I'm not sure what kind of software you're talking about. Got a link?


1 hour ago, NelizMastr said:

Does your motherboard support Thunderbolt at all? Not all boards support it.

You also need a board that supports PCIe bifurcation. That's especially rare on consumer-grade hardware. Otherwise you won't be able to split that slot into multiple others and have it work.

This is just a theory, but based on this circuit layout, it looks like I could possibly run slot 3. I'm not sure if I'm following this correctly, but after looking at it, it looks like slot 3 is the key to why it even works currently. The NVMe drive and one GPU are both on slot 3, which means it's already running in x12 mode (I think; I guess it's possible it went x4/x4 and I didn't notice).

 

It looks like I could move that GPU and the NVMe drive to slot 5, and then put my mother card in slot 3. The Q-switches are confusing me. I'm not sure how they work or what they enable/disable, but it does look like, worst case, I can get a custom BIOS to bifurcate it however I need.

 

https://www.tweaktown.com/image.php?image=https://static.tweaktown.com/content/7/8/7887_22_asus-x99-deluxe-ii-motherboard-review_full.png
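Before reshuffling anything, it might be worth reading the negotiated link width of each device straight out of sysfs on the host; that shows whether a slot is actually running x16, x8, or x4 right now. A minimal sketch (the addresses it prints are whatever your board exposes; nothing here is specific to the X99 Deluxe II):

#!/usr/bin/env python3
# Print the current PCIe link width and speed for every PCI device.
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    try:
        width = (dev / "current_link_width").read_text().strip()
        speed = (dev / "current_link_speed").read_text().strip()
    except OSError:
        continue  # device doesn't expose a PCIe link
    print(f"{dev.name}: x{width} @ {speed}")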


3 hours ago, JCBiggs said:

The PC in my signature, only upgraded to a 6950X and 3 Quadros (and a USB card). I'm not sure what kind of software you're talking about. Got a link?

Take a look at ASTER. It lets you have multiple users with one GPU. https://www.ibik.ru/

 

What are the users doing on the PC?


1 hour ago, JCBiggs said:

CAD/CAM

Yeah, that should work fine here. I'd forget the VMs.

 

But also, why one big PC? Getting 3 systems is probably about the same price, with much less hassle and better performance.


1 hour ago, Electronics Wizardy said:

Yeah, that should work fine here. I'd forget the VMs.

 

But also, why one big PC? Getting 3 systems is probably about the same price, with much less hassle and better performance.

It's not cheaper to build 3 high-performance machines, and it's a waste of resources. CAD/CAM is very peaky; VMs utilize the resources better.


I think you should stop and think about other solutions that would be way better and far easier to implement. 

 

If you only need dual display, you can set up something like Parsec on the VMs and put the server anywhere. Then use something like a Raspberry Pi to connect to each VM. This way you still do all the heavy lifting on the VM, but the users connect through the Raspberry Pi. This would be much cheaper than optical cables and getting a bifurcation card for your motherboard.

 

3x Raspberry Pis will be about $150. With Parsec you need the paid version for multi-display streaming, but considering a single optical Thunderbolt cable is $500, a dock is about $200, and Parsec costs about $10 per month, you would get a total of 18 years of use (6 years of use from each VM) before it would be cheaper to use Thunderbolt.
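Rough break-even math with those figures (just a sketch using the approximate prices above; real prices will differ):

# Thunderbolt optical (cable + dock per seat) vs Parsec subscription + Raspberry Pi,
# using the rough prices quoted above.
seats = 3
tb_per_seat = 500 + 200          # optical cable + dock
pi_per_seat = 150 / seats        # ~$150 for three Pis
parsec_per_seat_month = 10       # paid Parsec tier

months = (tb_per_seat - pi_per_seat) / parsec_per_seat_month
print(f"~{months:.0f} months (~{months / 12:.1f} years) per seat")
# -> about 65 months, i.e. roughly 5-6 years per seat before Thunderbolt is cheaper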

 

That and Parsec would then let you use the VMs from anywhere in the world too. 

 

I work for a TV production company in London, and we have been using Parsec Teams since the summer of last year. It has worked flawlessly for us, with about 7 editors and 3-4 support staff working remotely. It means we can use our own infrastructure: the editors remote into the workstations still in the office, and as far as they are concerned their workflow doesn't change, apart from the fact that they are at home rather than in central London.


On 7/6/2021 at 6:53 AM, jkirkcaldy said:

I think you should stop and think about other solutions that would be way better and far easier to implement. 

 

If you only need dual display, you can set up something like Parsec on the VMs and put the server anywhere. Then use something like a Raspberry Pi to connect to each VM. This way you still do all the heavy lifting on the VM, but the users connect through the Raspberry Pi. This would be much cheaper than optical cables and getting a bifurcation card for your motherboard.

 

3x Raspberry Pis will be about $150. With Parsec you need the paid version for multi-display streaming, but considering a single optical Thunderbolt cable is $500, a dock is about $200, and Parsec costs about $10 per month, you would get a total of 18 years of use (6 years of use from each VM) before it would be cheaper to use Thunderbolt.

 

That and Parsec would then let you use the VMs from anywhere in the world too. 

 

I work for a TV production company in London, and we have been using Parsec Teams since the summer of last year. It has worked flawlessly for us, with about 7 editors and 3-4 support staff working remotely. It means we can use our own infrastructure: the editors remote into the workstations still in the office, and as far as they are concerned their workflow doesn't change, apart from the fact that they are at home rather than in central London.

We're building a new system with newer GPUs that will allow us to virtualize the whole deal with a single GPU. The planned upgrade coincides with the Windows 11 TPM requirements, so probably sometime next year.

