
Virtualized Nvidia Video Cards: anyone doing this?

Grimlakin

So here is what we are looking to actually do.

 

We want to have Windows Server 2016 guests running Citrix for users at their workstations. The render happens on the server, but the final result is pushed to the end user. If we can considerably lower the compute needs of the Citrix host by having one of the M10 video cards in our ESXi host and assigning, let's say, 8 GB and one "core" to each Citrix guest, would that be a noticeable improvement for our end users?

 

Just curious what you all have experienced in that regard. I'm trying to avoid building out a 20k test bed right now. I could get a server and card for a POC, but that is down the line.
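For anyone trying this, the per-VM side of the setup looks roughly like the fragment below. This is a sketch, not a tested config: `pciPassthru0.vgpu` is the `.vmx` key vSphere uses to attach a GRID vGPU profile to a guest, but the profile name shown (`grid_m10-8q`, an 8 GB slice of an M10) is an assumed example; the profiles actually available depend on the card and the GRID software release on the host.

```
# Sketch of a per-VM vGPU assignment in a guest's .vmx file on ESXi.
# The profile name "grid_m10-8q" is an assumed example for an 8 GB
# slice; check which profiles your host's GRID release actually exposes.
pciPassthru0.vgpu = "grid_m10-8q"
```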


I've never messed with this, but let me know how it goes; it seems interesting.

CPU: R9 3900X @ 4.5 GHz RAM: Vengeance Pro LPX @ 3200 MHz MOBO: MSI Tomahawk B350 GPU: PNY GTX 1080 XLR8

DRIVES: 500 GB Samsung 970 Pro + Patriot Blast 480 GB x2 + 12 TB RAID10 NAS

MONITORS: Pixio PX329 32-inch 1440p 165 Hz, LG 34UM68-P 1080p 75 Hz


I know that the new AMD Pro cards support virtualization, which means chopping the card into pieces so each client gets a set amount of cores and VRAM, but I don't know which Nvidia cards support it.

I only see your reply if you @ me.

This reply/comment was generated by AI.


Nvidia has a line of business cards that do. The lowest end is the M10: good for 96 users, it has 32 GB of RAM and four "chips" that you can slice up. In the presentation I sat through at VMworld, you slice it up by video memory.
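As a rough sketch of what slicing by video memory works out to, using the figures in this post (32 GB across four chips, so 8 GB of framebuffer per chip) and assuming every vGPU on a given chip uses the same profile size:

```javascript
// Rough vGPU capacity math for one M10 board, using the figures above:
// 4 GPUs ("chips") per board with 8 GB of framebuffer each. Each vGPU
// profile reserves a fixed framebuffer slice, so users per board is
// simply how many slices fit.
const gpusPerBoard = 4;
const fbPerGpuGB = 8;

function usersPerBoard(profileGB) {
  return gpusPerBoard * Math.floor(fbPerGpuGB / profileGB);
}

console.log(usersPerBoard(1)); // 1 GB slices -> 32 users per board
console.log(usersPerBoard(8)); // 8 GB slices -> 4 users per board
```

Note that with an 8 GB profile like the one proposed above, a single board only covers four guests.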


What are the users doing? If they need graphics power, it will be a big boost; otherwise it won't make a difference.

 

It won't lower other load on the system, just make graphics tasks faster. All depends on what the users are doing.


There is an impact for systems serving even a modern GUI-rendered page these days, in Windows 10 or Server 2016. There is some offload from not having to CPU-render what is being delivered. Just wondering about the size of the impact.

 

Yes, it will have more impact in VDI environments.


2 minutes ago, Grimlakin said:

There is an impact for systems serving even a modern GUI-rendered page these days, in Windows 10 or Server 2016. There is some offload from not having to CPU-render what is being delivered. Just wondering about the size of the impact.

Yes, it will have more impact in VDI environments.

What are your users doing? That's what matters here. For normal desktop users, like web browsing and MS Office, it really won't make a difference. That virtual GPU doesn't need much CPU power.


1 minute ago, Electronics Wizardy said:

What are your users doing? That's what matters here. For normal desktop users, like web browsing and MS Office, it really won't make a difference. That virtual GPU doesn't need much CPU power.

My issue is I have anywhere from 200-800 users at any given point in time that could be requesting the applications in question. If I can offload that render (especially of more graphically active browser pages), it will help alleviate load on the CPU and improve the end-user experience as well. That is a win-win.

 

What I'm trying to get at is if anyone has any practical experience doing this as of yet.

 

I understand the theory. Yes, it's better for people doing design work in a virtual workspace that needs 3D, or if you're trying to deliver a 3D gaming experience to end users over something like VDI or Citrix.


Just now, Grimlakin said:

What I'm trying to get at is if anyone has any practical experience doing this as of yet.

I've done this with older Quadros that supported it.

 

1 minute ago, Grimlakin said:

My issue is I have anywhere from 200-800 users at any given point in time that could be requesting the applications in question. If I can offload that render (especially of more graphically active browser pages), it will help alleviate load on the CPU and improve the end-user experience as well. That is a win-win.

It really depends on the app. It's kind of something you have to test.


1 minute ago, Electronics Wizardy said:

I've done this with older Quadros that supported it.

It really depends on the app. It's kind of something you have to test.

So how did that older Quadro work out for you? Did you just deliver VDI via ESXi, or was it more delivering a guest that acted as a host for multiple users, each leveraging a portion of the video card?


1 minute ago, Grimlakin said:

So how did that older Quadro work out for you? Did you just deliver VDI via ESXi, or was it more delivering a guest that acted as a host for multiple users, each leveraging a portion of the video card?

It worked out fine. I used the GPU to accelerate a few VMs on an older server. The user experience wasn't that much better for desktop tasks.

 

But this really depends on your program. What program is it? It all depends on how much GPU power it's using.


It's a custom program. The real consumer of the GPU will be the browser, with video and map access through it.


Are you simply looking at an accelerated desktop with HDX to improve the end-user experience, or are you looking at actually providing GPU compute resources inside the desktop?

 

I have experience with Citrix and Nvidia GRID using accelerated desktops, shared GPU, and dedicated GPU passthrough, but you are going to have to provide more information.

You say it's in a web browser; which browser graphics API is being called? Is the software making use of the browser's OpenGL/WebGL APIs?
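One quick way to answer that from inside a session is to ask the browser which renderer WebGL is using. A small sketch using the standard WebGL APIs (note the debug extension can be blocked by some browsers, in which case the renderer name stays hidden):

```javascript
// Returns the name of the GPU/renderer the browser is using for WebGL,
// or a fallback message when WebGL or the debug extension is unavailable.
function getRendererInfo(gl) {
  if (!gl) return "WebGL not available (likely CPU rendering)";
  const ext = gl.getExtension("WEBGL_debug_renderer_info");
  if (!ext) return "renderer name hidden by the browser";
  return gl.getParameter(ext.UNMASKED_RENDERER_WEBGL);
}

// In a real page (e.g. the browser console inside the Citrix session):
// const gl = document.createElement("canvas").getContext("webgl");
// console.log(getRendererInfo(gl));
```

If that reports a software renderer instead of the vGPU, the browser is falling back to CPU rendering and the M10 wouldn't be doing that work anyway.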

 

 

Please quote or tag me if you need a reply

