Uttamattamakin

Member
  • Content Count: 603

Reputation Activity

  1. Like
    Uttamattamakin got a reaction from Uji Ninja in Making an ARM-based PC?   
    If you just want to work with RISC but don't want to do a Raspberry Pi, there is the RISC-V platform. 
     
    https://abopen.com/news/a-look-at-the-risc-v-pc-from-sifive/
     

     
    RISC-V has that X86-like quality of being an open platform, which really allows building a true PC. 
     
    ARM is not sold as an open platform to consumers. It is always sold in a locked-down form, or a form that is so bespoke and/or expensive that it would make no sense for consumers even if it were more open. 

    In fact this makes me kinda want to buy this just to play with. 
  2. Like
  3. Informative
    Uttamattamakin reacted to Drama Lama in Making an ARM-based PC?   
    Pinebook Pro
    https://www.pine64.org/pinebook-pro/
    I mean it's not super fast, but to my knowledge it's the best you can get for playing around.
     
  4. Agree
    Uttamattamakin reacted to Jumballi in Making an ARM-based PC?   
    The Nvidia Jetson is considered the most powerful you can get as a consumer. You'd already have a Qualcomm dev kit if you were one of the people who could have gotten one.

    The best thing to get is the Raspberry Pi Compute Module 4, since it has the most documentation and is the easiest to get started with. It's crazy powerful for what it is, and it's best not to invest a lot now, seeing as the next few years will have a ton of new options coming out.
  5. Agree
    Uttamattamakin reacted to Jumballi in Making an ARM-based PC?   
    The only options at this time are SBCs; no ARM processors are sold to general consumers right now.
  6. Informative
    Uttamattamakin reacted to TOMPPIX in Nvidia GPU Virtualization Hacked For GeForce Cards   
    More hacks https://www.tomshardware.com/news/patch-boosts-video-encoding-for-nvidias-consumer-gpus
    Didn't Linus mention something about this limit in one of his videos?
  7. Informative
    Uttamattamakin got a reaction from Beskamir in Nvidia GPU Virtualization Hacked For GeForce Cards   
    Abstract
    Nvidia limits using one GPU as one or more virtual GPUs to its enterprise-level products. The ability exists in the silicon for GeForce cards, but it is not enabled in the drivers, similar to how GeForce drivers would give error 43 when passed through to a VM. However, we are not talking about passthrough but about something similar to SR-IOV: it allows one GPU to run two separate operating systems at once by using hardware in the GPU to virtualize a GPU. That said, logically there would be some limits to this from the nature of virtualization. This is a hack, so one must be comfortable with downloading code from GitHub, patching kernel modules, etc. Then again, anyone who would really need this ability should be able to do that. 
     
    Quotes
     
    My thoughts
    My first response is hallelujah! Now if I can get my hands on a compatible GPU, then I can accomplish all of my work and game on one desktop computer. I could run CUDA code in Mathematica, MATLAB, or Python to model various theoretical physics situations, and kill the time by gaming. I could run Windows to use Microsoft Office for presenting PowerPoint content to my class in the best possible way, then capture that window and send it out over Zoom or Blackboard Collaborate Ultra. Not to mention using Windows and Linux based tools at the same time for all purposes. Then finally, keeping everything stored redundantly on my RAID array, and backed up both to an online NAS and to external/removable drives.  
     
    There are a few problems though. Number one is the fact that to use this one needs:
     
    and
    In terms of GPUs one would buy, according to VideoCardz that means:
    Any* 10 series card, 1060 or better.
    Any* 20 series card, 2070 Super or better. 
    Any 30 series card, 3080 or better. 
     
    *Any such card with enough VRAM for this to make sense and enough horsepower to be more than a tech demo. For example, right now one can run CUDA code in WSL 2 using Ubuntu for WSL 2 as a beta. However, the performance is not great compared to bare metal, not even half.   
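
    As an aside, a quick way to check whether CUDA is actually usable inside a WSL 2 Ubuntu install is a minimal device query. This is just a sketch using the standard CUDA runtime API (build with nvcc); nothing here is specific to the vGPU hack.

        // Minimal CUDA device query; build inside WSL 2 Ubuntu with:
        //   nvcc -o devquery devquery.cu
        #include <stdio.h>
        #include <cuda_runtime.h>

        int main(void) {
            int count = 0;
            cudaError_t err = cudaGetDeviceCount(&count);
            if (err != cudaSuccess) {
                // If the WSL 2 CUDA stack is not set up, it fails here.
                printf("CUDA not available: %s\n", cudaGetErrorString(err));
                return 1;
            }
            for (int i = 0; i < count; i++) {
                struct cudaDeviceProp prop;
                cudaGetDeviceProperties(&prop, i);
                printf("Device %d: %s, %zu MiB VRAM, %d SMs\n",
                       i, prop.name, prop.totalGlobalMem >> 20,
                       prop.multiProcessorCount);
            }
            return 0;
        }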
     
    If one does not already have two GPUs (or an iGPU or APU), then using one 1080 Ti both as the display out for Linux and to virtualize a GPU for Windows would be very limited. Basically, using vGPU does not unlock more power. Instead it would reduce the power by at least half: half of the VRAM, half of the CUDA cores, half of everything. So a GTX 1080 Ti, or even a 1080, that can hold its own in games in terms of rasterization performance would be as weak as a 980.  
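
    To put rough numbers on that halving, here is a toy calculation in plain C. The 1080 Ti figures are the card's public specs; the even 50/50 split is my assumption about how one would partition it, not necessarily how the driver divides things.

        // Toy illustration of splitting one GTX 1080 Ti into two equal vGPUs.
        // Spec numbers are the card's public specs; the even split is assumed.
        #include <stdio.h>

        int main(void) {
            int vram_mib   = 11264; // 11 GB of VRAM on a 1080 Ti
            int cuda_cores = 3584;  // CUDA cores on a 1080 Ti
            int guests     = 2;     // Linux display out + one Windows VM

            printf("Per guest: %d MiB VRAM, %d CUDA cores\n",
                   vram_mib / guests, cuda_cores / guests);
            // ~5.5 GB and 1792 cores each: the same ballpark as a
            // GTX 980 (2048 cores, 4 GB), the comparison made above.
            return 0;
        }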
     
    For a card that is less than a 1080, this might not even be very functional for gaming. Perhaps it will work for virtualizing Windows just to run office apps. It would be like having a GT-level card for your VM. 
     
    All of that said, while I can't risk it on my working computer during the school term, I may try this out during the summer. If so, this would get me to computing nirvana, since I have and will have a Ryzen APU to run my Linux desktop. Thus the overall effect of virtualizing a Windows instance on my 1080 might be no different from running a game on it while also running a computation. The only other way I could do this would be to build a new computer with enough expansion slots to fit two dGPUs in addition to my APU. So this will save me from a lot of headache. Hallelujah, hallelujah, hallleeeeluuujah!  

    However this would not be for everyone.   
     
     
    Sources
     Nvidia's Virtualization Unlocked On Gaming GPUs via Hack | Tom's Hardware
    GitHub - DualCoder/vgpu_unlock: Unlock vGPU functionality for consumer grade GPUs. 
    Getting started with CUDA on Ubuntu on WSL 2 | Ubuntu
  8. Like
    Uttamattamakin reacted to igormp in Nvidia GPU Virtualization Hacked For GeForce Cards   
    You have different profiles, so you can pick any one. For my GPU (2060 Super spoofed as a T4), I have the following ones:

     
    I left my VM with a T4-2Q profile, meaning 2 GB of VRAM, a 60 FPS cap, and 8K max resolution.
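
    For anyone wondering what a profile actually is: it's basically a named bundle of limits. Here is a rough sketch in C of what the T4-2Q line above encodes; the struct and field names are my own, illustrative only, not Nvidia's.

        // Rough sketch of what a vGPU profile bundles together.
        // Struct and field names are illustrative, not Nvidia's.
        struct vgpu_profile {
            const char *name;     // e.g. "GRID T4-2Q"
            int  vram_mib;        // framebuffer handed to the guest
            int  max_fps;         // frame rate limiter (when enabled)
            const char *max_res;  // largest supported display resolution
        };

        // The T4-2Q profile described above, as data:
        static const struct vgpu_profile t4_2q = {
            "GRID T4-2Q", 2048, 60, "7680x4320" // 2 GB, 60 FPS, 8K
        };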
  9. Agree
    Uttamattamakin reacted to Kisai in Nvidia GPU Virtualization Hacked For GeForce Cards   
    PCI ID spoofing is how swapping Quadro and GeForce parts has been done historically as well. Calling it a hack is like calling using your older sibling's driver's license as your own a hack to drive or drink. It only matters if you get caught. It's not a fake ID, just presenting the wrong one.
     
    That said, it's all but certain it violates licenses, and if you did this for a commercial effort you're likely to get spanked for it hard. If you do this with your personal PC, it's pretty much a given that Nvidia isn't going to chase you down, because the drivers themselves don't report telemetry back to Nvidia (that we know of).
     
  10. Agree
    Uttamattamakin got a reaction from VirtualBlack in Nvidia GPU Virtualization Hacked For GeForce Cards   
    This more or less disproves the idea that it is "not possible" with a GeForce GPU. The question is whether the inevitable lower performance is worth it. I would consider it only with a 1080 or better, since a 980 level of performance is respectable for killing time while something else is rendering or whatever in Linux. And once one has a 3080, getting a 2070 or 1080 Ti level of performance for Windows... but with Tensor cores, so yeah.  

    What Nvidia should do is make an official version of this and allow people to virtualize ONE card, all legal, licensed, and supported. At least on their top-tier cards, which would have the kick to make it really worthwhile: 1070+, 2070+, and 3070+, or some such. 
  11. Informative
    Uttamattamakin got a reaction from thechinchinsong in Nvidia GPU Virtualization Hacked For GeForce Cards   
    (Same post as #7 above.)
  12. Informative
    Uttamattamakin reacted to Jamtea in Nvidia GPU Virtualization Hacked For GeForce Cards   
    Yep, turning that off should result in best-effort performance. IMO that should make much higher frame rates available to streaming devices, so long as they can display them of course.
  13. Informative
    Uttamattamakin reacted to leadeater in Nvidia GPU Virtualization Hacked For GeForce Cards   
    Yes.
     
    https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/solutions/resources/documents1/Virtual-GPU-Packaging-and-Licensing-Guide.pdf
    https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/partners/nvidia/vmware-nvidia-grid-vgpu-faq.pdf
     
    So as far as I understand this only lowers the hardware cost?
     
    But also:
    Sooo.....
     

  14. Informative
    Uttamattamakin reacted to igormp in Nvidia GPU Virtualization Hacked For GeForce Cards   
    I couldn't properly understand what you meant, but the vGPUs have frame limiting enabled by default:
    Source
  15. Informative
    Uttamattamakin got a reaction from Taf the Ghost in Nvidia GPU Virtualization Hacked For GeForce Cards   
    (Same post as #7 above.)
  16. Like
    Uttamattamakin got a reaction from Cyberspirit in Nvidia GPU Virtualization Hacked For GeForce Cards   
    (Same post as #7 above.)
  17. Like
    Uttamattamakin reacted to Jamtea in Nvidia GPU Virtualization Hacked For GeForce Cards   
    I want to see if you could effectively run a local network version of GRID, or the equivalent, with one or two high-end GeForce RTX GPUs giving enough gaming performance to a bunch of computers at once at respectable frame rates. Basically a thin-client LAN setup of sorts, or even seeing if running breakout displays from the single card to multiple users is possible. I don't know if that level of segmentation for the display outputs is possible, but 2-4 gamers on 1 CPU AND 1 GPU would be pretty damn awesome!
  18. Agree
    Uttamattamakin reacted to igormp in Nvidia GPU Virtualization Hacked For GeForce Cards   
    Keep in mind that it's the TU104-based 2060 that works (the one usually called 2060 KO), not the regular TU106 2060.
     
    Basically, it's faking the PCI ID of the GPU as that of a Tesla/Grid GPU that uses the same underlying chip.
  19. Informative
    Uttamattamakin reacted to James Evens in Nvidia GPU Virtualization Hacked For GeForce Cards   
    Not completely accurate, but that's probably due to simplification by the mainstream media:
    1. Not all variants of the GPUs are supported, so not "any".
    2. The 2060 is supported while the 2060 Super is not.
    So currently it's not as simple as "anything better than that".
     
    from the code:
        // GP102
        if(actual_devid == 0x1b00 || // TITAN X (Pascal)
           actual_devid == 0x1b02 || // TITAN Xp
           actual_devid == 0x1b06 || // GTX 1080 Ti
           actual_devid == 0x1b30) { // Quadro P6000
            spoofed_devid = 0x1b38;  // Tesla P40
        }

        // GP104
        if(actual_devid == 0x1b80 || // GTX 1080
           actual_devid == 0x1b81 || // GTX 1070
           actual_devid == 0x1b82 || // GTX 1070 Ti
           actual_devid == 0x1b83 || // GTX 1060 6GB
           actual_devid == 0x1b84 || // GTX 1060 3GB
           actual_devid == 0x1bb0) { // Quadro P5000
            spoofed_devid = 0x1bb3;  // Tesla P4
        }

        // TU102
        if(actual_devid == 0x1e02 || // TITAN RTX
           actual_devid == 0x1e04 || // RTX 2080 Ti
           actual_devid == 0x1e07) { // RTX 2080 Ti Rev. A
            spoofed_devid = 0x1e30;  // Quadro RTX 6000
            spoofed_subsysid = 0x12ba;
        }

        // TU104
        if(actual_devid == 0x1e81 || // RTX 2080 Super
           actual_devid == 0x1e82 || // RTX 2080
           actual_devid == 0x1e84 || // RTX 2070 Super
           actual_devid == 0x1e87 || // RTX 2080 Rev. A
           actual_devid == 0x1e89 || // RTX 2060
           actual_devid == 0x1eb0 || // Quadro RTX 5000
           actual_devid == 0x1eb1) { // Quadro RTX 4000
            spoofed_devid = 0x1eb8;  // Tesla T4
        }

        // GA102
        if(actual_devid == 0x2204 || // RTX 3090
           actual_devid == 0x2205 || // RTX 3080 Ti
           actual_devid == 0x2206) { // RTX 3080
            spoofed_devid = 0x2235;  // RTX A40
        }

    https://raw.githubusercontent.com/DualCoder/vgpu_unlock/master/vgpu_unlock
  20. Like
    Uttamattamakin reacted to igormp in Nvidia GPU Virtualization Hacked For GeForce Cards   
    Holy cow, that's amazing!
     
    Sadly it doesn't seem to support my 2060S (TU106), but I'll try adding my GPU ID to the script anyway and see how that fares. This would finally allow me to properly use Fusion 360 without having to mess with Wine or put up with subpar performance in a VM.
  21. Informative
    Uttamattamakin got a reaction from Bazrat in Nvidia GPU Virtualization Hacked For GeForce Cards   
    (Same post as #7 above.)
  22. Like
    Uttamattamakin got a reaction from igormp in Nvidia GPU Virtualization Hacked For GeForce Cards   
    (Same post as #7 above.)
  23. Like
    Uttamattamakin got a reaction from Vishera in Nvidia GPU Virtualization Hacked For GeForce Cards   
    Yeah... it makes no sense, since it has got to be a pretty powerful GPU. I mean, theoretically even a 1060 might be able to do it. 

    I understand binning and some parts having features cut out of them... but that was an interesting thing to choose to cut out. 
  24. Informative
    Uttamattamakin got a reaction from Bananasplit_00 in Nvidia GPU Virtualization Hacked For GeForce Cards   
    (Same post as #7 above.)
  25. Agree
    Uttamattamakin got a reaction from WereCat in Nvidia GPU Virtualization Hacked For GeForce Cards   
    (Same post as #10 above.)