TheGermanEngie

Member
  • Posts: 280
  • Joined
  • Last visited


TheGermanEngie's Achievements

  1. Hi all - So with Nvidia's recent announcement of driver 465, beta testing of GeForce card passthrough into a VM is now possible. I want to set up a W10 VM on an Ubuntu host OS. I have three GPUs in my system and I want to make sure everything will work. Specs are as follows:
     - Ryzen 1700
     - 32 GB RAM
     - ASRock X370 Taichi, latest UEFI BIOS
     - GPUs: 1x Quadro K600 (lowest PCIe x16 slot, running at x4, for display), 1x Titan X Maxwell (PCIe x16 @ x8), 1x Tesla M40 24GB (PCIe x16 @ x8)
     My plan is to use the K600 for display, the Tesla M40 for Linux, and the Titan X Maxwell for Windows 10. Where should I start? What VM software should I use? I am all ears.
  2. Nothing worked. I'm completely dumbfounded... I guess I'll have to use Ubuntu, since that's the only OS that seemed to work. Edit: this guy on L1T seems to have had the same problem - it looks like a memory allocation issue. https://forum.level1techs.com/t/pc-will-not-boot-with-tesla-m40/160416/17
  3. I just tried that. It came back up but with the same exclamation point as I had before. Should I try again with Above 4G MMIO enabled?
  4. Enabling SR-IOV made no difference. I also tried enabling "Above 4G MMIO" with the 40 bit address. Still no difference.
  5. I will try to find any sort of relevant network settings this post is referencing, but I'm highly skeptical because of their modded BIOS. I'm completely stock.
  6. I have made an update to my findings: disabling CSM on its own doesn't affect my network connection, but enabling "Above 4G Decoding" afterwards does. The M40 is UEFI ready, as is my Titan X.
  7. I have an AMD ASRock X370 Taichi, UEFI Rev. 6.20.
  8. Hi all - in the latest revision of Windows 10 (20H2), when CSM is disabled in UEFI the network adapter fails: it cannot find any drivers or show any sign of connection, and the NIC just blinks yellow. I need CSM to be disabled, because my Tesla accelerator needs "Above 4G Decoding" enabled in order to POST, and enabling Above 4G Decoding requires that CSM be disabled. I'm in a big catch-22. I'm using a Titan X Maxwell as a display GPU and a Tesla M40 24GB in the second PCIe x16 slot. Interestingly, I had no issues disabling CSM in the past, but back then I used a small low-profile Quadro for display instead of the Titan X, because the Tesla has no display outputs. Help and suggestions are very much appreciated. If need be I can move this to the Windows thread. Edit 1: Disabling CSM on its own does not cause the network failure, but enabling "Above 4G Decoding" does.
  9. Interesting. That means I should probably put my Titan on the bottom slot and leave the open-air Tesla on the top slot. The Tesla only peaks at around 50 °C, but there's still a decent amount of heat coming off it. Not to mention the Titan already runs at 70 °C at close to max fan speed.
  10. I have the Phanteks Enthoo Pro M. ATX Mid tower.
  11. I have the ASRock X370 Taichi. Full ATX.
  12. Hi all - I'm mostly running machine-learning-intensive programs on my tower. For this reason I modded a server GPU (Tesla M40 24GB) to fit an Arctic Accelero cooler so it can have active cooling. Problem is, it takes up 3 PCIe slots, and in my ATX mid tower that's a lot of GPU real estate. I want to put my old Titan X Maxwell in alongside the M40 so I can run two models at the same time, but my case doesn't have the headroom to give the Titan ample breathing space. Are there any full-size ATX cases out there that are both relatively inexpensive and can hold a 3-slot + 2-slot card config? (Edit: I still have the original cooling block for the Tesla, so if I should buy a used server rack for 2-slot cards, that works too.) Thanks.
  13. Hello all - My saga of the modded Tesla M40 continues. The first problem I've continuously had: GPU-Z shows all 24GB of VRAM being used, even at idle. Running workloads doesn't seem to matter - the card processes them just fine without any OOM errors - but GPU-Z, HWiNFO, and MSI Afterburner all say the memory is fully used. I've tried driver reinstalls and Windows updates, and can't seem to get anything to work. Maybe it has to do with my using a non-UEFI GPU (Radeon 5450) for display? Or that it's installed on W10 Pro? The Tesla is in the first x16 slot and has full PCIe bandwidth. Also, in Task Manager neither the Tesla nor the Radeon 5450 powering the display shows up.
     Tesla drivers: 452.39 for Windows 10 Pro, CUDA 11.0, 9/22/2020, top x16 PCIe 3.0 slot
     Radeon drivers: 15.201.1151.1108, 11/4/2015, bottom x16 PCIe 2.0 slot
     Windows 10 in UEFI mode, GPT disk, CSM disabled.
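For the passthrough plan in post 1, a common first step on an Ubuntu/KVM host is collecting the vendor:device IDs of every function of the card to be passed through (the GPU plus its audio function) so they can be handed to vfio-pci. A minimal sketch that parses `lspci -nn` output - the sample lines and PCI IDs below are illustrative assumptions; run `lspci -nn` on the real host and substitute its lines:

```python
import re

# Hypothetical `lspci -nn` output for illustration only;
# replace with the real output from the host.
LSPCI_OUTPUT = """\
08:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM200 [GeForce GTX TITAN X] [10de:17c2] (rev a1)
08:00.1 Audio device [0403]: NVIDIA Corporation GM200 High Definition Audio [10de:0fb0] (rev a1)
09:00.0 3D controller [0302]: NVIDIA Corporation GM200GL [Tesla M40 24GB] [10de:17fd] (rev a1)
"""

def vfio_ids(lspci_text, match):
    """Collect [vendor:device] IDs from lines matching the target device,
    formatted for a vfio-pci ids= style parameter."""
    ids = []
    for line in lspci_text.splitlines():
        if match in line:
            m = re.search(r"\[([0-9a-f]{4}:[0-9a-f]{4})\]", line)
            if m:
                ids.append(m.group(1))
    return ",".join(ids)

# Select both functions of the device at bus address 08:00
print(vfio_ids(LSPCI_OUTPUT, "08:00"))  # 10de:17c2,10de:0fb0
```

The resulting string could then be fed to something like a `vfio-pci.ids=` kernel parameter before building the VM in virt-manager/QEMU; matching on the bus address (rather than the marketing name) keeps the GPU and its audio function together, which passthrough generally requires.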
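The "Above 4G Decoding" catch-22 in post 8 has an arithmetic root: PCI memory BARs are power-of-two sized, so mapping the M40's 24 GiB of VRAM needs at least a 32 GiB aperture, which can never fit in the 32-bit MMIO space below 4 GiB; the 40-bit MMIO address option mentioned in post 4 gives 1 TiB of address space, which is plenty. A back-of-envelope check of those numbers:

```python
GIB = 1024 ** 3

def min_bar_size(vram_bytes):
    """Smallest power-of-two BAR aperture that covers the given VRAM size."""
    size = 1
    while size < vram_bytes:
        size *= 2
    return size

bar = min_bar_size(24 * GIB)       # the M40's 24 GiB of VRAM
window_32bit = 4 * GIB             # everything addressable below 4 GiB
window_40bit = 1 << 40             # the BIOS's 40-bit MMIO option

print(bar // GIB)                  # 32 (GiB aperture needed)
print(bar > window_32bit)          # True: cannot fit below 4 GiB
print(window_40bit // GIB)         # 1024 (GiB available at 40 bits)
```

This is only the sizing argument, not a fix for the NIC failure, but it shows why the board refuses to POST the Tesla without the large-MMIO options enabled.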
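For the always-full VRAM readings in post 13, one sanity check is to compare against `nvidia-smi`, which queries the driver directly. A small parser for its CSV query output - the sample line below is a made-up example, not real data from this machine:

```python
import csv
import io

# Hypothetical output of:
#   nvidia-smi --query-gpu=name,memory.total,memory.used --format=csv,noheader,nounits
# Replace SAMPLE with the real command's output.
SAMPLE = "Tesla M40 24GB, 22945, 153\n"

def memory_report(csv_text):
    """Parse (name, total MiB, used MiB) rows and compute percent of VRAM in use."""
    rows = []
    for name, total, used in csv.reader(io.StringIO(csv_text)):
        total, used = int(total), int(used)
        rows.append((name.strip(), total, used, round(100 * used / total, 1)))
    return rows

for name, total, used, pct in memory_report(SAMPLE):
    print(f"{name}: {used}/{total} MiB ({pct}%)")
```

If `nvidia-smi` reports near-zero usage at idle while GPU-Z, HWiNFO, and Afterburner all claim 24 GB used, that would point at the monitoring tools misreading the Tesla (a datacenter card running a datacenter driver) rather than at a real allocation problem.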