About xl3b4n0nx

Profile Information

  • Occupation
    Computer Engineer


  • CPU
    Ryzen 5 2600X
  • Motherboard
    Asus ROG STRIX B450-F Gaming
  • RAM
16GB Corsair Vengeance LPX DDR4 3200MHz
  • GPU
    Gigabyte GTX 1080
  • Case
    Fractal Design Define R6
  • Storage
    500 GB Samsung 970 EVO M.2, 2 TB WD Black
  • PSU
    EVGA SuperNOVA G1 650W
  • Display(s)
    Asus VW246H, Asus VG248Q, LG Flatron W2343T
  • Cooling
    Corsair H100i V2
  • Sound
    Logitech z501
  • Operating System
    Windows 10/Ubuntu


  1. I cannot believe I made the top 50... RIP power bill this month. Hopefully I can hang on to the top 50 for the next 2 days.
  2. I am working on tracking down some throughput issues on my Dell R740xd with two Xeon Gold 6254 CPUs. I am using 8 Intel DC P4510 8TB SSDs in an mdadm RAID 0 array with XFS on top. Running parallel (8 to 12) dd commands (bs=32K count=8M), I get about 24GB/s reads but only 6GB/s writes. The writes seem slow and I can't figure out where the bottleneck is. Does anyone have experience troubleshooting something like this? I know LTT posted a video about this, but my problem is a bit different and is on an Intel platform, so the same techniques may not apply.
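For anyone wanting to reproduce this kind of test, here is a minimal sketch of the parallel-dd approach. The file path, sizes, and job count are placeholder values scaled way down for illustration, not the setup described above; a real run would point at the array's XFS mount and use the bs=32K count=8M figures from the post.

```shell
#!/bin/sh
# Sketch of a parallel dd read test with placeholder path and sizes.
# Note: reads from a freshly written file mostly hit the page cache;
# a real disk test would add iflag=direct to dd to bypass it.
TESTFILE=/tmp/dd_throughput_test.bin
JOBS=4

# Create a small test file to read back.
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 2>/dev/null

# Launch the readers in parallel, then wait for all of them.
i=0
while [ "$i" -lt "$JOBS" ]; do
    dd if="$TESTFILE" of=/dev/null bs=32K 2>/dev/null &
    i=$((i + 1))
done
wait

rm -f "$TESTFILE"
echo "all $JOBS readers finished"
```

Summing the per-process rates that each dd reports on stderr gives the aggregate throughput figure quoted in the post.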
  3. I figured the Asus router wasn't capable but I wanted to make sure. I am going to have them on the same VLAN so that won't matter.
  4. Network upgrade (pfSense router + switches) to support VLANs; I need confirmation before purchasing hardware. For more details on what I am trying to do, see below. Would it be possible to use my existing Asus RT-AC66U router as an AP for multiple VLANs? If question 1 is no, could I use this Ubiquiti AP to have a Wi-Fi SSID for each VLAN? I want a 10GbE connection between my main PC (add-in NIC) and my server (built-in NIC), and have those be the only connections to the network for both. I plan to get this Netgear switch
  5. What I ended up having to do was force Unraid not to initialize the GPUs at boot by adding append vfio-pci.ids=10de:13bb,10de:0fbc,10de:1287,10de:0e0f,10de:0ffe,10de:0e1b to the syslinux configuration on the boot USB. That forces the vfio-pci driver to claim the GPUs with those IDs. If you run 'lspci -v' you will see ID numbers similar to those for your system.
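As a rough illustration of where those IDs come from: each line of `lspci -nn` ends with a [vendor:device] pair, and that pair is what goes into the vfio-pci.ids= list. The sample line below is invented for the sketch, not taken from the system above.

```shell
# Sketch: pull the [vendor:device] ID off the end of an `lspci -nn` line.
# The sample line is made up for illustration.
sample='01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM204 [GeForce GTX 970] [10de:13c2]'

# The class code [0300] has no colon, so only the vendor:device pair matches.
id=$(printf '%s\n' "$sample" | grep -o '\[[0-9a-f]\{4\}:[0-9a-f]\{4\}\]' | tr -d '[]')

echo "append vfio-pci.ids=$id"
```

Joining the IDs from all of the GPU's functions (video plus its audio/USB functions) with commas produces the append line from the post.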
  6. So @Jarsky I have found that all 3 GPUs in my system have the "kernel driver in use: nvidia" line. If I try to unbind any of them, the terminal hangs and the process can't be killed. Is there a way to prevent the automatic binding at boot?
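For reference, the binding being discussed can be inspected (and in principle released) through sysfs. A sketch, using a placeholder PCI address; on a machine without that device it simply reports nothing bound:

```shell
# Sketch: check which kernel driver holds a PCI device via sysfs.
# The address is a placeholder; real ones come from `lspci -D`.
DEV=0000:01:00.0
DRIVER_LINK="/sys/bus/pci/devices/$DEV/driver"

if [ -e "$DRIVER_LINK" ]; then
    # The `driver` entry is a symlink into the bound driver's directory.
    driver=$(basename "$(readlink "$DRIVER_LINK")")
    echo "$DEV is bound to $driver"
    # Unbinding is done by writing the address to the driver's unbind file
    # (needs root, and hangs if something still holds the device open):
    #   echo "$DEV" > "/sys/bus/pci/drivers/$driver/unbind"
else
    echo "$DEV is not bound to any driver (or not present here)"
fi
```

The hang described above is consistent with the unbind write blocking while another process still has the device open.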
  7. I agree with you there. I have received much more help here than on the Unraid forums. Do you know what makes Linux bind GPUs? Two of mine shouldn't be in use at all, but they are bound.
  8. I tried the unbind commands and it ended up locking up the nvidia cards. I don't understand what is grabbing them. Is there a way to determine where they are being used?
  9. There are no adjustable settings there for me. Everything is under CBS.
  10. I don't have it bound to a Docker container. It was in use by the Folding@home container, but I removed the UUID from it. It shouldn't be in use.
  11. In advanced for me I have the North bridge settings. It has IOMMU and SR-IOV, both of which are enabled. I did find the Memory Interleave in AMD CBS.
  12. Do you know where the IVRS setting might be in an ASRock BIOS? I can't find the setting.
  13. Ohh ok. That makes sense. I was wondering if pinning CPU cores to a NUMA node was possible. I tried to assign cores based on which die they were on. I will give these a shot. Thanks!
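On the NUMA-pinning question above, the node-to-core mapping needed for assigning cores by die can be read straight out of sysfs. A minimal sketch (on a single-socket machine this just lists node0):

```shell
# Sketch: list which CPU cores belong to each NUMA node, so VM cores
# can be pinned to a single die. Reads the standard Linux sysfs layout.
for node in /sys/devices/system/node/node[0-9]*; do
    [ -d "$node" ] || continue
    echo "$(basename "$node"): cpus $(cat "$node/cpulist")"
done
```

`lscpu -e=CPU,NODE` or `numactl --hardware` show the same mapping in a friendlier form.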
  14. IOMMU is on. Where is memory interleave? How would it affect this?