
alexfsr

Member
  • Posts

    42
  • Joined

  • Last visited

Awards

This user doesn't have any awards

Contact Methods

  • Steam
    alexfsr
  • Origin
    MayorIzrod
  • Battle.net
    KiLiMaNjArO #2264
  • PlayStation Network
    eww console
  • Xbox Live
    eww console

Profile Information

  • Gender
    Male
  • Location
    Bulgaria
  • Interests
    Computers, games, food
  • Occupation
    Student

System

  • CPU
Intel Core i7-3930K @ 4.0 GHz
  • Motherboard
    ASUS Sabertooth X79
  • RAM
    4x8GB Kingston XMP Predator 1866
  • GPU
    Asus GeForce GTX 780Ti DC2 + Asus GTX 660 DC2
  • Case
    CoolerMaster HAF X 942
  • Storage
2x Intel 530 Series 180GB SSD in RAID 0
  • PSU
    FSP Aurum 700 Gold
  • Display(s)
    ASUS VG236HE (3D) + Asus MK241H
  • Cooling
    Super Ghetto Custom Loop
  • Keyboard
    Logitech G710+ (Cherry MX Browns)
  • Mouse
    X7 XL-740K
  • Sound
    On Board
  • Operating System
    Windows 7 Pro 64-bit

Recent Profile Visitors

963 profile views

alexfsr's Achievements

  1. B2B pricing without VAT:
  2. Hey guys, I'm getting really desperate here. I just got a full-cover water block for my 980 Ti, and it started causing problems shortly after. I was playing Overwatch with friends yesterday when my system hung and both of my displays showed solid colors (pink and green). Rebooting resulted in a POST failure. On the next reboot I got into Windows and tried to rejoin the game; the same thing happened at the main menu, before I even got into the match.
     I opened the card up to inspect it and nothing looked wrong, so I re-assembled it and moved it to another PCIe slot. Now it doesn't hang, but it can't keep performance up. The core clock just keeps cycling (1350-1240-950-730-870-950-1240-1350 MHz and so on) and FPS swings between 40 and 160 the whole time. GPU-Z's PerfCap reason says "power", but power consumption tops out at about 120% (I've set the power limit to 130%). Temperatures don't go over 50C with the water block, so it's definitely not overheating.
     The card had the same issue a few months back and I RMA'd it. They fixed it, by the looks of it in software, since nothing was done to the card physically. It's out of warranty now, however, so I need to fix it any way I can. Things I've tried: driver reinstall with DDU, changing PCIe slots, new thermal paste, a VBIOS update, Maximum Performance mode in the Nvidia Control Panel, raising Vcore, underclocking/overclocking, prayer. Monitoring Nvidia Inspector while playing, I noticed the card doesn't like sitting in P0 very much; it mostly stays in the P2-P5 range. Any ideas on how to fix this? (A small throttle-logging sketch is at the end of this list.)
  3. Unfortunately, no. My monitor only has HDMI, and the card only has one HDMI output.
  4. The card looks perfectly normal. The fans are a bit worn but still work. I dusted it off and cranked the fans up even more, but same thing. I run the computer with the side panel off anyway, because I have a pretty bad case and with the temperatures here (Netherlands) it suffocates otherwise.
  5. Hey all, troubled times have befallen my trusty old GTX 980 Ti and I need your help. I think it's dying, but I'd like a second opinion before I try to RMA it. The card is a Gigabyte G1 Gaming 980 Ti, purchased in late 2015. It has been my primary GPU 24/7 since then, giving me a glorious 60+ fps in every game I have ever thrown at it, including PUBG, at max settings. I usually run it with +100 on the core clock, which combined with GPU Boost gives me 1454 MHz when gaming. It tends to run quite hot though, topping out at around 71-72C with a very aggressive and loud fan curve. I'm not a big fan of high temps, but since Nvidia cards don't throttle until they reach 81C, it has been fine.
     Today I went in for a game of PUBG as usual. Shortly after jumping out of the plane, my screen turned white. I could still hear the game's sounds, but the computer became unresponsive, and shortly after, the monitor lost the GPU signal (game sounds were still running, though). I had to reset the computer. I reset my overclocks and reinstalled the Nvidia driver after running DDU, just to be sure. Same result, except this time the screen froze showing all red. I reset again and decided to try BF1, as I was getting seriously worried about my GPU (up until this point I had assumed it was typical PUBG). BF1 didn't crash, but it ran at horrendous fps (~25, when it usually runs at 100-120). I enabled the RTSS OSD and noticed that the core clock was fluctuating like crazy, going from the usual 1454 down to around 500 and back up constantly, in increments rather than instant drops (e.g. 1454-1280-1054-867-543, or something like that, and then back up again). I ran PUBG again and saw the same thing, except unlike BF1, which just chugged along at console fps, PUBG completely craps the bed and freezes (every time with a different color on the screen, lol).
     At this point I was fairly sure my GPU was the culprit, so I swapped it. I have a spare 1060 lying around, so I plugged it in, replacing the 980 Ti, and booted up. I reinstalled the drivers with DDU again and fired up PUBG without touching MSI Afterburner, all stock. The game ran like a charm. So did BF1. I think at this point I have kind of answered my own question, but as I said at the beginning, I wanted a second opinion, as I currently live abroad and RMA-ing the card will be funny business to say the least. From what I could gather from googling, fluctuating core clocks are caused either by a driver that crapped itself or by a power delivery failure. What do you guys think? (A small NVML read-out sketch is at the end of this list.)
  6. After 4 painful hours I got Windows 10 running with 2 cores (and their 2 threads), 4 GB of RAM, VNC as the graphics adapter, and no sound of any sort. It runs fine (the 4 hours were spent installing drivers and such, because my mouse glitched out in VNC every time and I had to do it all with just a keyboard). I'm scared to edit it to pass the GPU through now, because the unRAID server is actively used for storage and so far all of my GPU passthrough attempts have ended with the entire unRAID system hanging. But I guess I have to try.
  7. So I'm basically stranded here... Bummer.
  8. Booting the VM with VNC as the VGA adapter and no audio works fine with the default config (no ACS override). I haven't tried installing with VNC and then loading the drivers (which, IIRC, won't work, since driver installers check for GPU presence?). I haven't tried booting a Linux live CD as a VM either. The problem is with KVM/QEMU: it can't access the PCIe devices properly for some reason. If I don't have the ACS override and vfio_iommu_type1.allow_unsafe_interrupts=1 in syslinux.cfg, the VMs don't start at all. That edit to the cfg (the unsafe_interrupts=1) basically forces KVM/QEMU to ignore the problem and boot anyway, from what I could gather. But since the underlying problem is still there, it boots broken (crashes / improper rendering). The improper rendering with the RX 480 leads me to believe the VM can't communicate with the GPU properly (hence the weird green squares/text). The GTX 1050 doesn't have this problem and boots seemingly fine; it just crashes the entire unRAID server when it starts loading the Windows installation. (An IOMMU-group listing sketch is at the end of this list.)
  9. I came across this before but couldn't find anything helpful.
  10. Hello everyone, I'm pretty new to unRAID and VMs. I installed unRAID on an old (but beefy) PC of mine that I used to host servers on (TS3, MC, SE, etc.) in order to convert it into a family NAS, and to use its VM capabilities to run Windows 10 to host my servers. I put in four 4TB drives (one is parity) and a 128 GB SSD for cache, and the NAS runs like a charm.
      Now, on to VMs. I cannot get them to work with passthrough devices. If I set the VM to use the dedicated GPU or the motherboard's sound card, it gives IOMMU errors (below) and doesn't start. If I select VNC as graphics and no sound at all, it runs fine. I tried using the PCIe ACS override, but no luck with it either. From what I could gather from extensive googling, it seems proper VT-d support on the chipset I'm using is pretty hit or miss.
      Machine specs: Asus P6X58D Premium | Xeon X5650 | 12GB DDR3 RAM | GTX 1050 | RX 480 | 4x4TB NAS drives | 128GB SSD for cache | 120GB SSD for VM vDisks. The GTX 1050 is in the top PCIe slot to be used by unRAID; I'm trying to pass the RX 480 through to the VM (second PCIe slot). Any idea how I can fix this without getting a new mobo? That's kind of not possible at the moment (and my friends are waiting for my servers).
      IOMMU report:
      Error log from trying to launch a VM:
      I posted this on reddit and on unRAID's own forums, which seem to be deserted on the KVM topic. Since posting this I've tried a few things. Moving the GPUs around the PCIe slots: no luck. PCIe ACS Override: no luck. PCIe ACS Override + vfio_iommu_type1.allow_unsafe_interrupts=1 in syslinux.cfg (an example of that line is at the end of this list) lets the VM start, but causes the following: with the RX 480 as the passthrough device, it renders bright green squares on the screen for a few seconds after VM startup and then shows the "press any key to boot" prompt, again in green. If I press a key and let it boot, it starts loading Windows, again with improper green rendering, then crashes, and the entire unRAID server hangs. With the GTX 1050 as the passthrough device, it brings up the OVMF UEFI emulated BIOS after starting the VM. When I leave it and proceed to boot the Windows installer, it starts loading and hangs again like with the RX 480 (subsequently crashing the whole unRAID server a few seconds later as well), but the GTX 1050 renders properly, with no green artifacts and whatnot. Neither of these left anything in the logs for me to investigate, though.
      You are my last hope, LTT forum people. I'm really liking unRAID as a NAS; I just need to get these VMs working properly.
  11. I figured that too, but I can't afford to spend money on that right now, sadly.
  12. I'm going to try and pray that it stays in one piece. Fingers crossed.
  13. If it doesn't burst into flames, it works.
  14. Hello, I recently got a task to decrypt some files, but I need more GPU horsepower to use with programs like hashcat. I'm running it on a server PC I built a little while ago with second-hand parts that I got relatively cheap (Asus P6X58D Premium, Xeon X5650 @ 4 GHz, 12 GB RAM, 120 GB SSD). I also had a 660 lying around and threw it in, so I needed a power supply. I bought a Seasonic 430W ECO power supply, figuring it would be enough since I wouldn't really be pushing the server much; I use it to run servers for games and whatnot.
      Fast forward to my little task here: the 660 is kind of slow in compute performance. So a friend of mine, who has a GTX 770 that he doesn't really use all that much, agreed to swap GPUs (660 for 770). The problem is, I don't know if the PSU I have can power the 770 at 100% load. The CPU will be idling, so the GPU is the only real power consumer. But the PSU doesn't have the two PCIe power connectors I need for the 770, so I'm thinking of using a 2x Molex to 6-pin PCIe adapter, though I don't know if the PSU can handle the amps. The card's spec asks for 42 A on the 12 V rail, and this power supply has two 12 V rails at 18 A each, for a combined total of 396 W. The 770 will pull 250 W at most (per the AnandTech/Tom's Hardware reviews), but I don't know how the power supply will handle it, if at all. Is it safe to use this power supply with a Molex to PCIe adapter to power the 770 continuously at full load for a couple of days? (A rough power-budget sketch is at the end of this list.)
  15. Okie dokie, I'll see what I can do. Thanks for helping.
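
For the throttling problem in post 2: a minimal logging sketch, assuming a Python interpreter and nvidia-smi on the PATH, that records the clock, power draw, and driver-reported throttle reasons once per second so the "power" PerfCap can be caught in the act. The query fields are standard nvidia-smi --query-gpu fields (availability can vary by driver version), and the log file name is just a placeholder.

    import subprocess, time

    # Fields listed by `nvidia-smi --help-query-gpu`; older drivers may not know them all.
    FIELDS = ",".join([
        "timestamp",
        "clocks.gr",                             # current graphics clock (MHz)
        "power.draw",                            # board power draw (W)
        "clocks_throttle_reasons.active",        # bitmask of all active throttle reasons
        "clocks_throttle_reasons.sw_power_cap",  # throttled by the software power limit?
        "clocks_throttle_reasons.hw_slowdown",   # hardware slowdown (power brake / thermal event)
    ])

    with open("gpu_throttle_log.csv", "w") as log:   # placeholder file name
        while True:
            row = subprocess.run(
                ["nvidia-smi", "--query-gpu=" + FIELDS, "--format=csv,noheader"],
                capture_output=True, text=True, check=True,
            ).stdout.strip()
            log.write(row + "\n")
            log.flush()
            time.sleep(1)

If sw_power_cap goes Active every time the clock steps down while power.draw is still below the limit, that would suggest the power sensing, rather than the actual draw, is the issue.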
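For the clock fluctuation in post 5: a one-shot read-out of the performance state, clock, power, and throttle-reason bitmask via NVML, the same data RTSS/Afterburner display. This is a sketch assuming the pynvml bindings (nvidia-ml-py) are installed and the suspect card is GPU index 0.

    import pynvml

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)   # assumption: the 980 Ti is GPU 0

    pstate  = pynvml.nvmlDeviceGetPerformanceState(handle)                        # P0 = full 3D clocks
    clock   = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_GRAPHICS)   # MHz
    power   = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0                     # milliwatts -> watts
    reasons = pynvml.nvmlDeviceGetCurrentClocksThrottleReasons(handle)            # bitmask

    print(f"P{pstate}, {clock} MHz, {power:.1f} W, throttle mask 0x{reasons:x}")
    if reasons & pynvml.nvmlClocksThrottleReasonSwPowerCap:
        print("-> throttled by the software power limit")
    if reasons & pynvml.nvmlClocksThrottleReasonHwSlowdown:
        print("-> hardware slowdown asserted (power brake or thermal event)")

    pynvml.nvmlShutdown()

Polling this in a loop during a game would show whether the stepped 1454-1280-1054-... drops line up with a throttle reason or with the card falling out of P0 on its own.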
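For posts 8 and 10: a small sketch that lists each IOMMU group and the PCI devices in it, assuming Python is available on the unRAID box (an equivalent shell loop over /sys/kernel/iommu_groups works too). If the RX 480 shares a group with other devices, which is common on X58 boards without proper ACS support, vfio cannot isolate it cleanly without the ACS override.

    import os

    GROUPS = "/sys/kernel/iommu_groups"   # standard sysfs path when IOMMU/VT-d is enabled

    for group in sorted(os.listdir(GROUPS), key=int):
        print(f"IOMMU group {group}:")
        for dev in sorted(os.listdir(os.path.join(GROUPS, group, "devices"))):
            print(f"    {dev}")           # PCI addresses, e.g. 0000:03:00.0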
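For post 10, the syslinux.cfg edit described there usually ends up looking roughly like the lines below on unRAID. This is illustrative only: pcie_acs_override=downstream is the kernel parameter unRAID's "PCIe ACS Override" toggle adds, the allow_unsafe_interrupts flag is the one quoted in the post, and the exact append line differs between unRAID versions.

    label unRAID OS
      menu default
      kernel /bzimage
      append pcie_acs_override=downstream vfio_iommu_type1.allow_unsafe_interrupts=1 initrd=/bzroot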
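For the PSU question in post 14: a rough power-budget sketch using only the numbers quoted in the post (396 W combined 12 V capacity, ~250 W GTX 770 peak); the 100 W allowance for the idling CPU, board, drives, and fans is my own assumption.

    # Numbers from the post; rest_of_system is an assumed figure, not a measurement.
    combined_12v_watts = 396    # Seasonic 430W ECO: combined 12 V rating quoted in the post
    gpu_peak_watts     = 250    # GTX 770 worst case per the cited reviews
    rest_of_system     = 100    # assumption: idling X5650 + board + drives + fans

    headroom = combined_12v_watts - (gpu_peak_watts + rest_of_system)
    print(f"combined 12 V headroom: {headroom} W")   # ~46 W left at worst case

    # Per-rail sanity check: one 18 A rail can supply 18 * 12 = 216 W, and a PCIe
    # 6-pin connector is only specified for 75 W, so the 2x Molex adapter is mainly
    # about spreading that 75 W across two plugs rather than one wire.
    print(f"one 18 A rail: {18 * 12} W; PCIe 6-pin spec: 75 W")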