
NeoFrux

Member
  • Posts

    638
  • Joined

Everything posted by NeoFrux

  1. Let's say you have 3 monitors (the top one is out of the horizontal plane and not affected), and they're laid out like so: You change the resolution of monitor 2 to 4:3. To Windows, that looks like this: What does Windows do? This: Except all your window positions stay the same. So when monitor 3 "moves over", the windows that were on it are now shifted to the right, and the windows that were on monitor 2 are now at least partially visible on monitor 3.
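The shuffle described above can be sketched in a few lines. This is a hypothetical Python model (assuming three 1920x1080 panels in a purely horizontal row; the real Windows layout logic is more involved), showing how a window's absolute virtual-desktop coordinate lands on a different monitor once monitor 2 narrows:

```python
def monitor_origins(widths):
    """Virtual-desktop X origin of each monitor, laid out left to right."""
    origins, x = [], 0
    for w in widths:
        origins.append(x)
        x += w
    return origins

def monitor_for(x, widths):
    """Index of the monitor that contains virtual-desktop coordinate x."""
    for i, origin in enumerate(monitor_origins(widths)):
        if origin <= x < origin + widths[i]:
            return i
    return None  # off the desktop entirely

before = [1920, 1920, 1920]  # all three at 16:9
after  = [1920, 1440, 1920]  # monitor 2 switched to 4:3 (1440x1080)

# A window keeps its absolute X position, but monitor 3's origin
# moves left from 3840 to 3360 when monitor 2 narrows.
win_x = 3600
print(monitor_for(win_x, before))  # -> 1 (was on monitor 2)
print(monitor_for(win_x, after))   # -> 2 (now on monitor 3)
```

Nothing about the window changed; only the monitor origins did, which is exactly why the windows appear to jump.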
  2. Competitive CS:GO players are split between 16:9 and 4:3 because playing in 4:3 stretched makes it seem like your targets are much wider (and thus easier to aim at), even if it's not the case. It's mainly a psychological thing, and it does tend to help some players quite a lot. 16:9 on the other hand obviously allows you to see more of your surroundings.
  3. So what resolution does it change to? Can you tell via the OSD? Or is it just that your window positions change as a result of your centre monitor's resolution changing (which is one very large reason why if I'm not playing a competitive game I opt for borderless fullscreen mode)?
  4. Even in the case of a PCI-E based NVMe M.2 SSD, the CPU still has enough lanes for it alongside dual GPUs: one at x16, one at x8, with the SSD taking x4. The PCH's DMI link is entirely separate and has its own PCI-E lanes with which to handle the rest of the system's needs. As @M.Yurizaki points out in their linked post, this X99 board opts to use the CPU's PCI-E lanes for NVMe, which works out beautifully since you can't run x16/x16 on a 28-lane CPU anyway. So for @F Bomb29 and @8-Bit Ninja, there's no worry of a bottleneck anywhere in this scenario. Even if this were a Z170 board or similar, where a 6700K has only 16 lanes and the chipset has 20, there wouldn't be any bottlenecking going on. Now, if you wanted to add multiple NVMe SSDs or another GPU to the mix (for reasons), then you start bumping up against the limit. But keeping in mind that GPUs don't saturate more than x8 of PCI-E 3.0 anyway, you can free another 8 lanes by running the first GPU at x8 too, leaving enough for either 2 more NVMe SSDs, or an SSD and a third GPU (for PhysX or coin mining or something?), and that's without touching the PCH's lanes in the X99 scenario.
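The lane math above can be written out as a quick budget check. This is an illustrative Python sketch (device names are placeholders; the lane counts come straight from the post):

```python
CPU_LANES = 28  # 28-lane X99 SKU discussed above

def lane_budget(devices, budget=CPU_LANES):
    """Sum the requested link widths and report whether they fit the CPU."""
    used = sum(devices.values())
    return used, used <= budget

# The configuration discussed above: x16 + x8 GPUs plus an x4 NVMe SSD.
base = {"GPU1": 16, "GPU2": 8, "NVMe1": 4}

# Dropping GPU1 to x8 frees 8 lanes for two more x4 devices
# (two extra NVMe SSDs, or an SSD plus a third GPU).
expanded = {"GPU1": 8, "GPU2": 8, "NVMe1": 4, "NVMe2": 4, "NVMe3_or_GPU3": 4}

for name, cfg in [("base", base), ("expanded", expanded)]:
    used, fits = lane_budget(cfg)
    print(f"{name}: {used}/{CPU_LANES} lanes used, fits={fits}")
```

Both configurations land exactly on 28 lanes, which is why neither touches the PCH's lanes at all.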
  5. A couple of things stand out to me: First and foremost, your CPU is the odd man out here. It's exceedingly rare for a CPU to just die, but it's not unheard of. The local computer store idea might be a good one, although you might need to offer to pay them for their time. You mentioned setting your RAM speed to 2400 - did you do this by enabling XMP? If not, the correct timings for running at 2400 probably haven't been set unless you did it manually. You mention clearing your CMOS before you'd done this; have you cleared it since? Have you tried switching to the backup BIOS (it's a shame the physical switch they used to include for that is gone)? If you get any joy out of it at all, it might be worth flashing to the latest revision (F20b as of today) to see if that helps with stability.
  6. As stated in the manual, with a 28-lane CPU, 2 GPUs, and an M.2 SSD, you're looking at running x16, x8, and x4 link widths respectively. There shouldn't be any issue with running the second GPU in x8 mode, as AFAIK we still haven't got a GPU that can exceed that bandwidth. If there bizarrely is some kind of issue, you can run the first GPU in x8 mode in your UEFI setup and all should be good. Just to clarify:
     CPU: 28 lanes
       GPU1: 16 lanes
       GPU2: 8 lanes
       M.2: 4 lanes
       Total: 28 lanes
     PCH: up to 8 lanes, connected to the CPU via DMI 2.0 (x4, switched)
       Ethernet, storage, etc.
     Also, the motherboard should automatically allocate the GPUs to x16/x8 when the M.2 SSD is installed.
  7. You said you reinstalled your Realtek HD Audio drivers - can you take a screenshot of your Device Manager as it looks right now? Also, do you know the service tag for your Dell? That would help narrow down exactly what hardware configuration we're looking at. You should be able to find it on the underside of the laptop, or in the BIOS setup when you first boot up.
  8. About where in the boot process does it "hang" for that long? At the BIOS/UEFI? At Windows bootup? At the login screen? Some things to try in the meantime:
     Clear your CMOS first and foremost.
     Re-seat your power cables to be absolutely certain you've got a good connection.
     Do the same with your SATA cables, and while you're in there, might as well re-seat the RAM to cover all the bases.
  9. Hmm, if you run GPU-Z while you try to run a game/benchmark, what's the PerfCap Reason on the Sensors tab when the performance dips?
  10. ECC vs non-ECC shouldn't make a difference as far as this goes as long as the memory tests OK. Have you run a couple passes of Memtest yet?
  11. Not familiar with the software itself, but I'm grabbing it now to take a look. Out of curiosity, are you using the paid version, and are you setting a frame rate target? What does the CPU / GPU usage look like?
  12. I've had issues with larger WD MyBooks doing things like that; first thing I'd do is disable USB Legacy Mode in the BIOS, and make sure you've got the CSM disabled. If you're on Windows 10, you'll want to enable Secure Boot and also turn off the EHCI (if present) and XHCI hand-off options. If none of that works, it might be worth seeing if there's a firmware update available for the drive, trying a different USB port, and as a last resort, copying everything off the drive, cleaning it via diskpart, then re-initializing and formatting it. I'm fairly certain that last part is what got mine up and running, but it's been a while now since that happened to me.
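If you do go the diskpart route, the sequence looks roughly like this (a sketch only; "disk 2" is a placeholder, so check the output of list disk carefully first, because clean wipes the selected disk):

```
diskpart
list disk
rem -- substitute the MyBook's actual disk number below
select disk 2
clean
create partition primary
format fs=ntfs quick
assign
```

Again, make absolutely sure everything is copied off first; clean removes all partition and volume information from the disk.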
  13. Wow, that's an old system. We used to put together stuff with Sparkle (SPI) PSUs 10 years ago, and they were ironically one of the better ones we had. That said, I had a higher-end one in my personal system, and one night it decided to die on me and take my mobo with it. Neeeeever purchased a PSU with less than a 5-year warranty since. Anyway, this really could be any number of issues, but with the heat sink having come loose like that for who knows how long, my guess is the biggest issue is the CPU probably having gone belly-up. The PSU is definitely old enough now that it's not going to be outputting steady power, and with a motherboard that old, I almost expect it to have bad capacitors by age alone. A couple of things to try: POST without the RAM installed at all and see if you get a POST error, clear the CMOS, and if you've got one, swap in a different PSU just to see if that makes any difference.
  14. Do you have XMP enabled in your BIOS? Try disabling that if enabled, or better yet, clear your CMOS entirely and see if that makes a difference for you.
  15. Personally, for the past few years I've been using VMware Workstation Player (and on my MacBook, Parallels), as I've found it more reliable overall than VirtualBox. That said, I haven't actually used VirtualBox for my own personal use since, so things might have changed by now.
  16. It can be if it's running at something ludicrous like x4 @ PCI-E 1.0 speeds, which I've had happen before while I was running my Gigabyte Z77X-UP5 TH (an iffy WiFi adapter was the culprit).
  17. That is a beautiful build. I'm really sorry things went south but I'm also really glad it was an easy fix. Enjoy!
  18. Have you set the supervisor password?
  19. Sweet! Gotta love good customer service
  20. Pff, hopefully you didn't end up needing to purchase a box of band-aids while you were at it.
  21. I'm not sure it would be the fans, since these basically just get driven by varying the voltage (or in the case of PWM, a constant 12V plus a high-frequency control signal whose duty cycle switches the motor on and off to set the speed). But stranger things have happened, I suppose, and if you're replacing them with Noctuas anyway (a good move IMO, since they're much quieter), it's worth holding off on any further troubleshooting until you get those.
  22. Yup, seems like it to me. Before you proceed with an RMA, have you got a return policy with the place you purchased it from? That would be a little easier. Either way, the RMA form is over at https://kb.sandisk.com/app/rmaform
  23. Hm... And your WD Green drive, does it lock up? You've tried its cable / SATA port, too, I take it from the previous topic? And it's in AHCI mode? If the answers to those questions are "no", "yes", and "yes" respectively, it's probably time to start filling out an RMA form. You can check AHCI mode by entering your motherboard's UEFI setup, then going to Peripherals->SATA Configuration and checking which setting "SATA Mode Selection" is set to.
  24. By any chance, do you have XMP enabled for your RAM in your BIOS? I've been finding that I've had pretty poor luck getting memory to run at XMP-rated speeds recently (my 2x16GB kit of G.Skill DDR4-3200 for instance) where the system will hard-lock like that at pretty regular intervals. Runs fine without XMP on, so it might be worth checking into just to be sure.
  25. I'll echo @jools' suggestion of updating the firmware for the drive; the SanDisk SSD Dashboard should help there. Besides that, when the SSD locks up like that, do you get any events logged in Event Viewer under Windows Logs->System? Also, just to confirm, you're running in AHCI mode, right (it looks like it from CrystalDiskInfo, but I feel I should ask anyway)? Does the SSD become unresponsive in other PCs as well?