D2ultima

Member
  • Content Count
    4,338
  • Joined
  • Last visited

Awards

This user doesn't have any awards

About D2ultima

  • Title
    Livestreaming Master
  • Birthday 1989-11-06

Contact Methods

  • Steam
    d2ultima
  • Twitch.tv
    d2ultima
  • Twitter
    D2ultima

Profile Information

  • Gender
    Male
  • Location
    Trinidad and Tobago
  • Interests
    Gaming, PCs, laptops, 3D gaming, reading, livestreaming
  • Biography
    Just a guy who loves tech in a country that's technologically stagnant.
  • Occupation
    Currently NEET

System

  • CPU
    i7-7700K
  • Motherboard
    Clevo P870DM3
  • RAM
    4 x 8GB DDR4 2400MHz 17-17-17-39 (needs fixing)
  • GPU
    GTX 1080N 8GB x2 (SLI)
  • Case
    P870DM3 chassis
  • Storage
    850 Pro 256GB, 850 EVO 500GB M.2, Crucial M4 512GB, Samsung PM961 256GB
  • PSU
    780W Eurocom PSU
  • Display(s)
    AUO B173HAN01.2 17.3" 120Hz laptop display + 1360 x 768 Sharp TV (second screen)
  • Cooling
    P870DM3 un-modified internal cooling
  • Keyboard
    P870DM3 internal keyboard
  • Mouse
    Logitech G502 Proteus Core
  • Sound
    Corsair Vengeance 1500 v2 & Steelseries H-Wireless
  • Operating System
    Windows 10 Pro x64 (garbage)

Recent Profile Visitors

6,234 profile views
  1. Even if you do research: if you find a Maximus X Hero or Code, take it. NOT THE XI. The X. The BIOS is so far ahead of everything else that it's not even close to having competition. That said, you need to look at people who have owned their boards long-term, and look at problems with previous-generation boards. Nobody's board just goes poof in 3 months leaving a bunch of bad test data to read; you want long-term info, for when companies have a track record of things conking out in a year or two. Otherwise, current overclocking data works, since people compare boards against each other right away and you don't need to see how long-term ownership turns out.
  2. I haven't had a desktop in far too long; I just talk to a LOT of people who have custom PCs and are PC gamers. We have all agreed that until I tell people to buy an MSI motherboard, they are to assume MSI motherboards don't exist as an option.
  3. Save your god damn soul and never buy an MSI motherboard. I'd suggest getting a Maximus X Hero or, barring that, a Maximus XI Formula if you want to overclock a lot (I think the Formula is the one where they didn't destroy the VRM setup). The Maximus X line has USB BIOS Flashback for updating to 9900K/KS/etc. support. You're welcome; I wish I could update them easily/properly on this forum.
  4. Correct, and it is a rarity; the only games I could think of that might do it are Dirty Bomb and Ark: Survival Evolved (unoptimized as ever). And those are just my guesses, since both games run pretty differently even on same-hardware PCs, so I couldn't say. They have always run exceptionally badly for me on a single GPU though. I was thinking about a much more significant drop in performance: I could barely hold even a flat 60 in Dirty Bomb regardless of settings sometimes when I used to try it, my GPU usage was low as hell, and my PC had ample performance left over no matter what I did. The devs couldn't seem to figure out what my issue was either. Thinking back on it now, it might have been that, but I have no way of checking, since I cannot force my GPUs to x16 in my laptop and I don't yet have a desktop to check with (see the first sketch after this list for a quick way to check negotiated link width). You really shouldn't worry about it; you can always turn off the slot and get x16 properly if you must play one of those games on a desktop (assuming your board is half decent, anyway).
  5. NVLink SLI makes x8/x16 moot (unless Nvidia has used a driver update in the last month to limit the bandwidth between the cards), so the only time it'd matter is if a game hates a SINGLE card being on x8 for some reason.
  6. SLI won't do anything in the Vive for your cards; you'd need Turing for NVLink to have enough bandwidth (but that would still require developers to code for it).
  7. If you're dedicating one card directly to PhysX and not using performance mode, that's not SLI... and the memory does not stack (which you'd know if you had actually read the guide :D); see the second sketch after this list. That said, I have stated multiple times that updating this thread involves literally re-writing the entire thing from scratch, because it was written before the forum moved to IPB4 and it has some... apparent compatibility issues. It frequently breaks spoiler tags and page colour, among other things. I would probably have to create a new thread entirely to make it easy to edit/update, but that's an extreme amount of effort and I'd need to get it re-pinned and whatnot.
  8. It won't make any sense. SLI only functions as a way to make cards render separate frames; PhysX can't be accelerated by it. And even if it could, your Titan Xp would slow down to match the card running PhysX, and 660 Tis are likely not to do it nearly as fast. I would sooner toss a 1660 or 1660 Super at the job of PhysX and running your second monitors, and you can use it for solid NVENC as well.
  9. That is not what was said. The statement read "you are never going to get a memory clock fast enough to fill that vRAM buffer on 128-bit". This statement, as a whole, is false. I did question whether it was useful, but that was a statement I shouldn't have made either. What I should have said was "whether the card is useful", because higher resolutions, multisample AA types, etc. hurt core performance far more than memory, but shadowmaps are still a situation that can eat up a vRAM buffer without particularly hurting the core or memory bandwidth except during initial loading. In other words, using such low-end cards (the kind where you'd find a 128-bit memory bus) in a situation that needs 8GB of vRAM is ill-advised, and there are very few cases where it makes sense; but if you threw a 128-bit, 8GB card at something like Alien: Isolation, where 16k x 16k shadowmap resolution was possible via cfg edits and only your vRAM buffer size mattered, it would use it perfectly fine (see the third sketch after this list for the arithmetic). Edit: To clarify my statement about core performance: vRAM chokes don't usually cause hitching longer than the inherent delay of low-fps gameplay around 30fps. That is why the low-vRAM-buffer R9 Fury and R9 Fury X never showed low-vRAM hitching/stallouts in testing: the tests ran at 4k and aimed to keep performance around 30fps give or take, adjusting game settings down or up. There have also been tests between 2GB and 4GB vRAM cards at higher resolutions where the 2GB cards didn't show much variance in frametime, but the games were between 25 and 32fps anyway, so any stutter was masked by the already-low FPS.
  10. Nope, it was always false. 100%. It was never not false. Filling a memory buffer is never impossible. Whether the card's memory bandwidth was useful for the situations likely to fill that buffer (large shadowmap resolutions, high resolutions, multisampling techniques, etc.) is another story. But filling it was never impossible.
  11. That comment was false from the start.
  12. iGPUs should be perfectly fine. I'm not sure what specifically I could do; I don't know enough about that, since each architecture is very different and it's hard to quantify changes or to say when they will make improvements. I could make general statements, like how the backend barely changed between Maxwell and Pascal but Turing is significantly faster in a lot of aspects that don't show up in tests like Fire Strike, and how, as more games make use of that hardware, Turing cards will pull far ahead of their Pascal counterparts (like how a 2070 Super beats a 1080 Ti in COD: MW, Superposition, and Apex Legends but sticks near a 1080 in Sekiro), but that's just too many things I can't prove.
  13. I haven't tested it (there's no point), but I'm going to say no. That part number you're seeing is almost certainly not the Nvidia device ID for the cards, which is a different thing and, as far as I remember, what is usually used to determine compatibility. Next, you need identical card types to SLI, and even the full part numbers differ between those cards. And even if you DID manage to hack the driver to force NVLink, the 2080 Super would run as slowly as the 2070 Super, so it would be a waste. Unlike Crossfire, which can operate in a sort of async mode, SLI generally tries to force the cards to perform roughly the same: if one card slows down, the other will too. If you want to test this, Killing Floor 2 is a very good tester. Force PhysX to the slave card, max the setting out in-game, then try to run the game at as high a framerate as possible, taxing the living daylights out of the second card, which has to calculate both PhysX and half of the game's frames. What you'll run into is the slave card being maxed out at 100% while the primary card runs at low load, because it will not pick up the slack the slave card is leaving (the fourth sketch after this list shows a quick way to watch per-GPU load). I.e., a 2080, 2080 Super, 2080 Ti, or any other card stronger than a 2070 Super is, quite simply, going to run like a 2070 Super.
  14. I have things I could change and add, mainly just adding GDDR5X/GDDR6/etc., but this thread is broken because it was written on IPB v3 and the IPB v4 transition broke the article for good. I would have to start an entirely new thread and copy everything over manually for it to save correctly.
  15. This is the video RAM guide, as in memory on your video card. You are talking about system RAM. And yes, system RAM is a big factor in CPU performance and can be a massive bottleneck, but this isn't the thread for it.
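
A quick way to check what link width the cards actually negotiated (relevant to items 4 and 5 above): a minimal Python sketch shelling out to nvidia-smi. The query fields pcie.link.width.current and pcie.link.width.max are standard nvidia-smi fields; the rest is illustrative glue, not any particular tool.

    import subprocess

    # Ask nvidia-smi for each GPU's negotiated vs. maximum PCIe link width.
    # A card stuck at x8 in an x16 slot will show current=8, max=16.
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=name,pcie.link.width.current,pcie.link.width.max",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.strip().splitlines():
        name, cur, mx = (field.strip() for field in line.split(","))
        print(f"{name}: running x{cur} of a possible x{mx}")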
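
On item 7, "the memory does not stack" in numbers: a toy sketch assuming plain alternate-frame-rendering SLI, where each card mirrors the full set of assets (the capacities are made up for illustration).

    # Under AFR SLI each GPU holds its own complete copy of textures and
    # buffers, so the usable pool is one card's VRAM, not the sum.
    cards_gb = [8, 8]          # e.g. two 8GB cards in SLI
    usable_gb = min(cards_gb)  # assets are mirrored, not pooled
    print(f"usable: {usable_gb}GB, not {sum(cards_gb)}GB")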
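
On item 9, the shadowmap arithmetic: a back-of-envelope sketch assuming a plain 32-bit (4 bytes per texel) depth format with no cascades or mipmaps; real engines vary, so treat the format as an assumption.

    # VRAM footprint of a single square shadowmap at 4 bytes per texel.
    def shadowmap_gib(resolution, bytes_per_texel=4):
        return resolution * resolution * bytes_per_texel / 2**30

    for res in (2048, 4096, 8192, 16384):
        print(f"{res} x {res}: {shadowmap_gib(res):.2f} GiB")

A 16384 x 16384 map comes out to exactly 1.00 GiB under these assumptions, so a handful of such maps can eat most of an 8GB buffer while barely touching core load after the initial upload.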
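
And for the Killing Floor 2 test in item 13, a minimal sketch that polls per-GPU load once a second via nvidia-smi (utilization.gpu is a standard query field). In the scenario described, you'd expect the PhysX slave pinned near 100% while the primary sits well below it.

    import subprocess, time

    # Print each GPU's load once a second; Ctrl+C to stop.
    while True:
        print(subprocess.run(
            ["nvidia-smi", "--query-gpu=index,name,utilization.gpu",
             "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        ).stdout.strip(), "\n---")
        time.sleep(1)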