amelius

Member
Everything posted by amelius

  1. Just to clarify, other X399 platforms were confirmed to work by several users, including other Asus boards; the issue is exclusive to the flagship X399 board, the ROG Zenith Extreme. Additionally, the board is fine supporting each GPU when the other is disabled and only has this issue when both GPUs are used at once, which leads me to think it's not a hardware problem but a software one: the physical hardware for each slot works fine with these GPUs, yet the POST check fails for some reason.
  2. I didn't say it's cost effective... I said that the only case in which it's cost effective is when there's no other option for improvement. It has low marginal improvement per dollar, but if you've run out of other upgrades to make, it's cost effective because there are no other options. But yes, agreed, it's not a common issue, and it's definitely not for everyone. My rig:
     ROG Zenith Extreme
     Threadripper 2990WX
     128GB Trident Z DDR4-2933
     2x 2080Ti
     1x Titan V
     3x 960 Evo
     All water cooled: custom loop with the CPU (monoblock) and all 3 GPUs in it, 3 radiators, and quick disconnects everywhere.
     Acer Predator 1440p 165Hz IPS GSync
     So yeah, in my case, moving from 2x 1080Ti's to 2x 2080Ti's was the only reasonable upgrade left. Now, if only my damn GPUs worked with my motherboard instead of throwing some bizarre error, I'd be very happy.
  3. I've never had screen tearing issues, nor stuttering, nor games not launching. I've had a few times where performance gains weren't great, but supposedly the new NVLink SLI bridge helps with that a lot and most games scale at least 50%, which is good enough for me. Is it cost effective? Only if you can't upgrade any further. Given that my system's specs are a TR 2990WX, a Titan V, and 2x 2080Ti, I really don't have anywhere else to make performance gains.
  4. Agreed, especially in a custom water cooling loop like mine, with UV reactive fluid.
  5. Mostly, I'm concerned with Asus's apparent lack of interest in compatibility or in helping; they just brush it off and say we should contact support to check whether our parts are compatible.
  6. Well, in this case, a fair number of users on different forums had the exact same, reproducible issue, and all of them had sufficient PSUs. This also occurs with Titan V's in combination with RTX cards. Also, several tests by techtubers have shown that SLI scaling with the new NVLink bridge is substantially better than with the old SLI bridges, resulting in 80% or higher scaling in quite a few titles, which seems worth it. Additionally, I'm guessing that people who hit this issue with RTX + Titan V configurations (like myself) are using these cards as budget machine learning cards, since they have tensor cores. That's a good reason to have a multi-GPU setup. In this case, however, even having the GPUs plugged in with no bridge results in the motherboard failing to POST with the error "0E Load VGA BIOS", while each GPU works fine separately when the others are disabled.
  7. Asus seems to think that supporting new hardware isn't important. Numerous others and I have encountered an issue where two RTX 2080Ti's or RTX 2080's together fail to POST when both are enabled. Several forum threads have reported the same issue over the last several weeks, with no response from Asus other than "clear your CMOS" and "check your PSU", which had no effect for anyone.
     https://forums.geforce.com/default/topic/1074074/asus-zeneth-extreme-x399-rtx2080-sli-not-working/?offset=10
     https://rog.asus.com/forum/showthread.php?96162-Zenith-Extreme-bug-report-form
     https://rog.asus.com/forum/showthread.php?105105-Code-0E-Load-VGA-Bios
     One of their support agents went as far as to say that it's not supported because it's not on https://dlcdnets.asus.com/pub/ASUS/mb/socketTR4/ROG_ZENITH_EXTREME/RZE_Devices.pdf?_ga=2.162430392.527285745.1539026443-891392033.1492966352 saying:
     "There is no listed stability with two 2080ti cards"
     "You can visit the support page to confirm when the BIOS is released."
     "I regret that you spent that amount of money [name]. Please await our confirmation to ensure that this works. You could've contacted us before hand to check but I sincerely regret the experience."
     Other users have had similar experiences, with Asus support entirely uninterested in helping and no resolution whatsoever. It seems the ROG Zenith Extreme (and specifically that board) is incompatible with Turing or Volta multi-GPU configurations, and Asus is uninterested in resolving this. People building HEDT systems or upgrading should be careful and avoid this platform, both because of the incompatibility and because of the horrible customer support.
  8. Well, the first two are going into my workstation, so I can write them off on my taxes as a business expense since I use that machine for work... I wouldn't be able to write off a second set since it'd only be in the personal rig... Trying to avoid that.
  9. I have all of the above, and I actually have a background in EE; I was just hoping a product or combination of products existed to help. But as you pointed out before, this product almost certainly doesn't exist because there's no demand for it... there are very few scenarios, even in enterprise, where this would ever be desired. I don't think it's necessarily infeasible so much as it has very little demand, if any.
  10. Actually, yeah. Ideally, I'd like there to be some sort of easy way to switch them without plugging/unplugging cables, but ultimately, the worst case scenario is literally just switching riser cables.
  11. a) See my previous comment; the stuff in the video doesn't really address what I want... b) That's exactly what I'm trying to do; I want to move to a 4K 120Hz display from my 1440p 165Hz display.
  12. This isn't quite what I'm looking for; it seems to be more about bifurcation, which isn't what I want. I want a dumb switch that moves the electrical connection from one GPU to a different GPU: basically, a PCIe riser version of a single pole double throw switch for PCIe x16 risers.
  13. Again, I want to use the 2080Tis together with NVLink when I'm gaming, and when I'm not, move them back over to the workstation... If it were a 1080Ti, sure, I'd go for it, but I want to use the 2080Ti's to their maximum potential in both kinds of workloads... I don't understand why people refuse to actually help me solve the problem I came here with... I guarantee that the simplest solution, which is just having two daisy-chained PCIe risers that I manually switch from one to the other, would cost less than getting another two 2080Ti's... I'm just trying to do better than the simplest solution.
  14. I'm asking for help figuring out a custom solution... Do you realize you're saying "buy a crappy machine for gaming when you have all this incredible hardware to use"? My current setup performs *well*, but it doesn't perform perfectly... It's bottlenecked, but it would still beat the hell out of a 2200G + $200 GPU... I have a 1440p 165Hz GSync display... The goal here isn't "I need a machine to game, I can't game", it's "I have hardware that I can't take full advantage of in a specific case, and I want to find a way to use it to its full potential". I'm 100% fine with a crazy solution... And I do in fact already use hardware virtualization, but that's not the solution here, because it doesn't let you split the hardware across different machines without huge performance impacts. I virtualize and can assign any or all of my GPUs, CPU cores, and RAM to specific virtual machines on the workstation, and currently I game in one of the VMs (before you ask, no, that's not the source of my performance hit; that's maybe a 1-2% difference). I'm asking a very simple set of questions:
     1) Can I join two PSUs together so that they power both computers safely while playing nice? I know such options exist for powering a single computer with two PSUs, but I'm wondering if I can do that with two different PSUs, so that they have a shared ground and I can run GPUs off them and plug them into different machines without worrying.
     2) Are there *physical* switches that exist or could be made that would let me electrically switch from one PCIe 3.0 x16 riser cable to a second cable at the push of a button, so that I could "move" the GPU from one machine to another, not virtually, but physically?
  15. Well, I already bought 2x 2080Ti's for the workstation, which currently has two 1080Ti's and a Titan V in it, and I don't want to buy another pair... I want to be able to use them fully as I need them in different cases... So far, nobody's actually tried to answer my question; they've only been suggesting doing something else. Believe me, if something else were a good solution, I've already considered it and rejected it for a reason... I'm asking how to solve this specific issue. I have a fully working workstation already; I want to add a second, more gaming-oriented machine that won't be CPU bottlenecked and can fully utilize those GPUs for gaming when I do game, but when I'm working, I want those GPUs in the workstation machine.
  16. I'm 100% sure I'm wording that correctly. I know that I'm CPU bottlenecked because when I switch from a 1080Ti to my Titan V for games, I get zero improvement, and I get lower FPS than people have gotten with "lower end" CPUs. What I mean is that I have a Threadripper 2990WX, which genuinely performs worse in games than, say, an 8700K. You're misunderstanding the problem... it's not that there are too many GPUs in the machine... my setup is already virtualized, and I can assign as many GPUs as I want... my problem is that the workstation CPU bottlenecks gaming applications, and I don't want to buy two more expensive GPUs, but I do want to use them for both gaming and work. I'm waiting on two 2080Ti's and a Titan V. I'd like to be able to use the 2080Ti's for gaming, but also in the workstation as needed, without bottlenecking them with a CPU that has insufficient single-thread performance for games. I need the workstation for other things, but I want to take full advantage of my GPUs.
  17. So, I wanted to make a build where I have two machines, one that's a workstation and another that's more of a gaming machine, but I want to be able to swap the GPUs between them easily without physically moving them. I was thinking about using PCIe riser cables and swapping those between the machines; however, there are two issues:
     1. I'd need to make it so that both power supplies could be used interchangeably, so I wouldn't need to plug the GPUs into a different power supply for each machine (not sure how to make sure both PSUs play nice with each other).
     2. Ideally, it'd be great to have some sort of "PCIe riser switch" (whether a DIY thing or commercial) which could easily swap PCIe 3.0 x16 cards from routing to one motherboard to the other.
     The reason I'm doing this is that I have a powerful workstation which sometimes needs multiple GPUs for workloads; however, I also game, and when I do game, I find this machine hits CPU bottlenecks because it has a very high thread count processor. I want to be able to move a GPU or a pair of GPUs over to another, lower cost, more gaming oriented system easily without having to do much. Any thoughts on this?
     EDIT: Please stop suggesting that I do something else, and answer the actual question I'm trying to solve. No, buying more GPUs isn't the option I want; that's roughly another $3000. No, my CPU really IS bottlenecking me: a TR 2990WX does in fact game *worse* than an i7-8700K, but it performs great for workstation tasks like machine learning, which is what I primarily use my workstation for.
  18. As far as I can tell, the motherboard still lists the exact same RAM SKUs as compatible, and this kit is listed as compatible. Since compatibility is usually a matter of setting the correct timings and voltages, I don't see why that should change. It's usually on the motherboard manufacturer to improve compatibility, which they already have.
  19. Yes, the BIOS was updated to the latest version before I swapped processors; I'm pretty sure it wouldn't have booted at all if it hadn't been. Version 1402.
  20. So, I upgraded my system from a TR 1950X to a TR 2990WX, and I've encountered some strange memory speed stability issues, which is odd, since the memory is within spec and supported by the motherboard.
     Motherboard: ROG Zenith Extreme
     Memory: Trident Z RGB 128GB DDR4-2933
     I used to be able to run the DOCP profile just fine and hit the expected memory speed of 2933; however, ever since my upgrade to the TR 2990WX, activating that profile leads to instability, independent of whether Precision Boost Overdrive is enabled, which doesn't seem to affect stability (side note: raising the power limits there really does have it step up clocks, going from an all-core boost of about 3.4-3.6 GHz to holding about 3.9-4.1 GHz). I tested stability via a Prime95 torture test and FurMark running at the same time. Turning the memory up at all from 2133 results in loss of stability and crashes after an hour or so. On the other hand, in terms of practical performance, leaving it at 2400-ish is okay, with one exception: PUBG seems to crash and throw random errors of unclear origin. No other games or applications seem to be affected. I've tried turning the RAM voltage up as high as 1.4v, and it hasn't really improved stability much, though without raising the voltage I can't even boot the machine above 2400, and with it, it boots. Any ideas on what to do? This exact RAM kit and mobo worked fine before the upgrade, so I'm at a loss as to what's causing this.
  21. Does it support multiple drive arrays? What advantage does this have over just having a FreeNAS VM in UnRaid? Also, which GRUB setting? Is blacklisting the GPU not enough?
  22. Really? Because I haven't found any setting to avoid using a GPU to boot, even when I run UnRaid in non-GUI mode, and when I blacklist the GPU, I still can't seem to pass it through; I end up with a blank screen. Is Proxmox also KVM-based? Wouldn't I have to redo all the annoyances of GPU and USB passthrough setup? It was a big enough headache the first time around...
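     (Editor's note, as a rough sketch rather than a confirmed fix for this board: on KVM-based hosts such as Proxmox, the commonly suggested approach is not just blacklisting the NVIDIA driver but enabling IOMMU passthrough mode and binding the guest GPU to vfio-pci at boot via kernel parameters. The PCI IDs below are placeholders; check your own with lspci -nn. On UnRaid the equivalent parameters would go on the append line of its syslinux config instead of GRUB.)
     # /etc/default/grub -- example only; substitute your GPU's and its audio function's vendor:device IDs
     GRUB_CMDLINE_LINUX_DEFAULT="iommu=pt vfio-pci.ids=10de:1e07,10de:10f7"
     # then regenerate the bootloader config (e.g. update-grub on Debian/Proxmox) and reboot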
  23. So, given that the throughput per lane of PCIe 3.0 is 985 MB/s, an x4 connection would give me just short of 4 GB/s, which I think is actually pretty good: it'd be higher than my drives can achieve even in a striped config, and faster than a 10GbE connection, which should only have a maximum throughput of 1.25 GB/s. That sounds pretty reasonable, since nothing but the GPU would be capped, and that's totally fine with me. Does that sound about right to you?
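     (For anyone checking the arithmetic, a quick sketch of that comparison, assuming the usual ~985 MB/s effective per PCIe 3.0 lane and the 10 Gbit/s line rate of 10GbE:)
     # rough comparison: PCIe 3.0 x4 vs. 10GbE, in GB/s
     pcie3_lane_mb_s = 985                       # effective throughput per PCIe 3.0 lane, MB/s
     pcie_x4_gb_s = 4 * pcie3_lane_mb_s / 1000   # ~3.94 GB/s for an x4 link
     ten_gbe_gb_s = 10 / 8                       # 10 Gbit/s -> 1.25 GB/s
     print(pcie_x4_gb_s, ten_gbe_gb_s)           # the x4 link is a bit over 3x the 10GbE ceiling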
  24. This seems... oddly, fairly practical. The only question I've got is whether there are any PCIe 3.0 x4 GbE and SAS cards out there that you're aware of.
  25. From what I can tell, all the X399 boards (including mine) support PCIe bifurcation. Does this give me any options?