Drakinite_

Member
  • Posts: 44
  • Joined
  • Last visited

  1. Update for anyone who might be trying the same thing: it took me many tries to get here, but thankfully the solution is quite easy. I have both cards installed in the PC, with both drivers installed. My displays are plugged into my Arc. MOST games will run fine with this config, but some (e.g. Cyberpunk 2077) will fail to run because the presence of the Nvidia card/driver confuses them. To get Cyberpunk to run, all I have to do is disable (NOT UNINSTALL) the Nvidia card in Windows Device Manager. I usually keep the card enabled, though, because for some reason when it's disabled, its fan spins at maximum speed and it's pretty noisy. For VR, all I have to do for most games is plug my headset into the Nvidia card. However, VRChat requires the main display to be plugged into the same card as the headset, so for VRC I move my main display onto the Nvidia card and it works fine.
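     A small quality-of-life sketch for the Device Manager step above: on Windows 10 and later, the same enable/disable toggle is exposed through the PowerShell PnpDevice cmdlets, so it can be scripted instead of clicked (it must run from an elevated shell). The "NVIDIA" name match below is an assumption; adjust it to whatever Device Manager shows for the card.

     import subprocess

     def set_nvidia_gpu(enabled: bool) -> None:
         # Pipe the display adapters whose name matches "NVIDIA" into
         # Enable-PnpDevice / Disable-PnpDevice (requires admin rights).
         verb = "Enable-PnpDevice" if enabled else "Disable-PnpDevice"
         ps = ('Get-PnpDevice -Class Display | '
               'Where-Object { $_.FriendlyName -match "NVIDIA" } | '
               f'{verb} -Confirm:$false')
         subprocess.run(["powershell", "-NoProfile", "-Command", ps], check=True)

     # e.g. set_nvidia_gpu(False) before launching Cyberpunk 2077, then
     # set_nvidia_gpu(True) afterwards so the card's fan quiets down again.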
  2. Yes, it has a second GPU slot. The card will physically fit in the motherboard; my question is about drivers, interoperability, etc.
  3. Hi, I was able to play on my Rift S with my PC before I upgraded to my Arc A750, when I was using a GTX 1060 3GB. It's not the best GPU in the world, and doesn't have enough VRAM for No Man's Sky, but it at least works with most other games I've played. I am aware that Arc drivers do not currently support VR, at least not on headsets like the Rift S. Additionally, I am aware that the general advice is to completely uninstall your existing graphics drivers before installing a new card from a different maker (e.g. uninstall NVIDIA drivers before installing an Arc card), which I have done. With this in mind, however, do you guys know if it's *possible* to install my GTX 1060 into the motherboard as a secondary card, and use the 1060 exclusively for VR but the A750 for everything else? And if it is possible, would you recommend for or against doing that? My other option would be to dust off my old gaming laptop, buy a new DisplayPort dongle, and try to use that for VR. However, its CPU isn't the best, and I can't run games like Half-Life: Alyx because of the CPU bottleneck. Which of these options would be best? Thanks!
  4. Indeed, all of LTT's controversies are caused by something Linus said on WAN, kek
  5. Right before the WAN show goes live to YouTube, Linus usually says "I am resuming the selected". Does anyone know what that means? I assume there's probably a GUI with a button that says "resume selected" but I'm wondering what that interface is. Could it be either the YouTube live dashboard, or perhaps their custom multi-streaming software that they use to stream to multiple platforms at once?
  6. That doesn't directly answer my question. It sounds like you're still talking about max power draw, but my question was about normal use, i.e. when the CPU is mostly idle but might be doing a couple tiny things (e.g. web browsing, code editing, playing music, that kinda stuff where only a couple percent is used). My 5800X caps at around 115 W, but in the use case I'm describing it varies from about 40-70 W depending on what it's doing.
  7. Thanks for the informative responses, all. How do I know how much mounting pressure is right? This is all the information on mounting pressure that my cooler's manual provided (aka nothing):

     Interesting. The thing about temps makes sense, especially after I saw my CPU power slowly drop from 105 W to as low as 80 W when I set my cooler to a "Zero RPM" preset, just to see what would happen. Would "in many cases less v will allow the chip to run cooler and thus draw *more* wattage" only apply if I'm actively running something stressful (e.g. Folding@home), or is it also true if I'm just doing everyday tasks? For example, say the CPU is just idling, or doing a couple things in the background, so the cooler isn't the bottleneck. Might an overvolt lead to higher power draw in that case?

     Definitely wasn't referring to that. I think Mark Kaine's response explained best what I was missing. My electrical engineering / computer engineering classes are all about the fundamentals, going into detail about how transistors n shit are made and how computing systems are designed, but they never talked about the ways modern CPUs manage their own performance. Probably for three reasons: (1) I'll bet a lot of the tech behind that is proprietary and secretive; (2) CPUs are like a fuckin labyrinth down there, really advanced stuff for undergraduate classes; and (3) my school doesn't do any research in the field, so it's all based on old textbooks rather than current researchers or industry professionals.
  8. Ok, thanks for the input - it still feels weird, but I'm glad to hear that these temps are expected. Is 92°C totally fine for a CPU, or will it slowly wear down if it's kept at that temperature for a very extended period of time? Theoretically, if I did something to improve the thermal contact between the CPU and cooler, would that maybe prevent the temps from going so high (or allow the CPU to boost for longer, consume more power, etc.)? Again, I don't fully understand the science behind this stuff, but the fact that the CPU cools down really quickly after pausing folding seems to imply that the radiator has no problem getting rid of the heat. I didn't check my coolant temps earlier today when folding had been running for hours, but right now the coolant is at a chilly 33.2°C.

     Are you sure? I have a background in electrical and computer engineering, so I have a pretty strong grasp of the physics of transistors, but not much experience with the practical aspects of CPUs and overclocking. What I know for sure is this:

     1. Each transistor wastes some amount of charge whenever it switches on or off, so the more frequently it switches, the more current it draws over time. Which I think means: current is directly proportional to clock speed.
     2. Sometimes a higher voltage is needed in order to stably achieve a higher clock speed, because (slight oversimplification) the faster you switch, the more likely it is that little errors add up and produce an incorrect result; a higher voltage helps mitigate that.
     3. Power = current * voltage.

     What I'm pretty sure can be concluded from bullets 1 and 3 is that at the same clock speed, a CPU will consume more power with a higher voltage and less power with a lower voltage. That was actually my reason for doing a slight overvolt in the past: I wanted my CPU to consume slightly more power during the wintertime, to warm my room a little bit more, without caring about better performance. Not sure if that's a bad idea though? And what I think can be concluded from the bullets is that at the same power, the CPU at the higher voltage would actually be running at a lower clock speed. Which, if true, would indeed mean that it's silly for me to overvolt my CPU. And I think it means that NikolakiH was right? But I'm aware CPU frequency and clocking is really complicated compared to the fundamentals I know about transistors; please lmk if I missed something.
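     One refinement to bullet 1, from the standard first-order CMOS model: the charge moved per switch is itself proportional to voltage (Q = C*V), so current scales with V*f and dynamic power goes as P ≈ α*C*V²*f, quadratic in voltage at a fixed clock. A minimal back-of-the-envelope sketch of what that implies; the 1.30 V / 1.35 V / 4.5 GHz operating points are made-up numbers for illustration, not measured 5800X values:

     def relative_dynamic_power(v1, f1, v2, f2):
         # Ratio P2/P1 under P ~ V^2 * f; the unknown activity factor and
         # capacitance cancel out, so only relative comparisons make sense.
         return (v2 ** 2 * f2) / (v1 ** 2 * f1)

     # Same clock, +50 mV overvolt: roughly 8% more dynamic power.
     ratio = relative_dynamic_power(1.30, 4.5, 1.35, 4.5)
     print(f"same clock at 1.35 V: {(ratio - 1) * 100:.1f}% more power")

     # Same power budget instead: solve V^2 * f = const for the new clock.
     f2 = 4.5 * (1.30 / 1.35) ** 2
     print(f"same power at 1.35 V: ~{f2:.2f} GHz instead of 4.50 GHz")

     This agrees with the post's conclusion: under a fixed power limit, a higher voltage costs clock speed. It only covers switching power, though; leakage current also grows with voltage and temperature, which is presumably what the quoted "less v will allow the chip to run cooler and thus draw *more* wattage" point was getting at.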
  9. After writing the post, I checked my overclocking settings:

     - DDR4 is set to 3600 MT/s, which is the rated speed of my RAM, but I think that's still technically an overclock, right?
     - Everything else in the "OC Tweaker" section of the BIOS, including voltage offset and CPU load-line calibration, is set to default (auto).

     Under Advanced -> AMD Overclocking:

     - ECO mode is disabled.
     - Precision Boost Overdrive is set to "Advanced", PBO Scalar is set to 3X, and CPU Boost Clock Override is set to +100.

     So I misremembered; there was no overvolt. To test further, I just changed the PBO settings to Auto and rebooted, and I still see the same behavior. The weird thing is, I get the exact same behavior when Folding@home is set to Light: Task Manager reports CPU usage at around 60-70% while AIDA64 and HWMonitor report it at 40-45% (weird, I don't know why there's a discrepancy), but the CPU still consumes 100-105 W and the temperature still goes up to 92°C. And it's still really sensitive: the temperature shoots up within a few seconds when I resume folding, and then cools back down really quickly when I pause it.
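     On the Task Manager vs. AIDA64/HWMonitor gap: one plausible cause is that the tools measure different things (recent versions of Task Manager report a frequency-weighted "processor utility" figure that can run higher than plain busy-time, which is what most monitoring tools show). An independent time-based sample can help arbitrate; here is a minimal sketch using the third-party psutil package (pip install psutil):

     import psutil

     # Ten one-second samples of time-based per-core utilization, plus the
     # current clock when the platform exposes it through psutil.
     for _ in range(10):
         per_core = psutil.cpu_percent(interval=1.0, percpu=True)
         avg = sum(per_core) / len(per_core)
         freq = psutil.cpu_freq()  # may be None on some systems
         line = f"avg {avg:5.1f}% | per-core {per_core}"
         if freq:
             line += f" | {freq.current:.0f} MHz"
         print(line)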
  10. I have a Ryzen 5800X being cooled by a Corsair H115i RGB PRO XT, a 280mm AIO. If I recall correctly, the AIO came bundled with the 5800X when I bought it. I have Precision Boost Overdrive on and, if I recall correctly, I've got it slightly overvolted. When I have Folding@home running (CPU only, no GPU), it goes up to 92°C and stays there, drawing around 105 W. With this cooler, is this temperature expected at this power draw, or might I have set something up incorrectly (e.g. bad thermal paste application, incorrect mounting pressure between the CPU and cooler, or fans blowing the wrong way)? I tried looking up the cooling capacity of the H115i Pro XT 280mm, but I couldn't find a simple "it's good for up to X watts" answer. For additional context: when I start/stop intensive CPU activity, be it Folding@home or an AIDA64 stress test, the CPU temps rocket up / drop back to normal within a matter of seconds. I don't know the significance of this fact, though, nor whether it's expected. Here's a screenshot of HWMonitor while Folding@home is running:
  11. Microsoft has bought many look-alike domains, like microsoft.net, microsof.com, microsofr.com, micrsoft.com, etc., which just redirect to microsoft.com; this was pointed out in ThioJoe's recent video on .zip domains. That got me wondering: how many are there? It would be time-consuming to generate every permutation of microsoft.com and check them all. Does anyone know how to search a WHOIS database for every domain registered to Microsoft Corporation? Most WHOIS services only let you search for a domain or IP address, not for a specific organization/owner.
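     On the tooling side: searching WHOIS by registrant is generally sold as a paid "reverse WHOIS" product (DomainTools and WhoisXML API both offer one), and GDPR redaction makes registrant fields spotty anyway. A cheap partial alternative is what tools like dnstwist automate: generate single-edit typo permutations yourself and check which ones resolve. A rough sketch follows; note that resolving only shows a name is registered and in use, so you would still need a per-domain WHOIS lookup to confirm Microsoft owns it.

     import socket
     import string

     def typo_permutations(name: str, tld: str = "com"):
         variants = set()
         for i in range(len(name)):                 # deletions: microsof, micrsoft
             variants.add(name[:i] + name[i + 1:])
         for i in range(len(name) - 1):             # swaps: imcrosoft, mcirosoft
             variants.add(name[:i] + name[i + 1] + name[i] + name[i + 2:])
         for i in range(len(name)):                 # substitutions: microsofr
             for c in string.ascii_lowercase:
                 variants.add(name[:i] + c + name[i + 1:])
         variants.discard(name)
         return sorted(v + "." + tld for v in variants)

     for domain in typo_permutations("microsoft"):
         try:
             socket.gethostbyname(domain)  # raises gaierror if it doesn't resolve
             print("resolves:", domain)
         except socket.gaierror:
             pass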
  12. Hey, Linus mentioned a multiplayer game on the WAN show a couple weeks ago that he said was good for parties... I think the name had "Wiz" in the title or something? He said that everyone should go into the game blind because part of the fun was in not knowing how to play. I don't wanna have to go through the entire WAN show archive to search for the title, so I thought I'd ask here in case anyone remembers. Thanks!
  13. Good news! It wasn't a virus and no data was lost. Something happened to the BIOS, causing it to reset all its settings and thus "forget" which drive to boot to; the screen I saw was just the result of it attempting to boot from that drive. I checked the CMOS battery, and it's outputting 3.25V, which I think is plenty to power the CMOS. My parents (in whose house the server resides) told me that it lost power for a while due to the breakers being tripped, but I don't understand how that could cause the BIOS settings to be reset. Any ideas how this happened? (And yes, I'm definitely gonna get a UPS so this doesn't happen again.)