
Chen G

Member
  • Posts: 209
  • Joined
  • Last visited

Everything posted by Chen G

  1. It is single rail, there's no extension, the custom silver cables go straight to the PSU. Yeah I know, obviously. What I think is happening is that the PSUs have over-current protection that detects an over-current, even one lasting just 5 ms (from a power spike), and shuts down the system. I just think a bigger PSU will have its protection set at a higher current, so it won't trip and reboot. My original spec, counting just the CPU and GPU, was 120+250=370w, way under the 600w PSU rating. Even adding in RAM and chipset I'm pretty sure I'm under 450w, which is also confirmed by the PSU reports I got after switching PSUs. That's over 25% headroom, and it wasn't enough. With the 3090 upgrade, I added 100w to the GPU and maybe 10w for the SSD. So let's say 560w theoretical, again confirmed by the PSU report as about as high as it goes, way under the 850w PSU, this time with 34% headroom, and it wasn't enough. So I'd say add 50% headroom for spikes.
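
     Just to make the headroom arithmetic above explicit, here's a tiny sketch using the wattages quoted in this thread; these are the post's estimates, not measurements of any particular unit:

     ```python
     # Headroom as a fraction of the PSU's continuous rating.
     # The wattages below are the rough estimates quoted above, not measured draws.

     def headroom(load_w: float, psu_w: float) -> float:
         return (psu_w - load_w) / psu_w

     # Original build: ~450w estimated system load on the 600w fanless unit.
     print(f"600w PSU at ~450w load: {headroom(450, 600):.0%} headroom")  # ~25%, still tripped

     # After the 3090 upgrade: ~560w estimated load on the 850w unit.
     print(f"850w PSU at ~560w load: {headroom(560, 850):.0%} headroom")  # ~34%, still tripped
     ```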
  2. I would agree, but regardless of whose fault it is, the end result is that my 850w PSU cannot properly power my system drawing ~500w on average. Older CPUs and GPUs basically ran all out all the time; they didn't have power limits, so their TDP basically was their maximum power. Whereas modern chips could theoretically draw much more but are limited to their TDP number, and I guess the power limiters don't react instantly, so sometimes you get power spikes. Again theoretically, if they put some truly gigantic capacitors in the PSU then you'd be able to run much closer to your actual power requirements, but since there's no spec for any of that, you only get one wattage number, so from now on I'm definitely erring more on the safe side.
  3. I'm not sure how most people feel about this nowadays, but I've always felt a lot of people tend to buy too big a PSU for their build, so I've been careful not to do that myself. However, I'd now suggest anyone looking for a PSU just go big, because PSUs are cheap, the wattage numbers are nonsense, and if you don't get enough you get a lot of headaches later on.

     A few years ago, when I started my current ultra-silent build, I went for one of those Seasonic fanless "600W" PSUs. I thought that would be enough because I was running a ~130w CPU and ~250w GPU. Even with some CPU overclocking, let's say 250w, I'd still only max out at 500w, so "600W" continuous power should theoretically be enough, right? It wasn't enough; the PC would just randomly reboot when running some games. So I had to swap over to a Corsair HX850i. Now I'm at "850w" continuous power, and it's been fine. And with the PSU reports I could see my power usage never reached 500w.

     Last year I picked up an RTX 3090 to replace the 2080ti I had, so now I'm at 350w, 100w more than before. But still, the 3950x CPU I have is pretty efficient, and I'm not even really overclocking it. So theoretically I should still be well below 850w; in fact the PSU tells me I'm never reaching 600w. Guess what, it wasn't enough. Well, it is enough for most games, in fact I could run 3DMark with both the CPU and GPU overclocked. But it's not enough for Cyberpunk 2077 2.0, and I started getting those same old reboots again. I thought my overclock had become unstable or something, but no, I've eliminated all those variables. The only thing that will prevent the reboots is a 90% GPU power limit, with the same clocks.

     So even though the reported power usage from my PSU is ~500w while running the game, I in fact need more than an 850w continuous power PSU to keep it stable. I'm not sure why they even bother with the PSU wattage numbers, it's basically meaningless. Next time I'm just getting a 1000w. And yeah, I could totally see how, if you're running a 13900k and a 4090, 1200w+ is NOT overkill.
  4. Yes I did, it still wouldn't turn. I contacted EKWB, they sent me a new one, and I took my loop apart. The pump did turn just fine after I took it out, but I can feel the rotor has more resistance compared to the new one, no idea why. If I swap the rotors between the two motors then they have about the same resistance.
  5. I have one of those flat res/pump combo things from EKWB with a DDC pump on it. It's only like two months old, but a week ago it just randomly stopped working. It didn't stop while powered on; it just suddenly wouldn't start when I started the PC. I thought it was dead so I sent an RMA request to EKWB. I think two days later they got back to me saying they needed video evidence, so I took my phone, started recording, tried to show the impeller, turned on the PC, and the pump was working fine somehow. Before this, I had tried many things over the previous two days: setting the speed to max in BIOS, adjusting the power connector, I even connected the molex to another PSU to try to power it externally. Nothing worked, but it magically fixed itself after I plugged it back into its own PSU, having just verified it did not work! So now, a week later: I've used the PC every day and it worked fine every day, but today the pump won't start again, same as last time. Anybody with experience with pumps? Any guesses as to what's happening?
  6. Update: I opened up my card and put hot glue between all the coils; the whine is definitely muffled, but not as muffled as with the stock Nvidia cooler. ==================================== I have a 2080ti in an otherwise dead silent build, but the coil whine makes all that pointless. However, here's an observation that gave me some hope: I had my hands on a brand new 2080ti FE and it had almost no coil whine. But after I took it apart, put on my waterblock and stuck it in my system, the whine was just as bad as all the other 2080tis I've had, which seems like proof to me that there's something about the stock cooler that prevents coil whine. My guess would be the tight fit and the thermal-paste-like sticky stuff they have all over the coil housings. It should be a simple physical process; I should be able to just fill the gaps between those coil housings with hot glue and it should do the trick, shouldn't it? I really just want some confirmation that this works because it's such a pain to drain the loop and get the card out. So has anyone done this? Put some kind of epoxy/glue/paste around the coils to stop them from vibrating?
  7. I would say there is; some games look worse in 4k for one very specific reason. For optimal image quality you want the sharpness of textures to roughly match the sharpness of geometry, because that's what happens with real-life captured images. The level of detail in the real world is infinite and you're only limited by capture resolution, so everything will be equally sharp, that is, as sharp as your camera system is. However, in games the sharpness of geometry is determined by render resolution, whereas the sharpness of textures is determined by the textures themselves. So if your geometry is significantly sharper than your textures it looks worse, and you aren't getting any benefit from the increased resolution.
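
     A toy way to see the mismatch: compare how many texels end up behind each rendered pixel for the same surface at 1440p and 4k. The texture size and screen coverage below are made-up illustrative numbers, not taken from any particular game:

     ```python
     # Toy illustration of texel density vs. render resolution.
     # Assume a 1024x1024 texture on a wall that fills about half the screen width.

     def texels_per_pixel(texture_size: int, pixels_covered: int) -> float:
         # How many texels map onto each screen pixel along one axis.
         return texture_size / pixels_covered

     for label, covered in [("1440p", 2560 // 2), ("4k", 3840 // 2)]:
         print(f"{label}: {texels_per_pixel(1024, covered):.2f} texels per pixel")

     # 1440p: ~0.80 texels per pixel; 4k: ~0.53. Geometry edges keep getting sharper
     # with resolution, but below ~1 texel per pixel the texture is just being
     # magnified, which is the mismatch described above.
     ```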
  8. It is 100% stable once booted; I can run everything I want, Prime95, memtest, you name it. Also, how come it fails 100% of the time on the first two attempts? If it were just unstable it should at least be able to boot sometimes. I've been doing this for a very long time, I know how to overclock, how to test for stability, and what is or isn't stable, thank you. I have some much better guesses; it probably has something to do with the unusual setups I run. The X79 booted from a PCIe SSD, and on the X570 I'm now running RAID 0 on 2x 970 Pro. It probably fails because it boots before the array is fully initialized, so the file could not be read correctly. Or maybe it's the crappy memory timing training, and it trains different values between cold boots and reboots.
  9. Back in 2012 I bought a Gigabyte X79 motherboard. While it kept working for a very long time, and still is working, it had this one quirky bug: to actually boot I had to follow this exact procedure:

     • Press the power button
     • Smash delete to get into the BIOS, because otherwise it will fail to boot 100%
     • Choose save and exit, or anything that will make the system reboot
     • Smash delete to get into the BIOS again, because again it will not boot if you don't
     • Choose save and exit, again. This time it will boot 100%

     Yes, I had to follow this procedure for 7 years; it has always worked, and skipping it has never worked. The thing is, this is only required once I overclock beyond a certain point, I believe it was over 3.8Ghz on the 3930k. Anything at or below 3.8Ghz can just boot normally without all that stunt, but anything beyond 4Ghz requires it to work. And no, it's not because some settings got reset and I'm booting into the system with stock settings or anything stupid like that, it does run the exact overclock I entered in the BIOS, but somehow that stunt is just mandatory. Also, it's been 7 years, so I know the overclock is stable.

     Now I have myself a Gigabyte X570 Aorus Master. I put together this rather complicated build with a custom loop and modified case, and what do I get? A very similar stupid bug. I think there's just something seriously wrong with Gigabyte's boot firmware. Although quite similar, it isn't exactly the same. I haven't figured out the exact pattern yet, but from what I have figured out so far:

     • A straight cold boot is a 100% fail; Windows boot manager comes up with a blue screen saying some kernel file can't be loaded or something
     • It sometimes will freeze, either during the POST logo or inside the BIOS
     • If I go into the BIOS and just select save and exit, it will almost certainly boot

     Although it appears that booting does not require two restarts like before, the fact that it can sometimes freeze at the POST screen makes installing and updating Windows a nightmare. What can happen is that during a critical boot sequence, like the first boot or the first boot after a major update, it may fail with that can't-find-kernel-file stuff, and then Windows will think something went wrong and won't boot normally the next time, but go to recovery or try to roll back the update, which may also fail and leave the entire system corrupted.

     I'm not sure if this time it only happens with overclocking, I haven't done enough testing yet. But I do know the freezing-in-BIOS thing only happens when overclocking; it does not happen with stock settings. But it's not like I have an unstable clock, I'm just using PBO, it's not even a manual overclock like the last system. Also, it's rock stable once it boots; it's just the POST and boot sequence that's extremely fragile and finicky.

     Between these two boards I used an ASUS Rampage Extreme VI for a year, never had any issues, it ran beautifully, a distinct lack of quirky random bugs. For example, on this X570 BIOS, if you go into Settings > Miscellaneous and back out, the contents of the "Settings" category change and gain 3 more options, which would normally be in different places. What a mess.
  10. No, it doesn't change at all because it doesn't need to; SDR content like the desktop or the browser still works in HDR mode like normal. In fact I can run an HDR and an SDR game at the same time.
  11. Chen G

    GPU Tier List?

    more expensive = better. It's not. It's not 50%. And it's common not to be able to use all the modules in a chip; you don't use 100% of the CPU either, even when it says 100%, that's just 100% of the execution units.
  12. HDR setting does not change regardless of what game I run.
  13. Everything you can do in software is safe; that's why they let you do it without voiding the warranty. Also, just in general, frequency alone does not really degrade chips at all, it's the current and temperature that do.
  14. But I put a water block on it... It's from ASUS BTW, if that makes a difference. I have RMAed a totally dead one from MSI before, but I didn't disassemble that one or put a water block on it.
  15. I did a clean installation and now it works fine again. I'm still a little worried because I've had other 2080tis before and I've failed overclocks on them many, many times, and I don't ever remember seeing space invaders. It's always a driver crash, or the game itself starts to glitch out, like MW would have a black hole expanding from the center until it covers the entire screen, that kind of software-level stuff. A full-screen, system-wide glitch like space invaders really worries me. Just to clarify, it looks exactly like this: https://www.techspot.com/news/77445-nvidia-addresses-failing-geforce-rtx-2080-ti-cards.html Mine is indeed Micron memory.
  16. Now I'm at stock clocks, but when I launch any game, the display the game is on loses signal, I have to unplug it to get it back, and the game will have crashed from some DirectX error.
  17. I have a 2080ti I got used, which worked fine, but I just experienced an instance of the space invaders artifact and had to reset the computer. I was running a +1000 overclock on the VRAM, but here's what I'm not yet clear on: what exactly causes the space invaders artifacts? Can they happen purely as a result of unstable clocks, or do they always indicate a hardware fault?
  18. Those just aren't the mainstream kind of fabric sleeve; it's a layer of clear Teflon coating. CPU is around 55 for gaming and ~65 for max load. GPU is just chilling, doesn't even go over 45. The PSU temp readings don't go above 50. I cannot flip it around for cable management reasons; that would make the back a lot messier.
  19. That's what happens when you do sidegrades rather than upgrades. I only upgrade when I can double performance. There's nothing that can double a 6700k, and only a 2080ti can double the 1070. The 3800x has about the same single-thread performance as the 6700k in games, and a 2070 Super is, what, 50% stronger than a 1070?
  20. So a bit about the specs. I really had a crush on the ASUS X570 WS board, but I did not end up going that route because I wanted to put a block on the chipset to get rid of the fan. I had already done another X570 build, so I know the fan doesn't really bother anybody at all, but still, it's about aesthetics. EK only makes blocks for the Gigabyte boards, so that's what I had to use. I picked the X570 Master because it's the only one without display outputs, and I like that; it's more elegant to just not have something you'll never use anyway.

     For storage, I am actually doing a RAID-0 with 2x 970 Pro. I don't think this will give better performance than one PCI-E 4.0 drive, since you are now going through the chipset rather than the CPU. However, I don't like any of the existing PCIE 4.0 drives; they all have silly designs and silly names. Plus they're all TLC or 3D V-NAND, not MLC.

     The PSU still has a fan on it, sure, but I have confirmed that it never actually spins because the efficiency is so high. The system when gaming operates right around the peak efficiency of this platinum-rated PSU, so that's just perfect. I could go get a Seasonic fanless, but it's a bit cheaper to use the used HX850i I already have.

     The GPU is, as mentioned, a reference-design 2080ti. BTW, I like this reference design much better than the non-reference card I had last time. That one was much bigger, but there were quite a few empty spaces on the board and I don't like that. Much better to have everything crammed onto a reference-sized PCB; there's a waterblock on top anyway, so thermals aren't a problem. The EKWB block worked like a charm this time, GPU temps are only like 15 degrees higher than the water at max load, whereas before it was more like 30 degrees.

     Is the full-cover monoblock necessary? Oh hell no, in fact I'm not sure it cools better than the stock fin array, because there's like 1cm of metal between the water and the VRM, and it's just flat metal, no water channel. It's just for aesthetics, and I'm kinda glad I did it, because the plastic I/O cover conflicts with the radiator and I had to remove it; without the big monoblock this would look quite sad without that plastic I/O cover too. The chipset block is also just thick flat metal, so the cooling performance isn't stellar there either; the chipset can get as high as 56 degrees, which is just fine for chips, but still, couldn't they at least make the metal thinner?

     Here is the RGB. With clear water those lights kinda just go through and light up the ceiling, which I did not like that much. Having this white/silver stuff in the water really makes the RGB shine, literally. A lot of it stuck to the walls, but that kinda works too.
  21. While I don't really like colour fluids, I thought clear looked boring, so I tried something which didn't work out as I imagined, but still somewhat better than totally clear, I think... Rheoscopic fluid. You basically just add extremely fine powder to the water; it never dissolves, so it makes these swirly patterns. It looked really cool at first, and although it doesn't clog or dissolve, it doesn't just stay looking like this either... It seems to stick to the walls of the loop, so there's less and less of it actually circulating in the water. I didn't want to add more of it because I'm afraid of clogging, but I also didn't try to get rid of it, so I guess we'll see what happens in the long term. Last time I got lazy and just used a card that came with a block. It was terrible, there was a huge gap between the GPU and the plate and temps were abysmal until I changed to better paste; it was still not good, but at least better than air. This time I'm putting on EK blocks myself. It just looks like fog/droplets, or maybe ice/frost, which is probably a decent effect? No, it does not clog the water channels at all. Now, fully completed beauty shots:
  22. I had a proof-of-concept silent build which I hacked together with existing and used parts; while it certainly wasn't ghetto, it wasn't as refined as I had hoped. I was able to sell it and recoup almost all my investment, so I tried again. This time I am mostly focusing on improving aesthetics, because the cooling and silence already worked pretty well. I started with the same Cooler Master SL600M case. Rather than brown Noctua fans, I got black ones this time. These fans aren't cheap, and they don't make all that much of a difference; however, since these are the ONLY fans that cool everything, it'll still be worth it. I still wanted to improve cooling, so I chose slightly thicker rads, knowing I would have to raise the radiator bracket to make them fit. However, this created an additional problem: I could no longer rely on taping up the radiator bracket to stop air from escaping without going through the radiator, which is absolutely critical to the performance of my design. Since I don't have a 3D printer, black cardboard will have to do. I am basically cutting out a gasket for the radiator to fit through, sealing all the holes to force as much air as possible through the radiator. The chosen hardware this time is X570 with a 3950x. Having just one CCD would be weird because the other spot would be empty, and having one chiplet partially disabled would also be kinda weird, so for best aesthetics I had to choose the 3950x. Admittedly any partially disabled chip is aesthetically unsatisfying, like the 2080ti, but the RTX Titan is just too expensive. Plus, the situation is less severe with monolithic dies.
  23. you need to elaborate, a lot
  24. But how long does it take to drop to liquid temperature after the load is off?
  25. I have an EK monoblock on my 3950x, with liquid metal as TIM. I have so much water that the coolant temperature is basically constant at 24 degrees. So first of all, CPU temperature is like 32 minimum at idle, which seems a bit high for liquid metal. Load temperature is over 64 degrees at 165W SOC+core power, which again seems high for a sub-200w CPU with solder and liquid metal.

     But the really strange thing is that after the CB run is done, it takes like 20 seconds for it to get back down to the 3X-degree idle temperature, and I don't remember seeing this with my old system. From what I can remember, water-cooled GPUs basically fall instantly back to liquid temperature once the load is off. The CPU has an IHS so it's probably not as quick, but I don't think it should take 20 seconds, should it? At first I thought it was a flow rate issue, like the block got heated to like 50 degrees, but I boosted the flow rate and there's basically no difference; it takes just as long to get back down to ambient. So what's going on here?

     I had trouble with the LM because the Ryzen IHS doesn't stick at all; luckily the cold plate does, so I just tried to put a layer of it on the cold plate only. Could it be that, for example, the cold plate and the IHS are only partially connected with liquid metal, and there's an air gap above where the chips are?

     Also, how come I can't get PBO to go above ~160w? I set the power and current limits to over 200 but it stops boosting at ~160w.
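
     Not a measurement of this system, just a back-of-the-envelope sketch of why a block with some thermal mass behind a non-trivial thermal resistance (for example a partial liquid-metal contact) can take tens of seconds to settle. It treats the die/IHS/cold-plate stack as a single lump cooling exponentially toward the water temperature; R and C are rough guesses:

     ```python
     import math

     # Lumped first-order cooldown: T(t) = T_water + (T_load - T_water) * exp(-t / tau),
     # with tau = R * C. All numbers are illustrative guesses, not measurements.

     T_water = 24.0   # deg C, coolant temperature quoted above
     T_load  = 64.0   # deg C, temperature at the end of the Cinebench run
     T_idle  = 32.0   # deg C, idle temperature (idle power keeps it above the water)

     R = 0.10     # K/W, effective stack-to-water resistance (larger if the TIM contact is partial)
     C = 120.0    # J/K, lumped heat capacity of die + IHS + cold plate (rough guess)
     tau = R * C  # seconds

     # Time for the excess over water temperature to decay from the load level to the idle level:
     t_settle = tau * math.log((T_load - T_water) / (T_idle - T_water))
     print(f"tau = {tau:.0f} s, settle time ~ {t_settle:.0f} s")  # ~12 s and ~19 s with these guesses
     # A worse contact (larger R) stretches both numbers further.
     ```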