Kodan

Member
  • Content Count

    25
About Kodan

  • Title
    Member

  1. Oh, I am sorry. Maybe I can help with a couple of pictures.

     This was the setup before the change (when everything worked): there are 9 fans on here going to 2 splitters. 5 go to the upper splitter, 4 to the lower one (that's the bundle of cables in the bottom right). It's 2 of these splitters: they take a PWM cable as input and copy that signal to the other headers. With the 9 Corsair fans, that worked. The case fans are 3 Noctua NF-F12s, and they were hooked up to my CHA_FAN2 PWM header. They sit in the top of the case and are linked together using Y-splitter cables that all end in a single PWM fan header going to the motherboard. With this setup everything could be controlled using either the AISuite software of the motherboard or whatever I set in UEFI. It worked. Always.

     Now for the changes. This is the setup of the radiator now: the 9 Corsair fans are gone and 4 Noctua fans are used instead. Instead of having half of them go to one splitter and half to the other, with two wires running to the motherboard, there is now just one. I hooked it up and it didn't read their RPMs at all. No RPM control works whatsoever. With the Corsair fans this was not an issue - they were hooked up to the same fan headers and it worked.

     I would like the option to deactivate the fans at certain water temperatures, so I opted to hook the radiator up to a chassis fan header, since those can run off the water temperature sensor reading - this is not possible for the CPU fan header. That header is now used for the Noctua fans in the top of the case.
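The "fans off below a certain water temperature" behaviour described above is essentially a fan curve with a zero-duty region. A minimal sketch of such a curve in Python - all threshold and duty values here are illustrative placeholders, not settings from the post:

```python
def fan_duty(water_temp_c, off_below=30.0, full_at=45.0, min_duty=0.35):
    """Map a water temperature (degrees C) to a PWM duty cycle (0.0-1.0).

    Below `off_below` the fans are switched off entirely; between the two
    thresholds the duty ramps linearly from `min_duty` up to 100%.
    These numbers are made up for illustration.
    """
    if water_temp_c < off_below:
        return 0.0          # fan fully stopped (zero-RPM region)
    if water_temp_c >= full_at:
        return 1.0          # fan at full speed
    span = (water_temp_c - off_below) / (full_at - off_below)
    return min_duty + span * (1.0 - min_duty)

for t in (25.0, 30.0, 37.5, 50.0):
    print(f"{t:5.1f} C -> {fan_duty(t):.0%}")
```

Note that a true zero-duty region only works if both the header and the fan support stopping at 0% PWM; many fans keep spinning at a minimum RPM regardless of the duty cycle.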
  2. That thing has to be loud as hell, and turning the fans down so they're not too annoying defeats the purpose of having them in the first place. So the only real use I can think of is in an external assembly of some sort: either build a duct to the back of a CPU tower cooler, or build an external water cooling loop with the radiator far enough away.
  3. It's incredible how much wasted space there is in that tower once you take the covers off. Imagine Dell putting that price tag on an unpainted metal box with an airflow design straight out of a 2005 business desktop. But hey... at least it's watercooled... *cough*
  4. Hi there! I have been running my system with the following fan configuration without any issues across multiple BIOS revisions on a Crosshair 6 Hero:

     1x PWM extension going from CPU_FAN to an Aquacomputer Splitty9 feeding 5 Corsair ML Pro 140mm fans
     1x PWM extension going from CPU_OPT to an Aquacomputer Splitty9 feeding 3 Corsair ML Pro 140mm fans
     3 Noctua NF-F12s bundled via the included Y-splitters going to CHA_FAN2

     I recently swapped the radiator fans for 4 NF-A20s to save myself some wiring and to improve noise, but the new fans do not show up in UEFI at all; RPM is reported as N/A. My new fan setup is:

     1x PWM extension going from CHA_FAN2 to an Aquacomputer Splitty9 feeding 4 Noctua NF-A20 PWM fans
     3 Noctua NF-F12s bundled via the included Y-splitters going to CPU_FAN

     According to the manual, all headers were and currently are capable of driving the fans I hooked up to them. I also tried splitting the Noctua fans across both Splitty9s hooked up to CPU_FAN and CPU_OPT (essentially 2 per header), with no difference in behaviour.

     In UEFI I can now see CPU_FAN reporting RPM correctly, but I cannot make a fan curve for it: whenever I select that header to customize fan curves, it jumps to N/A and remains there until reboot. CHA_FAN2, which I want to use for the external radiator, is also on N/A, both in AISuite and in UEFI. When I switch the temperature sources I can hear the fans changing speed, but I can't see that in the software as there is no RPM reported, and I can't make a fan curve for this header either.

     I have updated to BIOS version 7901 using the EZ Flash feature, but no change. I currently have no control over either set of fan headers, and the ones hooked up to CPU_FAN are stuck at 100%. When I switch all fan headers to DC mode, fan control suddenly works, but I only get a range of 60-100% in the software when making fan curves, even for the CPU fan headers. Any help would be appreciated.
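One way to rule out the monitoring software itself is to read the fan tachometers straight from the OS. On Linux (e.g. from a live USB stick - an assumption, since the post uses AISuite on Windows) the standard hwmon sysfs interface exposes each header's RPM and PWM mode directly:

```shell
# List every fan tachometer the kernel exposes, with its current RPM and the
# matching header's pwm_enable mode (typically 1 = manual PWM, 2 = automatic).
# Paths follow the standard Linux hwmon sysfs ABI; which chip appears (e.g.
# the board's Super I/O controller) depends on the loaded sensor driver.
for hw in /sys/class/hwmon/hwmon*; do
    [ -e "$hw/name" ] || continue
    name=$(cat "$hw/name")
    for fan in "$hw"/fan*_input; do
        [ -e "$fan" ] || continue
        idx=${fan##*/fan}; idx=${idx%_input}   # extract the fan number
        rpm=$(cat "$fan")
        mode="?"
        [ -e "$hw/pwm${idx}_enable" ] && mode=$(cat "$hw/pwm${idx}_enable")
        echo "$name fan$idx: ${rpm} RPM (pwm_enable=$mode)"
    done
done
```

If the header reports 0 RPM here as well, the tach signal itself is not reaching the board (splitter wiring), rather than it being a software problem.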
  5. Hello there! I've been looking to expand my storage, as I am currently on rather old 128GB and 250GB drives plus 2 new 500GB Samsung SSDs. Providing power and SATA data to the drive cage is using up most of the cable space on the back of the case, and since I'm routing water cooling tubing through there as well as fan cables, and I've got a farbwerk on its way here that I need to put somewhere, it's super stuffed. I'd like to replace all the drives with M.2 or PCIe storage. My graphics card is on a riser and there are 2 more PCIe slots in my case that I could make use of; my board only has a single M.2 slot at the bottom, which is currently unused.

     Now, I've seen those sweet fancy ASUS Hyper M.2 cards, and things like the ASRock Ultra Quad M.2 card. They hold up to 4 M.2 SSDs and seem like they'd be what I'm looking for. Thing is, both ASUS and ASRock only ever mention Intel VROC and X399 on their product pages, and I'm on X370. I've also come across the Gigabyte Aorus Gen4 AIC SSD with 8TB (a little much, but it seems to be plug-and-play?), which would have the advantage of already being PCIe 4.0 in case I upgrade motherboards in the future. I don't really care about bootable RAID; I just want the storage on that one PCIe slot, though if I could get RAID to work on my platform that would be a little plus. Does anyone have experience with these cards and could tell me which one would work for me?
  6. So, following a leak on my EK Res X4 that I was unable to fix, I had to replace it. Since it was already sitting on a modded mounting plate for the MO-RA, I went with a Heatkiller Tube 200 D5, which has the added bonus of getting rid of the front mounted pump and enabling me to use a grille at some point in the future, which I may actually give a try. The Heatkiller Tube was a real pleasure to work with; compared to other alternatives on the market it was really affordable, and the quality is amazing overall. It looks and feels high quality and has some serious weight to it. The holders are attached by taking off the top and sliding them over those pillars. They can be adjusted up and down to meet 4 of the 6 available screw holes on the mounting plate. I decided to use the two upper ones, as the inlet tube perfectly aligns with my dual rotary + 90° angle fitting contraption coming from the rad. This eliminates the need to run a tube back down to the bottom inlet. Now all I gotta do is redo the cables and give the entire thing a good dusting!
  7. Hot damn that looks cool. I was holding off on bending hard tubes for my build but after seeing this I feel obligated to do it! That looks absolutely amazing. Well done!
  8. After a long wait for the block, the Aquacomputer Kryographics 2080 Ti Nickel arrived today alongside the active backplate, which means it's time to run a temperature test to see what I've actually gained from watercooling the card. For my testing I fired up Deus Ex: Mankind Divided, loaded a save and stared at the ceiling for 2 hours. With the Afterburner OSD up to monitor the card, this is where we were after those 2 hours on the stock cooler: 81°C, 1980MHz @ 1.025V, 161 fps. This is with all sliders maxed in Afterburner, a 310W BIOS, and a custom curve that I got from running the OC scanner. This is the spread of the factory TIM. The thermal pads are also pretty damn thick, but... I bought the cheapest 2080 Ti available and I got a working card, soooo... I'm not complaining.

     While the quality of the block and the active backplate is exceptional, it's very disappointing that the metal piece that is supposed to go on top of those 4 threads to secure the heatpipe was not included in the box I received. I've already emailed them about the issue and I hope I'll get it soon. However, the heatpipe sits in its groove pretty nicely, and I put TIM between it and the backplate... I will just be using it without the metal cover. I also put a thermal pad on top in case it touches the mid plate of the case - don't want any scratches!

     Here is the front of the card with the block mounted. Installation was super simple and straightforward. The manual is easy to understand, and the screws are color coded in it so you don't accidentally use the longer ones where you shouldn't. The thermal pads are thin and you get 3 strips that you have to cut yourself - plenty to cover every component on the card with some left over. It also comes with 2g of Kryonaut that you're supposed to use on the GPU die and the VRAM. My Palit card was missing several components that I was supposed to be putting thermal pads on, but that was easy to spot. Interestingly, they also put thermal pads on their stock cooler in those areas, even though the components are not actually present on the PCB. Just an empty pad.

     And here it is inside the loop. There's a large air bubble on the left, but it's slowly shrinking and the temperatures are great so far, so I'll just let it do its thing in there until it works itself out of the block. Here's a picture of the system post-upgrade.

     I am very impressed by the block. The temperatures are outstanding and the quality is awesome all around. It's my first time using an Aquacomputer block, but it surely will not be the last! I reran the test in Deus Ex. Here's the result: 45°C, 2100MHz @ 1.050V, 171 fps
  9. That depends on what optimal PBO behaviour looks like, which seems to be hard to achieve. AMD isn't open about how the algorithm reacts to temperatures, voltage and whatnot, so many people just guess their settings, run a few benchmarks, adjust, and run a few more benchmarks. Then you're not sure if your settings are off or the silicon is just bad and you've lost the lottery with yours... it's pretty confusing. On my fast 6 cores the primary limitation seems to be temperature: at 95°C the system just shuts down as if someone pulled the power plug. I'll have to test it with the other, slower 6 cores, or with the fast ones disabled. 4.5GHz is currently not reachable for me, as that requires voltages that drive the temperatures too high.
  10. I would agree, but only if you're planning to use the processor mainly in applications that make use of all the cores almost all the time. In gaming benchmarks there was almost no difference between my manual overclock, which scored 7705 in R20 multi, and just running with PBO enabled - the differences were small enough that they could just be within the margin of run-to-run variance. PBO scored about 7200 in R20. Here's SOTTR:
      Manual OC:
      PBO:
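To put those two Cinebench numbers in perspective, the gap between the runs can be computed directly (scores taken from the post above; whether ~7% exceeds run-to-run variance for R20 is left open):

```python
manual_oc = 7705  # R20 multi score with the manual overclock (from the post)
pbo = 7200        # approximate R20 multi score with PBO enabled (from the post)

delta = manual_oc - pbo
print(f"absolute gap: {delta} points")
print(f"relative gap: {delta / pbo:.1%}")
```

So the all-core workload shows a measurable gap even though the gaming benchmarks do not, which is consistent with games rarely loading all 12 cores at once.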
  11. I've got a little update for everyone still interested in my 3900X journey. I've updated to the latest BIOS version, which includes AGESA 1003 ABB (it's not the release version, so I'm having a funny issue where the fan control is inverted, but everything else works fine). PBO improved slightly, but not by much. I still feel like a manual OC is the way to go for me right now.

      Instead of trying to force my way up to 4.5GHz on the faster CCD, I've dialed it down to 4425MHz on the faster one and 4275MHz on the slower one and looked into undervolting the chip. I've dropped it from the previous 1.45V, which I had set while desperately trying to pass at 4.5GHz, to 1.2875V. Surprisingly, the chip doesn't give a damn. It still holds the clock speeds like a champ...

      However, this makes me question the numbers Ryzen Master shows. At 1.45V it shows all the cores running at exactly the clock speeds I set, for the entire duration of the Cinebench run... with the apocalyptic temperatures included. When I undervolt the chip, the clocks are shown as exactly the same, but the Cinebench score exploded to:

      Right now this feels much superior to leaving the CPU at stock, or to overvolting when overclocking. I will try PBO with these voltage settings when I get a bit more time. Maybe next weekend.

      Can somebody shed some light on what's happening within Ryzen Master? If it reports the clock speeds correctly and they don't budge at either 1.45V or 1.2875V, then why do my scores go up that much when undervolting?
  12. I've been playing around with a 3900X and Ryzen Master, testing with Cinebench R20. I'm on my "old" Crosshair 6 Hero with an EKWB monoblock cooling both the CPU and the VRM, on the latest BIOS for the board and the latest chipset drivers.

      With PBO, all 12 cores boosted to slightly above 4.125GHz, and no voltage increase changed that. Under load they all ran at exactly the same speed. R20 hits the CPU pretty hard, even more than Prime, and temperatures are pretty damn high all around. This resulted in scores floating between 6980 and 7150.

      I've just recently tried manual overclocking. I took my HWiNFO clock readings as a reference to see what I could expect from each of the two CCDs on the chip, since AMD always puts one into the 3900X that is faster than the other (most people speak of the "good" and the "bad" CCD). So far I've gotten 6 cores to 4425MHz manually (the "good" half of the chip) and 4325MHz on the "bad" one, with the voltage at 1.40V. This results in an R20 score of 7500+ (but the temperatures... dear god, the temperatures!!!) - I was getting close to 91°C.

      So far I'm sticking with manually overclocking the chip. HWiNFO shows max clocks of about 4.625GHz on basically all of the first 6 cores with PBO, but I have absolutely no clue where that reading comes from. Clocks in HWiNFO spike that high even when Ryzen Master reports those cores as idle at the time, so... yeah. I will try getting some gaming benchmarks with PBO vs manual if somebody is interested.
  13. I was sorta gambling on getting an A-chip on a non-A card, which can happen. I wasn't even aware that I couldn't flash a non-A chip to another vBIOS. The cooler will probably allow for more performance gains than any other BIOS anyway, aside from the HOF and the Kingpin ones. The holes are about 2cm wide with the plastic inserts in place - large enough to force a Molex connector through, but not SATA. I will look into making my own cables next year.
  14. The build is now running a 3900X and a Palit RTX 2080 Ti after I got a really good deal selling my previous 1080. I also ordered the Aquacomputer Kryographics NEXT GPU waterblock to go along with the new graphics card. Unfortunately, they require 30 days of lead time to make one, so I get a bit of time to tinker with the card in the meantime.

      So far I am very impressed with the 3900X's performance. Rendering is incredibly fast and everything is super smooth. The CPU is a beast.

      Not a lot of luck with the 2080 Ti, though. I was not blessed with the A-chip, so I am therefore unable to flash a higher power limit onto the card beyond the 124% that Palit provides, which brings the card to 310W max. With the fans cranked all the way up it reaches 2150MHz. I'm curious to see what kind of clock speeds the card will be able to hold once the block is here. I also opted for the active backplate.

      Removing the 1080 from the loop also improved coolant flow by a lot - the sensor on the D5 pump is now able to correctly read and display the flow rate, so it really must have sucked before, and I didn't even notice. I took the Alphacool block apart (it had never been opened, even though I had very bad coolant reaction problems before) and boy oh boy was there some disgusting stuff in there. No wonder the flow was bad - the block was clogged! So right now I'm waiting for the GPU waterblock upgrade.
  15. That must be a revision then. This is the riser that was included with the case, and I would've gladly gone with the other one. This one is super stiff and almost always bends so far that it touches the glass; I can not bend it down further without what feels like excessive force that might damage it. The other cable in most product pictures has a wide black band right after the PCIe connector - that one looks far more flexible. That's 13mm OD. The holes at the back are super tight: I can not fit my power cable and a tube through them, so I had to route some cables out the slits next to the PCIe slot covers.