
skullbringer

Member
  • Posts

    2,357
  • Joined

  • Last visited

Awards

About skullbringer

  • Birthday December 19

Profile Information

  • Gender
    Male
  • Interests
    fighting the fanboys and getting bad profile ratings from them.
  • Occupation
    computer stuff
  • Member title
    I bring 'em, ya know

System

  • CPU
    i7 4930k @ 4.6 GHz / 1.4375 V
  • Motherboard
    Asus Rampage IV Extreme
  • RAM
    4 x 4GB G.Skill RipjawsZ 1600MHz 9-9-9-24
  • GPU
    EVGA GeForce GTX 780 Ti K|ngp|n @ 1424MHz
  • Case
    CMStorm Stryker modded
  • Storage
250 GB & 500 GB Samsung 840 EVO, 8 TB of HDDs
  • PSU
    Corsair RM1000
  • Display(s)
BenQ XL2720T, 2 x 1440p Korean
  • Cooling
    custom water loop
  • Keyboard
    Code Keyboard Tenkeyless
  • Mouse
    Logitech G502
  • Sound
    O2+ODAC, AKG K701's, Modmic 4.0


  1. tl;dw: if the white polarity stripe on the cap near the POST code display is on the left, don't run the system or leave it running unattended!
  2. Holy shit, that was unexpected and made my day. Thank you very much!
  3. Plot twist incoming! BSODs still happened after the TIM change to Hydronaut. The last thing I had not yet tried was a BIOS update. The release notes listed a 'microcode update', which gave me hope, since broken CPU instruction handling could explain the blue screens under light load. And holy crap, YES, no BSOD has occurred since! But wait, there is more! Now I can no longer enter the BIOS. Tried everything: different GPU, onboard GPU, CMOS clear, unplugging drives and USB devices, removing all hardware, removing the CMOS battery and draining the board of any power. It still hangs after the ROG logo has been displayed, and the Q-code shows A9. In 5% of cases I can actually enter the BIOS, but then it freezes after two keyboard inputs. Oh, and the onboard NIC now takes about 2 minutes to initialize in Windows, which is additional fun. So I opened a support case with Asus (they supposedly respond within 48 hours). It took 5 days for them to send an answer, suggesting I remove all hardware, remove the CMOS battery and drain the board of any power, even though I had told them I already did that. They also do not offer RMA, and I was told to check with my etailer. Then I wanted to read the Asus ROG RMA guide, but it is located in a forum you need to register for, and you can't, because the registration page has a broken captcha. GJ Asus, what a great customer support experience.
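For anyone who wants to verify that a BIOS microcode update actually took effect: Windows exposes the loaded revision in the registry. A minimal Python sketch, assuming the usual HKLM\HARDWARE\DESCRIPTION\System\CentralProcessor\0 location; verify the exact value names on your own machine.

```python
import winreg

# Read the CPU microcode revision Windows sees for core 0.
# "Update Revision" is the currently loaded revision; "Previous Update
# Revision" is the one the BIOS supplied at boot (value names assumed).
KEY = r"HARDWARE\DESCRIPTION\System\CentralProcessor\0"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
    for name in ("Update Revision", "Previous Update Revision"):
        raw, _ = winreg.QueryValueEx(key, name)
        # The revision is stored as a little-endian binary blob
        print(f"{name}: 0x{int.from_bytes(raw, 'little'):x}")
```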
  4. Built the rig when the 8086k came out, so about 5 months ago. I have run out of Conductonaut for now, but next week I will re-apply Conductonaut and try to saturate the copper IHS. In case it dries out again, I can still switch back to the stock IHS. Thanks for all the suggestions!
  5. Thermal Grizzly Conductonaut between die and IHS, and Hydronaut between IHS and water block
  6. Recently my system has been black screening / blue screening while gaming. I ran some stress tests with Prime95 v29 and FurMark simultaneously to check whether temps were out of control under load. However, the CPU was only getting up to 85 °C with AVX and the GPU only to 50 °C. I also tried reverting the CPU, memory and GPU overclocks; the system still crashed after about 30 minutes. The crashing also occurs at much lower temps (CPU 60 °C, GPU 45 °C) than during the stress test. Since I recently moved my system around, I decided to check my liquid metal application, fearing that the liquid metal might have been displaced by physical shock. I found this: the liquid metal was not liquid anymore. I have done some shunt resistor shorting and liquid metal applications in the past, but I have never seen anything like this. It is basically completely hardened and does not come off the CPU with isopropanol. Instead it had the texture of a lottery scratch ticket, so I had to rub it off with my fingernails. This is how it looks after cleaning: I managed to clean the CPU, but the heat spreader remains stained, as you can see. I went back to conventional TIM for now; temps are about 10 °C higher during stress tests and fluctuate a lot more, but I have not experienced a blue screen yet. My theory so far is that the thermal transfer was so bad because of the hardening that some part of the CPU overheated and shut the system down, even though no such temperature is reported by software monitoring. But I cannot yet explain why the liquid metal dried out; maybe because of the custom raw copper heat spreader? Any ideas? Specs: i7 8086k @ 5.2 GHz / 1.375 V, GTX 1080 Ti @ 2025 MHz / 1.093 V, custom water cooling with EK blocks, 2x 360 mm radiators, Thermal Grizzly Conductonaut under the IHS, Hydronaut everywhere else.
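To illustrate the theory, here is a rough back-of-the-envelope estimate of the temperature drop across the die-to-IHS interface. Every number in it (die area, power, bond line thickness, the conductivity of the degraded layer) is an assumption for illustration, not a measurement.

```python
# Temperature drop across a TIM layer: delta_T = q * t / k, where
# q is heat flux (W/m^2), t is layer thickness (m), k is conductivity (W/mK).
# All numbers below are illustrative guesses, not measurements.

die_area  = 150e-6   # m^2, ~150 mm^2 die (assumption)
power     = 150.0    # W through the die at 5.2 GHz / 1.375 V (guess)
thickness = 50e-6    # m, ~50 um bond line (guess)

q = power / die_area  # heat flux in W/m^2

for name, k in [("intact liquid metal (~73 W/mK)", 73.0),
                ("hardened/degraded layer (~1 W/mK, guess)", 1.0)]:
    print(f"{name}: ~{q * thickness / k:.1f} K across the interface")
```

On these guesses, an intact liquid metal layer drops well under 1 K, while a degraded layer adds tens of degrees between die and IHS, consistent with a localized hotspot that the coarse software readout misses.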
  7. Long time no see, but I thought I would share my progress getting into mining. As you might notice from the v1.2, this is not the original configuration, but I will get to that later. Specs: Intel Core i7-4930k, Asus Rampage IV BE, Corsair H80, three Noiseblocker NB eLoop B12-4's, Corsair RM1000, some Seagate 1TB HDD, two MSI GTX 1070 Armor 8G's, and some random 15-year-old case which housed my first ever gaming PC back in the day. All of these parts I had lying around from older builds, so I might as well put them to work mining instead of sitting on a shelf; apart from the GPUs, which I managed to get for a reasonable deal NEW from Alternate.de. This is the most standard ATX case you could have gotten 15 years ago, just cheapo thin metal, mostly riveted, with a plastic front cover, perfect for modding. So I just removed the front plastic cover and all the pre-stamped metal 5.25-inch cutouts for air intakes. With v1.1 I also added the fan in the side panel to help airflow; originally the side panel was just solid all the way. All the fan mounts are for 80 mm fans. I guess 120 mm was not a thing back then, so fan mounting got creative. If you need a Windows XP license, there is one for you on the side panel xD. Opening up the side panel, you can see the crowded interior, which was never meant to house this much powerful hardware. In the back you can see the 80 mm fan mounts, which are useless to me. In Hotbox v1 I had a fan there, thinking it would help exhaust hot air, but placing it next to the GPUs on the side panel was much more effective, so the PSU fan does all the exhaust for the CPU area. You can also see the fan directly in front of the GPUs, pushing as much air towards them as possible; it gets circulated by the GPU coolers and then exhausted by the side panel fan. This setup actually allows the GPUs to run without throttling in this very confined space, whereas in v1 without the side panel fan, the GPUs were constantly at 93 °C and throttling down. From here you can see the H80 with its fan just sitting casually in the three 5.25" bays, pulling in fresh air from the front. The CPU is running at stock, as no CPU power is needed, and hovers around 50 °C idle. You can also see that the HDD is pushed as far to the front as possible to make room for the motherboard. Since the R4BE is eATX, it barely even fits alongside the HDD cage; the result is the HDD poking out through the front 3.5" floppy bay, lol. Speaking of the HDD cage, you can see the traces of my modding efforts when making room for those GPUs. Below the now-installed HDD would normally be another 5 bays; I drilled out the bottom rivets and cut them out with a Dremel to make room for these beefy GPU coolers. Finally, the heart of Hotbox: the two MSI GTX 1070 Armor 8G's. I got them from Alternate.de for 450 Euros per piece with 30 Euros cashback each, so a pretty good deal considering they go for 500+ used on eBay. Understandably so; they are running amazingly well, and even at the constant 100% fan speed I set, they are reasonably quiet. And they overclock like a beast as well: GPU1: 2088 MHz avg. core clock, 4325 MHz memory ~ 29.5 MH/s DAG; GPU2: 2124 MHz avg. core clock, 4400 MHz memory ~ 31 MH/s DAG. The overall system power draw with this config is 440 W. Originally, Hotbox v1 to v1.1 consisted of an MSI P67A-GD55 and an i5 2500k, admittedly much more reasonable in terms of power usage for a dual-GPU system; with that setup, my total power draw was 380 W.
But since I wanted to add more GPUs, I swapped the Sandy Bridge rig for Ivy Bridge-E, with some disappointment. The third GPU I added (a PowerColor RX 480 8GB Red Dragon) produced so much heat for such diminished returns that it is not viable in this chassis. It barely held 1000 MHz core clock and was constantly sitting at its tjmax of 90 °C. Even when pushing the two GTX 1070s down to the bottom of the motherboard with no clearance between them, and having the RX 480 on top with spare slots between it and the 1070s and a fan blowing directly at it, it would not run reliably at 100% load. So this is where I am at for now. I will probably go back to spec v1.1 with the P67A and 2500k on the Intel stock cooler, but I am also in the process of building a mining rig out of an old slat frame. That way I should be able to run the R4BE to its full potential, with all its PCIe slots populated via PCIe USB extenders. The stars of the show for me are definitely the two 1070's: for about 160 W power usage each, you get about 30 MH/s DAG, 0.3 GH/s Lbry and 450 H/s Equi, which is slightly more than an RX 480/580. Granted, RX cards are cheaper, but they also use more power (about 220 W in my testing). Considering energy costs are quite high in Germany, the more power-efficient 1070s look like the more profitable solution in the long run (see the rough numbers below). Unfortunately, I cannot get any more of them new for now, since prices have skyrocketed even more. But then again, who knows how long this whole mining rush is going to hold up anyway, and whether investing at this point even makes sense... At least I am having fun with some projects and can hopefully flip the cards for a reasonable return in case things go south. So what do you think? RX or GTX for mining? Any way I could improve Hotbox for v1.3?
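Since the profitability argument comes down to simple arithmetic, here is a quick sketch. The 1070 figures are from the post; the electricity price (~0.30 EUR/kWh for Germany at the time) and the exact RX 480 DAG rate are my assumptions.

```python
# Hashing efficiency and daily power cost for the cards discussed above.
# Hashrates and wattages are the post's figures; the electricity price
# and the RX 480 DAG rate are assumptions.

EUR_PER_KWH = 0.30

cards = {
    "GTX 1070": {"mhs": 30.0, "watts": 160.0},
    "RX 480":   {"mhs": 29.0, "watts": 220.0},  # DAG rate assumed comparable
}

for name, c in cards.items():
    eff = c["mhs"] / c["watts"]                    # MH/s per watt
    cost = c["watts"] / 1000 * 24 * EUR_PER_KWH    # EUR per day
    print(f"{name}: {eff:.3f} MH/s/W, {cost:.2f} EUR/day in electricity")
```

On these assumptions the 1070 does roughly 40% more work per watt and saves about 0.43 EUR per card per day in electricity, which adds up quickly across a multi-GPU rig.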
  8. Since Device Manager shows all 16 active threads, the CPU and the BIOS are not your problem. The issue is probably an old Windows install that is still "used to" your old 4-core, 8-thread CPU. For Windows to use all of your cores after the upgrade, you will either need to fiddle with the Windows startup options (see https://answers.microsoft.com/en-us/windows/forum/windows_7-hardware/howto-get-windows-7-to-detect-your-new-multi-core/71519d51-f6cb-47df-b3ff-66c2928d6de4 or the sketch below) or do a reinstall.
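If you would rather script it than click through msconfig, that checkbox maps to the 'numproc' boot option, which bcdedit can clear. A minimal sketch; bcdedit and its numproc option are standard Windows, the Python wrapper itself is just illustrative.

```python
import subprocess

# Remove the 'numproc' boot option that caps how many logical processors
# Windows will use -- the same thing as unticking "Number of processors"
# under msconfig > Boot > Advanced options. Run from an elevated prompt;
# a reboot is required afterwards.

def clear_core_limit() -> None:
    # Show the current boot entry so you can check whether numproc is set
    out = subprocess.run(["bcdedit", "/enum", "{current}"],
                         capture_output=True, text=True)
    print(out.stdout)
    # Errors harmlessly if numproc was never set
    subprocess.run(["bcdedit", "/deletevalue", "{current}", "numproc"])

if __name__ == "__main__":
    clear_core_limit()
```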
  9. Ok, so it was not the 1607 update; the small updates and patches before it had already fixed it. My Windows 10 installation stick was from before the 1511 days, which could explain it. Also, the guys on OCN have linked me to this thread, which has an even later BIOS (5803) than the one on the Asus download page. http://www.overclock.net/t/1624603/rog-crosshair-vi-overclocking-thread
  10. kk, will do; maybe the guys in the OCN Ryzen owners thread have some ideas... Yeah, especially since Ryzen Master requires HPET to be enabled for overclocking to work. But even after trying OC with Ryzen Master, I found it not useful at all, since the system has to reboot anyway to apply changes. So why not just reboot into the BIOS and do it manually, I guess... Did a fresh install of Windows 10, installed just the AMD chipset driver, the Intel LAN driver, and OC tools. I will update this thread once my potato internet connection has managed to download the 1607 update.
  11. Tried that, multiple times. Also, on the current BIOS there is no difference between optimized defaults and factory defaults; I cross-checked that. HPET is off because AMD recommends that, according to Paul's video. I also tried with HPET on and off, same behaviour.
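For reference, HPET as the forced system timer is controlled by the 'useplatformclock' boot option. A small sketch of toggling it, assuming an elevated prompt; the bcdedit option is standard Windows, the wrapper is illustrative.

```python
import subprocess

# Toggle HPET as the forced platform clock via the 'useplatformclock'
# boot option. Deleting the value restores the Windows default (no
# forced HPET). Run from an elevated prompt; reboot to apply.

def set_hpet(enabled: bool) -> None:
    if enabled:
        subprocess.run(["bcdedit", "/set", "useplatformclock", "true"])
    else:
        # Errors harmlessly if the value was never set
        subprocess.run(["bcdedit", "/deletevalue", "useplatformclock"])

set_hpet(False)  # matches the AMD recommendation discussed above
```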
  12. I guess this is just part of being an enthusiast. Debugging weird shit like this is not the most satisfying part of running the newest hardware, but it is fun nonetheless. Also, my EK Supremacy leaked all over the C6H yesterday. Not great, but to quote Linus: "Well, that was exciting." Also, this performance is nothing to sneeze at.
  13. Yeah, I saw that and Paul's too. The BIOS is the latest available, 5704; the Windows power plan is set to High Performance, and HPET is off. Hm, ok, will try. I have had it running at 3050 14-14-14-34 at 1.4 V, and I also played around a lot with different bclk values, multipliers, dividers and shit (rough numbers below); the freezing behaviour was not impacted by any of it and still occurs randomly once booted into the OS. Whenever there was a memory-related problem, the board's Q-code readout always showed 8, for example when the bclk was too high or the memory voltage was not high enough. But when it freezes, it shows 24, as if it were running normally. I will also try updating Windows 10 to 1607; maybe that will fix things? No idea at this point.
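For context on where an odd number like DDR4-3050 comes from: the effective rate is just BCLK times the memory strap. A quick sketch, where the straps are the standard early-Ryzen dividers and the 104 MHz BCLK is an assumed value that happens to reproduce the 3050 figure.

```python
# Effective DDR4 rate from base clock and memory strap:
#   rate (MT/s) = bclk * strap / 100
# Straps are the standard early-Ryzen memory dividers; the 104 MHz BCLK
# is an assumption chosen to land on the DDR4-3050 figure above.

def ddr_rate(bclk_mhz: float, strap: int) -> float:
    return bclk_mhz * strap / 100

for strap in (2133, 2400, 2666, 2933, 3200):
    print(f"BCLK 104.0 MHz x {strap} strap -> DDR4-{ddr_rate(104.0, strap):.0f}")
```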
  14. G.Skill TridentZ 2x 16GB 3200-14, but it is running at 2133 15-15-15-36 on optimized defaults
  15. So my Ryzen system just randomly freezes in Windows. The Q-code readout does not show a crash, but I can't do anything except reset the system at that point. I am running a C6H with the 5704 beta BIOS. Any ideas what could cause this?