
Eggicus Roundplumpius

Member
  • Posts

    26
  • Joined

  • Last visited

Awards

This user doesn't have any awards

About Eggicus Roundplumpius

  • Birthday Apr 22, 2002

Contact Methods

  • Discord
    Eggicus Roundplumpius#3333
  • Reddit
    u/eggnorman

Profile Information

  • Gender
    Male
  • Location
    United Kingdom
  • Interests
    Retro tech, music, IT, baking
  • Occupation
    Service Desk Analyst
  • Member title
    Lord of the Eggs

System

  • CPU
    AMD Ryzen 5950X
  • Motherboard
    Asus Crosshair VIII Dark Hero X570
  • RAM
    32GB Corsair Vengeance LPX DDR4-3600
  • GPU
    Nvidia RTX 3090 - Founders Edition
  • Case
    be quiet! Dark Base Pro 900 rev. 2
  • Storage
    1TB Samsung 980 Pro x2, 2TB Samsung 870 Evo, 5x6TB Seagate IronWolf Pro
  • PSU
    Corsair HXi 1000W
  • Display(s)
    BenQ PD2705Q, Dell 2407WFP
  • Cooling
    Noctua NH-D15 chromax, 5x Noctua iPPC 140mm Fans
  • Keyboard
    Razer Huntsman V3
  • Mouse
    Logitech MX Master 2S
  • Sound
    Harman Kardon SoundSticks III
  • Operating System
    Windows 11 Pro
  • Laptop
    Lenovo ThinkPad X270 (i7, 16GB RAM, 500GB SSD)
  • Phone
    iPhone 11 Pro (Midnight Green)
  • PCPartPicker URL

Recent Profile Visitors

597 profile views

Eggicus Roundplumpius's Achievements

  1. Alright, before I go and daily these settings, could anyone tell me if they're safe?
     • PBO is configured with Auto Scalar and default limits; Asus Core Performance Boost is enabled.
     • Curve Optimiser is set to -15 on almost all cores and -10 on two of them (this is what I found to be stable; -30 wouldn't even boot).
     • Core and SOC voltages are set to Auto, reading as 1.353V and 1.080V in the BIOS respectively. In single-threaded workloads, VID reads between 1.2V and 1.26V.
     • Clocks range between 4.69GHz (nice) and 4.95GHz in single-threaded workloads, and hold steady at 4.6GHz all-core in Cinebench.
     • Max temperatures are around 87°C (a bit toasty, but I think this is all my NH-D15 can do at over 210W all-core).
     • DOCP is at 3600MHz, with the Auto voltage reading 1.352V.
     System Specs: AMD Ryzen 5950X (PBO Enabled, Core Enhancements Enabled), Noctua NH-D15 chromax, Corsair Vengeance LPX 3600MHz/C16 (DOCP Enabled), Asus Crosshair VIII Dark Hero X570, Samsung 980 Pro 1TB x2, LG Optical Drive, Hotplug 2.5 inch SATA Bay, Samsung 870 Evo 2TB, Nvidia RTX 3090 Founders, LSI 9211-8i HBA, 5x Seagate IronWolf Pro 6TB, Corsair HXi 1000W PSU
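One way to smoke-test per-core Curve Optimiser offsets before dailying them is to walk a single-threaded load across each core in turn, since negative offsets tend to fail under light, boosting loads rather than all-core ones. A minimal sketch, assuming Windows with Python and the psutil package installed; the 60-second soak per core and the paired SMT sibling numbering are assumptions, not anything from the post above:

```python
# Hedged sketch: soak each physical core with a single-threaded load so an
# unstable Curve Optimiser offset surfaces as a crash or WHEA error on a
# specific core. Requires psutil (pip install psutil).
import time
import psutil

def burn(seconds: float) -> None:
    """Busy-loop long enough for the pinned core to boost and hold its clock."""
    end = time.perf_counter() + seconds
    x = 0.5
    while time.perf_counter() < end:
        x = (x * x + 1.0) % 1.7  # arbitrary floating-point work

proc = psutil.Process()
for core in range(psutil.cpu_count(logical=False)):
    # Assumes Windows numbers SMT siblings in pairs (0/1, 2/3, ...) on Ryzen.
    proc.cpu_affinity([2 * core, 2 * core + 1])
    print(f"Soaking physical core {core}...")
    burn(60)  # if the machine falls over here, this core's offset is suspect
proc.cpu_affinity(list(range(psutil.cpu_count())))  # restore default affinity
print("All cores survived the soak.")
```

If the machine only falls over on particular cores, easing just those offsets toward 0 (as the two -10 cores already are) is the usual next step.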
  2. To preface this, I've done a ton of research into this problem. I'm not the only one, and RMAs don't seem to help; whatever's going on here is just considered a normal operating quirk of the Dark Hero, the higher-end Ryzen 5000 chips, or both. Simply put, the system will crash at a firmware level every so often. It doesn't matter if it's under load or idle; it can just lock up with no response to even holding the power button.
     This immediately makes me think motherboard, but it was suggested by others that RAM could be the cause. I've tried disabling Gear Down Mode as well as DDR Power Down - this had no effect, and may even have made it worse. Someone else suggested increasing the DRAM voltage, and that yielded some really interesting results. In Asus's BIOS, if you're not familiar, you get an input field for the voltage and a little box to the left showing what voltage would be applied by default. Normally, this field is set to "Auto". When you enable DOCP, it changes to whatever the DOCP profile sets. What's interesting, however, is that the default value in that little box keeps changing. One boot, it might be 1.27V. Another, it could be as high as 1.38V. Often, after a crash, it'll read above whatever you try to set, indicating that the BIOS seems to think the memory needs more voltage. Surely, though, 1.38V is enough? I even tried redlining it at 1.4V, but it still wanted more (which I was not willing to give it; I don't believe that's safe).
     It was also suggested to me that 5950X CPUs can sometimes be very badly binned, and when using PBO this can cause significant instability where certain cores (or even entire CCDs) are less up to the task. Someone even suggested that the IO die could've been badly binned, and that was causing the apparent memory instability. If this is the case, there's little I can do, because I've passed my return window now. I'm not necessarily looking for a solution, although I'll take it if you have one, but rather just second opinions that might help me piece together a picture of what's going on in my little box of thinking sand.
     System Specs: AMD Ryzen 5950X (PBO Enabled, Core Enhancements Enabled), Noctua NH-D15 chromax, Corsair Vengeance LPX 3600MHz/C16 (DOCP Enabled), Asus Crosshair VIII Dark Hero X570, Samsung 980 Pro 1TB x2, LG Optical Drive, Hotplug 2.5 inch SATA Bay, Samsung 870 Evo 2TB, Nvidia RTX 3090 Founders, LSI 9211-8i HBA, 5x Seagate IronWolf Pro 6TB, Corsair HXi 1000W PSU
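Since these lockups happen at a firmware level, one useful data point is whether Windows recorded any WHEA hardware-error events in the lead-up to each crash: on Ryzen, cache-hierarchy errors tend to implicate specific cores and their offsets, while bus/interconnect errors point more toward the IO die or memory. A minimal sketch for dumping recent entries, assuming Windows with Python available (wevtutil ships with Windows; the count of 10 is an arbitrary choice):

```python
# Hedged sketch: dump the most recent WHEA-Logger entries from the System log.
# Run from an elevated prompt if access is denied.
import subprocess

XPATH = "*[System[Provider[@Name='Microsoft-Windows-WHEA-Logger']]]"

result = subprocess.run(
    ["wevtutil", "qe", "System", f"/q:{XPATH}",
     "/f:text", "/c:10", "/rd:true"],  # newest 10 entries, as plain text
    capture_output=True, text=True,
)
print(result.stdout.strip() or result.stderr.strip() or "No WHEA events logged.")
```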
  3. I'm facing a bit of a conundrum, because I just can't seem to get the airflow right with the Dark Base Pro 900 in an inverted layout. Air comes in, flows over the socket area (and then through the CPU cooler) and, to a lesser degree, over the PCIe slots (and through the GPU cooler). The problem is that there just isn't enough airflow to the PCIe slots.
     There's a filtered intake on the front (two 140mm fans) and a filtered intake on the bottom (another two 140mm fans). This puts a bunch of pressure squarely around the CPU area. That wouldn't be an issue for the GPU if Nvidia hadn't designed a cooler that also blows directly into this area, fighting against all that pressure - especially since the other fan on the GPU, the one that exhausts outwards, will be getting bugger all airflow like this. To clarify, I'm using a Founders Edition 3090, so it has a funky cooler.
     What I could do is add intakes on the top, but that spot isn't filtered, so it'll be pretty dusty. I'm also concerned that it'll struggle with the limited airflow there, not to mention that the slits are optimised to exhaust air towards the back of the case, so an intake config could just recycle all the heat coming out of the rear CPU and GPU exhausts. Another option would be to omit the optical drives and add another 140mm fan in their place, but I'm unfortunately one of the few people still pottering about who actually uses optical drives and CDs. If push comes to shove, I'll use an external one, but that seems a bit silly when I have two perfectly good slots in my very large case already.
     I could really use some creative ideas - all are welcome! P.S. This is how Be Quiet! expects it all to work in a standard ATX layout:
  4. From what I understand, Intel RST uses the CPU for its calculations, doesn't it? I'm sure it's not a huge hit, but I figured I might as well use a RAID card if I could. My main concern, assuming the performance would be fine, is just this: could I easily move the array to a new motherboard if I had to? As for the NAS, it's an option I considered but disregarded, because I also want to encrypt the volume with BitLocker, plus I'm in a situation with limited space, so the fewer boxes the better.
  5. From what I've heard, it's dirt slow. Plus, if I'm using a RAID card, I don't have to worry about parity calculations taking up CPU time.
  6. If I was using something like Storage Spaces, yeah, but I've heard some pretty bad things about that. The way I'd imagine setting up RAID is through the motherboard's BIOS and using the chipset's RAID functionality.
  7. The reason I don't want to use the onboard ports is because it means I'd have to keep that motherboard between upgrades. If I'm using a card, I can keep the array or even migrate it if I want. But yeah, if there's no harm in using the chipset 4x slot I'll just do that.
  8. So, I'm planning a new system: an Intel i9-12900KS on an Asus ProArt Z690 motherboard. I need to connect 5 hard drives and configure them in RAID 5, and I figured the most sensible way to do this would be with a RAID card. My concern is that most of the cards I can find are PCIe 2.0 and take 8 lanes. The ProArt configures its CPU lanes as x8/x8 across two slots, but I'm worried that introducing a 2.0 device to a 5.0 link will hurt the graphics card's performance.
     - If I have to configure those 8 lanes at 2.0 speeds, will all the lanes direct from the CPU have to run at 2.0?
     - In fact, is it even possible to negotiate from 5.0 all the way down to 2.0?
     - Would it be more sensible to use the chipset slot, which is electrically wired for only x4? If it has to drop to 2.0, I'll only get 2.0 x4 speeds, but would that matter?
     Any help and suggestions would be appreciated, thanks!
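On the last question, the raw numbers suggest PCIe 2.0 x4 has comfortable headroom for five spinning drives. A back-of-the-envelope sketch; the ~250 MB/s per-drive sequential rate is an assumed figure for a 6TB IronWolf Pro, not a measured one:

```python
# Hedged back-of-the-envelope: does PCIe 2.0 x4 bottleneck a 5-drive RAID 5?
# PCIe 2.0 runs at 5 GT/s per lane with 8b/10b encoding -> ~500 MB/s per lane.
PCIE2_MBPS_PER_LANE = 5e9 * (8 / 10) / 8 / 1e6   # = 500 MB/s
lanes = 4
link_mbps = PCIE2_MBPS_PER_LANE * lanes           # ~2000 MB/s for an x4 link

drives = 5
per_drive_mbps = 250         # assumed sequential rate for one 6TB IronWolf Pro
# A RAID 5 sequential read streams from all members minus parity overhead:
raid5_read_mbps = per_drive_mbps * (drives - 1)   # ~1000 MB/s

print(f"Link budget : {link_mbps:.0f} MB/s")
print(f"RAID 5 read : {raid5_read_mbps:.0f} MB/s")
print("Bottleneck!" if raid5_read_mbps > link_mbps else "Headroom to spare.")
```

Even a best-case sequential read sits at roughly half the x4 link budget, and random I/O on spinning drives is far below that.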
  9. Well, the title says it first and foremost: I regret everything. Nevertheless, I'm in this hole now, so I need to make camp. I have a 16" MacBook Pro with an i9-9880H and a 5600M. Despite the 5600M being a very competent chip, I failed to realise something: it dumps its heat into the same cooler as the CPU. When gaming, the two have to share the same credit-card-sized cooler, and that has some, ahem, interesting side effects. Namely, the CPU will almost never go above 2GHz when gaming. It'll usually just sit around 1.78GHz or 1.67GHz. Sometimes, if it's feeling extra miserable, it'll even go down to 1.3GHz, which is a very fun experience. I'm well aware this is a cooling issue, because ThrottleStop is reporting BDPROCHOT basically constantly. Fun fact: the MacBook Pro 16" will actually throttle even if the CPU is well below 90 degrees Celsius. Why? Well, the VRMs overheat! Fun, that.
     Anyway, question of the day: would an eGPU, by effectively removing the 5600M from the cooling problem, at least get me to base clock while gaming? My reasoning is simply that if the internal GPU is doing nothing, it ought to give the CPU more breathing room, right? I mean, it'd still be like moving from a rubber mask to a pillow - still a smothering that will eventually chime the bells of death by warpage - but an improvement? If anyone has access to at least similar hardware, I'd love to hear what your results were.
  10. Update: 1911MHz is not stock; 1683MHz is stock. This is super weird and stressful, because my card might be killing itself. Does anyone think a card BIOS reflash might do it?
  11. On mine, the stock boost is somewhere around 1600MHz. I wonder if the card somehow got confused and is now just falling back on the most basic setting. I mean, this card isn't a Founders Edition, but it uses the reference design. Could that be it? If so, how do I get it to stop? I would like to have control of my graphics card back.
  12. This has had no effect. I'm really scared now that something in my GPU is just broken.
  13. Update: Severely underclocking the card seems to do okay, but naturally that deals a hit to my performance. Also, the OC Scanner function no longer works; it fails immediately.
  14. My GTX 1070 is stuck boosting to 1911MHz every time it's under load. It causes instability and just doesn't work well. It was fine before I tried to creep up the base clock; I got to a point where it crashed, and now, even after turning it way down, it won't stop boosting. I never touched the voltage - only the power target, base clock and memory clock. All are now back at their default levels. Anyone know what to do?
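One way to see what the driver itself believes, independent of any overclocking utility, is to query the clock ceiling and the active throttle reasons straight from nvidia-smi. A minimal sketch, assuming the NVIDIA driver's nvidia-smi is on the PATH (these query fields are standard, though output formatting varies by driver version):

```python
# Hedged sketch: ask the driver for the current and maximum graphics clocks
# plus any active throttle reasons, straight from nvidia-smi.
import subprocess

FIELDS = ",".join([
    "clocks.gr",                       # current graphics clock
    "clocks.max.gr",                   # maximum graphics clock the driver allows
    "clocks_throttle_reasons.active",  # bitmask of active throttle reasons
    "power.draw",
    "power.limit",
])

out = subprocess.run(
    ["nvidia-smi", f"--query-gpu={FIELDS}", "--format=csv"],
    capture_output=True, text=True,
)
print(out.stdout.strip() or out.stderr.strip())
```

If clocks.max.gr reports 1911MHz, the boost table the driver is reading allows that clock, which would make the VBIOS reflash idea worth exploring; if it reports something lower, a software offset is likely still being applied somewhere.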
  15. This system has always been weird, but just today it's started rebooting (without Windows shutting down cleanly) 20 minutes after start-up. It's done it twice this morning and is due another in roughly 5 minutes, if the timing is consistent. Windows Event Viewer simply logs the event as the system losing power, but the fans never turn off; the system just reboots. I'm not sure what could be causing this, but I had it under pretty heavy load for 20 minutes last night for the first time in a while. Given that Windows doesn't know what it is, I'm worried it might be hardware. What could cause the reboots to be this systematic, though?
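If the reboots really are periodic, the unclean-shutdown events in the System log should show it. A minimal sketch that pulls the timestamps of Event ID 41 (Kernel-Power, logged after Windows comes back from losing power unexpectedly) and prints the gaps between them, assuming Windows with Python available:

```python
# Hedged sketch: list recent unexpected-reboot events (Kernel-Power, ID 41)
# and the time between them, to check whether the ~20-minute cadence is real.
import re
import subprocess
from datetime import datetime

out = subprocess.run(
    ["wevtutil", "qe", "System", "/q:*[System[(EventID=41)]]",
     "/f:text", "/c:10", "/rd:true"],   # newest 10 events, as plain text
    capture_output=True, text=True,
).stdout

# wevtutil's text output includes lines like "  Date: 2022-01-30T09:41:12.123".
stamps = [datetime.fromisoformat(m.group(1)) for m in
          re.finditer(r"Date: (\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})", out)]
for later, earlier in zip(stamps, stamps[1:]):
    print(f"{earlier} -> {later}: gap of {later - earlier}")
```

A metronomic gap regardless of load would argue against simple overheating, which normally tracks load rather than the clock.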