Thavion Hawk

Everything posted by Thavion Hawk

  1. The only cables connected are the 8-pin EPS and the 24-pin ATX. No SATA or PCIe devices, and the PSU is non-modular.
  2. Update: Testing the second of the EVGA 600GDs on one of my workbench PCs resulted in a perfectly fine boot and run. It's a much older system, but it boots and runs fine with the PSU. Specs: Pentium G620, 2x8GB DDR3, 500GB WD Blue 2.5" SSD, ASUS P6H67-M PRO.
  3. It may be the case that the batch is at fault. Thankfully the customer is not in a rush to get the PC, so I'm going to first try swapping in one of the EVGA 500W PSUs that I ordered for stock on hand, once they get to me. The hope was that the system would be done before next week, but early next week isn't unreasonable.
  4. So I'm building a new computer for a customer and I specced them a 500W EVGA GD PSU because it was more than enough power and not all that much money. Sadly the 500GD was out of stock, but the 600GD was not much more, so I ordered that one and just ate the cost on my end. This system's specs are well under even the 500W unit, but I wanted a reasonably good PSU for the system...
     Specs:
     CPU: i3-13100
     RAM: TeamGroup Elite 2x16GB DDR5-5600 (running base JEDEC, not XMP)
     Mobo: Gigabyte H610M S2H (came with BIOS version F2, updated to version F4)
     PSU: EVGA 600GD (I've tried 2 of them so far)
     SSD: 1TB Crucial P3 Plus
     Case: Fractal Design 1000C
     ODD: ASUS DVD-RW (customer wanted it)
     OS: Windows 11 Pro
     As said above, the system as built doesn't power on with the EVGA 600GD PSUs that I first ordered, nor with the one I got to replace it. It does, however, work perfectly fine with my test PSUs, which include an EVGA 500W (100-W1-0500), a 2022 Corsair RM1000x, and an Antec EarthWatts 380W. I tried updating the BIOS to the newest version, but at this point I'm thinking that for some reason the EVGA 600GDs in particular suffer from enough of a startup power dip that it trips the motherboard's power-loss circuitry or some other voltage or current protection. Any ideas? At this point I'm thinking I'm going to return the motherboard and order in a different one, as it is the only other real suspect. I should say that I have done a full install of Windows 11 on the system, but as per my order of operations I have yet to activate the licenses, so replacing the board shouldn't be an issue outside of the time and effort.
     PS: I know a B660 motherboard isn't much more expensive than an H610 one. I'm looking at that option in replacing the board.
  5. Swapping the DIMMs around worked. Clearing the drivers in Safe Mode and disabling processes and services got me into Windows normally. After installing the new drivers I left it overnight in Windows stress testing with no problem. I saved the stable settings as a user profile in the BIOS, flipped XMP on, and re-ran the stress tests without a problem. I'm just so happy I don't need to RMA the CPU. Thanks for responding to this, I really just needed a few outside inputs as a sanity check.
  6. So I got it to POST with three sticks, at which point I entered the BIOS and noticed something I'd missed up to this point. The stick I put into Channel 1 Slot A was rated 2,133 while the stick in Channel 1 Slot B was 2,666. In other words the kits, though both sold with 3,200 CL16 XMP profiles by Corsair, are different versions with different DRAM defaulting to different JEDEC clocks and timings... I've placed the pair of 2,133 sticks into Slot A on both channels and the 2,666 sticks in Slot B so that the system defaults to the lower clock speed, and am now running another full 4x8GB MemTest86 9.4 run. If it passes, I'll do the same with XMP enabled... After all of that I'll try getting Windows to not BSOD. Now the most annoying part of all of this is that the system was running fine until the crashing started! The RAM was in XMP and everything was fine, then it just wasn't.
  7. Before and after the mobo swap, the minidumps indicated that HIDCLASS.SYS, the USB Human Interface Device driver, was crashing. I've got WinDbg set up and use it to analyze the .dmp files. I've not tried running a clean install of Windows given the no-POST problems, but I've booted a Windows 10 PE based on 1909 as well as Ubuntu 22.04 from USB, and both boot and run fine. All of my systems are run through APC UPSes and I have tried swapping circuits to no effect. I only ever have the keyboard, mouse and a flash drive connected to the system when testing, and have tested with and without the GPU installed. Also with another PSU, as stated above. 050523-22453-01.dmp 051123-17281-01.dmp
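     (If anyone wants to check my read on the attached dumps: in WinDbg, !analyze -v prints the bugcheck details and the probable faulting driver, and lmvm hidclass lists the version and path of the HIDCLASS module it's pointing at.)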
  8. Before reading, note that I do know what I'm doing and just want some outside opinions on my diagnosis before plowing on.
     Specs:
     CPU: i7-11700K (no OC)
     Cooler: Cooler Master ML240 (basic 240mm CLC)
     RAM: two kits of 2x8GB Corsair Vengeance RGB Pro CMW16GX4M2C3200C16 (tested with and without XMP enabled)
     Mobo: MEG Z590 Tomahawk WiFi (swapped for ASUS TUF Z590-Plus WiFi)
     GPU: EVGA RTX 3070
     PSU: EVGA 850GA (also tested with a Corsair RM1000x)
     SSD: 2TB Crucial P5
     Case: Corsair 4000D Airflow
     The starting problem was the system freezing and crashing after a few minutes of use. That was followed by the system failing to even POST. Steps taken to diagnose the problem include the following:
     - Tested each stick of RAM with MemTest86: single stick, pairs, and all four sticks. Also shifted a tested stick slot to slot to validate that each slot is okay.
     - Checked temps: fine. No OC and a 240mm CLC meant no problems. Same for the GPU, with three 120mm intakes in a high-airflow case.
     - Swapped the motherboard for an ASUS TUF Z590-Plus WiFi. Now the system boots, but only with one or two sticks of RAM. Three or four sticks cause it to not POST, with the diagnostic lights on the board going from CPU to RAM, holding for a few seconds, then resetting the system. It seems as if the board is doing first-boot memory training, but it just keeps looping with no end.
     - Tested the RAM in another system. All four sticks tested fine, all at once, on my TUF X570-Gaming Plus WiFi w/ R9 5900X.
     At this point I'm thinking the problem is the CPU's memory controller. Just wanted some second opinions before I jump into the Intel RMA process.
     PS: the system boots to Windows with the new motherboard and two sticks of RAM, but crashes to a BSOD after about 20 seconds.
  9. Yeah, the fan is now pulling air away from the HDDs, and I've disabled the VRM fan because it's pointless given the airflow drawn by the CPU cooler. Plus, the way that fan is oriented it's meant to pull air towards the HDD cage, which would cause a pressure conflict. I also mounted a 90mm intake fan to pull air into the case so that it is not relying on the negative pressure of the rear 2x60mm and 1x90mm fans. I honestly failed to fully vet the compatibility of this motherboard and case and have to work around that or else order 5 new cases.
  10. Starting with the good stuff, here's a shot of the CPU, RAM and SSD installed with the small CPU cooler in place. Really, if I didn't know they had no interest in pushing the hardware past strictly stock, I'd have pushed to fit something a bit larger. Then again, the number of TR4/TRX40 coolers that fit a 2/3U server is limited. Low-profile cooling for 32+ cores, anyone?
     As for the case and the build as a whole? Well, the case was picked because it fits a full ATX motherboard, full-height PCIe cards and an ATX PSU. The first problem I found was that the stock layout of the case mounts the PSU in the front right, and when installed that puts the back of the PSU 5mm or so from the SATA headers... Not only does that block the SATA headers that are needed for the 3x8TB HDDs, it also blocks the modular headers on the PSU itself. Thankfully the front mounts, both left and right, are swappable with a few screws, letting me move the PSU to the other side of the case. Now that still leaves both of the PCIe power headers on the PSU blocked by the motherboard's primary EPS12V header, but there is no plan that I know of to install compute-grade GPUs into this system. There is also no plan to install external 3.5"/5.25" drives, so flipping the front bays around won't matter.
     As for stress testing and stability? I'm running a full cycle of MemTest86. With 256GB of RAM, that's just a good idea.
  11. You say that as I notice that I'd posted the last build for this customer in the build log section. Is there a function to move this post, or would it be simpler for me to simply repost there and close this one?
  12. Build log. I've got the parts and figured, like I've done before, I'd post here as I go.
  13. Country: United States
     Games, programs or workloads that it will be used for: Linux compute node?
     Other details: I don't know what the customer wants these for, but I'm building them!
     As ever, my job has steered me into building something amazing. Something not quite Epic, but truly Ripping... That's so bad, I'm sorry. We have a customer that has in the past asked for some really beefy rigs, as detailed on the forums last June. Mid-March of this year they came back asking for more. Not more of the same, but simply More. By that I mean they want systems with more cores/threads, RAM and storage, and all of it in a 2/3U rack mount... Now I told them point blank that I can't go past 128GB of RAM on anything short of HEDT, and they said yes. After a few quotes we came down to the following specs.
     CPU: Threadripper 3970X (32c/64t)
     Cooler: Dynamix A26 (small and fits)
     RAM: 8x32GB Crucial Ballistix 3,200MHz CL16 (they wanted 512GB but didn't want to pay for Threadripper Pro)
     Mobo: ASRock TRX40 Creator (does what they want + 10Gb NIC for inter-node connection)
     Storage:
     SSD: 1TB Samsung 970 EVO (because they don't need a 980 PRO)
     HDDs: 3 x 8TB Seagate IronWolf in RAID 5 (more storage, more better)
     GPU: GT 710 (no GPU compute in these rigs)
     PSU: be quiet! Pure Power 11 500W (enough power, 80+ Gold and modular)
     Case: iStar D-Storm 300-PFS (3U, full ATX, rack mount) + extra Noctua fan for airflow.
     The only thing I'm worried about is the small CPU cooler, but it's rated for the job given enough airflow. Here is the parts picture less the case; I'll upload the build when it's all done.
  14. Update for you as of 4/2/2021. They came back last week and asked us to build two new 3U Threadripper 3970X systems and re-shell these R9 3950X systems into the same 3U cases with 10Gb NICs so they can build a rack-mount cluster... The new build will start Monday, April 5th.
  15. I dropped the voltage on the SOC as you said and, just for kicks, tried setting the timings to 14,15,15,15 because getting 16,15,15,15 when I set 15,15,15,15 was messing with me. Funny enough, it's perfectly stable with that one tighter timing. Honestly, I'm going to put OC'ing back in the box for now. My H100i is keeping the R9 5900X under control, the 4,000MHz CL19 kit of RAM is holding at 1,900MHz with 14,15,15,15 timings, and all of the games that had been CPU-bottlenecked on my 7700K are now very much GPU-limited. No complaints. Thank you for your advice. As I said before, I've got OC experience with Intel and older AMD CPUs but no experience with memory OC and timings. This has been a learning experience. I will no doubt revisit this at a later date, as I'm still waiting for a stable UEFI update with the new AGESA, but until then I'm done.
  16. I'll give it a shot. Temps are not a problem; honestly, I've only pushed it as high as 81C looping Cinebench R20. Not going to turn down the advice though.
  17. I just updated the post using HWInfo64 right before you responded.
  18. I feel stupid on that one. Offsets are nice to know but need a known base value. In any case, here's a screen cap of HWinfo64 with a V-Ray benchmark running to load the CPU.
  19. I'll have to double-check when I get home, but I have both CPU and SOC set with positive offsets, not fixed voltages. I avoid fixed voltages even just for testing OCs.
  20. So I knew the changes to the CPU dies with Zen 3 were going to affect stability with higher-speed memory and FCLK, but I didn't expect it to be this much. I planned to wait for a finished BIOS with the new AGESA code supporting a 2,000MHz FCLK, but jumped the gun a wee bit. Same timings, voltage up from 1.375V to 1.4V, and FCLK/DRAM up to 1,900MHz/3,800MHz. Result: perfectly stable. A few tests in DRAM Calculator later, I think: what about timings? Faster is good, but tighter? Well, up the voltage to 1.425V, keep the faster clocks and tighten to 15,15,15,15 (DRAM Calculator shows it at 16,15,15,15?). Same result: perfectly stable, even when paired with my 4.8GHz all-core OC. Now I may try to push this a bit more when ASUS finalizes a BIOS with the new AGESA code to support a 2,000MHz FCLK, but even still, I just pushed this 4,000MHz CL19 kit to 3,800MHz CL15. Did I get something of a golden sample with either this kit of RAM or this CPU? Or is this just an indicator of what AMD's managed to do to improve generation over generation? Remember, I was using this same mobo and RAM with an R5 3600 until now, and I couldn't get it stable at 3,600MHz CL15 even at 1.425V...
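     (Quick reference for anyone not deep into Ryzen memory tuning: DDR4's transfer rate is twice the real memory clock, so DDR4-3800 means a 1,900MHz memory clock, and holding FCLK at 1,900MHz keeps the 1:1 FCLK:UCLK:MEMCLK ratio that Zen 2/3 wants for the lowest latency. That's why the FCLK/DRAM pair above reads 1,900MHz/3,800MHz.)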
  21. Update: The BIOS reset to defaults and refused to load my saved OC profile after installing my new CPU, but that's not much of a problem. I simply retuned the memory to the same 15,15,15,15 I've been running. Once a stable build of the BIOS using AMD's new AGESA 1.1.9.0 comes out (ASUS dropped and pulled the beta BIOS already), I'll try to push the clock up to 3,800MHz. Until then I'm sticking with what I know works, teamed with twice the cores running a 4.7GHz all-core OC.
  22. My Ryzen 9 5900X just arrived today. I'll install it tonight after work, and once that is done I'll restart from scratch with my memory. I want to test what timings I can get with the kit running at 4,000MHz and 3,800MHz. From what I've read and seen so far, I may as well stick with the 3,200MHz CL15 I've got now, as the unified CCXs are far less dependent on Infinity Fabric and memory latency.