Everything posted by The_JF

  1. @jcdutton @CommanderAlex Both 1 and 3 passed the memtest; this is the second pass for DIMM 3, but the first for 1.
  2. @jcdutton Celebrated too early. I had to restart my PC, and after that it BSODed 3 times right after booting up, while the auto-start applications were loading. It later crashed again and kept crashing on startup. BSOD code: KERNEL_SECURITY_CHECK_FAILURE. Guess I will skip the full 4-DIMM test and go for the DIMM 3 single-slot test.
  3. @jcdutton I updated the BIOS; there were no notes about RAM compatibility in the versions in between, but now that the BIOS is updated I will run a test with all 4 DIMMs tonight and see what happens. And I'm aware of the possibility of BSODs with non-ECC memory, but before the upgrade I made last autumn and winter, I didn't encounter nearly as many BSODs as I do now. The only things left from my old rig are the now-old NVMe boot drive and another SATA SSD. I would get BSODs before I swapped the RAM as well.
     First I upgraded my GPU to a 3080 while still having an i7-8700K and 48 GB (32+16) of 3000 MHz RAM. Once I upgraded to a Ryzen 9 5900X and changed the mobo, I started getting crashes in Warzone, for example. The difference was that now the GPU could actually utilize its full power and would trip my poor 650W PSU, and the PC would shut off without a BSOD. However, I would still get some BSODs every now and then. Then I received my water cooling parts + RAM, so I swapped those out. BSODs and hard shutdowns continued. After upgrading my PSU the hard shutdowns disappeared, but the BSODs remained. By that time I was already mining as well, and with so many BSODs Windows had lost a bunch of DLL files and registry items, so it would corrupt itself and throw a BSOD tantrum.
     Anyway, I will test with all 4, see what happens, and then test DIMM 3 a few more times to see if it passes. It's been a few days of running only 1 and 3; I've been mining, playing different games, and editing, all without shutting down the PC, and I haven't gotten a single BSOD.
  4. @CommanderAlex But it's interesting that the 2 mismatched sticks are giving almost no errors while the other 2 are failing the main tests. I noticed when I ran them together that it was much more stable. I've now been mining, playing video games, installing programs, and editing in Premiere, and haven't had a single BSOD in 2 days, which is a record for the last half a year.
  5. @CommanderAlex After many painstaking tests, I now have some results. XMP is off in all of these. Sticks 1 and 4 are a pair; sticks 2 and 3 are a pair. All tests were done in A2 and B2, except the single-slot tests, which were done in A2.
     Test with 2 and 3: FAIL - 1977 errors
     Test with 1 and 4: FAIL - 5852 errors
     Test with 1: FAIL - 65 errors
     Test with 2: FAIL - 10017 errors, test aborted, 9/11 tests passed with only 1 run
     Test with 3: PASS
     Test with 4: FAIL - 10015 errors, test aborted, 1/11 tests passed with only 1 run
  6. @CommanderAlex Good tip, I think I'm currently running them from different sets. Tonight and tomorrow morning I will do single slot tests as well.
  7. @CommanderAlex Will do. I will do a few more runs with different configurations. But could it be possible for all 4 sticks to be faulty? I bought them as two 2-stick packages. RAM usually lives the longest.
  8. Makes sense. I re-did the test with the other 2 RAM sticks (call them 1 and 3) and got 365 errors. After disabling XMP entirely I got 198 errors with 1 and 3.
  9. So I built a custom hard-tube rig in January this year. I was getting some occasional weird BSODs and misattributed them to a cryptominer. Now that the crashes have become too frequent, I decided to upgrade my SSD and reinstall Windows. The BSODs continued even on the fresh install. I ran the Windows Memory Diagnostic tool and got the result that the memory is bad. So I ran MemTest86 with just 2 of the 4 sticks in slots A2 and B2; the result came back with 5678 errors. I placed the same sticks in slots A1 and B1 and the PC wouldn't even POST. I tried the remaining 2 sticks of RAM that I had removed for the test, and again, slots A1 and B1 won't POST, but A2 and B2 will. Clearly it's not all 4 memory sticks that are at fault, but either the CPU or the motherboard.
     Now the problem is identifying which of these components is actually faulty, so I don't have to send both of them back. Plus, since it's a hard-line rig, I really don't have the mental capacity to rip everything apart to send in the mobo just to find out that the CPU was bad. It's a bit easier to remove 2 tubes for the CPU than 8 tubes to get to the mobo. Ideally I would like to determine the culprit before I rip my system apart.
     System specs:
     Win 10 x64 Enterprise 21H1
     MSI X570 MAG Tomahawk WiFi
     AMD Ryzen 9 5900X
     MSI RTX 3080 Gaming X Trio
     4x16GB G.Skill Trident Z Royal F4-3600C19D
     Seasonic Prime TX 1000W
     Samsung 980 Pro 1TB
     Samsung 960 Evo 250GB
     Samsung 870 Evo 1TB
     These are the waterblocks for the CPU and GPU:
     EK-Quantum Velocity RGB - AMD Nickel
     EK-Quantum Vector Trio RTX 3080
     *Note: I have not had issues with temps or stress tests. The GPU is never over 45 and the CPU never over 70.
     MemTest86-Report-20210904-172240.html
  10. I'm trying to deploy a Lenovo P53 through MDT. It manages only to gather local data and validate it; then I get an error: "A connection to the deployment share could not be made. The following networking device did not have a driver installed. PCI\VEN_8086 bla bla bla". I've downloaded the driver from Lenovo, pushed it into my Out-of-Box Drivers, and updated the deployment share. I made sure that under Deployment Share -> Windows PE -> Drivers and Patches I am pointing at the correct folder for driver injection. But I still get the same error. Am I doing something wrong, or am I missing something? Could it be that my LiteTouchPE doesn't contain the right driver? And if so, how do I force my driver into the LiteTouchPE?
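     For anyone hitting the same wall: one way to verify, or force, the NIC driver into the boot image is to service the LiteTouchPE WIM directly with DISM. This is a sketch; the deployment share path, image index, and driver folder below are assumptions to adapt to your environment.

```shell
# Sketch: inject a NIC driver into the MDT boot image with DISM (elevated prompt).
# Paths and index are assumptions -- adjust to your deployment share layout.
mkdir C:\Mount
Dism /Mount-Image /ImageFile:"D:\DeploymentShare\Boot\LiteTouchPE_x64.wim" /Index:1 /MountDir:C:\Mount

# Add the extracted (INF-style, not .exe installer) driver package recursively.
Dism /Image:C:\Mount /Add-Driver /Driver:"D:\Drivers\P53-NIC" /Recurse

# List third-party drivers in the image to confirm the injection took.
Dism /Image:C:\Mount /Get-Drivers

# Commit the changes back into the WIM.
Dism /Unmount-Image /MountDir:C:\Mount /Commit
```

     If the driver shows up under /Get-Drivers but PXE clients still fail, the WIM served by WDS may be stale; re-importing/replacing the boot image there (or doing a "completely regenerate" on Update Deployment Share) makes clients pick up the new one.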
  11. Solved: We opened up a lot of dynamic ports from the MDT server side towards our satellite office. The firewall in our HQ was dropping traffic on some ports.
  12. So I'm trying to PXE boot a Lenovo P53, but I get a PXE-E18: Server response timeout error. It finds the right IP address and even the file, but the file size is 0 bytes. We usually do this over the LAN where the local MDT server is located, and it works just fine. Now I'm sitting in another office with two different DHCP servers. We have a VPN set up through our Meraki firewall for the internal traffic between HQ and the satellite offices. The HQ firewall is a Clavister. I checked with Wireshark that my connection requests are coming through on ports 67, 68, 69 and 4011, and I got 6 requests on port 69. We have also changed the packet-size allowance between the MDT server and the VPN. BIOS settings are set to UEFI boot only, and the BIOS version is the latest. Any thoughts or ideas on how to proceed with this troubleshooting?
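     For anyone reproducing this capture from the command line, the same PXE/TFTP traffic can be filtered with tshark (Wireshark's CLI). The interface name is an assumption; substitute whatever faces the client subnet.

```shell
# Sketch: capture DHCP/PXE/TFTP traffic with tshark. "eth0" is an assumption.
tshark -i eth0 -f "udp port 67 or udp port 68 or udp port 69 or udp port 4011"

# Narrow the view to TFTP, to see whether data blocks ever follow the read request.
tshark -i eth0 -f udp -Y "tftp"
```

     One thing worth remembering: after the initial read request on port 69, TFTP data flows on ephemeral ports negotiated per transfer. A firewall that only permits port 69 will pass the request but drop the data coming back, which produces exactly this "file found, 0 bytes" symptom.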
  13. I don't delete on my ssd until I'm done with the project, unless I need to go to another shoot. I rarely have any issues with my windows or pc, so it shouldn't affect me at all. How do I force a trim on a drive?
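     For the record, Windows can be told to re-send TRIM for all free space on a volume manually. A minimal sketch, assuming drive C is the target and an elevated prompt:

```shell
# Sketch: force a retrim on a volume (run as administrator). "C" is an assumption.
defrag C: /L

# PowerShell equivalent of the same operation, with progress output:
Optimize-Volume -DriveLetter C -ReTrim -Verbose
```

     On a healthy system the weekly "Optimize Drives" scheduled task already does this, so a manual retrim mostly matters after deleting a large amount of data at once.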
  14. Except that I could start importing them into Premiere, even if it sits in RAM as cache. It's going to end up there anyway after a while in Premiere.
  15. Copying from a Samsung T5 to a Corsair Force LE.
  16. I was transferring 45 GB of video files from my external SSD to an internal SSD. The first 10 GB or so went over at 450 MB/s, and then the cache on the SSD was full. My internal disk usage was at 100%, but only 29% of RAM was used. Since I had about 35 GB or so of RAM free, couldn't Windows use it as a cache? If it's possible, then how do I do it?
  17. I'm not really worried about algae or corrosion, as I will make sure I have the same metals on all ends and will be using a proper coolant. My worry is mainly about opening up the AIO I have and adding the GPU to the loop. Would my pump manage it? For a better cooling effect I would also add an additional 140mm radiator; would the AIO pump manage that too?
  18. The thing is that the Sea Hawk model already has the water cooling block, and I already have an AIO for my CPU. Instead of getting rid of both and buying a full kit, I thought of saving a buck by breaking open the AIO loop and adding an extra radiator and reservoir for the GPU.
  19. I haven't done any custom water cooling before, and I am buying a second-hand 1080 Ti Sea Hawk from MSI. I already have a Corsair H115i for my CPU. My issue is that my case won't fit another 240/280 radiator; the best I could do would be a single 140mm radiator. I'm not sure that would be sufficient, or that I would get any benefit from a single 140mm water cooling loop compared to factory air cooling on a 1080 Ti. So my idea is to combine the AIO cooling with a custom loop for the GPU. Would it be possible to run 280mm + 140mm radiators with an extra reservoir and perhaps an extra pump (if needed) in the same loop for both my CPU and GPU?