
joishw

Member
  • Posts

    39
  • Joined

  • Last visited

Everything posted by joishw

  1. So I've narrowed my case choice down to the Corsair Carbide 600Q for these factors:
       • A 5.25" drive bay, needed for fitting a 16-bay SSD cage
       • E-ATX support
       • Noise dampening
       • 3.5" drive bays (two minimum)
       • Price
     I now have concerns around the placement of a drain valve in the loop; I've seen custom loops done in this case, but none had visible drain valves. http://www.watercooled.ch/corsair-600c/ Where would you put the drain valve, if anywhere? Also, would the system there have an issue with the radiator not being at the highest point, meaning an air bubble could form in the topmost block (which I think in that particular system is an NVMe SSD)? Apologies if this is a stupid question; this is my first dabble in custom loop cooling.
  2. Ah, I understand now; sorry, I misunderstood you. I read it as you using modified LGA 771 CPUs in socket 775 motherboards. I would tread carefully when putting LGA 771 procs into 775 mobos, purely because of BIOS compatibility. That said, the CPU is $10, so there's not much to lose unless you need to buy a mobo too.
  3. What do you mean by modified? They are either socket 775 or socket 1366; I am not aware of any Xeon CPUs which will fit in both sockets. Could you please provide a link or model number for the CPUs and motherboard you intend on using?
     That depends on the type of work you're doing with the PC. In most cases the OS will be more responsive, but if you're constantly loading 60GB render projects into RAM for editing and then switching to a VM, you'll notice a difference.
     How long is a piece of string? It all depends on your workload. Why are you going down the Xeon route rather than a consumer chip?
  4. It didn't like 4.4 at 1.296V; it died after 7 seconds, but ran for 90 minutes at 4.3 at 1.28V. At this point, would you go with more or less voltage?
  5. Cheers. Low 80s under load, right? And your 4.3, was that done by increasing the multiplier?
  6. So I'm at 4GHz and 2400MHz via XMP, stable for a 9-hour run of OCCT large data sets. https://valid.x86.fr/ufht4n I'm using VDROOP disable and auto VCORE voltage. I've had it at 4.3GHz using override VCORE at 1.3V, but with limited stability (system hangs, BSODs, etc.). I've seen people at 4.5GHz using voltages as low as 1.25V; I don't have much experience overclocking. Is there anything else I could be doing differently to achieve higher than 4GHz with stability? I'm not sure where CPU-Z is getting the temp of 52°C; I'm currently idling at 27 - 32°C across all cores, and at load this is the hottest core:
  7. Can you pull back some stats on the CPU usage too? Do the dips appear to have a pattern? (It could be a scheduled task, a scheduled AV scan, or even .NET optimisation tasks.) Are all of the drivers / firmware at the latest release? Is it Windows 10?
  8. Does it show up in the BIOS? Does the motherboard support PXE boot? If it does support PXE boot, does it appear as a boot option? I'm trying to verify whether the BIOS reports it as connected and usable, versus the BIOS just saying it's "enabled".
  9. https://ark.intel.com/products/82932/Intel-Core-i7-5820K-Processor-15M-Cache-up-to-3-60-GHz-?q=5820k
  10. That kind of sounds like the battery on the motherboard is losing charge, but I think if that were the case you would have lost your profiles, unless they're written to a ROM somewhere. Did you have to reset the BIOS time?
  11. Virtual machines, multiple background apps / services, bragging rights. To touch on the virtual machine side of things, you can set CPU affinity so a given machine only uses specific cores. For example, with 3 virtual machines on an 8-core CPU, virtual machine 1 can use cores 0 - 1, virtual machine 2 can use cores 2 - 5 and virtual machine 3 can use core 6, leaving core 7 for the host/system.
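As a quick sketch of that core split (the VM names and core numbers are just the example's, not anything from a real hypervisor config; on Linux a process could be pinned the same way with `os.sched_setaffinity`):

```python
# Hypothetical affinity plan for three VMs plus the host on an 8-core CPU.
plan = {
    "vm1": {0, 1},
    "vm2": {2, 3, 4, 5},
    "vm3": {6},
    "host": {7},
}

# Sanity-check the plan: every core 0-7 is assigned exactly once, no overlaps.
assigned = sorted(c for cores in plan.values() for c in cores)
assert assigned == list(range(8)), "cores overlap or are unassigned"

# On Linux, pinning the current process to the host's reserved core would be:
#     os.sched_setaffinity(0, plan["host"])
print(assigned)  # [0, 1, 2, 3, 4, 5, 6, 7]
```

The same idea applies whether the pinning is done in the hypervisor's settings or per-process in the guest OS.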
  12. Wow ok, so I tapped a couple of times on the pump and....
  13. Apologies for coming back to a closed thread, but does this look right to anyone? Should there be dips in the pump RPM? I've also left the PC idling for an hour, with an interesting graphing result on the temps too. I've enabled logging for the CPU package, the individual cores, and the temps / pump RPM within iCUE, so I'll report back with those. I'll leave the system at idle overnight. I've returned the CPU to stock speeds and am still getting highs of 64°C with simple loads (web browsing, viewing pictures and Outlook).
  14. I'll use AIDA now, cheers. I overclocked to 4GHz after finding the pump settings. I'm OK with the temps when using the extreme pump setting; it makes no noise difference as far as I can tell. I would have liked higher than 4GHz, so I may revisit it.
  15. Windows. I haven't had a chance to play around with Windows Storage Spaces; I may just do that before building and committing to the RAID card. I saw one which can sustain 6GB/s and got reeled in.
  16. Cheers, the SATA ones are going to be connected via the RAID card. I need to think of a strategy for that too. I have 14 and would like to use them as a safe storage medium, but I don't like the idea of shoving them all into a RAID5 volume, for two reasons:
       • My experience with RAID5 write performance is awful; the writes just don't scale like the reads do.
       • NAND endurance: the parity writes would burn through the DWPD.
  17. Sure thing, I already have the SSDs to hand, I have two of them in my current PC, four in my lab server and four sat gathering dust. I want to consolidate the compute and storage.
  18. The price gap is acceptable to warrant the extra cores (£98). It's the jump from the 1920X to the 1950X I don't like (£310).
  19. So I'm planning out my next build and wanted some opinions. I'm planning on using 12 PCIe devices (56 total PCIe lanes) on a Threadripper platform. Here are the devices I'm planning to use:
       • Mobo: ASUS Zenith Extreme
       • CPU: Threadripper 1920X (this may change, but seeing as all Threadripper CPUs have 64 PCIe lanes I guess this doesn't matter; please correct me if I'm wrong)
       • PCIEX16_1: ASUS Hyper M.2 with 4x NVMe drives (16 lanes)
       • PCIEX8_1: GPU (8 lanes)
       • PCIEX16_3: ASUS Hyper M.2 with 4x NVMe drives (16 lanes)
       • PCIEX8/4X_4: RAID card (8 lanes)
       • DIMM.2_SLOT: 2x NVMe drives (8 lanes)
     I may add an Optane drive on the U.2 port, which will push PCIEX8/4X_4 down to an x4 slot, but the Optane will run at x4 so it'll still be 56 lanes. Does anyone see any problems with this, or anything I'm overlooking / missing? I read somewhere that the chipset uses x4 of the 64 lanes for communication with the CPU; is this true?
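Tallying the lane budget from the list above (the x4 CPU-to-chipset link is the figure from the question, treated here as an assumption rather than a confirmed spec):

```python
# Lane counts per slot, as listed in the build plan above.
devices = {
    "PCIEX16_1: Hyper M.2, 4x NVMe": 16,
    "PCIEX8_1: GPU": 8,
    "PCIEX16_3: Hyper M.2, 4x NVMe": 16,
    "PCIEX8/4X_4: RAID card": 8,
    "DIMM.2_SLOT: 2x NVMe": 8,
}
total = sum(devices.values())
print(total)  # 56

cpu_lanes = 64
chipset_link = 4  # assumed lanes reserved for the CPU-to-chipset link
print(total <= cpu_lanes - chipset_link)  # True: 56 fits in the remaining 60
```

So even if the x4 chipset link does come out of the CPU's 64 lanes, the planned 56 still fits; what the arithmetic can't tell you is whether the motherboard's slot wiring actually delivers those widths simultaneously, which is worth checking in the board manual.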