Gorgon
Member · 3,431 posts
Everything posted by Gorgon

  1. You can easily limit the CPU usage by setting the number of cores manually to around half of what is available, so instead of setting CPU to "-1", set the thread count manually. For the GPU you can clock-limit or power-limit it to reduce the average power and improve efficiency. Easy to do with NVIDIA using nvidia-smi:
     nvidia-smi -pm 1                                  # set persistence mode (only needed on Linux and only once)
     nvidia-smi -i <0|1|2|...> -pl <1/2 max wattage>   # power limit; the only thing that works on Pascal, but it also works on later generations
     or
     nvidia-smi -i <0|1|2|...> -lgc 0,<1440|2205>      # lock the graphics clock; 1440 for Ampere & Turing, 2205 for Ada
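     As a minimal example with hypothetical values (GPU index 0, a card with a 350W default limit), you might run something like the following; check your own card's limits first with the query command:
     nvidia-smi -q -d POWER      # show the current, default and max power limits
     nvidia-smi -pm 1            # persistence mode (Linux, once per boot)
     nvidia-smi -i 0 -pl 175     # cap the card at roughly half its rated wattage
     # or, on an Ampere/Turing card, lock the core clock instead:
     nvidia-smi -i 0 -lgc 0,1440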
  2. No such thing as a stupid question, just being stupid for not asking. Same team: 223518
  3. My co-worker Jason (1974-2023), diagnosed just a few months ago with colorectal cancer. Rest in Peace, Buddy - you are sorely missed.
  4. Well, that was exciting - do we have a dead hardware post yet? Went to fire up one of my Rigs with old GPUs in it (3080s) that was working a couple of weeks ago when I staged everything, and it was giving me FSCK errors - luckily I had a spare SSD with Ubuntu 22.04.2 LTS on it, so I got it up and running. Started it folding and - bam - crashed again. Swapped the GPUs into another Rig that has a 2070 Super Hybrid and just got it Folding. So, down a 128GB SanDisk SSD (ancient) and an Aorus X570 Gaming.
  5. Likely just a combination of WUs not reporting in time and/or CPU WUs taking a long time to finish. It should all balance out in a few days. You did make sure you're folding under your username and not as Anonymous - correct?
  6. Typically running a single GPU will yield more PPD than 2 vGPUs because of the Quick Return Bonus, so as long as the GPU is well utilized you're better off just running bare metal and, as a bonus, you free up one CPU thread. Yes, you can just delete the CPU slot. You can run CPU or GPU or CPU+GPU. Just edit the config.xml or use the GUI.
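     For reference, a minimal config.xml sketch of the slot layout (user name, team and thread count are placeholders, and exact options can vary by client version, so treat this as an assumption rather than a definitive template):
     <config>
       <user v='YourUserName'/>
       <team v='0'/>
       <!-- CPU slot with a fixed thread count; delete this whole element to fold on the GPU only -->
       <slot id='0' type='CPU'>
         <cpus v='16'/>
       </slot>
       <slot id='1' type='GPU'/>
     </config>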
  7. Yes, Ubuntu would be a solid choice, but I wouldn't recommend attempting to change over this close to the event, nor starting with a system with 5 GPUs. I'd start with a single system with 1 GPU to get your feet wet. Bifurcation shouldn't make it more complicated - if it works in Windows on your hardware then it should work under Linux. I don't bother with the Advanced Options usually except "pause-on-start"="true" when testing.
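     If you set that option by editing config.xml directly rather than through the GUI, it should go in as a top-level element (a sketch, assuming the v7-style config format):
     <config>
       <pause-on-start v='true'/>
       <!-- user, team and slot elements follow as usual -->
     </config>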
  8. I must admit I have no idea, never having attempted to use one of these devices. You'll have to do some research, but be aware that, as it is just a single 6Gbps eSATA connection, you'll be limiting access from the OS to the Disks and likely limiting the performance of the Array. For the DX517 expansion unit, for example, Synology states in the Specifications:
  9. Likely NOT. These devices only have one eSATA port, and thus to access all the disks they must have a Port Multiplier. These are problematic as they share a single 6Gbps eSATA connection. What you likely want is a real 4-disk JBOD (Just a Bunch of Disks) SAS/SATA enclosure with an SFF connector, plus a Host Bus Adapter (HBA) with an external connector as well as the appropriate cable.
  10. The problem is that finding a good AIO cooler for Threadripper has been challenging. The vast majority of AIO coolers are based on older designs and their coldplate is way smaller than the IHS of the Threadripper CPUs. Alphacool has AIOs specifically for Threadrippers, but you'll have to wait for a mounting kit upgrade for the new socket. A custom loop is the way to go if you must squeeze the most performance out of it. Do note that Wendell uses mainly Noctua air coolers on his Threadripper systems; yes, you may not get the full performance, but they will last forever and NOT leak over your very expensive system.
  11. I would wait for Wendell to test socket TRX50 motherboards and non-Pro 7000-series CPUs. The non-Pro motherboards should have no problem supporting huge amounts of memory on just 4 slots, as they only support RDIMMs (no plain DDR5/UDIMM support), and remember that DDR5 is sort of dual-banked already. Your NH-D15 might work but will likely require offset mounting, which will depend on where the chiplets are on the specific processor, but a TRX50-specific cooler will work much better and handle more of the 350W load.
  12. It comes down to the panel installed in the Display and the electronics supporting it. As @FloRolf says, Rtings does a fantastic job testing Displays. Here's their Fall 2023 roundup of 65" Displays. Also important to know: what's your budget, what kind of content (4K? HDR?) do you want to consume, is it a bright or dark room, and do you want to game on it? I bought a 55" LG C2 OLED recently on Clearance and wow, what a difference compared to the 55" Samsung 1080p panel it replaced. I had to replace the Apple TV it was connected to with an Apple TV 4K (wife appeal factor) and new Ultra 4K HDMI cables to get the eARC working properly between the panel and the Sound Bar.
  13. A 4070 Ti has a TDP of 285W, and each 8-pin PCI Express PEG connector is rated at 150W, so with both cards running at 50% this should not have been an issue, even assuming neither card used a significant amount of the 75W available from the PCIe slot itself (most modern cards apparently don't). I suspect there was a physical problem with the fit of the PSU's or cable's connector at the PSU end on one or more of the connectors/cables. Can you tell if the wire in the cable is 16 or 18 AWG? (It should be 16.)
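     The rough budget, just to show the math (assuming the 50% limit applies to the full 285W TDP):
     285W x 0.5 = roughly 143W per card
     150W per 8-pin PEG connector, so a single cable already covers it, before even counting the 75W from the slot.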
  14. Ooof - out of curiosity which GPUs were connected to these three cables? (and I've added the above to my wall of Fame Shame Flame)
  15. Well after a few iterations the current setup is:
      Lian-Li O11D Air Mini
      Aorus X570 Master 1.1 (new open-box)
      Ryzen 9 5950X (Harvested)
      4 x 32GB XPG Gammix DDR4-3200 CL16 (new)
      2 x 500GB Samsung 980 Pro PCIe3 NVMe (OS)
      4 x 3TB Seagate NAS (Non-SMR) (RAID 10 in Storage Spaces)
      EVGA 240mm CLC
      FirePro W4100 (Display Out)
      Asus TUF RTX 4070 OC (Folding)
      Corsair TX650m (Harvested)
  16. Nice! Hit it (hard) with a Hammer there Spec! Without going for an EK or Alphacool AIO, that's about the best option short of going Custom Loop.
  17. I suspect also that things get a bit slow between the top of the hour and about 10-15 minutes after the hour, when statistics are being collected and possibly other housekeeping is done on the servers. I'd noticed in my Time-of-Use testing that if I configured all my Rigs to start up again at 19:00 EST, when the electricity rates get much cheaper here, it could take 5-10 minutes for all slots to finish downloading and start a WU. I adjusted the start time to 18:55 and they seem to start quicker.
  18. Not a problem. The only dumb questions are the ones not asked. Yes, the solder mask is the green or brown coating applied over the outer sides of the circuit board. It serves to protect the traces on the outer layers from the flux used in the soldering process, as well as to act as an insulator protecting the traces from electrical and physical damage. If any important traces were damaged you'd likely have noticed it by now. It appears this is an ITX (or mATX) motherboard and the traces likely affected are the ones going to the sound headers on the back I/O. Generally, if a trace were broken you'd notice something immediately, and it would be much less likely that it would take weeks or months to appear. If it's working, I'd just relax and enjoy your new build. It is unlikely that any such damage would impact any core component other than the motherboard (i.e. not the CPU, GPU, memory, etc.), so worst case you'd have to replace the motherboard. I had a couple of motherboards with dead 2nd memory channels that I ran like that for a number of years. The memory performance was worse but they still did what they needed to do.
  19. Oh, OK. From here it looks like the solder mask was scraped off, possibly by a stand-off when removing the motherboard, but I can't really tell if any of the traces are damaged due to poor image quality. Without a good macro image, and even then likely without a microscope, it would be difficult to tell visually if any traces were damaged. If it boots and is stable I wouldn't worry about it.
  20. Agreed, especially if the iGPU will be used for the primary display output, but adding a second pair of 3600MT/s memory later may or may not work depending on how good the Memory Controller on that particular CPU is (Silicon Lottery). The safest bet, if you're planning on adding more memory later, is 3200MT/s.
  21. The 5600G should have no problem with DDR4-3200. You can always check to see if that RAM is on the compatibility list for that motherboard with that processor (it isn't, but it should be OK).