Everything posted by Gorgon

  1. Likely there is an issue with the Bridging configuration. It sounds like the LAN (wired network) can forward packets to both the 2.4 and 5GHz Bridge Groups, but the Bridge Groups aren't configured to allow forwarding to each other or back to the LAN. Here is how my router running FreshTomato is set up: the policy basically allows the LAN (br0) to talk to the WiFi (br1). If I wanted the WiFi to talk to the LAN I'd have to add a policy allowing Src: br1 to Dest: br0. Under the hood, most routers run Linux and thus iptables, and all these rules do is allow forwarding of packets between the Bridge Groups; a rough sketch of the underlying rules is below.
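     A rough sketch of what those policies translate to, assuming the stock interface names br0 (LAN) and br1 (WiFi) - yours may differ, so check with "brctl show" first:

         # allow forwarding from the LAN bridge to the WiFi bridge
         iptables -I FORWARD -i br0 -o br1 -j ACCEPT
         # and the missing return direction, WiFi bridge to LAN bridge
         iptables -I FORWARD -i br1 -o br0 -j ACCEPT

     In FreshTomato you'd normally set this through the GUI rather than hand-editing iptables; the GUI policies essentially generate rules like these under the hood.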
  2. You won't. F@H only uses a tiny bit of VRAM (typically under 1-1.5GB).
  3. First make sure there are at least two backups (3-2-1: three copies of the data, on two different media, one off-site) before you start. Normally I'd recommend replacing the existing 4TB disks one at a time with 8TB disks, waiting until the vDev has resilvered before each swap, until all are replaced; then you should be able to expand the vDev from 24TB to 48TB. This may take longer than just creating a new RAIDZ2 and transferring the data, but the system will still be usable during the 8 resilvers. A sketch of the commands involved is below.

     With respect to the MetaData vDev: is it useful? You'd have to do some testing. At any rate, having just one SSD as the MetaData vDev is a single point of failure: if the MetaData SSD fails, all the data in the Pool is gone. If it's a Consumer SSD (like the MX500s) rather than an Enterprise SSD (Optane, Kioxia ...) that would be even more sketchy, as a MetaData vDev will burn through their limited write lifetime pretty quickly. See this thread and others at TrueNAS. For video files, increasing the recordsize to a larger value (1MB), as mentioned in the thread above, might help both MetaData and Pool performance.

     So, if you want to use a MetaData mirror, I'd get some Enterprise SSDs and build a new Pool from the new drives plus the mirrored MetaData vDev, then transfer the files over (which will create the new MetaData on the fly), but you'd have to change all existing file shares to point to the new Pool. Now would also be a good time to assess whether the existing memory is adequate for the new configuration and add more ECC Memory if required.
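     A minimal sketch of the one-at-a-time swap, assuming a Pool named "tank", a dataset "tank/videos" and device names sda/sdx (all hypothetical; substitute your own Pool, dataset and disk identifiers):

         # replace one disk, then wait for the resilver to complete before the next
         zpool replace tank sda sdx
         zpool status tank                 # watch the resilver progress
         # after all disks are swapped, let the vDev grow into the new capacity
         zpool set autoexpand=on tank
         zpool online -e tank sdx
         # larger records for big sequential video files
         zfs set recordsize=1M tank/videos

     Note that recordsize only applies to newly written data, which is another reason the copy-to-a-new-Pool route rebuilds the MetaData the way you want.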
  4. A Ryzen 7 5800X3D would be a no-brainer as it should work in the existing motherboard with the current RAM, or see if you can find a used 3600X or 3700X, or a deal on a 5600X. Also, you didn't list the capacity of the current Power Supply. If it is under 500W that might limit how large a GPU it can support. Your 1080, for example, has a TDP of 180W, a recommended Power Supply size of 450W, and uses one 8-pin PCIe PEG power connector. The 4070, on the other hand, has a TDP of 200W and a suggested 550W Power Supply, and uses one 12VHPWR connector, which will likely require an adapter to two 8-pin PCIe PEG connectors.
  5. Congratulations to @Mxyzptlk for a great showing here. @Shlouski sorry that @jakkuh_t beat you out for 2nd place, but they did have a bit of an advantage. @Justaphf decent showing in this event. Keep up the good fight! @Alex Atkin UK well played managing your output. Trust me, you don't want to know my electricity rate. I'm humbled by the support you've shown. And @miker07, as usual, a great performance. Please go back to your other Teams now - J.K. take that flag, pass me and run with it. And @Baha and @ShampooTime, you both did an excellent job, and however it shakes out, thank you for contributing so much to such a worthwhile cause. And to all the other participants, I humbly thank you for helping out in a cause so dear to my heart. Fsck Cancer!
  6. Whoops! I'll trade you weather though. It's -5C here with some snow on the ground. God I hate winter.
  7. I want to see how the race for 10th position between @Baha and @ShampooTime settles out. Those two have been neck and neck the last few days. Never mind - looks like @ShampooTime stopped folding this morning, so @Baha will likely slide into 10th at the last moment. I've already started shutting some of those inefficient Turing GPUs down. I was hoping to get a 1070, 2070, 3070 and 4070 all folding during the event but ... hardware failures.
  8. Perhaps you should change the end time on Post #1 here to 00:00 December 6th to attempt to contain some of the confusion?
  9. Like I said - I'm fine with Lucky #7. I'm only running the Turing GPUs 16 hours a day on weekdays to avoid the really expensive Time-of-Use rates.
  10. I've been running clock-limited since the start of the event. 2205MHz on Ada and 1440MHz on Turing. Do you really want me to go to stock settings at this point? I'm pretty sure I could gain a rank that way.
  11. Actually the Zotac AMP 4070 Tis I have work just fine in 3-slot spacing. Only about a 5mm gap between the cards, but if you run them power- or clock-limited the lower runs at 48-53C and the upper at 68-72C. Removing the back plate on the lower card would widen that gap another 2-3mm and should improve things. The Asus ProArt 4070 Tis (non-OC) look to be about the same thickness as the Zotacs and likely have better build quality, but they weren't out when I bought the Zotacs.
  12. I didn't notice it until I went looking for it and enabled the display of "username" in HfM which highlights inconsistent names and Teams in Orange.
  13. LOL - I was wondering why my points were consistently lower than expected. Sure enough, one of my slots (RTX 3070) had a typo in the username. So while I was contributing to the Team, it was not adding to my total. 99,567,609 points to the wind. At least it was my slowest GPU.
  14. It's working, but some of the assignment servers are having issues. That's not too unusual:

         17:25:32:WU01:FS01:Connecting to assign1.foldingathome.org:80
         17:25:33:WU01:FS01:Assigned to work server 206.223.170.146
         17:25:33:WU01:FS01:Requesting new work unit for slot 01: gpu:10:0 AD104 [GeForce RTX 4070 Ti] from 206.223.170.146
         17:25:33:WU01:FS01:Connecting to 206.223.170.146:8080
         17:25:33:ERROR:WU01:FS01:Exception: Server did not assign work unit
         17:28:10:WU01:FS01:Connecting to assign1.foldingathome.org:80
         17:28:10:WU01:FS01:Assigned to work server 131.239.113.97
         17:28:10:WU01:FS01:Requesting new work unit for slot 01: gpu:10:0 AD104 [GeForce RTX 4070 Ti] from 131.239.113.97
         17:28:10:WU01:FS01:Connecting to 131.239.113.97:8080
         17:28:41:ERROR:WU01:FS01:Exception: Not connected
         17:32:24:WU01:FS01:Connecting to assign1.foldingathome.org:80
         17:32:24:WU01:FS01:Assigned to work server 206.223.170.146
         17:32:24:WU01:FS01:Requesting new work unit for slot 01: gpu:10:0 AD104 [GeForce RTX 4070 Ti] from 206.223.170.146
         17:32:24:WU01:FS01:Connecting to 206.223.170.146:8080
  15. AM4 is a dead platform with no upgrade path. AM5 will, AMD assures us, be around for at least one or two more generations.
  16. @GOTSpectrum I thought the wrong spreadsheet was linked today, but it's just that the Day 16 starts are on a tab before Day 1.
  17. Those are the diagnostic lights on the Motherboard - see page 1-22 in the Manual. They show the boot progress, and if one gets stuck lit it indicates where in the Power On Self Test (POST) the issue is. As long as it boots and all of them end up off, you're OK.
  18. My NetWare CNE in 3.11 is the only certification I was really proud of. I challenged all the exams and passed them on the first sitting; it took a year of study to do that. I ran a NetWare 3.12 Server at home for years, along with Windows Server NT then 2000, before I finally gave up on Windows Server and NetWare and went with a single SuSE Linux server with an ISDN connection, an iptables firewall, BIND DNS, a web cache and a Samba server (dual 500MB Seagate Hawk SCSI drives). The Linux server was replaced with a DSL modem connected to a LinkSys WRT54G running OpenWRT plus a Synology NAS, and then that was replaced with a vDSL modem in bridged mode connected to an Asus RT-A66U running Tomato, with the Synology replaced by a FreeNAS server.
  19. We have clusters at work with years of uptime still. As long as you stay within the same major SW version you can upgrade the Secondary nodes, promote and upgrade one to the Master role, and finish the job. These are network appliances though, not servers.
  20. I had a NetWare 3.11 Server with over 5 years of Uptime
  21. I actually have to have the heat on here or the rest of the house gets cold. Only 23C in the Basement, 22C on the ground floor and 21C on the 2nd. It's 3C outside currently, going down to -5 overnight, and there's that white stuff on the ground.
  22. You can do a couple of things to reduce your Electricity costs. You can power- and/or clock-limit your GPUs to greatly improve their efficiency: reducing the Power Limit on Pascal or later cards by 40-50% typically only decreases their yield (PPD) by 15-20%, and limiting the maximum GPU clock to 1440MHz for Turing or Ampere GPUs, or 2200-2400MHz for Ada GPUs, gives a similar boost in efficiency. If you have Time-of-Use rates from your Electricity provider you can also set your GPUs to Fold only during the periods when Electricity is least expensive. I'm all about doing the most work at the lowest Power Consumption: I run all my GPUs clock-limited, and I run my older (Turing) GPUs, which are massively inefficient compared to Ada, only during periods when Electricity is least expensive. See the Profiling and ToU threads in my Sig for more information, and the sketch below.
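     For NVIDIA cards, a minimal sketch of both approaches using nvidia-smi (the 120W figure is hypothetical - roughly a 40% cut on a 200W card - and the clock values are the ones quoted above; check your card's supported ranges with "nvidia-smi -q -d POWER,CLOCK" first):

         # cap the board power limit to 120W on GPU 0 (needs root/admin)
         nvidia-smi -i 0 -pl 120
         # or lock the maximum graphics clock instead (Turing example: 1440MHz)
         nvidia-smi -i 0 -lgc 0,1440
         # revert to the default clock behaviour
         nvidia-smi -i 0 -rgc

     These settings don't survive a reboot, so folks usually drop them into a startup script or scheduled task.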