  1. I changed my BOINC team on WCG and R@H to LTT a week or two ago, but I'm wondering how best to track the points earned exclusively while a member of the LTT team. I have dug through the various stats sites and can't find a good way to break it down, as it looks like switching teams carries all of the points earned historically over to the new team.
  2. No idea how that is possible, Rosetta is not distributing GPU work units:
  3. How long is your queue set for work units? R@H has a lot of short deadlines, so if you are keeping more than a day of work in your queue and not getting through it, R@H will almost always run in high priority due to the shorter deadlines and you will ultimately fail to meet task deadlines. I dropped my total queue to half a day (0.25 days minimum plus 0.25 days additional) and have had no problems with cancelled tasks since. Also, R@H doesn't use the GPU, so I'm not sure why you're having problems with GPU work units.
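For reference, the same half-day buffer can also be set locally instead of through the web preferences by dropping a global_prefs_override.xml into the BOINC data directory. A minimal sketch of that file, with the 0.25/0.25 values matching the queue settings above:

```xml
<!-- global_prefs_override.xml in the BOINC data directory -->
<!-- Overrides the web-based computing preferences on this host only -->
<global_preferences>
    <!-- "Store at least X days of work" -->
    <work_buf_min_days>0.25</work_buf_min_days>
    <!-- "Store up to an additional X days of work" -->
    <work_buf_additional_days>0.25</work_buf_additional_days>
</global_preferences>
```

After saving it, tell the client to re-read the file with `boinccmd --read_global_prefs_override` (or Options -> Read config files in the BOINC Manager).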
  4. IBM/WCG is creating a project not only for the current COVID-19 pandemic but also as a framework for future research projects of this type: https://newsroom.ibm.com/2020-04-01-Your-Computer-Can-Help-Scientists-Seeking-Potential-COVID-19-Treatments, https://www.worldcommunitygrid.org/forums/wcg/viewthread_thread,42232
  5. Same here, there is lots of valuable science to be computed in other projects as well, so might as well make sure this momentum can be carried forward. If you want to prioritize one project over the other, change the resource share for each project in their respective account pages: Rosetta@Home: "Rosetta@home preferences" -> "Resource Share" (you will have to hit Edit at the bottom). World Community Grid: "Settings" -> "Device Manager" -> "Device Profiles" -> whichever device profile you are using, probably just Default -> "Project Weight" (right at the bottom).
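Worth noting that resource share is proportional, not a percentage cap: BOINC splits CPU time by each project's share of the total across all attached projects. A quick sketch of the arithmetic (the share values here are just example numbers):

```python
# Example resource shares; BOINC allocates CPU time proportionally
# to each project's share of the total.
shares = {"Rosetta@home": 300, "World Community Grid": 100}

total = sum(shares.values())
for project, share in shares.items():
    # Each project's long-term CPU-time fraction is share / total.
    print(f"{project}: {share / total:.0%} of CPU time")
# → Rosetta@home: 75% of CPU time
# → World Community Grid: 25% of CPU time
```

So leaving both at the default of 100 splits CPU time evenly, and a 300/100 split gives the first project three quarters of the time over the long run.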
  6. The F@H server page has some juicy changes today. Looks like 2 more Azure servers, and the software version has been changing from 9.5.2 to 9.6 across their servers over the course of the day, which must account for some of the downtime. Hopefully that, paired with their new Oracle cloud server and potentially other resources, means they are gearing up for a dramatic increase in WU deliveries.
  7. I use Proxmox for my server cluster with Ceph running underneath for host failure tolerant storage. Works great and performs well.
  8. Been pretty steady with WUs for the past 24 hours or so, must be lucky or have good timing. Although looking at the server status page, last week when I first got set up they had an assign rate of maybe 50k/hr. Those Azure VMs must have helped scale up as the total assign rate is more often than not over 100k/hr. Hopefully that is a good sign of things to come, especially if LMG gets their WU server up this week as per the comment about asking Linus on his Beat Saber stream.
  9. Unfortunately not without DDoSing the F@H servers. You could also run Rosetta@Home at the same time, since its work units are CPU-only and it is running COVID-19 WUs right now. They have a decent amount of work units available.
  10. Looks like they have 1.05M work units currently being worked on with another 22k in the tank. Lots of work to go around. Also at an estimated 347 PFLOPs. https://boinc.bakerlab.org/rosetta/server_status.php
  11. To see GPU utilization, you have to change one of the graphs to CUDA (on Nvidia) or Compute_1 (on AMD) in the Task Manager GPU view, or use something like GPU-Z.
  12. You could also run Rosetta@Home as they are CPU only work units and they are pushing out lots of COVID-19 WUs.
  13. You could run Rosetta@Home in the meantime, they have COVID-19 WUs as well and they are a CPU only project. It plays well with FAH from my experience with it.
  14. About the same number of attempts here before finally getting one. It also took a long time to download, so I would say the work servers are pretty saturated on bandwidth. Hopefully @LinusTech is able to get the infrastructure up that they mentioned on the last WAN Show relatively soon.