
ffbr

Member
  • Posts

    193
  • Joined

  • Last visited

Everything posted by ffbr

  1. Still having problems with Universe choking my servers. Currently sitting on about 800 jobs trying to upload. Had to move two servers (48 cores) to Rosetta as they were not getting jobs from Universe. Hope it normalizes soon so I can try to make a break for it.
  2. It works, that is exactly what I did for the previous throw.
  3. I signed up directly from the BOINC manager when adding the project (it lets you choose between connecting or creating a new account). Then you can connect to the website and change the settings. Never managed to sign up directly on the website.
  4. Couple of notes/questions about Universe@Home: - In order for our contribution to be counted in the Pentathlon, do we need to check that: Or does it work without? - I noticed that the ULX tasks are a pain, with a points/second rate way below the BHspin tasks (on the Epyc server I get 0.26 pts/s for BHspin vs 0.066 pts/s for ULX). You can choose which subproject to give priority to: On top of that, results for ULX are big and a pain to upload given the current load on the servers.
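The pts/s gap described in the post above can be sanity-checked with a quick calculation. The two rates are the ones quoted for the Epyc server; they are observed values for that one host, so treat the result as a rough ratio, not a project-wide constant.

```python
# Rough comparison of the two observed Universe@Home rates quoted above.
# These numbers are from one Epyc server; other hosts will differ.
bhspin_rate = 0.26   # points per second (BHspin task)
ulx_rate = 0.066     # points per second (ULX task)

ratio = bhspin_rate / ulx_rate
print(f"BHspin yields ~{ratio:.1f}x the points/second of ULX")

# Over a full day on one core, the gap compounds:
seconds_per_day = 24 * 3600
print(f"BHspin: {bhspin_rate * seconds_per_day:.0f} pts/day per core")
print(f"ULX:    {ulx_rate * seconds_per_day:.0f} pts/day per core")
```

So at these rates a core on ULX earns roughly a quarter of what it would on BHspin, which is why prioritizing the BHspin subproject pays off during the Pentathlon.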
  5. But as you already have one, you might as well add 2, as it would stay on the same line.
  6. I am keeping all-in on Universe, but if the upload situation is not sorted out tomorrow I will switch back to Rosetta. Currently some systems are finishing WUs faster than they can upload them... Just realized that I had a server idle for a couple of hours as Universe was refusing to send jobs (too many uploads).
  7. If you go for fire-and-forget, Rosetta is the way to go. As @Windows7ge said, make sure to have joined the LinusTechTips_Team on the Rosetta website.
  8. Universe is starting to clear my WUs. What a disastrous start; the bunkering was a literal DDoS...
  9. Apparently they only have a 1 Gb/s line to their servers, and it is totally saturated by the debunkering. I am sitting on 1-2 million for Universe... wish it would upload.
  10. Never used Discord on my end, would not mind. Seems the Pentathlon broke Universe; impossible to dump my tasks. According to the log their servers are not responding. Seems 40 out of 1300 have uploaded. Side effect: I have 50 cores sitting idle as Universe refuses to send more tasks due to too many uploads waiting. Gonna grab a coffee and switch to Rosetta while it calms down.
  11. At spot price they run cheaper; prices fluctuate depending on demand. Don't have the price in front of me, but I think two 64-vCPU instances in a compute configuration (limited RAM) would run around $1/h.
  12. Yes, I managed by registering directly through the BOINC manager after adding the project. You can then connect normally to their website.
  13. Yes, makes sense. I have been stockpiling since yesterday (600 ready to upload as of now, no idea if that's good or not) and plan to do controlled releases depending on what the other teams are doing (no need to be too visible :D). A shame that BOINCstats does not manage to connect to Universe@Home; that would have been good for my overall stats (yes, yes, for science and glory).
  14. Anyone running/planning to run Universe@Home?
  15. Had that last week or so with a 140mm case fan that decided to split in half. Luckily it was at the bottom of the case and no other components were affected.
  16. I have been looking at the numbers for numbers@home for fun. My GPUs (a T4 and a 2070) average 6 min per task, so one system can run approximately 10 tasks per hour. On average one task gave me 65 points. So if I take the current winning team, over the last 12 hours they produced about 92,795 points/hour, or 1,428 tasks/h, which represents about 143 GPUs. For our team we have the equivalent of about 35 GPUs.
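The estimate in the post above chains three observed numbers together; here is the arithmetic spelled out. The 6 min/task and 65 points/task figures are averages from one T4 and one 2070, so the final GPU count is only a ballpark.

```python
# Back-of-the-envelope GPU estimate from the figures quoted above.
# Assumes every GPU matches the ~6 min/task, 65 pts/task averages
# observed on a T4 and a 2070; faster or slower cards skew the count.
points_per_hour = 92795       # leading team's rate over the last 12 h
points_per_task = 65
tasks_per_gpu_hour = 60 / 6   # one task every ~6 minutes => 10 tasks/h

tasks_per_hour = points_per_hour / points_per_task
gpus = tasks_per_hour / tasks_per_gpu_hour
print(f"~{tasks_per_hour:.0f} tasks/h, roughly {gpus:.0f} GPUs")
```

Running it reproduces the ~1,428 tasks/h and ~143 GPU figures from the post.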
  17. And if graphics cards are available, crunch the javelin (numbers@home) on them, bunkering in between throws.
  18. Just noticed the color of the bird! Well, hello from the north.
  19. I would like to know what they feed those cows... Our results on the javelin are not so good. Seems we did not manage to carry the momentum over from F@H.
  20. Finally got 5 min to launch it; it's beautiful: Not certain I provisioned enough disk space...
  21. Update on resource use on my end: - maintaining 32 cores on Rosetta - one 2070 on numbers (my AMD cards are slower than my CPUs, so I stopped them; better to use the CPUs) - 4 T4s on numbers - starting a 32-core Epyc server on the city run (don't remember the name). Unfortunately (or not) I have to take the T4s out tonight. We have a pretty cool project to run: basically a clustering study to assess local testing capabilities for COVID-19. If @Ithanul thinks I should strategically switch projects, let me know.
  22. My 2070 eats about one task every 7-9 min.
  23. Tried deployment on WSL; appears to work for the CPU tasks, but didn't get access to the GPU (and no time to go check if it's possible :D).