About Windows7ge

Profile Information

  • Gender
  • Interests
    Computer networking
    Server construction/management
    Writing guides & tutorials


  • CPU
    AMD Threadripper 1950X
  • Motherboard
  • RAM
    64GB G.Skill Ripjaws4 2400MHz
  • GPU
    2x Sapphire Tri-X R9 290X CFX
  • Case
    LIAN LI PC-T60B (modified - just a little bit)
  • Storage
    Samsung 960PRO 1TB & 13TB SSD Server w/ 20Gbit Fiber Optic NIC
  • PSU
    Corsair AX1200i
  • Cooling
    XSPC waterblocks on CPU and both GPUs, 2x 480mm radiators with Noctua NF-F12 fans
  • Operating System
    Ubuntu 18.04.3 LTS

Recent Profile Visitors

17,657 profile views
  1. Bringing up a topic I made over two years ago? To whom it may concern (I haven't read OP's question): I did run that 50/50 anti-freeze mixture in that loop, and it went for well over a year with only one real issue: bubbles. The anti-freeze behaves kind of like soap. The bubbles don't pop fast; they stick around and build up. Be careful what pump/res you use, as it can exacerbate this problem (mainly bay res/pump combos). If we're talking about putting it in a computer, I wouldn't. The stuff stinks (smells), and its toxic properties don't make it something you want around pets or small children.
  2. This is partly where I got the idea of virtualization for bunkering purposes: you could just save an instance of the vDisk, then create and run as many VMs as you have time to bunker with. Unfortunately Oracle VM VirtualBox has way too much overhead. Windows in general has way too much overhead, and not everyone would be comfortable using Debian+QEMU and optimizing the VMs for performance. So that may be one benefit to virtualizing BOINC: bunkering would be much easier and more efficient. Overhead for being a VM should be minimal, at least with Debian+QEMU as the hypervisor, and it'd give you a full additional layer of control over sharing resources to projects. Add a GPU, switch a GPU, add CPUs, etc. I'll probably just go ahead and build it anyway. Maybe I'll come up with another purpose for it some day that can seriously leverage the VM aspect of it.
  3. So that's how that works. Up until now I thought it only let you run one project at a time. So there's no immediately apparent benefit to virtualizing the process. Really just a 1-3% performance reduction with no return. Surprisingly, not actually:
       • Install Debian.
       • Install virt-manager (or Cockpit) & ovmf (QEMU+UEFI support for VMs).
       • Enable IOMMU groups on the south bridge (AMD, at least in the case of TR) or VT-d (Intel).
       • Enable IOMMU in GRUB.
       • Disable the GPU driver from loading on system startup. This can be done on a per-device basis, or just blacklist the driver system-wide.
       • Pass through the GPU and everything else in its IOMMU group to the VM.
       • Run lstopo and edit the VM's XML file to pin the vCPU cores to physical cores. This helps performance quite a bit, especially when you have more than one NUMA node, like on TR or a multi-socket motherboard.
       • Enable huge pages.
     From there you're pretty much done. You can set up the VM with drivers (OpenCL/OpenGL/CUDA), install BOINC, then duplicate the VM's virtual disk and attach as many instances as you have physical GPUs to give them, and like that you're done. Assuming nothing goes wrong (something always goes wrong) I could do this in a day or less. It's on my to-do list to write a guide on how to set up low-overhead/high-performance VMs on Debian. Gaming in a VM is extremely doable with the right hypervisor & configuration.
     Yeah, that would go over well. "Dear Santa, I've been good (mostly) all year and I just want one thing: to build a crazy VM server. So could you maybe drop an EPYC 7601P, 128GB of DDR4 RAM, an SP3 motherboard with 8 PCIe slots, a pair of Corsair 1000W PSUs, and eight R9-series high-end AMD GPUs in my stocking? That's all I ask. P.S. And a 2TB NVMe SSD. P.P.S. And a 225V/15A circuit breaker; 115V/10A isn't going to cut it. Thanks, Windows7ge"
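For anyone following along, here's roughly what those steps translate to on Debian. Treat it as a sketch: the package names, PCI IDs, and driver name below are examples for a setup like mine (an R9 290X), not exact values for your hardware — find yours with lspci.

```shell
# Rough sketch of the passthrough prep described above (Debian).
# All PCI/vendor IDs are placeholders -- find yours with:
#   lspci -nn | grep -iE 'vga|audio'

# Hypervisor tooling, UEFI firmware for guests, and lstopo (hwloc)
sudo apt install qemu-kvm libvirt-daemon-system virt-manager ovmf hwloc

# Enable IOMMU at boot: add to GRUB_CMDLINE_LINUX_DEFAULT in
# /etc/default/grub -- "amd_iommu=on iommu=pt" (or "intel_iommu=on"
# on Intel), then regenerate the config:
sudo update-grub

# Keep the host driver off the GPU: either bind it to vfio-pci by ID
# (example IDs for an R9 290X + its HDMI audio function)...
echo "options vfio-pci ids=1002:67b0,1002:aac8" | sudo tee /etc/modprobe.d/vfio.conf
# ...or blacklist the driver system-wide:
echo "blacklist radeon" | sudo tee /etc/modprobe.d/blacklist-gpu.conf
sudo update-initramfs -u

# After a reboot, inspect the IOMMU groups and the CPU/NUMA topology:
find /sys/kernel/iommu_groups/ -type l
lstopo

# CPU pinning then goes in the VM's XML (virsh edit <vm>), e.g.:
#   <cputune>
#     <vcpupin vcpu="0" cpuset="8"/>
#     <vcpupin vcpu="1" cpuset="9"/>
#   </cputune>
```

The pinning matters most on multi-NUMA-node parts like TR: keep each VM's vCPUs (and ideally its memory) on the node its GPU hangs off of.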
  4. I don't know how happy the OS would be running 4, 5, 6+ instances of the BOINC client. Not to mention the BOINC manager would get confused connecting to localhost unless each boinc-client instance was given a unique port number. Configured normally, I'm going to guess each project would just fill the queue based on the resource share % you configured (which determines how much work you request) and it'll just run every project at once.
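To sketch what "a unique port number" per instance would look like: each boinc-client needs its own data directory and its own GUI RPC port (the default is 31416). The directories and ports below are arbitrary examples; the actual launch commands are left commented since they need BOINC installed and root.

```shell
# Sketch: one data dir + one unique RPC port per boinc-client instance.
for i in 0 1 2 3; do
  dir="/var/lib/boinc-instance$i"
  port=$((31416 + i))   # 31416 is BOINC's default GUI RPC port
  echo "instance $i: dir=$dir port=$port"
  # sudo mkdir -p "$dir"
  # sudo boinc --dir "$dir" --gui_rpc_port "$port" --daemon
done

# The manager or boinccmd then targets one instance by port, e.g.:
# boinccmd --host localhost:31417 --get_state
```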
  5. That's what I'm trying to figure out. Yes, each GPU would pick up a task, but would they pick up tasks from different projects at once, or would it only run one project at a time, switching between projects periodically? I'm basically trying to justify playing with EPYC, virtualization, & GPU pass-through, but if there isn't any real benefit to segregating the projects & GPUs then that kind of ruins the plan I had going...
  6. Welp, if anyone wants to gun for my position on WCG (@Gorgon being the most likely candidate), put it into overdrive, as my productivity is now down ~32,000 credits per day (I'm telling you before BOINCstats even does). My largest server is now offline and will remain offline until further notice (I'm going to guesstimate 2 weeks) for reasons I will post about in a Status Update. So while I'm down, GO GO GO!
  7. Hypothetical scenario: let's say I had 4 GPUs and I signed up for four projects, setting the distribution to 25% each. Would each GPU pick up a task for a different project, or would all the GPUs run a single project for a set period of time, then switch to another project, and so on and so forth...
  8. The way I understand the BOINC software, you can only run one project at a time. I believe it does allow you to set it to switch between projects on a time basis, but you can't run two different projects at once (which is why I'd like to explore virtualization). Check your logs. If it says it tried to fetch work but there's no work available, then I believe it's out of your hands (unless your preferences on the site say not to give you work for the jobs they have available).
  9. There isn't a significant number of things that can go wrong. Port forwarding as a feature is fairly simple, though as Oshino Shinobu mentioned it could be firewall related (even though the point of port forwarding is to open a path through the firewall). The one thing that stands out as odd to me in your included image is that the WAN interface says eth4, when the WAN is your DSL connection. From experience, even though it says WAN it could very well just be referring to the LAN. By any chance was this a setting you configured manually, or was it just there when you hit OK/Apply?
  10. The only thing still powered while the system is off would be the 5V standby rail the PSU uses for power-on. This keeps power going to quite a few components on the board when the system is "off". So the standoffs aren't a bad thing to check, but I'd start to question whether the motherboard is good or not. Nothing else should be receiving power.
  11. If you're running directly off of that then it shouldn't be a double NAT issue. And this also means your ISP connection type is DSL. I'm going to assume port forwarding on DSL works like any other since I've only heard people having serious issues when using LTE connection types due to CG-NAT. Did you try using an online port tester? They've worked for me when I can't directly test the open port. https://www.yougetsignal.com/tools/open-ports/
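If you want a quick check from the command line as well, here's a sketch of a local TCP probe. Note the caveat: this only tests reachability from wherever you run it; the online tester linked above checks from outside your network, which a local probe can't replicate. The address/port in the example are placeholders.

```shell
# Quick TCP port probe (sketch). Returns success if the port accepts a
# connection from THIS machine within 3 seconds.
check_port() {
  # bash-specific /dev/tcp trick, wrapped in a timeout
  timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# Example usage (substitute your router's or server's address):
if check_port 127.0.0.1 22; then
  echo "port open"
else
  echo "port closed or filtered"
fi
```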
  12. What modem & router, or modem/router combo are you using?
  13. A number of things can cause this. What is your ISP connection type? Fiber/Coax/DSL/LTE? Some ISPs & connection types don't allow port forwarding. Have you checked to make sure you're not behind a double NAT? This can happen when your modem doubles as a router and you've connected your own router behind it.
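One way to spot a double NAT: look at the WAN address your own router's status page reports. If it's in RFC 1918 private space, there's another NAT layer (the modem) in front of it. A minimal sketch of that check, with a placeholder address:

```shell
# Is an IPv4 address in RFC 1918 private space?
is_private() {
  case "$1" in
    10.*|192.168.*)                         return 0 ;;
    172.1[6-9].*|172.2[0-9].*|172.3[01].*)  return 0 ;;
    *)                                      return 1 ;;
  esac
}

# Example (use the WAN IP shown on your router's status page instead):
is_private 192.168.0.100 && echo "private WAN IP -> likely double NAT"
```

Caveat: this only covers the RFC 1918 ranges; CG-NAT (common on LTE) hands out 100.64.0.0/10 addresses, which would need an extra case.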
  14. Assuming you're using a paste that doesn't turn into chalk, it shouldn't be necessary. I have had pastes that kind of locked the cooler on there, where I had to "break" the paste to get the cooler off (the cooler wouldn't move, like it was glued), but persistent rotating side to side and gently tipping the cooler will break the suction holding it to the CPU without pulling the CPU out of the socket.
  15. Agggh, I see what you did there. Although that would just be the upload date, not the actual filming date. But being a tech channel, I would imagine they upload not long after filming, so it shouldn't make a real difference.