bullet308

Member
  • Posts

    20
  • Joined

  • Last visited

Awards

Profile Information

  • Gender
    Male
  • Location
    South Carolina, US
  • Interests
    Computers and various other things...
  • Biography
    Just an old-time computer geek and Linux dude - first distro was Yggdrasil, after tinkering with Minix a bit...Like I said...OLD time...
  • Occupation
    I teach office computing skills to disabled adults

System

  • CPU
    2x Intel Xeon X5675 Hex-Core 3.07GHz
  • Motherboard
    Dell OEM PowerEdge R710 rack server repurposed as workstation
  • RAM
    48GB ECC DDR3 1333MHz
  • GPU
    Zotac Mini GTX 1070
  • Case
    Modded OEM 2U rackmount server case
  • Storage
    128GB cheapie SSD for boot
    5x 2TB spinners in 2x RAID 1 and one RAID 0
  • PSU
    2x Dell OEM 875W
  • Display(s)
    AOC 27"
    Viewsonic 23"
    Samsung 22"
    Samsung 21"
  • Cooling
    Modded Dell OEM fans
  • Keyboard
    Vintage IBM Model M Spacesaver bought new for $15 back in the '90s...
  • Mouse
    Ancient Microsoft jobbie
  • Sound
    HDMI to headphones
  • Operating System
    WIN 10 Pro/Linux Mint 20 dual boot running on bare metal
  • Laptop
    Dell Precision M4500 Mobile Workstation, 1st-gen quad-core i7, 16GB RAM
  • Phone
    Samsung J7

Recent Profile Visitors

276 profile views

bullet308's Achievements

  1. Cleared 50,000,000 today. Cool. If I can keep the GTX 1070 and both GTX 970s online and running, I can steadily gobble up 1.4 million PPD, which is pretty nice. I keep losing production to various hardware reshuffles and upgrades, though, as reflected in the graph attached. It also seems like some WUs just plain run less efficiently than others. The GTX 1070 generally wants to run at about 800k PPD, but sometimes it will run at not much better than half that, and the only variable I can account for is the difference in WUs. Too bad there isn't a "silver-plus" badge to apply for at 50 million. See y'all at 100 mil!
  2. I imagine that is how most people here started: with just whatever was lying around, then whatever they could dig out of closets. I happen to have access to a lot of older server and workstation gear, so that is where I went next. Lots of old Xeon CPUs making lots of heat. Then, I discovered the joys of GPU folding and have come to focus exclusively on that. I suspect that some of the higher flyers in this world have either repurposed somebody else's server farm (and power bill) or have dug out their old mining rigs. I will be lucky to ever even break into the top 1000, given some of the folding capacity that people have developed here. But, I am happy to do what I can do.
  3. We have not gotten that hot here yet, but it is inevitable. That is a big part of why I gave up on CPU folding altogether. My wife fronted me some money as an early birthday present, and I ordered a GTX 1070 today. That, combined with my two GTX 970s and my old GTX 750ti should give me something around 1.7 million PPD. That will just have to do for now. From what I am told, I can even use these things to play games!
  4. Oh, COVID-19 itself is going nowhere anytime soon. People here in the US seem determined to help spread it as far and as widely as possible. The only thing fading with COVID-19 is the novelty of it being among us and the novelty of folding proteins to try and cure it, hence the vast number of inactive users that are accumulating like arterial plaque in the folding world. 3/4 of the people that were folding back in April are inactive now. Personally, I am doubling and redoubling my efforts as we get deeper into this thing, since a vaccine or something along those lines is the only thing that is going to help us shake it anytime soon. Next, probably a GTX 1080 or something in that class.
  5. With the novelty of the whole COVID-19 thing wearing off, fewer people are participating in the F@H effort than a few months ago. However, those that are left (including me) have buckled down and are cranking out more points than ever, averaging around 36 trillion points or 1.5 million Work Units per day. What that adds up to in petaflops or supercomputer equivalents, I am not sure, but I can say with some confidence "a LOT"... I have retired all of my older Folding machines (Socket 771/775 boxes) and have ceased all CPU folding, as it just is too inefficient and makes too much heat, and I have added several GPUs (a pair of GTX 970s) and am looking to add one more shortly (a GTX 1070 Ti, most likely), which would put my production up to over 1.5 million points per day. All of these reside in various Socket 1366 machines of roughly 2012 vintage, all running hex-core Xeons (not that this matters all that much for Folding purposes anymore...) I should have my F@H Linus Tech Tips Team Silver (25 million point) certificate by Thursday or so. Only 75 million more to go (!) to get my Gold.
  6. Still picking up steam, still adding GPUs, still cutting back on CPU cores devoted to folding. Still having fun. Currently have the following up and running:
     1x Dell Precision T5500 w/ 1x X5660 @ 2.8GHz, 12GB of RAM, 1x Zotac GTX 970
     1x Dell PowerEdge R710 w/ 2x X5675 @ 3.09GHz, 48GB of RAM, 1x Zotac GTX 970
     1x HP Z400 w/ 1x L5640 @ 2.2GHz, 8GB of RAM, 1x EVGA GTX 750 Ti
     1x Dell Precision M4500 laptop, quad-core i7M, 15GB of RAM, 1x Quadro 880M
     This adds up to some 900,000 PPD, so right now I am blowing past a bunch of people that are no longer active and a fair number that are. Entering the top 8,000 on the team and the top 50,000 of FAH as a whole. Getting the GTX 970 (which seems to be about the best value going right now, at just over $100 per card on US eBay) up and running in the R710 was actually quite simple, really... You just have to do everything in a particular sequence to get the card detected, the onboard video turned off, and then have the server running on the GPU. The next frontier will be trying to get the two GTX 970s running in SLI on the R710, which from what I can tell by looking around on the web has not been done before. It may prove to be impossible, and if that is the case, I will try it again with both cards in the T5500; the only downside there is that I can't get that machine up and running on two processors (a flaky riser interface, I figure). If that works, I may go back and try a better one-card solution for the R710.
  7. I am getting a lesson just now on the power and relative efficiency of GPU folding vs. CPU folding. I have read about same, but now I am actually seeing it here in person. At first, I was folding everything under Linux, which apparently does not support GPU folding very well, if at all, so I was crunching away with my CPUs and burning a lot of electricity for not that many points. I threw everything I could at it, but my office was starting to resemble a toaster oven, and I was a Pop-Tart within... Then, I switched to Windows 10 on one machine, and it instantly picked up and put my GTX 750 Ti to work, which added a quick 100,000 PPD all by itself. I had no idea... Suitably impressed, I bought a used GTX 970 and put IT to work...350,000 PPD! I am at around 600k PPD total now. And now, for some reason, the client in my machine with the 8800 GT can suddenly see and use that card as well. I have retired all my dedicated CPU folding machines and cut back some on the CPU cores dedicated to folding. It does not seem to make all that much difference, really. This is addictive...I'd like to add a GTX 1060 or something now...
  8. Hi, all. I have been a longtime fan of the Dell Precision T3500 (Socket 1366), having run one as my main rig for the past five or six years now. It has done everything I have asked of it and it is still fabulous. I didn't particularly need to upgrade from it, but the bug bit me and so I bought a T5500 motherboard and installed it into a spare T3500 case I have. Combined with a T5500 wiring harness and a 1000w T7400 power supply, it all went together and functions perfectly... as a single processor machine. But a T5500 with one processor is pretty much just a T3500, and so I ordered in the appropriate riser board and installed it along with a matching processor and RAM. Plugged everything up and went to start it up. It POSTs okay, but it keeps throwing an error about something being wrong with one of the DIMMs on the riser, and it seems specific to SLOT 2 and not a particular stick of RAM. Looked at it and cleaned it a little bit. Still with the RAM error. But that is not the big problem. I hit F1 and it continues to boot normally, then it starts to load the OS and suddenly, the power is cut. <click>, and off. I don't think it is thermal, because I can run the entire pre-boot memory test without issue and it will idle indefinitely at the BIOS screen, but the first time it tries to load Windows or Linux off of either a hard drive or a flash drive, it gets to the same point in loading the OS and just shuts off. Any ideas? Thanks
  9. Still chugging along, but slowly picking up speed with hardware additions and upgrades... Have now cleared 5 million points and 1,000 WUs. I am another one that is working his way up into the top-ten...thousand on the team and the top-one hundred...thousand overall. :-) Battery now includes:
     1x Dell R710 with 2x X5660 Xeons
     1x Dell T3500 with 1x X5675 Xeon and 1x EVGA GeForce GTX 750 Ti OC
     1x Dell T5500 with 1x X5670 Xeon (riser board and second X5670 on order)
     1x HP Z400 with X5670
     1x Dell M4500 laptop with 1x quad-core i7M
     1x Acer crappy netbook that manages to crank out 500 PPD somehow or another...
     Mostly LGA1366 stuff. I have disposed of all my LGA771/775 hardware now. At this point, the T3500 is by far my most productive machine, mostly because of the reasonably modern GPU that stays stocked with work to do pretty much 24/7. The other machines have various older GPUs (an 8800 GTX, a Quadro FX 3700, that sort of thing). The Windows FAH clients in those machines see the GPUs, but they can't get any work, while the GTX 750 Ti gets WUs to crunch on continuously. Weird.
  10. It's a full VM under QEMU. I suppose I can try the Container option as well. I would also have to look at either installing a GUI on Proxmox or figuring out how to run F@H from the command line. Either way, I will give 'er a go. Thanks.
  11. Hi: I recently purchased a Dell PowerEdge R710 server with 32GB of RAM and a pair of slower quad-core processors, intending to play with some virtualization stuff on it. I also want to do a good bit of folding on it as well. When running a Linux Mint MATE 19.3 install on bare metal, it will crank out around 30k points a day quite steadily. However, with Proxmox as a hypervisor and running Ubuntu Server in a VM with all 8 cores devoted to it, it has given me all sorts of numbers: first 70k for a while, then 30k for a bit, then finally it dropped down to about 12k points per day, where it has been pretty steady for about an hour now. I am assuming the first number was some kind of anomaly. I was expecting to take some kind of hit from the overhead of the hypervisor and all that, but not THIS big a hit. Is this what I should be expecting, or is there something I should be looking for in my Proxmox configuration? Brand new at VMs, so any help would be appreciated. >>>BULLET>>>
  12. Hi: I have accumulated some LGA1366 server and workstation parts and complete systems, and find myself intrigued by the HP, Dell, and other server blades that are available on the used server market. Full-fledged dual-Xeon servers that cost little to buy, little to ship, and I have spare CPUs and RAM on hand to get them up and running? Sign me up! Well, one little problem... They are, of course, designed to plug into a proprietary chassis that provides power, connectivity, etc. And while the blades are cheap, those housings are very much not. :-/ Does anybody have any experience with rigging these (hopefully of the LGA1366 generation) so that they can operate independently of the chassis? Or a pinout for the proprietary connectors that mate the two? A bit of a stretch, I know, but I like projects. :-) TIA >>>BULLET>>>
  13. Still chugging along, clawing my way up towards a million points... Had to pull two of my workstations offline for a bit to try and "upgrade" the processors to make less heat and consume less power. Tried to increase my core count in one case, but the machine didn't like the low-power hex-core I put in it and wouldn't boot. Might be a BIOS issue. Something I have noticed...compared to other people's scores, my impression is that I am racking up points at a reasonable rate, but my WU count is proportionally higher than most. Any explanation for this phenomenon? Am I getting fed more but smaller WUs than average?
  14. Clawing my way slowly up the ladder running a bunch of old and odd hardware: my main Precision T3500 with a hex-core Xeon X5675, another hex-core Xeon in an HP Z400, this one an L5420, and an HP xw8600 dual-X5450 Xeon workstation, all running an assortment of comparably vintage video cards. Add to these an old Dell Inspiron somebody gave me running an Athlon X2, an OptiPlex 380 with a Core 2 Quad, and a first-gen i7 quad-core laptop, and last, but most certainly least, an Acer netbook running a Celeron N3050 (slow but steady wins the race, right?), all of them running wide-open 24/7. What does all this buy me? Well, I am about to crawl my way up into the top 25,000 on the LTT team and the top 200,000 overall, so I figure I am probably pulling my weight, if not exactly setting the world on fire...
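On the command-line question in post 10: FAHClient can run fully headless on a server or VM, driven by a config.xml. A minimal sketch follows; the user, passkey, and team values are placeholders to be filled in, and the paths are those used by the stock Linux fahclient package:

```xml
<config>
  <!-- minimal headless FAHClient config sketch; values are placeholders -->
  <user value='bullet308'/>
  <team value='223518'/>   <!-- assumed LTT team number; verify before use -->
  <passkey value='...'/>   <!-- personal passkey for bonus points -->
  <power value='full'/>
  <!-- one CPU slot using all available cores -->
  <slot id='0' type='CPU'/>
</config>
```

Saved as /etc/fahclient/config.xml, the fahclient service needs no GUI at all; progress can be read from the log (/var/lib/fahclient/log.txt) or by pointing FAHControl on another machine at the VM over the network.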
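The GPU-versus-CPU efficiency lesson in post 7 can be put in rough numbers. Using the PPD figures quoted in these posts (GTX 750 Ti ~100k, GTX 970 ~350k, and the dual quad-core R710 at ~30k on bare metal from post 11), here is a minimal sketch of points-per-watt; the wattages are published TDPs standing in for measured wall draw, which is an assumption:

```python
# Rough points-per-watt comparison, using PPD figures quoted in the posts.
# Wattages are TDPs, not measured draw -- an assumption for scale only.
rigs = {
    "GTX 750 Ti":   (100_000, 60),   # (PPD, watts) -- 60 W TDP
    "GTX 970":      (350_000, 145),  # 145 W TDP
    "2x quad Xeon": (30_000, 160),   # assumed ~80 W per CPU
}

def ppd_per_watt(ppd: int, watts: int) -> float:
    """Points per day divided by power budget in watts."""
    return ppd / watts

# Print most efficient first.
for name, (ppd, watts) in sorted(rigs.items(),
                                 key=lambda kv: -ppd_per_watt(*kv[1])):
    print(f"{name:14s} {ppd_per_watt(ppd, watts):7.0f} PPD/W")
```

Even with generous assumptions for the CPUs, the GPUs come out roughly an order of magnitude ahead, which matches the "toaster oven" experience described above.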
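The high-WU-count puzzle in post 13 likely comes down to average WU size: CPU work units generally award far fewer points apiece than GPU ones, so a CPU-heavy folder piles up WUs faster per point. A worked example using the milestone figures from post 9, with the second folder being a hypothetical GPU-heavy comparison:

```python
# Average points per completed work unit, using the milestone from post 9
# (5,000,000 points over 1,000 WUs). The second figure is a hypothetical
# GPU-heavy folder earning the same points from fewer, larger WUs.
def points_per_wu(points: int, wus: int) -> float:
    return points / wus

cpu_heavy = points_per_wu(5_000_000, 1_000)  # 5000.0 points per WU
gpu_heavy = points_per_wu(5_000_000, 250)    # 20000.0 points per WU

# Same point total, but 4x the WU count for the CPU-heavy mix.
print(cpu_heavy, gpu_heavy)
```

So a proportionally high WU count relative to points is exactly what a fleet of many modest CPUs would produce.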