NolanW's Achievements

  1. Wow, that was fast. Thanks for the quick replies. I am not entirely sold on commercial-grade hardware; I don't think it is the best value. Teslas and Xeons are great, but it is a huge price jump for a marginal improvement. I don't need floating-point math at all in my work, so paying for higher precision is a waste of $$. Dual CPU is ideal, but if the price isn't right I will consider single-CPU boards. The idea is to get the most cores for the money; I am literally dividing the cost by the core count (see the cost-per-core sketch after these posts). I just looked, and the 32-core Threadripper is half the cost of the Epyc. I don't see the value in the Epyc.
  2. Hi guys, I would like input on a build I am doing that could get painful if I don't know what I am getting into. I am a bioinformatician, which basically means I develop and run software that consumes horrific volumes of data and does relatively simple computations on it. Generally my software runs on Cedar, Canada's largest supercomputing cluster; Linus actually recorded a tour of it. I also do my own independent research, for which I generally can't use Cedar's resources without a huge amount of red tape.

     My budget is open ended, but I am basically after the best bang for my buck, not necessarily the best hardware, and I am even willing to go with used parts if they are worth it. The software I am looking to run is embarrassingly parallel, so core count is king (see the parallel-processing sketch after these posts), and I am working to move to GPU compute to leverage its extremely high core counts.

     The hardware I had in mind was the Threadripper class of CPUs, but I can't seem to find a motherboard that does what I want. The original plan was to locate a dual-CPU motherboard with as many PCIe x16 slots as possible and load it up with those cheap Nvidia GTX 1060s that are supposedly floating around after the cryptocurrency crash. It would be great if I could get 7 or more of them into this computer, like that 7 Gamers, 1 CPU build. The big thing is throughput, though. I know the miners had those PCIe breakout boards that split up the lanes to support a bunch of x1 GPUs, but I can't do that; I need to slog data into and out of the GPUs as fast as they can consume it. A decent amount of GPU RAM would be good too, at least 4GB per card. Support for large amounts of CPU RAM is also important, as many of my programs are bottlenecked by the I/O speed of the hard drive and swap.

     I mostly need help sourcing these oddball parts and advice on the viability of this project. One thing I would also like to have is the ability to do GPU pass-through to a Windows VM. All work and no play.. something..something..
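
A quick Python sketch of the cost-per-core comparison described in the first post. The part names and prices below are placeholders for illustration only, not actual quotes or the poster's figures:

    # Cost-per-core comparison sketch. Prices are placeholder numbers,
    # not real quotes -- plug in whatever the parts actually cost.

    def cost_per_core(price: float, cores: int) -> float:
        """Dollars per physical core: the metric used to compare CPUs."""
        return price / cores

    # Hypothetical example figures purely for illustration:
    candidates = {
        "32-core Threadripper": (1800.0, 32),
        "32-core EPYC": (3600.0, 32),
    }

    for name, (price, cores) in candidates.items():
        print(f"{name}: ${cost_per_core(price, cores):.2f} per core")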
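
And a minimal sketch of what "embarrassingly parallel" means for this kind of workload, using Python's multiprocessing as a stand-in. The input file, chunking scheme, and process_chunk() function are placeholders, not the poster's actual pipeline:

    # Each chunk of input is processed independently, so throughput scales
    # with core count. process_chunk() and the input file are placeholders.
    from multiprocessing import Pool
    import os

    def process_chunk(chunk: bytes) -> int:
        # Stand-in for the "relatively simple computation" per record,
        # here just counting newline-delimited lines in the chunk.
        return chunk.count(b"\n")

    def read_chunks(path: str, chunk_size: int = 64 * 1024 * 1024):
        # Stream the file in fixed-size chunks instead of loading it whole.
        with open(path, "rb") as f:
            while chunk := f.read(chunk_size):
                yield chunk

    if __name__ == "__main__":
        with Pool(os.cpu_count()) as pool:
            # imap streams results back as workers finish each chunk.
            total = sum(pool.imap(process_chunk, read_chunks("reads.fastq")))
        print(f"lines processed: {total}")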