Artesian

Member

Artesian's Achievements

  1. Hi folks, a client has expressed interest in a 4TB NVMe drive from Samsung in the NF1 form factor. That more or less requires a server platform designed around these drives, OR an experimental adapter. The client wants this in a data center anyway, so why not start with a backplane? Supermicro makes some interesting products like the https://www.supermicro.com/en/products/system/1U/1029/SSG-1029P-NMR36L.cfm -- but we don't need that many drives or two Xeons. Their requirements:
       • 4TB NVMe drive rated for a 5+ year life at 5PB of writes per year (see the endurance sketch after this post list)
       • 128GB RAM
       • 1U, 2U, or 4U server size
       • No GPUs
       • At least an 8-core processor with as high a single-core clock speed as possible
     What existing product works for this medium-duty server? Or would it be better to suggest an array of smaller, ordinary 2TB+ drives on a standard ATX board of some kind, with or without adapters? We usually build big GPU servers, so this request threw us for a loop. Thanks! -Artesian
  2. An equivalent server on AWS costs >$40,000 per year. If you have at least half a week's worth of rendering to do each business week, this rig pays for itself within the first year (see the rough break-even sketch after this post list)!
  3. Appreciate that note. There's a lot of wiring around the master supply and it can become a real rat's nest if you aren't careful. Will work on this even more next time. Opting for 2 individual cables for each GPU increases safety but definitely adds bulk.
  4. We just finished & photographed the build of our 7-GPU Hybrid Supercomputer Render Rig with 7X RTX 2080s from EVGA. Designed for Octane Render, it's powered by a Xeon W chip on a Gigabyte server board, with 128GB of ECC RAM and 2X 1600W PSUs (see the rough power-budget sketch after this post list). If it's not a world first of its type, it's at least quite rare -- getting all 7 GPUs functional at the maximum theoretical [16x8x16x8x16x8x16x8] PCIe bandwidth is a huge, annoying challenge that involves tweaking some in-depth PCIe compatibility settings. Usually we build high-end desktops, but we couldn't resist trying this out. It's going to save the final user A LOT of money on AWS render costs.
     Why liquid cool? It's going to run in an office, so being extra quiet is helpful. Why 7 GPUs? If you split this into two 4-GPU rigs, you pay twice for the motherboard, case, RAM, AIO, and drive... so this saves a few thousand bucks. It easily outperforms an Amazon cloud server that costs MANY times more than it does.
     Basic specs & parts:
       • 7X EVGA RTX 2080 Hybrids, for a total of roughly 20,600 CUDA cores (2,944 per card)
       • Xeon W-2125 or above (any W-series chip will work on this board)
       • Gigabyte MW51-HP0: dual gigabit [not 10-gig] networking, seven PCIe x16 slots!
       • 128GB Samsung ECC server RAM
       • Corsair H100 AIO cooler (large air coolers would not fit under the thick risers!)
       • 7X Thermaltake risers
       • 2X EVGA 1600W PSUs
       • Custom aluminum frame (adapted from mining rigs & modified to support the huge, heavy radiators)
       • 3D-printed PSU bracket and an adapter to join the two PSUs; the master PSU activates the slave when turned on
     Posted this to Reddit last week and got a lot of positive response; a few folks suggested we post here as well, so here we are. Questions/comments, let me know... Thanks for looking, -Artesian
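
A quick sanity check on the endurance requirement in post 1, as a back-of-the-envelope sketch in Python. The 4TB capacity, 5PB/year write load, and 5-year life come from the post; the DWPD comparison at the end is a general rule of thumb, not a vendor spec.

    # Back-of-the-envelope endurance check for the 4TB / 5PB-per-year requirement.
    # Assumption: "5PB/year" means 5 petabytes of host writes per year, sustained.
    TB = 1e12                        # bytes, decimal units as drive vendors quote them
    PB = 1e15

    capacity_bytes = 4 * TB          # 4TB drive from the post
    writes_per_year = 5 * PB         # 5PB/year from the post
    service_years = 5                # 5+ year life from the post

    tbw_required = writes_per_year * service_years / TB      # lifetime terabytes written
    dwpd_required = (writes_per_year / 365) / capacity_bytes  # drive writes per day

    print(f"Required endurance: ~{tbw_required:,.0f} TBW over {service_years} years")
    print(f"Equivalent duty:    ~{dwpd_required:.1f} DWPD on a 4TB drive")

That works out to roughly 25,000 TBW, or about 3.4 drive writes per day, which is well beyond typical ~1 DWPD read-optimized drives and pushes the spec toward mixed-use or write-intensive enterprise parts.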
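
On the payback claim in post 2, here is a rough break-even sketch. The >$40,000/year cloud figure comes from the post; the rig cost and utilization below are illustrative placeholders, not actual quotes.

    # Rough break-even estimate for the render rig vs. an equivalent AWS instance.
    # The $40,000/year cloud figure comes from the post; the rig cost and
    # utilization are illustrative placeholders, not actual numbers for this build.
    aws_cost_per_year = 40_000.0   # equivalent AWS server, per the post
    rig_cost = 20_000.0            # placeholder build cost; substitute the real quote
    utilization = 0.5              # "half a week's worth of rendering" per business week

    # If local rendering displaces cloud spend in proportion to utilization:
    savings_per_year = aws_cost_per_year * utilization
    payback_years = rig_cost / savings_per_year

    print(f"Estimated savings: ${savings_per_year:,.0f}/year")
    print(f"Payback period:    ~{payback_years:.1f} years")

With those placeholder numbers the rig breaks even in about a year, which lines up with the claim in the post; plug in the actual build cost and expected utilization to refine it.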
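
For the dual-PSU setup in post 4, a rough power-budget sketch. The GPU count and the 2X 1600W supplies come from the post; the per-component wattages are nominal assumptions (roughly reference board power for an RTX 2080, plus allowances for the CPU, pumps, fans, and drives), not measurements from this machine.

    # Rough power budget for the 7x RTX 2080 render rig.
    # GPU count and the 2x 1600W PSUs come from the post; the wattages below are
    # assumed nominal values, not measurements of this specific build.
    gpu_count = 7
    gpu_watts = 225        # assumed per-card board power under render load
    cpu_watts = 120        # assumed Xeon W package power
    other_watts = 150      # assumed motherboard, RAM, AIO pumps, fans, drives

    load_watts = gpu_count * gpu_watts + cpu_watts + other_watts
    psu_capacity = 2 * 1600            # two 1600W supplies joined by the adapter
    headroom = 1 - load_watts / psu_capacity

    print(f"Estimated load: ~{load_watts} W")
    print(f"PSU capacity:   {psu_capacity} W ({headroom:.0%} headroom)")

Transient spikes and riser/conversion losses eat into that margin, which is one reason the two-supply layout and the two individual cables per GPU mentioned in post 3 are the conservative choices.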