
VanBantam

Member
  • Posts

    25
  • Joined

  • Last visited

Awards

This user doesn't have any awards

About VanBantam

  • Birthday May 27, 1910

Profile Information

  • Gender
    Male
  • Interests
    Deep Learning, Computer Vision, SDR
  • Biography
    I train neural nets, both organic and artificial
  • Occupation
    Grad Student

System

  • CPU
    AMD 1950x 16c
  • Motherboard
    Asus Zenith Extreme
  • RAM
    128 GB Corsair Vengeance LPX 2666
  • GPU
    4 x Gigabyte 1080 Ti 11 GB OC
  • Case
    Thermaltake core x5
  • Storage
    Samsung 970 250 GB M.2, Samsung 970 1 TB M.2, 2 x Crucial MX500 2 TB SSD, 2 x 250 GB SSD
  • PSU
    EVGA T2 1600
  • Display(s)
    headless
  • Cooling
    CLC
  • Keyboard
    Apple full-size Bluetooth keyboard
  • Mouse
    Logitech m320
  • Sound
    whatever is onboard
  • Operating System
    CentOS 7

Recent Profile Visitors

655 profile views

VanBantam's Achievements

  1. WHOA, WHAT ON EARTH IS THAT THING? Dude, it can handle 128 GB of RAM!!!!! In a mini-ITX form factor. Have you used this motherboard before? Here's a review from Serve The Home.
  2. TECs are pretty amazing things. I used to work with a large-format x-ray CCD camera with a 2 in x 2 in sensor. We'd hook it up to an ultra-high vacuum when calibrating or using it. The cold side got down to something like -40 C, and the hot side had to be water cooled, otherwise it'd easily get above 50-60 C. We had to hook up a water chiller set to about 4 C. That would get the backplane temperature, the hot side, down to about 20-25 C. I really wish I knew the part number the camera company used for the TEC.
  3. Dunno why it took me so long to think of this, but why not use a distributed file system? Each editing PC would be a storage node, and data would be replicated across the different editing workstations. If you feel like burning cash you could use something like Spectrum Scale, aka GPFS, which works quite well. I've had less experience with Gluster, but for the price it's a much more reasonable solution. With either solution you could use data affinity to colocate data with the node that will process it. Anyway, just an idea.
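To make the Gluster option concrete, a replicated volume across three editing PCs would look roughly like this. Hostnames (edit1/edit2/edit3), the volume name, and the brick paths are all made up for the sketch; treat this as an outline, not a tested recipe.

```shell
# On each workstation, set aside a directory on a fast local disk as a
# "brick", e.g. mkdir -p /data/brick1. Then, from any one node:
gluster peer probe edit2
gluster peer probe edit3
gluster volume create media replica 3 \
    edit1:/data/brick1 edit2:/data/brick1 edit3:/data/brick1
gluster volume start media

# Each workstation then mounts the volume locally over FUSE:
mount -t glusterfs localhost:/media /mnt/media
```

With replica 3, every file lands on all three machines, so any workstation can read its footage from its own disk.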
  4. A Gigabyte 1080 Ti Gaming OC with an Alphacool m19 waterblock.
  5. Has anyone else had this issue where their Zenith Extreme just won't retain the setting for the fourth slot? I've set it to PCIe x8 and it keeps reverting to x8/x4. The GPU POST screen shows my fourth GPU at x4. I've cleared the CMOS with no success. I'm running BIOS version 1601. I think this is related to some stability issues I've been having with the motherboard: with the fourth slot enabled I'll get random crashes while training a neural network on one of the other GPUs. I get an odd shared-memory address error from CUDA; googling it yields unrelated answers. When I turn the slot off using the DIP switches the system is rock solid. Note that I've plugged in that horrid additional Molex power connector and am running an EVGA T2 1600 power supply. When I'm having this problem I cap the power limit of all 4 GPUs to 250 watts. Any thoughts on either issue would be greatly appreciated.
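For reference, the 250 W cap I mentioned can be applied with stock nvidia-smi flags (run as root; the limit has to be within the board's allowed range):

```shell
nvidia-smi -pm 1          # persistence mode, so the limit sticks between runs
nvidia-smi -pl 250        # apply a 250 W cap to every GPU
nvidia-smi -i 3 -pl 250   # or target just the fourth card (index 3)
```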
  6. Honestly it was just one of those random brain glitches. I was with some friends and thought about Johann Sebastian Bach as a chicken because chickens say "bock". For whatever reason this reminded me of a scene in the movie Bloodsport where Jean-Claude Van Damme is kicking a dude in the head. Van Damme sounded, at the time, like VanBantam. No drugs involved, just a badly wired brain.
  7. Have you thought about using a framework like Apache Ignite? I know you'd have to sort out the same sort of policies regarding data aging and prioritization, but it is a scalable solution. Ignite is an in-memory data fabric, and from what I understand it can work on top of a distributed file system like HDFS, GPFS (aka Spectrum Scale), etc. Then again, 7k for ~8.4 TB of NVMe drives sounds less expensive than however many servers you would need to have 8.4 TB of available memory. I know Ignite is used by a major SaaS provider to serve their MySQL DB in memory for their clients. Anyway, just thought I'd throw it out there.
  8. fwiw... one of the problems I have with learning the terminal from a book, guide, or website is that it just doesn't stick for me. I've learned the most when I have a specific project or task I want to do and then go read about the pieces needed to make it work. So I would say pick a couple of your everyday workflows and see if you can write a bash script to do most of it. Or, if you're the programmer type, rather than reaching for Python or Visual Basic or JavaScript or Fortran (lol) to do some quick calculation, write a bash script. Or, for that matter, along similar lines to @shura: try Linux From Scratch.
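Here's a minimal sketch of the kind of everyday-workflow script I mean: roll a project directory into a dated tarball. The directories are stand-ins created just for the demo; point SRC at whatever you actually want archived.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Demo setup: a stand-in for some directory you touch every day.
SRC="$(mktemp -d)"
DEST="$(mktemp -d)"
echo "some notes" > "$SRC/notes.txt"

# The actual workflow: archive the directory into a dated tarball.
STAMP="$(date +%Y-%m-%d)"
ARCHIVE="$DEST/backup-$STAMP.tar.gz"
tar -czf "$ARCHIVE" -C "$(dirname "$SRC")" "$(basename "$SRC")"
echo "wrote $ARCHIVE"
```

Once something like this works, cron or a systemd timer can run it for you, and you'll have picked up parameter expansion, quoting, and `tar` along the way.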
  9. In general you'll want to post the exact output of your terminal, or a screenshot that's legible. I disagree with the first part of our fellow member @Lukyp's response: unless you have a specific reason, I would caution against just updating to the latest version of Linux. Keep in mind I'm coming from the perspective of stability over cutting edge; my dev rig runs CentOS 7, where the kernel is frozen at 3.10, which is very old. With the second half I agree completely. I've had this issue on CentOS 7, Ubuntu 18.04 desktop and server, and Ubuntu 16.04. If you search for your error message along with your distro you'll find specific guidance on which kernel parameter to use.
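For what it's worth, the usual way to make a kernel parameter stick on CentOS 7 looks like this. "foo=bar" is a placeholder for whatever parameter your error message calls for, and the grub.cfg path assumes a BIOS (non-EFI) install:

```shell
# Append the parameter to GRUB_CMDLINE_LINUX in /etc/default/grub, then
# regenerate the boot configuration:
grub2-mkconfig -o /boot/grub2/grub.cfg

# Or let grubby edit every installed kernel's boot entry directly:
grubby --update-kernel=ALL --args="foo=bar"
```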
  10. [content removed] Substance, fwiw: you will most likely not want to use CentOS. The CentOS kernel is frozen at 3.10 and fixes are backported. Maybe you'll want to try Linux From Scratch; it's a great learning experience and you'll definitely improve your bash chops. That said, I would recommend something like Mint or Fedora. I would also recommend buying a cheap 120 GB SSD and installing Linux on that; don't bother with the dual-boot madness. I'm pretty sure other folks have gotten it to work fine, but I've just dodged the whole process. I have a T420s with Windows 10 and Linux Mint (17, I think) installed on separate drives.
  11. A bit late to the party here, so I guess this is more for posterity than helping the OP. I would caution against getting an external GPU; instead, use a cloud service like Paperspace. For the amount of money you'll spend on the beefy laptop you can get a good chunk of time on a modern server-grade GPU. For what an external GPU would cost you can build a small-form-factor PC with a 1080 Ti, 2080 Ti, or Titan V and not deal with the added layer of complexity and the bottleneck that Thunderbolt will cause.
  12. A bit late to the party... for future folks who come across this thread, here are my two cents. In general, as of this writing, I would caution against getting a Titan V (and for that matter a burly CPU) and instead get two 1080 Ti 11 GB cards, and think REALLY hard about thermals. The reason is that you can train two models at the same time with different hyperparameters. In such a setup I would recommend picking a mobo that has two PCIe 3.0 x16 slots. In my own dev rig I have 4 x 1080 Ti 11 GB; yes, two of them are on x8 slots. I also have a 1 TB NVMe SSD to keep my GPUs fed with data; SATA SSDs are a bottleneck for the work I do. Remember, for deep learning it's the GPU that does the heavy lifting, not the CPU, unless you're doing deep reinforcement learning, which is a different thread altogether. Side note: I would also recommend making sure your PSU can source enough watts to your GPUs given everything else you have plugged in. For example, I can set the power target for my GPUs to 300 watts, so x4 that's 1200 watts. So yeah, I have an 80 Plus Titanium 1600 W PSU; yes, my electricity bill is high.
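To make that back-of-the-envelope PSU math concrete: the GPU numbers are from my setup, while the CPU and "everything else" figures are rough assumptions you'd swap for your own parts.

```shell
GPU_WATTS=300     # per-GPU power target
NUM_GPUS=4
CPU_WATTS=180     # assumed draw for a 16-core Threadripper under load
OTHER_WATTS=120   # assumed drives, fans, pump, motherboard

TOTAL=$((GPU_WATTS * NUM_GPUS + CPU_WATTS + OTHER_WATTS))
echo "estimated sustained load: ${TOTAL} W"   # 1500 W on a 1600 W unit
```

That leaves barely any headroom, which is exactly why the power targets get capped when things act up.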
  13. I went Threadripper for PCIe lanes: I want to run as many GPUs at x8 or x16 as my hardware can handle, and for my budget a Threadripper build made sense. I don't know how the performance specs for the i9-7900X compare to a 1950X. With respect to CPU performance and deep learning, my understanding is that a CPU does not perform as well as a GPU. I'm sure there may be edge cases where CPUs are preferred; I do know that CPUs outperform GPUs for reinforcement learning. For, let's say, computer vision, however, a CPU just doesn't compare. Search for Tim Dettmers' guide on choosing GPUs for deep learning.
  14. Where I've seen Peltier coolers used effectively is in x-ray CCD camera applications. The CCD chip package has a Peltier cooler attached to it, and a cooling block attached to the Peltier cooler. We used a recirculating chiller, like the kind made by VWR or ThermoFisher, to cool the Peltier. It was able to get the chip down to -20 C within about 10 minutes, which isn't bad. I personally set the chiller to no less than 4 C. The device integrator, i.e. the camera manufacturer, was the one responsible for mounting the chip on one of their boards and providing the cooling elements needed to cool the CCD. So what does this mean for you? You'll need a Peltier cooler, a thermocouple, a cooling block, some sort of industrial control system (like a PLC or an Arduino), a power supply, and a recirculating water chiller (or something similar) to cool the Peltier. Your feedback loop will be based on your thermocouple placement, and your control output will be the current sourced to the Peltier cooler. You'll want to throttle that current to prevent condensation and to avoid dropping the temperature too far when the CPU isn't under load. So yeah, lots of broad strokes here, but it should give you a bit more of an idea of what's involved.
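As a toy illustration of that throttling loop, here's a simple proportional controller: more current the further the cold side is above the setpoint, zero current at or below it. Every number here is hypothetical; a real controller would be tuned against your actual thermocouple readings and TEC ratings.

```shell
SETPOINT=5        # degrees C, kept above a typical dew point
MAX_CURRENT=60    # tenths of an amp (6.0 A), the TEC's assumed rated limit
GAIN=4            # tenths of an amp per degree of error

# Map a thermocouple reading (degrees C) to a TEC drive current.
tec_current() {
    local temp_c=$1
    local error=$(( temp_c - SETPOINT ))   # positive when too warm
    local current=$(( error * GAIN ))
    (( current < 0 )) && current=0              # at/below setpoint: cut drive
    (( current > MAX_CURRENT )) && current=$MAX_CURRENT   # clamp to the cap
    echo "$current"
}

tec_current 30   # hot: drive at the cap -> 60
tec_current 7    # near setpoint: back off -> 8
tec_current 3    # below setpoint: zero current, avoid condensation -> 0
```

On an Arduino you'd run the same logic in `loop()` and feed the result to a PWM-controlled driver instead of echoing it.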