SwiftySteve

Member
  • Posts

    7
  • Joined

  • Last visited

SwiftySteve's Achievements

  1. @CBojorges I think I like M.2 because of the overall speed. Being able to read and write at high speeds would let the overall system speed increase. I'm wondering if I can have fine-grained control to move datasets from one storage medium to another. So maybe I can have the system download a dataset to secondary storage (the Western Digital hard drive). While that is happening, I can use the M.2 drive to hold a dataset I am about to use to train a model, then move the dataset to RAM once I am training on it. Does that make sense? So basically it goes: Download dataset -> HDD -> Load dataset to M.2 -> Dataset moved to RAM for training -> Train model. The reason I want the dataset in secondary storage (HDD) first is because it's high capacity and cheap. That would allow multiple (large) datasets to be "warehoused" in secondary storage. Does that make sense?
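That staging flow could be sketched in Python like this. The mount points `/mnt/hdd` and `/mnt/nvme` are placeholder paths for illustration, not anything from the build itself; the final "move to RAM" step is just reading the file into memory, which ML frameworks normally do for you through their dataset loaders:

```python
import shutil
from pathlib import Path

# Hypothetical mount points -- adjust to however the drives are mounted.
HDD_WAREHOUSE = Path("/mnt/hdd/datasets")   # cheap, high-capacity warehouse
NVME_STAGING = Path("/mnt/nvme/staging")    # fast M.2 working area

def stage_dataset(name, warehouse=HDD_WAREHOUSE, staging=NVME_STAGING):
    """Copy a warehoused dataset file from the HDD onto the M.2 drive."""
    src, dst = warehouse / name, staging / name
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dst)
    return dst

def load_into_ram(path):
    """Read the staged file fully into memory (the 'move to RAM' step)."""
    return Path(path).read_bytes()
```

Nothing special is needed to "move" data between tiers beyond an ordinary copy; the win is that training reads hit the fast M.2 (or RAM) instead of the slow HDD.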
  2. @Dash Lambda Ok, I understand the purpose of SLI now. So let's say I do this build with one GPU for now, then six months down the road want to add a second. How do I get the system to recognize and use both GPUs at the same time? I thought SLI was the way, but I guess not. Does the system know to use both GPUs just by plugging the second one in?
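Essentially yes: on Linux, once the NVIDIA driver sees a second card, CUDA applications can enumerate and use it; no SLI bridge is involved for compute. A minimal sketch of that idea, assuming the driver's standard `/proc/driver/nvidia/gpus` layout (one subdirectory per detected GPU):

```python
import os

def list_nvidia_gpus(proc_dir="/proc/driver/nvidia/gpus"):
    """On Linux, the NVIDIA driver exposes one /proc entry per GPU it found.
    CUDA tools enumerate devices similarly at startup, so a second card is
    usable for compute as soon as the driver detects it -- no SLI required."""
    if not os.path.isdir(proc_dir):
        return []  # driver not loaded, or no NVIDIA GPUs present
    return sorted(os.listdir(proc_dir))

print(f"Detected {len(list_nvidia_gpus())} NVIDIA GPU(s)")
```

In practice you'd just run `nvidia-smi` to confirm both cards show up; whether a given training job actually uses both is then up to the framework.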
  3. Oh gotcha! I was under the impression that multiple GPUs would allow the NVIDIA ML software (DIGITS) to take advantage of the additional resources and train the model faster, etc. If you look at the NVIDIA DevBox, it has four Titan X's in it, I believe. So I made the assumption that NVIDIA has found a way to make multiple GPUs efficient and powerful when applied to machine learning use cases. I'll do my research for sure. What are your thoughts on M.2 for storage in this machine?
  4. Sooo yeah, those chips are OLD. Plus I wasn't able to confirm that they actually work. From the file I have on the unit, the original machine came into a repair shop for temperature issues. The diagnostic lights were showing that CPU A and CPU B were basically running hotter than what was allowed. So there might actually be a problem with these CPUs. That is why, for this build, I would just buy an i7.
  5. @Dash Lambda Why Intel? I really don't have a great reason. I would say it's a mix of: it's the brand name that I know and trust, I know it will run Windows no problem, and it seems like the easiest processor to start with (like other people are using Intel). This decision is based on the fact that this is my first build, and since I am still learning what's "good" and what's "bad", I'll cut a corner on the processor for simplicity. SLI: From some of the research I have done on machine learning boxes, they all have multiple GPUs. While I understand I may not see a 2x performance increase, it will create a bigger computing pool for the algorithms to pull from. The more CUDA cores I can have available, the more parallel computing I can do. Why the case: I am an Apple fanboy. I'll be the first to admit that. I hope that doesn't discourage anyone from helping me out... I really like the design of this case and want something that stands out, that most people haven't done yet. I think it would be cool to have a modded Mac Pro case. So bottom line, it's an aesthetic choice.
  6. @straight_stewie The model of the Xeons is the 5150. The ARK page can be found here: http://ark.intel.com/products/27218/Intel-Xeon-Processor-5150-4M-Cache-2_66-GHz-1333-MHz-FSB
  7. Hello All, This is my first post to the forum, so please take it easy on me. I recently bought a 2006 Mac Pro (for $1). The unit has a bad motherboard, so most of the components are either too old or not worth reusing. It did, however, have two Xeon processors inside, which I plan on using for a later project. The parts I plan on reusing are the case itself, the 980-watt power supply, and the fans. The rest is junk. Here's my idea: I want to build a machine learning box that I can use for my Master's program. I want to use it to train models, for some neural net applications (training/testing), things like that. I have been looking all over the internet for tips and guides on what makes the best bang-for-your-buck machine. I have used some gaming builds as a starting point, then tweaked them for my use case. The machine will run Ubuntu 14.04 or 15.04 primarily, with possible dual-boot support for Windows.

Build requirements:
- Intel chip: i7 or Xeon (based on the total budget for the project, I believe the i7 is the only option).
- NVIDIA GPU(s): For the initial build phase I will buy one GPU, with the intention of buying a second down the road. Thus SLI is important. Plus the developer support with DIGITS, CUDA cores, etc. is important.
- Supporting motherboard: Ideally, having a motherboard that would drop straight into the Mac Pro case would be AWESOME! But I am almost certain some mods to the case will have to be made. Also, fast I/O, PCI expandability, and M.2 support are feature requests.
Some things I found that should be important for this build:

PCI Express lanes -> I believe 40 lanes is a requirement because I plan on having two GPUs (not sure yet which NVIDIA card I want), and I would like the lanes to be divided equally between the two GPUs (16x/16x). That way I avoid any possible bottlenecks.

RAM -> From what I have read, for this use case it is best to have enough RAM (memory) to hold entire datasets. My plan is to start with 32 GB initially, then move up to 64 GB as a step 2, then possibly to 128 GB as a step 3. So expandable RAM is important. At the time of writing, I believe quad-channel memory is the way to go, the benefit being speed. What do you guys think?

M.2 -> I want the system to be extremely fast. I would like to use an M.2 solution for holding the OS (Ubuntu), OS-related files, and applications. All the datasets will be held in secondary storage (a Western Digital Red hard drive).

Questions I have:
1) When deciding on a CPU, should I go with more cores and a slower clock, or fewer cores and a higher clock (for this use case)?
2) Do my assumptions about PCI Express lanes hold true? Would there be a bottleneck if one GPU is getting 16 lanes and the other is getting 8?
3) Are there any guides that you guys know of that I could follow?
4) Which NVIDIA GPUs have the best SLI support? Or which GPUs have been found to work best for this use case? (I know the Titan X is great, but please keep the budget in mind.)
5) Do you guys have any suggestions on the build?

Summary: I have made two different PCPartPicker lists that can be found at: https://pcpartpicker.com/user/sprichard/saved/#view=8TXRBm They are listed under "Mac Pro $1000 Build" and "Mac Pro $1500 Build". As you can see from the link, I have builds for a $1000 budget and a $1500 budget. I am working on a $2000 budget build; I would say $2000 is the limit for this build. Let me know what you guys think!
I am open to all suggestions, as I have not purchased anything, and I am still trying to get my head wrapped around all that I need to know to make a badass Mac Pro ML box.
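The RAM-sizing and lane-budget reasoning above can be sanity-checked with back-of-envelope arithmetic. The example dataset below (100k colour images at 256x256, stored as float32) is purely illustrative, not from the build:

```python
def dataset_gib(num_samples, values_per_sample, bytes_per_value=4):
    """Rough in-memory size of a dense float32 dataset, in GiB."""
    return num_samples * values_per_sample * bytes_per_value / 2**30

# 100k colour images at 256x256x3, as float32:
size = dataset_gib(100_000, 256 * 256 * 3)
print(f"{size:.1f} GiB")  # ~73.2 GiB: too big for 32 GB, fits in 128 GB

# PCIe lane budget: two GPUs at x16 each, plus an M.2 NVMe drive at x4.
lanes_needed = 2 * 16 + 4
print(lanes_needed)  # 36 lanes, hence wanting a 40-lane CPU
```

This kind of estimate makes the staged RAM plan (32 -> 64 -> 128 GB) concrete: each step roughly doubles the size of dataset that fits entirely in memory.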