Epic Workstation, RAM, Deep Learning, Advice?

Goal: Building a strong workstation PC that I will regularly use for training Deep Learning (DL) models with PyTorch, plus a decent amount of personal gaming time.

 

Scroll to bottom to skip to tech questions.


Currently I am running an FX-8350 with a 980 Ti Hybrid (overclocked) and 16 GB of DDR3 RAM. This has served me well, but it has two key drawbacks: the CPU is old and slow, and I don't have enough RAM. In many scenarios I'm training a model and, since I can't load my entire dataset (20 GB, so not that big) into memory, I'm reading from my SSD thousands of times every iteration. That obviously eats into my SSD's lifetime, but it also means I spend roughly 80% of my time waiting for a sample to load; my models train about 4-6x faster when the dataset is in memory. That's a huge win for me, and I'm also going to be working with some bigger datasets this year and would love to keep as many of them as possible in memory while training. Last note: I am considering getting a new graphics card, either now or a little later, depending on my questions below about PCIe 3.0 vs 4.0 and any upcoming NVIDIA launches.
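To be concrete, here's a rough sketch of the kind of caching I mean, written as a map-style PyTorch Dataset (load_sample is just a placeholder for whatever decodes one of my files):

```python
import torch
from torch.utils.data import Dataset

class InMemoryDataset(Dataset):
    """Reads every sample into RAM once, so training never
    touches the SSD again after the initial load."""
    def __init__(self, paths):
        # load_sample() is a placeholder for whatever decodes one file
        self.samples = [load_sample(p) for p in paths]

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        x, y = self.samples[idx]
        return torch.as_tensor(x), torch.as_tensor(y)
```

With 128 GB or 256 GB of RAM, the whole 20 GB dataset (and bigger ones) fits in that list comfortably.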

 

Note on CPU usage: I use it for games, as mentioned. For DL, I typically do the usual reading/writing/preparing of data for the GPU, but also sometimes computations like FFTs and other matrix manipulations in NumPy. Moral of the story: I don't believe my CPU usage is that strenuous at the moment, but grabbing a 16-core or above would give me more headroom for future projects.
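Roughly what my pipeline looks like; fft_features stands in for my real preprocessing, train_set for a Dataset like the one above, and num_workers is where extra cores would pay off:

```python
import numpy as np
from torch.utils.data import DataLoader

def fft_features(signal):
    # stand-in for my real preprocessing: magnitude spectrum via NumPy
    return np.abs(np.fft.rfft(signal)).astype(np.float32)

# each worker process runs the dataset's __getitem__ (and any NumPy
# preprocessing inside it) on its own core, in parallel with training
loader = DataLoader(train_set, batch_size=64, num_workers=8, pin_memory=True)
for batch, labels in loader:
    batch = batch.to("cuda", non_blocking=True)  # hand the prepared batch to the GPU
```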

 

This is my current idea of a build.

 

 

Assumptions:

- Ryzen 9 3950X supports up to 128 GB of DDR4 RAM
- i9-10980XE supports up to 256 GB of DDR4 RAM

- If I use 256 GB of DDR4 RAM, I would need to upgrade from Windows 10 Home to 10 Pro (Home caps out at 128 GB)


TIME FOR THE TECH QUESTIONS:

1) Paying a few hundred dollars more for the i9-10980XE gives me the option of double the RAM in the future... is it worth it? I know the 3950X is very powerful and much better at sustaining all-core boosts, but the Intel chip might give me better performance in some games. Opinions?
2) Quad-channel memory with the Intel i9 would technically have more bandwidth than the Ryzen 9's dual-channel memory... correct? (See my back-of-envelope numbers after question 5.)

3) PCIe 3.0 vs 4.0: if I go with the Intel i9 and limit myself to PCIe 3.0, would I be hurting my GPU upgrade options in the near to mid term? Any other advice?
4) I was considering getting an RTX 2080 Ti right now... but instead I was thinking of waiting for NVIDIA's announcements later this year. Based on the rumors, does it seem like the next series of NVIDIA GPUs is going to be a big enough jump to be worth the money? Is it worth waiting for an RTX 3080 Ti (if that's a thing)?

 

5) OPINIONS?!?!?!
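For question 2, here are my back-of-envelope numbers, assuming each chip runs its officially supported memory speed (DDR4-3200 on the 3950X, DDR4-2933 on the i9-10980XE):

```python
# theoretical peak bandwidth = channels * transfer rate (MT/s) * 8 bytes per transfer
dual_channel_3200 = 2 * 3200e6 * 8 / 1e9  # ~51.2 GB/s (Ryzen 9 3950X)
quad_channel_2933 = 4 * 2933e6 * 8 / 1e9  # ~93.9 GB/s (i9-10980XE)
```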

 

Thanks so much for your time everyone! Cheers!

 


A Threadripper system makes more sense for you.



25 minutes ago, Canto666Hades said:

5) OPINIONS?!?!?!

I've worked with RNNs (Recurrent Neural Networks) in depth.

Here is my experience:

 

No AMD. The tooling is built to take full advantage of nVidia for your graphics card; while tools exist for AMD, they are sorely lacking IME.
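For what it's worth, this is the payoff of that nVidia maturity: the standard PyTorch device idiom below just works out of the box (MyModel is only a placeholder for your network):

```python
import torch

# standard PyTorch idiom: use the CUDA stack if it's present, else fall back to CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = MyModel().to(device)  # MyModel is a placeholder for your network
```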

I have used (granted, non-Ryzen) AMD CPUs and the results weren't all that stellar compared to Intel chips of the same era, so I stuck with Intel.

Have you considered dual Xeons in a workstation? The older generations use ECC DDR3 RAM, but that RAM is dirt cheap (I'm currently running 64 GB of ECC DDR3 in quad channel and it set me back less than $100).

 

PCIe 3 vs 4... well, it depends on how much you want to "future proof" your build and how invested you are in DL (in other words, are you making a living doing DL, or is it just a hobby?).



1 hour ago, Radium_Angel said:

No AMD. The tooling is built to take full advantage of nVidia for your graphics card; while tools exist for AMD, they are sorely lacking IME.

I have used (granted, non-Ryzen) AMD CPUs and the results weren't all that stellar compared to Intel chips of the same era, so I stuck with Intel.

I'm curious to know more about the CPU aspect of this in DL. Since most of the DL I've worked with is image/signal processing, the heavy workload is on my GPU, and I've been unaware of any CPU-based tools for DL, specifically any that differ between AMD and Intel chips.

Although I'm not surprised that non-Ryzen CPUs turned out poor results for you, considering the horrible track record AMD had up until Ryzen.


3 hours ago, Canto666Hades said:

Considering the horrible track record AMD had up until Ryzen.

That's probably the crux of the matter right there.

Before I started in on GPU RNNs, I ran them purely CPU-bound on some chips I had lying around, under Ubuntu, following this guide: https://karpathy.github.io/2015/05/21/rnn-effectiveness/. I could get a few iterations per second on the data from an Intel C2D chip, but it took several seconds per iteration on an Athlon X4. So I then switched to a compatible GPU, first going the Radeon route, then comparing it to an nVidia card.

 

This was some years ago, when ML was still in its infancy as far as private citizens being able to work with it, but since ML has skyrocketed in popularity, everything I've seen has been aimed at the nVidia/Intel side.

 

YMMV naturally, but if you are dumping mega money into this, I'd play it safe and avoid AMD products.



  • 9 months later...
On 1/1/2020 at 1:27 AM, Canto666Hades said:

3) PCIe 3.0 vs 4.0: if I go with the Intel i9 and limit myself to PCIe 3.0, would I be hurting my GPU upgrade options in the near to mid term? Any other advice?

 

You may find this PCIe 3 vs 4 comparison helpful.

 

The majority of AI/DL/ML applications are optimized for Intel and NVIDIA, so you may want to stick to that combo.

 

For me the sweet spot would be a 10900K paired with 128 GB of RAM and a TITAN RTX or 3090... Them CUDA/TENSOR CORES ❤️

