Elimcn1

Is a Ryzen 3100 enough for deep learning?

Recommended Posts

Posted · Original Poster

Thinking of building a proper rig for programming, particularly data science. As such, I will most likely be utilizing some form of deep learning. I'm on a tight budget, so I was wondering if 4 cores and 8 threads is enough. 

3 minutes ago, Elimcn1 said:

Thinking of building a proper rig for programming, particularly data science. As such, I will most likely be utilizing some form of deep learning. I'm on a tight budget, so I was wondering if 4 cores and 8 threads is enough. 

Coding itself doesn't demand much performance in general; what matters more is how big your dataset is.


If you are planning on learning the concepts of deep learning and implementing the basics, then sure. If you plan on using deep learning for anything serious, or want to dabble more than puddle-deep, you will need a GPU. For the smooth and easy path you need an NVIDIA GPU for CUDA. The vast majority of deep learning toolkits are designed to run on GPGPU; a CPU works but is dreadfully slow.
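In practice the toolkits make the CPU/GPU split a one-liner. A minimal PyTorch sketch (assuming PyTorch is installed; the model and tensor sizes here are hypothetical, and the code falls back to CPU when no CUDA device is present):

```python
import torch

# Standard device-selection idiom: use the GPU if CUDA is available,
# otherwise fall back to the (much slower) CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(16, 4).to(device)   # toy model, hypothetical sizes
x = torch.randn(8, 16, device=device)
y = model(x)                                # same code path either way
```

The same script runs unchanged on a GPU-less Ryzen box and on a CUDA machine; only the speed differs.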

42 minutes ago, Elimcn1 said:

Is a Ryzen 3100 enough for deep learning?

This kind of question belies a certain misunderstanding: you can do data science and deep learning even on a proverbial potato. The specs only decide how fast the process will be, not whether you can do it at all. Which is to say: yes, a Ryzen 3100 can do deep learning and data science, but whether it can handle the kind of project you have in mind depends on the project -- how much data you have to sift through, what you have to do with it, and how fast or slow is acceptable to you.


Hand, n. A singular instrument worn at the end of the human arm and commonly thrust into somebody’s pocket.


@Elimcn1 I'd highly suggest buying a good enough general-purpose laptop, or building a good enough computer. Don't spend a lot on a computer just for programming -- so yes, that CPU is completely fine.

 

Whenever you need a lot of extra power, my man, there are cloud services for that :)


You really don't need much. If you work with lots of data -- say you are doing large-scale image categorization -- you will be bottlenecked by the amount of RAM you have and your storage read speed before hitting the CPU's limits. So the 3100 is WAY more than you really need for programming.
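When RAM is the bottleneck, the usual trick is to stream batches from disk instead of loading the whole dataset at once. A minimal sketch in plain Python (the file names and the `load` callback are hypothetical stand-ins; a real loader would decode actual image files):

```python
# Minimal sketch of a streaming batch loader: yields small batches,
# so only one batch's worth of data ever sits in RAM at a time.
def batch_stream(paths, batch_size, load=lambda p: p):
    """Yield lists of loaded samples, batch_size at a time.
    `load` stands in for real image decoding (hypothetical)."""
    batch = []
    for p in paths:
        batch.append(load(p))       # decode one sample at a time
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:                       # don't drop the final partial batch
        yield batch

# Toy usage with fake "paths" -- 10 samples in batches of 4.
batches = list(batch_stream([f"img_{i}.png" for i in range(10)], 4))
```

The frameworks' own data loaders (e.g. PyTorch's `DataLoader`) do the same thing with prefetching and worker processes on top.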

 

If you go the CUDA dev route, then you can cut back on CPU in favor of more RAM and CUDA cores. But I don't think there is anything lower than the 3100 in the 3000 series; if you go with (and if they still sell) the 2XXX series, something like the 2500U is great. I don't know the price difference.

 

I run a 2300U and do machine learning, C++, and C# dev on it without any problems. It's more than enough for programming anything.

8 hours ago, Elimcn1 said:

Thinking of building a proper rig for programming, particularly data science. As such, I will most likely be utilizing some form of deep learning. I'm on a tight budget, so I was wondering if 4 cores and 8 threads is enough. 

As other people said, sure. I get by with an FX-6300 without many problems. If you want to get serious about it, then what you'll actually need is a GPU. Even the cheapest of GPUs is way faster than a top-end CPU for that kind of work.

 

If you're just learning, you can simply use free cloud services such as Google's Colab. I used to use it a lot while on my Chromebook, so hardware specs don't really matter that much.


FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga


Like every other reply has said, you do not need a beast of a PC to get into development, even for deep learning -- well, not for getting started at least. The amount of processing required scales with the complexity of the task at hand and the scale of your dataset. I do a lot of deep learning projects for work, but I work from home and have a few PCs. For development you can use almost anything; it's not that demanding. But actually accelerating a deep learning application will not scale well on even the best of CPUs. My main workstation for work (recently convinced the boss to pay for my upgrade, hehe) is based on a Threadripper 3970X, which is 32 whole cores, and even that is kinda lame for deep learning workloads.

Most APIs and other tools for deep learning acceleration rely on CUDA, so NVIDIA GPUs are the way to go. Even a 1060 would outperform a 32-core CPU because of how they work fundamentally: CPUs have a few very powerful cores, while GPUs have thousands of CUDA cores, which are individually weak and not very flexible in the workloads they can run, since they aren't designed with general-purpose work in mind. But GPUs are very good at doing lots of math very, very fast, and in that sense they are king here.

IMO, just start with what you have and learn, as deep learning is not something a developer picks up overnight. You can always use AWS spot instances to rent a box to run your code (and shut it down after) if you get to the point where your code is too heavy for your machine. Then, if the time comes and you want to build a workstation, get a few GPUs for your system, plus lots of RAM for virtualization, since Windows is not the best for deep learning -- the cutting-edge APIs mostly exist on Linux as of now.
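To put a number on "lots of math": nearly all the work in a dense network is matrix multiplies, and you can count the FLOPs per sample directly. A back-of-envelope sketch (the layer sizes are hypothetical, and only the matmuls are counted):

```python
def mlp_forward_flops(layer_sizes):
    """Rough FLOP count for one forward pass through dense layers:
    a Linear(n_in, n_out) layer costs about 2 * n_in * n_out FLOPs
    per sample (one multiply + one add per weight)."""
    return sum(2 * n_in * n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# Small MNIST-sized MLP (hypothetical): ~1.07 MFLOPs per sample, so a
# 60k-sample epoch is ~64 GFLOPs for the forward pass alone -- trivial
# for a GPU measured in TFLOPs, a slog for a handful of CPU cores.
flops = mlp_forward_flops([784, 512, 256, 10])
```

Convolutional nets and transformers are orders of magnitude heavier per sample, which is where the thousands-of-weak-cores design pays off.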

