
CUDA for programming

Hi P

Could someone please explain to me in simple terms what CUDA is used for in programming? I've seen answers like... for deep learning, vectors, heavy computing, etc.

 

1.- So in simple terms, what is it?

2.- Where could you use it aside from anything related to Machine / Deep learning?

 

I did try to do some research on it, but with my current knowledge I couldn't really understand what was going on aside from faster output of a bunch of numbers in the terminal... or something.

 

It's just pure curiosity at this point, thank you very much :D

 


You can use it for deep learning, which is really just one kind of machine learning. It helps with nearly any workload that benefits from computing many things at once. Note that a single GPU core is not more efficient than a normal CPU core; the benefit is throughput. Even if each calculation is, say, 1000 times slower, being able to run 10000 more calculations at a time still makes you 10 times faster in reality.

 

My basic usage so far has been image compositing and finite element analysis. It should be decent for point cloud computing too, or octree / bounding box work, but I have never touched those subjects with CUDA. It is not as simple as it looks to me. Luckily I worked on an already-developed project which we had to optimize.
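For a rough idea of what the compositing side looks like, here is a minimal sketch of an alpha-blend kernel, one thread per pixel. The packed float4 pixel layout and the names are just for illustration, not from the actual project I worked on.

// Hypothetical sketch: alpha-blend a foreground image over a background, one thread per pixel.
// Pixels are assumed to be packed as float4 (RGBA), which is just an illustrative layout.
#include <cuda_runtime.h>

__global__ void alphaBlend(const float4* fg, const float4* bg, float4* out,
                           int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;   // pixel column handled by this thread
    int y = blockIdx.y * blockDim.y + threadIdx.y;   // pixel row handled by this thread
    if (x >= width || y >= height) return;           // guard threads that fall past the edge

    int i = y * width + x;
    float a = fg[i].w;                               // foreground alpha
    out[i].x = a * fg[i].x + (1.0f - a) * bg[i].x;   // blend each colour channel
    out[i].y = a * fg[i].y + (1.0f - a) * bg[i].y;
    out[i].z = a * fg[i].z + (1.0f - a) * bg[i].z;
    out[i].w = 1.0f;
}

// Launch: 16x16 threads per block, enough blocks to cover the whole image, e.g.
// dim3 block(16, 16);
// dim3 grid((width + 15) / 16, (height + 15) / 16);
// alphaBlend<<<grid, block>>>(d_fg, d_bg, d_out, width, height);

Every pixel gets its own thread, which is exactly the kind of "many independent small calculations" that GPUs are built for.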


I am not an expert on CUDA or anything like that, but as far as I know CUDA is just an API to interact and work with the graphics card in your computer. Graphics cards are heavily optimized for calculations on matrices and vectors, and they can do them in parallel. That is of course very often needed when doing graphics calculations, but it can also be used in other areas, like machine learning. Here are some sources for further reading, plus a small example below:

https://www.infoworld.com/article/3299703/what-is-cuda-parallel-programming-for-gpus.html

https://en.wikipedia.org/wiki/CUDA
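To make the matrix/vector point a bit more concrete, here is a minimal sketch (a toy example, not taken from those articles) of SAXPY, y = a*x + y, where every vector element is handled by its own GPU thread:

// Minimal SAXPY sketch: each thread computes one element of y = a*x + y.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void saxpy(int n, float a, const float* x, float* y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // unique index for this thread
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;                          // one million elements
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));       // unified memory, visible to CPU and GPU
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;       // enough blocks to cover all n elements
    saxpy<<<blocks, threads>>>(n, 2.0f, x, y);      // launch: thousands of threads at once
    cudaDeviceSynchronize();                        // wait for the GPU to finish

    printf("y[0] = %f\n", y[0]);                    // expect 4.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}

The CPU version of this would loop over a million elements one by one; here the loop is replaced by a million lightweight threads.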


GPUs are just specialised processors, specialised in relatively simple (for a chip) but massively parallel computations. Stream processors and CUDA cores are miniature compute cores. CUDA (and OpenCL) are basically standardised APIs (as in tools, even separate programming languages) to harness those cores in a useful and comfortable manner. Applications that want or need that ability use CUDA/OpenCL to implement their use case. That's where the "oh, it's used for machine learning" comes in: that's just one application of CUDA/OpenCL. They can be used for anything they are useful for; you just need to code it in.

 

There is a bunch more to GPUs than just those cores, but the cores are what matters most for general purpose computing.
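If you want to see what that standardised API feels like, here is a tiny hedged example that just asks the CUDA runtime to describe the hardware it is sitting on:

// Sketch: query the GPU through the CUDA runtime API.
#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);              // properties of GPU number 0

    printf("GPU: %s\n", prop.name);
    printf("Multiprocessors: %d\n", prop.multiProcessorCount);
    printf("Max threads per block: %d\n", prop.maxThreadsPerBlock);
    printf("Global memory: %zu MB\n", prop.totalGlobalMem >> 20);
    return 0;
}

Kernels you write get scheduled across those multiprocessors by the driver; you never address an individual core yourself.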


9 hours ago, DevBlox said:

They can be used for anything they are useful for, you just need to code it in; the cores are what matters most for general purpose computing.

Could you give me a few examples of where to use them? (aside from ML, I'm still a bit lost)

 

CUDA caught my attention, but I'm nowhere close to learning Machine Learning, so I want to know where else I could make use of such knowledge :)

 


3 hours ago, Hi P said:

Could you give me a few examples of where to use them? (aside from ML, I'm still a bit lost)

 

CUDA caught my attention, but I'm nowhere close to learning Machine Learning, so I want to know where else I could make use of such knowledge :)

 

Some more examples off the top of my head:

  • Video/Image rendering, transcoding
  • Molecular physics simulation (I once had the honour to work with synthetic biologists that used those simulations to try and predict if their models work)
  • Bitcoin mining
  • Big Data occasionally uses CUDA I think, depends on the workload
  • Material science simulations
  • Aerospace engineering simulations
  • I bet CERN uses CUDA to churn the massive amounts of data they generate (don't quote me on this one)
  • Accelerating CAD software

1. Compute Unified Device Architecture: CUDA.  CUDA is an extension of C that adds parallelism to the language.  The parallel processing capabilities of GPUs are a big part of why GPUs can do some things faster than CPUs.  You can write C programs that have parallel processing capabilities, but it's laborious.  CUDA makes it easy to do parallel processing in C.
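A rough illustration of that point (just a sketch, not anybody's production code): the same loop written serially in plain C, then as a CUDA kernel, where the loop disappears and each thread handles one element.

// Plain C: the CPU walks through the array one element at a time.
void square_cpu(float* data, int n)
{
    for (int i = 0; i < n; ++i)
        data[i] = data[i] * data[i];
}

// CUDA: the loop body becomes a kernel; the index comes from the thread's position.
__global__ void square_gpu(float* data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] = data[i] * data[i];
}

// Launched with enough threads to cover the whole array, e.g.:
// square_gpu<<<(n + 255) / 256, 256>>>(d_data, n);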

 

It used to be that CUDA in other languages meant you could use libraries written in CUDA.  It also meant you wrote code that looks a lot like C.  It's really painful.  C is as close to the metal as you get unless you write assembly or machine code.  Python, for example, was designed so you could articulate computational ideas rather than figure out how to make the computer actually compute.  You weren't supposed to need to say, or know, that one variable is a short unsigned integer with a certain binary length and another is a double-precision float with a certain binary length.  You do need that in C because it determines how much memory to allocate.  Painful.  CUDA Python requires you to define that sort of thing.  So "CUDA in language X" basically means writing C in that language.  You might as well use C.
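Here is a sketch of the kind of bookkeeping I mean (names are just illustrative): in CUDA C you spell out the exact type and byte count of every buffer, and you copy data between the CPU's memory and the GPU's memory yourself.

#include <cuda_runtime.h>
#include <cstdlib>

int main()
{
    const int n = 1024;
    size_t bytes = n * sizeof(float);           // you must know the size of a float

    float* h_data = (float*)malloc(bytes);      // buffer in ordinary (host) memory
    for (int i = 0; i < n; ++i) h_data[i] = (float)i;

    float* d_data = nullptr;
    cudaMalloc((void**)&d_data, bytes);         // buffer in the GPU's (device) memory
    cudaMemcpy(d_data, h_data, bytes, cudaMemcpyHostToDevice);   // CPU -> GPU

    // ... launch kernels that work on d_data here ...

    cudaMemcpy(h_data, d_data, bytes, cudaMemcpyDeviceToHost);   // GPU -> CPU
    cudaFree(d_data);
    free(h_data);
    return 0;
}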

 

NVIDIA has tons of awesome libraries in CUDA that let you access the blocks of the chip they designed.  Adding ray tracing capabilities to a chip means the algorithms are hard-wired in the chip.  The easiest way to let you use close-to-the-metal stuff like that is to write the libraries in their extension of C, i.e. CUDA.
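One hedged example of such a library: cuBLAS, NVIDIA's CUDA linear-algebra library. You hand device pointers to routines NVIDIA has already tuned for their hardware instead of writing kernels yourself (link with -lcublas; the function name below is just a wrapper for illustration).

#include <cublas_v2.h>
#include <cuda_runtime.h>

// Computes d_y = alpha * d_x + d_y entirely on the GPU using NVIDIA's tuned SAXPY.
void scaled_add(int n, float alpha, const float* d_x, float* d_y)
{
    cublasHandle_t handle;
    cublasCreate(&handle);                       // set up a cuBLAS context
    cublasSaxpy(handle, n, &alpha, d_x, 1, d_y, 1);
    cublasDestroy(handle);
}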

 

2. Machine learning and AI use a variety of complex algorithms, which are made up of other more basic algorithms.  I haven't taken the time to explore what NVIDIA means by machine learning and AI.  They could mean that they've embedded the complex algorithms into the chips, or that they've embedded the more basic numeric algorithms needed for machine learning and AI.  I figure it's probably the latter.  They probably have CUDA libraries that make it easy to write things like neural nets or support vector machines (SVMs) and that sort of thing.  Neural nets are pretty specialized.  SVMs and other methods use traditional mathematical and computational elements that NVIDIA already builds into its chips.

 

The whole machine learning/AI GPGPU thing is more about advertising than specialized design.  Intel came out with neural net USB dongles; those make neural nets faster and aren't useful for much else.  NVIDIA's GPGPUs still do all the other stuff they did before.  The GPGPUs have complex mathematical libraries built in, as well as libraries for complicated and complex computation.  I do network stuff.  The math and algorithms used when analyzing networks get big fast.  You know P vs NP?  That's about how the amount of computation grows as you deal with larger and larger problems.  If P ≠ NP, some network problems become bigger than a computer can ever compute; the work I do could literally take forever.  NVIDIA has some of these problems built into the hardware.  A calculation that may take days (if it doesn't crash) on my crappy work computer, which isn't NVIDIA and therefore can't use CUDA, I could do in a few minutes, or a few seconds, on an NVIDIA Tesla card.  (AMD cards are really awkward to program for.)

