
CPU vs. GPU for offline rendering

japers

I've been wondering about this for a while now, and it seems like this might be a great place to ask.

With the arrival of high-core-count CPUs from AMD, and the likelihood that core counts will keep climbing with further development, are CPUs once again better value than GPUs for offline rendering in 3D applications? For those not initiated into this side of the graphics industry: GPU renderers include V-Ray RT, FStorm and Octane, while CPU renderers include V-Ray, Corona and RenderMan. Historically, CPUs were used to render virtually all CGI outside of games. Over the last few years more GPU renderers have appeared, and they are dramatically faster than most CPU renderers. CPU renderers, however, have had decades of use, are very reliable, and usually have codebases that are strictly CPU-dependent.

 

An RTX 2080 Ti costs around £1,400 and a Threadripper 2990WX costs about the same. But does the trajectory of CPU core counts overtake the falling cost of VRAM in the future? Will the development of Infinity Fabric accelerate CPU core counts faster than GPU development can lower costs?
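To frame what I mean by "better value", here is a rough sketch of the comparison. The prices are roughly right for today, but the throughput numbers are made-up placeholders, since real figures depend entirely on the renderer and the scene:

```python
# Rough value comparison: render throughput per pound spent.
# Prices are approximate street prices; the Msamples/s figures are
# placeholders, NOT real benchmark results.
hardware = {
    "RTX 2080 Ti + GPU renderer":         {"price_gbp": 1400, "msamples_per_s": 300},
    "Threadripper 2990WX + CPU renderer": {"price_gbp": 1400, "msamples_per_s": 120},
}

for name, spec in hardware.items():
    value = spec["msamples_per_s"] / spec["price_gbp"]
    print(f"{name}: {value:.2f} Msamples/s per GBP")
```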

 

It would be a really interesting video to see made, but it's probably too niche for LTT given how industry-specific it is.


Although that GPU data seems off: the RTX 2060 Super performing better than the RTX 2080 Super?



Just now, Juular said:

Thanks. Looking at these, the 64-core 3990X should almost outperform the Titan RTX, given that the jump from 16 to 32 cores gave an almost linear improvement.
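To make that extrapolation explicit, this is roughly the arithmetic, with placeholder scores standing in for whatever benchmark numbers are in those links:

```python
# Linear-scaling extrapolation from core count. Assumes the renderer keeps
# scaling the way it did from 16 to 32 cores. Scores are placeholders.
score_16c = 1000                            # hypothetical 16-core score
score_32c = 1900                            # hypothetical 32-core score (~1.9x)

efficiency = (score_32c / score_16c) / 2    # fraction of the 2x doubling kept (~0.95)
score_64c_est = score_32c * 2 * efficiency  # apply the same efficiency for 32 -> 64 cores

print(f"Estimated 64-core score: {score_64c_est:.0f}")
```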


Probably, but looking at the GPU data, this relatively new major version is still being optimized, so it might change. In any case, there's also a hybrid rendering mode that uses both the CPU and GPU, but AFAIK it still loses somewhat in quality to a pure CPU or GPU renderer.



Just now, Juular said:

Probably, but looking at the GPU data, this relatively new major version is still being optimized, so it might change. In any case, there's also a hybrid rendering mode that uses both the CPU and GPU, but AFAIK it still loses somewhat in quality to a pure CPU or GPU renderer.

Well, the issue with a lot of GPU renderers is that they lack features, because they don't have decades of CPU-developed libraries to pull from. I guess this is a question that might never be answered, given that both technologies are in constant development.


Take a look at Nvidia's SIGGRAPH presentation from 2018. An RTX server at $500,000 isn't cheap, but it supposedly outperforms a traditional $2M render farm.

 

This isn't meant for home office use, of course, but I think it's a good indicator of where the industry as a whole is headed.



13 minutes ago, Eigenvektor said:

Take a look at Nvidia's SIGGRAPH presentation from 2018. An RTX server at $500,000 isn't cheap, but it supposedly outperforms a traditional $2M render farm.

 

This isn't meant for home office use, of course, but I think it's a good indicator of where the industry as a whole is headed.

This is true, but it's Nvidia saying that, and they're the people who make GPUs. It might well be true at the scale where you'd need a $2M render farm or specialist Nvidia nodes. But what about small studios with a single server cabinet full of nodes, or freelancers rendering locally? There's likely a sweet spot where CPUs are more cost effective, especially if that 64-core SKU performs as expected. I'd love to see that mpath data for the Quadro RTX 8000.
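To put that sweet spot in concrete terms, here's the kind of back-of-the-envelope reasoning I mean, assuming roughly price-matched CPU and GPU boxes. The speed-up factor, VRAM limit and out-of-core penalty below are all made-up placeholders, just to show the shape of the calculation:

```python
# Sketch of the small-studio "sweet spot": a GPU node wins while the scene
# fits in VRAM, but falls back hard once it has to go out-of-core.
# The speed-up, VRAM size and penalty are placeholder assumptions.
def better_value_node(scene_gb, gpu_vram_gb=11, gpu_speedup=3.0, out_of_core_penalty=4.0):
    """Return which price-matched node type renders a given scene faster."""
    effective = gpu_speedup if scene_gb <= gpu_vram_gb else gpu_speedup / out_of_core_penalty
    return "GPU node" if effective > 1.0 else "CPU node"

for scene_gb in (6, 10, 24, 48):            # scene memory footprints in GB
    print(f"{scene_gb} GB scene -> {better_value_node(scene_gb)}")
```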


Just now, japers said:

This is true, but it's Nvidia saying that, and they're the people who make GPUs. It might well be true at the scale where you'd need a $2M render farm. But what about small studios with a single server cabinet full of nodes, or freelancers rendering locally?

Sure, you need to take that with a grain of salt; it is marketing, after all.

 

As long as you're not VRAM constrained I'd expect GPUs to dominate CPUs when it comes to anything that lends itself well to SIMD. High-end GPUs have thousands of stream processors whereas the top-end CPU now has "only" 64 cores/128 threads. CPUs are a lot more flexible, so if you have lots of independent decision making, I'd expect CPUs to be faster. But if you have a huge data set where you need to perform the exact same operation again and again, GPUs are still king.
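As a minimal sketch of that distinction (the numbers and operations are arbitrary, just for illustration): the first snippet below is the "same operation over a huge data set" case that maps well onto GPU stream processors and CPU SIMD units; the second is the branch-heavy, data-dependent work where flexible CPU cores tend to cope better.

```python
import numpy as np

# Data-parallel case: the exact same arithmetic applied to millions of samples.
# This is what GPU stream processors (and CPU SIMD units) chew through fastest.
samples = np.random.rand(10_000_000)
shaded = np.sqrt(samples) * 0.8 + 0.1        # one operation pattern, huge data set

# Branch-heavy case: each element takes a data-dependent number of steps.
# Divergent work like this serializes badly on GPUs; flexible CPU cores cope better.
def trace(x):
    depth = 0
    while x > 0.05 and depth < 8:            # loop length depends on the data
        x = x * 0.5 if x > 0.5 else x ** 2
        depth += 1
    return depth

depths = [trace(x) for x in samples[:1000]]  # small subset; pure Python is slow
```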

 

I think you also need to take into account that GPUs are gaining additional specialized cores (like Nvidia's RT cores) that can be a big benefit for tasks like ray tracing, and it'll probably take a while before software catches up and makes optimal use of them.

 

I know this isn't a direct answer, but we can't exactly predict the future. I doubt CPU core counts will continue to grow at an exponential rate, simply due to size constraints, and Infinity Fabric can only do so much. At the same time, if DXR takes off, I think we're going to see more optimizations in that space for a few generations.



3 hours ago, Eigenvektor said:

Sure, you need to take that with a grain of salt; it is marketing, after all.

 

As long as you're not VRAM constrained I'd expect GPUs to dominate CPUs when it comes to anything that lends itself well to SIMD. High-end GPUs have thousands of stream processors whereas the top-end CPU now has "only" 64 cores/128 threads. CPUs are a lot more flexible, so if you have lots of independent decision making, I'd expect CPUs to be faster. But if you have a huge data set where you need to perform the exact same operation again and again, GPUs are still king.

 

I think you also need to take into account that GPUs are gaining additional specialized cores (like Nvidia's RT cores) that can be a big benefit for tasks like ray tracing, and it'll probably take a while before software catches up and makes optimal use of them.

 

I know this isn't a direct answer, but we can't exactly predict the future. I doubt CPU core counts will continue to grow at an exponential rate, simply due to size constraints, and Infinity Fabric can only do so much. At the same time, if DXR takes off, I think we're going to see more optimizations in that space for a few generations.

This is very true. There are only so many times you can use the accuracy-versus-speed excuse, because I'm sure there's a sweet spot that DXR could theoretically hit.
