CPU instead of GPU

Lately, I have been using Fusion 360. It's been great, but when I render a model, it pins my CPU at 100% instead of using my GPU. Why is that, and how do I change it to render on my GPU instead? Thanks


Even the more full-featured Inventor has no GPU rendering support iirc (or it only supports certain workstation cards or something, I don't exactly recall), so I highly doubt Fusion even offers GPU rendering.


Lots of similar software either doesn't support GPU rendering at all or simply stopped supporting it, like Adobe After Effects. So yeah, rendering machines need a big ass CPU.


Fusion has very little GPU acceleration. You can enable DirectX in the settings, but it barely does anything. Ray tracing is also CPU-only, at least until that process becomes available on GPUs. I remember Gamers Nexus made a video on it, but unfortunately most CAD software simply relies on CPUs with good single-thread performance.


2 hours ago, Hackentosher said:

Fusion has very little GPU acceleration. You can enable DirectX in the settings, but it barely does anything. Ray tracing is also CPU-only, at least until that process becomes available on GPUs. I remember Gamers Nexus made a video on it, but unfortunately most CAD software simply relies on CPUs with good single-thread performance.

I've never understood this.  Seems like a lie told just to cover up the laziness of developers.  GPUs of all brands have been able to perform generic compute tasks for quite a long time now, so I see no reason ray tracing couldn't be done on them.  Perhaps if they had a dedicated, specially designed hardware component for it, it would be even better, but even without that it should be massively faster than doing it on the CPU.
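To illustrate what I mean by generic compute, here's a minimal GPGPU sketch. It assumes an NVIDIA card with CUDA drivers and the numba Python package installed (just one illustrative API among many, nothing to do with Fusion or Autodesk), and it runs the same arithmetic across a million elements, which is exactly the kind of coherent work GPUs chew through.

```python
# Hypothetical illustration only: a generic-compute (SAXPY) kernel via numba's CUDA support.
# Assumes an NVIDIA GPU, CUDA drivers, and `pip install numba numpy`; not Fusion/Autodesk code.
import numpy as np
from numba import cuda

@cuda.jit
def saxpy(a, x, y, out):
    i = cuda.grid(1)              # global thread index across the whole launch
    if i < out.shape[0]:
        out[i] = a * x[i] + y[i]  # identical arithmetic on every element: perfectly coherent work

n = 1_000_000
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)
out = np.zeros_like(x)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
saxpy[blocks, threads_per_block](np.float32(2.0), x, y, out)  # one GPU thread per element
```

Whether that generality carries over to a full ray tracer is a separate question, but the raw compute capability has been there for years.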


On 20/07/2018 at 8:34 AM, DannyTechTips said:

Lately, I have been using Fusion 360. It's been great, but when I render a model, it pins my CPU at 100% instead of using my GPU. Why is that, and how do I change it to render on my GPU instead? Thanks

 

20 hours ago, Ryan_Vickers said:

I've never understood this.  Seems like a lie told just to cover up the laziness of developers.  GPUs of all brands have been able to perform generic compute tasks for quite a long time now, so I see no reason ray tracing couldn't be done on them.  Perhaps if they had a dedicated, specially designed hardware component for it, it would be even better, but even without that it should be massively faster than doing it on the CPU.

This is one point of view: https://stackoverflow.com/questions/38029698/why-do-we-use-cpus-for-ray-tracing-instead-of-gpus

It's focused on movie production, but I guess similar arguments apply to modeling.

 

Nvidia's been working on real-time GPU ray tracing lately though, with RTX.


3 hours ago, tikker said:

This is one point of view: https://stackoverflow.com/questions/38029698/why-do-we-use-cpus-for-ray-tracing-instead-of-gpus

It's focused on movie production, but I guess similar arguments apply to modeling.

 

Nvidia's been working on real-time GPU ray tracing lately though, with RTX.

That certainly is an interesting point of view.  Here's my take on it:

Quote
  • GPUs only go fast when everything is in memory. The biggest GPU cards have, what, 12GB or so, and it has to hold everything. Well, we routinely render scenes with 30GB of geometry and that reference 1TB or more of texture. Can't load that into GPU memory, it's literally two orders of magnitude too big. So GPUs are simply unable to deal with our biggest (or even average) scenes. (With CPU renderers, we can page stuff from disk whenever we need. GPUs aren't good at that.)

This starts out sounding reasonable, but he kills it with the last lines. Sorry, but paging from disk is super fucking slow no matter what; the difference between CPU and GPU is not going to matter. Furthermore, this is relevant for movie production but not for something like Fusion, where your whole project can actually fit entirely in VRAM, or at worst partially in VRAM with paging to system RAM as needed.

Quote
  • Don't believe the hype, ray tracing with GPUs is not an obvious win over CPU. GPUs are great at highly coherent work (doing the same things to lots of data at once). Ray tracing is very incoherent (each ray can go a different direction, intersect different objects, shade different materials, access different textures), and so this access pattern degrades GPU performance very severely. It's only very recently that GPU ray tracing could match the best CPU-based ray tracing code, and even though it has surpassed it, it's not by much, not enough to throw out all the old code and start fresh with buggy fragile code for GPUs. And the biggest, most expensive scenes are the ones where GPUs are only marginally faster. Being lots faster on the easy scenes is not really important to us.

This one I have no expertise on, so I can't really say one way or another whether it's true; if it is, that alone would be reason enough to use the CPU over the GPU, regardless of the validity of any other point. However, I find it extremely hard to believe, particularly when I consider the many other kinds of rendering you can do on either CPU or GPU, where the GPU pulls away with a 10:1 lead if not more.
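To get a feel for the divergence argument, here's a toy sketch with purely made-up numbers (warp size, ray count, and material count are all assumptions, not measurements from any real renderer). It models a GPU "warp" that has to execute every shading branch any of its 32 rays needs, so scattered rays cost many serialized passes while coherent rays cost one.

```python
# Toy model of SIMD/warp divergence; all constants are hypothetical assumptions.
import random

WARP_SIZE = 32
NUM_RAYS = 32_000
NUM_MATERIALS = 8  # hypothetical number of distinct shading branches

def passes_needed(materials_in_warp):
    # the warp executes each distinct branch serially, masking off the other lanes
    return len(set(materials_in_warp))

# Coherent: neighbouring rays hit the same material (e.g. primary rays from the camera)
coherent = [i // (NUM_RAYS // NUM_MATERIALS) for i in range(NUM_RAYS)]
# Incoherent: after a few bounces, rays scatter across materials more or less at random
incoherent = [random.randrange(NUM_MATERIALS) for _ in range(NUM_RAYS)]

def avg_passes(materials):
    warps = [materials[i:i + WARP_SIZE] for i in range(0, len(materials), WARP_SIZE)]
    return sum(passes_needed(w) for w in warps) / len(warps)

print(f"coherent rays:   {avg_passes(coherent):.2f} shading passes per warp")
print(f"incoherent rays: {avg_passes(incoherent):.2f} shading passes per warp")
```

With these made-up numbers the incoherent case needs roughly 8x the shading passes, which at least makes the "not an obvious win" claim plausible, even though real GPU renderers sort and batch rays to fight exactly this.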

Quote
  • If you have 50 or 100 man years of production-hardened code in your CPU-based renderer, you just don't throw it out and start over in order to get a 2x speedup. Software engineering effort, stability, and so on, is more important and a bigger cost factor.

This is just stupid. Sure, if GPU rendering is not better, there's no reason to go for it, but if it were better, ignoring it just because of the sunk cost of all that existing CPU code is how you end up like Blockbuster.

Quote
  • Similarly, if your studio has an investment in a data center holding 20,000 CPU cores, all in the smallest, most power and heat-efficient form factor you can, that's also a sunk cost investment you don't just throw away. Replacing them with new machines containing top of the line GPUs vastly increases the cost of your render farm, and they are bigger and produce more heat, so it literally might not fit in your building.

Similar idea to the one above, but it's specific to the big data centres used for movies anyway. For home rendering in something like Fusion, it's totally irrelevant.

Quote
  • Amdahl's Law: The actual "rendering" per se is only one stage in generating the scenes, and GPUs don't help with it. Let's say that it takes 1 hour to fully generate and export the scene to the renderer, and 9 hours to "render", and out of that 9 hours, an hour is reading texture, volumes, and other data from disk. So out of the total 10 hours of how the user experiences rendering (push button until final image is ready), 8 hours is potentially sped up with GPUs. So, even if GPU was 10x as fast as CPU for that part, you go from 10 hours to 1+1+0.8 = nearly 3 hours. So 10x GPU speedup only translates to 3x actual gain. If GPU was 1,000,000x faster than CPU for ray tracing, you still have 1+1+tiny, which is only a 5x speedup.

I believe this one is also entirely movie-centric. Fusion does not work like this; I know because Inventor can begin rendering within seconds of the request, not an hour.
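For what it's worth, the Amdahl's Law arithmetic in that quote does check out on its own terms. Here's a minimal sketch using the answer's own hypothetical hours (1 h scene export, 1 h disk I/O, 8 h tracing), not anything measured in Fusion:

```python
# Amdahl's Law sketch using the hypothetical timings from the quoted answer, not real Fusion numbers.
def total_time(scene_export_h, disk_io_h, trace_h, tracer_speedup):
    """Wall-clock hours when only the ray-tracing portion is accelerated."""
    return scene_export_h + disk_io_h + trace_h / tracer_speedup

baseline = total_time(1, 1, 8, 1)  # 10 hours, everything on the CPU
for speedup in (10, 1_000_000):
    t = total_time(1, 1, 8, speedup)
    print(f"{speedup:>9,}x faster tracing -> {t:.2f} h total, {baseline / t:.1f}x overall")
```

The catch, as I said, is that those fixed 2 hours are a movie-pipeline assumption; in Fusion the non-tracing overhead is seconds, so the overall gain would track the tracer speedup much more closely.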


1 hour ago, Ryan_Vickers said:

-snip-

Yeah, I don't think the memory and such is an issue for this sort of thing. Maybe by paging from disk he meant you wouldn't have to unload A and load B, or something? Still not relevant to this, I agree.

 

Could it maybe be that we're just a bit stuck in the CPU paradigm? We have invested a lot of effort in refining these CPU techniques, so maybe because the really big projects like movies don't see much gain, nobody works on it? The second point seems the most important: he paints the picture that it's not as parallelizable as it may seem, which I don't have the knowledge to judge.

 

Hopefully the next generation of GPU technology with ray tracing will give us new toys :D Nvidia's RTX stuff makes it sound like it's already perfectly doable on GPUs, just not yet in real time, which a render wouldn't really care about anyway.

 

[Edit] Here's a more relevant response from a Fusion employee from 2 years ago: https://forums.autodesk.com/t5/fusion-360-design-validate/gpu-rendering/td-p/6630826

 

Quote

I consulted with John Hutchinson who is our Chief Visualization Platform Architect on this subject and here was his response.  (John can always chime in if he wants to expand on the conversation).

"Our renderer is currently CPU only. GPUs have the benefit of much higher parallelism, (10 - 50x more cores), but many limitations (scene size, memory bandwidth, practical core utilization, energy cost, limited availability in the cloud). In practice, the CPU approach provides greater flexibility, consistency across platforms and reasonable performance across a broader spectrum of scenes. We periodically benchmark the renderer against other CPU and GPU implementations and we are very competitive. A recent algorithmic change has results in 2-3X performance improvement for many Fusion scenes. Some of limitations on GPUs will relax (memory size, memory bandwidth, cloud availability). As the landscape changes we continue to evaluate this choice."

 


1 hour ago, gabrielcarvfer said:

I don't know about you, but I also think they leave out GPU rendering on purpose, to sell their cloud-based rendering. Other, more expensive products (like Maya and 3ds Max, also from Autodesk) do include GPU rendering.

That could definitely be it as well

