Blog Comments posted by TopHatProductions115

  1. What if GPU rendering used ML/AI hardware (most likely an ASIC) to perform object recognition on a given scene, and allocated more processing units/VRAM to the more complex objects in it (to speed up rendering)? Years ago, I was considering that idea over traditional rendering methods (due to the very issues you mentioned). Or maybe combine that hypothetical technique with ray-casting or radiosity (as opposed to rasterisation)? Using a new method to divvy up the workload in a more logical manner could minimise the issues you mentioned in the last portion... (rough sketch at the end of this comment)

     

    P.S. Just took a quick look at this as well.
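
    A rough C++ sketch of the allocation idea (everything here is hypothetical: the per-object complexity score stands in for whatever the ML recogniser would output, and the core budget is made up):

    // Hypothetical sketch: split a fixed budget of processing units among
    // scene objects in proportion to a per-object complexity score. The
    // score would come from the ML/AI recognition pass described above;
    // here it is just a placeholder field.
    #include <cstdio>
    #include <vector>

    struct SceneObject {
        const char* name;
        float complexity;        // stand-in for the recogniser's output, 0..1
        int   coresAssigned = 0;
    };

    int main() {
        const int totalCores = 4096;   // assumed GPU budget (made up)
        std::vector<SceneObject> scene = {
            {"skybox", 0.05f}, {"terrain", 0.30f}, {"character", 0.65f}};

        float totalComplexity = 0.0f;
        for (const auto& obj : scene) totalComplexity += obj.complexity;

        // Proportional split: more complex objects get more units/VRAM.
        for (auto& obj : scene) {
            obj.coresAssigned =
                static_cast<int>(totalCores * obj.complexity / totalComplexity);
            std::printf("%s -> %d cores\n", obj.name, obj.coresAssigned);
        }
    }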

  2. Can threads running on the OS (that is, utilising the OS's built-in multitasking APIs, methods, etc.) be required to send an initial signal/interrupt when they become idle (no-op/halt loop), and a second signal when they leave the idle state? Then the CPU/OS could mark each thread with a flag/marker indicating its state; see the sketch below. Or maybe I'm wrong :|
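
    A user-space C++ illustration of that two-signal idea, assuming a plain atomic flag rather than any existing OS API (real kernels already track run/wait state internally in the scheduler; this just shows the marker mechanism):

    // Hypothetical sketch: the worker flags itself idle before its idle
    // wait (first signal) and busy again on wake (second signal), so a
    // monitor/scheduler can read the thread's state cheaply.
    #include <atomic>
    #include <chrono>
    #include <cstdio>
    #include <thread>

    std::atomic<bool> threadIdle{false};   // the "flag/marker" for the thread

    void worker() {
        for (int i = 0; i < 3; ++i) {
            std::this_thread::sleep_for(std::chrono::milliseconds(50));  // "work"
            threadIdle.store(true);                       // signal 1: going idle
            std::this_thread::sleep_for(std::chrono::milliseconds(100)); // idle loop
            threadIdle.store(false);                      // signal 2: leaving idle
        }
    }

    int main() {
        std::thread t(worker);
        for (int i = 0; i < 10; ++i) {      // monitor polls the marker
            std::printf("worker idle: %s\n", threadIdle.load() ? "yes" : "no");
            std::this_thread::sleep_for(std::chrono::milliseconds(40));
        }
        t.join();
    }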
