With NVIDIA's RTX cards out and the company pushing ray tracing, I figured I'd have a look around at what I could find in the graphics community, through blog posts and whatnot, about ray tracing itself. Interacting with the community, it seems like there are some misunderstandings and perhaps a warped interpretation of what's going on. So this post is a random assortment of thoughts on what I've seen others say about this topic and what my input is.
Ray tracing describes a type of algorithm, but it's not necessarily a specific algorithm
The first thing I noticed looking through the literature is that what's called "ray tracing" can be vague. Does it describe a specific algorithm, such as heapsort or the fast inverse square root, or does it describe a class of algorithms, like sorting or searching? Or, in another way of thinking, does ray tracing describe something like "storage device," or does it describe something like "NAND-based, SATA solid state drive?"
As far as the usage of the term goes, I'm led to believe that ray tracing describes a type of algorithm. That is, the basic algorithm shoots out some ray that mimics how a photon travels, then traces it along some path and sees how it interacts with the world. To that end, I've found several forms of ray tracing that exist:
Ray Casting: This is the most basic version of ray tracing, where the first thing the ray intersects is what the final output is based on. One could argue this is the basic step of ray tracing in and of itself.
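As a toy sketch of that idea, here's what ray casting against a scene of spheres might look like. All of the names here (`intersect_sphere`, `cast_ray`) are hypothetical helpers, not any particular engine's API, and the scene is reduced to colored spheres for brevity:

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Return the distance t along the ray to the nearest sphere hit, or None.

    Solves |origin + t*direction - center|^2 = radius^2 for t, assuming
    direction is normalized (so the quadratic's leading coefficient is 1).
    """
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None                      # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0     # nearest root; ignores rays starting inside
    return t if t > 0 else None

def cast_ray(origin, direction, spheres):
    """Ray casting: the first (closest) thing the ray intersects decides the output."""
    closest, color = None, (0, 0, 0)     # background color when nothing is hit
    for center, radius, sphere_color in spheres:
        t = intersect_sphere(origin, direction, center, radius)
        if t is not None and (closest is None or t < closest):
            closest, color = t, sphere_color
    return color
```

A real renderer would run this once per pixel, building each ray's direction from the camera; here the closest hit's flat color stands in for "what the final output is based on."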
Ray Marching: In the most common implementation, the ray is a path generated by spheres that originate at some point. At the first point, a sphere grows until it touches something, then the next point of the ray is placed at the edge of that sphere in the ray's direction. From there another sphere grows until it touches something, a new point is created at its edge in the direction of the ray, and so on. The surface is considered "hit" when the sphere's radius becomes small enough.
(Taken from http://jamie-wong.com/2016/07/15/ray-marching-signed-distance-functions/)
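The growing-sphere procedure above can be sketched with a signed distance function (SDF), which returns exactly that safe sphere radius at any point. This is a minimal sketch, assuming a normalized ray direction and a hypothetical `unit_sphere_sdf` scene; the step/epsilon constants are arbitrary illustration values:

```python
import math

def sphere_trace(origin, direction, sdf, max_steps=128, hit_eps=1e-4, max_dist=100.0):
    """March along the ray. At each point the SDF gives the radius of the
    largest sphere guaranteed to be empty, so we can safely step that far."""
    t = 0.0
    for _ in range(max_steps):
        point = tuple(o + t * d for o, d in zip(origin, direction))
        dist = sdf(point)          # distance to the nearest surface
        if dist < hit_eps:         # "hit" once the sphere is small enough
            return t
        t += dist                  # next point: the sphere's edge along the ray
        if t > max_dist:
            break                  # wandered off into empty space
    return None

# Example SDF: a unit sphere at the origin (a stand-in scene for illustration).
def unit_sphere_sdf(p):
    return math.sqrt(sum(c * c for c in p)) - 1.0
```

Note the marcher never computes an analytic intersection; it only ever evaluates distances, which is why this technique pairs so well with scenes defined purely by SDFs.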
Path Tracing: When someone talks about ray tracing without any other context, this is usually the algorithm they're referring to. Path tracing attempts to trace the path of the ray from the camera to a light source. On top of this, each sample point uses a ray that's pointed in a random direction. The idea is that the more samples you use, the closer you get to the actual image.
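The "more samples, closer to the actual image" idea is just Monte Carlo integration. Here's a minimal sketch of that core, stripped of actual geometry: randomly-directed samples over the hemisphere, averaged, converge on the true lighting integral. With a constant incoming radiance of 1, the true answer is the integral of cos(θ) over the hemisphere, which is π. The function names are hypothetical, not from any real renderer:

```python
import math
import random

def sample_hemisphere(rng):
    """Uniformly sample a direction on the unit hemisphere around the z axis."""
    z = rng.random()                       # cos(theta), uniform in [0, 1)
    phi = 2.0 * math.pi * rng.random()
    r = math.sqrt(max(0.0, 1.0 - z * z))
    return (r * math.cos(phi), r * math.sin(phi), z)

def estimate_pixel(radiance, samples=10_000, seed=0):
    """Monte Carlo core of path tracing: average many randomly-directed samples,
    each weighting incoming radiance by cos(theta) and dividing by the pdf."""
    rng = random.Random(seed)
    pdf = 1.0 / (2.0 * math.pi)            # pdf of uniform hemisphere sampling
    total = 0.0
    for _ in range(samples):
        d = sample_hemisphere(rng)
        cos_theta = d[2]                   # normal is the z axis here
        total += radiance(d) * cos_theta / pdf
    return total / samples
```

A few samples give a noisy estimate; tens of thousands land close to π. In a full path tracer, `radiance` would itself recursively trace a bounced ray instead of returning a constant.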
Some industry folks may consider "ray tracing" itself to be the original algorithm devised by J. Turner Whitted, while "path tracing" is the algorithm described by Jim Kajiya.
Ray tracing also solves a problem with rasterization
What rasterized rendering does today is dump a ton of information about the scene before proceeding to work on it. One of the first things it dumps is all of the geometry the camera cannot see. Next, pieces of the scene are built up one by one. They are either added on top of each other right away, as in a forward renderer, or assembled off to the side to be combined later, as in a deferred renderer. (https://gamedevelopment.tutsplus.com/articles/forward-rendering-vs-deferred-rendering--gamedev-12342)
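The forward/deferred split can be sketched in a few lines. This is a deliberately crude model, with pixels as dictionary keys, lights as plain intensities, and no depth test (later-drawn objects simply overwrite earlier ones), but it shows the structural difference: forward shades while drawing, deferred stashes material data in a G-buffer and shades once at the end:

```python
def forward_render(objects, lights):
    """Forward rendering: shade each covered pixel against all lights
    as each object is drawn (lighting cost scales with objects x lights)."""
    framebuffer = {}
    for obj in objects:                        # objects drawn one by one
        for pixel in obj["pixels"]:
            framebuffer[pixel] = sum(lights) * obj["albedo"]
    return framebuffer

def deferred_render(objects, lights):
    """Deferred rendering: geometry pass writes material data to a G-buffer,
    then a single lighting pass shades each visible pixel exactly once."""
    gbuffer = {}
    for obj in objects:                        # geometry pass: no lighting yet
        for pixel in obj["pixels"]:
            gbuffer[pixel] = obj["albedo"]
    return {pixel: sum(lights) * albedo for pixel, albedo in gbuffer.items()}
```

Both produce the same image here; the difference that matters in practice is where the per-light work happens, which is exactly the "assembled on the side to be combined later" part of deferred rendering.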
Another issue with traditional rendering is the order in which things are rendered. This can lead to weird artifacts like light spilling onto areas where there's no obvious light source, like in these examples:
By using ray tracing, the rays bring back information about what's visible, what isn't visible, and how light can indirectly affect other objects in a realistic manner.
Real-time ray tracing isn't exactly a new thing for games
The funny thing is, ray tracing has already been used in games for some time. Guerrilla Games' Killzone Shadow Fall, for example, used ray tracing to do screen-space lighting (slide 84), mostly in reflections and what appears to be ambient occlusion.