Pathtracing Vs. Raytracing: Disambiguation

As I've been browsing YouTube and Reddit lately, I've noticed something. As technologies such as Nvidia RTX and SonicEther's SEUS PTGI for Minecraft become more mainstream, people are getting bombarded with terminology they don't understand, then misinterpreting it and spreading misinformation. I'm not blaming the people who are reading about RTX or watching a YouTube video about SonicEther's latest Minecraft shaders. I'm blaming Nvidia, the YouTube channels, and the uninformed web journalists who just want ad revenue.

So I'm going to start at the very beginning: the definition of raytracing. What is it? Raytracing, simply put, is a mathematical procedure that calculates the intersection of a ray with a scene defined by solid geometry. Wait; that's not simply put at all! So let me clarify: what is a ray? A ray is basically like a line on a graph, but in 3 dimensions instead of 2. A useful way to think about a ray is like the "point-slope" line format, instead of "y = mx + b." You can usually assume that when you're dealing with raytracing, the "point" of the point-slope will be the position of the camera in 3D space. This point is called the "origin" of the ray. The "slope" of the ray is called its direction, and it determines, well, what direction the ray goes in. The origin and direction each have an X, Y, and Z component: the origin is a position in the 3D world, and the direction is a vector pointing away from it. It's more likely that the term that confused you was "scene defined by solid geometry." A scene just means a collection of stuff that exists; it doesn't matter what it is. The solid geometry part means that there's a physical object there, like a rectangular prism, a sphere, or a Honda covered in pink... something. Basically, anything that you could touch in the real world is solid geometry.
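
To make that concrete, here's a minimal sketch in Python of a ray and a ray-sphere intersection test. The Ray, Sphere, and intersect names are just made up for illustration, not from any particular engine:

```python
import math
from dataclasses import dataclass

@dataclass
class Ray:
    origin: tuple     # (x, y, z) where the ray starts, e.g. the camera position
    direction: tuple  # (x, y, z) unit vector: the "slope" of the ray

@dataclass
class Sphere:
    center: tuple
    radius: float

def intersect(ray, sphere):
    """Return the distance along the ray to the nearest hit, or None on a miss."""
    ox, oy, oz = ray.origin
    dx, dy, dz = ray.direction
    cx, cy, cz = sphere.center
    lx, ly, lz = ox - cx, oy - cy, oz - cz  # vector from sphere center to ray origin
    # Solve |origin + t*direction - center|^2 = radius^2 for t (a quadratic)
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (dx * lx + dy * ly + dz * lz)
    c = lx * lx + ly * ly + lz * lz - sphere.radius ** 2
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                         # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2.0 * a)  # nearest of the two possible hits
    return t if t > 0.0 else None           # ignore hits behind the origin

# A camera ray fired straight down the Z axis at a sphere 5 units away
print(intersect(Ray((0, 0, 0), (0, 0, 1)), Sphere((0, 0, 5), 1.0)))  # ~4.0
```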

If raytracing is that simple, why hasn't anyone used it until now? Well, they have. Pretty much any three-dimensional game you can think of probably uses some kind of raytracing to render its graphics. Games built on RAGE, Unreal Engine, Unity, Game Maker, and loads of other engines all use some form of raytracing. How does that work, then? This is really complicated and has been discussed by a lot of smart people who know more than I do, so I'm just going to cover the basics. Until recently, almost all game engines relied on using math to sort of guess what color each pixel should be. The simplest example of this is shadows. When the first ray, or camera ray, is cast into a scene, you can figure out what color the object that the ray hit is. But really, that's going to look terrible and we all know it: everything would just look like weird colorful blobs on the screen. That won't do. What computer graphics programmers figured out is that you can cast a second ray, but instead of using it to figure out what color to put on the screen, you use it to figure out whether part of an object has a direct line of sight to a light source. For example, let's say your camera ray hits a spot on the ground next to a tree. You can decide to color that part of the scene green if the tree is surrounded by grass, or brown if it's surrounded by dirt. Next, you take a second ray and aim it from the spot under the tree up toward the sun. If that ray hits something on the way to the sun, you can say the spot under the tree is in shadow, and decrease the brightness of the pixel for which the camera ray was cast. There are a lot more tricks like this for reflections, smoke, fog, water, and anything else that might need to be in a game. What's interesting is that we're already calculating, or "casting," multiple rays instead of just one in order to figure out how the light from a lamp or the sun interacts with a scene. There are also ways other than raytracing to figure out how a 3D object will look on your screen, one of the most prominent of which is called rasterization. I'm not going to explain rasterization, because it's off topic and it would waste your time. So now you might be asking... what if we tried more power?
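
Before we get to that, here's the shadow trick above as a minimal sketch, reusing the made-up Ray, Sphere, and intersect helpers from earlier. A real engine does the same thing, just vastly faster:

```python
import math

EPSILON = 1e-4  # nudge the shadow ray off the surface so it doesn't re-hit itself

def shade(hit_point, surface_color, sun_position, scene_objects):
    """Dim a surface point if anything blocks its line of sight to the sun."""
    # Direction from the hit point toward the sun
    sx, sy, sz = (sun_position[i] - hit_point[i] for i in range(3))
    dist_to_sun = math.sqrt(sx * sx + sy * sy + sz * sz)
    direction = (sx / dist_to_sun, sy / dist_to_sun, sz / dist_to_sun)
    origin = tuple(hit_point[i] + EPSILON * direction[i] for i in range(3))
    shadow_ray = Ray(origin, direction)

    # If any object sits between the point and the sun, the point is in shadow
    for obj in scene_objects:
        t = intersect(shadow_ray, obj)
        if t is not None and t < dist_to_sun:
            return tuple(0.3 * c for c in surface_color)  # dim the pixel
    return surface_color  # clear line of sight: keep the full brightness
```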

Pathtracing. It's the holy grail of computer graphics: it features mind-bending math, needs ridiculous computers to render it, and... looks almost indistinguishable from real life. This is what RTX and SEUS PTGI are doing, and while it uses raytracing, it's not just raytracing. So what the hell is it then? Pathtracing is raytracing, but someone thought it would be nice to throw the guessing part out the window. With pathtracing, instead of just simulating 5 or 6 light rays to get reflections and shadows and whatnot, you simulate thousands, or even millions, per pixel. Pathtracing simulates what would happen if you took the solid geometry of a scene and put it into the real world with a camera, by trying to simulate every light ray that would hit the camera in that scene. Pathtracing looks at all the objects in the scene and uses even more math to figure out how light rays will reflect off each object. For example, a mirror might reflect the light straight off, but a white wall will scatter it in all different directions. The simulation of all the light rays in a scene is called "global illumination." For a nice video explaining the basics of pathtracing, watch Disney's "Practical Guide to Path Tracing."
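
If you prefer code to video, here's a heavily simplified sketch of the idea: follow one random path per sample and average thousands of samples per pixel. The helpers trace_to_nearest_surface and random_bounce_direction are hypothetical placeholders (and it reuses the Ray type from earlier); a real path tracer also weights each bounce by the surface's material properties instead of just multiplying colors:

```python
import random

MAX_BOUNCES = 5
SAMPLES_PER_PIXEL = 1000  # thousands of random paths averaged per pixel

def trace_path(ray, scene, depth=0):
    """Follow one light path backwards from the camera until it escapes or hits a light."""
    if depth > MAX_BOUNCES:
        return (0.0, 0.0, 0.0)
    hit = trace_to_nearest_surface(ray, scene)  # hypothetical: nearest intersection info
    if hit is None:
        return (0.0, 0.0, 0.0)                  # the path left the scene: no light
    if hit.is_light:
        return hit.emission                     # the path reached a light source
    # Bounce: a mirror reflects in one direction, a white wall scatters all over
    new_ray = Ray(hit.point, random_bounce_direction(hit.normal, hit.material))
    bounced = trace_path(new_ray, scene, depth + 1)
    return tuple(hit.color[i] * bounced[i] for i in range(3))

def render_pixel(generate_camera_ray, scene):
    """Average many noisy path samples into one pixel color."""
    total = [0.0, 0.0, 0.0]
    for _ in range(SAMPLES_PER_PIXEL):
        sample = trace_path(generate_camera_ray(random.random(), random.random()), scene)
        total = [t + s for t, s in zip(total, sample)]
    return tuple(t / SAMPLES_PER_PIXEL for t in total)
```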

So now I'll go over the two examples I brought up, Nvidia RTX and SEUS PTGI. First, RTX, because it's a little more straightforward (the concept, not the chip development; I'm not trying to insult Nvidia here). Instead of simulating every single light ray in a scene, which can sometimes leave you at, say, 20 hours per frame instead of 60 frames per second, RTX simulates only a few light rays per pixel (on the order of 4), then uses some clever hardware to make really good guesses about what the scene would look like if you kept calculating the light, to give the most realistic image possible. Nvidia's RTX cards are specially built to make these guesses, which make the scene look much more realistic, though not quite as good as Pixar's or Illumination's animation software.
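
I don't know exactly what Nvidia's denoiser does internally, but the idea is roughly "render with very few samples per pixel, then filter the noise away." Here's a toy stand-in for that, using a plain neighbourhood blur where the real thing uses dedicated hardware and much smarter filters (it reuses the hypothetical trace_path from the sketch above; camera_ray_for is also made up):

```python
def render_noisy_frame(width, height, scene, camera_ray_for, samples=4):
    """Low-sample render: fast, but every pixel is a noisy estimate."""
    frame = [[None] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            total = [0.0, 0.0, 0.0]
            for _ in range(samples):  # only a handful of paths per pixel
                c = trace_path(camera_ray_for(x, y), scene)
                total = [t + ci for t, ci in zip(total, c)]
            frame[y][x] = tuple(t / samples for t in total)
    return frame

def denoise(frame):
    """Toy denoiser: average each pixel with its neighbours to hide the noise."""
    h, w = len(frame), len(frame[0])
    out = [row[:] for row in frame]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            block = [frame[y + dy][x + dx] for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = tuple(sum(p[i] for p in block) / 9.0 for i in range(3))
    return out
```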

SEUS PTGI. Sonic Ether has been writing shaders for Minecraft for quite some time, using guessing tricks. More recently he started using path traced global illumination to make his "Sonic Ether's Unbelievable Shaders - Path Traced Global Illumination" shaders for Minecraft. Cody (Sonic Ether) uses a few of the same tricks that Nvidia uses, but he also uses some of his own. On his YouTube channel he has a great video that explains the most basic parts of what he's doing. His shaders use methods that, while very realistic, are technically not full global illumination. Why? Sonic Ether came up with a technique that lets the computer draw a frame over a period of time, like Pixar or another animation studio would. The difference is that he figured out a way to move the camera and move objects in the scene while it's drawing. This means that while his new Minecraft shaders are still path traced, they aren't exactly what you would see in the real world, because he has to use some shortcuts to make it run on a regular graphics card at a playable framerate. And for clarification: on Cody's website, he specifies that his PTGI Minecraft shaders DO NOT use RTX or require RTX graphics cards to run.
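
I don't know Cody's exact implementation, but the "draw a frame over a period of time" trick I mentioned is usually some form of temporal accumulation: blend each new noisy frame into a running history, and throw the history away (or, in real shaders, reproject it) when the camera moves. A rough sketch of that idea:

```python
class TemporalAccumulator:
    """Blend noisy frames over time so the image keeps refining while you play."""
    def __init__(self):
        self.history = None        # running average of previous frames
        self.frames_blended = 0

    def accumulate(self, new_frame, camera_moved):
        if self.history is None or camera_moved:
            # The old average no longer matches what's on screen, so this toy
            # version restarts; real shaders reproject the history instead.
            self.history = [row[:] for row in new_frame]
            self.frames_blended = 1
            return self.history
        self.frames_blended += 1
        w = 1.0 / self.frames_blended  # each new frame counts a little less
        self.history = [
            [tuple((1.0 - w) * old[i] + w * new[i] for i in range(3))
             for old, new in zip(old_row, new_row)]
            for old_row, new_row in zip(self.history, new_frame)
        ]
        return self.history  # noise fades as more frames get blended in
```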

I hope this clarified a few things, and next time you see an argument on Reddit, in a YouTube comment section, on Twitter, or anywhere else, you can kindly correct someone.


4 minutes ago, Real_NC said:

As technologies such as Nvidia RTX and SonicEther's SEUS PTGI for Minecraft become more mainstream, people are getting bombarded with terminology they don't understand...

That's useful, but most of the people here do know what ray-tracing is. But I did not know that pathtracing and the new MC shaders were a thing.


I do apologize if there was a better subforum to post this in; I didn't really think it fit in GPUs or any of the other subforums, but I'm not very experienced around here.


Just now, ManosMax13 said:

That's useful, but most of the people here do know what ray-tracing is. But I did not know that pathtracing and the new MC shaders were a thing.

I've seen people on this forum and in the LTT discord who don't know what it is. I figured that more people would know here than in other places, but I just wanted to make sure to explain it before trying to explain RTX and SEUS PTGI.


Also, SEUS PTGI looks REALLY nice! However, there are some downsides: some places are so bright it's almost impossible to see what block you're looking at, and torches bathe everything in an orange-yellow hue. And then there's the fact that it gobbles up frames like Cookie Monster eats cookies.


7 minutes ago, inteli7.Ti said:

Also, SEUS PTGI looks REALLY nice! However, there are some downsides: some places are so bright it's almost impossible to see what block you're looking at, and torches bathe everything in an orange-yellow hue. And then there's the fact that it gobbles up frames like Cookie Monster eats cookies.

I would have to agree with you on the looking-really-nice part. As for the torches, find a resource pack where the torches are whiter and it should fix your problem, given his original method of calculating lighting. And make sure to bring lots of cookies.


49 minutes ago, Real_NC said:

On Cody's website, he specifies that his PTGI Minecraft shaders DO NOT use RTX or require RTX graphics cards to run.

I feel like this is the biggest confusion when it comes to ray tracing. People think you need RTX hardware or hardware accelerated ray tracing to run anything. And I do partially blame NVIDIA for this since Jensen Huang likely said at some point "You can only get this on NVIDIA RTX hardware" (well, maybe strictly speaking the RTX features).

 

I also think the general public doesn't understand that software can do (almost) anything hardware can. It'll just take a lot longer to do it.

 

43 minutes ago, ManosMax13 said:

That's useful, but most of the people here do know what ray-tracing is. But I did not know that pathtracing and the new MC shaders were a thing.

I like to think "path tracing" is more of a technical term, and the industry says "ray tracing" regardless when speaking to the general public, because it's a term they're familiar with. It's similar with things like "anti-aliasing" and "ambient occlusion", although in those cases the catch-all term happens to be the more "technical" one.


I would honestly say path tracing adds ambiguity to the subject... at least in layman's terms. For a layman, path tracing is simply brute-force ray tracing: instead of a single ray, you're adding in thousands just to produce a coherent image, all reflecting off surrounding surfaces and picking up their color until each ray either hits nothing and is disregarded, or hits a light source, which confirms the path exists and its contribution is used.

 

 

Side note.

 

I still would limit usage of path tracing, as its biggest benefit is GI but at a massive cost even with dedicated hardware. Some things can easily be faked and therefore heavily optimized. E.g. for an object next to a coloured wall, a simple weak light can create a very convincing GI effect without going through the massive computational cost of PT. That light, in a deferred renderer or even in newer forward renderers, can be next to free computationally.
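
To illustrate what I mean, a toy sketch (the PointLight type here is made up; any engine's point-light API would do): the bounce off a coloured wall can be faked with a weak, wall-tinted light placed just in front of the wall, which costs almost nothing compared to tracing the real paths:

```python
from dataclasses import dataclass

@dataclass
class PointLight:          # made-up type; substitute your engine's point light
    position: tuple        # (x, y, z)
    color: tuple           # (r, g, b)
    intensity: float

def fake_wall_bounce(wall_position, wall_normal, wall_color, sun_intensity):
    """Place a weak, wall-tinted light just in front of the wall to imitate its bounce."""
    offset = tuple(wall_position[i] + 0.2 * wall_normal[i] for i in range(3))
    # The "bounce" carries the wall's colour and only a small fraction of the sun's energy
    return PointLight(position=offset, color=wall_color, intensity=0.1 * sun_intensity)

# Objects near the wall now pick up a red tint, as if light had really bounced off it
bounce = fake_wall_bounce((0.0, 1.0, -3.0), (0.0, 0.0, 1.0), (0.9, 0.2, 0.2), 10.0)
```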

 


3 hours ago, trag1c said:

I would honestly say path tracing adds ambiguity to the subject... at least in layman's terms. For a layman, path tracing is simply brute-force ray tracing: instead of a single ray, you're adding in thousands just to produce a coherent image, all reflecting off surrounding surfaces and picking up their color until each ray either hits nothing and is disregarded, or hits a light source, which confirms the path exists and its contribution is used.

Completely disregard the light ray... ahh, the simplicity of naive Monte Carlo pathtracing.


4 hours ago, Mira Yurizaki said:

I feel like this is the biggest confusion when it comes to ray tracing. People think you need RTX hardware or hardware accelerated ray tracing to run anything. And I do partially blame NVIDIA for this since Jensen Huang likely said at some point "You can only get this on NVIDIA RTX hardware" (well, maybe strictly speaking the RTX features).

I definitely agree with you there; path tracing is more of a technical term. But I also think it would be nice if it weren't anymore, because now that path tracing is in such widespread use, I think people could learn one more word. You could run a full naive Monte Carlo path tracer on a computer from 1975, easily, but it wouldn't be very useful. I think people don't really understand that what RTX is doing is taking the cleverest solutions to complicated problems and making chips that compute those solutions as fast as possible.


4 hours ago, trag1c said:

I would honestly say path tracing adds ambiguity to the subject... at least in layman's terms. For a layman, path tracing is simply brute-force ray tracing: instead of a single ray, you're adding in thousands just to produce a coherent image, all reflecting off surrounding surfaces and picking up their color until each ray either hits nothing and is disregarded, or hits a light source, which confirms the path exists and its contribution is used.

If we're talking about ray tracing, I've never heard it described as using a single ray. It's always been described, as far as I can recall, as something similar to path tracing; it's just never been called path tracing. The only time I've heard single rays being used is to describe ray casting.

 

4 hours ago, trag1c said:

Side note.

 

I still would limit usage of path tracing, as its biggest benefit is GI but at a massive cost even with dedicated hardware. Some things can easily be faked and therefore heavily optimized. E.g. for an object next to a coloured wall, a simple weak light can create a very convincing GI effect without going through the massive computational cost of PT. That light, in a deferred renderer or even in newer forward renderers, can be next to free computationally.

But there are still limitations to raster rendering. The biggest one is that raster rendering drops a lot of information about the scene for the sake of efficiency. The other is the order in which things are rendered, which can cause graphical inaccuracies that, unless you want to track down every corner case, you're never going to fix. The biggest one to me is related to global illumination: parts of the world get lit up somehow without an obvious source of light, like in these pictures:

[two attached screenshots showing parts of a scene lit without an obvious light source]

 

This tells me there's something wrong with the way the renderer is lighting the scene. It may not be considering surfaces beyond the one that's being lit. You could go back and fix the renderer, but how many other corner cases might there be that can produce jarring artifacts?

 

This is also the case with most non-cube-map-based reflections. Screen space reflections look good a lot of the time, but they fall flat rather quickly. This is especially the case when I see something trying to do a screen space reflection and it ends up reflecting something impossible, like when the camera is facing the character's face and the reflection in the water below and behind them is showing their face.
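
To illustrate why SSR falls flat: the reflection can only sample pixels that are already on screen, so anything off-screen or hidden simply can't show up in the reflection. A toy sketch of the ray-march idea (names made up for the example, not any engine's actual API):

```python
def screen_space_reflection(start_x, start_y, start_depth, step,
                            depth_buffer, color_buffer, max_steps=64):
    """March a reflected ray across the screen; return a colour, or None if it leaves it."""
    x, y, depth = float(start_x), float(start_y), start_depth
    height, width = len(color_buffer), len(color_buffer[0])
    for _ in range(max_steps):
        x, y, depth = x + step[0], y + step[1], depth + step[2]
        if not (0 <= int(x) < width and 0 <= int(y) < height):
            return None  # ray left the screen: whatever it should reflect was never drawn
        if depth_buffer[int(y)][int(x)] < depth:
            return color_buffer[int(y)][int(x)]  # ray passed behind a visible surface: "hit"
    return None  # nothing found; fall back to a cube map or show no reflection
```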

 

The other thing is that our GPUs have to be generic enough to handle any sort of computation that rasterized rendering needs. This makes it impossible to accelerate any one algorithm that rasterized rendering uses. But with ray tracing, since you can get all of the lighting information you need from how the ray interacts with the world, what algorithms you use is limited, meaning you can afford to build fixed-function accelerators that work solely on that. So while rasterized rendering can only benefit from software optimizations (which can only go so far), ray tracing can benefit from both software and hardware optimizations. Unless we go back to fixed function hardware a la pre GeForce 3 days, which nobody seems to be in a rush to do, rasterized rendering is about as good as it's going to get. Plus developers are still generating "best practices" and whatnot to get the most out of ray tracing, so there's room for improvement.

 

And sure, ray tracing (or rather path tracing) is computationally expensive, but I'd like to see a raster renderer's performance when it isn't allowed to drop information, like clipping everything that can't be seen by the camera.


9 hours ago, Mira Yurizaki said:

If we're talking about ray tracing, I've never heard it described as using a single ray. It's always been described, as far as I can recall, as something similar to path tracing; it's just never been called path tracing. The only time I've heard single rays being used is to describe ray casting.

It is supposed to be defined as a single ray, and it's pretty weird that the definition of that doesn't really exist any more. A rasterizer is still doing what a raytracer does, it just has a less elegant way of doing it. The problem is that once people are using rasterizers, it's harder to build in decent lighting approximations, so they just do everything in screen space instead. It doesn't make a whole lot of sense to me, to be honest. It's totally possible to use a rasterizing function to trace secondary rays for shadows and such, and get much better results than shadow mapping or screen space techniques would give.


4 minutes ago, Real_NC said:

It is supposed to be defined as a single ray, and it's pretty weird that the definition of that doesn't really exist any more. 

Because it wasn't actually the definition to begin with. Ray tracing is a family of algorithms. It doesn't describe a single specific one.

 

It's like saying anti-aliasing is a specific algorithm when there's a half dozen different methods of it, all of them achieving more or less the same thing.


1 minute ago, Mira Yurizaki said:

Because it wasn't actually the definition to begin with. Ray tracing is a family of algorithms. It doesn't describe a single specific one.

 

It's like saying anti-aliasing is a specific algorithm when there's a half dozen different methods of it, all of them achieving more or less the same thing.

It's a definition used primarily by CG programmers. So to pretty much everyone, raytracing refers to anything that involves tracing at least one ray, probably many more. Minecraft uses (or used) a raytracer with only one ray at one point, and you would still call it that, although some might refer to it as a 2-layer octree voxel traverser.


10 minutes ago, Real_NC said:

It's a definition used primarily by CG programmers. So to pretty much everyone, raytracing refers to anything that involves tracing at least one ray, probably many more.

I'm having a hard time believing that "ray tracing" itself refers to a specific algorithm. Everywhere else I'm seeing it, even in professional circles, they say ray tracing even though the methods specifically point to path tracing.

 

I'll be more willing to believe it if the author of the original algorithm specifically said something.


19 minutes ago, Mira Yurizaki said:

I'm having a hard time believing that "ray tracing" itself refers to a specific algorithm. Everywhere else I'm seeing it, even in professional circles, they say ray tracing even though the methods specifically point to path tracing.

 

I'll be more willing to believe it if the author of the original algorithm specifically said something.

It doesn't. Download POVRay and play with it.

 

How most 3D games are designed is that the models are self-lit; they are not ray cast. If something is ray cast you will see these effects:

 

- Light reflects off reflective surfaces like mirrors as well as matte surfaces, and there's no pure black anywhere except in unlit locations

 

- Light transfers through items that are glass, water, plastic, etc. Most games don't even bother with anything resembling a real window; it's just an opaque wall unless you can break it.

 

- Effects of sunlight, like crepuscular rays, aren't possible to do in a game because there is no object to self-light. It's computationally expensive and has to be faked.

 

- Most "underwater" lighting effects in computer games adjust the global lighting on depth rather than ray cast into the water because it's immensely computationally expensive. 

 

There are two things under ray-tracing, caustics and radiosity, which are responsible for the glass/water effects. Radiosity is what allows shadows to look like shadows from indirect lighting.

 

In a 3D game, you want the highest frame rate possible, so this comes at the expense of faking everything. The first to go are depth-of-field effects: instead of rendering out to a vanishing point, the engine renders out to where the smallest mipmapped texture is viable and then stops rendering anything beyond that. Some games actually use smaller polygon models for objects that are beyond the DOF effect so that things don't pop into view. Many 3D games also use very small maps to keep the geometry small in system RAM and send as little to the GPU as possible. The next to go are shadows: play ANY game and look around to see if there are shadows for things like trees, grass, or debris. You'll probably find they're absent, even if running on the highest settings. The next to go are lighting surface effects: turn HBAO+ off and watch the lighting effects disappear, or dial it down and you'll see things revert to just being evenly lit.

 

Prior to DX12/Vulkan, it also wasn't possible to configure the render pipeline to do raytracing anyway. If you look at nVidia's tutorial here:

 

https://developer.nvidia.com/rtx/raytracing/vkray

 

You'll note that the raytracing is an entirely separate extension. Same with DirectX12, it's an extension.

 

 

When it comes to film and TV, the terminology is used differently:

Path tracing is ray-tracing. It works exactly as described (emitting rays from the camera) in raytracing software like POVRay, and in the nVidia developer documentation.

 

With certain kinds of game engines, confusing terminology gets used. That Minecraft raytracing mod? It certainly looks convincing, but what is it actually doing? Well, not path tracing as described in the Disney short. It doesn't work on AMD cards; why is that? It's more fakery, actually. "Path Traced Global Illumination", what does that sound like? That sounds like radiosity and caustics, features of ray-tracing. That certainly can be done with shaders; "shade" is right there in the name. But you actually need to replace the textures in the game to get the full effect, because the existing textures are designed for the self-lighting.

 

And you'll note, it tanks the performance of Minecraft. So what does the RTX stuff do?

https://www.nvidia.com/en-us/geforce/news/geforce-gtx-dxr-ray-tracing-available-now/

 

They're physically separate logic blocks in the GPU, what does this remind me of? Hardware T&L.

 

Remember back in 1999 when nVidia released "hardware transform, lighting and clipping", which was later just called "Hardware T&L"? That's because back then we were dealing with fixed-pipeline 3D graphics, not programmable. This was the core feature of DirectX 7. It's also what put 3dfx out of business. By making the hardware more general purpose over time and giving developers lower-level access to it, more can be done without the high-level API getting in the way. As it is, setting up Vulkan, Metal or DX12 is very, very different from the previous OpenGL/ES and DX11 methods. So making use of ray-tracing hardware requires a rethink of how the game engine operates, and I would not expect it to be retrofitted to any existing game engine. Sure, you can retrofit an existing game engine to have DX12 or Vulkan support, but that isn't going to automatically open up the raytracing door.

 

 

 

 

 


@Kisai I think something's wrong here:

Quote

Path tracing, is Ray-tracing. It works exactly as described (emitting rays from the camera) in Raytracing software like PovRay, and in the nVidia developer documentation.

While they may be related, they are not the same. Also, in reference to this:
 

Quote

 

Yeah, I've seen that before. Let's put a stop to the confusion and clean things up a bit. How do you like the following terminology:

  • Ray Tracing 
  • Path Tracing (general - as a subcategory/evolution of Ray Tracing)
  • Path Tracing (cinematic - as a reversal of Ray Tracing)

Like that so far?

 

@Mira Yurizaki I concur - Ray Tracing and Rasterisation sound more like general rendering methods than specific algorithms. And cinematic Path Tracing may count as one as well.


29 minutes ago, TopHatProductions115 said:

 

Yeah, I've seen that before. Let's put a stop to the confusion and clean things up a bit. How do you like the following terminology:

  • Ray Tracing 
  • Path Tracing (general - as a subcategory/evolution of Ray Tracing)
  • Path Tracing (cinematic - as a reversal of Ray Tracing)

Like that so far?

 

@Mira Yurizaki I concur - Ray Tracing and Rasterisation sound more like general rendering methods than specific algorithms. And cinematic Path Tracing may count as one as well.

If we're being specific, "ray tracing"'s assumption is to cast rays from all light sources; there is no global illumination, nor self-illumination, except from those objects. Path tracing is the reverse, casting rays from the camera, so it doesn't calculate anything outside the camera's view.

 

Rasterization means "drawing"; both a computer monitor and an inkjet/laser printer are considered raster devices. Old-school plotters actually used pens and were instead vector devices.

 

If you draw polygons, you're drawing vectors. If you draw bitmaps, you're drawing raster. That's all there is to it.

 

Your average 3D game's rendering is self-lit, and as I said earlier, existing "ray tracing" done with shaders covers just a small subset of ray-tracing capability, and it requires a high-end device because it will certainly tank the performance.


19 hours ago, Kisai said:

How most 3D games are designed is that the models are self-lit; they are not ray cast...

It is nice to see that there are people here who are knowledgeable about ray tracing.

 

I have been ray tracing for a very long time but I consider myself a user with limited technical knowledge. I was wondering if you might answer a few questions?

 

The first raytracer I used was in the 80s. It was called Turbo Silver. The program later evolved into a raytracer called Imagine. Do you know what type of ray tracing these programs used back then?

 

The only program I used to create 3D content for games was 3D Max, and I only did this between 2001 and 2005. When you say "self lit", do you mean ambient lighting, or was there something in the process of converting it to a game file that added the property?

 

To show windows in game objects I used alpha channels that were part of the texture process in Photoshop. When I created a window to be ray traced, it was a separate object with its own properties that usually used double-sided polygons and had to have the same number of surfaces as in real life to reflect properly. Is that the sort of thing a new game engine would have to take into consideration, or are there other things?

 

I have used Nvidia ray tracing technologies for a long time. When I see RTX I am reminded of iray. Does RTX use that type of rendering technology?

Thanks.


13 hours ago, jones177 said:

 

I have been ray tracing for a very long time but I consider myself a user with limited technical knowledge. I was wondering if you might answer a few questions...

I have not used those older ray tracing programs, and my best guess is that until about 1993, everything that could render a 3D computer image may have been called a ray-tracer, as you needed a Cray supercomputer to do real-time ray-tracing at the time.

 

RTX has "RT" cores which are basically just another fixed logic block. Think of it like the video encoder (h264/h.265) block that exists even on the low-end PC's. It's there because it boosts playback/recording of video if the software supports it. If the software doesn't support it, then it goes unused. Period.

 

To use the RT cores right now, you have to use software libraries built by nVidia (e.g. GameWorks/OptiX); you might be able to use them with Vulkan or DirectX 12 through extensions, but those are extensions specific to nVidia at present.

[image: NVIDIA RTX platform diagram]

https://developer.nvidia.com/gameworks-ray-tracing

 

Rasterization is still just calculating each triangle individually, hence each triangle is self-lit based on the materials the model was defined to have and the shader programs specified by the game engine. This also means you need tricks to account for other objects, including other triangles in the same model.

 

Like, one way of describing how rasterization works vs ray-tracing or path-tracing is by comparing a paint-by-numbers painting with a photograph (and this is an over-simplification). The rasterization process knows that the triangle with vertices {x,y,z},{x,y,z},{x,y,z} has a texture from texture 0 at specific u,v coordinates, and should be lit using the shader programs. In ray-tracing, the computer first has to figure out the ambient lighting, so it fires rays or photons from locations in the scene deemed light sources until they are absorbed or leave the scene. So all the objects are "not lit" during that phase (see the path-tracing video); only objects in 3D space that have reflective surfaces will bounce light, and objects that are close to each other will bounce light off each other. You get more accurate ray-tracing by using more rays/photons in the scene, so if I'm guessing correctly, each "RT" core handles the tracing of one ray or photon. So in effect, the closest analogy for the RT cores is using a DSLR at a higher ISO: more rays = a higher ISO can be used.

 

My guess right now is that when you turn on the RT cores, it acts like a specialized shader program that injects itself into the rendering pipeline like a compute-like core. 

https://devblogs.microsoft.com/directx/announcing-microsoft-directx-raytracing/ 

 

While we're on this topic, the same thing should apply to audio; computer games have pretty much given up on audio beyond stereo, and you rarely find a game that actually makes use of surround sound. One reason for that is the lack of a true cross-platform audio DSP API (OpenAL is sorely lacking, when we need something more like Vulkan for audio). The other reason is that this has to be integrated onto the GPU, since while we're talking about bouncing light rays, the same should be done with audio sources in the scene. And since the GPU is ultimately connected to the monitor via DP/HDMI/USB-C, there is no reason to send it to a separate DSP on the motherboard.

 

So while I think we will see some progress in ray-tracing-like behavior come to all GPU cards, adoption of the RT cores in games might not be seen for some time (note how Unreal Engine and Unity are listed). Everyone is still licking their wounds from jumping on the VR bandwagon that fell in the river.

 

 
