
World's first fully ray traced game (mod) released

Humbug
21 minutes ago, CarlBar said:

It's been confirmed they're not using them. What AnandTech has to say about how RT works and what DICE are actually doing are two different things.

Every article I read about it, whether from DICE, from interested bloggers, or from interviews with DICE, says that they are using DXR for their ray tracing.

DXR is the part of the DX12 API that supports ray tracing; it runs on whatever implementation the device drivers provide. In Nvidia's case, that is RT cores when available.
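
For the curious, here is a minimal C++ sketch (assuming an already created ID3D12Device) of how an application asks the DX12 runtime whether DXR is available at all. Note that the query only reports a raytracing tier; it says nothing about whether the driver maps the work to RT cores or to a compute-based fallback.

    #include <d3d12.h>

    // Minimal sketch: query the runtime for DXR support on an existing device.
    bool SupportsDXR(ID3D12Device* device)
    {
        D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
        if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                               &options5, sizeof(options5))))
            return false;
        // Tier 1.0 or higher means the device/driver pair can execute DXR work,
        // whatever hardware or fallback path the driver uses underneath.
        return options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
    }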

So please, point me to where you are getting your information, because not only can I not find it, I am finding an overwhelming amount of the exact opposite of what you are saying.

38 minutes ago, mr moose said:

There are still other cards on the market and AMD are...

...already running ray tracing stuff without making a fuss about it. It just works.

Without a fuss, or people in leather jackets claiming that it's awesome. It just works.

 

Don't believe me? Then read this:

https://www.eurogamer.net/articles/digitalfoundry-the-making-of-killzone-shadow-fall

https://www.dsogaming.com/news/killzone-4-uses-raytracing-for-its-real-time-reflections/

 

Sadly there is no technical presentation for the Forza games on Xbox. But it is entirely possible that they use some form of ray tracing or ray casting as well.

 

So it's not new and it's already in use in some games. But developers don't make a fuss about it. It just works.

"Hell is full of good meanings, but Heaven is full of good works"


2 hours ago, Stefan Payne said:

Raytracing is overrated.

It makes sense for some effects like lighting and reflections, and it will show up more in next-generation gaming (without Nvidia hardware!).


But essentially it's shit as a replacement for rasterized rendering, as it just costs too much performance.

But it doesn't have to be better, it just has to have its use and make sense for certain things. And that it does...

That's what you think because Nvidia's marketing is good. You can see the difference in quality between a path traced scene and a rasterized scene. Path traced scenes have more natural lighting and offer certain effects essentially for free.

The issue is that Nvidia is trying to push a tech they don't know well. Sure, they poached a lot of people recently, but those people didn't work on the (rather useless) RTX of today. So what they showed has essentially been done by people with little expertise in Monte Carlo (MC) rendering, and that shows, since it's mostly a data scientist/AI coder approach to it. MC rendering is coming, just in a few years I'd say. There have been immense speedups over the last few years, and we need more suitable hardware as well. For instance, you would basically need to remove a lot of rasterization hardware to add faster inference hardware, as well as faster intersection hardware. For the first part Nvidia is in the lead; for the second I'd argue that AMD is in the lead, because Nvidia has invested in AI, while AMD has invested in memory architectures more suited to the data structures involved.
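
For anyone wondering what "MC rendering" actually boils down to, here is a tiny illustrative C++ sketch (not from any particular renderer) of the core Monte Carlo idea: estimating the irradiance integral over a hemisphere by averaging randomly sampled directions weighted by their probability density. The Li callback and sample count are placeholders made up for the example.

    #include <cmath>
    #include <functional>
    #include <random>

    // Monte Carlo estimate of irradiance E = integral of Li(w) * cos(theta) over the
    // hemisphere, using cosine-weighted sampling so the pdf (cos(theta)/pi) cancels
    // the cosine term and each sample contributes pi * Li.
    double EstimateIrradiance(const std::function<double(double, double)>& Li, int numSamples)
    {
        const double kPi = 3.14159265358979323846;
        std::mt19937 rng(12345);
        std::uniform_real_distribution<double> u01(0.0, 1.0);
        double sum = 0.0;
        for (int i = 0; i < numSamples; ++i)
        {
            double u1 = u01(rng), u2 = u01(rng);
            double theta = std::acos(std::sqrt(1.0 - u1)); // cosine-weighted elevation
            double phi = 2.0 * kPi * u2;                   // uniform azimuth
            sum += kPi * Li(theta, phi);                   // f / pdf = pi * Li
        }
        return sum / numSamples;                           // average over all samples
    }

The whole debate about denoisers, sample counts and intersection hardware is about making estimators like this converge fast enough for real time.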


7 minutes ago, Stefan Payne said:

...already running ray tracing stuff without making a fuss about it. It just works.

Without a fuss, or people in leather jackets claiming that it's awesome. It just works.

 

Don't believe me? Then read this:

https://www.eurogamer.net/articles/digitalfoundry-the-making-of-killzone-shadow-fall

https://www.dsogaming.com/news/killzone-4-uses-raytracing-for-its-real-time-reflections/

 

Sadly there is no technical presentation for the Forza games on Xbox. But it is entirely possible that they use some form of ray tracing or ray casting as well.

 

So it's not new and it's already in use in some games. But developers don't make a fuss about it. It just works.

You know those aren't quite the same thing. They use light maps and predefined structures etc., which do look good. But that stuff has been around for yonks; hell, even my GT 620 from 2012 can do all that.

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


9 minutes ago, laminutederire said:

That's what you think because

It eats performance like crazy, and rasterization is just more efficient at the moment.


That's how some developers put it in a well-known German tech forum.

And as I said, it's already in use here and there, without developers writing it on the box of the game...

"Hell is full of good meanings, but Heaven is full of good works"


3 minutes ago, mr moose said:

You know those aren't quite the same thing. They use light maps and predefined structures etc., which do look good. But that stuff has been around for yonks; hell, even my GT 620 from 2012 can do all that.

See, it's old shit, no reason to pound the drums about this technology.

Especially since it was first implemented in the early 80s...

 

Quote

The first implementation of a "real-time" ray-tracer was the LINKS-1 Computer Graphics System built in 1982 at Osaka University's School of Engineering, by professors Ohmura Kouichi, Shirakawa Isao and Kawata Toru with 50 students

 

And if developers want to use it, they'll do it anyway - where it makes sense...

 

 

Or lets take this:

Quote

On June 12, 2008 Intel demonstrated a special version of Enemy Territory: Quake Wars, titled Quake Wars: Ray Traced, using ray tracing for rendering, running in basic HD (720p) resolution.

 

https://en.wikipedia.org/wiki/Ray_tracing_(graphics)#In_real_time

"Hell is full of good meanings, but Heaven is full of good works"


1 minute ago, Stefan Payne said:

It eats performance like crazy, and rasterization is just more efficient at the moment.


That's how some developers put it in a well-known German tech forum.

And as I said, it's already in use here and there, without developers writing it on the box of the game...

Yeah, because it's not ready yet. Nvidia is mostly just marketing it. As you said, it's been done quietly for some time; they're just the only ones bragging about it, that's all.

But everyone sees them as the ray tracing gurus now...


Just now, Stefan Payne said:

See, it's old shit, no reason to pound the drums about this technology.

Especially since it was first implemented in the early 80s...

 

 

And if developers want to use it, they'll do it anyway - where it makes sense...

But you are trying to insinuate that what Nvidia is attempting to do here is the same. It isn't.

 

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


5 minutes ago, laminutederire said:

Yeah, because it's not ready yet. Nvidia is mostly just marketing it. As you said, it's been done quietly for some time; they're just the only ones bragging about it, that's all.

But everyone sees them as the ray tracing gurus now...

Exactly.

It's a nice thing for certain things, and people have been working on it forever.

 

4 minutes ago, mr moose said:

But you are trying to insinuate that what Nvidia is attempting to do here is the same. It isn't.

Could you stop the strawmanning?! 

I said that it's been around forever and is already in use here and there, where it makes sense. But hey, it just works...

There is no need to brag about the work others have already put in.

Especially when the performance of the feature is pretty shit right now anyway...

 

Just wait and see how it develops; when someone else is better at ray tracing, they won't talk about it any more.

 

Oh and look at that:

https://en.wikipedia.org/wiki/Ray-tracing_hardware

"Hell is full of good meanings, but Heaven is full of good works"


1 minute ago, Stefan Payne said:

Exactly.

It's a nice thing for certain things, and people have been working on it forever.

 

Could you stop the strawmanning?! 

I said that it's been around forever and is already in use here and there, where it makes sense. But hey, it just works...

There is no need to brag about the work others have already put in.

Especially when the performance of the feature is pretty shit right now anyway...

 

Just wait and see how it develops; when someone else is better at ray tracing, they won't talk about it any more.

 

Oh and look at that:

https://en.wikipedia.org/wiki/Ray-tracing_hardware

You don't know what a strawman is.

It also appears you don't know what RTX is either.

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


15 minutes ago, mr moose said:

You don't know what a strawman is.

It also appears you don't know what RTX is either.

Well, he's not wrong that Nvidia are no path tracing experts. They poached people this year, so the expertise of those people can only show up in their tech in a while; until then it's just non-expert ways of thinking about it. And even then, it'll depend on management as well.

All in all, RTX is alright, but it's not ground-breaking either. It's just applying what everyone knew, with lesser quality, to allow for somewhat real-time rendering.

DLSS falls somewhat into the same realm. You can trace papers on that back to 2015/16, and to 2000-2005 for the non-deep-learning methods of doing it. It just never reached consumers in any shape or form.


Just now, Nowak said:

Why not Skyrim? That game doesn't need a high frame rate to be enjoyable.

No current GPU would be able to run that in realtime.

 

Just now, RejZoR said:

I'd much prefer something like Deus Ex or System Shock 2 being done with ray tracing. Level complexity isn't insane, and you'd have time to appreciate the enhanced visuals because the game is not a fast-paced shooter but an RPG FPS. Plus, just imagine the gameplay possibilities: observing guards through realistic shadows and reflections and sneaking around based on that. But I guess I'll take whatever they offer.

The Quake 2 source code was publicly available, as it had been released by id Software. So these guys were able to go in and replace the rendering pipeline with a custom Vulkan renderer which supports ray tracing.

 

Link to a 2001 forum post:

https://www.ttlg.com/forums/showthread.php?t=6258

Thank John Carmack

 

This isn't a straightforward graphical mod where you replace textures and models without access to source code...
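
As an aside, here is a hedged C++ sketch of the kind of capability check such a custom Vulkan renderer has to do before it can even build a ray tracing pipeline (assuming an already selected VkPhysicalDevice; this is illustrative, not the mod's actual code):

    #include <vulkan/vulkan.h>
    #include <cstring>
    #include <vector>

    // Sketch only: check whether the driver advertises a ray tracing extension
    // that a custom renderer could build its pipeline on. VK_NV_ray_tracing is the
    // vendor extension of the era; VK_KHR_ray_tracing_pipeline is the later
    // cross-vendor version.
    bool HasRayTracingExtension(VkPhysicalDevice gpu)
    {
        uint32_t count = 0;
        vkEnumerateDeviceExtensionProperties(gpu, nullptr, &count, nullptr);
        std::vector<VkExtensionProperties> exts(count);
        vkEnumerateDeviceExtensionProperties(gpu, nullptr, &count, exts.data());
        for (const auto& e : exts)
            if (std::strcmp(e.extensionName, "VK_NV_ray_tracing") == 0 ||
                std::strcmp(e.extensionName, "VK_KHR_ray_tracing_pipeline") == 0)
                return true;
        return false;
    }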


31 minutes ago, laminutederire said:

Well, he's not wrong that Nvidia are no path tracing experts. They poached people this year, so the expertise of those people can only show up in their tech in a while; until then it's just non-expert ways of thinking about it. And even then, it'll depend on management as well.

All in all, RTX is alright, but it's not ground-breaking either. It's just applying what everyone knew, with lesser quality, to allow for somewhat real-time rendering.

DLSS falls somewhat into the same realm. You can trace papers on that back to 2015/16, and to 2000-2005 for the non-deep-learning methods of doing it. It just never reached consumers in any shape or form.

 

 

I never said this was Nvidia's invention nor their sole domain. All I said was that what Nvidia are trying to do with RTX is not the same as what we have experienced before in games with dynamic lighting effects. RTX, that is the RT cores and tensor cores, is trying to do RT in a whole new way that seems to require less code but a lot of processing power. What we have had before is almost the opposite: it required careful code but didn't require AI or copious amounts of processing power. RTX seeks to find a way to improve RT (do it in its rawest form, live) without needing a render farm or being limited to small-scale applications.

 

 

My responses to Stefan are about his insinuation that RTX is little more than what they could do in 2012, which is bollocks. If that were the case Nvidia would be the laughing stock of the entire industry, movie industry and all, rather than having Pixar salivating at what it means for home animators using their software.


EDIT: I also find it a little erroneous to claim they don't know about the tech or aren't experts in RT. They have been working with industry specialists in rendering animations for a long time now. Since at least 2012 their OptiX RT engine has been used throughout the industry, so they aren't newbies to the technology.

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


1 hour ago, mr moose said:

 

 

I never said this was Nvidia's invention nor their sole domain. All I said was that what Nvidia are trying to do with RTX is not the same as what we have experienced before in games with dynamic lighting effects. RTX, that is the RT cores and tensor cores, is trying to do RT in a whole new way that seems to require less code but a lot of processing power. What we have had before is almost the opposite: it required careful code but didn't require AI or copious amounts of processing power. RTX seeks to find a way to improve RT (do it in its rawest form, live) without needing a render farm or being limited to small-scale applications.

 

 

My responses to Stefan are about his insinuation that RTX is little more than what they could do in 2012, which is bollocks. If that were the case Nvidia would be the laughing stock of the entire industry, movie industry and all, rather than having Pixar salivating at what it means for home animators using their software.


EDIT: I also find it a little erroneous to claim they don't know about the tech or aren't experts in RT. They have been working with industry specialists in rendering animations for a long time now. Since at least 2012 their OptiX RT engine has been used throughout the industry, so they aren't newbies to the technology.

Well, there have been a few techniques borrowed from path tracing acceleration that I'm almost certain made their way to rasterization: things like irradiance caching, virtual point lights and some fancy Fourier-based methods for volumetric rendering and transparency handling. And there was a mod for Quake, pre-existing RTX, which did ray tracing (spoiler: it didn't work great; thankfully RTX improves performance, thanks to the intersection acceleration mostly. Denoising here is limited by the fact that it's not limited to shadows, but I'm pretty sure you don't even need AI considering the game's graphics).

 

I'm not saying Nvidia is a joke, I'm saying they're overselling what they have. Sure, OptiX is good; I'm personally going to use it to try some things out for a possible publication. My point is that it's nowhere near the readiness of the Disney/Pixar in-house renderers. They're slower, but they produce much higher image quality and have more effect flexibility. My point is that Nvidia has experts in fast GPU implementations of path tracing, but they need experts purely in Monte Carlo rendering, like the people from Disney or Pixar, who understand the intricacies of acceleration data structures as well as fancy algorithms to produce hard light paths and light effects. Those are the ones needed to deliver the ground-breaking visuals that would really sell path tracing. Because as of now, what RTX can do is just slightly improved rasterization, and the cost of it is not worth the slight quality bump. And the quality bump is what path tracing is all about.

My point is that it's no secret why they're buying out Disney/Pixar and academic researchers: because those people have an expertise Nvidia didn't have in house. That expertise is that of world-leading scientists in path tracing. Before, they mostly had world-leading scientists in rasterization, physics simulation and AI, and some people with good knowledge of path tracing. That's why their approach focuses more on deep learning super sampling or denoising, while a true path tracing approach would use deep learning for other things. I can link you to a few public publications from Disney on what I mean: channel-aware denoising, denoisers which can denoise nearly anything, and recently they also published a fascinating method to learn the unnormalized distribution of directions based on light path relevance, which opens the ground for extremely powerful path guiding. They also have a deep learning method to render clouds, and lots of more deterministic approaches to make ray marching faster, make intersections faster, handle caustics faster and so on with multiple importance sampling.
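
For context on the multiple importance sampling mentioned above, here is a small illustrative C++ sketch of the standard balance heuristic (Veach), which is how a renderer combines several sampling strategies for the same light path; the function name is mine, not from any Disney publication.

    #include <vector>

    // Balance heuristic from multiple importance sampling: when the same light path
    // can be generated by several sampling strategies, weight each strategy's
    // contribution by its pdf relative to the sum of all strategies' pdfs.
    double BalanceHeuristic(double pdfThis, const std::vector<double>& pdfAll)
    {
        double denom = 0.0;
        for (double p : pdfAll) denom += p;   // sum of pdfs of every strategy
        return denom > 0.0 ? pdfThis / denom : 0.0;
    }

    // Typical two-strategy use, combining BSDF sampling and light sampling:
    // contribution = f * Li / pdfBsdf * BalanceHeuristic(pdfBsdf, {pdfBsdf, pdfLight});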

Lots of techniques that Nvidia hasn't pushed out yet because they lacked the brains who were familiar enough with all of them to lead the research. Now they've got more, so I expect them to level up their game a bit, even if the researchers they have will probably want to publish their findings for everyone to catch up.

(Hopefully)


1 minute ago, laminutederire said:

Well, there have been a few techniques borrowed from path tracing acceleration that I'm almost certain made their way to rasterization: things like irradiance caching, virtual point lights and some fancy Fourier-based methods for volumetric rendering and transparency handling. And there was a mod for Quake, pre-existing RTX, which did ray tracing (spoiler: it didn't work great; thankfully RTX improves performance, thanks to the intersection acceleration mostly. Denoising here is limited by the fact that it's not limited to shadows, but I'm pretty sure you don't even need AI considering the game's graphics).

 

I'm not saying Nvidia is a joke, I'm saying they're overselling what they have. Sure, OptiX is good; I'm personally going to use it to try some things out for a possible publication. My point is that it's nowhere near the readiness of the Disney/Pixar in-house renderers.

 

I got to about here and had to stop reading.

1. That doesn't change what I said; RTX does do it differently because it leverages new hardware that hasn't been used before to improve the process.

2. Pixar have been using OptiX for quite some time now, which kinda leaves me confused. Are you trying to compare what Pixar do with conventional RT tech (developed by Nvidia) to RTX, to claim that Nvidia don't know what they are doing? Because that is what it reads like; about three times you have tried to claim Nvidia don't understand the tech or are new at it. They aren't.

 

 

 

 

1 minute ago, laminutederire said:

They're slower, but they produce much higher image quality and have more effect flexibility. My point is that Nvidia has experts in fast GPU implementations of path tracing, but they need experts purely in Monte Carlo rendering, like the people from Disney or Pixar, who understand the intricacies of acceleration data structures as well as fancy algorithms to produce hard light paths and light effects. Those are the ones needed to deliver the ground-breaking visuals that would really sell path tracing. Because as of now, what RTX can do is just slightly improved rasterization, and the cost of it is not worth the slight quality bump. And the quality bump is what path tracing is all about.

My point is that it's no secret why they're buying out Disney/Pixar and academic researchers: because those people have an expertise Nvidia didn't have in house. That expertise is that of world-leading scientists in path tracing. Before, they mostly had world-leading scientists in rasterization, physics simulation and AI, and some people with good knowledge of path tracing. That's why their approach focuses more on deep learning super sampling or denoising, while a true path tracing approach would use deep learning for other things. I can link you to a few public publications from Disney on what I mean: channel-aware denoising, denoisers which can denoise nearly anything, and recently they also published a fascinating method to learn the unnormalized distribution of directions based on light path relevance, which opens the ground for extremely powerful path guiding. They also have a deep learning method to render clouds, and lots of more deterministic approaches to make ray marching faster, make intersections faster, handle caustics faster and so on with multiple importance sampling.

Lots of techniques that Nvidia hasn't pushed out yet because they lacked the brains who were familiar enough with all of them to lead the research. Now they've got more, so I expect them to level up their game a bit, even if the researchers they have will probably want to publish their findings for everyone to catch up.

(Hopefully)

 

I think the thing people forget here is that this type of technology (like nearly all tech in the world) is a slow evolution. Each company bases new stuff off what already exists. If you try to use that evolution to devalue the contribution of whoever comes along next, by insinuating that they are just copying or rehashing old stuff while ignoring the actual contribution, then you are being very disingenuous to the industry.

 

Quote

Over the past several years, and predating the RenderMan XPU project, Pixar’s internal tools team developed a GPU-based ray tracer, built on CUDA and Optix.

 

https://renderman.pixar.com/news/renderman-xpu-development-update

 

Disney/Pixar have been using and developing RT based on Nvidia tech for years.

Every time you claim Nvidia hasn't got the brains to do the research, you should remind yourself that the people you claimed were doing the work did it off the back of Nvidia research and development.

 

 

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


We need more old games to go through this process to see how much better they would look with ray tracing.

Details separate people.


5 hours ago, williamcll said:

Ray tracing for the original DOOM please.

Is that even possible with its engine? id Tech 1 was sort of based in two dimensions, including all sprites. How does that work with all enemies being sprites?

Who needs fancy graphics and high resolutions when you can get a 60 FPS frame rate on iGPUs?


1 hour ago, Tech_Dreamer said:

we need more old games to go through this process to see how better it would look with ray tracing.

Imagine Half-Life with this quality of ray tracing; that would be amazing. Thief too... and many other games. System Shock 2 would be even more absolutely terrifying with even better lighting.

Who needs fancy graphics and high resolutions when you can get a 60 FPS frame rate on iGPUs?


8 hours ago, BuckGup said:

So the game is capped at 250 FPS, so it could potentially be over 400 FPS for all we know, and with ray tracing it is bouncing around below 60 FPS, WITH Vulkan too instead of OpenGL, which in itself provides a tad of performance gain.

I think the performance gap is far higher than that.

 

The non-RTX run doesn't have the GPU fully boosting and is only running at around 20-25% utilization.

 

An uncapped framerate might be as high as 1000 fps if the game engine would allow it (at roughly 20-25% GPU utilization while capped at 250 FPS, a fully loaded GPU could plausibly push about four times that).

System specs:

4790k

GTX 1050

16GB DDR3

Samsung evo SSD

a few HDD's


56 minutes ago, DrDerp said:

Imagine Half-Life with this quality of ray tracing; that would be amazing. Thief too... and many other games. System Shock 2 would be even more absolutely terrifying with even better lighting.

Half-Life would look amazing. And Thief even more so; imagine the advantages of using actual physically realistic lighting, shadowing and reflections to sneak around. System Shock 2, quite frankly, with the high-res texture mod, looks amazing even today, 20 years later. It's just the way the Dark Engine shades things and imitates bump mapping while not actually supporting it. Now imagine smacking ray tracing on it. If anyone does this, I'd buy an RTX card this moment and wouldn't give two Fs about the price or scope of use of RTX tech, just for this one game, because I can already tell it would look jaw-dropping. The lighting, the shadows, the reflections in windows and metallic surfaces, etc...


6 hours ago, mr moose said:

We've been over this: the cards have only just come out, and it's actually being taken up faster than any other new tech. If we don't see any decent games using it a year or a year and a half from now, I'll join you in the "too much noise and no goods" camp. EDIT: But my point still stands; had AMD released it they would be making just as much noise and the uptake would take just as long.

 

Yeah, take a look at this list: https://en.everybodywiki.com/List_of_games_with_DirectX_12_support

 

The first non-Microsoft (they effectively had advance knowledge) big title to support DX12 after DX12 hit was Rise of the Tomb Raider, 3 months later. We're still only 2 months after release and we already have one game. The next set of big games after RotTR all came 4 months after that, or 7 months from the DX12 release. We won't really be able to judge RTX uptake until around June/July of this year.

 

It's also been pointed out (I need to go digging for links on this at some point) that initial benchmarks in DX12 games had awful performance, but the same benchmarks on the same hardware ran far better after driver and game engine optimisations.


IMO the path traced Quake 2 on a Titan Xp looks better than this ray traced one on a 2080 Ti.

Don't even look at the YT video; just download the original file from the video description, YT compression is killing it.

Yes, there is WAY more noise than in the RTX video and the performance is even lower, but I think it looks a lot better.


This is a mod on an old game to fit Nvidia RTX. There are older ray traced games out there; if my memory serves me right, Wolfenstein was one of the first games to try ray tracing, without any hardware support.


9 hours ago, Stefan Payne said:

It must not, however, be forced to do things that it sucks at. And this game is in that category.

This is the first time that I've actually heard a large portion of the population arguing that programmers shouldn't experiment with new technologies and see how things work. Well, except for AI, but that's a slightly different scenario and more related to people not actually understanding what it is or does.

This is the first time that it's become near enough real time to start playing with in anything resembling a useful context. Of course people are going to do all sorts of stupid things with it: not enough developers know how to use it or when to use it yet. That's how learning about programming stuff works...

Perhaps the guy that made the mod for this game wanted to learn about ray tracing, and already knew about modding Quake 2. Maybe he never intended for his program to be useful in any context other than his own education about ray tracing. And next time, his things will be more useful because of it. And that is how it is: The vast majority of software in the world, especially freely distributed software, is really just a project undertaken so that the developer can learn. Very few developers actually set out with the goal to create something truly useful, and of those few that do, even fewer actually succeed. For every line of production code written, millions of lines of experimentation are sitting behind it.

 

Even something as ubiquitous as the Linux kernel started out as just a guy experimenting around, as evidenced by the announcement:

Quote

Hello everybody out there using minix -

I'm doing a (free) operating system (just a hobby, won't be big and professional like gnu) for 386(486) AT clones. This has been brewing since april, and is starting to get ready. I'd like any feedback on things people like/dislike in minix, as my OS resembles it somewhat (same physical layout of the file-system (due to practical reasons) among other things).

.../...

P.S Yes - it's free of any minix code, and it has a multi-threaded fs. It is NOT protable [actual original typo] (uses 386 task switching etc), and it probably never will support anything other than AT-harddisks, as that's all I have :-(.

 

ENCRYPTION IS NOT A CRIME


7 hours ago, mr moose said:

 

I got to about here and had to stop reading.

1. That doesn't change what I said; RTX does do it differently because it leverages new hardware that hasn't been used before to improve the process.

2. Pixar have been using OptiX for quite some time now, which kinda leaves me confused. Are you trying to compare what Pixar do with conventional RT tech (developed by Nvidia) to RTX, to claim that Nvidia don't know what they are doing? Because that is what it reads like; about three times you have tried to claim Nvidia don't understand the tech or are new at it. They aren't.

 

 

 

 

 

I think the thing people forget here is that this type of technology (like nearly all tech in the world) is a slow evolution. Each company bases new stuff off what already exists. If you try to use that evolution to devalue the contribution of whoever comes along next, by insinuating that they are just copying or rehashing old stuff while ignoring the actual contribution, then you are being very disingenuous to the industry.

 

 

https://renderman.pixar.com/news/renderman-xpu-development-update

 

Disney/Pixar have been using and developing RT based on Nvidia tech for years.

Every time you claim Nvidia hasn't got the brains to do the research, you should remind yourself that the people you claimed were doing the work did it off the back of Nvidia research and development.

 

 

Well, now you're misreading me on purpose. I said that they're not doing cutting-edge path tracing. There are other people and companies better than they are at path tracing currently, and well... that's life, and that's not saying they don't know anything or are new to it. It just means that, no, they're not stomping on everyone and doing perfect things that no one can match. Because that's not the case; they're a player in the game, and other companies provide really important insights into how it can be done.

 

My point is that, for now, they basically implemented routines in hardware. That's necessary but not enough, because once those base routines can be done fast you still need good software to deliver real-time path tracing, which no one has currently. And it'll require more than just accelerating intersection and sampling routines and adding a denoiser on top of renders with one or two samples per pixel. You'll need still faster routines, plus a better denoiser with a more robust range of operability, path guiding to make the best out of every ray, and so on.
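
To make the "routines in hardware" point concrete: the per-node test below (the classic ray/axis-aligned-box slab test) is the kind of fixed routine RT cores accelerate while traversing a BVH. This is a generic textbook sketch in C++, not Nvidia's actual implementation.

    #include <algorithm>

    // Classic slab test: does a ray (origin o, direction d) hit an axis-aligned box?
    // Repeated millions of times per frame while walking a BVH, this is the kind of
    // small fixed routine that dedicated intersection hardware speeds up.
    bool RayHitsAABB(const float o[3], const float d[3],
                     const float boxMin[3], const float boxMax[3])
    {
        float tNear = 0.0f, tFar = 1e30f;
        for (int axis = 0; axis < 3; ++axis)
        {
            float inv = 1.0f / d[axis];                 // sketch assumes d[axis] != 0
            float t0 = (boxMin[axis] - o[axis]) * inv;
            float t1 = (boxMax[axis] - o[axis]) * inv;
            if (t0 > t1) std::swap(t0, t1);             // order the slab entry/exit
            tNear = std::max(tNear, t0);
            tFar  = std::min(tFar, t1);
            if (tNear > tFar) return false;             // slabs do not overlap: miss
        }
        return true;
    }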

 

Have you had a look at OptiX and the Disney/Pixar renderers, though? RenderMan is built partly on OptiX, but they added so many things to it to actually get way faster rendering times and, more importantly, many more effects, that I wouldn't say it's Nvidia's work and expertise. They basically used CUDA for the deep learning stuff and OptiX for the basic routines.

My point is that you can idealize Nvidia as gods all you want; it's not the case.

Why is it such a f** issue to say that they aren't world-leading in something? Come on.

Look at the science behind what RenderMan can do beyond OptiX and you'll see that there is a lot of fancy stuff in that renderer that Nvidia had nothing to do with.

And if Nvidia had the brains to do it for RTX, why did they have to poach every university in the recent past?

The reason they hired a lot of those people is that they understood there is a lot more to ray tracing than what RTX is currently doing.


1 hour ago, laminutederire said:

Well, now you're misreading me on purpose. I said that they're not doing cutting-edge path tracing. There are other people and companies better than they are at path tracing currently, and well... that's life, and that's not saying they don't know anything or are new to it. It just means that, no, they're not stomping on everyone and doing perfect things that no one can match. Because that's not the case; they're a player in the game, and other companies provide really important insights into how it can be done.

 

My point is that, for now, they basically implemented routines in hardware. That's necessary but not enough, because once those base routines can be done fast you still need good software to deliver real-time path tracing, which no one has currently. And it'll require more than just accelerating intersection and sampling routines and adding a denoiser on top of renders with one or two samples per pixel. You'll need still faster routines, plus a better denoiser with a more robust range of operability, path guiding to make the best out of every ray, and so on.

 

Have you had a look at OptiX and the Disney/Pixar renderers, though? RenderMan is built partly on OptiX, but they added so many things to it to actually get way faster rendering times and, more importantly, many more effects, that I wouldn't say it's Nvidia's work and expertise. They basically used CUDA for the deep learning stuff and OptiX for the basic routines.

My point is that you can idealize Nvidia as gods all you want; it's not the case.

Why is it such a f** issue to say that they aren't world-leading in something? Come on.

Look at the science behind what RenderMan can do beyond OptiX and you'll see that there is a lot of fancy stuff in that renderer that Nvidia had nothing to do with.

And if Nvidia had the brains to do it for RTX, why did they have to poach every university in the recent past?

The reason they hired a lot of those people is that they understood there is a lot more to ray tracing than what RTX is currently doing.


I see absolutely no reason to believe half of what you have just said. You are literally trying to twist reality to dismiss Nvidia's work in the field. Is it really that insulting to you?

 

 

 

 

 

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  

