
Raytracing API

https://videocardz.com/newz/nvidia-to-announce-rtx-technology

 

Quote

NVIDIA will announce RTX technology. This is a real-time cinematic rendering for game developers.

 

 

Although I don't have full information on this since Microsoft hasn't released it yet, from what I have seen I think it's based on things nV has done with OptiX, i.e. AI-assisted raytracing.  That doesn't mean the whole thing is AI driven: OptiX uses CUDA and compute to do the raytracing, while the AI portion is used to analyze future frames and correct errors.
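To make that split concrete, here's a toy sketch in C++ (everything in it is an invented stand-in for illustration, not real OptiX code): compute produces a noisy raytraced estimate, and a separate "AI" pass, faked here with a 3-tap average, cleans it up afterwards.

```cpp
#include <cstdio>
#include <vector>

// Stand-in for the compute/raytracing stage: a cheap, noisy 1-sample estimate.
std::vector<float> trace_noisy(int n) {
    std::vector<float> img(n);
    for (int i = 0; i < n; ++i) img[i] = (i % 2) ? 1.0f : 0.0f; // fake noise
    return img;
}

// Stand-in for the learned denoiser: a real one would be a trained network;
// here it's just a 3-tap average over neighbors.
std::vector<float> denoise(const std::vector<float>& in) {
    std::vector<float> out(in.size());
    for (size_t i = 0; i < in.size(); ++i) {
        float l = in[i > 0 ? i - 1 : i];
        float r = in[i + 1 < in.size() ? i + 1 : i];
        out[i] = (l + in[i] + r) / 3.0f;
    }
    return out;
}

int main() {
    auto noisy = trace_noisy(8);   // "compute" does the actual raytracing
    auto clean = denoise(noisy);   // "AI" corrects the errors afterwards
    for (size_t i = 0; i < clean.size(); ++i)
        std::printf("%zu: noisy %.1f -> denoised %.2f\n", i, noisy[i], clean[i]);
}
```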

 

I see this as the next step toward true cinematic gaming.  Lighting, shadows, even physics and interactions can now have a finer grain of accuracy than was possible before within a single system.  Ray tracing reduces some system requirements, such as memory, but the direct effect is a much higher computational load.

 

There's an interesting comment in the picture: the API was made for Volta GPUs.  Ampere, Turing, or whatever the gaming GPU ends up being called, is going to be Volta based.  Volta's pipeline, in which each thread is independent of the others (which also means the instructions that create those threads are independent), is what makes real-time raytracing game engines possible.


Please edit your post to follow Tech News posting guidelines or it will be removed.

 

 



Don't think I can add any more info to this one, please delete, it should come out on Monday anyways :)


1 minute ago, Razor01 said:

Don't think I can add any more info to this one, please delete, it should come out on Monday anyways :)

You can add some more info; all it needs to stay in this subforum is a quote from the article and some personal input.



Just now, huilun02 said:

Great! Can't wait for it to be included in GameWorks so we can all enjoy 15fps gaming regardless of GPU vendor

 

 

They already have a beta SDK of it in GameWorks for early-access partners ;)  And from what I have heard it works very well.  I am assuming its performance depends on how much you want to do with raytracing, of course.

 

We have seen these techniques used prior to this API, though.  Sebbi over at B3D did something similar with a lighting and shadow system on a custom engine while maintaining 60 FPS.  I don't know the resolution or specs, but I would guess 1080p on a high-end last-generation graphics card, because at the time Sebbi was working heavily on consoles.

 

We can see from OptiX, and the things nV has shown so far, that AI-assisted, predictive ray tracing is very fast on Volta.  The raytracing portion alone shouldn't be too intensive on next-gen graphics cards.

 

I am surprised AMD was not a partner on this API, though; MS usually likes to have both IHVs involved because it shapes the future of the graphics landscape from a hardware perspective.  I don't know what the implications of that would be.  Is it because the instruction or thread granularity would hurt performance on this generation and older hardware?  I can see that hit happening.


Just now, huilun02 said:

Yeah me too. It's an Nvidia initiative for their own GPUs. Not sure why AMD would turn down Nvidia's offer to be involved. Oh wait

 

This API was made by Microsoft, with nV's input, just like DirectX.

 

If nV had made this API and tried to get MS to adopt it, lol, do you remember how that went with AMD, Mantle, and Microsoft?


Really? Can you show me that: something where I can't say it's actually due to AMD's hardware weakness compared to nV's?

 

 


5 minutes ago, Razor01 said:

 

GameWorks is one example. AMD is weak on those because everything is proprietary, so it's harder for them to optimize their stuff. You call it weakness; I call it Nvidia being jerks. Were they to share more, AMD and Nvidia would both have better GPUs.


Just now, huilun02 said:

 

 

 

 

Yeah, since AMD is not involved, what GPUs are they going to use?  Volta-based ones, right?  Because AMD has nothing that can actually do it, maybe?  I don't think that is the case; it's just that certain features the API needs might not be there in Vega, like thread-level independence.

 

Let's break this down.  When we use if statements in programming, we create dependencies, and if threads are dependent on each other, the program stalls or slows down while those dependencies are filled.

Raytracing is going to be heavily dependent on this.  Simple example: all surfaces in raytracing have reflection, roughness, and refraction properties.  All of these create dependence between threads, because pixels are going to change based on them.
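Here's a minimal C++ sketch of what I mean (toy types I made up, not any real API): each material property adds a conditional secondary ray, so rays hitting different surfaces take different branches, and threads in the same SIMD group that diverge end up serializing.

```cpp
#include <cstdio>

struct Material {
    float reflectivity;  // 0 = matte, 1 = mirror
    float refraction;    // 0 = opaque, >0 = transparent
    float roughness;     // a rough surface would scatter into many more rays
};

// Toy recursive shade: every `if` spawns dependent follow-up work, and
// neighboring rays that branch differently can no longer run in lockstep.
float shade(const Material& m, int depth) {
    if (depth == 0) return 0.1f;              // ambient term at recursion limit
    float color = 0.1f;                       // base contribution
    if (m.reflectivity > 0.0f)                // dependent reflection ray
        color += m.reflectivity * shade(m, depth - 1);
    if (m.refraction > 0.0f)                  // dependent refraction ray
        color += 0.5f * shade(m, depth - 1);
    return color;
}

int main() {
    Material mirror{0.9f, 0.0f, 0.05f};
    Material glass {0.1f, 1.5f, 0.00f};
    std::printf("mirror: %.3f\n", shade(mirror, 4));
    std::printf("glass:  %.3f\n", shade(glass, 4));
}
```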


54 minutes ago, laminutederire said:

GameWorks is one example. AMD is weak on those because everything is proprietary, so it's harder for them to optimize their stuff. You call it weakness; I call it Nvidia being jerks. Were they to share more, AMD and Nvidia would both have better GPUs.

 

 

So you can't; I can.

 

HairWorks: why does it do poorly on AMD?  Tessellation amounts.  AMD hardware is poor at geometry throughput because of the number of GS units it has, compared to nV hardware.

 

Same with the GameWorks God Rays library.

 

These two specific libraries are built around tessellation performance and geometry amounts, which directly stress geometry throughput.

 

What happens with AMD GPUs when software hits that hard?  Their GS units stall the entire pipeline.
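A back-of-envelope C++ sketch (all rates invented for illustration): when a fixed-function geometry stage is the slowest link, it puts a hard ceiling on frame rate no matter how much compute sits behind it.

```cpp
#include <cstdio>

int main() {
    const double clockGHz     = 1.5;          // hypothetical core clock
    const long   trisPerFrame = 40'000'000;   // heavily tessellated scene
    // Hypothetical fixed-function setup rates for two front-end designs.
    const double wideTrisPerClock   = 4.0;
    const double narrowTrisPerClock = 1.0;
    auto fpsCeiling = [&](double trisPerClock) {
        return clockGHz * 1e9 * trisPerClock / trisPerFrame;
    };
    std::printf("wide front end:   ~%.0f fps ceiling\n", fpsCeiling(wideTrisPerClock));
    std::printf("narrow front end: ~%.0f fps ceiling\n", fpsCeiling(narrowTrisPerClock));
}
```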

 

This does not change with raytracing.  Raytracing has nothing to do with tessellation or geometry amounts; it pushes graphics completely differently, through more compute, as I stated.

 

But when we look at AMD GPUs, their flops don't match up with nV flops: it takes something like 1.5 AMD flops to match 1 flop on nV hardware.

 

Why is that?  Because on AMD hardware the pipeline bottlenecks before the compute performance can be reached.
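Illustrative arithmetic only, with numbers invented for the sake of the example (not measurements): a front-end bottleneck that idles the shader array makes paper flops overstate delivered flops.

```cpp
#include <cstdio>

int main() {
    // Card A: big paper number but front-end limited; card B: smaller paper
    // number but well fed. Both utilization figures are made up.
    const double peakA = 12.0, utilA = 0.55;
    const double peakB =  9.0, utilB = 0.85;
    std::printf("A: %.1f TFLOPS on paper -> ~%.2f delivered\n", peakA, peakA * utilA);
    std::printf("B: %.1f TFLOPS on paper -> ~%.2f delivered\n", peakB, peakB * utilB);
    // B wins despite the smaller paper number, which is the kind of
    // flops-vs-flops mismatch described above.
}
```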


6 minutes ago, Razor01 said:

 

You do know GameWorks overutilizes it on purpose, and that Nvidia has yet to publish details on how they do it?

The issue is that they push tech into games without giving others a chance to adapt, and quite frankly that sucks for the industry as a whole. If AMD could catch up without having to reverse engineer everything, they would have the time and money to develop more new stuff that would eventually make Nvidia GPUs better as well.

But that's company culture. I've seen some of their researchers and they're pretty reluctant to talk about any of their stuff if it's about how it really works.


9 minutes ago, laminutederire said:

You do know GameWorks overutilizes it on purpose, and that Nvidia has yet to publish details on how they do it?

 

 

 

Does it?  You may feel that way, OK, but there are differences.  You can call the differences inconsequential, but they are still there.  Also, you can turn those GameWorks features off, so if you want a level playing field when testing, you can have that too.  GameWorks libs are nV products; they use them to show off the advantages their hardware has.

 

Now let's take this a step further.  Current games use X polys prior to tessellation, correct?  Next-gen games are going to use 2X polys prior to tessellation, correct?  How is that going to affect current tessellation factor amounts and geometry throughput?  Well, an x8 factor will be like an x16 factor.

 

How does that affect current hardware performance?  If there is a geometry throughput bottleneck, tessellation amounts have to be dropped even further!
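A quick sketch of that (hypothetical counts, using a simple base * factor^2 amplification model): doubling the pre-tessellation mesh doubles the post-tessellation load at every factor, so a fixed geometry budget forces the factor down.

```cpp
#include <cstdio>

// Simple amplification model: tessellated tris ~ base * factor^2.
long amplified(long base, int factor) { return base * (long)factor * factor; }

int main() {
    const long budget = 10'000'000;             // hypothetical per-frame triangle budget
    for (long base : {100'000L, 200'000L}) {    // current mesh vs a 2X next-gen mesh
        int factor = 1;
        while (amplified(base, factor + 1) <= budget) ++factor;
        std::printf("base %ld polys -> highest affordable factor: x%d\n", base, factor);
    }
}
```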


12 minutes ago, laminutederire said:

If AMD could catch up without having to reverse engineer everything, they would have the time and money to develop more new stuff that would eventually make Nvidia GPUs better as well.

The way I see it, NVIDIA provided game developers with a bunch of tools. AMD cried foul, but didn't have anything to show for it. It took them something like two-plus years to come out with an open-source competitor to GameWorks. I don't know what AMD was doing before NVIDIA did this, but it doesn't look to me like they were interested in building libraries and APIs to help game developers optimize for their cards.

 

Besides, you kind of can't do universal optimizations anymore unless they're super generic. NVIDIA's and AMD's architectures are different enough that a highly tuned optimization for one doesn't work for the other.


29 minutes ago, M.Yurizaki said:

The way I see it, NVIDIA provided game developers with a bunch of tools. AMD cried foul, but didn't have anything to show for it. It took them something like two-plus years to come out with an open-source competitor to GameWorks. I don't know what AMD was doing before NVIDIA did this, but it doesn't look to me like they were interested in building libraries and APIs to help game developers optimize for their cards.

 

Besides, you kind of can't do universal optimizations anymore unless they're super generic. NVIDIA's and AMD's architectures are different enough that a highly tuned optimization for one doesn't work for the other.

 

 

Interestingly, TressFX 1.0 was being developed before anyone knew about HairWorks, if I remember my history correctly.  AMD showed off TressFX six months to a year before the HairWorks demos, and it was actually in games before the HairWorks announcement.

 

There were some serious drawbacks to TressFX, though: lighting and a few other things from an art perspective that made it hard to use.

 

This has nothing to do with Microsoft releasing an API for ray tracing, though.  Yes, nV could have had all of its input in it, but that doesn't mean nV's GameWorks influence is going to be there.

 

APIs are made to be vendor agnostic if they are going to be widely adopted, and for the most part they are.  Hardware design is based on API structures and calls, and hardware can have benefits or affinities for specific API features based on its design.  This has always been the case with Microsoft's APIs: whoever has the better GPU design does better.

 

 


50 minutes ago, Razor01 said:

Really? Can you show me that: something where I can't say it's actually due to AMD's hardware weakness compared to nV's?

 

 

T E S S E L A T I O N



Just now, Bananasplit_00 said:

T E S S E L A T I O N

 

 

Tessellation by itself is not the culprit behind AMD's performance issues ;)


1 hour ago, Razor01 said:

 

What I am saying isn't that AMD is better or whatever. I'm just saying Nvidia annoys the hell out of me with their proprietary code. It's not about which architecture is truly the best in the end; in some cases it has become (and if this API goes wrong, it will always be) about who has done the best optimization of a certain algorithm that isn't necessarily the best one. That's what bothers me. Things could be pushed upwards instead of downwards without this corporate stubbornness about protecting everything.

Honestly, if it were Disney giving Microsoft input instead of Nvidia, I would be much happier, because they would hold the industry back less.


1 hour ago, M.Yurizaki said:

The way I see it, NVIDIA provided game developers with a bunch of tools. AMD cried foul, but didn't have anything to show for it. It took them something like two-plus years to come out with an open-source competitor to GameWorks. I don't know what AMD was doing before NVIDIA did this, but it doesn't look to me like they were interested in building libraries and APIs to help game developers optimize for their cards.

 

Besides, you kind of can't do universal optimizations anymore unless they're super generic. NVIDIA's and AMD's architectures are different enough that a highly tuned optimization for one doesn't work for the other.

As I said in the post just above, ultimately I'd much prefer something going for universal optimizations (because there is a lot to do even today). Because those universal optimizations would be open source, they would be clear to anyone, and it would prevent Nvidia from continuing to abuse their position in the industry to push bad standards.


2 hours ago, laminutederire said:

What I am saying isn't that AMD is better or whatever. I'm just saying Nvidia annoys the hell out of me with their proprietary code. It's not about which architecture is truly the best in the end; in some cases it has become (and if this API goes wrong, it will always be) about who has done the best optimization of a certain algorithm that isn't necessarily the best one. That's what bothers me. Things could be pushed upwards instead of downwards without this corporate stubbornness about protecting everything.

Honestly, if it were Disney giving Microsoft input instead of Nvidia, I would be much happier, because they would hold the industry back less.

 

 

Yeah and what the hell does that have to do with an API coming from MS?

 

What the hell does Disney have to do with graphics technologies?

 

Maybe it would be better if we had the makers of the porta potty telling MS how a graphics API for ray tracing would be good for the industry?

 

Do you know why AMD isn't involved in this?  Because their hardware, right now and in the near future, isn't a good fit for it?  Most likely that is the reason.  Have you noticed that whenever AMD/ATi or nV talk down a certain API, it's because they have issues with it compared to the competition?

 

Again, that is just a guess, but it's based on how, when nV didn't have hardware that worked well with DirectX, they didn't want to work with MS.  Examples: the NV1 and the FX series of cards.  MS had to create a different version of DirectX for the FX series.

 

Do you know why MS has never taken anything proprietary from nV or AMD when it concerns MS's APIs?  Because both AMD and nV have tried to get MS to use their own APIs.  nV tried to push MS not to go to texture-based rendering with the NV1, and I already told you how MS turned down AMD with Mantle (we can take Sony as an example of this too).  It's about control: control of one's own API, control over one's own software ecosystem.

 

 

2 hours ago, laminutederire said:

As I said in the post just above, ultimately I'd much prefer something going for universal optimizations (because there is a lot to do even today). Because those universal optimizations would be open source, they would be clear to anyone, and it would prevent Nvidia from continuing to abuse their position in the industry to push bad standards.

 

An API is not optimized for certain hardware; certain hardware can be made better for an API, though, which is what we see all the time.  Examples: ATI's 5xx series had better dynamic branching performance for DX 9.0c; the G80 was better for DX10 than the R600.  Do you know ATi and nV both worked on DX10 with MS?  Why did this happen?  It wasn't because the API favored nV hardware; it was due to the design decisions of the chip makers.  Compute performance has been better overall on nV hardware since the move to unified shaders, even now with fewer flops, because nV has more experience with SIMD/SIMT technologies: they have been using them since the G80, while AMD only started with SIMD with GCN.

 

Don't get SDKs and APIs confused.

An SDK uses an API and builds something on top of it, so it's easier to integrate or to get software to completion quicker.

 

CUDA as an API could be used on AMD hardware if nV allowed it, but they don't.  The CUDA SDK is proprietary, but that doesn't mean the CUDA APIs couldn't work on AMD hardware.

 

An API is just a list of instructions exposing hardware features, dude.
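A toy C++ sketch of the distinction (every name here is hypothetical): the API is the thin contract that exposes what the hardware/driver can do; the SDK is convenience code packaged on top of that contract.

```cpp
#include <cstdio>

// The "API" layer: a thin contract over the hardware/driver.
namespace toy_api {
    void upload_buffer(const float* data, int n) {
        std::printf("upload %d floats (first = %.1f)\n", n, data[0]);
    }
    void dispatch(int threads) { std::printf("dispatch %d threads\n", threads); }
}

// The "SDK" layer: opinionated helpers built on the API so integration is faster.
namespace toy_sdk {
    // One call that hides the multi-step API sequence a developer
    // would otherwise write by hand.
    void run_effect(const float* data, int n) {
        toy_api::upload_buffer(data, n);
        toy_api::dispatch(n / 64 + 1);
    }
}

int main() {
    float scene[256] = {};
    toy_sdk::run_effect(scene, 256);  // one SDK call -> several API calls underneath
}
```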


14 minutes ago, Razor01 said:

Do you know why AMD isn't involved in this?  Because their hardware, right now and in the near future, isn't a good fit for it?  Most likely that is the reason.  Have you noticed that whenever AMD/ATi or nV talk down a certain API, it's because they have issues with it compared to the competition?

 

Again, that is just a guess, but it's based on how, when nV didn't have hardware that worked well with DirectX, they didn't want to work with MS.  Examples: the NV1 and the FX series of cards.  MS had to create a different version of DirectX for the FX series.

 

Do you know why MS has never taken anything proprietary from nV or AMD when it concerns MS's APIs?  Because both AMD and nV have tried to get MS to use their own APIs.  nV tried to push MS not to go to texture-based rendering with the NV1, and I already told you how MS turned down AMD with Mantle (we can take Sony as an example of this too).  It's about control: control of one's own API, control over one's own software ecosystem.

Just going to toss this out there: maybe AMD is less interested in working on DirectX, which they know has a performance impact on their hardware, than in trying to push more adoption of the Vulkan API...


8 minutes ago, Razor01 said:

 

 

Hmm, because they have some of the best raytracing expertise out there? They recently published on deep-learning networks for image denoising, and they were among the first to work on path-guided path tracing with machine learning. (By the way, graphics engines are already rough raytracers; since they don't compute the extra bounces, they use tricks to make it look nice anyway. The API is probably about path tracing.) How do you think they produce their movies? You believe they have people drawing those? There is cutting-edge path tracing tech behind them, making it work as fast as possible with the best fidelity possible compared to the reference solution.

Before talking to me like I'm an idiot, please have your ground covered.

 

CUDA is a great example of Nvidia holding back the whole industry. They still refuse to make any effort toward unification with OpenCL, which does pretty much the same thing with roughly the same runtime, in a way that wouldn't be painful for developers of machine learning applications. They're currently forcing them to recode everything in OpenCL, instead of finally helping the people behind OpenCL build a CUDA-to-OpenCL conversion so people don't have to redo everything. Why? Because they know AMD is good at compute, and that it could potentially run down part of their ecosystem. We end up with them locking out AMD and milking everyone else without having to lift a finger performance-wise. Had they done it, AMD would be competing a lot more in AI, and I am pretty sure they themselves would have been working on making their products better than they are right now.


5 minutes ago, WMGroomAK said:

Just going to toss this out there: maybe AMD is less interested in working on DirectX, which they know has a performance impact on their hardware, than in trying to push more adoption of the Vulkan API...

I think that is true.

 

AMD spent all their time working on Mantle and FreeSync.  Nvidia spent their time giving game developers resources that work and make their jobs easier.  Guess who won that battle?  There was nothing stopping AMD from creating APIs and producing resources for game devs that worked on their hardware.



7 minutes ago, laminutederire said:

Hmm, because they have some of the best raytracing expertise out there? They recently published on deep-learning networks for image denoising, and they were among the first to work on path-guided path tracing with machine learning. (By the way, graphics engines are already rough raytracers; since they don't compute the extra bounces, they use tricks to make it look nice anyway. The API is probably about path tracing.) How do you think they produce their movies? You believe they have people drawing those? There is cutting-edge path tracing tech behind them, making it work as fast as possible with the best fidelity possible compared to the reference solution.

Before talking to me like I'm an idiot, please have your ground covered.

 

CUDA is a great example of Nvidia holding back the whole industry. They still refuse to make any effort toward unification with OpenCL, which does pretty much the same thing with roughly the same runtime, in a way that wouldn't be painful for developers of machine learning applications. They're currently forcing them to recode everything in OpenCL, instead of finally helping the people behind OpenCL build a CUDA-to-OpenCL conversion so people don't have to redo everything. Why? Because they know AMD is good at compute, and that it could potentially run down part of their ecosystem. We end up with them locking out AMD and milking everyone else without having to lift a finger performance-wise. Had they done it, AMD would be competing a lot more in AI, and I am pretty sure they themselves would have been working on making their products better than they are right now.

Real-time solutions vs offline raytracing, man: HUGE difference.  I work on movie special effects and I work on games; the pipelines are VASTLY different.

 

CUDA vs OpenCL, really?  Do you know why CUDA is better right now?  The feature list of CUDA is much greater than OpenCL's.  It was created before OpenCL; nV took that market and made that market with their money.  Just because something is "open" doesn't mean it will be better; time and resources need to be put in to make it better.  It's like Linux vs Windows.  MS was able to corner the desktop market because it had the software stack people wanted.  Who made that happen?  MS did: they made MS Office, they made DirectX.  Linux got there after a certain amount of time as well, but not because they made it; others made it for them.  Time to delivery is slower with open-source platforms, unless the companies involved really put in the same amount of effort and resources as they would into a proprietary one.

 

You can't blame a company for looking out for its best interest.  Any company that doesn't will ultimately fail.

 

You kinda said it yourself: "AMD would be competing a lot more in AI."  Well, if AMD had put more effort and resources into OpenCL and HSA, I'm sure they would have been able to stop nV.  But when did nV start this?  With the G80.  That is over 10 years ago.  Expecting AMD to push resources into OpenCL over just the past three generations (since GCN) and compete with nV, who has had hardware capable of doing these things better for over six generations, is kinda unrealistic.


8 minutes ago, Razor01 said:

You can't blame a company for looking out for its best interest.  Any company that doesn't will ultimately fail.

Well, in theory they are largely the same...

In practice, game renderers use raytracers and movies use path tracers, and these are exactly the path tracers they want to make an API for.

In this case, open would be better for the sole reason that it is more in line with what researchers potentially want to have.

You can't blame a consumer for looking out for his best interest. I couldn't care less about them failing; I personally want better tech sooner.

 

