
AMD announces open source ray tracing at GDC *Interview update*

3 minutes ago, leadeater said:

Both AMD's and Nvidia's only work on their pro series of cards, so the only thing you can say is that Nvidia's tech demo is more impressive; though if you enlist ILM and throw a bunch of top-end hardware at it that no one is going to have, then of course your tech demo is going to be amazing.

 

It's about as useful as Square Enix's Witch Chapter 0 [cry] tech demo, which ran on equally insane hardware for the time and still isn't even close to possible on even two GPUs now. I would like to see that tech demo redone with either AMD's or Nvidia's ray tracing tech, just to see how much better it is.

nV stated RTX runs well on any Volta or future architecture lol. Big difference there. Pretty big hint that their gaming cards are going to run this just fine.


18 minutes ago, AresKrieger said:

Eh, it seems to work fine; not liking its closed-source nature is all well and fine, but realistically Nvidia often offers the higher-quality option on the market. Either way, we've been down this path before, and Nvidia will likely win out due to a competent marketing strategy. The only issue I have with Nvidia on this venture is their partnership with Microsoft in an attempt to make DX12 (and by extension Windows 10) relevant.

It works fine on 4 Titan Vs... guess how it'll work on current cards...

AMD's approach should be picked purely because it is meant to run on current and future hardware from everyone, not just future Nvidia cards. In the end, Nvidia's closed-source approach is poised to divide the community even more than GameWorks does, and that's an issue for us.


2 minutes ago, Razor01 said:

nV stated RTX runs well on any Volta or future architecture lol. Big difference there. Pretty big hint that their gaming cards are going to run this just fine.

Run that tech demo on a single Titan V then lol. You can say anything, but it doesn't mean you can run exactly what's been shown off.


Rays are really important, like the most important thing right now. For some reason, I guess :dry:



8 minutes ago, laminutederire said:

AMD's approach should be picked purely because it is meant to run on current and future hardware from everyone, not just future Nvidia cards. In the end, Nvidia's closed-source approach is poised to divide the community even more than GameWorks does, and that's an issue for us.

Though I agree with the idea, in practice AMD's software is often terrible until someone else uses it as a foundation (Mantle --> Vulkan, for example), which is often its downfall. Additionally, not having the solution integrated into an API makes it very difficult to code for, costing time in an industry that often cuts corners to make up time. That is why I see AMD's solution as, at minimum, slow to start, and more likely to never get off the ground.

 

Regardless, I honestly don't see why ray tracing is so important at the moment; it will kill performance regardless of implementation, and optimization of games was already poor enough.
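For a rough sense of the performance cost being described here, a back-of-envelope sketch follows; every figure in it is an illustrative assumption, not a measurement from any demo discussed in the thread.

    #include <cstdio>

    // Back-of-envelope ray budget for real-time ray tracing at 1080p/60.
    // Every figure here is an illustrative assumption, not a measurement.
    int main() {
        const double pixels  = 1920.0 * 1080.0; // ~2.07M pixels
        const double fps     = 60.0;            // target frame rate
        const double spp     = 1.0;             // primary rays (samples) per pixel
        const double bounces = 2.0;             // secondary rays per primary ray

        const double primary = pixels * fps * spp;        // ~124M rays/s
        const double total   = primary * (1.0 + bounces); // ~373M rays/s

        std::printf("primary rays/s: %.0f\n", primary);
        std::printf("total rays/s:   %.0f\n", total);
        // Hundreds of millions of rays per second buys just 1 sample per
        // pixel; film-quality frames use hundreds of samples per pixel,
        // which is why real-time demos lean on denoising and hybrid
        // raster/ray pipelines.
    }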


 


1 hour ago, AresKrieger said:

The only issue I have with Nvidia on this venture is their partnership with Microsoft in an attempt to make DX12 (and by extension Windows 10) relevant.

Is that an issue, or is it the natural course of all business? Those with the resources to push their products to be the best sell the most and consequently become the mainstay product. "Relevant" is a largely misused term in this context, I think; no one would say FreeSync was just AMD trying to make their cards relevant, they were already relevant in their own right.

 

Nvidia follows a tried-and-true business practice; there is nothing unusual or wrong about it. AMD, to their credit, are stepping outside the usual box, and while it hasn't been a road paved with gold for them, they aren't going backwards either.



12 minutes ago, leadeater said:

Run that tech demo on a single Titan V then lol. You can say anything, but it doesn't mean you can run exactly what's been shown off.

 

 

Keep this in mind: both companies do this when they can, when the other has a distinct advantage, and it always goes in favor of nV eventually too lol. Let's start with tessellation: AMD had it first; they were the pioneers years before it was introduced at an API level. nV promoted the hell out of it with DX10; their GS performance was better than AMD's at the time, but alas, overall geometry shaders were just slow anyways. AMD of course started downplaying tessellation with DX11, because nV had a huge advantage.

 

Let's talk physics. ATi was in the lead with GPU physics implementation with the X1800 and X1900! Their dynamic branching performance helped them outdo nV, and they were also the pioneers of GPU physics; they started talking about it and showing demos of it prior to nV. nV said they could do it too and put out some really bad demos on the 7x00 series compared to what ATi was doing. Of course we know how this turned out after the G80 and the buyout of PhysX... AMD started downplaying GPU-based physics because they were still on VLIW, which wasn't as conducive to physics as a SIMD architecture.

 

RTX runs really well on Volta. I don't know about next-gen hardware, but if Volta is having an easy time, I'm expecting those gaming cards to be able to run it well too.

 

I'm not sure why nV eventually ends up on top; well, I kind of am: they plan out their GPU architectures a bit better than ATi and AMD do when it comes to software development.

We have seen this many times in the GPU wars; nV just seems to have a better understanding of when devs are ready to transition to new features. It takes a certain amount of time for those features to be programmed for, and it's all timing.


27 minutes ago, AresKrieger said:

Though I agree with the idea, in practice AMD's software is often terrible until someone else uses it as a foundation (Mantle --> Vulkan, for example), which is often its downfall. Additionally, not having the solution integrated into an API makes it very difficult to code for, costing time in an industry that often cuts corners to make up time. That is why I see AMD's solution as, at minimum, slow to start, and more likely to never get off the ground.

 

Regardless, I honestly don't see why ray tracing is so important at the moment; it will kill performance regardless of implementation, and optimization of games was already poor enough.

Good luck modifying their closed-source ecosystem to make it run better, though. Personally I'm just sick of bad software. For crying out loud, it's possible to have Doom run at furiously high frame rates with high image quality. Honestly, with GameWorks and "easy" APIs we just end up with badly optimized games that could probably run at twice the frame rate if they were coded correctly.


3 minutes ago, laminutederire said:

Good luck modifying their closed-source ecosystem to make it run better, though. Personally I'm just sick of bad software. For crying out loud, it's possible to have Doom run at furiously high frame rates with high image quality. Honestly, with GameWorks and "easy" APIs we just end up with badly optimized games that could probably run at twice the frame rate if they were coded correctly.

Reducing the cost of making a game.

Video games are very costly, we all know this.

That is why GameWorks will be around for a long time.


According to Golem.de, AMD claims that we will see Radeon ray tracing in games in "the next few months" as an ultra setting. Maybe we will see it in Shadow of the Tomb Raider, which will be released on the 14th of September?



4 minutes ago, pas008 said:

Reducing the cost of making a game.

Video games are very costly, we all know this.

That is why GameWorks will be around for a long time.

Especially when publishers want developers to churn out higher and higher quality in the same time or less. You need to start using "cheats".


4 minutes ago, pas008 said:

Reducing the cost of making a game.

Video games are very costly, we all know this.

That is why GameWorks will be around for a long time.

That's their problem, I guess. They should work on what matters. It remains that mediocre software is everywhere when you think about it.


7 minutes ago, laminutederire said:

That's their problem, I guess. They should work on what matters. It remains that mediocre software is everywhere when you think about it.

Have you worked in a professional software development position for a complete release cycle before?


2 minutes ago, M.Yurizaki said:

Have you worked in a professional software development position for a complete release cycle before?

I do not deny that they have some constraints. It remains that there is a huge gap between the state of the art and the average level of the market. That means a lot of code could be a lot better.


2 minutes ago, laminutederire said:

I do not deny that they have some constraints. It remains that there is a huge gap between the state of the art and the average level of the market. That means a lot of code could be a lot better.

 

What created the mediocre software, though, the engine or the game code?

 

Professionally made engines tend to be pretty good; on the other hand, studios using another company's engine without being fully versed in it are another matter.


40 minutes ago, Razor01 said:

Keep this in mind: both companies do this when they can, when the other has a distinct advantage, and it always goes in favor of nV eventually too lol. Let's start with tessellation: AMD had it first; they were the pioneers years before it was introduced at an API level. nV promoted the hell out of it with DX10; their GS performance was better than AMD's at the time, but alas, overall geometry shaders were just slow anyways. AMD of course started downplaying tessellation with DX11, because nV had a huge advantage.

 

Let's talk physics. ATi was in the lead with GPU physics implementation with the X1800 and X1900! Their dynamic branching performance helped them outdo nV, and they were also the pioneers of GPU physics; they started talking about it and showing demos of it prior to nV. nV said they could do it too and put out some really bad demos on the 7x00 series compared to what ATi was doing. Of course we know how this turned out after the G80 and the buyout of PhysX... AMD started downplaying GPU-based physics because they were still on VLIW, which wasn't as conducive to physics as a SIMD architecture.

 

RTX runs really well on Volta. I don't know about next-gen hardware, but if Volta is having an easy time, I'm expecting those gaming cards to be able to run it well too.

 

I'm not sure why nV eventually ends up on top; well, I kind of am: they plan out their GPU architectures a bit better than ATi and AMD do when it comes to software development.

We have seen this many times in the GPU wars; nV just seems to have a better understanding of when devs are ready to transition to new features. It takes a certain amount of time for those features to be programmed for, and it's all timing.

Um, nothing you said had anything at all to do with what I said.

 

RTX runs well, but that's not a performance statement about what it can actually do in an actual deployment of that technology.

 

That tech demo did not run on a single card.

 

Quote

The above video, called Reflections, was created using a $122,000 (£86,000) DGX Station mini server box, packed with four discrete Tesla V100 graphics cards, to render the entire demo in real time. It's running at 1080p and at a cinematic 24fps, and is barely distinguishable from a live action scene.

https://www.pcgamesn.com/epic-star-wars-unreal-raytracing

 

Nvidia paid big money for ILM to create an impressive tech demo that requires the most high-end system to run, at frame rates this community would apparently find unacceptable. So what you're saying is that RTX is a better technology because they paid more for a better tech demo.

 

Also, as you well know, tech demos do not represent how mature a technology is or how ready it is to be used in games as shown in that demo.

 

As I mentioned, single cards right now cannot run the Witch Chapter 0 [cry] DX12 tech demo from 2015, 3 years ago. A 1080 Ti might be able to do it at 10-12 FPS, but that's just a wild guess; I didn't actually check 4x Titan X vs 1x 1080 Ti performance-wise. A Titan V might be able to do it at 16-17 FPS; add on RTX and you might get maybe 1 FPS, or 4 with 4 Titan Vs ;). All at 1080p of course; bump up to 4K and you might just about break 1 FPS.

Quote

WITCH features 63 million polygons per scene, "six to 12 times more" than what was possible with DirectX 11, Microsoft says. Check out the real-time demo below and note that while the animations certainly are pretty, there isn't much going on in these scenes in terms of AI or NPC population.

https://www.engadget.com/2015/05/01/square-enix-tech-demo-cry/

 

I would like to see that 2015 DX12 demo run again with RTX or DXR, but I highly doubt 4 Titan Vs could pull it off.


25 minutes ago, M.Yurizaki said:

Especially when publishers want developers to churn out higher and higher quality in the same time or less. You need to start using "cheats".

You call them cheats,

but I'll call them tools, just like any other software.

Yes, just like a lot of software, you must pay for professional use.

FYI, everyone is caught up with graphics, it's funny; I'm just sitting here waiting for a good game to come out again.


1 minute ago, pas008 said:

You call them cheats,

but I'll call them tools, just like any other software.

Yes, just like a lot of software, you must pay for professional use.

FYI, everyone is caught up with graphics, it's funny; I'm just sitting here waiting for a good game to come out again.

There was a reason I put the word in quotes. :P


2 hours ago, AlwaysFSX said:

Hardware is a lot easier to adopt than software.

Also, Nvidia's ray tracing stuff piggybacks off of DX12. That is probably what's going to succeed.

Agreed. With DX12 implementation rising... and people generally wanting to use DirectX compared to other things... (mind went blank, can't think of what they're called)

Everything will probably use the Nvidia implementation. And I'm sure AMD will be able to use it at /some/ point. Open source is always nice to have, I guess, if it works with everything.

Anything that works with everything is nice to have.

"If a Lobster is a fish because it moves by jumping, then a kangaroo is a bird" - Admiral Paulo de Castro Moreira da Silva

"There is nothing more difficult than fixing something that isn't all the way broken yet." - Author Unknown

Spoiler

Intel Core i7-3960X @ 4.6 GHz - Asus P9X79WS/IPMI - 12GB DDR3-1600 quad-channel - EVGA GTX 1080ti SC - Fractal Design Define R5 - 500GB Crucial MX200 - NH-D15 - Logitech G710+ - Mionix Naos 7000 - Sennheiser PC350 w/Topping VX-1


11 minutes ago, leadeater said:

Um, nothing you said had anything at all to do with what I said.

 

RTX runs well, but that's not a performance statement about what it can actually do in an actual deployment of that technology.

 

That tech demo did not run on a single card.

 

https://www.pcgamesn.com/epic-star-wars-unreal-raytracing

 

Nvidia paid big money for ILM to create an impressive tech demo that requires the most high-end system to run, at frame rates this community would apparently find unacceptable. So what you're saying is that RTX is a better technology because they paid more for a better tech demo.

 

Also, as you well know, tech demos do not represent how mature a technology is or how ready it is to be used in games as shown in that demo.

 

As I mentioned, single cards right now cannot run the Witch Chapter 0 [cry] DX12 tech demo from 2015, 3 years ago. A 1080 Ti might be able to do it at 10-12 FPS, but that's just a wild guess; I didn't actually check 4x Titan X vs 1x 1080 Ti performance-wise. A Titan V might be able to do it at 16-17 FPS; add on RTX and you might get maybe 1 FPS, or 4 with 4 Titan Vs ;). All at 1080p of course; bump up to 4K and you might just about break 1 FPS.

https://www.engadget.com/2015/05/01/square-enix-tech-demo-cry/

 

I would like to see that 2015 DX12 demo run again with RTX or DXR, but I highly doubt 4 Titan Vs could pull it off.

 

 

Today's games don't look anywhere near those demos, man; we don't use that many polys in today's games. As for scene polys, we don't go more than 5 to 10 million, and half of that is unseen polys.

 

We know what this is, right? Consoles. Consoles are the limiting factor for PC gaming; asset development is limited by them. Companies don't want to make multiple versions of their assets for different platforms. It costs close to $15k to $20k per character asset, and that is without rigging and animations. If we need to do two sets, one for PC and one for consoles, well, that just doubled; actually more, because the more complex the model, the more time and cost. We are looking at a pretty big increase in the cost of development.

 

This is why, in the other thread, I mentioned I'm waiting on AMD to catch up on their geometry throughput: I seriously don't want to make different assets; it's just a waste of time for me to do. I want a certain look and feel to the game, and I'm not going to compromise on that.

 

PS: polygon counts have a direct relationship to compute needs with ray tracing.
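To illustrate that PS, the sketch below compares naive per-ray intersection work against a BVH-style acceleration structure. The scene sizes are the rough polygon counts mentioned in this thread, and the model is deliberately crude, an assumption-laden illustration rather than a real cost model:

    #include <cmath>
    #include <cstdio>

    // Crude illustration of how per-ray intersection work scales with
    // polygon count: a naive tracer tests every triangle, while a BVH
    // traversal costs roughly log2(triangles) node visits per ray.
    // Scene sizes are rough figures from this thread, not measurements.
    int main() {
        const double scenes[] = {5e6, 10e6, 63e6};
        for (double tris : scenes) {
            const double naive = tris;             // one test per triangle per ray
            const double bvh   = std::log2(tris);  // ~tree depth per ray
            std::printf("%4.0fM triangles: naive %.0f tests/ray, BVH ~%.0f steps/ray\n",
                        tris / 1e6, naive, bvh);
        }
        // With an acceleration structure, raw polygon count costs far less
        // per ray, but BVH build/refit time and memory still grow with the
        // scene, so geometry is never free.
    }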


5 minutes ago, M.Yurizaki said:

There was a reason I put the word in quotes. :P

I figured, but I had to state what I stated.

Others might get mad.


Not sure if this will become mainstream. AMD just doesn't have the gaming market share, and won't while their GPUs remain excellent for compute (fucking miners)... but that would also increase the performance cost of their open-source ray tracing.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


10 minutes ago, bcredeur97 said:

Agreed. With DX12 implementation rising... and people generally wanting to use DirectX compared to other things... (mind went blank, can't think of what they're called)

Everything will probably use the Nvidia implementation. And I'm sure AMD will be able to use it at /some/ point. Open source is always nice to have, I guess, if it works with everything.

Anything that works with everything is nice to have.

APIs?

Given Nvidia's tech is just something on top of DX12, I'm sure everyone will be able to benefit so long as Microsoft implements ray tracing well. I doubt AMD will adopt what Nvidia made; they're against adopting Nvidia's technology... plus they've already announced their own ray tracing stuff.

 

We'll see. I'm sure DX12 adoption will increase a lot by 2022, so I'm not expecting much until then. But given that most of the major engine developers are working on this, games should be VERY pretty soon.
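Since DXR is exposed through DX12 itself, here is a minimal sketch, assuming a DXR-era Windows 10 SDK, of how an engine could ask whether a device supports DirectX Raytracing at the driver level:

    #include <d3d12.h>

    // Minimal sketch, assuming a DXR-capable Windows 10 SDK: ask an
    // existing D3D12 device whether the driver exposes DirectX Raytracing.
    bool SupportsDxr(ID3D12Device* device) {
        D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
        if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                               &opts5, sizeof(opts5)))) {
            return false; // older runtime that doesn't know this options struct
        }
        return opts5.RaytracingTier != D3D12_RAYTRACING_TIER_NOT_SUPPORTED;
    }

Microsoft also announced a compute-shader fallback layer alongside DXR, which is how hardware without driver support was expected to run this content at first.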



27 minutes ago, Razor01 said:

PS: polygon counts have a direct relationship to compute needs with ray tracing.

I know, that was my point ;).

 

Tech demos are just that. Three years is shorter than the time it takes to see shown-off technology fully implemented in games; I'd say 5 is more realistic, and even then it depends on hardware improving fast enough that 2 generations bring more than 4 times the performance. One Titan V doesn't equal 4 Titan X Maxwells, so it's already failed to meet that growth target; it's only about 2x.

 

We won't see that Star Wars tech demo's level of detail in games in 3 years; won't happen.
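To put the growth argument above into numbers, here is a worked sketch that simply feeds in the thread's own rough ratios; these are assumptions and forum guesses, not benchmarks:

    #include <cmath>
    #include <cstdio>

    // Worked version of the scaling argument above, using the thread's own
    // rough ratios as assumptions (not benchmarks).
    int main() {
        const double rig_gpus      = 4.0; // cards in the Reflections demo rig
        const double gain_per_gen  = 2.0; // poster's estimate: Titan V ~ 2x Titan X Maxwell
        const double years_per_gen = 2.0; // rough flagship cadence

        // Generations until a single flagship matches the whole rig:
        const double gens = std::log(rig_gpus) / std::log(gain_per_gen); // = 2
        std::printf("single-card parity in ~%.0f generations (~%.0f years)\n",
                    gens, gens * years_per_gen);
        // On these assumptions, parity is roughly 4 years out, which fits
        // the ~5-year estimate above far better than demo-level detail
        // arriving in games within 3 years.
    }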


30 minutes ago, Razor01 said:

 

What created the mediocre software, though, the engine or the game code?

 

Professionally made engines tend to be pretty good; on the other hand, studios using another company's engine without being fully versed in it are another matter.

That does not change the end result: we should have better-running games overall. Reasonably high refresh rates at 4K on a 1080 Ti should be the minimum, 1080p should be easy for mid-range cards, and 1440p should be easy for old high-end or mid-to-high-end current-gen cards. Just like Doom.

