
Crytek reveals real-time ray tracing that works with both Nvidia and AMD hardware.


On 3/17/2019 at 11:30 AM, Mira Yurizaki said:

It's just the easiest way to show it off. 

 

I'd argue Metro Exodus is one of the better ways to use it. But it's like 144 Hz: until you actually see it and compare it against 60 Hz, you'll never know what you're missing. And once you find out what you're missing, you get annoyed by not having it.

That was HDR for me. Games look significantly worse with it off than on. Granted, I have one of the few monitors capable of proper HDR, so my experience will differ from others'.

1 hour ago, Bartholomew said:

Writing off RTX because it is proprietary is invalid because

It's irrelevant.

Name one proprietary thing that was widely used in the PC space.

There is none.

 

Also, remember ESA? Nvidia ESA? No? That might be because it was:

a) proprietary and NOT open

b) only used in high-end hardware

So nobody had it, and there weren't many Nvidia ESA-compatible PSUs anyway, because hardly anyone could use ESA: it was proprietary, meaning Nvidia chipsets only.

 

So no, it is already dead, and the other side might not even need it, thanks to good async compute capabilities.

 


"Hell is full of good meanings, but Heaven is full of good works"

42 minutes ago, Mira Yurizaki said:

Windows.

 

Which seems to be on life support right now. (I'm not saying there aren't proprietary things that succeeded; some did. But I do prefer open standards.)

4 minutes ago, cj09beira said:

Which seems to be on life support right now. (I'm not saying there aren't proprietary things that succeeded; some did. But I do prefer open standards.)

x86 then.

 

Not good enough? ARM.

 

EDIT: If you think I'm being facetious, take a hard, deep look at everything you use. And if we really want to be pedantic about proprietary vs. open, a standard can be open while its implementation is proprietary at the same time.

52 minutes ago, Stefan Payne said:

It's irrelevant. Name one proprietary thing that was widely used in the PC space. There is none. […]

 

I think you are being a bit closed-minded. Nvidia has already shown that the current ray tracing technology can run on non-RTX cards; it's just not as efficient. So it isn't locked to RTX cards.

1 hour ago, Stefan Payne said:

It's irrelevant. Name one proprietary thing that was widely used in the PC space. There is none. […]

 

It is relevant; you just don't seem to differentiate between a generic software interface and a concrete (proprietary or not) implementation. Your question, "name one", shows this, as it's easy to name ten open interfaces that all have proprietary implementations...

 

Easy peasy:

 

ATX: a standard size/external interface (think DirectX) implemented in custom, proprietary ways (think RTX).

 

DDR2/DDR3/DDR4: the same slot/interface (think DirectX) implemented in custom, proprietary ways (think RTX).

 

SSDs: some use QLC to be big and cheap, others use something different; all proprietary implementations. The same interface, however, with different speeds and different implementations.

 

x86: AMD and Intel both have proprietary implementations.

 

Etc., etc.

 

DXR is an interface, part of DirectX 12. RTX is just an implementation/acceleration of it. People think they are the same "because Nvidia", but Nvidia just helped define the required interfaces for a common problem (interfaces that anyone could implement).

 

Forget about ray tracing for a second; take a regular "draw a triangle" call through a DirectX interface. Behind that interface, DirectX has the driver (driving a *proprietary* hardware implementation, Nvidia or AMD) draw it. DXR is no different, except that Nvidia is currently the only one providing an implementation.

 

Granted, over time RTX will indeed die, simply because general compute (CUDA or otherwise) will attain enough speed and flexibility for the task (the same likely goes for tensor cores).

 

Just not for the reasons stated earlier (especially since one of them was lower-end cards not supporting it, which from April they will). That simply confirms it is just an acceleration of DXR, which can have multiple implementations: software (the fallback layer, which already exists for developers), common hardware (as on older GTX cards), or dedicated hardware (RTX, or whatever anyone else comes up with).

 

And unless AMD wants to only "partly" support DX12, most likely sooner rather than later they will provide an implementation as well, especially now that big engines will make use of DX12 including DXR.
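
For what it's worth, the interface/implementation split shows up directly in the API: an application only asks D3D12 whether *a* DXR implementation is present; it never has to know whose hardware (or software fallback) answers. A minimal sketch of that capability check (standard D3D12, not tied to any vendor):

```cpp
// Minimal sketch: ask D3D12 whether *any* DXR implementation is present,
// without caring whether Nvidia, AMD, or a software fallback provides it.
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

bool DeviceSupportsDXR(ID3D12Device* device)
{
    // OPTIONS5 carries the raytracing tier reported by the driver.
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &opts5, sizeof(opts5))))
        return false;
    // TIER_NOT_SUPPORTED means no implementation sits behind the interface.
    return opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}

int main()
{
    ComPtr<ID3D12Device> device;
    // Default adapter for brevity; a real app enumerates adapters via DXGI.
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0,
                                 IID_PPV_ARGS(&device))))
        return 1;
    std::printf("DXR supported: %s\n",
                DeviceSupportsDXR(device.Get()) ? "yes" : "no");
    return 0;
}
```

The point being: nothing in that code mentions RTX. Any driver that reports a raytracing tier satisfies the same interface.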

On 3/17/2019 at 11:05 AM, an actual squirrel said:

My understanding is that the Crytek method uses octrees instead of BVHs, so it can't be accelerated by ray tracing cores. But the Crytek method should be extremely slow, so I don't think this is the RTX killer that some are touting it to be.

 

Hey, I called it! It was revealed that the demo runs at 30 fps at 1080p on a Vega 56.

https://www.cryengine.com/news/how-we-made-neon-noir-ray-traced-reflections-in-cryengine-and-more

 

On 3/21/2019 at 5:44 PM, cj09beira said:
On 3/21/2019 at 5:00 PM, Mira Yurizaki said:

Windows.

 

Which seems to be on life support right now.

You w0t m8?

 

Linux usage isn't quite as pervasive as online echo chambers would have one believe, nor is mobile vs. desktop/laptop (i.e., Windows) usage as cannibalistic or mutually exclusive as some media outlets like to hype it up to be...

On 3/21/2019 at 3:00 PM, Mira Yurizaki said:

Windows.

 

OS X.

Rather, everything that makes OS X OS X and not Darwin.



On 3/18/2019 at 3:07 PM, TechyBen said:

Yeah, I think RTX is mainly about having a "do it all" button for developers. No coding required; the raycaster does it all, apart from manually tagging textures/materials as "raytrace this" to note whether something is water or not water, metal or rock, etc. Whereas the current "faking", which looks 99% the same, needs a lot of special tweaks, noting where/when/how to do each effect. See the GamersNexus example: they got 99% of the effects running in Unreal (IIRC) on current tech, but it needed more setting up than just saying "raytrace all the things".

Kind of reminds me of what is being done for the end users at my job.

We have a new generator for the end users to build up sims. The old one would take them over a month to build something; after a demo of the new one, they already see it cutting their sim build time to under a week.

 

The big reason for the time savings: the software handles a whole lot more of the calculations and setup, so users don't have to waste time setting everything up to get nearly the same results as with the previous generator.

 

I'm starting to see this new tech of ray tracing, path tracing, etc. as a way for devs to shorten the time spent on lighting and shadow effects, which will speed up development or let them focus more on other areas.


8 hours ago, thorhammerz said:

You w0t m8?

Linux usage isn't quite as pervasive as online echo chambers would have one believe [...]

I was referring to people not wanting to use it, not to another OS taking over. As long as they can play their games and surf the web, most people don't really care which OS they're using. For a long time, gaming has meant being stuck on Windows; with Wine and Proton things are changing, but I don't really expect many people to make the jump.

 


So, the trick behind Crytek's performance is half-resolution reflections. Honestly, with the exception of mirrors, no surface ever reflects an identical image (which is why everyone mocked BF V's almost mirror-like reflections on cars, which were just absurd). That means using half resolution carries a lower performance penalty and actually looks more realistic in the end, because reflections aren't a 1:1 copy of the world in the object; they're slightly blurrier or less detailed, the way actual reflections are. If you walk up to a mirror, you can still go full precision and keep good performance, because mirrors aren't all over the game. Makes perfect sense, really.
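
Worth noting the arithmetic: half resolution per axis means a quarter of the rays, so at 1080p you trace roughly 518k reflection rays instead of about 2.07M, and the bilinear upsample back to full resolution doubles as the slight softening real surfaces show. A toy sketch of such an upsample step (names are illustrative, not CRYENGINE code):

```cpp
// Toy sketch of a half-resolution reflection resolve (illustrative only):
// trace reflections into a hw x hh buffer, then bilinearly upsample it
// while compositing at full resolution (2*hw x 2*hh).
#include <vector>
#include <algorithm>

struct Color { float r, g, b; };

// Bilinearly sample the half-resolution buffer at full-resolution pixel (x, y).
Color SampleHalfRes(const std::vector<Color>& half, int hw, int hh,
                    int x, int y)
{
    float u = 0.5f * x, v = 0.5f * y;            // map into half-res space
    int x0 = std::min((int)u, hw - 1), x1 = std::min(x0 + 1, hw - 1);
    int y0 = std::min((int)v, hh - 1), y1 = std::min(y0 + 1, hh - 1);
    float fx = u - x0, fy = v - y0;              // bilinear weights
    auto at  = [&](int px, int py) { return half[py * hw + px]; };
    auto mix = [](Color a, Color b, float t) {
        return Color{a.r + (b.r - a.r) * t, a.g + (b.g - a.g) * t,
                     a.b + (b.b - a.b) * t};
    };
    return mix(mix(at(x0, y0), at(x1, y0), fx),
               mix(at(x0, y1), at(x1, y1), fx), fy);
}
```

A real implementation would do this in a shader and add depth-aware weighting to avoid bleeding across edges, but the cost model is the same: a quarter of the rays, plus a cheap filter.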

18 minutes ago, an actual squirrel said:

I mean, to recap, the main advantage this has over DXR is support on Vega. But on Vega it is so slow that it is basically unusable.

Well, nothing is stopping AMD from adding DXR support on Vega. It's honestly compute-capable enough to do it. I just don't think they have it at the software/driver level yet, the way NVIDIA does even on non-RTX cards.


Crytek finally spilled the beans at https://www.cryengine.com/news/how-we-made-neon-noir-ray-traced-reflections-in-cryengine-and-more.

 

Aside from using lower-resolution ray tracing (which in hindsight makes sense; a lot of shading is done at lower resolution to save on performance), they also switch between what they call a "mesh ray tracer" and voxel cone tracing (a presentation about the latter can be found at http://on-demand.gputechconf.com/gtc/2012/presentations/SB134-Voxel-Cone-Tracing-Octree-Real-Time-Illumination.pdf). Also note that voxel cone tracing is a derivative of ray tracing; the "rays" just have thickness (see the sketch after the quote below).

 

Quote

Only smooth and clean surfaces like mirrors require true mesh ray tracing. Most low-gloss, less shiny surfaces can be traced much faster simply by tracing voxels, which will achieve the same or better visual output.

On one hand, this may pave the way for higher-quality lighting on existing hardware, without the need for hardware-accelerated RT. On the other hand, I still don't really like how "ray tracing" can be thrown around like this for marketing hype.
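
For the curious, the "rays with thickness" idea boils down to marching a cone through a prefiltered (mip-mapped) voxel volume, sampling coarser mips as the cone widens, so one lookup stands in for a whole bundle of rays. A rough sketch of that loop, based on the GTC presentation linked above rather than Crytek's actual code (names and structure are assumptions):

```cpp
// Rough sketch of voxel cone tracing (names and structure assumed; see the
// GTC 2012 presentation linked above). March along the cone axis, sampling
// coarser mips of a prefiltered voxel volume as the cone radius grows.
#include <cmath>
#include <algorithm>

struct Vec3 { float x, y, z; };
struct Rgba { float r, g, b, a; };

// Stub for illustration; a real engine samples a mip-mapped 3D texture here.
Rgba SampleVoxels(Vec3 /*pos*/, float /*mipLevel*/)
{
    return {0.1f, 0.1f, 0.1f, 0.05f};
}

Rgba TraceCone(Vec3 origin, Vec3 dir, float apertureRad,
               float voxelSize, float maxDist)
{
    Rgba acc = {0, 0, 0, 0};
    float t = voxelSize;                        // start one voxel out
    while (t < maxDist && acc.a < 0.99f) {
        // The cone's footprint at distance t picks the mip level: one sample
        // of a coarse mip approximates many individual rays.
        float radius = t * std::tan(apertureRad * 0.5f);
        float mip = std::max(0.0f, std::log2(2.0f * radius / voxelSize));
        Vec3 p = {origin.x + dir.x * t, origin.y + dir.y * t,
                  origin.z + dir.z * t};
        Rgba s = SampleVoxels(p, mip);
        float w = (1.0f - acc.a) * s.a;         // front-to-back compositing
        acc.r += w * s.r; acc.g += w * s.g; acc.b += w * s.b; acc.a += w;
        t += std::max(radius, voxelSize);       // step size grows with cone
    }
    return acc;
}
```

A narrow aperture approaches a plain ray (good for glossy reflections); a wide one averages large chunks of the scene cheaply, which is why it suits rough, low-gloss surfaces.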


Well, it either is ray tracing or it isn't. The rate at which they are doing it is another matter, but ultimately, does it really matter? When we have compute power to piss away on pixel-perfect ray tracing, sure. But we're not there yet, and such "saving" measures are perfectly acceptable. It's still ray tracing, so the expected result is still there, just not at mirror-like quality, which isn't desired anyway.

I've run all the ray tracing tech demos on my 1080 Ti, and while they ran surprisingly well given that it has no dedicated RT units, everything was just way TOO shiny, because they wanted to really show off what ray tracing does. But I don't want everything stupidly shiny; I want realistic. And realism is actually a byproduct of these savings, because reflections rarely look like an exact mirror image of the surrounding environment. Reflections are often softer or distorted because of surface materials or finishes, and I actually prefer this over perfect reflections. It's faster and looks more realistic in the end. And when a perfect reflection is required, it can still be done, and since it's usually on very small portions, it won't affect performance much anyway. So, a double win. I really hope more developers go this route, pushing more RT into games so that even people without any dedicated hardware can experience it.

19 minutes ago, RejZoR said:

Well, it either is ray tracing or it isn't.

Ray tracing appears to be defined simply as "shooting rays from the camera, where how each ray interacts with the environment determines the pixel's final color." The problem is that derivatives of this definition can have names that don't appear related. It's like being pedantic about which ambient occlusion method was used when someone simply says "ambient occlusion".
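
To make that definition concrete, here's a toy tracer (the textbook loop, not any engine's code): one ray per pixel from the camera, and whatever the ray hits decides the "pixel", here an ASCII character.

```cpp
// Toy illustration of the definition above: one ray per pixel from the
// camera; the nearest intersection determines the pixel's final "color".
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Ray-sphere intersection: returns distance along the ray, or -1 on a miss.
static float HitSphere(Vec3 ro, Vec3 rd, Vec3 c, float rad)
{
    Vec3 oc = sub(ro, c);
    float b = dot(oc, rd), cc = dot(oc, oc) - rad * rad;
    float disc = b * b - cc;
    return disc < 0 ? -1.0f : -b - std::sqrt(disc);
}

int main()
{
    const int W = 40, H = 20;
    Vec3 cam = {0, 0, -3}, sphere = {0, 0, 0};
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            // Build a ray through this pixel on a virtual image plane.
            Vec3 rd = {(x - W / 2) / (float)W, (H / 2 - y) / (float)H, 1};
            float len = std::sqrt(dot(rd, rd));
            rd = {rd.x / len, rd.y / len, rd.z / len};
            // "How the ray interacts with the environment" sets the color.
            std::putchar(HitSphere(cam, rd, sphere, 1.0f) > 0 ? '#' : '.');
        }
        std::putchar('\n');
    }
    return 0;
}
```

Every derivative (path tracing, cone tracing, ray marching) is a variation on that loop: what gets shot, how many, and what happens at the hit.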

 

Also, rendering the final image isn't an "all or nothing" approach. You can have a mixture of ray tracing and raster rendering for different parts of the pipeline.

 

Quote

The rate at which they are doing it is another matter, but ultimately, does it really matter? […]

Ray tracing isn't just about reflections, though; it encompasses the entire lighting phase of the rendering pipeline. This is especially the case with global illumination. And you get accurate results every time, whereas rasterized rendering produces lighting artifacts due to how most pipelines are set up. I want to believe the ideal future graphics pipeline is one where ray tracing covers everything a renderer needs to produce the final color of a pixel, without going through a multitude of steps to get there. This would greatly simplify the graphics engine (ray tracing seems to be a stupidly easy algorithm to implement). Also, since ray tracing can be accomplished with a few algorithms, you can create ASIC IP cores to accelerate the work. You can't do that for the lighting phase of rasterized rendering, because each algorithm takes such a different path to the final result.
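
To illustrate the "not just reflections" point: direct lighting in a ray tracer is the same machinery. Shading a point means firing one shadow ray at the light; occlusion gives exact hard shadows, something rasterization has to approximate with shadow maps. A toy sketch reusing the sphere test from the earlier example (the single-blocker scene is an assumption for brevity):

```cpp
// Toy sketch: ray-traced direct lighting. One shadow ray per shading point
// yields exact hard shadows, which rasterization approximates with shadow maps.
#include <cmath>
#include <algorithm>

struct Vec3 { float x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 norm(Vec3 v) {
    float l = std::sqrt(dot(v, v));
    return {v.x / l, v.y / l, v.z / l};
}

// Same ray-sphere test as in the earlier sketch: distance, or -1 on a miss.
static float HitSphere(Vec3 ro, Vec3 rd, Vec3 c, float rad) {
    Vec3 oc = sub(ro, c);
    float b = dot(oc, rd), cc = dot(oc, oc) - rad * rad;
    float disc = b * b - cc;
    return disc < 0 ? -1.0f : -b - std::sqrt(disc);
}

// Shade a point: fire one ray toward the light; if the (single, assumed)
// blocking sphere is hit before the light, the point is fully in shadow.
float Shade(Vec3 point, Vec3 normal, Vec3 lightPos, Vec3 blocker, float bRad) {
    Vec3 toLight = sub(lightPos, point);
    float lightDist = std::sqrt(dot(toLight, toLight));
    Vec3 ldir = norm(toLight);
    float t = HitSphere(point, ldir, blocker, bRad);
    if (t > 0.001f && t < lightDist) return 0.0f;  // occluded: hard shadow
    return std::max(0.0f, dot(normal, ldir));      // Lambertian diffuse term
}
```

Global illumination is then just more of the same rays, bounced recursively, which is why one acceleration structure can serve the whole lighting phase.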

 

However, I don't think that will really happen. Hybrid rendering approaches will be preferred, and NVIDIA likely recognizes this.

