RX 6800 Ray Tracing Performance Leaked

Solved by Random_Person1234:

UPDATE: Frank Azor (Chief Gaming Architect at AMD) promised that more details would be released about Big Navi ray tracing and super resolution before the launch of the cards.

https://videocardz.com/newz/amd-frank-azor-promises-more-details-on-radeon-rx-6000-ray-tracing-and-super-sampling-tech-before-launch

https://twitter.com/AzorFrank/status/1321808969283870721 

 

3 hours ago, BlackManINC said:

as games become better optimized for it. 

Or unoptimized like *cough* tessellation

The Ray Accelerator shares the same pipeline as the texture unit. If a game needs to process both texture ops and RT traversal intensively, RDNA2 RT performance will drop much more compared to RTX.

| Intel i7-3770@4.2Ghz | Asus Z77-V | Zotac 980 Ti Amp! Omega | DDR3 1800mhz 4GB x4 | 300GB Intel DC S3500 SSD | 512GB Plextor M5 Pro | 2x 1TB WD Blue HDD |
 | Enermax NAXN82+ 650W 80Plus Bronze | Fiio E07K | Grado SR80i | Cooler Master XB HAF EVO | Logitech G27 | Logitech G600 | CM Storm Quickfire TK | DualShock 4 |

8 hours ago, bcredeur97 said:

I wonder if AMD will have a GOOD equivalent of DLSS. I've heard DLSS on the new 30-series cards is really good, so AMD has their work cut out for them there; they probably can't afford to launch an equivalent of DLSS 1.0 (which IIRC wasn't very good).

I've heard it's good too, IF there isn't a lot of movement on the screen. I don't know how true that is.

Life is like a bowl of chocolates: there are all these little crinkly paper cups everywhere.

27 minutes ago, xAcid9 said:

Or unoptimized like *cough* tessellation

The Ray Accelerator shares the same pipeline as the texture unit. If a game needs to process both texture ops and RT traversal intensively, RDNA2 RT performance will drop much more compared to RTX.

That's not necessarily true. Just because a compute unit has a multipurpose role doesn't mean one function or the other has to suffer when a game needs both. To me, the RA unit acts as a supplemental thing, not an either/or. Meaning if a game doesn't request ray tracing, the RA units will simply sit there and do nothing (unless they can somehow be utilized for screen-space reflection and ambient occlusion acceleration too). When ray tracing is involved, they will compute the necessary stuff on top of whatever the rasterization units underneath have to do. Assuming it's that dedicated, and not a feature that has to share compute capability with the rasterization CUs.

 

Looking at the Ampere GPU diagram, they use a similar approach, but they have larger RT core clusters on each shader module as opposed to an RA on every shader unit. Clearly both have quite similar performance; we'll see which approach benefits under certain conditions and which is flawed under others when the cards get into reviewers' hands.
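For rough scale, the public spec-sheet counts behind this comparison (added here for context; in both designs one ray unit is paired with each shader block, so the RT unit count simply tracks the shader array):

```python
# Spec-sheet counts for the two approaches discussed above: AMD pairs one
# Ray Accelerator with each CU, Nvidia one RT core with each SM.
specs = {
    "RX 6800 XT (Navi 21)": {"shader_blocks": 72, "rt_units": 72},  # 72 CUs
    "RTX 3080 (GA102)":     {"shader_blocks": 68, "rt_units": 68},  # 68 SMs
}
for card, s in specs.items():
    print(f"{card}: {s['rt_units']} RT units across {s['shader_blocks']} shader blocks")
```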

AMD Ryzen 7 5800X | ASUS Strix X570-E | G.Skill 32GB 3600MHz CL16 | PALIT RTX 3080 10GB GamingPro | Samsung 850 Pro 2TB | Seagate Barracuda 8TB | Sound Blaster AE-9 MUSES


It's great news that AMD has GPUs that can compete with/beat Nvidia.

 

I still went for a 3070 FE, but if I couldn't have gotten one, I would have waited for the AMD cards and a possible Nvidia refresh.

 

I think AMD will have similar issues with their GPUs at launch, just like Nvidia. Look at the 3300X: I haven't seen it in the wild since launch.

 

Overall this is a good thing for us consumers, as the next couple of years are going to bring some great cards.

 

Oh, and I hope AMD's drivers are better. (I know not everyone has issues, but two of my friends on an AMD 5700 and 5700 XT have had random issues.)

Folding Stats

 

SYSTEM SPEC

AMD Ryzen 5 5600X | Motherboard Asus Strix B550i | RAM 32gb 3200 Crucial Ballistix | GPU Nvidia RTX 3070 Founder Edition | Cooling NZXT AIO | Case NZXT H1 | Storage Sabrent Rocket 2tb, Samsung SM951 1tb

PSU NZXT S650 SFX Gold | Display Acer Predator XB271HU | Keyboard Corsair K70 Lux | Mouse Corsair M65 Pro  

Sound Logitech Z560 THX | Operating System Windows 10 Pro


Ray tracing is that thing I'll never enable, because I'd rather have more FPS than shadows. It's also why I stick to 1080p gaming, and why I'll be getting a 6800 XT if reviews are good.

Beast GPU (hopefully) + still more than good enough res + RT off => silent gaming + high FPS count.

One day I will be able to play Monster Hunter Frontier in French/Italian/English on my PC, it's just a matter of time... 4 5 6 7 8 9 years later: It's finally coming!!!

Phones: iPhone 4S/SE | LG V10 | Lumia 920

Laptops: Macbook Pro 15" (mid-2012) | Compaq Presario V6000

1 hour ago, RejZoR said:

That's not necessarily true. Just because a compute unit has a multipurpose role it doesn't mean one or the other function has to suffer just because it needs both. Which to me, RA unit acts as a supplemental thing and not as one or the other. Meaning if game doesn't request ray tracing, the RA units will simply be there and do nothing (unless they can utilize them for screen space reflections and ambient occlusion acceleration too somehow). When there is ray tracing involved, they will compute the necessary stuff on top of whatever rasterization units underneath it have to do. Assuming it's that dedicated and not a feature that has to share compute capabilities with rasterization CU's.

My comment was based on the Xbox Series X and AMD ray tracing patent slides. I don't know if it's different compared to the RX 6000 series, but most probably not.

The Ray Accelerators are inside the RDNA2 texture mapping unit (TMU), so they share the same data path/bandwidth with texture ops. Peak performance heavily depends on available bandwidth; if there isn't enough bandwidth to reach peak performance because the path gets choked by texture operations, then you'll get lower/inconsistent RT perf.

 

I guess that's why Infinity Cache takes up a large part of the RDNA2 die: to prevent/alleviate this bandwidth issue.

At least that's what I understood from looking at/reading these slides.

Spoiler: [three slide images, Xbox Series X / AMD ray tracing patent diagrams; not recoverable from this copy]
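For a sense of what that sharing means in practice, here's a toy back-of-envelope model of the effect described above. Every number in it is a made-up placeholder (AMD hasn't published per-path bandwidth figures), so it only illustrates the shape of the problem, not real RDNA2 behavior:

```python
# Toy model of RT traversal and texture ops competing for one shared TMU
# data path. All numbers are invented placeholders, NOT real RDNA2 specs;
# only the shape of the curve is the point.

def effective_rt_rate(path_bw_gbs, texture_demand_gbs, bytes_per_bvh_step):
    """BVH traversal rate (steps/s) from whatever bandwidth texture ops leave over."""
    leftover_gbs = max(path_bw_gbs - texture_demand_gbs, 0.0)
    return leftover_gbs * 1e9 / bytes_per_bvh_step

PATH_BW = 100.0      # GB/s through one shared data path (placeholder)
STEP_BYTES = 64      # bytes fetched per BVH node test (placeholder)

for tex in (0.0, 50.0, 90.0):  # rising texture traffic, GB/s
    rate = effective_rt_rate(PATH_BW, tex, STEP_BYTES)
    print(f"texture load {tex:5.1f} GB/s -> ~{rate / 1e9:.2f}G BVH steps/s")
```

A fully dedicated RT unit with its own data path would hold the zero-load rate regardless of texture traffic, which is exactly the distinction being argued over in this thread.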

 

 

1 hour ago, RejZoR said:

Looking at the Ampere GPU diagram, they use a similar approach, but they have larger RT core clusters on each shader module as opposed to an RA on every shader unit. Clearly both have quite similar performance; we'll see which approach benefits under certain conditions and which is flawed under others when the cards get into reviewers' hands.

They're not inside the texture unit like in RDNA2. 🤔

| Intel i7-3770@4.2Ghz | Asus Z77-V | Zotac 980 Ti Amp! Omega | DDR3 1800mhz 4GB x4 | 300GB Intel DC S3500 SSD | 512GB Plextor M5 Pro | 2x 1TB WD Blue HDD |
 | Enermax NAXN82+ 650W 80Plus Bronze | Fiio E07K | Grado SR80i | Cooler Master XB HAF EVO | Logitech G27 | Logitech G600 | CM Storm Quickfire TK | DualShock 4 |

1 hour ago, xAcid9 said:

They're not inside the texture unit like in RDNA2. 🤔

I'm not sure how different they actually are; the current AMD diagrams aren't that clear. Both have ray units in their SMs/CUs respectively, but we really only know the details of Nvidia's RT cores at this point.

 

 

As to your other point:

Quote

The RT Cores in Turing can process all the BVH traversal and ray-triangle intersection testing, saving the SM from spending the thousands of instruction slots per ray, which could be an enormous amount of instructions for an entire scene. The RT Core includes two specialized units. The first unit does bounding box tests, and the second unit does ray-triangle intersection tests. The SM only has to launch a ray probe, and the RT core does the BVH traversal and ray-triangle tests, and returns a hit or no hit to the SM. The SM is largely freed up to do other graphics or compute work. See Figure 18 for an illustration of Turing ray tracing with RT Cores.

To be honest, it doesn't sound that different from what AMD is doing. As long as the bolded part is true for both, the same limitations should apply to both: both have return data paths, and those require bandwidth. I just don't actually know how much (based on the data type I assume it's not very much, but maybe each result is small like I'm thinking and there are just lots of them per second, totaling more than I'm thinking?? 🤷‍♂️)
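To make the quoted division of labor concrete, here's a minimal sketch of the loop the RT core absorbs; done in shader code instead, this is where the "thousands of instruction slots per ray" go. All names and types are illustrative stand-ins, not any vendor's actual interface:

```python
# Sketch of BVH traversal: box tests (the first specialized unit), then
# ray-triangle tests (the second), returning "hit or no hit" to the SM.
# Everything here is illustrative pseudo-structure, not real hardware ISA.
from dataclasses import dataclass, field

@dataclass
class Node:
    aabb_hit: object              # callable: ray -> bool (bounding-box test)
    tri_hit: object = None        # callable: ray -> float | None (triangle test)
    children: list = field(default_factory=list)

def trace(ray, root):
    """Walk the BVH and return the closest hit distance, or None."""
    closest, stack = None, [root]
    while stack:
        node = stack.pop()
        if not node.aabb_hit(ray):        # ray misses this box: prune subtree
            continue
        if node.tri_hit is not None:      # leaf: run the triangle test
            t = node.tri_hit(ray)
            if t is not None and (closest is None or t < closest):
                closest = t
        stack.extend(node.children)       # interior node: descend into children
    return closest                        # the SM only ever sees this result
```

Offloaded or not, that returned result still has to travel back over some data path, which is the bandwidth question raised above.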


Does the ray acceleration difference between AMD and Nvidia really matter? I'm not sure on that.

 

What are the most common use cases of ray tracing in games? AFAIK, global illumination and reflections.

 

With the first, you're giving up on traditional lighting for the most part, which means no light mapping, traditional shadow casting, etc.

 

With the second, you can cover most of the relevant cases with screen-space reflections, which are cheaper, and selectively ray trace out-of-screen things like other players or other important objects (e.g. remember the Battlefield V flamethrower demo? Something like that).
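A sketch of what that hybrid looks like as control flow. Every callable here is a hypothetical stand-in for engine-specific code, not a real API:

```python
# Hybrid reflections as described above: cheap SSR where the reflected point
# is on screen, a real ray only for a short list of off-screen things that
# matter (other players, the BFV flames, ...), environment probe as fallback.
# All callables are hypothetical engine stand-ins, not a real API.

def reflection_color(pixel, ssr_trace, rt_trace, env_sample, vip_objects):
    color = ssr_trace(pixel)          # march the depth buffer: cheap
    if color is not None:
        return color                  # reflected point was on screen, done
    for obj in vip_objects:           # selective, expensive path
        color = rt_trace(pixel, obj)  # cast a real ray at this object only
        if color is not None:
            return color
    return env_sample(pixel)          # fallback: environment probe
```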


We'll see in testing, but it looks fine. Still, RT is in its infancy really. The base game needs to look good first and foremost. Some games have RT and don't even look that good; RT by itself doesn't make them look better per se, compared to some awesome-looking non-RT games, for example. But yeah, in time more games will use it, with more RT features and heavier use. Eventually all games.

Also, HDR needs to evolve faster on PC too. That's mostly on monitors themselves, but definitely, having both of these as standard in new games will be amazing.

Ryzen 7 3800X | X570 Aorus Elite | G.Skill 16GB 3200MHz C16 | Radeon RX 5700 XT | Samsung 850 PRO 256GB | Mouse: Zowie S1 | OS: Windows 10

7 minutes ago, Doobeedoo said:

We'll see in testing, but it looks fine. Still, RT is in its infancy really. The base game needs to look good first and foremost. Some games have RT and don't even look that good; RT by itself doesn't make them look better per se, compared to some awesome-looking non-RT games, for example. But yeah, in time more games will use it, with more RT features and heavier use. Eventually all games.

Also, HDR needs to evolve faster on PC too. That's mostly on monitors themselves, but definitely, having both of these as standard in new games will be amazing.

The same could be said of pixel shaders when they were first introduced with the GeForce 3. They were basically only used for water initially, but these days pixel shading is an integral part of anything rendered on screen. Ray tracing, given that it's already in usable form now, will slowly become an integral part of every game over the upcoming years. It'll come to a point where every game is ray traced, just like all games have been pixel shaded for years.

AMD Ryzen 7 5800X | ASUS Strix X570-E | G.Skill 32GB 3600MHz CL16 | PALIT RTX 3080 10GB GamingPro | Samsung 850 Pro 2TB | Seagate Barracuda 8TB | Sound Blaster AE-9 MUSES

22 hours ago, Random_Person1234 said:

The AMD equivalent of DLSS is not out yet; according to rumors, it's being worked on. So they were using native res.

It's not a rumor; AMD talked about working on it during the reveal event.

CPU - Ryzen 7 3700X | RAM - 64 GB DDR4 3200MHz | GPU - Nvidia GTX 1660 ti | MOBO -  MSI B550 Gaming Plus

21 hours ago, Caroline said:

Oh no. Has AMD entered the memetracing competition too? Welp, bye-bye to low prices I guess; $400 extra on a card just because it comes with that "feature", just like Nvidia.

 

Bait post, but I'll respond for the sake of education

 

AMD is providing a card with 2x the performance of an RX 5700 while only boosting power consumption by a small margin, all on the same node. Despite this, they are charging much less than double the price of an RX 5700, at $579; not to mention that the 6800 XT provides an even better deal, with the higher-tier card performing about 5-10% better than a 3080 for $50 less. Ray tracing was obviously going to be a staple feature on these newer cards, but AMD is not tacking exorbitant prices onto them. In fact, the 6800 XT is priced the same as the 980 Ti, which was regarded as one of the best price/performance top-end cards of recent times (only superseded by the 1080 Ti).
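Sanity-checking that value claim with the MSRPs and performance ratios stated in this thread (leaked figures, not measured benchmarks):

```python
# Perf per dollar from the numbers quoted above (RX 5700 normalized to 1.0).
cards = {
    #            price($)  relative perf (per the claims in this thread)
    "RX 5700":  (350,      1.0),
    "RX 6800":  (579,      2.0),   # "2x the performance of an RX 5700"
}
for name, (price, perf) in cards.items():
    print(f"{name}: {perf / price * 1000:.2f} perf units per $1000")
# RX 5700: 2.86 perf units per $1000
# RX 6800: 3.45 perf units per $1000 -> better value despite the higher price
```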

8086k Winner BABY!!

My tech stuff

 

Main rig

Cpu: R5 1600AF 3.95ghz 1.32v (literally the worst bin I've ever seen)

Mobo: MSI b450 A-Pro MAX

Ram: 16gb Team Group T-Force Xtreem 3600 cl18 (3466 14-14-14-14-28) kinda tuned subs

Gpu: MSI 1080 ti Duke OC

PSU: Bitfenix Formula Gold 650w

SSD: 512gb Inland premium nvme

HDD: 2tb Seagate Barracuda Compute

 

Samsung Galaxy S9 | SD845 | Adreno 640

 

18 minutes ago, TheDankKoosh said:

Bait post, but I'll respond for the sake of education

 

AMD is providing a card with 2x the performance of an RX 5700 while only boosting power consumption by a small margin, all on the same node. Despite this, they are charging much less than double the price of an RX 5700, at $579; not to mention that the 6800 XT provides an even better deal, with the higher-tier card performing about 5-10% better than a 3080 for $50 less. Ray tracing was obviously going to be a staple feature on these newer cards, but AMD is not tacking exorbitant prices onto them. In fact, the 6800 XT is priced the same as the 980 Ti, which was regarded as one of the best price/performance top-end cards of recent times (only superseded by the 1080 Ti).

? My memory was the 5700 was $299. Might be a currency difference though.

Life is like a bowl of chocolates: there are all these little crinkly paper cups everywhere.

8 minutes ago, Bombastinator said:

? My memory was the 5700 was $299. Might be a currency difference though.

The 5700 was $350, but it could typically be found for a bit less before the 5600 XT came out. I'm pretty much expecting the 6800 to be in a similar boat, as I think it's too close to the price of the 6800 XT to be considered.

8086k Winner BABY!!

My tech stuff

 

Main rig

Cpu: R5 1600AF 3.95ghz 1.32v (literally the worst bin I've ever seen)

Mobo: MSI b450 A-Pro MAX

Ram: 16gb Team Group T-Force Xtreem 3600 cl18 (3466 14-14-14-14-28) kinda tuned subs

Gpu: MSI 1080 ti Duke OC

PSU: Bitfenix Formula Gold 650w

SSD: 512gb Inland premium nvme

HDD: 2tb Seagate Barracuda Compute

 

Samsung Galaxy S9 | SD845 | Adreno 640

 

3 hours ago, RejZoR said:

The same could be said of pixel shaders when they were first introduced with the GeForce 3. They were basically only used for water initially, but these days pixel shading is an integral part of anything rendered on screen. Ray tracing, given that it's already in usable form now, will slowly become an integral part of every game over the upcoming years. It'll come to a point where every game is ray traced, just like all games have been pixel shaded for years.

The wild bit of real-time ray tracing for me is when that actually happens. People talk about narrow hallways and elevators being used to hide loading times, but the same sort of thing happens with game set design, in a way. Systems used to render 3D models are still, to one degree or another, chiaroscuro-based (by which I include cangiante, sfumato, and unione). They're designed to be seen from one direction, and thought has to be put into how they're set up. Ray tracing ignores chiaroscuro. Games will be able to do literally different things.

Before chiaroscuro there was cavalier perspective, which to a modern eye looks ridiculously primitive. I don't think the shift will be that big, but I think it could be big. Chiaroscuro is not the only perspective system, and it must be kept in mind that it is merely a system. Chiaroscuro attempts to create the illusion of a 3D space behind the picture plane. There is also iconographic perspective (used in Russian Orthodox Church icons), which attempts to create a 3D illusion in front of the picture plane. To someone whose brain is schooled in chiaroscuro, a Russian icon painting looks strange and bloated. It took me years to be able to see in iconographic perspective, but when I did, I saw that Russian iconography was every bit as realistic as Angelico or Caravaggio. It has to be looked at in a totally different way though. I think all-ray-tracing-all-the-time has the capacity to fundamentally change video games. It could be so much bigger than "oh look! The reflection in the hubcap moves."


Life is like a bowl of chocolates: there are all these little crinkly paper cups everywhere.


Based on everything in this thread, there seems to be ZERO reason to wait for the RTX 30 series to be in stock unless you need NVENC.

 

If you're just playing games and the AMD cards are in stock, you might as well get one of those. Raster performance is likely going to be on par with the RTX 30 series, and RT is surprisingly within spitting distance when not using DLSS.

 

I'm real curious how AMD's super resolution looks in comparison to DLSS..

 

IF THIS IS LEGIT... I can't wait to get my hands on a 5000-series Ryzen chip and combine it with a 6800 XT. I'll use my current system to upgrade the i7-3770 I have at work.

Fun Rig - AMD Ryzen 5 3600  |  MSI B550 Tomahawk  |  32GB G.SKILL 3600 CL16 4x8GB |  PowerColor Red Devil 5700XT  | Creative Sound Blaster Z  |  WD Black SN850 500GB NVMe  |  WD Black SN750 2TB NVMe  |  WD Blue 1TB SATA SSD  |  Corsair RM850x  |  Corsair 4000D  |  LG 27GL650F-B  |

 

Work Rig - AMD Ryzen 3 3200G  |  ASUS X570-P Prime  |  32GB G.SKILL 3200 CL16 4x8GB |  MSI GTX 1650 GDDR6  |  WD Black SN750 500GB NVMe  |  WD Blue 500GB SATA  |  Samsung 860 Evo 500GB SATA  |  Corsair TX650  |  Corsair 4000D  |  Dell S2721DGF  |

On 10/31/2020 at 11:16 PM, Random_Person1234 said:

I'm still not sure if this justifies the extra $80 the 6800 costs compared to the 3070, especially considering the 3070 has DLSS.

But the 6800 has more VRAM though... even if it just breaks even on performance, I can justify the extra $80 for 8 GB more of VRAM.


On 10/31/2020 at 1:46 PM, Random_Person1234 said:

I'm still not sure if this justifies the extra $80 the 6800 costs compared to the 3070, especially considering the 3070 has DLSS.

 

The problem I see with DLSS right now is that we don't know whether it's going to be a bait and switch or not.

 

Most scaling algorithms in use (i.e. bicubic, bilinear, etc.) can be thought of as dumb algorithms (figuratively speaking). DLSS in its current form has to be trained through machine learning, using high-resolution reference images supplied by the game developer to Nvidia. It's theoretically possible that at some point an AI could be trained on enough reference images to upscale any game without per-game training and provide better fidelity than dumb upscaling algorithms, but I would argue it will never be as effective as a smart upscaler trained for a specific game the way Nvidia is doing it now, and there are not that many games with DLSS to begin with. I would also argue that the computational power required for a general-purpose AI upscaler might be better spent by removing the tensor cores and adding more float/integer processors instead.
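For contrast, this is all a "dumb" upscaler amounts to: one fixed interpolation kernel, no knowledge of the game. A minimal sketch using Pillow (the learned path is left as a comment, since the model and its weights are Nvidia's):

```python
# "Dumb" upscaling: the same bicubic kernel for every game and every frame.
from PIL import Image  # pip install Pillow (>= 9.1 for Image.Resampling)

def dumb_upscale(path, scale=2):
    img = Image.open(path)
    w, h = img.size
    # Fixed kernel, zero training, zero game-specific knowledge.
    return img.resize((w * scale, h * scale), Image.Resampling.BICUBIC)

# A smart upscaler is instead roughly: high_res = model(low_res_frame),
# where `model` carries weights trained on high-res reference images --
# the per-game part Nvidia currently handles, as described above.
```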

 

I'm not saying DLSS is snake oil, I'm merely saying that unless Nvidia trains the upscaler for a specific game, it's never going to be that much better than dumb upscaling, and even if it is, the horsepower required and the extra die space used make it a wash. I hope to be proven wrong and that Nvidia will eventually release a ubiquitous form of DLSS that beats dumb algorithms without sacrificing die space on the GPU.

 

For now the extra $80 gets you 16 GB of VRAM vs 8 GB, and better rasterization performance. Nvidia does manage VRAM usage better than AMD in its drivers, but that only gets you so far.

R9 3900XT | Tomahawk B550 | Ventus OC RTX 3090 | Photon 1050W | 32GB DDR4 | TUF GT501 Case | Vizio 4K 50'' HDR

 



It's time for black screens and other driver-breaking bugs 🤭

A PC Enthusiast since 2011
AMD Ryzen 5 2600@4GHz | GIGABYTE GTX 1660 GAMING OC @ Core 2040MHz Memory 5000MHz
Cinebench R15: 1382cb | Unigine Superposition 1080p Extreme: 3439
28 minutes ago, Briggsy said:

The problem I see with DLSS right now is that we don't know whether it's going to be a bait and switch or not.

 

Most scaling algorithms in use (i.e. bicubic, bilinear, etc.) can be thought of as dumb algorithms (figuratively speaking). DLSS in its current form has to be trained through machine learning, using high-resolution reference images supplied by the game developer to Nvidia. It's theoretically possible that at some point an AI could be trained on enough reference images to upscale any game without per-game training and provide better fidelity than dumb upscaling algorithms, but I would argue it will never be as effective as a smart upscaler trained for a specific game the way Nvidia is doing it now, and there are not that many games with DLSS to begin with. I would also argue that the computational power required for a general-purpose AI upscaler might be better spent by removing the tensor cores and adding more float/integer processors instead.

 

I'm not saying DLSS is snake oil, I'm merely saying that unless Nvidia trains the upscaler for a specific game, it's never going to be that much better than dumb upscaling, and even if it is, the horsepower required and the extra die space used make it a wash. I hope to be proven wrong and that Nvidia will eventually release a ubiquitous form of DLSS that beats dumb algorithms without sacrificing die space on the GPU.

 

For now the extra $80 gets you 16 GB of VRAM vs 8 GB, and better rasterization performance. Nvidia does manage VRAM usage better than AMD in its drivers, but that only gets you so far.

Yeah, I'm cautious about seeing any new feature as a selling point for Nvidia. Remember when they said every game would use PhysX in the future? How many new games have PhysX today? And what about HairWorks?

On 10/31/2020 at 5:45 PM, Random_Person1234 said:

AMD does have "ray tracing accelerators" built into their RX 6000 GPUs. I think there's supposed to be one per CU.

I was referring to their software implementation.  It's just the general DX12 RT, instead of unique programming that requires their hardware.  Hopefully there will be optimizations for AMD cards, but it won't be dependent on unique AMD hardware to run.

On 10/31/2020 at 5:29 PM, RejZoR said:

Technically, so is "RTX". It's just NVIDIA-branded crap slammed on top of what's essentially DXR. And since it's "RTX", they can make it NVIDIA-specific. Meaning if they want to be real assholes, they can prevent it from running on Radeon cards as part of the GameWorks proprietary nonsense used by game devs.

I have a feeling there will be more RT support and less RTX support now. Programmers don't like closed features, plus it's easier to port from consoles.

AMD 5900X / Gigabyte X570 Auros Pro / 64GB @ 3600c16 / 1TB Samsung 980 Pro 4.0x4 / 4TB total Inland TLC 3.0x4 / EVGA FTW3 3080 / Corsair RM750x /Thermaltake View71

Custom water loop EK Vector AM4, D5 pump, Coolstream 420 radiator

