Toe to toe - 6800/6800XT reviews out

williamcll
35 minutes ago, RejZoR said:

Actually, contrary to popular belief, 1080p in older games is often more taxing on power because of the stupidly high framerate, as opposed to 4K at an average of 60fps... Quickly churning out 200+ frames per second demands more power from the GPU than 4K at 60fps, despite 4K requiring the GPU to compute more pixels in a single frame. But fewer pixels in total across all frames in the same timeframe.

Not in Metro: Last Light; the 4K and 1440p consumption is considerably higher.
https://www.techpowerup.com/forums/proxy.php?image=https%3A%2F%2Fimg.techpowerup.org%2F201118%2Fpdcvvx8dau.jpg&hash=57c297238fa23a32344e973696691d58


13 hours ago, AluminiumTech said:

At 1440p it does at least lean strongly towards AMD.

Not even the Hardware Unboxed results you mentioned earlier showed that. It's a low single-digit % difference on average. It's not make or break, or likely to tip someone either way by itself.

 

13 hours ago, AluminiumTech said:

Okay, but again, it's a feature that isn't well supported by game developers.

Features have to start somewhere, and support is constantly being added.

 

13 hours ago, AluminiumTech said:

There were some games where Nvidia had much better performance because they're Nvidia-sponsored titles. Other titles tended to do better on AMD.

Both sides provide some extra level of support to some developers, so it's nothing new that some games work relatively better on one side than the other. That's partly why I'm more interested in the average as a more general representation of the gaming market, rather than focusing on a few titles that might do much better or worse.

 

13 hours ago, AluminiumTech said:

1080p is more up in the air because reviewers in quite a few cases were CPU bottlenecked.

And in that scenario, IMO it doesn't matter who is slightly ahead. You've dismissed 4K as a niche, and I'd put ultra-high fps in a similar niche. How many people play on displays beyond 144 Hz, say? Probably not that many.

 

13 hours ago, AluminiumTech said:

AMD has been talking about low-latency gaming for a while, has continually added features to improve it over time, and has even improved it on existing hardware through new implementations of existing features.

I think at this point we need someone to do side-by-side testing of Nvidia's and AMD's low-latency solutions and see what they do in the real world.

 

13 hours ago, AluminiumTech said:

Nvidia's compatibility with Freesync is hit or miss and it isn't the same.

That's just FreeSync in general. The lack of quality standards resulted in many bad monitors, although I'm not aware of any significant problems in higher-end models.

 

13 hours ago, AluminiumTech said:

it has 16GB of VRAM, and it has much better power consumption characteristics.

On the VRAM side, it remains to be seen how many games make effective use of that going forward. Devs will not neglect running decently on, say, 8GB-class GPUs, since those will still be the vast majority of the market; it's a question of how much difference it makes at the top end. It also remains to be seen whether devs implement fast storage I/O streaming, which may help close the gap for lower-VRAM cards. That will be a requirement on the next-gen consoles, since their total shared RAM (system + VRAM) will be less than a high-end PC gaming system's.

 

13 hours ago, AluminiumTech said:

The only thing runaway about the 3080 is its power consumption which if left unrestricted would go well beyond 400W.

Compare like with like: the routine running power of Ampere is higher, no question there, but if you're going to unrestrict Ampere, what would Navi do under the same treatment? It would likewise go up, even if not necessarily to the same level. For example, when looking at overclocking, which is when you'd exceed stock power limits, I'd want to compare under equivalent cooling to keep it fair.

 

13 hours ago, AluminiumTech said:

I see everything except ray tracing good enough or even great.

I agree with that, but I don't see the AMD win you seem to. 

Main system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, Corsair Vengeance Pro 3200 3x 16GB 2R, RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


2 hours ago, Medicate said:

TechPowerUp's power draw numbers seem a bit strange to me. They claim the 6800 draws 165W on average and have the 6800XT at 210W. That's 80W less than what other reviewers measure for average gaming power draw. That's a huge discrepancy.

Depends on what tools and scenario they used to measure power consumption.

 

2 hours ago, Medicate said:

OK, so it turns out TechPowerUp uses Metro: Last Light at 1080p for their power draw testing. A title from 2013... at 1080p.

Yes, they've been doing this for years for consistency. You can check their Furmark result for maxed-out power consumption.

 

2 hours ago, RejZoR said:

Actually, contrary to popular belief, 1080p in older games is often more taxing on power because of the stupidly high framerate, as opposed to 4K at an average of 60fps... Quickly churning out 200+ frames per second demands more power from the GPU than 4K at 60fps, despite 4K requiring the GPU to compute more pixels in a single frame. But fewer pixels in total across all frames in the same timeframe.

Not really; you often hit a CPU/memory bottleneck in high-fps gaming.
CPU bottleneck = less work for the GPU

| Intel i7-3770@4.2Ghz | Asus Z77-V | Zotac 980 Ti Amp! Omega | DDR3 1800mhz 4GB x4 | 300GB Intel DC S3500 SSD | 512GB Plextor M5 Pro | 2x 1TB WD Blue HDD |
 | Enermax NAXN82+ 650W 80Plus Bronze | Fiio E07K | Grado SR80i | Cooler Master XB HAF EVO | Logitech G27 | Logitech G600 | CM Storm Quickfire TK | DualShock 4 |


2 hours ago, RejZoR said:

Actually, contrary to popular belief, 1080p in older games is often more taxing on power because of the stupidly high framerate, as opposed to 4K at an average of 60fps... Quickly churning out 200+ frames per second demands more power from the GPU than 4K at 60fps, despite 4K requiring the GPU to compute more pixels in a single frame. But fewer pixels in total across all frames in the same timeframe.

Then use Radeon Chill and cap it at 60fps.

That's what I used to do with my 390.

 

Glad to see that it only sips power at 1080p; I'm finally going to get a silent setup. (Not that it was that bad with the 390, but if it can get quieter, I'll take it. I'll probably just have to replace some fans in my case afterwards.)

One day I will be able to play Monster Hunter Frontier in French/Italian/English on my PC, it's just a matter of time... 4 5 6 7 8 9 years later: It's finally coming!!!

Phones: iPhone 4S/SE | LG V10 | Lumia 920 | Samsung S24 Ultra

Laptops: Macbook Pro 15" (mid-2012) | Compaq Presario V6000

Other: Steam Deck

<>EVs are bad, they kill the planet and remove freedoms too some/<>


6 hours ago, LAwLz said:

Snip

I'm talking about visual quality. Performance-wise it's fine, but it looks like garbage to me.

Judge a product on its own merits AND the company that made it.

How to setup MSI Afterburner OSD | How to make your AMD Radeon GPU more efficient with Radeon Chill | (Probably) Why LMG Merch shipping to the EU is expensive

Oneplus 6 (Early 2023 to present) | HP Envy 15" x360 R7 5700U (Mid 2021 to present) | Steam Deck (Late 2022 to present)

 

Mid 2023 AlTech Desktop Refresh - AMD R7 5800X (Mid 2023), XFX Radeon RX 6700XT MBA (Mid 2021), MSI X370 Gaming Pro Carbon (Early 2018), 32GB DDR4-3200 (16GB x2) (Mid 2022)

Noctua NH-D15 (Early 2021), Corsair MP510 1.92TB NVMe SSD (Mid 2020), beQuiet Pure Wings 2 140mm x2 & 120mm x1 (Mid 2023),


3 hours ago, RejZoR said:

Actually, contrary to popular belief, 1080p in older games is often more taxing on power because of the stupidly high framerate, as opposed to 4K at an average of 60fps... Quickly churning out 200+ frames per second demands more power from the GPU than 4K at 60fps, despite 4K requiring the GPU to compute more pixels in a single frame. But fewer pixels in total across all frames in the same timeframe.

More transistor work and fewer stalls waiting on cache. It depends a bit on the load and the architecture, though, but generally high FPS will draw more power than higher resolution.
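
To put some rough numbers on the pixel-throughput side of that argument (the framerates here are just illustrative, not measurements from any review):

```python
# Back-of-the-envelope pixel throughput; ignores per-frame fixed costs
# (geometry, CPU-side submission, etc.) that also scale with framerate.
def pixels_per_second(width, height, fps):
    return width * height * fps

high_fps_1080p = pixels_per_second(1920, 1080, 200)  # ~415 Mpx/s
uhd_60fps      = pixels_per_second(3840, 2160, 60)   # ~498 Mpx/s

print(f"1080p @ 200 fps: {high_fps_1080p / 1e6:.0f} Mpx/s")
print(f"4K    @  60 fps: {uhd_60fps / 1e6:.0f} Mpx/s")
# The crossover sits around 1080p @ 240 fps, so whether high-fps 1080p or
# 4K60 pushes more raw pixels (and power) depends on the actual framerate
# and on how much fixed per-frame work the game adds.
```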


2 hours ago, ONOTech said:

Yeah specifically talking about CUDA app support.

 

Re: ray tracing - it's not like the AMD cards are bad, since RTX 2080S+ levels of DXR performance are good; but you can get something better for around the same price. AMD's DLSS alternative should decrease the margin though.

 

I also stream, and NVIDIA's software suite and encoder are so much better. AMD apparently does very well with H.265, so if that actually gets streaming support then they have a real competitor.

Nvidia's one critical advantage over AMD in the GPU space is doing the front-facing, consumer stuff better. Shadowplay matters because it's good enough and you can just turn it on. ReLive has spent so much time being buggy that we won't hear that it's working well until probably a year after they get it fixed.

 

As for ray tracing in general, the sudden return of path-traced Minecraft (the Microsoft client) as a benchmark feels less like testing and far more like a long-running feud with AMD's NA PR. Yes, we get it: they were run by jerks and pissed off everyone in the NA tech press for years. But pulling out something that obtuse as a test makes the reviewers look incompetent. Both NVIDIA and AMD clearly designed their RT approach this generation around baseline adoption rather than actual high performance. AMD's works fine, and it'll get developed for thanks to the consoles. Nvidia only made path tracing better (being core-for-core unchanged otherwise), which will look better in the future as games head in that direction, even if the hardware is never really capable of running anything but tech demos well with it. It'll be a few weeks before we see whether AMD's approach avoids the pipeline stalls and stuttering that Nvidia's tends to cause.

 

Raster performance, GPU driver stability and, these days, easy streaming are what matter to end users. Nvidia is in a bit of a panic mode because they messed up the first part, if you look at the Pascal -> Turing -> Ampere progression. They're banking on the second and third to keep them from losing sales for the next 4 months.


49 minutes ago, AluminiumTech said:

I'm talking about visual quality. Performance-wise it's fine, but it looks like garbage to me.

What I posted was VMAF...

If you don't understand what that means, let me explain. The visual quality NVENC puts out is better than x264 on the slow preset when both are using the same bitrate.

 

If you encode a lossless video using Turing's H.264 encoder with an 8Mbps output, the perceived visual quality loss will be about 11%.

If you used x264 with the slow preset and output an 8Mbps file then the perceived visual quality loss will be about 13%.

Turing's encoder outperforms x264 unless you go into the really slow presets like veryslow or placebo.

Turing's encoder is fantastic. It is absolutely amazing. You have basically zero reason not to use it if you have a 20-series GPU and want to encode video. For HEVC it matches x265 at the slow preset too.

 

Or as the article puts it:

Quote

Essentially, Turing is in such a good place now with their RTX 2080TI, 2080, 2070 and 2060 cards. The NVENC encoders for both H.264 AVC and H.265 HEVC are simply excellent. It’s the best H.264 encoder around now. Basically because it’s the newest at the time of writing, and x264 has been optimised into the ground. x264 will not get much better as an encoder, and will only increase speed with CPU improvements. Turing NVENC is already beating it on Slow for this test video, there’s no reason not to use it.

 

My prediction is that Turing will only be toppled as the all-round king of encoders when more sites make the switch to VP9 and the race starts again. VP9 does have an interesting ability to enforce real-time encoding at a set CPU usage. This will greatly ease the transition for average streamers by removing the need to find their perfect preset. But that is a story for another day.

 

 

Edit: got Turing and Pascal mixed up in my writing. I previously said Pascal's encoder was fantastic but I meant Turing's encoder.
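
If anyone wants to reproduce this kind of comparison themselves, here's a rough sketch of the workflow (assuming an ffmpeg build that includes h264_nvenc and libvmaf; the file names and the 8 Mbps target are placeholders):

```python
# Sketch: encode the same source with NVENC and with x264 slow at the same
# bitrate, then score each encode against the source with VMAF.
import subprocess

SOURCE = "reference.mkv"  # placeholder: a high-quality/lossless source clip
BITRATE = "8M"

encodes = {
    "nvenc_h264.mp4": ["-c:v", "h264_nvenc", "-b:v", BITRATE],
    "x264_slow.mp4":  ["-c:v", "libx264", "-preset", "slow", "-b:v", BITRATE],
}

for out_file, enc_args in encodes.items():
    subprocess.run(["ffmpeg", "-y", "-i", SOURCE, *enc_args, out_file], check=True)
    # libvmaf takes the distorted clip as the first input and the reference as
    # the second; the score (0-100) is printed in ffmpeg's log output.
    subprocess.run(
        ["ffmpeg", "-i", out_file, "-i", SOURCE, "-lavfi", "libvmaf", "-f", "null", "-"],
        check=True,
    )
# A "~11% perceived quality loss" corresponds roughly to a VMAF score in the high 80s.
```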


The encoder and G-Sync support for my monitor are the two reasons I'm heavily leaning towards Nvidia rather than AMD for a GPU upgrade. I do still applaud AMD's progress, though!


6 hours ago, ONOTech said:

AMD's DLSS alternative should decrease the margin though.

I'm inclined not to count on this: either it won't be as good visually or the performance gain won't be as good. I don't really think AMD has the software team to execute as well as Nvidia yet; AMD needs to try to win just the track-and-field sprint events rather than trying to win the pentathlon.


22 minutes ago, leadeater said:

I'm inclined not to count on this: either it won't be as good visually or the performance gain won't be as good. I don't really think AMD has the software team to execute as well as Nvidia yet; AMD needs to try to win just the track-and-field sprint events rather than trying to win the pentathlon.

I'm impressed that they've caught up as much as they have; allow them some wiggle room, it's been a long time since they were even really competitive.

 

But I can't help thinking that we are being played. Lisa Su is apparently a (scratch that) to Jensen, and it would not surprise me if they were in on something private together where AMD climbs and they collaborate to make each other look good whilst pretending they are separate, but that's just a conspiracy theory I made up in my head.

 

https://www.techtimes.com/articles/253736/20201030/fact-check-nvidia-ceo-uncle-amds-dr-lisa-su.htm#:~:text=Not technically "uncle" but still,they are very close relatives.

My computer for gaming & work. AMD Ryzen 3600x with XFR support on - Arctic Cooling LF II - ASUS Prime X570-P - Gigabyte 5700XT - 32GB Geil Orion 3600 - Crucial P1 1TB NVME - Crucial BX 500 SSD - EVGA GQ 650w - NZXT Phantom 820 Gun Metal Grey colour - Samsung C27FG73FU monitor - Blue snowball mic - External best connectivity 24 bit/ 96khz DAC headphone amp -Pioneer SE-205 headphone - Focal Auditor 130mm speakers in custom sealed boxes - inPhase audio XT 8 V2 wired at 2ohm 300RMS custom slot port compact box - Vibe Audio PowerBox 400.1


3 hours ago, leadeater said:

I'm inclined not to count on this: either it won't be as good visually or the performance gain won't be as good. I don't really think AMD has the software team to execute as well as Nvidia yet; AMD needs to try to win just the track-and-field sprint events rather than trying to win the pentathlon.

Yeah, Nvidia most definitely have more experience with regards to AI, not just in hardware terms but also in developing AI solutions themselves. Maybe if AMD brings Microsoft in they'll be able to compete there? But I wouldn't be surprised if that still wasn't enough.

 

On top of that, the AMD cards will have to run that AI on the general-purpose compute units, rather than in dedicated tensor cores like DLSS. While nowhere near as intensive as training, evaluating an AI model (i.e. making it do what it's meant to do) isn't computationally free - it will take up some GPU cycles that could otherwise be used for rendering. Even if you were to run the exact same DLSS 2.0 model on an AMD card, you wouldn't see the same performance improvements as you do on RTX cards because of this, so AMD's model will need to be better than Nvidia's just to equal its effectiveness in gaming.

 

I'm also worried that AMD's solution, because of this performance penalty, will end up being very much like DLSS 1.0. A common compromise in AI is between evaluation performance and model accuracy - better models are generally harder to evaluate - so I'm concerned that AMD will sacrifice too much in terms of accuracy (how good the frames look) in order to meet the fps offered by their competitor.

 

I'm genuinely interested in AMD's DLSS competitor - I think the technology is fascinating - but I am worried about how it will end up being implemented and I'm especially worried regarding compatibility with DLSS 2.0 games. I really don't want there to be a situation where games are either accelerated by one model or the other, or where compatibility for each has to be implemented separately. That's not the kind of bs we need, and it could be damaging to the uptake of these algorithms in general. But getting them to agree on something will be a tough fight given Nvidia's solution has been out in the wild for so long.

CPU: i7 4790k, RAM: 16GB DDR3, GPU: GTX 1060 6GB


Overall, pretty nice. Glad to see them returning with a flagship and eventually building out a complete new series.

| Ryzen 7 7800X3D | AM5 B650 Aorus Elite AX | G.Skill Trident Z5 Neo RGB DDR5 32GB 6000MHz C30 | Sapphire PULSE Radeon RX 7900 XTX | Samsung 990 PRO 1TB with heatsink | Arctic Liquid Freezer II 360 | Seasonic Focus GX-850 | Lian Li Lanccool III | Mousepad: Skypad 3.0 XL / Zowie GTF-X | Mouse: Zowie S1-C | Keyboard: Ducky One 3 TKL (Cherry MX-Speed-Silver)Beyerdynamic MMX 300 (2nd Gen) | Acer XV272U | OS: Windows 11 |


@tim0901

It doesn't matter. Rendering at 1080p and upscaling with sharpening to 4K is still less taxing than rendering at native 4K, resulting in a net gain in performance even if it's still worse than plain 1080p. Not to mention, if the resizing and sharpening are done right, it could even negate the need for anti-aliasing, since the upscaling would sort of do that by itself.
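
A crude illustration of the upscale-plus-sharpen idea, just plain bicubic plus an unsharp mask via Pillow (nothing like what DLSS or AMD's upcoming solution actually does internally):

```python
# Take a 1080p rendered frame, scale it to 4K with bicubic filtering,
# then apply an unsharp mask to recover some apparent detail.
from PIL import Image, ImageFilter

frame = Image.open("frame_1080p.png")  # placeholder: a frame rendered at 1920x1080
upscaled = frame.resize((3840, 2160), Image.BICUBIC)
sharpened = upscaled.filter(ImageFilter.UnsharpMask(radius=2, percent=150, threshold=3))
sharpened.save("frame_4k_sharpened.png")

# The GPU only had to shade ~2.1 Mpx per frame instead of ~8.3 Mpx at native 4K;
# the resize/sharpen pass is far cheaper than the ~4x extra shading work.
```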


6 hours ago, tim0901 said:

Yeah, Nvidia most definitely have more experience with regards to AI, not just in hardware terms but also in developing AI solutions themselves. Maybe if AMD brings Microsoft in they'll be able to compete there? But I wouldn't be surprised if that still wasn't enough.

 

On top of that, the AMD cards will have to run that AI on the general-purpose compute units, rather than in dedicated tensor cores like DLSS. While nowhere near as intensive as training, evaluating an AI model (i.e. making it do what it's meant to do) isn't computationally free - it will take up some GPU cycles that could otherwise be used for rendering. Even if you were to run the exact same DLSS 2.0 model on an AMD card, you wouldn't see the same performance improvements as you do on RTX cards because of this, so AMD's model will need to be better than Nvidia's just to equal its effectiveness in gaming.

 

I'm also worried that AMD's solution, because of this performance penalty, will end up being very much like DLSS 1.0. A common compromise in AI is between evaluation performance and model accuracy - better models are generally harder to evaluate - so I'm concerned that AMD will sacrifice too much in terms of accuracy (how good the frames look) in order to meet the fps offered by their competitor.

 

I'm genuinely interested in AMD's DLSS competitor - I think the technology is fascinating - but I am worried about how it will end up being implemented and I'm especially worried regarding compatibility with DLSS 2.0 games. I really don't want there to be a situation where games are either accelerated by one model or the other, or where compatibility for each has to be implemented separately. That's not the kind of bs we need, and it could be damaging to the uptake of these algorithms in general. But getting them to agree on something will be a tough fight given Nvidia's solution has been out in the wild for so long.

Basically nothing runs on the actual Tensor Cores. They're for server rendering. The ray traversal units are packaged with the Tensor Cores, but they don't actually use the Tensor Cores. DLSS is a customized upscaler routine. It doesn't need any part of the expanded units. It's a lot like the "RTX Voice" that doesn't need any part of RTX to run. It's just marketing.

 

Also, even DLSS 2.0 can be mostly mimicked by running at about a 70% render resolution and a sharpening filter. Nvidia had to badly over-engineer a solution to a problem they created for themselves.


14 minutes ago, Taf the Ghost said:

Also, even DLSS 2.0 can be mostly mimicked by running at about a 70% render resolution and a sharpening filter. Nvidia had to badly over-engineer a solution to a problem they created for themselves.

Mmm these sour grapes sure are tasty. 


12 hours ago, leadeater said:

I'm inclined not to count on this: either it won't be as good visually or the performance gain won't be as good. I don't really think AMD has the software team to execute as well as Nvidia yet; AMD needs to try to win just the track-and-field sprint events rather than trying to win the pentathlon.

idk, look at what they've done for the consoles. I assume not all of it was them, but they should have done most of that work.

Good luck, Have fun, Build PC, and have a last gen console for use once a year. I should answer most of the time between 9 to 3 PST

NightHawk 3.0: R7 5700x @, B550A vision D, H105, 2x32gb Oloy 3600, Sapphire RX 6700XT  Nitro+, Corsair RM750X, 500 gb 850 evo, 2tb rocket and 5tb Toshiba x300, 2x 6TB WD Black W10 all in a 750D airflow.
GF PC: (nighthawk 2.0): R7 2700x, B450m vision D, 4x8gb Geli 2933, Strix GTX970, CX650M RGB, Obsidian 350D

Skunkworks: R5 3500U, 16gb, 500gb Adata XPG 6000 lite, Vega 8. HP probook G455R G6 Ubuntu 20. LTS

Condor (MC server): 6600K, z170m plus, 16gb corsair vengeance LPX, samsung 750 evo, EVGA BR 450.

Spirt  (NAS) ASUS Z9PR-D12, 2x E5 2620V2, 8x4gb, 24 3tb HDD. F80 800gb cache, trueNAS, 2x12disk raid Z3 stripped

PSU Tier List      Motherboard Tier List     SSD Tier List     How to get PC parts cheap    HP probook 445R G6 review

 

"Stupidity is like trying to find a limit of a constant. You are never truly smart in something, just less stupid."

Camera Gear: X-S10, 16-80 F4, 60D, 24-105 F4, 50mm F1.4, Helios44-m, 2 Cos-11D lavs


2 hours ago, GDRRiley said:

idk, look at what they've done for the consoles. I assume not all of it was them, but they should have done most of that work.

Not really; that's all pretty well Sony and Microsoft doing the software legwork. AMD already has a hard enough time getting their software features adopted in games, and overall has worse partnerships than Nvidia. For something so new, and not done by them before, AMD just doesn't have the execution track record to make me believe it will be anything more than AMD releasing it and it lacking the required polish for a very long time. It'll work, but that can be said about all of the features and software they release; working and being something you actually want to use are often too far apart.

 

Outside of graphics effects features there is zero AMD software etc. that I use, and I don't really want any more anyway; my AMD GPUs are used strictly for their basic functionality.


A lot of talk about ray tracing as if it's a fully fleshed-out feature set that every game on the market currently has, when something like 7 games support it so far. Let's not pretend Nvidia had the most amazing RT performance with the 20 series, which was first-gen RT. Today, at this very moment, we can't really look down on AMD for their RT performance because, honestly speaking, it's a first-gen product, so expecting 30-series levels of RT performance was a pipe dream. Let's see what the next generation of RT cards from AMD brings to the table; we can be critical about the performance then, since they would have had time to make architectural changes like they did from Zen 2 to Zen 3. As for the here and now, I'd give them a pat on the back for finally having competitive cards at the high end, although the 6800's pricing is questionable compared to the 3070's.

CPU: Intel i7 7700K | GPU: ROG Strix GTX 1080Ti | PSU: Seasonic X-1250 (faulty) | Memory: Corsair Vengeance RGB 3200Mhz 16GB | OS Drive: Western Digital Black NVMe 250GB | Game Drive(s): Samsung 970 Evo 500GB, Hitachi 7K3000 3TB 3.5" | Motherboard: Gigabyte Z270x Gaming 7 | Case: Fractal Design Define S (No Window and modded front Panel) | Monitor(s): Dell S2716DG G-Sync 144Hz, Acer R240HY 60Hz (Dead) | Keyboard: G.SKILL RIPJAWS KM780R MX | Mouse: Steelseries Sensei 310 (Striked out parts are sold or dead, awaiting zen2 parts)


Ray tracing performance for a first generation is perfectly fine, especially if you're mostly a 1080p gamer. Even 1440p. It depends on the game, though: some run above 100fps, but some not even at 60. I wonder how much ray tracing can even depend on drivers. Is there even any room left for improvements, or is it whatever it is in hardware and that's that, with no improvements possible without fiddling with the hardware side of the GPU...


10 hours ago, RejZoR said:

@tim0901

It doesn't matter. Rendering at 1080p and upscaling with sharpening to 4K is still less taxing than rendering at native 4K, resulting in a net gain in performance even if it's still worse than plain 1080p. Not to mention, if the resizing and sharpening are done right, it could even negate the need for anti-aliasing, since the upscaling would sort of do that by itself.

Absolutely it'll still be a net gain, that's why I'm still excited to see it come to life. It's not like the lack of dedicated hardware makes it pointless. My point is just that it'll be difficult for AMD to see the same level of gains as Nvidia because they don't have them. A 30% improvement is still a 30% improvement and is something to be proud of, but when compared to a 40% improvement it begins to lose some of its shine. (Percentages are for sake of argument - idk what the performance gain will be, don't read too much into them.)

10 hours ago, Taf the Ghost said:

Basically nothing runs on the actual Tensor Cores. They're for server rendering. The ray traversal units are packaged with the Tensor Cores, but they don't actually use the Tensor Cores. DLSS is a customized upscaler routine. It doesn't need any part of the expanded units. It's a lot like the "RTX Voice" that doesn't need any part of RTX to run. It's just marketing.

DLSS doesn't need tensor cores, no. Tensor cores are not for "server rendering", whatever that's meant to be - they are dedicated silicon (completely separate to RT cores) designed to accelerate machine learning and AI workloads by being very efficient at FP16 matrix (tensor) multiplication - it's nothing that can't be done on general GPU compute cores. They don't add any extra functionality to the card. But given that they are optimised for the exact workloads required when evaluating an AI model, it makes sense to use them when you can.
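
For a concrete picture of what that silicon accelerates, the core operation is just a half-precision matrix multiply; a minimal NumPy sketch of one toy "layer" (the sizes are arbitrary):

```python
# The workload tensor cores are built for: FP16 matrix multiplies with
# higher-precision accumulation. A neural upscaler chains many of these
# (plus activations) per frame; without dedicated units the same math
# runs on the general-purpose shader/CUDA cores instead.
import numpy as np

rng = np.random.default_rng(0)
activations = rng.standard_normal((256, 256)).astype(np.float16)  # toy layer input
weights     = rng.standard_normal((256, 256)).astype(np.float16)  # toy layer weights

output = activations.astype(np.float32) @ weights.astype(np.float32)  # FP16 in, FP32 accumulate
print(output.shape, output.dtype)  # (256, 256) float32
```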

 

And RTX voice does make use of the tensor cores, as does DLSS. It might not make use of them much, but doing so still frees up general purpose CUDA cores to do something else. For example GN found a 15% fps drop when using RTX voice on GTX cards, vs a 10% drop on RTX cards. This drop is pretty uniform between cards of the two generations - a 2060 and 2080 Super saw a comparable drop in fps, with both seeing better results than a 1080. The software is optimised for tensor cores and does make use of them - there is a performance penalty if you don't have them.

 

The same will be true for DLSS - running the model on the GPU itself rather than on dedicated hardware would probably use up a similar ~5% of your overall gpu power (which would increase the higher your fps as you evaluate the model more often) which is then taken away from any further fps improvement. It's still going to produce a net gain, but you're still then comparing the performance of 100% of a card with tensor cores to 95% of a card without them. It will be far harder for the card without tensor cores to see the same fps gains as the card with them. They may not be utilised much - not that we can test how much of them is utilised by any given workload -  but they will give Nvidia's cards an edge over cards without them.
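
Putting that shared-compute penalty into frame-time terms (the 5% share below echoes the illustrative ~5% figure above; it is not a measurement):

```python
# If the upscaling model borrows a fraction of the shader throughput,
# that fraction comes straight out of each frame's time budget.
def effective_fps(base_fps, model_share_of_gpu):
    frame_time_ms = 1000.0 / base_fps
    # Rendering only gets (1 - share) of the GPU, so every frame takes longer.
    return 1000.0 / (frame_time_ms / (1.0 - model_share_of_gpu))

base_fps = 120.0                      # hypothetical fps if the upscaler were "free"
print(effective_fps(base_fps, 0.00))  # dedicated units handle it: ~120 fps
print(effective_fps(base_fps, 0.05))  # shared shader cores: ~114 fps
```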

 

It's kinda like the difference between software vs hardware video decoding. Software can do it just fine, but using hardware acceleration is more efficient and will allow you to do more stuff on your system at the same time.

CPU: i7 4790k, RAM: 16GB DDR3, GPU: GTX 1060 6GB


9 hours ago, RejZoR said:

Ray tracing performance for a first generation is perfectly fine, especially if you're mostly a 1080p gamer. Even 1440p. It depends on the game, though: some run above 100fps, but some not even at 60. I wonder how much ray tracing can even depend on drivers. Is there even any room left for improvements, or is it whatever it is in hardware and that's that, with no improvements possible without fiddling with the hardware side of the GPU...

I'd say that, much like asynchronous compute, dedicated hardware would perform better than leaving it up to drivers, which would incur a lot of overhead.

CPU: Intel i7 7700K | GPU: ROG Strix GTX 1080Ti | PSU: Seasonic X-1250 (faulty) | Memory: Corsair Vengeance RGB 3200Mhz 16GB | OS Drive: Western Digital Black NVMe 250GB | Game Drive(s): Samsung 970 Evo 500GB, Hitachi 7K3000 3TB 3.5" | Motherboard: Gigabyte Z270x Gaming 7 | Case: Fractal Design Define S (No Window and modded front Panel) | Monitor(s): Dell S2716DG G-Sync 144Hz, Acer R240HY 60Hz (Dead) | Keyboard: G.SKILL RIPJAWS KM780R MX | Mouse: Steelseries Sensei 310 (Striked out parts are sold or dead, awaiting zen2 parts)


6 hours ago, XenosTech said:

I'd say that, much like asynchronous compute, dedicated hardware would perform better than leaving it up to drivers, which would incur a lot of overhead.

Which is why I'm wondering how much access to RT the drivers even have, and whether AMD is even in a position to improve anything with the driver rather than having to make a new physical chip to address it. And I'm not talking about image scaling like NVIDIA does with DLSS, offsetting the compute burden of RT by rendering fewer pixels and upscaling to a larger output resolution at the end. I'm asking whether they can actually do anything about the RT inside the rendering pipeline with the drivers.


18 hours ago, RejZoR said:

Which is why I'm wondering how much access to RT the drivers even have, and whether AMD is even in a position to improve anything with the driver rather than having to make a new physical chip to address it. And I'm not talking about image scaling like NVIDIA does with DLSS, offsetting the compute burden of RT by rendering fewer pixels and upscaling to a larger output resolution at the end. I'm asking whether they can actually do anything about the RT inside the rendering pipeline with the drivers.

They probably can, but as someone else said earlier in the thread, AMD's driver division probably doesn't have the resources to make the most of that feature with software only, though it would be interesting to see how they do it either way.

CPU: Intel i7 7700K | GPU: ROG Strix GTX 1080Ti | PSU: Seasonic X-1250 (faulty) | Memory: Corsair Vengeance RGB 3200Mhz 16GB | OS Drive: Western Digital Black NVMe 250GB | Game Drive(s): Samsung 970 Evo 500GB, Hitachi 7K3000 3TB 3.5" | Motherboard: Gigabyte Z270x Gaming 7 | Case: Fractal Design Define S (No Window and modded front Panel) | Monitor(s): Dell S2716DG G-Sync 144Hz, Acer R240HY 60Hz (Dead) | Keyboard: G.SKILL RIPJAWS KM780R MX | Mouse: Steelseries Sensei 310 (Striked out parts are sold or dead, awaiting zen2 parts)

