AMD Releases More Information on FidelityFX Super Resolution, AMD's Alternative to Nvidia's DLSS

CommanderAlex

 

 


Summary

 

AMD plans to release an alternative to Nvidia's DLSS (Deep Learning Super Sampling) for RX 6000 series graphics cards, possibly later this year. AMD is calling its alternative FidelityFX Super Resolution (FSR), and AMD has mentioned that this does not require the use of machine learning. FSR will also be released as a cross-platform technology, so Xbox Series X and S users as well as PlayStation 5 users will benefit. 

Quote

FSR is AMD's equivalent to Nvidia's DLSS (Deep Learning Super Sampling) which uses AI to sharpen up frames and stabilize frame rates at higher resolutions, and is essentially what allows GeForce cards to deliver decent performance when using ray traced lighting effects.

 

 

AMD has been focused on delivering a DLSS equivalent since the launch of its RX 6000 series graphics cards, and AMD's VP of Graphics, Scott Herkelman, recently gave an update on the status of that work.

Quote

“It’s progressing very well internally in our lab, but it’s our commitment to the gaming community that it needs to be open, it needs to work across all things and game developers need to adopt it.”

 

Herkelman also went on to say that there are many different ways to implement FSR that do not rely on dedicated hardware like Nvidia's Tensor Cores. 

Quote

"You don’t need machine learning to do it, you can do this many different ways and we are evaluating many different ways. What matters the most to us is what game developers want to use because if at the end of the day it is just for us, we force people to do it, it is not a good outcome. We would rather say: gaming community, which one of these techniques would you rather see us implement so that this way it can be immediately spread across the industry and hopefully cross-platform.”

AMD is unsure which approach they would like to take with FSR's implementation; the choice will depend on game developers, who each have their own specific needs and requirements. 

 

My thoughts

 

It's great news that RDNA2 graphics cards and the latest generation of video game consoles will benefit from this new technology, and that AMD finally has an answer to DLSS. I'm excited to see how well it performs compared to DLSS. I would expect some bumps along the way, just as DLSS 1.0 had its issues with introducing blurriness to graphics. I'm excited to see how this plays out for AMD and whether it stirs up the competition with Nvidia. 

 

Sources

 https://www.pcgamer.com/amd-fsr-fidelity-fx-super-resolution-announcement/

https://www.notebookcheck.net/AMD-s-FidelityFX-Super-Resolution-should-release-this-year-but-Team-Red-has-not-yet-decided-what-non-AI-technique-to-employ.528187.0.html

CPU Cooler Tier List  || Motherboard VRMs Tier List || Motherboard Beep & POST Codes || Graphics Card Tier List || PSU Tier List 

 

Main System Specifications: 

 

CPU: AMD Ryzen 9 5950X ||  CPU Cooler: Noctua NH-D15 Air Cooler ||  RAM: Corsair Vengeance LPX 32GB(4x8GB) DDR4-3600 CL18  ||  Mobo: ASUS ROG Crosshair VIII Dark Hero X570  ||  SSD: Samsung 970 EVO 1TB M.2-2280 Boot Drive/Some Games)  ||  HDD: 2X Western Digital Caviar Blue 1TB(Game Drive)  ||  GPU: ASUS TUF Gaming RX 6900XT  ||  PSU: EVGA P2 1600W  ||  Case: Corsair 5000D Airflow  ||  Mouse: Logitech G502 Hero SE RGB  ||  Keyboard: Logitech G513 Carbon RGB with GX Blue Clicky Switches  ||  Mouse Pad: MAINGEAR ASSIST XL ||  Monitor: ASUS TUF Gaming VG34VQL1B 34" 

 


The biggest benefit will be smooth, high-framerate "4K" output on the consoles.


2 hours ago, SpiderMan said:

AMD has mentioned that this does not require the use of machine learning.

And just like that I've lost faith in this. I would love to be proven wrong, but I'm not aware of any effective (real-time) upscaling algorithms that don't use machine learning or proprietary hardware, which I don't think this solution will utilize.

Arch is better than Ubuntu. Fight me peko.


18 minutes ago, JLO64 said:

And just like that I've lost faith in this.  I would love to be proven wrong but I'm not aware of any effective (real time) upscaling algorithms that don't use machine learning or proprietary hardware which I don't think this solution will utilize.

The article is very confusing. Maybe the source authors don’t understand the technical issues. DLSS by definition uses machine learning/deep learning. I think what AMD’s VP of Graphics is actually saying is that improving ray-tracing performance on their cards doesn’t need machine learning, meaning they will have their DLSS rival in addition to improvements in pure ray-tracing performance.


2 hours ago, JLO64 said:

And just like that I've lost faith in this.  I would love to be proven wrong but I'm not aware of any effective (real time) upscaling algorithms that don't use machine learning or proprietary hardware which I don't think this solution will utilize.

Or you're just so overhyped on the whole AI buzzword that you refuse to accept AI maybe isn't even needed for such functionality.


1 hour ago, RejZoR said:

Or you're just so overhyped on the whole Ai buzzword you refuse to accept Ai maybe isn't even needed for such functionality.

I think the word "AI" is just used very confusingly in most media; it's like the magic "cloud".

As far as I know, Nvidia used machine learning to create its upscaling algorithms. (It could also just be marketing claims to impress investors and customers, like Wirecard, which claimed to use AI while in reality employees were copying numbers from spreadsheets.)


30 minutes ago, Sauron said:

Imagine buying a graphics card smh

You can buy them

Spoiler

They’re just expensive 

 


Maybe I am misinterpreting what AMD said here, but this news makes me worried.

It sounds to me like we are quite a long way from FidelityFX being released, and we have no idea how good it will be. 

This is the quote I am referring to:

Quote

"You don’t need machine learning to do it, you can do this many different ways and we are evaluating many different ways. What matters the most to us is what game developers want to use because if at the end of the day it is just for us, we force people to do it, it is not a good outcome. We would rather say: gaming community, which one of these techniques would you rather see us implement so that this way it can be immediately spread across the industry and hopefully cross-platform.”

It's great that they are working with developers to make sure the technology will be properly supported, but if they are still "evaluating many different ways" of getting their upscaling working then to me it sounds like they are just in the planning stages. During the keynotes I got the impression that FidelityFX would launch soon™ but now it sounds like it might be one or maybe even several years away.

 

 

4 hours ago, RejZoR said:

Or you're just so overhyped on the whole Ai buzzword you refuse to accept Ai maybe isn't even needed for such functionality.

I am not sure about upscaling in games, but for video ML-based upscaling is far superior to any traditional algorithm.

We can clearly see this if we compare, for example, FSRCNNX against the best traditional upscaling algorithms like Lanczos or Spline.

 

More tests can be found here, along with PSNR and SSIM tests.

 

Machine learning is far superior to traditional upscaling. It's not just buzzwords.
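For anyone curious how those PSNR comparisons are scored: the metric is just a log-scaled mean-squared error between the upscaled image and the ground truth. A minimal sketch in Python, assuming 8-bit images flattened into equal-length pixel lists (illustrative only):

```python
import math

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-length pixel sequences.

    Higher is better; identical images give infinity. max_val is the
    largest possible pixel value (255 for 8-bit images).
    """
    if len(reference) != len(test):
        raise ValueError("images must have the same number of pixels")
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0:
        return math.inf
    return 10.0 * math.log10(max_val ** 2 / mse)

# Toy 4-pixel "images": the upscaled guess is slightly off from the reference.
ref = [10, 200, 30, 40]
guess = [12, 198, 33, 40]
print(round(psnr(ref, guess), 2))  # → 41.85
```

SSIM is more involved (it compares local luminance, contrast, and structure), but PSNR is the quickest sanity check when comparing upscaler outputs against a ground-truth frame.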


7 hours ago, SpiderMan said:

AMD is unsure which approach they would like to take

Sounds promising! 

 

 

 

The direction tells you... the direction

-Scott Manley, 2021

 

Softwares used:

Corsair Link (Anime Edition) 

MSI Afterburner 

OpenRGB

Lively Wallpaper 

OBS Studio

Shutter Encoder

Avidemux

FSResizer

Audacity 

VLC

WMP

GIMP

HWiNFO64

Paint

3D Paint

GitHub Desktop 

Superposition 

Prime95

Aida64

GPUZ

CPUZ

Generic Logviewer

 

 

 


45 minutes ago, Mark Kaine said:

Sounds promising! 

 

 

 

That's the thing I don't get when reading both articles: it seems like AMD is asking game developers for input on which way to approach FSR, but at the same time they are making "progress".

1 hour ago, LAwLz said:

It's great that they are working with developers to make sure the technology will be properly supported, but if they are still "evaluating many different ways" of getting their upscaling working then to me it sounds like they are just in the planning stages

Totally agree with you here; it's like what I mentioned in the quote above. 


1 hour ago, LAwLz said:

if they are still "evaluating many different ways" of getting their upscaling working then to me it sounds like they are just in the planning stages. I got the impression that FidelityFX would launch soon™ but now it sounds like it might be one or maybe even several years away.

I think that was also the takeaway I got from the OP (haven't checked the links). Different upscaling methods will have different strengths and weaknesses. Nvidia's approach does give interesting results beyond more traditional methods, at a predictable compute cost. My concern is that non-ML methods may not scale well in terms of computational complexity, but AMD will have better people than me working on it, and we will judge them by their results when it becomes available.

 

I'm reminded about the texture sharpening thing they released around the time DLSS came out, and the resulting arguments about the merits of both. Judging the visual quality of whatever they create will be particularly interesting.
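As a reference point for what a non-ML upscaler looks like, here is a minimal 1-D Lanczos resampler in Python. Each output sample touches only a fixed window of input taps, so the cost is predictable and data-independent; the open question is quality versus ML methods, not cost. This is an illustrative sketch only, not how any shipping upscaler is implemented:

```python
import math

def lanczos_kernel(x, a=3):
    """Lanczos windowed-sinc kernel; nonzero only on (-a, a)."""
    if x == 0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

def resample_1d(samples, factor, a=3):
    """Upscale a 1-D signal by an integer factor using Lanczos interpolation.

    Each output sample sums ~2*a weighted input taps, so the cost is
    O(len(output) * a) -- fixed and predictable, unlike a neural net pass.
    """
    out = []
    for i in range(len(samples) * factor):
        center = i / factor                      # position in input space
        lo = math.floor(center) - a + 1
        hi = math.floor(center) + a
        acc = weight_sum = 0.0
        for j in range(lo, hi + 1):
            w = lanczos_kernel(center - j, a)
            # Clamp indices at the edges (replicate border samples).
            acc += w * samples[min(max(j, 0), len(samples) - 1)]
            weight_sum += w
        out.append(acc / weight_sum)             # normalize the window
    return out

print(resample_1d([0.0, 1.0], 2))  # 2x upscale of a two-sample edge
```

The midpoint lands at exactly 0.5, and the slight overshoot past 1.0 at the clamped edge is the classic ringing that sharpening-style filters trade on.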

Main system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, Corsair Vengeance Pro 3200 3x 16GB 2R, RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


28 minutes ago, porina said:

I think that was also the take away I got from the OP (haven't checked links). Different upscaling methods will have different strengths and weaknesses. nvidias approach does give interesting results beyond more traditional methods at a predictable compute cost. My concern is that using non-ML methods may not scale well in terms of computational complexity, but AMD will have better people than me working on it and we will judge them by their results when it does become available.

 

I'm reminded about the texture sharpening thing they released around the time DLSS came out, and the resulting arguments about the merits of both. Judging the visual quality of whatever they create will be particularly interesting.

It'll be interesting to see this play out, mostly because we've been expecting at least some leverage of DirectML in this.

 

Still, the ML part might simply be on the backend at the design level rather than in the active processing.


2 hours ago, LAwLz said:

Maybe I am misinterpreting what AMD said here, but this news makes me worried.
It sounds to me like we are quite a long way from FidelityFX being released, and we have no idea how good it will be. 

You're not the only person that interpreted it as a long way off. I wouldn't even like to guess.


What concerns me is that by the time it comes out, DLSS will have been so deeply implemented throughout the industry that barely anyone outside console developers will look into AMD's solution.

 

Almost sounds as if they didn't expect DLSS post-2.0 to have the impact it did.

The Workhorse (AMD-powered custom desktop)

CPU: AMD Ryzen 7 3700X | GPU: MSI X Trio GeForce RTX 2070S | RAM: XPG Spectrix D60G 32GB DDR4-3200 | Storage: 512GB XPG SX8200P + 2TB 7200RPM Seagate Barracuda Compute | OS: Microsoft Windows 10 Pro

 

The Portable Workstation (Apple MacBook Pro 16" 2021)

SoC: Apple M1 Max (8+2 core CPU w/ 32-core GPU) | RAM: 32GB unified LPDDR5 | Storage: 1TB PCIe Gen4 SSD | OS: macOS Monterey

 

The Communicator (Apple iPhone 13 Pro)

SoC: Apple A15 Bionic | RAM: 6GB LPDDR4X | Storage: 128GB internal w/ NVMe controller | Display: 6.1" 2532x1170 "Super Retina XDR" OLED with VRR at up to 120Hz | OS: iOS 15.1


It sounds like all they have done so far is give it a name. As they still haven't even decided exactly how they want to achieve their goal, I wouldn't bank on a release anytime soon. They're likely only bringing this up to market their GPUs so that people don't buy based on DLSS support alone.

 

Hardware Unboxed also addressed this recently and reached pretty much the same conclusion.

If someone did not use reason to reach their conclusion in the first place, you cannot use reason to convince them otherwise.


3 minutes ago, D13H4RD said:

What concerns me is that by the time it comes out, DLSS will have been so deeply implemented throughout the industry that barely anyone outside console developers will look into AMD's solution.

Almost sounds as if they didn't expect DLSS post-2.0 to have the impact it did.

Well, for better or for worse, "console developers" basically includes all AAA titles, so...

 

But yeah, +1 to the list of "this is gonna take way too long".

Feels like a "hey guys! remember us? Yeah, we totally still exist!"


Fully expecting it to be kind of disappointing. They'll have to impress me.

"If a Lobster is a fish because it moves by jumping, then a kangaroo is a bird" - Admiral Paulo de Castro Moreira da Silva

"There is nothing more difficult than fixing something that isn't all the way broken yet." - Author Unknown

Spoiler

Intel Core i7-3960X @ 4.6 GHz - Asus P9X79WS/IPMI - 12GB DDR3-1600 quad-channel - EVGA GTX 1080ti SC - Fractal Design Define R5 - 500GB Crucial MX200 - NH-D15 - Logitech G710+ - Mionix Naos 7000 - Sennheiser PC350 w/Topping VX-1


1 hour ago, Rauten said:

Well, for better or for worse, "console developers" basically includes all AAA titles, so...

 

But yeah, +1 to the list of "this is gonna take way too long".

Feels like a "hey guys! remember us? Yeah, we totally still exist!"

Yeah, though the concern I have has more to do with both the timeframe and how good it is.

 

If AMD can actually deliver a comparable, let alone superior, experience versus DLSS that is easier to implement whilst being platform-agnostic, then it will be a huge win. Trouble is, if it's not as good as DLSS, and if AMD's solution arrives after DLSS has already been well implemented in many titles, then perhaps developers won't bother with it.


8 minutes ago, D13H4RD said:

Yeah, though the concern I have has more to do with both the timeframe and how good it is.

 

If AMD can actually deliver a comparable, let alone superior, experience versus DLSS and is easier to implement whilst being platform agnostic, then it will be a huge win. Trouble is, if it's not as good as DLSS, and if AMD's solution arrives whilst DLSS has already been well-implemented in many titles, then perhaps it will not be bothered with.

If AMD can deliver a simpler-to-implement system, they'll win out. Nvidia will still throw money at things (and AMD will need to as well), but "easy to integrate" really matters with this stuff. We also don't know which parts will sit at the GPU-driver level and which at the game-engine level.

 

That's where this is all a tad confusing, because AMD could actually roll out multiple systems that work together. The real big victory would be a GPU-driver-level (and therefore game-agnostic) dynamic resolution system that maintains frame rates. That might require custom resolution choices within a game (basically you'd choose a "1440p DRS" mode), but that's really the big prize here.
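A driver-level dynamic resolution system like that is, at its core, a feedback loop on frame time: drop the internal render scale when frames run long, raise it when there is headroom, and let the upscaler stretch the result to native. A hypothetical Python sketch; every name and constant here is illustrative, not anything AMD has announced:

```python
class DynamicResolutionController:
    """Hypothetical DRS controller: adjusts render scale to hit a frame-time target."""

    def __init__(self, target_frame_ms=16.7, min_scale=0.5, max_scale=1.0):
        self.target = target_frame_ms   # e.g. 16.7 ms for 60 fps
        self.min_scale = min_scale
        self.max_scale = max_scale
        self.scale = max_scale          # fraction of native resolution per axis

    def update(self, last_frame_ms):
        # Proportional step: bigger misses move the scale faster.
        error = (self.target - last_frame_ms) / self.target
        self.scale += 0.1 * error
        self.scale = min(self.max_scale, max(self.min_scale, self.scale))
        return self.scale

ctrl = DynamicResolutionController()
for frame_ms in [20.0, 22.0, 18.0, 15.0, 14.0]:
    scale = ctrl.update(frame_ms)
    # The upscaler (FSR-style) would then stretch the scaled frame to native.
    print(f"frame {frame_ms} ms -> render scale {scale:.2f}")
```

A real implementation would filter frame times over a window and quantize the scale to resolutions the upscaler handles well, but the control loop itself stays about this simple.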


I'm not going to say "too little, too late" until it's released, but AMD has a lot of catching up to do. Luckily, AMD is one of the companies with some track record of catching up and even leaping ahead in their tech, so here's hoping it doesn't flop immediately.


4 minutes ago, Taf the Ghost said:

If AMD can get a more simple to implement system, they'll win out. Nvidia will still drop money at things (and AMD will need to as well), but "easy to integrate" really matters with this stuff. We also don't know which parts will be at the GPU driver level and which at the Game Engine level.

 

That's where this is all a tad confusing, because AMD could actually roll out multiple systems that work together. Because the real big victory would be a GPU-driver level (so game agnostic) Dynamic Resolution System to maintain frame rates. That might require custom resolution choices within a game (basically you choose 1440p DRS sort of thing), but that's really the big thing out there.

Both being easier to implement and especially being game-agnostic will be huge factors.

 

I think both NVIDIA and Epic have shown that, at least on Unreal Engine with the new toolkit, implementing DLSS is very straightforward, though we will have to see how this pans out across other games on various engines. The big question is whether AMD's solution will be game-agnostic; that would be a big deal.


4 minutes ago, Taf the Ghost said:

If AMD can get a more simple to implement system, they'll win out. Nvidia will still drop money at things (and AMD will need to as well), but "easy to integrate" really matters with this stuff. We also don't know which parts will be at the GPU driver level and which at the Game Engine level.

Makes me think of the G-Sync vs FreeSync thing. Will AMD once again come up with something not as good, but good enough, and as the lowest common denominator pick up adoption through that route? Especially as they have a lock on the consoles, this will be adopted unless it ends up fundamentally broken.


3 minutes ago, porina said:

Makes me think of the G-Sync vs FreeSync thing. Will AMD once again come up with something not as good, but good enough, and as the lowest common denominator pick up adoption through that route? Especially as they have a lock on the consoles, this will be adopted unless it ends up fundamentally broken.

AMD is and will remain in the secondary position in the GPU space, so they'll stick to the "open standards" approach. The important part is the integration packages, especially for UE and Unity. The interesting aspect will be how much of this can carry over to the consoles, which might also explain some of the delay. Coordination with MS and Sony would explain why it seems like AMD has dragged its feet on this (i.e. it's not dragging; it's just a much more complex integration issue).

 

We'll know more at some point. I know everyone is doing the DLSS Narrative stuff, but it's still not common and doesn't look to be for a long time.

