
Developer created a mod to enable AMD's FSR in SteamVR/OpenVR games

Treble Sketch

Summary

 

fholger, a Free and Open Source Software (FOSS) developer on GitHub, has created a DLL mod that enables AMD's FidelityFX Super Resolution (FSR) in SteamVR games that use the OpenVR API, as long as they're running DirectX 11.
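
Going by the project's README, installation is simple: you drop the mod's patched openvr_api.dll into the game's folder in place of the original, then tune it through a small config file that sits next to it. Something along these lines (treat the exact key names and values as illustrative; the README documents the real ones):

```json
{
  "fsr": {
    "enabled": true,
    "renderScale": 0.77,
    "sharpness": 0.9
  }
}
```

A renderScale below 1.0 makes the game render at a reduced resolution so FSR can upscale to the headset's target, while sharpness drives the strength of the sharpening pass.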

 

 

Images

[Image: Fallout 4 VR, native vs. FSR side-by-side comparison]

 

Side-by-side comparisons provided by the developer...

Skyrim VR: https://imgsli.com/NjAxNTM/0/1

Fallout 4 VR (shown above): https://imgsli.com/NjAxNTE/0/1

Fallout 4 VR comparison between FSR and CAS sharpening: https://imgsli.com/NTk1OTI/2/1

 

 

Quotes


I was originally going to wait until AMD releases the official sources, but since the GTA5 mod has been out for a few days now, I figure I can make this available as a sort-of early access preview :)

- fholger, in the Reddit thread linked below

 


This is a best-effort experiment and hack to bring this upscaling technique to VR games which do not support it natively. Please understand that the approach taken here cannot guarantee the optimal quality that FSR might, in theory, be capable of. AMD has specific recommendations where and how FSR should be placed in the render pipeline. Due to the nature of this generic hack, I cannot guarantee nor control that all of these recommendations are actually met for any particular game. Please do not judge the quality of FSR solely by this mod :)

 

I intend to keep working on the performance, quality and compatibility of this mod, so do check back occasionally.

- "Important Disclaimer" fholger gave in the project's readme file

 

 

 

My thoughts

I do think this will be super interesting for VR going forward, as upscaling will need to balance visual fidelity against framerate. Since you are a lot "closer" to the screen in a VR headset/HMD than desktop gamers are to their monitors, and most headsets have a pixel count between 1440p and 2160p (4K), the difference in detail will likely be a lot easier to notice.

 

I don't think we'll really know until more people can play around with FSR, or even DLSS, in way more VR titles, as there only seem to be three games that support DLSS in VR:

- No Man’s Sky

- Wrench

- Into the Radius

 

I haven't really seen many videos talking about upscaling in VR yet; hopefully that changes soon. Reviews can be tricky, though, due to the variety of headsets across the current three generations, each with varying quality:

Gen 0 (pre-Rift/Vive): early VR attempts.

Gen 1 (Rift/Vive): the first few headsets that started the rise of modern VR; the initial hype phase.

Gen 2 (WMR headsets/mobile VR/Quest/Rift S): an intermediary wave after the Gen 1 hype that still added some decent improvements and features over the first wave; development slows down as longer-term strategies are developed and then executed.

Gen 3 (Index/Quest 2/Reverb G2/Pimax 8K): further advances in quality/lenses/wireless/QoL; headsets become less tinker-heavy and friendlier to the general consumer.

 

DecaGear, a headset that I am personally looking forward to, seems to fit towards the tail end of Gen 3. But it also feels like the start of Gen 4, where advancements that previously came at a high price will quickly start to drop in cost as we move up the S-curve of product adoption, especially as better GPUs/CPUs/headsets are released in the coming 3-5 years!

 

I do think it's going to be an interesting few years ahead for the entire gaming industry, with better hardware (that we can HOPEFULLY buy...) and software that complements the hardware well (DLSS/FSR/others)!

 

 

Sources

Reddit "announcment" post: https://www.reddit.com/r/virtualreality/comments/of4guu/i_created_a_mod_to_enable_amds_fidelityfx_super/

GitHub repo: https://github.com/fholger/openvr_fsr

DLSS VR: https://worthplaying.com/article/2021/5/18/news/125960-nvidia-adds-dlss-support-to-9-titles-including-vr-new-game-ready-driver/


34 minutes ago, RejZoR said:

Pretty cool, especially since VR always requires more horsepower and FSR can mitigate that.

Can you explain this?


That's really good.



27 minutes ago, pas008 said:

Can you explain this?

With a normal game on a normal computer running on high graphics, the computer renders 3D graphics from one camera view, in real time, at the monitor's resolution (let's say 1080p for instance). High graphics look good to you because the monitor is kinda distant from your eyes and you can see the whole thing at once.

 

On something like a Valve Index, the computer must render two slightly different 3D images from two camera views, each at 1440x1600, at high graphics. It's doing twice the work (or potentially more, say, if you're streaming with a third "observer" camera). Reducing the graphical quality is an option here; the tradeoff is that the displayed images are inches from your eyes, so any small reduction in quality can be VERY noticeable, whereas on a monitor on your desk you might not notice the difference.
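
To put rough numbers on "twice the work" (the resolutions and scale factors here are typical values, not exact figures):

```python
# Back-of-the-envelope pixel throughput: 1080p monitor vs. Valve Index (approximate)
monitor = 1920 * 1080 * 60                  # 1080p at 60 Hz

axis_scale = 1.4                            # SteamVR renders roughly 1.4x per axis
                                            # to compensate for lens distortion
index = 2 * (1440 * axis_scale) * (1600 * axis_scale) * 90   # two eyes at 90 Hz

print(f"monitor: {monitor / 1e6:.0f} MPix/s")  # ~124 MPix/s
print(f"index:   {index / 1e6:.0f} MPix/s")    # ~813 MPix/s
print(f"ratio:   {index / monitor:.1f}x")      # ~6.5x the pixel work, before any overhead
```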


41 minutes ago, pas008 said:

Can you explain this?

VR can use two screens or one big one, with lenses to give good focus and field of view, but it can be a very personal thing to set up, a bit like glasses (some VR headsets can be adjusted). Each eye needs good resolution and a good field of view, and an image has to be created for each eye.

 

So think 4K, but x2 for VR: a lot of pixels and rendering.

 

VRSS

https://www.youtube.com/watch?v=Z-YrdkazD5o

 

PPI
https://www.youtube.com/watch?v=SXq1ZXDgPfE

 


13 minutes ago, HarryNyquist said:

With a normal game on a normal computer running on high graphics, the computer renders 3D graphics from one camera view, in real time, at the monitor's resolution (let's say 1080p for instance). High graphics look good to you because the monitor is kinda distant from your eyes and you can see the whole thing at once.

 

On something like a Valve Index, the computer must render two slightly different 3D images from two camera views, each at 1440x1600, at high graphics. It's doing twice the work (or potentially more, say, if you're streaming with a third "observer" camera). Reducing the graphical quality is an option here; the tradeoff is that the displayed images are inches from your eyes, so any small reduction in quality can be VERY noticeable, whereas on a monitor on your desk you might not notice the difference.

Wouldn't it make sense to use one of the eyes for the observer view instead of creating a third dedicated render output? It would essentially be free, even if some limits applied, like not getting the entire full image that would fit perfectly on an observer monitor.


6 minutes ago, RejZoR said:

Wouldn't it make sense to use one of the eyes for the observer view instead of creating a third dedicated render output? It would essentially be free, even if some limits applied, like not getting the entire full image that would fit perfectly on an observer monitor.

That wouldn't work.

 

To have a 3D image, both eyes need to see a slightly different image; showing one image with an offset depending on the eye wouldn't work, as it would make background objects appear as "3D" as the foreground objects. With two different cameras you get a realistic 3D effect, since background objects stay in pretty much the same place while foreground objects end up in very different positions.

 

For example, if you place your hand close to your face, say more to the left side, then when you close one eye you'll see a different part of your hand, and your hand will block a different part of the background depending on which eye is open. With one image that you offset, that's not possible: the background will "move" with the hand, and the hand will always hide the same background assets regardless of which eye it's in front of.
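
A quick pinhole-camera sketch of why that is (the numbers are hypothetical, only the proportions matter): the disparity between the eyes scales with 1/depth, so a constant offset gets near and far objects equally wrong.

```python
# Screen-space disparity between the two eyes for a pinhole camera (illustrative)
#   disparity = IPD * focal_length / depth
ipd = 0.064     # average interpupillary distance, in metres
focal = 1000.0  # focal length in pixels (hypothetical value)

for depth_m in (0.3, 1.0, 5.0, 50.0):
    disparity_px = ipd * focal / depth_m
    print(f"object at {depth_m:>4} m -> {disparity_px:5.1f} px between the eyes")

# ~213 px for a hand at 30 cm vs. ~1.3 px at 50 m. A single image shifted by a
# constant amount can't reproduce that depth-dependent spread, so each eye
# really does need its own render.
```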



1 hour ago, pas008 said:

can you explain this

VR ideally needs a steady frame rate. While vendors have come up with ways to trick lower rates into usability, the primary quality goal is still not to need that. I don't know what recent systems do, but the original Vive, I think, ran at a fixed 90 Hz. The Rift was similar, but I think it was the first to introduce a trick where it generates motion-shifted images, so the effective minimum would be half that.
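
That trick (asynchronous reprojection, ASW and the like) boils down to something like this sketch (heavily simplified, not any vendor's actual algorithm):

```python
# Simplified compositor loop with reprojection (conceptual sketch only)
REFRESH_HZ = 90
FRAME_BUDGET_MS = 1000 / REFRESH_HZ   # ~11.1 ms to produce something every refresh

def reproject(frame, head_pose):
    # stand-in: a real compositor warps the previous image to match the new head pose
    return f"{frame} rewarped to {head_pose}"

def compositor_tick(fresh_frame, last_frame, head_pose):
    if fresh_frame is not None:
        return fresh_frame                   # game hit its deadline: show the real frame
    return reproject(last_frame, head_pose)  # game missed it: synthesize a frame, letting
                                             # the game effectively run at half rate (45 fps)

print(compositor_tick(None, "frame 41", "pose@t+11ms"))
```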

 

FSR or other upscaling could be another way to help maintain a higher fps. My gut feeling is this might help with made-for-monitor games that happen to support VR, but it is less useful in made-for-VR games, which will take better optimisations for that process, like foveated rendering and other ways to reduce the work actually needed.

 

55 minutes ago, HarryNyquist said:

On something like a Valve Index, the computer must render two slightly different 3D images from two camera views, each at 1440x1600, at high graphics. It's doing twice the work (or potentially more, say, if you're streaming with a third "observer" camera).

It's probably slightly less than twice the work, since you have near enough the same data set apart from the difference in observer position. The actual rendering part would be doubled, but setting up the scene etc. only needs to be done once.
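
A toy cost model of that split (the millisecond figures are invented, only the ratio matters):

```python
# Why stereo costs less than 2x: shared per-frame work vs. doubled per-eye work
SETUP_MS = 4.0   # simulation, animation, culling: done once per frame (made-up figure)
DRAW_MS = 5.0    # rasterising one camera view (made-up figure)

flat = SETUP_MS + DRAW_MS         # one camera
stereo = SETUP_MS + 2 * DRAW_MS   # two cameras sharing the setup work

print(f"flat:   {flat:.1f} ms/frame")                          # 9.0 ms
print(f"stereo: {stereo:.1f} ms/frame, {stereo / flat:.2f}x")  # 14.0 ms -> 1.56x, not 2x
```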

 

39 minutes ago, RejZoR said:

Wouldn't it make sense to use one of the eyes for the observer view instead of creating a third dedicated render output? It would essentially be free, even if some limits applied, like not getting the entire full image that would fit perfectly on an observer monitor.

I was just thinking that...

 

21 minutes ago, wkdpaul said:

That wouldn't work.

I think we have a misunderstanding about the claim. This is about an external observer, e.g. projecting onto a monitor so people outside the VR device can follow what is going on. It seems simpler to use one "eye" of the VR user's view than to render a third, different viewpoint. I think some distortion correction is needed, but that's about the only extra work.



29 minutes ago, porina said:

I think we have a misunderstanding about the claim. This is about an external observer, e.g. projecting onto a monitor so people outside the VR device can follow what is going on. It seems simpler to use one "eye" of the VR user's view than to render a third, different viewpoint. I think some distortion correction is needed, but that's about the only extra work.

Not sure I understand



1 minute ago, wkdpaul said:

Not sure I understand

You were talking about needing two viewpoints for the 3D image to be formed. The discussion is about a viewpoint for an external observer. Logically it would be efficient to re-use one of the two viewpoints already generated, as opposed to rendering from a third, different viewpoint.



1 minute ago, porina said:

You were talking about needing two viewpoints for the 3D image to be formed. The discussion is about a viewpoint for an external observer. Logically it would be efficient to re-use one of the two viewpoints already generated, as opposed to rendering from a third, different viewpoint.

Ah OK, that makes more sense now; I somewhat missed that part. Yeah, for an external display, one of the already-rendered views is used.



Would be cool if someone managed to make FSR a game-agnostic feature. I don't care if it also affects GUI and text.


5 hours ago, HarryNyquist said:

With a normal game on a normal computer running on high graphics, the computer renders 3D graphics from one camera view, in real time, at the monitor's resolution (let's say 1080p for instance). High graphics look good to you because the monitor is kinda distant from your eyes and you can see the whole thing at once.

 

On something like a Valve Index, the computer must render two slightly different 3D images from two camera views, each at 1440x1600, at high graphics. It's doing twice the work (or potentially more, say, if you're streaming with a third "observer" camera). Reducing the graphical quality is an option here; the tradeoff is that the displayed images are inches from your eyes, so any small reduction in quality can be VERY noticeable, whereas on a monitor on your desk you might not notice the difference.

 

5 hours ago, Quackers101 said:

VR can use two screens or one big one, with lenses to give good focus and field of view, but it can be a very personal thing to set up, a bit like glasses (some VR headsets can be adjusted). Each eye needs good resolution and a good field of view, and an image has to be created for each eye.

 

So think 4K, but x2 for VR: a lot of pixels and rendering.

 

VRSS

https://www.youtube.com/watch?v=Z-YrdkazD5o

 

PPI
https://www.youtube.com/watch?v=SXq1ZXDgPfE

 

 

4 hours ago, porina said:

VR ideally needs a steady frame rate. While vendors have come up with ways to trick lower rates into usability, the primary quality goal is still not to need that. I don't know what recent systems do, but the original Vive, I think, ran at a fixed 90 Hz. The Rift was similar, but I think it was the first to introduce a trick where it generates motion-shifted images, so the effective minimum would be half that.

 

FSR or other upscaling could be another way to help maintain a higher fps. My gut feeling is this might help with made-for-monitor games that happen to support VR, but it is less useful in made-for-VR games, which will take better optimisations for that process, like foveated rendering and other ways to reduce the work actually needed.

 

It's probably slightly less than twice the work, since you have near enough the same data set apart from the difference in observer position. The actual rendering part would be doubled, but setting up the scene etc. only needs to be done once.

 

I was just thinking that...

 

I think we have a misunderstanding about the claim. This is about an external observer, e.g. projecting onto a monitor so people outside the VR device can follow what is going on. It seems simpler to use one "eye" of the VR user's view than to render a third, different viewpoint. I think some distortion correction is needed, but that's about the only extra work.

I get that, but always? What if I'm gaming with Surround/Eyefinity on 3x 1440p/4K screens? How is that not more demanding?

Doesn't it all come down to how many pixels you are pushing overall?


4 hours ago, pas008 said:

I get that, but always? What if I'm gaming with Surround/Eyefinity on 3x 1440p/4K screens? How is that not more demanding?

Doesn't it all come down to how many pixels you are pushing overall?

Not fully sure; some of it could be done in different ways, and I'm not sure exactly what kind of thing you are referring to.

If one has to render two scenes, it means all the processing has to be done for both, which can be even more expensive (a lot of per-image processing gets doubled, although some of it can maybe be done once for the whole scene). A single scene can be scaled more easily depending on resolution, while rendering two different scenes and keeping the FPS and Hz the same, without causing sickness from unstable framing and motion, could be a real technical challenge. Not sure; it's been a while when it comes to VR, and some parts might have been more standardized by now.

Spoiler

Rant; might be hard to understand, or just nonsense.

 

If we use VR and raytracing as an example, you might apply raytracing to the whole scene, but each eye's view can have different models to render due to the difference in position/rotation for each eye, meaning more objects or textures get rendered or counted in the raytracing, sometimes the same model from a different angle. It's as if each eye counted as its own first-person view in a normal flat-screen game.

Although both eyes do share a common field where they see more or less the same thing from roughly the same side, so it could be more like 1.5x the raytracing rather than both scenes counted fully on their own (like splitting the rays into groups: one for the main focus area used in both views, plus a couple of zones for the sides rendered per view). And it might have to be more adjustable than that, or tie into eye-tracking features.

 


8 hours ago, porina said:

Logically it would be efficient to re-use one of the two viewpoints already generated, as opposed to rendering from a third, different viewpoint.

It would be the case for a simple observer view, but there's stuff like Beat Saber and VRChat that have (optional) separate viewpoint cameras, which would indeed be a third viewport to render.


14 hours ago, porina said:

You were talking about needing two viewpoints for the 3D image to be formed. The discussion is about a viewpoint for an external observer. Logically it would be efficient to re-use one of the two viewpoints already generated, as opposed to rendering from a third, different viewpoint.

Hmm, it would work for some people tagging along with the stream. Though from what I've seen, a few others would prefer to also see the model/user/friend moving around in 3D too, but that depends on people's preference.

 

5 hours ago, HarryNyquist said:

It would be the case for a simple observer view, but there's stuff like Beat Saber and VRChat that have (optional) separate viewpoint cameras, which would indeed be a third viewport to render.

I do wonder how GPU chiplets will handle stuff like VR/multi-viewport rendering; it's going to be interesting to see whether each eye gets the full die's performance or something else 😆


It should be noted that the full FSR source code should be available in a couple of weeks. But I do like how the community is taking to an interesting new tool.


Is it just me, or does FSR just look like a smoothing effect applied to games? Go look at Steve's side-by-side video and you can see that a lot of detail is gone. Nvidia has this effect as well, but it doesn't seem as pronounced, since I guess they are recreating pixels and not just upscaling and smoothing the existing image.


On 7/10/2021 at 12:14 AM, pas008 said:

Can you explain this?

Easy. The Vive Pro 2's screens have a resolution of 2448x2448 each. That's a combined resolution of 4896x2448, but that number doesn't really matter, because you have to render the square frame twice, from slightly different angles, and you have to render the game at a slightly higher resolution than that to account for the lenses' optical properties. For instance, games are rendered at a vertical resolution of 1916 on my Cosmos Elite despite the fact that its screens have a vertical resolution of 1700. Most games are also displayed on the computer's screen, so that needs to be rendered as well.

Next, on a Vive Pro 2 you have to push that many pixels at a whopping 120 FPS to saturate the full capabilities of its screens.

And then a lot of processing power goes into tracking the headset and the controllers in 3D space at 120 Hz, recalculating the camera position each time, and making sure that the frames are pushed to both eyes in perfect sync. As a result, the computer driving a headset like this has to be absolutely beastly.

If you can at least render the game at a much lower resolution and have it upscaled to the headset's native resolution, the system requirements become much more reasonable, at least when it comes to the graphics card, and high pixel density/high refresh rate headsets become a lot more viable. You will, however, need a capable CPU for VR regardless of what GPU or headset you have.
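
Putting those numbers together (the per-axis render scale is inferred from the Cosmos Elite figures above, so treat it as a rough estimate):

```python
# Rough pixel throughput for a Vive Pro 2 at full resolution and refresh rate
axis_scale = 1916 / 1700                              # ~1.13x per axis, from the Cosmos example
vive_pro2 = 2 * (2448 * 2448) * axis_scale**2 * 120   # two eyes, lens overdraw, 120 Hz
uhd_monitor = 3840 * 2160 * 120                       # a 4K 120 Hz monitor for comparison

print(f"Vive Pro 2: {vive_pro2 / 1e9:.2f} GPix/s")    # ~1.83 GPix/s
print(f"4K @ 120:   {uhd_monitor / 1e9:.2f} GPix/s")  # ~1.00 GPix/s
```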


50 minutes ago, That Franc said:

Easy. The Vive Pro 2's screens have a resolution of 2448x2448 each. That's a combined resolution of 4896x2448, but that number doesn't really matter, because you have to render the square frame twice, from slightly different angles, and you have to render the game at a slightly higher resolution than that to account for the lenses' optical properties. For instance, games are rendered at a vertical resolution of 1916 on my Cosmos Elite despite the fact that its screens have a vertical resolution of 1700. Most games are also displayed on the computer's screen, so that needs to be rendered as well.

Next, on a Vive Pro 2 you have to push that many pixels at a whopping 120 FPS to saturate the full capabilities of its screens.

And then a lot of processing power goes into tracking the headset and the controllers in 3D space at 120 Hz, recalculating the camera position each time, and making sure that the frames are pushed to both eyes in perfect sync. As a result, the computer driving a headset like this has to be absolutely beastly.

If you can at least render the game at a much lower resolution and have it upscaled to the headset's native resolution, the system requirements become much more reasonable, at least when it comes to the graphics card, and high pixel density/high refresh rate headsets become a lot more viable. You will, however, need a capable CPU for VR regardless of what GPU or headset you have.

To add on to this, eye tracking is still often handled in software. I'm not familiar with any protocols that allow eye-tracking data to be processed and sent at the HMD level (yet; I'm sure this will change), but my Vive Pro Eye uses around 20% of my processing power... just for the eye tracking!

 

As VR technology is still at a (relatively) early stage in the grand scheme of things, it will keep getting more powerful, but it will still take time for that power to translate into something that's easy to run on more systems, whether PCVR or standalone.


 


13 minutes ago, JBee said:

uses around 20% of my processing power... just for the eye tracking!

You sure? It isn't that expensive. The problem is that it needs to run in real time, so it gets scheduled very frequently and looks like it's taking that much processing power when in fact it isn't. A dedicated controller/processor would be more efficient, but could increase latency.


20 hours ago, pas008 said:

 

 

I get that, but always? What if I'm gaming with Surround/Eyefinity on 3x 1440p/4K screens? How is that not more demanding?

Doesn't it all come down to how many pixels you are pushing overall?

The thing is, your 3x 4K (or even something crazy like a 4x 4K grid) monitor setup running a single game still uses only one render pipeline: no matter the resolution, all your monitors together display only one frame at a time. VR, on the other hand, is two separate screens that require two separate frames shown at the same time, so it uses two render pipelines. VR doesn't double the resolution, it doubles the frames. To run your 8K monitor grid at 120 fps, the GPU only needs to render 120 frames per second, because the grid shows a single frame at a time; but to run 120 Hz VR, your GPU basically has to render 2x120 fps (not 240 fps sequentially, left->right->left->right, but 2x120 fps: left+right->left+right, two different views for every single refresh).

So not twice the resolution, but twice the frames.
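
For the raw numbers behind that (panel figures as discussed above, no render-scale overhead included):

```python
# pas008's surround scenario vs. a Vive Pro 2, in raw pixels per second
surround = 3 * (3840 * 2160) * 120   # 3x 4K at 120 Hz: one very wide frame per refresh
vr = 2 * (2448 * 2448) * 120         # two per-eye frames per refresh

print(f"surround: {surround / 1e9:.2f} GPix/s")  # ~2.99 GPix/s: more pixels overall...
print(f"VR:       {vr / 1e9:.2f} GPix/s")        # ~1.44 GPix/s

# ...so surround can absolutely be more demanding on raw shading. The difference is
# that surround sets up one camera and one pass per refresh, while VR repeats the
# per-view work (view/projection setup, culling differences, draw submission) twice
# per refresh, on top of needing a rock-steady frame rate.
```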


On 7/9/2021 at 8:25 PM, Caroline said:

Is there a way to enable it WITHOUT VR?

Yeah, this. I don't get why it isn't just for *all* games (even if limited to DX11, which the majority of games use anyway).

 

I was waiting for such a mod, and as such this news is slightly disappointing.

Why is this limited to VR only?


