Call me crazy, but I like Nvidia's DLSS, Frame Gen, and AI-powered tools. They add ways to play games at higher frame rates to match high-refresh monitors, allowing older cards like the 30 and 40 series to last longer than they otherwise would, and even making cards with less VRAM slightly more worthwhile.
It highlights some of the potential of AI as well. AI hasn't been explored much in games, but DLSS and Frame Gen show how it can be used positively.
I think the only argument that could be made is that the cards are too expensive, which is true.
I am not saying "Oh, Nvidia on top!" I am saying, "I like how they positively use AI."
I dislike that Nvidia is leaning more on AI and less on gamers now, and it is a sad reality.
I dislike that people talk trash on Nvidia for choosing to use AI to power their cards.


4 minutes ago, Cinlow said:

Call me crazy, but I like Nvidia's DLSS, Frame Gen, and AI-powered tools. They add ways to play games at higher frame rates to match high-refresh monitors, allowing older cards like the 30 and 40 series to last longer than they otherwise would, and even making cards with less VRAM slightly more worthwhile.

That's what they are supposed to do in theory, but in practice they don't.

 

They use more VRAM.

Frame gen becomes most useful when you already have plenty of performance, and it has major drawbacks at lower frame rates… if that doesn't bother you, then more power to you.


Frame gen makes the fps counter go up but the response slower. The lower the base frame rate, the worse the experience. Try running 4x FG on a game getting 30 fps; it would feel like you're walking through quicksand. It is not passable until your base frame rate is over 60-90 fps, depending on your sensitivity. The slower cards are likely to get the least measurable improvement.
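To put rough numbers on that (a back-of-the-envelope sketch, assuming the common interpolation model; real pipelines add a bit more overhead on top):

```python
# Interpolation-based frame gen has to withhold one real frame so it
# has a "next" frame to interpolate toward. That alone adds roughly one
# base frame time of input latency, regardless of the 2x/3x/4x factor.
def added_latency_ms(base_fps: float) -> float:
    return 1000.0 / base_fps  # one withheld real frame, in milliseconds

for fps in (30, 60, 90):
    print(f"{fps:>2} fps base -> ~+{added_latency_ms(fps):.1f} ms over native")
# 30 fps base -> ~+33.3 ms (the quicksand feel); 90 fps -> ~+11.1 ms
```

Which is why the base frame rate matters far more than the multiplier.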

AMD 7950x3d / Gigabyte Aurous Master X670E/ 64GB @ 6000c30 / 2 x 4TB Samsung 990 Pro / 44TB Synology 1522+ / MSI Gaming Trio 4090 / EVGA G6 1000w /Thermaltake View71 / LG C1 48in OLED + MSI 321URX

Custom water loop EK Vector AM4, D5 pump, Coolstream 420 radiator


I like frame gen. In VR, it is the only way to get a smooth experience and nice graphics in games like DCS, X-Plane 12, and MSFS 202x. I use the Quest 3's ASW and it's fine; sometimes there are artifacts, like the helicopter's rotor making waves, but after a few minutes I don't notice them anymore.

 

I tried ASW in racing games like ACC, and it is mostly fine. It works better if you run at 120 Hz and can maintain 60 fps. I'm not good enough to notice the latency when doing 36 fps with ASW on, but I prefer having a real 72 fps rather than 60 real frames and 60 fake ones.


Primarily it stems from, and then gets regurgitated by, people with a combination of no personal experience and/or use of the tech outside of its optimal and recommended conditions.

 

Examples: 1080p users trying to use DLSS Performance and complaining about the image quality, or 4070 users who get 45 fps native trying to use Frame Gen to improve their frame rate and complaining about the input latency penalty.

 

Then there are the people that just live in the past and hate on new things just to hate on them.

 

I've been a user of and believer in this tech since it was released, and I will continue to make use of it, as it massively improves my gaming experience when used properly.


Ryzen 7 7800x3D -  Asus RTX4090 TUF OC- Asrock X670E Taichi - 32GB DDR5-6000CL30 - SuperFlower 1000W - Fractal Torrent - Assassin IV - 42" LG C2

Ryzen 7 5800x - XFX RX6600 - Asus STRIX B550i - 32GB DDR4-3200CL14 - Corsair SF750 - Lian Li O11 Mini - EK 360 AIO - Asus PG348Q


Before the RTX era, pretty much everything was directly rendered. Some people attach a kind of purity to that and see upscaling and frame gen as tricks.

 

But even taking that historical era as the reference, I personally do find great value in upscaling. I tend to only use it at the higher quality settings, where it is often practically unnoticeable while giving a nice boost to frame rates. At lower settings the impact is more visible.

 

Frame gen is a bit more complicated. It can give better latency than native rendering without modern latency-reduction methods, but worse than native with that latency reduction. Still, if old native rendering was acceptable, frame gen latency can be too. To me, latency is pretty much a good-enough-or-not thing. If it feels good, I don't care if it is 1 ms or 100 ms. If it is bad, I don't care if it is 100 ms or 1000 ms.

 

I first tried Nvidia frame gen with Portal RTX on an under-powered 3070. FG did give an fps boost, but it felt a bit weird; I didn't have quite enough base fps to keep latency in the comfort zone. When AMD joined in with FSR3 FG, I tried that in the Forspoken demo. That felt OK in responsiveness, but unfortunately the mandatory FSR upscaling looked awful. More recently, I tried the HL2 RTX demo on a 5070 Ti. Here I could get an acceptable base frame rate with upscaling alone, and in this scenario FG did help with fluidity. Latency was well within my "good" zone, and turning on FG gave me that higher-fps smoothness. It wasn't perfect, though, with artefacts showing on UI elements when FG was turned on, as the interpolator can't separate the UI from world content. I'm not sure if that can be fixed in this instance, since the preferred approach is to draw the UI separately after processing the world content; if HL2 was never intended to work that way, it might not be possible. This should not be a problem for any game that natively supports FG.
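A toy sketch of that ordering problem (purely illustrative Python, not any real engine's or DLSS's API): a native integration interpolates world-only frames and overlays the UI afterwards, while a retrofit hands the interpolator finished frames with the HUD already baked in.

```python
# "Frames" are just labels here; the point is what the interpolator sees.
def generate_between(a: str, b: str) -> str:
    return f"interp({a}, {b})"        # stand-in for the FG network

def overlay_ui(frame: str) -> str:
    return f"{frame}+HUD"

world_n, world_n1 = "world_N", "world_N+1"

# Native FG support: interpolate world-only frames, draw the UI last,
# so HUD pixels are never warped.
native = [overlay_ui(f) for f in (world_n,
                                  generate_between(world_n, world_n1),
                                  world_n1)]

# Retrofit (the HL2 RTX situation): the HUD is already composited into
# the frames being interpolated, so crosshairs and text smear.
a, b = overlay_ui(world_n), overlay_ui(world_n1)
retrofit = [a, generate_between(a, b), b]

print(native[1])    # interp(world_N, world_N+1)+HUD  <- clean HUD
print(retrofit[1])  # interp(world_N+HUD, world_N+1+HUD)  <- warped HUD
```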

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, MSI Ventus 3x OC RTX 5070 Ti, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Alienware AW3225QF (32" 240 Hz OLED)
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 4070 FE, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, iiyama ProLite XU2793QSU-B6 (27" 1440p 100 Hz)
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


Even though I have a high-end GPU (RTX 4080), I still use DLSS Quality whenever I can.

I really do enjoy DLSS; it improves frame rate without really affecting visual quality, in my opinion.

 


Imo all of that is caused by Nvidia's disgusting and out-of-touch marketing. Yes, frame gen and DLSS improve frame rate while sacrificing less quality than the old way of turning down settings, but they are not equivalent to native rendering. Putting new cards with the new tech turned on into a direct fps comparison against older cards is fine, but that isn't the whole picture, so setting the marketing language and tone 100% according to those comparisons is not fine, if not outright scummy. "RTX 5070, 4090 performance" is brainrot speech.

 

Imo all they really have to do is show what settings are needed to get the GPUs in the comparison to the same frame rate at the same resolution, then show people the differences in graphical quality on the receiving end. That is the value of FG and DLSS, not just fps numbers, and especially not fps per price, since price is as unstable as it gets right now.

CPU: i7-2600K 4751MHz 1.44V (software) --> 1.47V at the back of the socket Motherboard: Asrock Z77 Extreme4 (BCLK: 103.3MHz) CPU Cooler: Noctua NH-D15 RAM: Adata XPG 2x8GB DDR3 (XMP: 2133MHz 10-11-11-30 CR2, custom: 2203MHz 10-11-10-26 CR1 tRFC:230 tREFI:14000) GPU: Asus GTX 1070 Dual (Super Jetstream vbios, +70(2025-2088MHz)/+400(8.8Gbps)) SSD: Samsung 840 Pro 256GB (main boot drive), Transcend SSD370 128GB PSU: Seasonic X-660 80+ Gold Case: Antec P110 Silent, 5 intakes 1 exhaust Monitor: AOC G2460PF 1080p 144Hz (150Hz max w/ DP, 121Hz max w/ HDMI) TN panel Keyboard: Logitech G610 Orion (Cherry MX Blue) with SteelSeries Apex M260 keycaps Mouse: BenQ Zowie FK1

 

Model: HP Omen 17 17-an110ca CPU: i7-8750H (0.125V core & cache, 50mV SA undervolt) GPU: GTX 1060 6GB Mobile (+80/+450, 1650MHz~1750MHz 0.78V~0.85V) RAM: 8+8GB DDR4-2400 18-17-17-39 2T Storage: HP EX920 1TB PCIe x4 M.2 SSD + Crucial MX500 1TB 2.5" SATA SSD, 128GB Toshiba PCIe x2 M.2 SSD (KBG30ZMV128G) gone cooking externally, 1TB Seagate 7200RPM 2.5" HDD (ST1000LM049-2GH172) left outside Monitor: 1080p 126Hz IPS G-sync

 

Desktop benching:

Cinebench R15 Single thread:168 Multi-thread: 833 

SuperPi (v1.5 from Techpowerup, PI value output) 16K: 0.100s 1M: 8.255s 32M: 7m 45.93s


What AI? Lol...

 

DLSS... looks like trash most of the time... Frame gen is more useful but has latency issues and sometimes looks like trash too...

What's not to like, right...?

 

Or maybe they should have put all that effort into raster performance instead of "ray tracing", DLSS, etc. Then devs could focus on making good games instead of shoveling the newest "features" (which they neither understand nor care about) into everyone's faces...

 

I always find it weird when people say observations or even indifference are "hate"... Makes me wonder what their agenda really is... but sure, good guy Nvidia, I guess?? 😂


3 hours ago, porina said:

Before the RTX era, pretty much everything was directly rendered. Some people attach a kind of purity to that and see upscaling and frame gen as tricks.

Well, they are... crutches to get the way-too-demanding "ray tracing" feature to work at "acceptable" performance... Remember when they said their "RTX" shenanigans would run on dedicated "RTX" cores and basically cost no performance?

 

(I don't really remember either, but that's what their marketing basically implied.)

 

It's really all mostly terrible marketing with buzzwords - but what really gets me is that people don't seem to understand that while some effects might use ray tracing, that's not really how ray tracing should be used, since you can achieve those effects easily with other, less demanding "tricks"... Ray tracing should be used as the rendering pipeline itself, replacing rasterization and getting rid of aliasing... Of course, we're probably still a decade or two away from cards being powerful enough for that.

 

As said, it's not really that fact alone; it's mostly the marketing, the bad optimization because of these features, and devs obviously abusing things like DLSS and frame gen as substitutes for basic game optimization.


The direction tells you... the direction

-Scott Manley, 2021

 

Software used:

Corsair Link (Anime Edition) 

MSI Afterburner 

OpenRGB

Lively Wallpaper 

OBS Studio

Shutter Encoder

Avidemux

FSResizer

Audacity 

VLC

WMP

GIMP

HWiNFO64

Paint

3D Paint

GitHub Desktop 

Superposition 

Prime95

Aida64

GPUZ

CPUZ

Generic Logviewer



5 hours ago, Cinlow said:

and even making cards with less VRAM slightly more worthwhile.

As a hardline backer of frame generation (currently using LSFG 3.0) and temporal upscaling (ping-ponging between LS1 and XeSS Quality), no. VRAM is definitely a hard case of "there's no replacement for displacement" for GPUs, especially in the new paradigm where textures are loaded directly into VRAM and system RAM for faster load times and smoother performance. Less than 12 GB is already insufficient (hell, some titles like Black Myth: Wukong and MH: Wilds easily exceed that, into 16 GB territory at 4K), and a game running up against that limit quickly suffers in its 1% and 0.1% lows (read: more stutters) because more swaps to the SSD have to be done.
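For anyone unfamiliar, this is roughly how those 1% / 0.1% lows are computed from a frame-time capture (a standalone sketch using the "average of the slowest n frames" definition; tools like CapFrameX offer a few variants):

```python
# "x% low" fps: average the slowest x% of frame times, convert to fps.
def pct_low_fps(frame_times_ms: list[float], pct: float) -> float:
    worst = sorted(frame_times_ms, reverse=True)   # slowest frames first
    n = max(1, int(len(worst) * pct / 100))
    return 1000.0 / (sum(worst[:n]) / n)

# 1000 frames at a steady 16.7 ms, plus ten 80 ms VRAM-paging spikes:
times = [16.7] * 990 + [80.0] * 10
print(f"average: {1000 * len(times) / sum(times):.1f} fps")  # ~57.7 fps
print(f"1% low:  {pct_low_fps(times, 1):.1f} fps")           # 12.5 fps
# The average barely moves, but the lows crater - that's the stutter.
```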

 

And as everyone points out, the latency issue is still terrible for a lot of people; even the best efforts are still +15 ms over native. For tightly timed genres like rhythm games, bullet hells, and souls-likes, it's unacceptable. And another thing: frame generation also doesn't cover you when you are heavily GPU-bound. If you're at 100% utilization and then turn on frame generation, it's a stutter fest, and it shows not only in feel but also in graphs if you run CapFrameX to find out.
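The GPU-bound case is easy to show with rough numbers (all costs assumed for illustration): the generation pass competes with the game for the same GPU, so it lowers the real frame rate before it multiplies the displayed one.

```python
render_ms = 16.7   # game frame cost at 100% GPU utilization (~60 fps)
fg_ms = 3.0        # assumed per-frame cost of the generation pass

base_fps = 1000 / render_ms            # ~60 real fps without FG
new_base = 1000 / (render_ms + fg_ms)  # ~51 real fps once FG steals time
displayed = new_base * 2               # 2x FG -> ~102 fps on screen
print(f"{base_fps:.0f} -> {new_base:.0f} real fps, ~{displayed:.0f} displayed")
# Smoother on a counter, but input now samples at ~51 Hz, and any hitch
# in the real frames gets duplicated into the generated ones.
```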

Press quote to get a response from someone! | Check people's edited posts! | Be specific! | Trans Rights

 


3 hours ago, Jurrunio said:

Yes, frame gen and DLSS improve frame rate while sacrificing less quality than the old way of turning down settings, but they are not equivalent to native rendering.

I want to stress that in almost every situation where DLSS looks "better" than native, the cause is actually a bad TAA implementation at native.


8 hours ago, Cinlow said:

Call me crazy, but I like Nvidia's DLSS, Frame Gen, and AI-powered tools. They add ways to play games at higher frame rates to match high-refresh monitors, allowing older cards like the 30 and 40 series to last longer than they otherwise would, and even making cards with less VRAM slightly more worthwhile.
It highlights some of the potential of AI as well. AI hasn't been explored much in games, but DLSS and Frame Gen show how it can be used positively.
I think the only argument that could be made is that the cards are too expensive, which is true.
I am not saying "Oh, Nvidia on top!" I am saying, "I like how they positively use AI."
I dislike that Nvidia is leaning more on AI and less on gamers now, and it is a sad reality.
I dislike that people talk trash on Nvidia for choosing to use AI to power their cards.

My first encounter with DLSS was when I tried turning on ray tracing in Cyberpunk. Suddenly every texture had this "oily" look to it, and I then discovered that DLSS had also been activated without my approval. It wasn't a good first impression, and since then I've been reluctant to turn it on, preferring the clearer picture of native resolution over upscaling, and balancing other settings to preserve an acceptable fps.

 

I haven't really played around with frame gen, as I have not been in a situation where I need it.

 

The thing you should probably keep in mind when saying that "the only argument that could be made is that the cards are too expensive" is the backlash they caused when they twisted the truth and claimed that the 5070 would deliver 4090 performance, and then didn't really go mea culpa when called out on those claims, but rather doubled down.

AI-generated frames are not the same as natively rendered ones.

While AI development (I really do hate the widespread use of that word) is very interesting and holds potential, cramming it into every product and going "there, it's better now because it's with AI" may not be the way to go.

mITX is awesome! I regret nothing (apart from when picking parts or having to do maintenance *cough*cough*)


Reading the comments in this thread, some are focusing on worst-case scenarios. These technologies are not a universal solution, so there will be cases where they're not appropriate to use. What if I told you, you don't have to use them in those situations! However, there are solid, tangible gains where they are appropriate. Unless you're struggling to run 1080p in the first place, non-AMD upscaling is generally mature. Frame gen is still maturing, so it is more situational.



To be honest, I also fell onto a bit of a hate train, since all I had experienced was the high-input-latency mess of AMD's AFMF on my ROG Ally, where I only ever got a base frame rate of between 40 and 60 fps and the input latency AFMF added was really noticeable. I stopped using it after a while because I had to re-patch the custom driver every time there was a driver update, and I got tired of that lol.

I then tried frame gen again a few weeks ago in Cyberpunk on my RTX 4070 Ti, where with everything cranked and RT & PT on, my base frame rate was around 60. It was genuinely cool having over 120 fps at times with everything cranked and the very demanding path tracing turned on, so I finally got off the hate train. It's still misleading how Nvidia said the 5070 was gonna have 4090 performance; that's definitely wrong, as the 4090 was using regular frame gen while the 5070 was using MFG.


13 hours ago, Mark Kaine said:

What AI? Lol...

DLSS... looks like trash most of the time... Frame gen is more useful but has latency issues and sometimes looks like trash too...

What's not to like, right...?


I'm not sure what you are looking at. Nine times out of ten I have to pause the game and pull out a magnifying glass to tell the difference. Usually, when there is a more obvious difference, DLSS is the one that looks BETTER, due to not having the lost textures or blurry image you get with TAA.



18 hours ago, Cinlow said:

Call me crazy, but I like Nvidia's DLSS, Frame Gen, and AI-powered tools. They add ways to play games at higher frame rates to match high-refresh monitors, allowing older cards like the 30 and 40 series to last longer than they otherwise would, and even making cards with less VRAM slightly more worthwhile.
It highlights some of the potential of AI as well. AI hasn't been explored much in games, but DLSS and Frame Gen show how it can be used positively.
I think the only argument that could be made is that the cards are too expensive, which is true.
I am not saying "Oh, Nvidia on top!" I am saying, "I like how they positively use AI."
I dislike that Nvidia is leaning more on AI and less on gamers now, and it is a sad reality.
I dislike that people talk trash on Nvidia for choosing to use AI to power their cards.

I consider DLSS Quality to be about the same as native. The problem I have is with frame gen: it doesn't feel the same, and the overhead is huge. So I can easily end up in a situation where it's 4K 90 fps native vs. 4K 60 fps generated up to 120 with FG, and 4K 90 fps feels better overall.
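Putting numbers on that trade-off (same one-withheld-frame model sketched earlier in the thread; the fps figures are the ones quoted above):

```python
native_ms = 1000 / 90        # ~11.1 ms between input samples at 90 fps
fg_real_ms = 1000 / 60       # ~16.7 ms real frame time once FG overhead hits
fg_felt_ms = fg_real_ms * 2  # plus roughly one withheld frame for interpolation

print(f"native 90 fps: ~{native_ms:.1f} ms, 60->120 FG: ~{fg_felt_ms:.1f} ms felt")
# ~11 ms vs ~33 ms: the screen shows 120 fps, but the hands feel worse
# than plain 60, which is the "doesn't feel the same" part.
```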

 

The main use case for now might be 120 fps → 240 with FG; it's kinda niche.

 

The main problem I have with Nvidia is them forcing 12VHPWR down our throats, and the 50-series drivers actually being a dumpster fire.

5950x 1.33v 5.05 4.5 88C 195w ll R20 12k ll drp4 ll x570 dark hero ll gskill 4x8gb 3666 14-14-14-32-320-24-2T (zen trfc)  1.45v 45C 1.15v soc ll 6950xt gaming x trio 325w 60C ll samsung 970 500gb nvme os ll sandisk 4tb ssd ll 6x nf12/14 ippc fans ll tt gt10 case ll evga g2 1300w ll w10 pro ll 34GN850B ll AW3423DW

 

9900k 1.36v 5.1avx 4.9ring 85C 195w (daily) 1.02v 4.3ghz 80w 50C R20 temps score=5500 ll D15 ll Z390 taichi ult 1.60 bios ll gskill 4x8gb 14-14-14-30-280-20 ddr3666bdie 1.45v 45C 1.22sa/1.18 io  ll EVGA 30 non90 tie ftw3 1920//10000 0.85v 300w 71C ll  6x nf14 ippc 2000rpm ll 500gb nvme 970 evo ll l sandisk 4tb sata ssd +4tb exssd backup ll 2x 500gb samsung 970 evo raid 0 llCorsair graphite 780T ll EVGA P2 1200w ll w10p ll NEC PA241w ll pa32ucg-k

 

prebuilt 5800 stock ll 2x8gb ddr4 cl17 3466 ll oem 3080 0.85v 1890//10000 290w 74C ll 27gl850b ll pa272w ll w11

 

