
NVIDIA Project Beyond GTC Keynote with CEO Jensen Huang: RTX 4090 + RTX 4080 Revealed

27 minutes ago, porina said:

What's the consensus for motion blur in gaming? I tend to turn off any motion blur if given the option.

I also turn it off. I just find it annoying in racing games, and it makes them harder to play.

 

27 minutes ago, porina said:

Do game devs tune the motion blur according to the 180 degree rule? This would vary with frame rate and I'm not sure they're that smart.

I'd be willing to bet there is no "rule" and they apply it based on artistic goals of what they think the game should look like. I'll point to games like Need for Speed and Burnout as examples.


14 hours ago, Dracarris said:

Welp, okay. I think the difference in CUDA cores and SMs is rather small, but the wider memory interface sure is a bigger difference.

There is barely a difference between the 3080, 3080 12GB and 3080 Ti, and it's the same die, which is why it isn't an issue. In this instance it's a massive difference and a die that is a full tier lower. When you change the die, it's not the same card regardless of naming.

AMD 7950x / Asus Strix B650E / 64GB @ 6000c30 / 2TB Samsung 980 Pro Heatsink 4.0x4 / 7.68TB Samsung PM9A3 / 3.84TB Samsung PM983 / 44TB Synology 1522+ / MSI Gaming Trio 4090 / EVGA G6 1000w /Thermaltake View71 / LG C1 48in OLED

Custom water loop EK Vector AM4, D5 pump, Coolstream 420 radiator


25 minutes ago, ewitte said:

There is barely a difference between the 3080, 3080 12GB and 3080 Ti, and it's the same die, which is why it isn't an issue.

Yeah, that's what I meant. However, I have to admit I got burned by this naming scheme when deciding between a 3080 and a 3080 12GB. The regular 3080 at that point was around 900€ while the 12GB was still around 1050€. Assuming the only difference was the 2GB of VRAM, I went for the regular 3080. I won't let that happen to me again.


8 minutes ago, porina said:

On "soap opera effect" I don't have experience of it so can't directly relate it to gaming. In trying to read around I'm still unclear what is it about it. For example, has anyone had the effect when playing games around 120 fps? I don't believe it is just a high frame rate causing it.

 

My guess is that motion blur may lead to the soap opera effect. Motion blur is natural in regular video content. There's a 180 degree shutter rule, which I'd simplify as: the effective motion blur time is half the frame time. That's baked into the content at capture. If you then adjust the frame rate without altering that exposure time, the 180 degree rule is broken, which leads to a different perception of the content. It can be used for creative purposes, but generally feels "wrong".

 

What's the consensus for motion blur in gaming? I tend to turn off any motion blur if given the option. Do game devs tune the motion blur according to the 180 degree rule? This would vary with frame rate and I'm not sure they're that smart. I'd guess they just offer different fixed degrees of blur and it is up to the user to adjust. This may need to be a consideration if motion blur is rendered by the game engine and DLSS 3 creates extra frames; a lower motion blur amount would be appropriate for the generated frames.

The soap opera effect is usually associated with using interpolation on videos / movies. It can make some movement more fluid, but because the interpolator has no data other than the frames themselves, it smooths things that should be more abrupt, which makes it look off. If you want to play around with it, there's a program that lets you do it in VLC:

https://www.svp-team.com/wiki/SVP:VLC

A similar effect happens in movies recorded at a higher frame rate if the camera movements don't account for the change. At least for me, fast smooth pans are what get me (I think it's because the eye doesn't make those movements smoothly; it jumps around), while slower pans look much better at the higher frame rate.

Because you usually control the camera in games, these things aren't a problem there.


58 minutes ago, leadeater said:

You get a stuttering mess most of all when you are playing 24 fps content on a 60 fps (60 Hz) or greater output; unless it's synced correctly you'll get stutters, as 24 does not divide into 60 perfectly. Also, problems are much more visible on a larger screen. I watch on a 96" projector screen, and native, untouched 24 fps end to end always looks better and smoother than anything else.

The TVs I have also natively support 24 fps playback. At least I think they do. They are high-end OLED TVs; I'd be surprised if they lacked such a basic TV feature.
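
For anyone wondering what the 24-into-60 mismatch actually does, here's a minimal Python sketch of plain 3:2 pulldown (no interpolation, just frame repetition; the cadence values are the standard ones, the rest is simple arithmetic):

# 24 fps on a 60 Hz display without interpolation: source frames are repeated
# in a 3-2-3-2 cadence (3:2 pulldown), so on-screen time alternates between
# 50 ms and ~33.3 ms instead of a steady ~41.7 ms. That alternation is the judder.
source_fps, display_hz = 24, 60
print(f"average refreshes per source frame: {display_hz / source_fps}")  # 2.5

cadence = [3, 2, 3, 2]  # refresh counts for the first few source frames
for i, repeats in enumerate(cadence):
    print(f"source frame {i}: held for {repeats} refreshes = {repeats * 1000 / display_hz:.1f} ms")

A display that can drop to 24 Hz (or run at 120 Hz, which 24 divides into evenly) avoids this entirely, which is presumably what native 24 fps playback on those TVs is doing.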

 

58 minutes ago, leadeater said:

Problem is everyone has different equipment and tends to like the look of things differently; art is in the eye of the beholder, etc.

 

Well yes, but games and movies are entirely different things, so in this respect they aren't actually that directly comparable. A movie is a fixed frame rate; you have no way to increase the underlying "rendering output frame rate". Games have a completely different render and output pipeline, so what you can do and how you can do it are vastly different, which greatly affects the end result.

 

Like I said, it can be fine in games and terrible for movies; what's good for one isn't always good for the other.

In the end I'd rather have the option than not have it. I'm personally a fan of interpolation, and I'm aware that there are plenty of people who don't like it. Still, just because someone doesn't like or use a feature doesn't mean it shouldn't exist. Luckily we're not all the same.

If someone did not use reason to reach their conclusion in the first place, you cannot use reason to convince them otherwise.


So the FE card is 450 watts, while we can increase the power limit value to 600 W.
 

So is that in like Afterburner? Just slide the power limit? 
 

If an FE can already max out the new 600 watt cable, then what's the point of AIB cards if you're water cooling?
 

 

CPU: i9-13900KS | Motherboard: Asus z790 HERO | Graphics: ASUS TUF 4090 OC | Ram: GSkill 7600 DDR5 | Screen: ASUS 48" OLED 138hz


19 minutes ago, Shzzit said:

If an FE can already max out the new 600 watt cable, then what's the point of AIB cards if you're water cooling?

Just like in past generations, AIB cards will have more power connectors.

If someone did not use reason to reach their conclusion in the first place, you cannot use reason to convince them otherwise.


8 minutes ago, Stahlmann said:

Just like in past generations, AIB cards will have more power connectors.

You sure? I've looked over all the 4090 cards and it seems like they ALL have a single ATX 3.0 connector.

 

Ordered a 1650 watt ATX 3.0 PSU with 2x of the new 600 watt connectors, just in case lol.

CPU: i9-13900KS | Motherboard: Asus z790 HERO | Graphics: ASUS TUF 4090 OC | Ram: GSkill 7600 DDR5 | Screen: ASUS 48" OLED 138hz


2 hours ago, porina said:

On "soap opera effect" I don't have experience of it so can't directly relate it to gaming. In trying to read around I'm still unclear what is it about it. For example, has anyone had the effect when playing games around 120 fps? I don't believe it is just a high frame rate causing it.

 

In a nutshell... the "soap opera effect" doubles/triples/quadruples/quintuples the frames, which makes things filmed at 24fps appear to be 30/60fps. It gets the name from soap operas being filmed live on 60i cameras and not on film at 24fps. That additional frame rate makes it a bit unsettling. When the 120 and 240 Hz flat panels came out and did it without asking, people were seriously going out of their minds about how terrible it looked.

 

I can't explain the effect because it can't be recorded or captured in any meaningful way. However, the brain absolutely goes "this looks wrong, and is extremely creepy".

 

If DLSS is in effect doing this to not only upscale, but "up-framerate", there's going to be a wave of "motion-sickness defaults" induced in games that never did so before.

 

This is also an effect you notice if you watch video that was captured at 60fps but originally only displayed at 30fps. Suddenly the video seems a lot smoother, too smooth.

 

The first time I saw the "soap opera effect" was on my parents' LG 4K television while they were watching something, and it made a horror film look like something filmed with a 30 year old VHS-C camcorder.

 

The tradeoffs are numerous, but in general when it comes to television and film, it absolutely ruins the intended visual effects, makes things look blurrier as they are interpolated, and makes dark images lighter/light images darker.

 

Basically it makes "Big budget" content look like it was filmed on cheap equipment because everything looks worse.

 

So I can't speak for DLSS 3, but I'm sure the GPU can do a much better job than the television's overdriven motion blur interpolation. However, I've already tried to play games with all the RTX features turned on, and DLSS on a 30 series card is already a significant disappointment. Sure, you might get 4K out of 1080p input footage, but it doesn't look like native "4K" footage; it looks like a 1080p image was upscaled and then compressed with JPEG at 60%. A still image looks okay, but in motion the artifacts are absolutely noticeable.

 


Wow, these new cards will suck for some PCs: 3.5 to 4 slot cards.

A lot of heat pipes, which could increase cost again, plus the need for a GPU support frame or stick to hold the GPU up so it doesn't sag.

Some are making adapter cables for ATX 3.0: one ATX 3.0 connector into 4x 8-pin ATX 2.0, however that would work.

So there's cooling these cards, having a case that fits, and designs that don't look bad for the GPUs. Also the heat pushed toward the CPU.


4 minutes ago, Quackers101 said:

Some are making adapter cables for ATX 3.0: one ATX 3.0 connector into 4x 8-pin ATX 2.0, however that would work.

How would it not work? It's just copper wires.

Location: Kaunas, Lithuania, Europe, Earth, Solar System, Local Interstellar Cloud, Local Bubble, Gould Belt, Orion Arm, Milky Way, Milky Way subgroup, Local Group, Virgo Supercluster, Laniakea, Pisces–Cetus Supercluster Complex, Observable universe, Universe.


12700, B660M Mortar DDR4, 32GB 3200C16 Viper Steel, 2TB SN570, EVGA Supernova G6 850W, be quiet! 500FX, EVGA 3070Ti FTW3 Ultra.

 


5 minutes ago, Kisai said:

In a nutshell... the "soap opera effect" doubles/triples/quadruples/quintuples the frames, which makes things filmed at 24fps appear to be 30/60fps. It gets the name from soap operas being filmed live on 60i cameras and not on film at 24fps. That additional frame rate makes it a bit unsettling. When the 120 and 240 Hz flat panels came out and did it without asking, people were seriously going out of their minds about how terrible it looked.

Unfortunately your description reads like many I found while searching. It doesn't really provide more than a vague sense that something is off when increasing the frame rate by creating new frames. The 180 degree shutter rule I described in my earlier post provides a mechanism for why it might look wrong, but I don't know for sure that that's the reason.

 

Let's say we have video filmed normally at 30fps. It should have 1/60s of motion blur in each frame. If you shoot normal 60fps video, it should have 1/120s of motion blur in each frame. If you take the 30fps footage and interpolate frames to get 60fps, you still have the 1/60s motion blur in each frame, which is 2x what we expect. I suspect many PC games will have no motion blur, which is "off" in the other direction, but we're more used to it. It could also explain why stop motion looks off compared to real video. PC games that do implement motion blur may be a danger area for DLSS 3.0 if this is the mechanism.
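
To put rough numbers on that, here's a minimal Python sketch of the same reasoning (it assumes the 180 degree rule exactly, i.e. exposure equals half the frame time, and that interpolation simply carries the captured blur over):

# 180 degree shutter rule: motion blur (exposure) time = half the frame time.
def blur_per_frame(fps):
    return 1 / (2 * fps)

captured_fps = 30
displayed_fps = 60

baked_in_blur = blur_per_frame(captured_fps)   # 1/60 s, from the 30 fps capture
native_blur = blur_per_frame(displayed_fps)    # 1/120 s, what real 60 fps footage would have

print(f"blur carried into each interpolated frame: {baked_in_blur * 1000:.1f} ms")
print(f"blur native 60 fps capture would have:     {native_blur * 1000:.1f} ms")
print(f"mismatch: {baked_in_blur / native_blur:.0f}x too much blur per frame")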

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


4 minutes ago, ZetZet said:

How would it not work? It's just copper wires.

Might not matter; I'm just wondering about the added resistance and the power regulation stuff. But I guess that might not matter as much as the cable quality.


2 minutes ago, Quackers101 said:

Might not matter; I'm just wondering about the added resistance and the power regulation stuff. But I guess that might not matter as much as the cable quality.

More wire reduces resistance, and the connectors themselves are negligible. I just wonder if you will be able to raise power limits without the 600W PCIe cable's sense wires.
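
For a rough sense of why the copper itself isn't the issue, here's a back-of-the-envelope Python sketch; the pin counts below are my own assumptions for illustration, not quotes from the ATX 3.0 spec:

# Rough per-pin current for a 600 W, 12 V GPU feed. Pin counts are assumed:
# 6 x 12V pins on the new connector, 3 x 12V pins per classic 8-pin.
total_power_w = 600
rail_voltage_v = 12
total_current_a = total_power_w / rail_voltage_v   # 50 A

new_connector_pins = 6
pins_per_8pin = 3
adapter_8pin_count = 4

print(f"total current: {total_current_a:.0f} A")
print(f"per pin on the new connector: {total_current_a / new_connector_pins:.1f} A")
print(f"per pin across 4x 8-pin:      {total_current_a / (pins_per_8pin * adapter_8pin_count):.1f} A")

As I understand it, the sense wires are a separate signalling question (they indicate the rated capability to the card), which is probably what matters for whether higher power limits get unlocked.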

Location: Kaunas, Lithuania, Europe, Earth, Solar System, Local Interstellar Cloud, Local Bubble, Gould Belt, Orion Arm, Milky Way, Milky Way subgroup, Local Group, Virgo Supercluster, Laniakea, Pisces–Cetus Supercluster Complex, Observable universe, Universe.


12700, B660M Mortar DDR4, 32GB 3200C16 Viper Steel, 2TB SN570, EVGA Supernova G6 850W, be quiet! 500FX, EVGA 3070Ti FTW3 Ultra.

 


I'm due for an upgrade from my GTX 980 and will likely be making the move this generation. With the gap between the 4090 and the 4080 16GB, I would bet that in 4-6 months we get a 4080 Ti; that might be my solution.

 

I do think the 4080 naming is a bit of a fiasco; the 4080 12GB should have been the 4070. People would still be upset about a $900 70-series card, but with today's market that may be the reality we currently live in.

CPU: i9-13900k MOBO: Asus Strix Z790-E RAM: 64GB GSkill  CPU Cooler: Corsair H170i

GPU: Asus Strix RTX-4090 Case: Fractal Torrent PSU: Corsair HX-1000i Storage: 2TB Samsung 990 Pro

 


Calling the 4080 12GB a 4070 is apt. But I'm going to go one step further and say that the 4090, based on price and hardware differences from the 4080 16GB, is also mislabeled.

 

It should be called Titan RTX 2


NVIDIA shows GeForce RTX 4090 running 2850 MHz GPU clock at stock settings:

 

Quote

[Images: RTX 4090 clock speed OSD screenshots from the demo]

 

In the Cyberpunk 2077 DLSS 3 demo, the RTX 4090 GPU was shown with on-screen GPU metrics displayed. Interestingly, the OSD also shows the GPU clock speed, which is NVIDIA's first admission of what the 'actual' clock of the RTX 4090 GPU is. And it is indeed higher than the 'official' boost clock. The card was shown running at a 2800 – 2850 MHz GPU clock.

 

https://videocardz.com/newz/nvidia-shows-geforce-rtx-4090-running-2850-mhz-gpu-clock-at-stock-settings


Regarding input lag, we have only one example. From 24 frames, 96 frames were created. So to get 2 normal frames you need 1/24 ≈ 42 ms. But according to Nvidia the input lag was 58 ms. So rendering the fake frames consumed on average ~16 ms. So for competitive games it will be even worse than a 24 fps experience.

Next, what happens when someone applies this feature to a game rendering 120 frames? Will we see 8 ms (1s/120) + 16 ms of fake-frame input lag, and 480 frames?

This feature is useless for VR gaming and any competitive games, in my opinion.

 

 


DLSS 3 isn't going to "increase input lag"; it's just not going to decrease it, or not by as much as real, fully calculated frames would.

 

Sure, it's not going to be able to guess that a player is suddenly appearing around a corner, so for that type of game it's going to be pretty pointless, but there are lots of slower games where it'll likely be just fine.

F@H
Desktop: i9-13900K, ASUS Z790-E, 64GB DDR5-6000 CL36, RTX3080, 2TB MP600 Pro XT, 2TB SX8200Pro, 2x16TB Ironwolf RAID0, Corsair HX1200, Antec Vortex 360 AIO, Thermaltake Versa H25 TG, Samsung 4K curved 49" TV, 23" secondary, Mountain Everest Max

Mobile SFF rig: i9-9900K, Noctua NH-L9i, Asrock Z390 Phantom ITX-AC, 32GB, GTX1070, 2x1TB SX8200Pro RAID0, 2x5TB 2.5" HDD RAID0, Athena 500W Flex (Noctua fan), Custom 4.7l 3D printed case

 

Asus Zenbook UM325UA, Ryzen 7 5700u, 16GB, 1TB, OLED

 

GPD Win 2


On 9/21/2022 at 12:19 PM, starsmine said:

3080 12GB, and 3080
2060 and 2060 12GB and 2060 KO (which only EVGA gave it the KO name to distinguish it, otherwise we would have never known)
1060 3GB and 1060
1050 2GB and 1050 3GB
RX560 and RX560 <-- now that one is bad
RX 5700xt and  RX 5700 XT 50th Anniversary Edition
the four variants of the GTX 460 that are all called 460

Ah, thank you for the source there, so that density is them averaging all the logic, cache, and wires into one density number.

And most of these examples were anti-consumer too. What's your point?

 

Just because they've "done it before" doesn't excuse this practice.

 

192-bit "80-class" card - $900.

 

Wow, such innovation.

Zen 3 Daily Rig (2022 - Present): AMD Ryzen 9 5900X + Optimus Foundations AM4 | Nvidia RTX 3080 Ti FE + Alphacool Eisblock 3080 FE | G.Skill Trident Z Neo 32GB DDR4-3600 (@3733 c14) | ASUS Crosshair VIII Dark Hero | 2x Samsung 970 Evo Plus 2TB | Crucial MX500 1TB | Corsair RM1000x | Lian Li O11 Dynamic | LG 48" C1 | EK Quantum Kinetic TBE 200 w/ D5 | HWLabs GTX360 and GTS360 | Bitspower True Brass 14mm | Corsair 14mm White PMMA | ModMyMods Mod Water Clear | 9x BeQuiet Silent Wings 3 120mm PWM High Speed | Aquacomputer Highflow NEXT | Aquacomputer Octo

 

Test Bench: 

CPUs: Intel Core 2 Duo E8400, Core i5-2400, Core i7-4790K, Core i9-10900K, Core i3-13100, Core i9-13900KS

Motherboards: ASUS Z97-Deluxe, EVGA Z490 Dark, EVGA Z790 Dark Kingpin

GPUs: GTX 275 (RIP), 2x GTX 560, GTX 570, 2x GTX 650 Ti Boost, GTX 980, Titan X (Maxwell), x2 HD 6850

Bench: Cooler Master Masterframe 700 (bench mode)

Cooling: Heatkiller IV Pro Pure Copper | Koolance GPU-210 | HWLabs L-Series 360 | XSPC EX360 | Aquacomputer D5 | Bitspower Water Tank Z-Multi 250 | Monsoon Free Center Compressions | Mayhems UltraClear | 9x Arctic P12 120mm PWM PST


2 minutes ago, Sir Beregond said:

Wow, such innovation.

Very PCIe 5, very high quality chips, very "bigger is better". Just have to wait and see.


8 minutes ago, Sir Beregond said:

And most of these examples were anti-consumer too. What's your point?

 

Just because they've "done it before" doesn't excuse this practice.

 

192-bit "80-class" card - $900.

 

Wow, such innovation.

Same energy as being mad about a Koenigsegg having only 3 cylinders.

 

2 hours ago, Floong said:

Regarding input lag, we have only one example. From 24 frames, 96 frames were created. So to get 2 normal frames you need 1/24 ≈ 42 ms. But according to Nvidia the input lag was 58 ms. So rendering the fake frames consumed on average ~16 ms. So for competitive games it will be even worse than a 24 fps experience.

Next, what happens when someone applies this feature to a game rendering 120 frames? Will we see 8 ms (1s/120) + 16 ms of fake-frame input lag, and 480 frames?

This feature is useless for VR gaming and any competitive games, in my opinion.

 

 

You should look at that picture again.

Look to the left. Notice that number under the 24? Yeah, that's the input lag at 24 fps.

The argument everyone else is having is whether it will have more latency than DLSS 2.0, not more latency than with DLSS off.

Input lag is not the time between frames. Look up Nvidia Reflex; yes, it's an Nvidia product, but it has some solid information.
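
A toy Python sketch of that distinction, with purely made-up numbers (none of these are NVIDIA figures), just to show that the latency chain and the frame interval are different quantities:

# Toy end-to-end latency model; every number below is an illustrative assumption.
frame_interval_ms = 1000 / 24     # ~41.7 ms between displayed frames at 24 fps
input_sampling_ms = 4             # assumed input polling / game processing
render_ms = 30                    # assumed CPU + GPU render latency
display_ms = 10                   # assumed scanout + panel response

end_to_end_ms = input_sampling_ms + render_ms + display_ms
print(f"frame interval at 24 fps:             {frame_interval_ms:.1f} ms")
print(f"end-to-end input latency (toy model): {end_to_end_ms} ms")
# Frame generation raises the displayed frame rate without shrinking this chain,
# which is why "more fps" and "less input lag" are not the same thing.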


4 hours ago, Kisai said:

In a nutshell... the "soap opera effect" doubles/triples/quadruples/quintuples the frames, which makes things filmed at 24fps appear to be 30/60fps. It gets the name from soap operas being filmed live on 60i cameras and not on film at 24fps. That additional frame rate makes it a bit unsettling. When the 120 and 240 Hz flat panels came out and did it without asking, people were seriously going out of their minds about how terrible it looked.

The effect only shows up on devices that calculate intermediate frames; the frames are not simply multiplied.
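
To make that distinction concrete, here's a tiny Python/numpy sketch; real TVs and DLSS 3 use motion estimation, so the plain blend below is only a stand-in to show what "calculating an intermediate frame" means versus just repeating one:

import numpy as np

# Two consecutive "frames" as toy 4x4 grayscale images.
frame_a = np.zeros((4, 4))
frame_b = np.ones((4, 4))

repeated = frame_a.copy()                  # frame doubling: nothing new is created
blended = 0.5 * frame_a + 0.5 * frame_b    # naive synthesized in-between frame

print("repeated pixel:", repeated[0, 0])   # 0.0, identical to frame A
print("blended pixel: ", blended[0, 0])    # 0.5, new data that was never captured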

3 hours ago, porina said:

Unfortunately your description reads like many I found while searching. It doesn't really provide more than a vague sense that something is off when increasing the frame rate by creating new frames. The 180 degree shutter rule I described in my earlier post provides a mechanism for why it might look wrong, but I don't know for sure that that's the reason.

 

Let's say we have video filmed normally at 30fps. It should have 1/60s of motion blur in each frame. If you shoot normal 60fps video, it should have 1/120s of motion blur in each frame. If you take the 30fps footage and interpolate frames to get 60fps, you still have the 1/60s motion blur in each frame, which is 2x what we expect. I suspect many PC games will have no motion blur, which is "off" in the other direction, but we're more used to it. It could also explain why stop motion looks off compared to real video. PC games that do implement motion blur may be a danger area for DLSS 3.0 if this is the mechanism.

The effect can best be described as looking "unnatural" and "flat". The TV is rendering intermediate frames, which are imperfect. The background is often soft and gets sharpened (reducing the depth of field). Sometimes the foreground and background cannot be distinguished by the software, resulting in glitching. In the end it's often a much clearer and sharper image, but it looks like it was filmed in an artificial environment.

This video explains the problems emerging from generating intermediate frames.

 


1 hour ago, Kilrah said:

DLSS 3 isn't going to "increase input lag"; it's just not going to decrease it, or not by as much as real, fully calculated frames would.

 

Sure, it's not going to be able to guess that a player is suddenly appearing around a corner, so for that type of game it's going to be pretty pointless, but there are lots of slower games where it'll likely be just fine.

I agree it is not increasing "input lag" in this case (the Cyberpunk demo); it is turning a useless ray tracing mode with 166 ms of input lag into something playable in some cases. I'm mainly interested in VR gaming, and from my point of view ray tracing + DLSS 3.0 is useless there.

The Cyberpunk presentation is only one example of the technology. I hope someone will test the impact of the new DLSS without ray tracing and share the results.


34 minutes ago, Floong said:

it is turning a useless ray tracing mode with 166 ms of input lag into something playable in some cases. I'm mainly interested in VR gaming, and from my point of view ray tracing + DLSS 3.0 is useless there.

I do wonder how DLSS 3 would work with foveated rendering for VR/XR.

 

[SIGGRAPH 2022] Noise-based Enhancement for Foveated Rendering

https://www.youtube.com/watch?v=Chitc_FTB5M

