
NVIDIA Project Beyond GTC Keynote with CEO Jensen Huang: RTX 4090 + RTX 4080 Revealed

57 minutes ago, starsmine said:

Isn't TSMC N5 171.3 MTr/mm², with N4 being close to 181?
And TSMC N7 91.2 MTr/mm²,
and Samsung 8 nm at 61?


I just took the ratio between the two because that number doesn't tell the whole truth anyway: the quoted density is a best case, logic and cache have different densities, and some parts of the chip, like the wiring between nodes, barely shrink at all.

For N4 and Samsung 8 nm we have direct numbers.

 

For N7 I just used the Apple SoC numbers as a comparison.

https://www.angstronomics.com/p/the-truth-of-tsmc-5n

It's worth mentioning that Apple only achieved 89 MT/mm² in their second-iteration N5 chip, and only 82 MT/mm² on their first try. So something in the neighbourhood of 80 to 90 MT/mm² for a GPU on a mature N5 process seems reasonable. The "theoretical" 171 MT/mm² for N5 is unachievable; a more realistic ceiling is around 138 MT/mm², and considering the ~6% improvement, N4 should be around 146 MT/mm². Still, it seems to be getting more and more difficult to approach these theoretical limits in practice.
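As a quick back-of-envelope check on those figures (just my own arithmetic with the numbers quoted above, nothing official):

```python
# Rough density arithmetic, using the figures quoted above as assumptions
# (practical N5 ceiling ~138 MT/mm², N4 ~6% denser, Apple N5 SoCs as reference).

n5_theoretical = 171.3     # TSMC N5 marketing density, MT/mm^2
n5_practical   = 138.0     # assumed realistic mixed logic/SRAM/analog ceiling
n4_uplift      = 1.06      # ~6% density improvement claimed for N4

apple_first  = 82.0        # first-generation Apple N5 SoC, MT/mm^2
apple_second = 89.0        # second-generation Apple N5 SoC, MT/mm^2

print(f"N4 practical estimate: {n5_practical * n4_uplift:.0f} MT/mm^2")   # ~146
print(f"Apple vs. theoretical: {apple_second / n5_theoretical:.0%}")      # ~52%
print(f"Apple vs. practical:   {apple_second / n5_practical:.0%}")        # ~64%
```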

 

But I think we've side-tracked the conversation enough, and this isn't really important since we haven't seen TSMC N7 dice from Nvidia. 😉

 

 


1 hour ago, starsmine said:

3080 12GB, and 3080

Why that one? Apart from the VRAM capacity, they're almost identical GPU-wise.


I'm concerned that DLSS 3's optical flow stuff will produce the video game equivalent of the soap opera effect, especially if the source frame rate is low enough.

 

But we'll see


1 hour ago, Dracarris said:

Why that one? Apart from the VRAM capacity, they're almost identical GPU-wise.

No sireee. The 3080 12GB is basically a 3080 Ti (more VRAM, wider bus, more CUDA cores). This is a great table: https://en.wikipedia.org/wiki/GeForce_30_series#GeForce_30_(30xx)_series_for_desktops

Workstation:  14700nonK || Asus Z790 ProArt Creator || MSI Gaming Trio 4090 Shunt || Crucial Pro Overclocking 32GB @ 5600 || Corsair AX1600i@240V || whole-house loop.

LANRig/GuestGamingBox: 13700K @ Stock || MSI Z690 DDR4 || ASUS TUF 3090 650W shunt || Corsair SF600 || CPU+GPU watercooled 280 rad pull only || whole-house loop.

Server Router (Untangle): 13600k @ Stock || ASRock Z690 ITX || All 10Gbe || 2x8GB 3200 || PicoPSU 150W 24pin + AX1200i on CPU|| whole-house loop

Server Compute/Storage: 10850K @ 5.1Ghz || Gigabyte Z490 Ultra || EVGA FTW3 3090 1000W || LSI 9280i-24 port || 4TB Samsung 860 Evo, 5x10TB Seagate Enterprise Raid 6, 4x8TB Seagate Archive Backup ||  whole-house loop.

Laptop: HP Elitebook 840 G8 (Intel 1185G7) + 3060 RTX Thunderbolt Dock, Razer Blade Stealth 13" 2017 (Intel 8550U)


37 minutes ago, AnonymousGuy said:

No sireee. The 3080 12GB is basically a 3080 Ti (more VRAM, wider bus, more CUDA cores). This is a great table: https://en.wikipedia.org/wiki/GeForce_30_series#GeForce_30_(30xx)_series_for_desktops

Welp, okay. I think the difference in CUDA cores and SMs is rather small, but the wider memory interface sure is a bigger difference.


NVIDIA detailed the design of its GeForce RTX 4090 Founders Edition Cooler, PCB and new Power Spike Management:

 

Quote

[Slides: RTX 4090 Founders Edition cooler, PCB, and power-delivery design]

 

NVIDIA says it has made several changes to its design. The metal outer frame now comes with a more pronounced gunmetal tinge. The heatsink array underneath has been redesigned to improve airflow between the two fans.

 

The fan itself has been updated, too. NVIDIA says it tested as many as 50 new fan designs before choosing this one, which offers up to 20% higher airflow than the one in the RTX 3090 Ti. Besides the heatsink and fan, the company also redesigned the vapor-chamber plate that pulls heat from the GPU and surrounding memory chips, as well as the base plate drawing heat from the VRM components. The heatpipes follow an improved heat-distribution layout.

 

While the RTX 4090 operates at 450 W by default, the power delivery capability allows you to increase the power limit up to 600 W for overclocking. The card features a 23-phase VRM (20-phase GPU + 3-phase memory). NVIDIA claims that it has re-architected its VRM to offer superior transient voltage management, specifically to minimize the kind of spikes or excursions we've seen with previous-gen high-end graphics cards such as the RTX 3090 Ti and Radeon RX 6900 series. These spikes often caused spontaneous shutdowns on some power supplies, even ones with a much higher wattage rating than required.

 

NVIDIA shows how current spikes are mitigated by their new VRM design, which uses a PID controller feedback loop. While the RTX 3090 bounces up and down, the RTX 4090 stays relatively stable and just follows the general trend of the current loading pattern, thanks to a 10x improvement in power management response time. It's also worth pointing out that the peak current spike on the RTX 3090 is higher than on the RTX 4090, even though the RTX 3090 is rated 100 W lower (350 W vs 450 W).

 

https://www.techpowerup.com/299096/nvidia-details-geforce-rtx-4090-founders-edition-cooler-pcb-design-new-power-spike-management
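To illustrate the idea behind the PID claim in that article, here's a minimal generic discrete PID loop tracking a spiky load. The gains, step size and wattages are made up for illustration; this is not NVIDIA's controller, just the textbook mechanism:

```python
import random

# Toy illustration of the PID-feedback idea described in the article above.
# NOT NVIDIA's controller, gains or time constants -- just the textbook
# discrete PID mechanism, reacting to a spiky load so the regulated output
# responds only partially to brief spikes while tracking the slower trend.

KP, KI, KD = 0.4, 0.05, 0.1   # hypothetical gains
DT = 1.0                      # one control step, arbitrary time units

def pid_step(error, state):
    """One discrete PID update; state is (integral, previous_error)."""
    integral, prev_error = state
    integral += error * DT
    derivative = (error - prev_error) / DT
    output = KP * error + KI * integral + KD * derivative
    return output, (integral, error)

random.seed(0)
regulated = 350.0             # made-up starting board power, W
state = (0.0, 0.0)
for step in range(20):
    trend = 350.0 + step * 2.0                        # slowly rising load
    spike = 150.0 if random.random() < 0.15 else 0.0  # occasional transient
    load = trend + spike
    correction, state = pid_step(load - regulated, state)
    regulated += correction
    print(f"step {step:2d}  load {load:6.1f} W  regulated {regulated:6.1f} W")
```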


The $900 4070 pretending to be a 4080 is especially ridiculous and hilarious, I actually laughed out loud at that. Oh and the "fancy" interpolation tech just screams extra latency to me. As per their Cyberpunk comparison slide, they make a big deal about that "58ms" latency. That's ridiculously high for that framerate, and would be very jarring to play. They would have to do some serious black magic rituals to get that sorted out.

 

Thoroughly unimpressed. When Nvidia slashes prices next year then I might look into possibly upgrading, but I'd rather just wait for the 50 series.

MAIN SYSTEM: Intel i9 10850K | 32GB Corsair Vengeance Pro DDR4-3600C16 | RTX 3070 FE | MSI Z490 Gaming Carbon WIFI | Corsair H100i Pro 240mm AIO | 500GB Samsung 850 Evo + 500GB Samsung 970 Evo Plus SSDs | EVGA SuperNova 850 P2 | Fractal Design Meshify C | Razer Cynosa V2 | Corsair Scimitar Elite | Gigabyte G27Q

 

Other Devices: iPhone 12 128GB | Nintendo Switch | Surface Pro 7+ (work device)


2 minutes ago, Alcarin said:

The $900 4070 pretending to be a 4080 is especially ridiculous and hilarious, I actually laughed out loud at that. Oh and the "fancy" interpolation tech just screams extra latency to me. As per their Cyberpunk comparison slide, they make a big deal about that "58ms" latency. That's ridiculously high for that framerate, and would be very jarring to play. They would have to do some serious black magic rituals to get that sorted out.

 

Thoroughly unimpressed. When Nvidia slashes prices next year then I might look into possibly upgrading, but I'd rather just wait for the 50 series.

They keep up this fuckery, and I WILL be switching to team Red.


1 hour ago, IPD said:

They keep up this fuckery, and I WILL be switching to team Red.

I've been pretty staunchly loyal to team Green for years now, but with the departure of EVGA from the market (thanks to Nvidia) and now this, my loyalties are becoming pretty fickle...

MAIN SYSTEM: Intel i9 10850K | 32GB Corsair Vengeance Pro DDR4-3600C16 | RTX 3070 FE | MSI Z490 Gaming Carbon WIFI | Corsair H100i Pro 240mm AIO | 500GB Samsung 850 Evo + 500GB Samsung 970 Evo Plus SSDs | EVGA SuperNova 850 P2 | Fractal Design Meshify C | Razer Cynosa V2 | Corsair Scimitar Elite | Gigabyte G27Q

 

Other Devices: iPhone 12 128GB | Nintendo Switch | Surface Pro 7+ (work device)


16 hours ago, porina said:

At first glance RDNA2 might seem roughly aligned with Ampere. AMD aimed for around parity in raster performance across their models, but if you look around the edges they were worse in RT and had a worse video encoder. Probably more, but those two were the most noticeable to me. They're getting more feature-complete, but now they need to significantly improve performance across the wider feature set to match the competition. For me to consider buying any RDNA3 card, at a given street price, AMD would have to offer comparable or better performance in both raster and RT games (native rendering), and have a video encoder that works well and is widely supported in software. I can't see them doing that.

Yes? I didn't say otherwise. I'm talking about pre-Pascal, when AMD had better value/performance in the mid-range and yet gained very little market share.

 

As a consumer/gamer I don't really care about RT performance or RT games in general; four years since they released the 2000 series and I have fewer than 5 RT games I actually want to play with RT enabled. DLSS is the more important feature imo, and AMD's equivalent FSR 2.x is worth the trade-off if the price is right.

 

I've never had the chance to try the RDNA2 encoder. I know it's bad for live streaming, but for local recording it can't be worse than the ancient encoder in my 980 Ti, whose quality I have no problem with. 🤔

 

Quote

Also, for the RDNA2 generation it didn't feel like GPU production was their priority. They were capacity-constrained and it felt like the much more profitable enterprise products were their focus. So they made enough GPUs to have a presence and had no reason to try for share, which generally requires offering significantly more value than the competition, or the competition making a massive blunder. Despite the posts in this thread, I don't see anything we know so far that counts as that.

Yes, I feel like it's been like that since RDNA1. So to anyone who says "wait for AMD": you'll be disappointed, I think.

| Intel i7-3770@4.2Ghz | Asus Z77-V | Zotac 980 Ti Amp! Omega | DDR3 1800mhz 4GB x4 | 300GB Intel DC S3500 SSD | 512GB Plextor M5 Pro | 2x 1TB WD Blue HDD |
 | Enermax NAXN82+ 650W 80Plus Bronze | Fiio E07K | Grado SR80i | Cooler Master XB HAF EVO | Logitech G27 | Logitech G600 | CM Storm Quickfire TK | DualShock 4 |


Anyone know

 

which AIB card has the highest power limit? 

 

if these GPUs have DP 2.0?

 

 

 

CPU: i9-13900KS | Motherboard: Asus Z790 HERO | Graphics: ASUS TUF 4090 OC | RAM: G.Skill 7600 DDR5 | Screen: ASUS 48" OLED 138 Hz


2 minutes ago, Shzzit said:

Anyone know

 

which AIB card has the highest power limit? 

 

if these GPUs have DP 2.0?

 

 

 

No DP 2.0 unless AIBs add it. Nothing on power limits yet; the PSUs aren't ready.

5950x 1.33v 5.05 4.5 88C 195w ll R20 12k ll drp4 ll x570 dark hero ll gskill 4x8gb 3666 14-14-14-32-320-24-2T (zen trfc)  1.45v 45C 1.15v soc ll 6950xt gaming x trio 325w 60C ll samsung 970 500gb nvme os ll sandisk 4tb ssd ll 6x nf12/14 ippc fans ll tt gt10 case ll evga g2 1300w ll w10 pro ll 34GN850B ll AW3423DW

 

9900k 1.36v 5.1avx 4.9ring 85C 195w (daily) 1.02v 4.3ghz 80w 50C R20 temps score=5500 ll D15 ll Z390 taichi ult 1.60 bios ll gskill 4x8gb 14-14-14-30-280-20 ddr3666bdie 1.45v 45C 1.22sa/1.18 io  ll EVGA 30 non90 tie ftw3 1920//10000 0.85v 300w 71C ll  6x nf14 ippc 2000rpm ll 500gb nvme 970 evo ll l sandisk 4tb sata ssd +4tb exssd backup ll 2x 500gb samsung 970 evo raid 0 llCorsair graphite 780T ll EVGA P2 1200w ll w10p ll NEC PA241w ll pa32ucg-k

 

prebuilt 5800 stock ll 2x8gb ddr4 cl17 3466 ll oem 3080 0.85v 1890//10000 290w 74C ll 27gl850b ll pa272w ll w11

 


This is why I have no respect for you guys.

 

"I hope you guys buy AMD cards so Nvidia lowers their prices so I can buy an Nvidia card cheaper because I'm a fanboy and will buy Nvidia no matter what but you can buy AMD"


 

Nvidia will get away with this, and you will all pay their prices, and next year you'll be shocked again that they increased prices again. But you'll always pay anything for a new GPU because it excites you and makes you forget about your depression for a few minutes. And the best part is that most people who buy the 4090 will just be people on state welfare, while the rich and wealthy use their money on luxury holidays.


1 hour ago, Gamer Schnitzel said:

Nvidia will get away with this, and you will all pay their prices, and next year you'll be shocked again that they increased prices again. But you'll always pay anything for a new GPU because it excites you and makes you forget about your depression for a few minutes. And the best part is that most people who buy the 4090 will just be people on state welfare, while the rich and wealthy use their money on luxury holidays.

That's exactly what I've been saying. You see everyone crying about the prices and yet they will be out of stock for months after release. 

Location: Kaunas, Lithuania, Europe, Earth, Solar System, Local Interstellar Cloud, Local Bubble, Gould Belt, Orion Arm, Milky Way, Milky Way subgroup, Local Group, Virgo Supercluster, Laniakea, Pisces–Cetus Supercluster Complex, Observable universe, Universe.

Spoiler

12700, B660M Mortar DDR4, 32GB 3200C16 Viper Steel, 2TB SN570, EVGA Supernova G6 850W, be quiet! 500FX, EVGA 3070Ti FTW3 Ultra.

 


3 hours ago, Shzzit said:

if these GPUs have DP 2.0?

This was in their Q&A session:

 

Quote

Q: Why isn’t DisplayPort 2.0 listed on the spec sheet?

 

The current DisplayPort 1.4 standard already supports 8K at 60Hz. Support for DisplayPort 2.0 in consumer gaming displays is still a ways away in the future.

https://www.nvidia.com/en-us/geforce/news/rtx-40-series-community-qa/?=&linkId=100000150864636
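For context on that answer, a back-of-envelope bandwidth check (my own numbers, not from the Q&A) of why DP 1.4 only manages 8K60 with DSC or chroma subsampling:

```python
# Back-of-envelope check on the DP 1.4 "8K at 60 Hz" claim. Link and timing
# overheads are simplified and blanking intervals are ignored.

dp14_raw  = 4 * 8.1                 # 4 lanes x HBR3 8.1 Gbit/s = 32.4 Gbit/s
dp14_data = dp14_raw * 8 / 10       # 8b/10b encoding -> ~25.9 Gbit/s payload

eight_k_60 = 7680 * 4320 * 60 * 24 / 1e9   # 8-bit RGB, no blanking -> ~47.8 Gbit/s

print(f"DP 1.4 payload: {dp14_data:.1f} Gbit/s")
print(f"8K60 RGB 8bpc:  {eight_k_60:.1f} Gbit/s")
print("-> 8K60 over DP 1.4 needs DSC (or 4:2:0), which is how the quoted claim holds.")
```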

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


On 9/21/2022 at 7:10 AM, Spotty said:

Australian prices are out. 

RTX 4080 12GB: $1659 AUD ($1100 USD)

RTX 4080 16GB: $2219 AUD ($1500 USD)

RTX 4090: $2959 AUD ($2000 USD)

 

I think I'll be keeping my 1080ti a little longer. 

Didn't Nvidia say you should upgrade your 1080 Ti when they announced the 3080 two years ago? You're lagging behind. Don't let the global recession keep you from supporting our favourite leather-jacket enthusiast.

 

12 hours ago, Herrscher of Whatever said:

I'm concerned that DLSS 3's optical flow stuff will produce the video game equivalent of the soap opera effect, especially if the source frame rate is low enough.

 

But we'll see

The so-called "soap opera effect" only applies to content that is SUPPOSED to be low-fps; it feels "odd" to some people when played back at 60 or even 120 fps, which is why it's undesirable to most purists, like with 24 fps movies. They somehow feel it's lower-quality content when viewed at higher frame rates. I never understood that stance, especially because modern implementations of interpolation barely have any artifacting.

 

Vincent Teoh from HDTVTest explained it in a video a few months back. He basically said you should let your imagination do the interpolation in low-fps content, or that it otherwise feels like the movie was filmed on a phone camera. That's just ridiculous imo. Why would I want to watch 24 fps content when I can watch 120 fps content?

 

In a video game you WANT higher fps and it doesn't feel odd when your game is smoother. No one is going to complain if a 30 fps game suddenly runs at 60+ fps, even when input lag is still around the same level as 30 fps. The game still looks better.

 

I never understood why people are so strongly against interpolation. If the DLSS 3.0 implementation doesn't have huge problems with visual artifacts, I see it as a win for anyone who plays sightseeing-type games at max settings. I bet they will integrate some kind of toggle so that if you don't want interpolation at the cost of input lag, you can use the 2.0-ish implementation that's only upscaling.

If someone did not use reason to reach their conclusion in the first place, you cannot use reason to convince them otherwise.


1 hour ago, Stahlmann said:

I never understood why people are so strongly against interpolation. If the DLSS 3.0 implementation doesn't have huge problems with visual artifacts, I see it as a win for anyone who plays sightseeing-type games at max settings. I bet they will integrate some kind of toggle so that if you don't want interpolation at the cost of input lag, you can use the 2.0-ish implementation that's only upscaling.

Again, still waiting for more details from Nvidia, directly or otherwise, but to my current understanding DLSS 3 integration is not much different from that required for DLSS 2, and DLSS 2 will likely remain supported on existing RTX GPUs. So a switch to turn DLSS 3 on/off seems a given.

 

For reasons given earlier I don't believe DLSS 3 will introduce latency, in that it is forward-predicting, not interpolating. See it as a relative of reprojection in VR systems, though unlike reprojection it does turn the clock forward in the predicted content. Hmm, didn't think of that before; it could be useful to boost VR systems. If it includes data, directly or otherwise, from the user input it could be really good.
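A toy way to see why forward prediction shouldn't add display latency while classic interpolation does; the frame rates and delays below are illustrative assumptions in a simplified model, not measured DLSS 3 behaviour:

```python
# Toy latency comparison between interpolation and extrapolation, assuming a
# 30 fps source being doubled to 60 fps. Illustrative only; not a description
# of how DLSS 3 is documented to work internally.

source_frame_time = 1000 / 30          # 33.3 ms between rendered frames

# Interpolation: the in-between frame needs BOTH neighbours, so rendered
# frame N+1 must be held back while the generated frame is shown first --
# roughly half a source frame of extra hold-back in this simplified model.
interp_added_delay = source_frame_time / 2    # ~16.7 ms

# Extrapolation (forward prediction): the generated frame is built only from
# frames N-1 and N (plus motion data), so nothing has to be held back.
extrap_added_delay = 0.0

print(f"interpolated 60 fps: ~{interp_added_delay:.1f} ms extra display latency")
print(f"extrapolated 60 fps: ~{extrap_added_delay:.1f} ms extra display latency")
```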

 

A side thought: just how much tensor performance is available? We have native rendering as the reference condition. DLSS 2 upscales, reducing raster effort and making up the difference with the tensor cores. DLSS 3 seems to take even more tensor performance to generate frames, which implies there is enough headroom to do that without it becoming the limiting factor.

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


1 hour ago, Stahlmann said:

Vincent Teoh from HDTVTest explained it in a video a few months back. He basically said you should let your imagination do the interpolation in low-fps content, or that it otherwise feels like the movie was filmed on a phone camera. That's just ridiculous imo. Why would I want to watch 24 fps content when I can watch 120 fps content?

Maybe you're misunderstanding something, or I am, but from what I know the issue is that the content is 24 fps and it's not a good experience if you try to increase that frame rate. If the original source is a higher frame rate, like The Hobbit, then it's fine, or at least much better, but trying to make 24 fps content 60 fps+ basically always looks horrid.

 

Movie and real-life content are, perception-wise, entirely different things from video games and animation; what looks good or works well for one does not necessarily work for the other.

 

Unlike in CSI, you cannot create something that was never there. We might be getting a lot closer to actually being able to do that with things like DLSS, but to do it on a movie in real time at photo realism, we are WAY off being able to do that properly.

 

"Better than source material" in other words is just a myth, it's only as good as the source or worse, never better. You're always fighting compromises if you try and alter, fix, improve etc. I for one do not like any screen smoothing or sharpening effects on movies, never ever.

 

Also, when it comes to 24 fps movie content it's important that the entire chain actually supports 24 fps mode, otherwise you get frame-time misalignment. My projector and Blu-ray player, for example, support 24 fps mode.


1 hour ago, leadeater said:

Maybe you're misunderstanding something, or I am, but from what I know the issue is that the content is 24 fps and it's not a good experience if you try to increase that frame rate. If the original source is a higher frame rate, like The Hobbit, then it's fine, or at least much better, but trying to make 24 fps content 60 fps+ basically always looks horrid.

 

Movie and real-life content are, perception-wise, entirely different things from video games and animation; what looks good or works well for one does not necessarily work for the other.

 

Unlike in CSI, you cannot create something that was never there. We might be getting a lot closer to actually being able to do that with things like DLSS, but to do it on a movie in real time at photo realism, we are WAY off being able to do that properly.

I don't know what experiences you had with interpolation, but the TVs I have can create 120 fps content out of 24 fps movies without significant problems. There are some very rare cases where interpolation artifacts are noticeable, for example when a person is moving in front of shutters. But overall I'd say the experience of a smoother (and noticeably sharper, because interpolation typically gets rid of persistence blur) image is heaps better than a 24 fps stutter mess. Interpolation doesn't create frames out of nothing; it's an algorithm that guesses what comes in between two existing frames. There have also been examples of photo-editing software removing objects from pictures and guessing what was behind them, with stunning accuracy at times. Frame interpolation is basically the same, just in real time.
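For anyone curious what "guessing what comes in between two frames" looks like at its most basic, here's a crude sketch. Real TV and GPU interpolators are motion-compensated rather than simple blends, so this only shows the concept and its obvious failure mode:

```python
import numpy as np

# The crudest possible "in-between frame": a straight per-pixel blend of two
# neighbouring frames. Real interpolators estimate motion vectors and warp
# along them; this toy shows why a plain blend isn't enough.

def blend_midframe(frame_a: np.ndarray, frame_b: np.ndarray, t: float = 0.5) -> np.ndarray:
    """Return a synthetic frame at position t between frame_a (t=0) and frame_b (t=1)."""
    mid = (1.0 - t) * frame_a.astype(np.float32) + t * frame_b.astype(np.float32)
    return mid.round().astype(frame_a.dtype)

# Two tiny fake 4x4 grayscale frames with a bright pixel moving one step right.
a = np.zeros((4, 4), dtype=np.uint8); a[1, 1] = 255
b = np.zeros((4, 4), dtype=np.uint8); b[1, 2] = 255
print(blend_midframe(a, b))   # the pixel is ghosted in both places (128/128),
                              # which is exactly the artifact motion compensation avoids
```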

 

1 hour ago, leadeater said:

"Better than source material" in other words is just a myth, it's only as good as the source or worse, never better. You're always fighting compromises if you try and alter, fix, improve etc. I for one do not like any screen smoothing or sharpening effects on movies, never ever.

Of course, "better than source" is always marketing speak, be it from Nvidia, AMD or Intel. But in the case of DLSS, for me it's been close enough. So I can turn it on and set it to "quality" mode knowing there is no significant or noticeable loss in image quality. The loss is there, but you really need to pixel-peep to see it (kinda like compressed MP3 music). However, the increase in fps is very noticeable, often catapulting something barely playable to over 60 fps (Cyberpunk 2077 at 4K is a good example of this). But as always there will be plenty of naysayers for anything that isn't perfect 100% of the time. Imo something like DLSS doesn't have to be perfect; close enough is what I'm aiming for with this kind of stuff. And it's still worth mentioning that there are cases where DLSS can make details visible that don't even exist in native 4K, very thin lines on a distant fence for example.

 

1 hour ago, leadeater said:

Also, when it comes to 24 fps movie content it's important that the entire chain actually supports 24 fps mode, otherwise you get frame-time misalignment. My projector and Blu-ray player, for example, support 24 fps mode.

Idk what that has to do with what I brought up. I only brought up the 24 fps movie example to argue that interpolation can be a good thing to make games appear smoother, even though input lag stays the same.

If someone did not use reason to reach their conclusion in the first place, you cannot use reason to convince them otherwise.


9 minutes ago, Stahlmann said:

I don't know what experiences you had with interpolation, but the TVs I have can create 120 fps content out of 24 fps movies without significant problems. There are some very rare cases where interpolation artifacts are noticeable, for example when a person is moving in front of shutters. But overall I'd say the experience of a smoother (and noticeably sharper, because interpolation typically gets rid of persistence blur) image is heaps better than a 24 fps stutter mess.

You get a stuttering mess most of all when you play 24 fps content on a 60 fps (60 Hz) or greater output; unless it's synced correctly you'll get stutters, as 24 does not divide into 60 evenly. Problems are also far more visible on a larger screen. I watch on a 96" projector screen, and native, untouched 24 fps end to end always looks better and smoother than anything else.
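For reference, the "doesn't divide into 60" problem looks like this on a fixed 60 Hz output (illustrative arithmetic only):

```python
# Why 24 fps content judders on a fixed 60 Hz output: each film frame has to
# occupy a whole number of 60 Hz refreshes, and 60/24 = 2.5, so the cadence
# alternates 3 and 2 refreshes (classic 3:2 pulldown). Purely illustrative.

REFRESH_MS = 1000 / 60            # 16.7 ms per refresh at 60 Hz

cadence = [3, 2, 3, 2, 3, 2]      # refreshes shown per film frame
for i, repeats in enumerate(cadence):
    print(f"film frame {i}: on screen {repeats * REFRESH_MS:.1f} ms")
# -> alternating 50.0 ms / 33.3 ms instead of a steady 41.7 ms,
#    which is the frame-time misalignment mentioned above.
# On a 120 Hz output every film frame gets exactly 5 refreshes (41.7 ms),
# so the cadence judder disappears even without interpolation.
```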

 

The problem is everyone has different equipment and tends to like the look of things differently; art is in the eye of the beholder, etc. etc.

 

9 minutes ago, Stahlmann said:

Idk what that has to do with what I brought up. I only brought up the 24 fps movie example to argue that interpolation can be a good thing to make games appear smoother, even though input lag stays the same.

Well yes, but games and movies are entirely different things, so in this respect they aren't actually that directly comparable. A movie is a fixed frame rate; you have no way to increase the underlying "rendering output frame rate". Games have a completely different render and output pipeline, so what you can do and how you can do it are vastly different, which greatly affects the end result.

 

Like I said, it can be fine for games and terrible for movies; what's good for one isn't always good for the other.


For DLSS 3.0 you seem to want to combine it with Nvidia's Reflex™, according to one of the creators.

Quote

NVIDIA Reflex removes significant latency from the game rendering pipeline by removing the render queue and more tightly synchronizing the CPU and GPU. The combination of NVIDIA Reflex and DLSS3 provides much faster FPS at about the same system latency.

This is also why you wouldn't, or would have less need to, mod DLSS 3 onto older RTX GPUs, but maybe some can do tricks with it to avoid some of the downsides.

Quote

DLSS3 relies on the optical flow accelerator, which has been significantly improved in Ada over Ampere - it’s both faster and higher quality.

There could also be some news around DirectStorage... but who knows if that helps us out in any way.


On "soap opera effect" I don't have experience of it so can't directly relate it to gaming. In trying to read around I'm still unclear what is it about it. For example, has anyone had the effect when playing games around 120 fps? I don't believe it is just a high frame rate causing it.

 

My guess is that motion blur may lead to the soap opera effect. Motion blur is natural in regular video content. There's the 180-degree shutter rule, which I'd simplify as: the effective motion blur time is half the frame time. That's baked into the content at capture. If you then adjust the frame rate without altering that exposure time, the 180-degree rule is broken, which leads to a different perception of the content. It can be used for creative purposes, but generally feels "wrong".
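Putting rough numbers on that simplification (my own arithmetic, assuming 24 fps capture later played back or interpolated at 120 fps):

```python
# Worked numbers for the 180-degree shutter rule as simplified above:
# exposure (motion blur) time = half the frame time at capture.

def shutter_180(fps: float) -> float:
    """Exposure time in milliseconds implied by the 180-degree rule."""
    return (1000 / fps) / 2

print(f"24 fps capture:  {shutter_180(24):.1f} ms of blur per frame")    # ~20.8 ms
print(f"120 fps capture: {shutter_180(120):.1f} ms of blur per frame")   # ~4.2 ms

# If 24 fps footage (with ~20.8 ms of baked-in blur) is interpolated to 120 fps,
# each displayed frame only lasts 8.3 ms, so the blur now spans ~2.5 frame times
# instead of half of one -- the broken-rule mismatch described above.
print(f"blur span at 120 fps playback: {shutter_180(24) / (1000 / 120):.1f} frame times")
```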

 

What's the consensus on motion blur in gaming? I tend to turn off any motion blur if given the option. Do game devs tune motion blur according to the 180-degree rule? This would vary with frame rate, and I'm not sure they're that smart. I'd guess they just offer different fixed degrees of blur and it's up to the user to adjust. This may need to be a consideration if motion blur is rendered by the game engine and DLSS 3 creates extra frames: a lower motion blur amount would be appropriate for the generated frames.

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


26 minutes ago, Quackers101 said:

For DLSS 3.0 you seem to want to combine it with Nvidia's Reflex™, according to one of the creators.

This is also why you wouldn't, or would have less need to, mod DLSS 3 onto older RTX GPUs, but maybe some can do tricks with it to avoid some of the downsides.

There could also be some news around DirectStorage... but who knows if that helps us out in any way.

Interesting. I think there is going to need to be a lot of testing of DLSS 3, in combination with many things, to really get a sense of it and what's best when. I'm a visuals-above-frame-rate type of person for most of my games, so I'd be most interested in the 60-120 fps realm at the best possible visual quality, however that is achieved, which will change game by game.

