
NVIDIA Project Beyond GTC Keynote with CEO Jensen Huang: RTX 4090 + RTX 4080 Revealed

2 hours ago, porina said:

On "soap opera effect" I don't have experience of it so can't directly relate it to gaming. In trying to read around I'm still unclear what is it about it. For example, has anyone had the effect when playing games around 120 fps? I don't believe it is just a high frame rate causing it.

 

In a nutshell... the "soap opera effect" doubles/triples/quadruples/quintuples the frames, which makes things filmed at 24fps appear to be 30/60fps. It gets the name from soap operas being filmed live on 60i cameras rather than on film at 24fps. That additional framerate makes it a bit unsettling. When 120Hz and 240Hz flat panels came out and did this without asking, people seriously went out of their minds over how terrible it looks.

 

I can't explain the effect because it can't be recorded or captured in any meaningful way. However, the brain absolutely goes "this looks wrong, and it is extremely creepy."

 

If DLSS is in effect doing this to not only upscale but also "up-framerate", there's going to be a wave of motion sickness induced by default in games that never caused it before.

 

This effect is also noticeable if you play back video that was captured at 60fps but was originally only displayed at 30fps. Suddenly the video seems a lot smoother, too smooth.

 

The first time I saw the "soap opera effect" was on my parents' LG 4K television. They were watching something, and it made a horror film look like it had been shot on a 30-year-old VHS-C camcorder.

 

The tradeoffs are numerous, but in general when it comes to television and film, it absolutely ruins the intended visual effects, makes things look blurrier as they are interpolated, and makes dark images lighter/light images darker.

 

Basically it makes "Big budget" content look like it was filmed on cheap equipment because everything looks worse.

 

So I can't speak for DLSS 3, but I'm sure the GPU can do a much better job than a television's overdriven motion-blur interpolation. However, I've already tried to play games with all the RTX features turned on, and DLSS on a 30 series card is already a significant disappointment. Sure, you might get 4K out of 1080p input, but it doesn't look like native 4K footage; it looks like a 1080p image that was upscaled and then compressed as JPEG at 60% quality. A still image looks okay, but the artifacts in moving images are absolutely noticeable.

 


Wow, these new cards will be rough for some PCs: 3.5 to 4 slot cards.

A lot of heat pipes, which could increase cost again, plus the need for a "GPU support frame" or stick to hold the GPU up so it doesn't sag.

Some are making adapter cables for ATX 3.0: one ATX 3.0 connector into 4x 8-pin ATX 2.0 connectors, however that would work.

So there's cooling these cards, finding a case that fits (or doesn't look bad with these GPUs), and the heat pushed toward the CPU.


4 minutes ago, Quackers101 said:

Some are making adapter cables for ATX 3.0: one ATX 3.0 connector into 4x 8-pin ATX 2.0 connectors, however that would work.

How would it not work? It's just copper wires.

Location: Kaunas, Lithuania, Europe, Earth, Solar System, Local Interstellar Cloud, Local Bubble, Gould Belt, Orion Arm, Milky Way, Milky Way subgroup, Local Group, Virgo Supercluster, Laniakea, Pisces–Cetus Supercluster Complex, Observable universe, Universe.

Spoiler

12700, B660M Mortar DDR4, 32GB 3200C16 Viper Steel, 2TB SN570, EVGA Supernova G6 850W, be quiet! 500FX, EVGA 3070Ti FTW3 Ultra.

 


5 minutes ago, Kisai said:

In a nutshell... the "soap opera effect" doubles/triples/quadruples/quintuples the frames, which makes things filmed at 24fps appear to be 30/60fps. It gets the name from soap operas being filmed live on 60i cameras rather than on film at 24fps. That additional framerate makes it a bit unsettling. When 120Hz and 240Hz flat panels came out and did this without asking, people seriously went out of their minds over how terrible it looks.

Unfortunately your description reads like many I found in searching. It doesn't really provide more than a vague sense that something is off when increasing the framerate by creating new frames. My earlier post describing the 180 degree shutter rule provides a mechanism for why it might look wrong, but I don't know for sure that it is the reason.

 

Let's say we have video filmed normally at 30fps. It should have 1/60s of motion blur in each frame. If you shoot normal 60fps video, it should have 1/120s motion blur in each frame. If you take the 30fps footage and interpolate frames to give 60fps, you still have the 1/60s motion blur in each frame which is 2x what we expect. I suspect many PC games will have no motion blur, which is "off" in the other direction, but we're more used to it. It could also explain why stop motion looks off compared to real video. PC games that do implement motion blur may be a danger area for DLSS 3.0 if this is the mechanism.
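To put rough numbers on it, here's a quick sketch of that blur arithmetic, assuming the 180 degree shutter rule (shutter open for half of each frame interval); the figures are just my illustration, not measurements:

```python
# Sketch of the 180-degree shutter rule arithmetic (assumption: the shutter is
# open for half of each frame interval, so blur per frame = 0.5 / fps).

def blur_per_frame(fps: float) -> float:
    """Expected motion blur per frame, in seconds, for footage shot at `fps`."""
    return 0.5 / fps

native_30 = blur_per_frame(30)   # 1/60 s of blur per frame
native_60 = blur_per_frame(60)   # 1/120 s of blur per frame

# Interpolating 30fps footage to 60fps keeps the original 1/60 s of blur in
# every frame, so each output frame carries 2x the blur expected at 60fps.
interpolated_60 = native_30

print(f"native 60fps blur:       {native_60 * 1000:.1f} ms/frame")
print(f"interpolated 60fps blur: {interpolated_60 * 1000:.1f} ms/frame "
      f"({interpolated_60 / native_60:.0f}x expected)")
```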

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Alienware AW3225QF (32" 240 Hz OLED)
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, iiyama ProLite XU2793QSU-B6 (27" 1440p 100 Hz)
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


4 minutes ago, ZetZet said:

How would it not work? It's just copper wires.

It might not matter; I just wonder about the added resistance and the power regulation side of things. But I guess it might not matter as much as the cable quality.


2 minutes ago, Quackers101 said:

It might not matter; I just wonder about the added resistance and the power regulation side of things. But I guess it might not matter as much as the cable quality.

More wire reduces resistance, and the connectors themselves are negligible. I just wonder if you will be able to remove power limits without the 600W PCIe cable sense wires.
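As a rough sanity check on the copper side (my assumed numbers, not from any spec sheet: 16 AWG wire at ~13.2 mΩ/m, a 0.5 m cable, six 12V conductors plus six ground returns), the losses at 600W are tiny:

```python
# Back-of-the-envelope check on cable resistance at 600W. Assumed values
# (mine, not from any spec sheet): 16 AWG copper at ~13.2 mOhm/m, 0.5 m run,
# six 12V conductors and six ground returns in the 16-pin cable.

R_PER_METRE = 0.0132   # ohm/m, approx. 16 AWG copper
LENGTH_M = 0.5         # cable length in metres
N_CONDUCTORS = 6       # 12V wires (same number of ground returns)
POWER_W = 600.0
VOLTAGE_V = 12.0

current_a = POWER_W / VOLTAGE_V                     # 50 A total
r_wire = R_PER_METRE * LENGTH_M                     # ~6.6 mOhm per conductor
r_loop = 2 * r_wire / N_CONDUCTORS                  # supply + return bundles

v_drop = current_a * r_loop                         # ~0.11 V
p_loss = current_a ** 2 * r_loop                    # ~5.5 W in the cable

print(f"current per 12V pin: {current_a / N_CONDUCTORS:.1f} A")
print(f"voltage drop:        {v_drop:.2f} V")
print(f"loss in the cable:   {p_loss:.1f} W at {POWER_W:.0f} W load")
```

An adapter splitting to 4x 8-pin just puts even more conductors in parallel on the PSU side, so the wire itself isn't the weak point; the connector contacts and those sense pins are what matter.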


I'm due for an upgrade from my GTX 980 and will likely be making the move this generation. With the gap between the 4090 and the 4080 16GB, I would bet that in 4-6 months we get a 4080 Ti; that might be my solution.

 

I do think the 4080 naming is a bit of a fiasco; the 4080 12GB should have been the 4070. People would still be upset about a $900 70-series card, but with today's market that may be the reality we currently live in.

CPU: i9-13900k MOBO: Asus Strix Z790-E RAM: 64GB GSkill  CPU Cooler: Corsair H170i

GPU: Asus Strix RTX-4090 Case: Fractal Torrent PSU: Corsair HX-1000i Storage: 2TB Samsung 990 Pro

 


Calling the 4080 12GB a 4070 is apt. But I'm going to go one step further and say that the 4090, based on its price and hardware differences from the 4080 16GB, is also mislabeled.

 

It should be called Titan RTX 2


NVIDIA shows GeForce RTX 4090 running 2850 MHz GPU clock at stock settings:

 

Quote

[Images: RTX 4090 clock hero banner and two animated OSD captures from the Cyberpunk 2077 demo]

 

In the Cyberpunk 2077 DLSS 3 demo, the RTX 4090 was shown with on-screen GPU metrics displayed. Interestingly, the OSD also shows the GPU clock speed, which is NVIDIA's first admission of what the 'actual' clock of the RTX 4090 is. And it is indeed higher than the 'official' boost clock. The card was shown running at a 2800 – 2850 MHz GPU clock.

 

https://videocardz.com/newz/nvidia-shows-geforce-rtx-4090-running-2850-mhz-gpu-clock-at-stock-settings


Regarding input lag, we have only one example. From 24 rendered frames, 96 frames were created. The time between two real frames is 1/24 s ≈ 42 ms, but according to Nvidia the input lag was 58 ms, so generating the fake frames consumed roughly 16 ms on average. So for competitive games it will feel even worse than a plain 24 fps experience.

Next, what happens when someone applies this feature to a game already rendering 120 frames? Will we see 8 ms (1s/120) + ~16 ms of fake-frame input lag, and 480 frames?

This feature is useless for VR gaming and any competitive games, in my opinion.
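To spell out the arithmetic above (a rough sketch of my own simplified model; the only figure taken from Nvidia's demo is the 58 ms latency number):

```python
# My simplified model: take the time between two real frames at the base
# framerate as the floor, and treat the rest of the measured latency as the
# cost of generating the inserted frames. These are my assumptions, not
# Nvidia's methodology; only the 58 ms figure comes from the demo.

BASE_FPS = 24              # real rendered frames in the Cyberpunk demo
MEASURED_LATENCY_MS = 58.0 # latency shown with DLSS 3 enabled

frame_time_ms = 1000.0 / BASE_FPS                   # ~41.7 ms per real frame
overhead_ms = MEASURED_LATENCY_MS - frame_time_ms   # ~16 ms attributed to fake frames

print(f"real frame time:    {frame_time_ms:.1f} ms")
print(f"estimated overhead: {overhead_ms:.1f} ms")

# The same overhead applied to a game already rendering 120 real fps:
print(f"120 fps case:       {1000.0 / 120 + overhead_ms:.1f} ms")
```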

 

 


DLSS3 isn't going to "increase input lag", it's just not going to decrease it / not as much compared to real fully calculated frames. 

 

Sure it's not going to be able to guess a player is suddenly appearing in a corner so for this type of game it's going to be pretty pointless, but there are lots of slower games where it'll likely be just fine.

F@H
Desktop: i9-13900K, ASUS Z790-E, 64GB DDR5-6000 CL36, RTX3080, 2TB MP600 Pro XT, 2TB SX8200Pro, 2x16TB Ironwolf RAID0, Corsair HX1200, Antec Vortex 360 AIO, Thermaltake Versa H25 TG, Samsung 4K curved 49" TV, 23" secondary, Mountain Everest Max

Mobile SFF rig: i9-9900K, Noctua NH-L9i, Asrock Z390 Phantom ITX-AC, 32GB, GTX1070, 2x1TB SX8200Pro RAID0, 2x5TB 2.5" HDD RAID0, Athena 500W Flex (Noctua fan), Custom 4.7l 3D printed case

 

Asus Zenbook UM325UA, Ryzen 7 5700u, 16GB, 1TB, OLED

 

GPD Win 2


On 9/21/2022 at 12:19 PM, starsmine said:

3080 12GB, and 3080
2060 and 2060 12GB and 2060 KO (which only EVGA gave it the KO name to distinguish it, otherwise we would have never known)
1060 3GB and 1060
1050 2GB and 1050 3GB
RX560 and RX560 <-- now that one is bad
RX 5700xt and  RX 5700 XT 50th Anniversary Edition
the four variants of the GTX 460 that are all called 460

Ah, thank you for the source there, so that density is them averaging all the logic, cache, and wires into one density number.

And most of these examples were anti-consumer too. What's your point?

 

Just because they've "done it before" doesn't excuse this practice.

 

192-bit "80-class" card - $900.

 

Wow, such innovation.

Zen 3 Daily Rig (2022 - Present): AMD Ryzen 9 5900X + Optimus Foundations AM4 | Nvidia RTX 3080 Ti FE + Alphacool Eisblock 3080 FE | G.Skill Trident Z Neo 32GB DDR4-3600 (@3733 c14) | ASUS Crosshair VIII Dark Hero | 2x Samsung 970 Evo Plus 2TB | Crucial MX500 1TB | Corsair RM1000x | Lian Li O11 Dynamic | LG 48" C1 | EK Quantum Kinetic TBE 200 w/ D5 | HWLabs GTX360 and GTS360 | Bitspower True Brass 14mm | Corsair 14mm White PMMA | ModMyMods Mod Water Clear | 9x BeQuiet Silent Wings 3 120mm PWM High Speed | Aquacomputer Highflow NEXT | Aquacomputer Octo

 

Test Bench: 

CPUs: Intel Core 2 Duo E8400, Core i5-2400, Core i7-4790K, Core i9-10900K, Core i3-13100, Core i9-13900KS

Motherboards: ASUS Z97-Deluxe, EVGA Z490 Dark, EVGA Z790 Dark Kingpin

GPUs: GTX 275 (RIP), 2x GTX 560, GTX 570, 2x GTX 650 Ti Boost, GTX 980, Titan X (Maxwell), x2 HD 6850

Bench: Cooler Master Masterframe 700 (bench mode)

Cooling: Heatkiller IV Pro Pure Copper | Koolance GPU-210 | HWLabs L-Series 360 | XSPC EX360 | Aquacomputer D5 | Bitspower Water Tank Z-Multi 250 | Monsoon Free Center Compressions | Mayhems UltraClear | 9x Arctic P12 120mm PWM PST


2 minutes ago, Sir Beregond said:

Wow, such innovation.

Very PCIe 5, very high quality chips, and very much "bigger is better". We'll just have to wait and see.


8 minutes ago, Sir Beregond said:

And most of these examples were anti-consumer too. What's your point?

 

Just because they've "done it before" doesn't excuse this practice.

 

192-bit "80-class" card - $900.

 

Wow, such innovation.

Same energy as being mad about a Koenigsegg having only 3 cylinders.

 

2 hours ago, Floong said:

Regarding input lag, we have only one example. From 24 rendered frames, 96 frames were created. The time between two real frames is 1/24 s ≈ 42 ms, but according to Nvidia the input lag was 58 ms, so generating the fake frames consumed roughly 16 ms on average. So for competitive games it will feel even worse than a plain 24 fps experience.

Next, what happens when someone applies this feature to a game already rendering 120 frames? Will we see 8 ms (1s/120) + ~16 ms of fake-frame input lag, and 480 frames?

This feature is useless for VR gaming and any competitive games, in my opinion.

 

 

You should look at that picture again.

Look to the left and notice that number under the 24? Yeah, that's the input lag at 24fps.

The argument everyone else is having is whether it will have more latency than DLSS 2.0, not more latency than with DLSS off.

Input lag is not the time between frames. Look up Nvidia Reflex; yes, it is an Nvidia product, but it has some solid information.


1 hour ago, Kilrah said:

DLSS3 isn't going to "increase input lag", it's just not going to decrease it / not as much compared to real fully calculated frames. 

 

Sure it's not going to be able to guess a player is suddenly appearing in a corner so for this type of game it's going to be pretty pointless, but there are lots of slower games where it'll likely be just fine.

I agree it is not increasing "input lag" in this case (the Cyberpunk demo); it is turning an otherwise unusable ray tracing mode with 166 ms of input lag into something playable in some cases. I'm mainly interested in VR gaming, and from my point of view ray tracing + DLSS 3.0 is useless there.

The Cyberpunk presentation is only one example of the technology. I hope someone will test the impact of the new DLSS without ray tracing and share the results.


34 minutes ago, Floong said:

it is turning an otherwise unusable ray tracing mode with 166 ms of input lag into something playable in some cases. I'm mainly interested in VR gaming, and from my point of view ray tracing + DLSS 3.0 is useless there.

I do wonder how DLSS 3 would work with foveated rendering for VR/XR.

 

[SIGGRAPH 2022] Noise-based Enhancement for Foveated Rendering

https://www.youtube.com/watch?v=Chitc_FTB5M


8 hours ago, Kilrah said:

DLSS3 isn't going to "increase input lag", it's just not going to decrease it / not as much compared to real fully calculated frames. 

 

Sure it's not going to be able to guess a player is suddenly appearing in a corner so for this type of game it's going to be pretty pointless, but there are lots of slower games where it'll likely be just fine.

Tell me if I'm remembering wrong, but I thought Nvidia themselves said that DLSS 3 (and/or the frame interpolation portion of it) will cause some increased input lag, but that it is offset by improved Nvidia Reflex?


1 hour ago, thechinchinsong said:

Tell me if I'm remembering wrong, but I thought Nvidia themselves said that DLSS 3 (and/or the frame interpolation portion of it) will cause some increased input lag, but that it is offset by improved Nvidia Reflex?

Yes, it's offset but not negated. 


Just became aware of the Twitter thread below by an NV VP. Some takeaway points:

 

DLSS 3 relies on an Optical Flow Accelerator (OFA) hardware unit. An OFA is also present in Turing and Ampere, but it is significantly improved in Ada. DLSS 3 could run on older RTX GPUs, but it wouldn't give the expected performance (both in fps and quality) that Ada will.

 

System latency with DLSS 3 is comparable to running without it. So not the same, but sounds like it'll be close enough for all but the most demanding twitch gamers.

 

 


3 hours ago, porina said:

sounds like it'll be close enough for all but the most demanding twitch gamers.

So possibly useless for esports shooters, great 😞 


4 minutes ago, Dogzilla07 said:

So possibly useless for esports shooters, great 😞 

I don't have a personal interest in that niche of gaming, but they're already running insane fps. Even without DLSS 3, the 40 series should offer some raw performance uplift if it is wanted. If it's all about speed and precision, I'd guess native rendering will remain the meta for that area. DLSS 3.0 seems more useful as an RT-heavy quality-of-life upgrade.


I still cannot believe nThiefia has the audacity to call the 4070 a "4080 12GB". For real, literally everyone who buys the 4080 12GB is basically fucked.

One of the most disgusting tactics ever. Unfortunately they can get away with it...

DAC/AMPs:

Klipsch Heritage Headphone Amplifier

Headphones: Klipsch Heritage HP-3 Walnut, Meze 109 Pro, Beyerdynamic Amiron Home, Amiron Wireless Copper, Tygr 300R, DT880 600ohm Manufaktur, T90, Fidelio X2HR

CPU: Intel 4770, GPU: Asus RTX3080 TUF Gaming OC, Mobo: MSI Z87-G45, RAM: DDR3 16GB G.Skill, PC Case: Fractal Design R4 Black non-iglass, Monitor: BenQ GW2280


23 minutes ago, CTR640 said:

I still cannot believe nThiefia has the audacity to call the 4070 a "4080 12GB". For real, literally everyone who buys the 4080 12GB is basically fucked.

One of the most disgusting tactics ever. Unfortunately they can get away with it...

Why would someone buying that be fucked? Are you implying that anyone currently running a 3070 or below is fucked?

 

I keep coming back to this and I really have to ask: Why does the name bother you? You're told what you're getting, never mind that doing your research on what you're buying is imperative regardless of the name on the box. This may be speculation, but to me, it sure seems like a lot of you complaining about the name are under the impression that calling a card an "80-class card" means something. As in, a 4080 necessarily has to be strictly comparable to a 3080 and all the previous cards under the same xx80-moniker. Why?

 

In the end, it's still as I pointed out from the beginning: Names don't matter and if you put great stock into what something is called instead of what performance you're buying, you just look silly. "Butbutbut the bus width", so what? the 3070 also had a larger bus width than the 4080 12GB. As did all the cards down to the 3060 Ti. "Butbutbut the number of CUDA cores", so what? The 30-series core clocks are almost a GHz lower than the 40-series. There are so many differences and variations that comparing any card to the previous one in any "class" is pointless based on specs alone, let alone the fucking name on the box. Compare performance and buy accordingly, not based on what name you'd like to put into your forum signature.

And now a word from our sponsor: 💩


52 minutes ago, CTR640 said:

I still cannot believe nThiefia has the audacity to call the 4070 a "4080 12GB". For real, literally everyone who buys the 4080 12GB is basically fucked.

One of the most disgusting tactics ever. Unfortunately they can get away with it...

And Nvidia priced a 4070/4070 Ti class card at $900: $200 more than the previous 3080 10GB and $400 more than the 3070. I find the pricing rather egregious, given the 30 series was already more expensive and prices went up again after the 30 series launch.

Nvidia also gets away with not allowing Turing and Ampere card users to use DLSS 3. The hardware is present, but Nvidia wants people to buy RTX 40 series cards to use the latest DLSS, and there are no complaints about this because it's Nvidia.

30 minutes ago, Avocado Diaboli said:

Why would someone buying that be fucked? Are you implying that anyone currently running a 3070 or below is fucked?

 

I keep coming back to this and I really have to ask: Why does the name bother you? You're told what you're getting, never mind that doing your research on what you're buying is imperative regardless of the name on the box. This may be speculation, but to me, it sure seems like a lot of you complaining about the name are under the impression that calling a card an "80-class card" means something. As in, a 4080 necessarily has to be strictly comparable to a 3080 and all the previous cards under the same xx80-moniker. Why?

 

In the end, it's still as I pointed out from the beginning: Names don't matter and if you put great stock into what something is called instead of what performance you're buying, you just look silly. "Butbutbut the bus width", so what? the 3070 also had a larger bus width than the 4080 12GB. As did all the cards down to the 3060 Ti. "Butbutbut the number of CUDA cores", so what? The 30-series core clocks are almost a GHz lower than the 40-series. There are so many differences and variations that comparing any card to the previous one in any "class" is pointless based on specs alone, let alone the fucking name on the box. Compare performance and buy accordingly, not based on what name you'd like to put into your forum signature.

Anyone buying the 4080 12GB without knowing the difference got f*cked by Nvidia's deceptive marketing.

By asking whether anyone running a 3070 or below is screwed, you're admitting the naming is important. The 4080 12GB should be called a 4070 Ti, and I think even calling it a Ti would be generous, as the real 4070 might get cut down a lot, like the 3070 was. The 3070 felt like "oh, you're poor, so you only get 8GB of VRAM": a card that won't last nearly as long without turning things way down so VRAM isn't a limiter.

The naming is important because nothing in the names "4080 12GB" and "4080 16GB" signals the significant differences in specifications, such as the gap in CUDA cores or bus width. The problem is that most people looking at graphics cards on a shelf or at an online retailer won't know the difference and maybe won't care; they'll just see the 4080 12GB being $300 less and buy that, without knowing they're getting screwed over, paying $900 for a cut-down card that uses a completely different die from the real 4080.


25 minutes ago, Avocado Diaboli said:

Why would someone buying that be fucked? Are you implying that anyone currently running a 3070 or below is fucked?

 

I keep coming back to this and I really have to ask: Why does the name bother you? You're told what you're getting, never mind that doing your research on what you're buying is imperative regardless of the name on the box. This may be speculation, but to me, it sure seems like a lot of you complaining about the name are under the impression that calling a card an "80-class card" means something. As in, a 4080 necessarily has to be strictly comparable to a 3080 and all the previous cards under the same xx80-moniker. Why?

 

In the end, it's still as I pointed out from the beginning: Names don't matter and if you put great stock into what something is called instead of what performance you're buying, you just look silly. "Butbutbut the bus width", so what? the 3070 also had a larger bus width than the 4080 12GB. As did all the cards down to the 3060 Ti. "Butbutbut the number of CUDA cores", so what? The 30-series core clocks are almost a GHz lower than the 40-series. There are so many differences and variations that comparing any card to the previous one in any "class" is pointless based on specs alone, let alone the fucking name on the box. Compare performance and buy accordingly, not based on what name you'd like to put into your forum signature.

Great, another Jensen worshipper. I'm talking about the 4080, not the 3070, so why are you even bringing that up in the first place?

The 3080 10GB and 3080 12GB were already a lame pair of choices, but at least they don't differ in CUDA cores and speed? If they do, I would like to be corrected.

But this 4080 nonsense is on a different level. The performance will very likely not be the same between the 4080 12GB and the 4080 16GB, so yeah, consumers get fucked by getting the nerfed 4080 instead of the 4080, the 4080, or the 4080...

