
NVIDIA Project Beyond GTC Keynote with CEO Jensen Huang: RTX 4090 + RTX 4080 Revealed

5 minutes ago, leadeater said:

Wait for the EK blocks to come out. Typically they do a block for reference PCB cards and then a few others like the Asus Strix. Anyway, it would be a really bad idea to buy a GPU with a custom PCB now with the intention of putting a water block on it without even knowing if EK will make a block for it.

I'm just going by what EK said; looks like they already have one ready for the FE and are making more for the most popular AIB cards.

 

EK-Quantum Vector² FE RTX 4090 water blocks, backplates, and active backplates are compatible with the NVIDIA GeForce RTX 4090 Founders Edition GPU. The EK Cooling Configurator will be updated regularly with AIB partner PCBs and models as new info comes in. EK plans to provide all popular AIB models with their own water blocks to ensure customers have a wide range of choices depending on their preferred brand or graphics card size requirements.

 

[Image: EK-Quantum Vector² FE RTX 4090 D-RGB]

CPU: i9-13900KS | Motherboard: Asus Z790 Hero | Graphics: ASUS TUF 4090 OC | RAM: GSkill 7600 DDR5 | Screen: ASUS 48" OLED 138Hz


4 minutes ago, Shzzit said:

EK plans to provide all popular AIB models with their own water blocks to ensure customers have a wide range of choices depending on their preferred brand or graphics card size requirements.

That was ages ago and I got burned: I bought a card that I was going to water cool, didn't jump on it soon enough, and EK stopped making the blocks for that card, only stocking the reference PCB ones. So yeah, get it while it's in stock; don't be me and wait until it's too late haha.

 

My current one came pre-installed with an EK block, a Powercolor Liquid Devil. Have to say, even though it's not hard to put a water block on, it sure was nice not having to do it.


17 minutes ago, Shzzit said:

So if I'm just gonna be getting an EK water block as soon as they come out, WHICH card would be best?

FE, or maybe the highest-wattage AIB card like the Asus Strix?

Or does it not matter, because they will all do the same MHz at the same low temp?

It's only ever really mattered if you're trying to get every last MHz out of the card, which is getting harder and harder to do considering that Nvidia is cracking down on things like BIOS editing and external voltage control.

Our Grace. The Feathered One. He shows us the way. His bob is majestic and shows us the path. Follow unto his guidance and His example. He knows the one true path. Our Saviour. Our Grace. Our Father Birb has taught us with His humble heart and gentle wing the way of the bob. Let us show Him our reverence and follow in His example. The True Path of the Feathered One. ~ Dimboble-dubabob III


4 minutes ago, leadeater said:

That was ages ago and I got burned: I bought a card that I was going to water cool, didn't jump on it soon enough, and EK stopped making the blocks for that card, only stocking the reference PCB ones. So yeah, get it while it's in stock; don't be me and wait until it's too late haha.

 

My current one came pre-installed with an EK block, a Powercolor Liquid Devil. Have to say, even though it's not hard to put a water block on, it sure was nice not having to do it.

Haha nice, can't wait to upgrade. Going for the new Asus 48" OLED that's 138Hz. By the looks of the leaks, the 4090 should do 4K 138Hz pretty easily in most games.

CPU: i9-13900KS | Motherboard: Asus Z790 Hero | Graphics: ASUS TUF 4090 OC | RAM: GSkill 7600 DDR5 | Screen: ASUS 48" OLED 138Hz


30 minutes ago, Shzzit said:

So if I'm just gonna be getting an EK water block as soon as they come out, WHICH card would be best?

FE, or maybe the highest-wattage AIB card like the Asus Strix?

Or does it not matter, because they will all do the same MHz at the same low temp?

The FE is usually more limited. 450W is going to be the same as the TUF; the Strix is going to be higher, according to their launch info.

You also likely won't be able to flash the FE without messing up video outputs or something.

The FE is also worse for power modding like shunt mods.

TL;DR: buy a Strix if you want the best one.

Workstation:  14700nonK || Asus Z790 ProArt Creator || MSI Gaming Trio 4090 Shunt || Crucial Pro Overclocking 32GB @ 5600 || Corsair AX1600i@240V || whole-house loop.

LANRig/GuestGamingBox: 13700K @ Stock || MSI Z690 DDR4 || ASUS TUF 3090 650W shunt || Corsair SF600 || CPU+GPU watercooled 280 rad pull only || whole-house loop.

Server Router (Untangle): 13600k @ Stock || ASRock Z690 ITX || All 10Gbe || 2x8GB 3200 || PicoPSU 150W 24pin + AX1200i on CPU|| whole-house loop

Server Compute/Storage: 10850K @ 5.1Ghz || Gigabyte Z490 Ultra || EVGA FTW3 3090 1000W || LSI 9280i-24 port || 4TB Samsung 860 Evo, 5x10TB Seagate Enterprise Raid 6, 4x8TB Seagate Archive Backup ||  whole-house loop.

Laptop: HP Elitebook 840 G8 (Intel 1185G7) + 3060 RTX Thunderbolt Dock, Razer Blade Stealth 13" 2017 (Intel 8550U)


1 minute ago, AnonymousGuy said:

The FE is usually more limited. 450W is going to be the same as the TUF; the Strix is going to be higher, according to their launch info.

You also likely won't be able to flash the FE without messing up video outputs or something.

The FE is also worse for power modding like shunt mods.

TL;DR: buy a Strix if you want the best one.

Aww, that makes sense, thanks a lot. Ya, I totally forgot about dual BIOS too, def should get that; I love OC and tinkering. Man, I miss EVGA already, was gonna get an FTW card but 8(.

I have been eyeballing the Strix. And ya, I'm guessing it will be 500 or so watts vs the stock 450.

I'm so excited.

CPU: i9-13900KS | Motherboard: Asus Z790 Hero | Graphics: ASUS TUF 4090 OC | RAM: GSkill 7600 DDR5 | Screen: ASUS 48" OLED 138Hz


There are some seriously questionable numbers in the Cyberpunk 2077 RT-on/DLSS demo. A 3090 Ti runs 4K Ultra with RT at 90+ FPS with DLSS and around 40-ish with RT off, so for the 22-to-90+ FPS jump I have to ask what card and settings they are even running when a 3090 Ti is already doing the FPS shown. Hopefully it's not the 4090, because a 3090 Ti at 8K Ultra with RT is doing around 30 FPS, so if either the 4080 or the 4090 can only do 22 FPS without DLSS 3.0, how can they possibly be making the claims they are about performance uplifts?

Their flight sim demo has what appears to be the same FPS as a 3080 at 4K with DLSS off (based on their own website's numbers: https://www.nvidia.com/en-us/geforce/news/microsoft)... not a great look when it comes to "2-4x the performance" of the 30 series.

It's looking more and more like the 2-4x increase (based on the numbers shown) is comparing the 30 series without DLSS to the 40 series with DLSS 3.0, which absolutely misrepresents the performance of the cards.

Guess we'll see during the upcoming independent reviews, but I'm actively in the market for a GPU, and the current showing isn't pushing me to want to stand in line for a 40-series if these are the claims without the substance of numbers behind them. If Nvidia were so confident about their performance uplift in games, they would put up the FPS numbers instead of playing the Apple charts game with zero qualifiers. This looks just like their 30-series launch, where it was claimed the 3090 was 50% faster than the Titan RTX but that amounted to 23 FPS on the Titan versus 29 FPS on the 3090 in Death Stranding, or a 1 FPS delta in RDR2.

That's not even getting into Fortnite at 1440p at over 600 FPS on a 4090 with "e-sports high" settings (which isn't even a thing!). It's more likely they were using Performance mode, which, more realistically, goes from 470-500 FPS on a 3090 Ti to 600 FPS on the 4090.

Honestly, at the end of the day, 2-4x isn't going to be a realistic performance bump within these game engines just by throwing more cores at them. Without a DLSS-versus-no-DLSS comparison or scummy trickery with settings changes to misrepresent the improvement, it's just not possible to make that big of an improvement simply by changing nodes or chip structures between generations. This is far too common and frankly should be classified as false advertising if they aren't willing to put real-world numbers to their claims in their own launch slides, as if they can't benchmark their products against the previous ones. (Or are the 30-series cards all still stuck in warehouses, being held to manipulate the market like their investor meeting mentioned?)
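A quick back-of-the-envelope sketch of that point in Python, using only the frame rates quoted above (my own illustration, not anything from Nvidia's slides):

# Rough sanity check using only the frame rates quoted in this post; purely illustrative.

keynote_dlss3_on = 90   # claimed 40-series figure with DLSS 3 enabled
keynote_dlss_off = 22   # claimed figure with DLSS off
ampere_dlss_on   = 90   # 3090 Ti at 4K Ultra with RT and DLSS, per the post

print(f"Jump shown in the keynote (DLSS off -> DLSS 3): {keynote_dlss3_on / keynote_dlss_off:.1f}x")
print(f"Like-for-like, both cards using DLSS:           {keynote_dlss3_on / ampere_dlss_on:.1f}x")
# ~4.1x vs ~1.0x -- the headline uplift depends entirely on which comparison is made.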

The best gaming PC is the PC you like to game on, how you like to game on it


That US $900 12GB "4080" card is a joke. You can get a used 3090 card from eBay for $100 less than that, which has twice as much VRAM and will probably outperform it in most games if you don't have the RTX settings maxed out.


16 minutes ago, GhostRoadieBL said:

 

Guess we'll see during the upcoming independent reviews, but I'm actively in the market for a GPU, and the current showing isn't pushing me to want to stand in line for a 40-series if these are the claims without the substance of numbers behind them. If Nvidia were so confident about their performance uplift in games, they would put up the FPS numbers instead of playing the Apple charts game with zero qualifiers. This looks just like their 30-series launch, where it was claimed the 3090 was 50% faster than the Titan RTX but that amounted to 23 FPS on the Titan versus 29 FPS on the 3090 in Death Stranding, or a 1 FPS delta in RDR2.

 

They did the same thing with 3000 vs. 2000, where they were like "here's DLSS on with the 3090, and DLSS off with the Titan RTX" and got to some bullshit conclusion like "it's 8K-ready graphics with RTX on".

Realistically it was "fuck off Nvidia, 30 fps at 8K with DLSS set to speed over quality doesn't mean shit".

To this day I don't play anything with RTX on with the 3090 because it craters FPS.

Workstation:  14700nonK || Asus Z790 ProArt Creator || MSI Gaming Trio 4090 Shunt || Crucial Pro Overclocking 32GB @ 5600 || Corsair AX1600i@240V || whole-house loop.

LANRig/GuestGamingBox: 13700K @ Stock || MSI Z690 DDR4 || ASUS TUF 3090 650W shunt || Corsair SF600 || CPU+GPU watercooled 280 rad pull only || whole-house loop.

Server Router (Untangle): 13600k @ Stock || ASRock Z690 ITX || All 10Gbe || 2x8GB 3200 || PicoPSU 150W 24pin + AX1200i on CPU|| whole-house loop

Server Compute/Storage: 10850K @ 5.1Ghz || Gigabyte Z490 Ultra || EVGA FTW3 3090 1000W || LSI 9280i-24 port || 4TB Samsung 860 Evo, 5x10TB Seagate Enterprise Raid 6, 4x8TB Seagate Archive Backup ||  whole-house loop.

Laptop: HP Elitebook 840 G8 (Intel 1185G7) + 3060 RTX Thunderbolt Dock, Razer Blade Stealth 13" 2017 (Intel 8550U)


46 minutes ago, AnonymousGuy said:

They did the same thing with 3000 vs. 2000, where they were like "here's DLSS on with the 3090, and DLSS off with the Titan RTX" and got to some bullshit conclusion like "it's 8K-ready graphics with RTX on".

Realistically it was "fuck off Nvidia, 30 fps at 8K with DLSS set to speed over quality doesn't mean shit".

To this day I don't play anything with RTX on with the 3090 because it craters FPS.

Yep, ray tracing is a cool tech demo, but rasterized performance is still king. Even enabling ray-traced shadows in WoW tanks my framerate. The most-played games don't support ray tracing at all.

This announcement did exactly what it was intended to do: make the 30 series seem like a good deal.

5800X3D / ASUS X570 Dark Hero / 64GB 3600mhz / EVGA RTX 3090ti FTW3 Ultra / Dell S3422DWG / Razer Deathstalker v2 / Razer Basilisk v3 Pro / Sennheiser HD 600

2021 Razer Blade 14 3070 / iPhone 15 Pro Max


6 minutes ago, vetali said:

Yep, ray tracing is a cool tech demo, but rasterized performance is still king. Even enabling ray-traced shadows in WoW tanks my framerate. The most-played games don't support ray tracing at all.

This announcement did exactly what it was intended to do: make the 30 series seem like a good deal.

Yeah, right now I'd say that if someone is thinking "maybe I should get a 4070"... no... go buy a higher-tier 3080 Ti or 3090 right now. Prices of 3000-series cards are only going to go up or stay flat from here as the new inventory burns out and the mining dump comes to an end.

Workstation:  14700nonK || Asus Z790 ProArt Creator || MSI Gaming Trio 4090 Shunt || Crucial Pro Overclocking 32GB @ 5600 || Corsair AX1600i@240V || whole-house loop.

LANRig/GuestGamingBox: 13700K @ Stock || MSI Z690 DDR4 || ASUS TUF 3090 650W shunt || Corsair SF600 || CPU+GPU watercooled 280 rad pull only || whole-house loop.

Server Router (Untangle): 13600k @ Stock || ASRock Z690 ITX || All 10Gbe || 2x8GB 3200 || PicoPSU 150W 24pin + AX1200i on CPU|| whole-house loop

Server Compute/Storage: 10850K @ 5.1Ghz || Gigabyte Z490 Ultra || EVGA FTW3 3090 1000W || LSI 9280i-24 port || 4TB Samsung 860 Evo, 5x10TB Seagate Enterprise Raid 6, 4x8TB Seagate Archive Backup ||  whole-house loop.

Laptop: HP Elitebook 840 G8 (Intel 1185G7) + 3060 RTX Thunderbolt Dock, Razer Blade Stealth 13" 2017 (Intel 8550U)


1 minute ago, AnonymousGuy said:

Yeah, right now I'd say that if someone is thinking "maybe I should get a 4070"... no... go buy a higher-tier 3080 Ti or 3090 right now. Prices of 3000-series cards are only going to go up or stay flat from here as the new inventory burns out and the mining dump comes to an end.

They have a long way to go down still before going up. And we don't have a clue how aggressive AMD will be with RDNA3.

Location: Kaunas, Lithuania, Europe, Earth, Solar System, Local Interstellar Cloud, Local Bubble, Gould Belt, Orion Arm, Milky Way, Milky Way subgroup, Local Group, Virgo Supercluster, Laniakea, Pisces–Cetus Supercluster Complex, Observable universe, Universe.

Spoiler

12700, B660M Mortar DDR4, 32GB 3200C16 Viper Steel, 2TB SN570, EVGA Supernova G6 850W, be quiet! 500FX, EVGA 3070Ti FTW3 Ultra.

 


4 minutes ago, ZetZet said:

They have a long way to go down still before going up. And we don't have a clue how aggressive AMD will be with RDNA3.

The price is already going up from where it was a couple of weeks ago. Everyone thinks an event like the Ethereum merge killing mining, or a new release, is the exact date the price is at its best. Really, it's mostly priced in by that point, and the lowest point is a couple of weeks before. Something to keep in mind is that every GPU that was sold to miners would have been sold to a gamer otherwise, so there's not going to be a huge volume of GPUs on the market with no buyers, driving the price down. And judging by the high prices on the 4000 series, that's going to vacuum up even more of the used 3000 inventory out there right now, when it's like "hey, you can spend $700 on a 3080 Ti right now and probably do better than a $900 4080 12GB".

 

AMD is a non-factor, really. 15% of the Steam survey is AMD. That tells me most people straight up don't care what AMD releases.

Workstation:  14700nonK || Asus Z790 ProArt Creator || MSI Gaming Trio 4090 Shunt || Crucial Pro Overclocking 32GB @ 5600 || Corsair AX1600i@240V || whole-house loop.

LANRig/GuestGamingBox: 13700K @ Stock || MSI Z690 DDR4 || ASUS TUF 3090 650W shunt || Corsair SF600 || CPU+GPU watercooled 280 rad pull only || whole-house loop.

Server Router (Untangle): 13600k @ Stock || ASRock Z690 ITX || All 10Gbe || 2x8GB 3200 || PicoPSU 150W 24pin + AX1200i on CPU|| whole-house loop

Server Compute/Storage: 10850K @ 5.1Ghz || Gigabyte Z490 Ultra || EVGA FTW3 3090 1000W || LSI 9280i-24 port || 4TB Samsung 860 Evo, 5x10TB Seagate Enterprise Raid 6, 4x8TB Seagate Archive Backup ||  whole-house loop.

Laptop: HP Elitebook 840 G8 (Intel 1185G7) + 3060 RTX Thunderbolt Dock, Razer Blade Stealth 13" 2017 (Intel 8550U)


I think people are focusing too much on the specs without knowing how the performance really stacks up. We don't know if they're using the same die just cut down, or whether they're all using different dies.

Last gen, performance versus CUDA cores varied depending on which die was being used, even with the same architecture.

Just from a few quick calculations at 1080p, the GA102 (3080/3080 Ti/3090/3090 Ti) averages about 70-75 CUDA cores per frame, whereas the GA104 (3060 Ti/3070/3070 Ti) averages around 50-55 CUDA cores per frame.

So, if the 4080 12GB really is just a '4070' in disguise, it would use the next die down the stack.

For argument's sake, let's say they have exactly the same performance per CUDA core as the 30 series. That could actually put the 12GB about 7% ahead of the 16GB in performance despite having fewer CUDA cores!

Obviously that won't be the case, but it's exactly why you should never just look at the specs and think more = better!

And just to add to the argument: IMO, it's not that the 4080 16GB is the 'true' 4080 and the 4080 12GB is the 4070. To me, the 4080 16GB is the 4080 Ti and the 4080 12GB is the 'true' 4080.
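If anyone wants to play with the arithmetic behind that 7% figure, here's a tiny sketch. The cores-per-frame numbers are the rough Ampere averages from above, and the assumption that Ada scales the same way is, as said, almost certainly wrong:

# Toy projection of the two 4080s using the Ampere cores-per-frame estimates above.
# Assumes Ada scales exactly like Ampere, which it almost certainly does not.

cards = {
    # name: (CUDA cores, assumed CUDA cores needed per frame at 1080p)
    "RTX 4080 16GB (AD103-class die)": (9728, 75),   # big-die scaling, like GA102
    "RTX 4080 12GB (AD104-class die)": (7680, 55),   # small-die scaling, like GA104
}

projected_fps = {name: cores / per_frame for name, (cores, per_frame) in cards.items()}
for name, fps in projected_fps.items():
    print(f"{name}: ~{fps:.0f} fps")

gap = projected_fps["RTX 4080 12GB (AD104-class die)"] / projected_fps["RTX 4080 16GB (AD103-class die)"] - 1
print(f"12GB vs 16GB under these assumptions: {gap:+.0%}")   # roughly +7-8%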

Laptop:

Spoiler

HP OMEN 15 - Intel Core i7 9750H, 16GB DDR4, 512GB NVMe SSD, Nvidia RTX 2060, 15.6" 1080p 144Hz IPS display

PC:

Spoiler

Vacancy - Looking for applicants, please send CV

Mac:

Spoiler

2009 Mac Pro 8 Core - 2 x Xeon E5520, 16GB DDR3 1333 ECC, 120GB SATA SSD, AMD Radeon 7850. Soon to be upgraded to 2 x 6 Core Xeons

Phones:

Spoiler

LG G6 - Platinum (The best colour of any phone, period)

LG G7 - Moroccan Blue

 


Would love to see what kind of review guide they're sending to reviewers.

Nvidia: We've included some edibles with the review cards, feel free to chew them while doing long hours of benchmarks, we love you. =)

AMD will price RDNA3 according to 4000-series performance imo; they stopped caring about market share a long time ago because you guys will buy Nvidia no matter how high the price is anyway.

| Intel i7-3770@4.2Ghz | Asus Z77-V | Zotac 980 Ti Amp! Omega | DDR3 1800mhz 4GB x4 | 300GB Intel DC S3500 SSD | 512GB Plextor M5 Pro | 2x 1TB WD Blue HDD |
 | Enermax NAXN82+ 650W 80Plus Bronze | Fiio E07K | Grado SR80i | Cooler Master XB HAF EVO | Logitech G27 | Logitech G600 | CM Storm Quickfire TK | DualShock 4 |


10 minutes ago, HenrySalayne said:

AD104 spec: 7680 CUDA cores, 12GB GDDR6X, 192-bit memory interface

4080 12GB spec: 7680 CUDA cores, 12GB GDDR6X, 192-bit memory interface

AD103 spec: 10752 CUDA cores, 16GB GDDR6X, 256-bit memory interface

4080 16GB spec: 9728 CUDA cores, 16GB GDDR6X, 256-bit memory interface

 

So, they are using different dies, which could very well mean the performance gap between them is a lot smaller than the numbers would suggest.
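Putting rough numbers on how far apart those two configurations are, using just the specs quoted above (clocks and cache ignored):

# Relative difference between the two 4080 configurations listed above.

spec_16g = {"CUDA cores": 9728, "bus width (bits)": 256, "VRAM (GB)": 16}
spec_12g = {"CUDA cores": 7680, "bus width (bits)": 192, "VRAM (GB)": 12}

for key in spec_16g:
    delta = (spec_12g[key] - spec_16g[key]) / spec_16g[key]
    print(f"{key}: {spec_12g[key]} vs {spec_16g[key]} ({delta:+.0%})")
# CUDA cores: -21%, bus width: -25%, VRAM: -25% -- a much bigger cut-down than
# the shared "4080" name suggests.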

Laptop:

Spoiler

HP OMEN 15 - Intel Core i7 9750H, 16GB DDR4, 512GB NVMe SSD, Nvidia RTX 2060, 15.6" 1080p 144Hz IPS display

PC:

Spoiler

Vacancy - Looking for applicants, please send CV

Mac:

Spoiler

2009 Mac Pro 8 Core - 2 x Xeon E5520, 16GB DDR3 1333 ECC, 120GB SATA SSD, AMD Radeon 7850. Soon to be upgraded to 2 x 6 Core Xeons

Phones:

Spoiler

LG G6 - Platinum (The best colour of any phone, period)

LG G7 - Moroccan Blue

 


2 minutes ago, yolosnail said:

 

So, they are using different dies, which could very well mean the performance gap between them is a lot smaller than the numbers would suggest.

Why would that be?  How is something with 30% less compute resource on a narrower bus supposed to make up the difference?

Workstation:  14700nonK || Asus Z790 ProArt Creator || MSI Gaming Trio 4090 Shunt || Crucial Pro Overclocking 32GB @ 5600 || Corsair AX1600i@240V || whole-house loop.

LANRig/GuestGamingBox: 13700K @ Stock || MSI Z690 DDR4 || ASUS TUF 3090 650W shunt || Corsair SF600 || CPU+GPU watercooled 280 rad pull only || whole-house loop.

Server Router (Untangle): 13600k @ Stock || ASRock Z690 ITX || All 10Gbe || 2x8GB 3200 || PicoPSU 150W 24pin + AX1200i on CPU|| whole-house loop

Server Compute/Storage: 10850K @ 5.1Ghz || Gigabyte Z490 Ultra || EVGA FTW3 3090 1000W || LSI 9280i-24 port || 4TB Samsung 860 Evo, 5x10TB Seagate Enterprise Raid 6, 4x8TB Seagate Archive Backup ||  whole-house loop.

Laptop: HP Elitebook 840 G8 (Intel 1185G7) + 3060 RTX Thunderbolt Dock, Razer Blade Stealth 13" 2017 (Intel 8550U)


22 minutes ago, AnonymousGuy said:

Why would that be?  How is something with 30% less compute resource on a narrower bus supposed to make up the difference?

 

Because, as I demonstrated in my previous post, the smaller dies have in the past typically provided more FPS per CUDA core than the higher-end dies, so performance does not scale linearly when you're changing dies. If everything were on the same die, the scaling would be pretty linear.

Of course, with the new architecture, maybe things will scale more linearly across dies, but we just don't know until reviews are in!

Laptop:

Spoiler

HP OMEN 15 - Intel Core i7 9750H, 16GB DDR4, 512GB NVMe SSD, Nvidia RTX 2060, 15.6" 1080p 144Hz IPS display

PC:

Spoiler

Vacancy - Looking for applicants, please send CV

Mac:

Spoiler

2009 Mac Pro 8 Core - 2 x Xeon E5520, 16GB DDR3 1333 ECC, 120GB SATA SSD, AMD Radeon 7850. Soon to be upgraded to 2 x 6 Core Xeons

Phones:

Spoiler

LG G6 - Platinum (The best colour of any phone, period)

LG G7 - Moroccan Blue

 


1 hour ago, AnonymousGuy said:

AMD is a non-factor, really. 15% of the Steam survey is AMD. That tells me most people straight up don't care what AMD releases.

I'd wager it's the same as it was in the CPU space: AMD GPUs used to be so far behind that nobody cared about them and there was little point in buying them. Now they are back in the game and could very well become relevant in the GPU market, especially now that Nvidia is screwing over customers with confusing naming schemes and bloated prices. Of course, market share is not going to change overnight, but right now AMD could compete on pricing quite easily.

 

I'd say DLSS 3 is a gimmick and with the information we got in the keynote I feel like it's only going to work in cases where the frame rate was already good enough. You can't fix input lag by generating completely new frames out of thin air. So yes, it might trick some customers shopping at the lower end of the GPU market into choosing Nvidia (and maybe even paying a bit more) because of the DLSS 3 marketing, but overall it's not going to be as much of an advantage for Nvidia as DLSS 1 and 2 were against AMD.

 

The bright side is that at least I'm now waiting for the RDNA3 release date with much greater interest than I was just yesterday 😄 I have an RTX 3090 but I'm in need of another high-end PC, and it looks like I'm going to buy AMD this time around and see how they work nowadays. The last Radeon I had was an HD 6950, so it's been a while...


13 hours ago, BiG StroOnZ said:

RTX 4090 is $1599, RTX 4080 16GB is $1199, and RTX 4080 12GB is $899.

Australian prices are out. 

RTX 4080 12GB: $1659 AUD ($1100 USD)

RTX 4080 16GB: $2219 AUD ($1500 USD)

RTX 4090: $2959 AUD ($2000 USD)

 

I think I'll be keeping my 1080ti a little longer. 
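For anyone sanity-checking the conversion: the USD figures roughly match a straight exchange-rate conversion (the ~0.675 USD/AUD rate below is my own ballpark assumption), and the gap looks even worse once you remember Australian retail prices include 10% GST:

# Straight conversion of the listed AUD prices; exchange rate is an assumed ballpark.

AUD_TO_USD = 0.675   # assumed rate, not an official figure
GST = 0.10           # included in Australian retail prices

prices_aud = {"RTX 4080 12GB": 1659, "RTX 4080 16GB": 2219, "RTX 4090": 2959}

for card, aud in prices_aud.items():
    usd = aud * AUD_TO_USD
    usd_ex_gst = aud / (1 + GST) * AUD_TO_USD
    print(f"{card}: ${aud} AUD ≈ ${usd:,.0f} USD (≈ ${usd_ex_gst:,.0f} USD ex-GST)")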

CPU: Intel i7 6700k  | Motherboard: Gigabyte Z170x Gaming 5 | RAM: 2x16GB 3000MHz Corsair Vengeance LPX | GPU: Gigabyte Aorus GTX 1080ti | PSU: Corsair RM750x (2018) | Case: BeQuiet SilentBase 800 | Cooler: Arctic Freezer 34 eSports | SSD: Samsung 970 Evo 500GB + Samsung 840 500GB + Crucial MX500 2TB | Monitor: Acer Predator XB271HU + Samsung BX2450


3 hours ago, yolosnail said:

I think people are focusing too much on the specs without knowing how the performance really stacks up. We don't know if they're using the same die just cut down, or whether they're all using different dies.

Last gen, performance versus CUDA cores varied depending on which die was being used, even with the same architecture.

Just from a few quick calculations at 1080p, the GA102 (3080/3080 Ti/3090/3090 Ti) averages about 70-75 CUDA cores per frame, whereas the GA104 (3060 Ti/3070/3070 Ti) averages around 50-55 CUDA cores per frame.

So, if the 4080 12GB really is just a '4070' in disguise, it would use the next die down the stack.

For argument's sake, let's say they have exactly the same performance per CUDA core as the 30 series. That could actually put the 12GB about 7% ahead of the 16GB in performance despite having fewer CUDA cores!

Obviously that won't be the case, but it's exactly why you should never just look at the specs and think more = better!

And just to add to the argument: IMO, it's not that the 4080 16GB is the 'true' 4080 and the 4080 12GB is the 4070. To me, the 4080 16GB is the 4080 Ti and the 4080 12GB is the 'true' 4080.

I think you misunderstand why the smaller core-count chips get more frames per core.
It's simply game engines and drivers failing to use all the cores at once. It's a highly parallel process, sure, but it's not using n threads.

It's like a 1000-lane highway where at, say, 4 am only 100 lanes get used and at rush hour all of them are used,
versus a 500-lane highway where at 4 am only 100 get used and at rush hour all of them are used.

In that 4 am case, you just used 10 lanes per car on one GPU and 5 lanes per car on the other.

The die being used had minimal effect on CUDA performance. A GA102 cut down to GA104 specs will perform similarly. You saw a good example of this with the 2060 KO, where a TU104 was cut down to the equivalent core count of the TU106 and they performed equivalently in gaming benchmarks, the KO only pulling ahead on certain compute tasks that looked for resources available only on the TU104.

It's genuinely difficult to keep 10k cores fed with data, and in many situations that simply doesn't happen; there are a bunch of bubbles.

But yes, the performance difference between the 4080 12G and 4080 16G is being exaggerated here... and it also feels like a bad point when they are in completely different price brackets.

Most generations have had a card, or several cards, in the lineup with the same name but different RAM amounts that perform differently; it's not misleading here like it has been in the past (460 SE, 460, 460 OEM as an example; also some 460s had twice the RAM).

 

Basically, wait a bit for the benchmark suites to see how big the spread is, and recognize that it's a $300 difference between $900 and $1200.
Also, there is not much slack to make that card cost much less than $900; I'm not sure Nvidia, even with the "names don't mean anything" mentality, really wants to be going around with an xx70-tier card costing $900.
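A toy version of that highway picture, just to make the utilisation point concrete. Only the AD104/AD103 core counts come from this thread; everything else is made up for illustration:

# Toy occupancy model: if a frame only exposes a limited amount of parallel work,
# a bigger die just leaves more cores idle. The work figure is invented for illustration.

def busy_cores(total_cores: int, parallel_work_items: int) -> int:
    """Cores that actually get work when only `parallel_work_items` are available."""
    return min(total_cores, parallel_work_items)

parallel_work_items = 6000            # hypothetical work available at one moment
for total in (7680, 10752, 16384):    # AD104, AD103, plus a larger hypothetical die
    used = busy_cores(total, parallel_work_items)
    print(f"{total:>5} cores -> {used} busy ({used / total:.0%} utilisation)")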

 

1 hour ago, Spotty said:

Australian prices are out. 

RTX 4080 12GB: $1659 AUD ($1100 USD)

RTX 4080 16GB: $2219 AUD ($1500 USD)

RTX 4090: $2959 AUD ($2000 USD)

 

I think I'll be keeping my 1080ti a little longer. 

get a 4060 and it will still be 2x faster or something, idk. 
 


6 minutes ago, eGetin said:

I'd say DLSS 3 is a gimmick and with the information we got in the keynote I feel like it's only going to work in cases where the frame rate was already good enough. You can't fix input lag by generating completely new frames out of thin air.

Actually, yes, yes you can! That's the proven ability of machine learning. While it doesn't seem like all that much, smoothing out the 1% and 0.1% lows would be a substantial improvement in eliminating micro-stutter.
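For anyone wondering what those 1% / 0.1% lows actually measure, here is one common way to compute them from frame times (synthetic data; review sites differ slightly in how they define the metric):

# One common way to compute 1% / 0.1% low FPS from frame times (synthetic data).

import random

random.seed(0)
frame_times_ms = [random.gauss(7.0, 0.5) for _ in range(10_000)]   # ~143 fps average
for i in range(0, len(frame_times_ms), 500):
    frame_times_ms[i] += 25.0                                       # injected micro-stutter

def low_fps(times_ms, fraction):
    """FPS implied by the average of the worst `fraction` of frame times."""
    worst = sorted(times_ms, reverse=True)
    n = max(1, int(len(worst) * fraction))
    return 1000.0 / (sum(worst[:n]) / n)

print(f"average fps:  {len(frame_times_ms) * 1000.0 / sum(frame_times_ms):.0f}")
print(f"1% low fps:   {low_fps(frame_times_ms, 0.01):.0f}")
print(f"0.1% low fps: {low_fps(frame_times_ms, 0.001):.0f}")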


6 minutes ago, StDragon said:

Actually, yes, yes you can! That's the proven ability of machine learning. While it doesn't seem like all that much, smoothing out the 1% and 0.1% lows would be a substantial improvement in eliminating micro-stutter.

But in this case the frames don't come from the game engine but from the GPU or driver or whatever. DLSS 1 and 2 were huge improvements because they were just fancy upscaling: your "underpowered" GPU could render frames at a much lower resolution, which meant higher FPS. If I understood correctly, DLSS 3 takes it one step further and basically creates a new frame out of thin air based on predicted changes in pixels. So in this case you see a higher value on the FPS counter, but it doesn't exactly feel like it.

The real question is how high a base FPS you need so that the game doesn't feel "off". Is 40 enough? 60? 100? 20 for sure isn't, and getting to 30 or 40 FPS just by generating "fake" frames in between the real ones is not going to improve your input lag much. It remains to be seen for now.
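A toy model of that argument, assuming one generated frame per rendered frame. The real pacing and latency behaviour of DLSS 3 isn't public in enough detail for me to vouch for it, so treat this purely as an illustration of the gap between the FPS counter and how the game feels:

# Toy model: frame generation raises the displayed FPS without changing how often
# the game simulation actually samples input. One generated frame per real frame assumed.

def with_frame_generation(render_fps: float, generated_per_real: int = 1):
    displayed_fps = render_fps * (1 + generated_per_real)
    input_interval_ms = 1000.0 / render_fps   # input still tied to real rendered frames
    return displayed_fps, input_interval_ms

for base_fps in (20, 40, 60):
    shown, input_ms = with_frame_generation(base_fps)
    print(f"rendered {base_fps:>3} fps -> displayed {shown:>3.0f} fps, "
          f"input sampled every {input_ms:.1f} ms")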


19 minutes ago, Spotty said:

Australian prices are out. 

RTX 4080 12GB: $1659 AUD ($1100 USD)

RTX 4080 16GB: $2219 AUD ($1500 USD)

RTX 4090: $2959 AUD ($2000 USD)

 

I think I'll be keeping my 1080ti a little longer. 

Is ordering from the US not a thing?  At that price difference how bad could shipping be on a used 3000 series?

Workstation:  14700nonK || Asus Z790 ProArt Creator || MSI Gaming Trio 4090 Shunt || Crucial Pro Overclocking 32GB @ 5600 || Corsair AX1600i@240V || whole-house loop.

LANRig/GuestGamingBox: 13700K @ Stock || MSI Z690 DDR4 || ASUS TUF 3090 650W shunt || Corsair SF600 || CPU+GPU watercooled 280 rad pull only || whole-house loop.

Server Router (Untangle): 13600k @ Stock || ASRock Z690 ITX || All 10Gbe || 2x8GB 3200 || PicoPSU 150W 24pin + AX1200i on CPU|| whole-house loop

Server Compute/Storage: 10850K @ 5.1Ghz || Gigabyte Z490 Ultra || EVGA FTW3 3090 1000W || LSI 9280i-24 port || 4TB Samsung 860 Evo, 5x10TB Seagate Enterprise Raid 6, 4x8TB Seagate Archive Backup ||  whole-house loop.

Laptop: HP Elitebook 840 G8 (Intel 1185G7) + 3060 RTX Thunderbolt Dock, Razer Blade Stealth 13" 2017 (Intel 8550U)


1 minute ago, starsmine said:

I think you misunderstand why the smaller core-count chips get more frames per core.
It's simply game engines and drivers failing to use all the cores at once. It's a highly parallel process, sure, but it's not using n threads.

It's like a 1000-lane highway where at, say, 4 am only 100 lanes get used and at rush hour all of them are used,
versus a 500-lane highway where at 4 am only 100 get used and at rush hour all of them are used.

In that 4 am case, you just used 10 lanes per car on one GPU and 5 lanes per car on the other.

The die being used had minimal effect on CUDA performance. A GA102 cut down to GA104 specs will perform similarly. You saw a good example of this with the 2060 KO, where a TU104 was cut down to the equivalent core count of the TU106 and they performed equivalently in gaming benchmarks, the KO only pulling ahead on certain compute tasks that looked for resources available only on the TU104.

It's genuinely difficult to keep 10k cores fed with data, and in many situations that simply doesn't happen; there are a bunch of bubbles.

But yes, the performance difference between the 4080 12G and 4080 16G is being exaggerated here... and it also feels like a bad point when they are in completely different price brackets.

Most generations have had a card, or several cards, in the lineup with the same name but different RAM amounts that perform differently; it's not misleading here like it has been in the past (460 SE, 460, 460 OEM as an example; also some 460s had twice the RAM).

It's simply 4080 12G and 4080 16G.

 

My point was that while, when they're using the same die, you can pretty much calculate the FPS you'd get by increasing or decreasing the CUDA cores, if they're different dies you can't.

On average, GA102 dies need about 75 CUDA cores for 1 FPS, so if I try to calculate what a 3070 would get from that, I come out with roughly 76 FPS; but in reality the 3070 gets around 106 FPS, because GA104 dies only need about 55 CUDA cores for the same 1 FPS.

The reason for that difference is a moot point IMO, since I was basing it on real-world data.
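Reproducing that worked example (the 3070's core count below is from memory rather than from this thread, so double-check it; the cores-per-frame figures are the estimates from the earlier post):

# Reproducing the 3070 example with the cores-per-frame estimates from earlier posts.

rtx_3070_cores = 5888          # from memory, not quoted in this thread

ga102_cores_per_frame = 75     # big-die average estimated earlier
ga104_cores_per_frame = 55     # small-die average estimated earlier

print(f"Predicted from GA102 scaling: {rtx_3070_cores / ga102_cores_per_frame:.0f} fps")
print(f"Predicted from GA104 scaling: {rtx_3070_cores / ga104_cores_per_frame:.0f} fps")
# ~79 fps vs ~107 fps: only the second is close to the ~106 fps the 3070 actually
# averages, which is why extrapolating across different dies goes wrong.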

Laptop:

Spoiler

HP OMEN 15 - Intel Core i7 9750H, 16GB DDR4, 512GB NVMe SSD, Nvidia RTX 2060, 15.6" 1080p 144Hz IPS display

PC:

Spoiler

Vacancy - Looking for applicants, please send CV

Mac:

Spoiler

2009 Mac Pro 8 Core - 2 x Xeon E5520, 16GB DDR3 1333 ECC, 120GB SATA SSD, AMD Radeon 7850. Soon to be upgraded to 2 x 6 Core Xeons

Phones:

Spoiler

LG G6 - Platinum (The best colour of any phone, period)

LG G7 - Moroccan Blue

 

