Okay, so...nit-picking a slapped-together promo video or is the 3080 really not faster?

GuruMeditationError

I know a lot of this marketing stuff is fabricated and I expect that's the case with the Nvidia 3000 series launch video but...

 

...the guy at Frame Chasers (a small YouTube channel) is claiming the Nvidia 3080 specs are a lie and that it's not faster than his 2080 Ti
 


What do people think?

Is this guy just picking holes in a hastily slapped-together promo video, or is Nvidia actually lying about the CUDA core count and the speed of the 3080?
 

Surely it has to be just a badly slapped-together promo and not blatant deception on Nvidia's part? (Although remember the GTX 970...? And they did recently destroy their 3D Vision community with a wholesale dump of the 3D Vision forums into off-topic. Yay, Nvidia.)

Really hoping we're not going to be inundated with another wave of Italian guy cry-laughing videos.

Wondering what people think about this. 

The link posted above starts at his benchmark comparison, but it's worth listening to the rest of the video for more context on what he thinks about the CUDA core count; to paraphrase: that they've just arbitrarily doubled the numbers from the leaked specs.

...maybe they leaked incorrect figures and this is part of their strategy?

"I try to put good out into the world...that way I can believe it's out there." --CKN                  “How people treat you is their karma; how you react is yours.” --Wayne Dyer            

[Needs Updating] My PC: i5-10600K @TBD / 32GB DDR4 @4000MHz / Z490 AORUS Elite AC / Titan RTX / Samsung 1TB 960 Evo / EVGA SuperNova 850 T2


I'm not giving them the view, what's the tldw?

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


First of all, the 20 and 30 series are completely different architectures, so comparing CUDA core numbers tells you nothing about the real-world performance difference. We'll see about real-world performance without RT. Just wait until the reviews are out before you buy anything; otherwise you have only yourself to blame for blindly taking NVIDIA at their word and buying their stuff expecting miracles.

If someone did not use reason to reach their conclusion in the first place, you cannot use reason to convince them otherwise.


1 minute ago, porina said:

I'm not giving them the view, what's the tldw?

He compared the 3DMark scores and said the 3080 only performs about on par with the 2080 Ti. (I didn't watch the whole video myself, though.)

If someone did not use reason to reach their conclusion in the first place, you cannot use reason to convince them otherwise.


5 minutes ago, porina said:

I'm not giving them the view, what's the tldw?

He's claiming that the CUDA core count has been arbitrarily doubled from the leaked specs and that it doesn't match the performance metrics available.

...and that his 2080 Ti is 10% faster than the 3080 that Digital Foundry was benchmarking.

"I try to put good out into the world...that way I can believe it's out there." --CKN                  “How people treat you is their karma; how you react is yours.” --Wayne Dyer            

[Needs Updating] My PC: i5-10600K @TBD / 32GB DDR4 @4000MHz / Z490 AORUS Elite AC / Titan RTX / Samsung 1TB 960 Evo / EVGA SuperNova 850 T2


Just another side note: the event has already left its mark on the used market. 2080 Tis are selling for 500-600€ now, so it's basically a win for consumers already, even before release. Just two days ago, used 2080 Tis were still selling for over 1000€.

If someone did not use reason to reach their conclusion in the first place, you cannot use reason to convince them otherwise.


1 minute ago, GuruMeditationError said:

He's claiming that the CUDA core count has been arbitrarily doubled from the leaked specs and that it doesn't match the performance metrics available.

Well, it's not arbitrary; they doubled the shader count per SM in the design. The numbers aren't comparable because of this, and it's not something where you can just cut them in half to make them equal, because that's not exactly how it works.


It looks like there could be a bait-and-switch going on here... but is it the consumers Nvidia might be trying to fake out, or the competition?

"I try to put good out into the world...that way I can believe it's out there." --CKN                  “How people treat you is their karma; how you react is yours.” --Wayne Dyer            

[Needs Updating] My PC: i5-10600K @TBD / 32GB DDR4 @4000MHz / Z490 AORUS Elite AC / Titan RTX / Samsung 1TB 960 Evo / EVGA SuperNova 850 T2


2 minutes ago, Shimejii said:

Well, it's not arbitrary; they doubled the shader count per SM in the design. The numbers aren't comparable because of this, and it's not something where you can just cut them in half to make them equal, because that's not exactly how it works.

To be honest, I don't have enough background in the tech to understand what you mean... I guess I just have to assume it has internal logic and would make sense to someone who knows the field better than I do.

"I try to put good out into the world...that way I can believe it's out there." --CKN                  “How people treat you is their karma; how you react is yours.” --Wayne Dyer            

[Needs Updating] My PC: i5-10600K @TBD / 32GB DDR4 @4000MHz / Z490 AORUS Elite AC / Titan RTX / Samsung 1TB 960 Evo / EVGA SuperNova 850 T2


3 minutes ago, GuruMeditationError said:

He's claiming that the CUDA core count has been arbitrarily doubled from the leaked specs and that it doesn't match the performance metrics available.

In the event, Jensen said they doubled the shaders, which presumably are closely linked to the CUDA core count. It does not follow that everything else in the GPU was also doubled. We're going to have to wait for more detailed testing to see how this change impacts overall performance. Didn't AMD do something similar with Navi compared to Vega? They rebalanced the internal units more towards gaming workloads.

 

At this point I would disregard attempts at indirect performance comparisons; there are so many unknown configuration variables that it's pointless. If known 3DMark scores exist, those could be more interesting.

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


8 minutes ago, GuruMeditationError said:

To be honest, I don't have enough background in the tech to understand what you mean... I guess I just have to assume it has internal logic and would make sense to someone who knows the field better than I do.

Say I have 68 boxes, and each box has 64 shader units that deal with graphics. Each shader unit gets counted as a "CUDA core" for marketing purposes. For instance, the 2080 Ti has 68 streaming multiprocessors (SMs; the "boxes" above), so the conversion into "CUDA cores" works out to 68 x 64 = 4352. Now say I cut those shader units' size in half and put ANOTHER shader unit next to each one: we still have 68 boxes, but now with 128 shader units each. The design and spacing will change a bit, but the concept works. Instead of 4352, it comes out to 68 x 128 = 8704 CUDA cores.
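
If it helps, here's that arithmetic as a minimal sketch in Python, using only the numbers from the example above. This is the marketing head-count, not a performance model:

# Back-of-envelope "CUDA core" arithmetic from the example above.
# The marketing number is just SMs multiplied by shader units per SM.

def cuda_core_count(sm_count: int, shaders_per_sm: int) -> int:
    """Marketing 'CUDA core' count = streaming multiprocessors x shaders per SM."""
    return sm_count * shaders_per_sm

print(cuda_core_count(68, 64))   # 2080 Ti: 68 SMs x 64 shaders -> 4352
print(cuda_core_count(68, 128))  # shaders per SM doubled       -> 8704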


12 minutes ago, porina said:

At this point I would disregard attempts at indirect performance comparisons; there are so many unknown configuration variables that it's pointless. If known 3DMark scores exist, those could be more interesting.

The Frame Chasers guy refers to a Time Spy Extreme benchmark score; not sure if there's an official 3DMark score, but...

...I ran a search and came up with this:

https://www.pcgamesn.com/nvidia/rtx-3080-ti-benchmark-comparison

Here's what he says about the Time Spy Extreme benchmark... (oops, sorry, wrong link; here it is)

@Shimejii  If I'm understanding you correctly, I think you're arguing in favour of his point, inasmuch as they shouldn't just be exactly double the leaked specs? What he's comparing is the leaked spec for the 3080 and the officially released spec for the 3080, not the 3080 against the 2080 Ti. That is, in the launch video the 3080 CUDA core count is exactly double the leaked spec, but the performance doesn't seem to bear out the figures.

"I try to put good out into the world...that way I can believe it's out there." --CKN                  “How people treat you is their karma; how you react is yours.” --Wayne Dyer            

[Needs Updating] My PC: i5-10600K @TBD / 32GB DDR4 @4000MHz / Z490 AORUS Elite AC / Titan RTX / Samsung 1TB 960 Evo / EVGA SuperNova 850 T2


5 minutes ago, GuruMeditationError said:

https://www.pcgamesn.com/nvidia/rtx-3080-ti-benchmark-comparison

That's from way back in June, so a lot of things could have changed since then.

 

5 minutes ago, GuruMeditationError said:

Here's what he says about the Time Spy Extreme benchmark... 

Don't just keep relinking the video. At least give a summary of the claims.

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


Just now, porina said:

That's from way back in June, so a lot of things could have changed since then.

 

Don't just keep relinking the video. At least give a summary of the claims.

"Why should the Time Spy Extreme benchmark be 8500 with 8000 cuda cores; It literally would have been double that [...] the 3D mark benchmarks scale linearly with cuda core count numbers, with sli; those benchmarks are [...] designed to scale as far as you can push them for the end of time..."

"I try to put good out into the world...that way I can believe it's out there." --CKN                  “How people treat you is their karma; how you react is yours.” --Wayne Dyer            

[Needs Updating] My PC: i5-10600K @TBD / 32GB DDR4 @4000MHz / Z490 AORUS Elite AC / Titan RTX / Samsung 1TB 960 Evo / EVGA SuperNova 850 T2


10 minutes ago, GuruMeditationError said:

The Frame Chasers guy refers to a Time Spy Extreme benchmark score; not sure if there's an official 3DMark score, but...

...I ran a search and came up with this:

https://www.pcgamesn.com/nvidia/rtx-3080-ti-benchmark-comparison

Here's what he says about the Time Spy Extreme benchmark... (oops, sorry, wrong link; here it is)

@Shimejii  If I'm understanding you correctly, I think you're arguing in favour of his point, inasmuch as they shouldn't just be exactly double the leaked specs? What he's comparing is the leaked spec for the 3080 and the officially released spec for the 3080, not the 3080 against the 2080 Ti. That is, in the launch video the 3080 CUDA core count is exactly double the leaked spec, but the performance doesn't seem to bear out the figures.

In reality it comes down to how the design actually works and performs. It's not as simple as saying "hey, this is just doubling, so it shouldn't count", because again, that's not exactly how it works. The way these floating-point units work isn't simply a matter of core counts.
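
To be clear, the paper math does check out if you take the spec sheet at face value. The standard theoretical FP32 throughput is cores x 2 ops per clock (FMA) x clock; the boost clocks below are the commonly cited reference figures, so treat them as assumptions:

def fp32_tflops(cuda_cores: int, boost_ghz: float) -> float:
    # Spec-sheet formula: cores x 2 FP32 ops per clock (FMA) x clock in GHz.
    return cuda_cores * 2 * boost_ghz / 1000.0

print(round(fp32_tflops(4352, 1.545), 1))  # 2080 Ti reference -> ~13.4 TFLOPS
print(round(fp32_tflops(8704, 1.71), 1))   # 3080              -> ~29.8 TFLOPS

More than double on paper, but games almost never scale with raw FP32 throughput alone.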


26 minutes ago, Mondas42 said:

I guess I will wait for more YouTube reviews before I make up my mind; either way it's gonna be a lot faster than my RX 5700 XT and cheaper than a 2080 Ti was, so I'm saving my pennies to upgrade.

I'm still intending to purchase a 3080 regardless. I'm still on a GTX 970 and have needed to upgrade for some time. 

I had an RTX 2070 Strix and had to sell it. I was beginning to think that might have been a mistake but now I'm kind of glad I sold it when I did.
 

8 minutes ago, Shimejii said:

In reality it comes down to how the design actually works and performs. Its not exactly easy to say "Hey this is just doubling so it shouldnt count" Because again thats not EXACTLY how it works. The way these Floating point Units work isnt just simple in terms of Cores.

I guess, but he's saying that the exact doubling is suspect in light of the metrics not seeming to measure up... that basically the whole thing is a bit fishy and doesn't really add up. He goes on to show his 2080 Ti benchmarked alongside the 3080 from the Digital Foundry video, and his 2080 Ti appears to be running faster by just over 10 fps. He puts it at 10% faster.

I guess you'd need to compare his test bed to the Digital Foundry test bed. 

"I try to put good out into the world...that way I can believe it's out there." --CKN                  “How people treat you is their karma; how you react is yours.” --Wayne Dyer            

[Needs Updating] My PC: i5-10600K @TBD / 32GB DDR4 @4000MHz / Z490 AORUS Elite AC / Titan RTX / Samsung 1TB 960 Evo / EVGA SuperNova 850 T2


9 minutes ago, GuruMeditationError said:

"Why should the Time Spy Extreme benchmark be 8500 with 8000 cuda cores; It literally would have been double that [...] the 3D mark benchmarks scale linearly with cuda core count numbers, with sli; those benchmarks are [...] designed to scale as far as you can push them for the end of time..."

Therein lies the "problem". You can't assume perfect scaling, especially not across different architectures like we have here. One particular resource has been increased, doubled even. It does not follow that everything will show a doubling in performance, unless it was limited by exactly that single resource. As I said earlier, not everything is doubled; limitations will still be present elsewhere. At the least, this tells me that whoever they are, they're a waste of time. Wait for something more credible from elsewhere.
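
To put rough numbers on that, here's a toy Amdahl's-law-style model (the fractions are invented for illustration, not measured from anything):

def overall_speedup(shader_bound_fraction: float, shader_speedup: float) -> float:
    # Only the shader-bound part of the frame benefits from the extra cores;
    # everything else (bandwidth, geometry, CPU, etc.) runs at the old speed.
    return 1.0 / ((1.0 - shader_bound_fraction)
                  + shader_bound_fraction / shader_speedup)

print(round(overall_speedup(0.6, 2.0), 2))  # 60% shader-bound, 2x shaders -> 1.43x
print(round(overall_speedup(1.0, 2.0), 2))  # only a 100% shader-bound workload hits 2.0x

Doubling one resource only doubles the work that was actually waiting on it.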

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


I read something about 2x FP32 units per core, which probably needs at least driver-level activation and/or some patches on the game/software side to be recognized. 
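
If the widely reported layout is right (64 dedicated FP32 lanes plus 64 lanes shared between FP32 and INT32 per SM; treat that split as an assumption), the effect on the "doubled" figure is easy to sketch:

def effective_fp32_lanes(int_fraction: float,
                         dedicated_fp32: int = 64,
                         shared_lanes: int = 64) -> float:
    # Assumed dual-datapath SM: the shared lanes issue either FP32 or INT32,
    # so INT32 work eats into the second half of the FP32 throughput.
    return dedicated_fp32 + shared_lanes * (1.0 - int_fraction)

print(effective_fp32_lanes(0.0))   # pure FP32 workload -> 128.0 (the full "2x")
print(effective_fp32_lanes(0.35))  # ~35% INT32 mix     -> 105.6, well short of 2x

That would also explain why shader-heavy synthetics could get closer to the doubled figure than real games do.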

| Intel i7-3770@4.2Ghz | Asus Z77-V | Zotac 980 Ti Amp! Omega | DDR3 1800mhz 4GB x4 | 300GB Intel DC S3500 SSD | 512GB Plextor M5 Pro | 2x 1TB WD Blue HDD |
 | Enermax NAXN82+ 650W 80Plus Bronze | Fiio E07K | Grado SR80i | Cooler Master XB HAF EVO | Logitech G27 | Logitech G600 | CM Storm Quickfire TK | DualShock 4 |


5 minutes ago, porina said:

Therein lies the "problem". You can't assume perfect scaling, especially not across different architectures like we have here. One particular resource has been increased, doubled even. It does not follow that everything will show a doubling in performance, unless it was limited by exactly that single resource. As I said earlier, not everything is doubled; limitations will still be present elsewhere. At the least, this tells me that whoever they are, they're a waste of time. Wait for something more credible from elsewhere.

Thanks. :0)

 

I'm going to mark it as solved. It seems that's probably the crux of the problem: an incorrect assumption.

Also, I find it hard to believe that Nvidia would make these claims and not be able to follow through on them; it's not like this stuff can't be checked, double-checked, ratified and validated. They'd look pretty bloody stupid if they didn't deliver.

 

I guess we just have to cut them some slack re. the wonky marketing. 

"I try to put good out into the world...that way I can believe it's out there." --CKN                  “How people treat you is their karma; how you react is yours.” --Wayne Dyer            

[Needs Updating] My PC: i5-10600K @TBD / 32GB DDR4 @4000MHz / Z490 AORUS Elite AC / Titan RTX / Samsung 1TB 960 Evo / EVGA SuperNova 850 T2


I stopped watching the fourth time he screamed that people were stupid... this is a 15-minute video that could have been 5 had he known what his point was before he hit record. It's like watching someone complain that SLI scaling isn't perfect.

