
NVIDIA just made EVERYTHING ELSE obsolete.

6 minutes ago, HenrySalayne said:

The raw FP32 performance is an objective and measurable value across many generations of graphics cards. It's not real world performance, but it's a good estimate.

I just used that example for the CPU market, and you saw that, following that logic, an i9-10900K could have a 12-million-dollar price tag and still be considered a good price using your way of thinking.

 

 

Everything else is just an attempt to derail the focus from the main point, which is that the tech industry's inherent characteristic is to exponentially increase performance each generation, and that trying to normalize prices by focusing solely on the performance factor doesn't make any sense. What exactly don't you understand?

 

Or do you agree that the 10900K should have 24k times the price tag of the Pentium 1 because it has 24k times the FP performance?


5 minutes ago, papajo said:

I just used that example for the CPU market, and you saw that, following that logic, an i9-10900K could have a 12-million-dollar price tag and still be considered a good price using your way of thinking.

I don't know what's going on in your head, but you were certainly not following my way of thinking.

 

7 minutes ago, papajo said:

Everything else is just an attempt to derail the focus from the main point, which is that the tech industry's inherent characteristic is to exponentially increase performance each generation, and that trying to normalize prices by focusing solely on the performance factor doesn't make any sense. What exactly don't you understand?

 

Or do you agree that the 10900K should have 24k times the price tag of the Pentium 1 because it has 24k times the FP performance?

Once again I can only point at my previous post and ask you kindly to read it again. Your last few posts were just ranting and raving with absolutely no connection to what I said.

 


1 minute ago, HenrySalayne said:

I don't know what's going on in your head, but you were certainly not following my way of thinking.

 

Once again I can only point at my previous post and ask you kindly to read it again. Your last few posts were just ranting and raving with absolutely no connection to what I said.

 

You make a bang-for-buck comparison between generations of graphics cards, and by "bang" you mean FP performance, right? Right.

 

I just gave you an identical bang-for-buck comparison using FP performance, but with CPUs. Right? Right.

 

So where is your objection? 


18 minutes ago, papajo said:

I just gave you an identical bang-for-buck comparison using FP performance, but with CPUs. Right? Right.

 

So where is your objection? 

I actually think our definitions of "identical" are a little bit different.

 

36 minutes ago, papajo said:

I just used that example for the CPU market, and you saw that, following that logic, an i9-10900K could have a 12-million-dollar price tag and still be considered a good price using your way of thinking.

 

Now you only have to find where I said we shouldn't expect to get more performance per Dollar. Go ahead, I'm waiting.


13 minutes ago, HenrySalayne said:

I actually think our definitions of "identical" are a little bit different.

 

That is probably the case. My definition is the mathematical one, meaning you have two sets of values (input and output) and the input is manipulated in the same way (using the same function) to produce an output. I meant that I used an identical function, not that my results or inputs are identical, of course.

 

13 minutes ago, HenrySalayne said:

Now you only have to find where I said we shouldn't expect to get more performance per Dollar. Go ahead, I'm waiting.

Well, your entire premise in making this (in my view, and I already argued why) erroneous comparison was to show that price-to-performance differences are linear, at least in a loose sense, i.e. that prices do not increase significantly more than FP performance does.

 

This premise implies that, since the above is true, Nvidia has not increased its prices and/or that the prices are fair.

 

 

And what I did was show you that taking an arbitrary, ever-increasing performance indicator and dividing it by the price may produce results that suit your argument within the tiny sample size you used. In reality, though, there is no sense in comparing tech products this way, because a main reason computer technology drives our civilization is simply that its performance increases exponentially with time while the cost drops or stays the same, which doesn't happen in most other industries.

 

And I also used the same function as you did to show that assuming prices should scale with a single performance indicator does not lead to rational outcomes if you look at the big picture.

 

Or, in simpler words: if it were "fair" to compare generations of tech products in terms of a single performance metric per dollar, then many of our tech products could have ridiculously high prices and that would still be fair under said premise.


6 minutes ago, papajo said:

show that price-to-performance differences are linear

🤔

6 minutes ago, papajo said:

is simply that its performance increases exponentially with time

🤷‍♀️

7 minutes ago, papajo said:

And I also used the same function as you did

😅


I asked you several times to read my post carefully. But no, you just didn't do it. So once again: the ordinate is scaled logarithmically. What you are calling "linear" is actually exponential growth! Just look at the values and read the attached text. You could have actually read my post hours ago, but no. Complaining about it was much easier!

 


3 hours ago, HenrySalayne said:

@AdmiralKird Way ahead of you!

 

[Graph: inflation-adjusted FP32 GFLOPS per Dollar versus release date, with an orange 33%-per-year trend line on a logarithmic y-axis]

 

This graph shows basically "bang for the buck", or in other words, raw FP32 GFLOPS per Dollar (inflation adjusted). It doesn't consider actual gaming performance and it doesn't take VRAM, memory bandwidth or power consumption into account. The orange line shows the expected growth of performance per Dollar over time. I chose 33% annually because it's a general rule of thumb for performance increase (around 30%/y) and it fits the data points well. I chose the GTX 480 as the baseline; it was launched about 10 years ago. The abscissa (x-axis) is scaled linearly and shows the release date, the ordinate (y-axis) is scaled logarithmically (exponential growth).

The prices are inflation-adjusted in the same way @GabenJr did it.
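
For anyone who wants to reproduce the orange line, here is a minimal sketch of the idea; the GTX 480 baseline (~1,345 GFLOPS at a $499 MSRP, per TechPowerUp) is my own approximation and the inflation adjustment is skipped:

```python
# Rough sketch of the expected-growth line: GTX 480 as the 2010 baseline,
# compounding 33% per year. Baseline numbers are approximate (TechPowerUp),
# and no inflation adjustment is applied here.
baseline = 1345 / 499          # ~2.7 GFLOPS per dollar for the GTX 480 (2010)
growth = 1.33                  # 33% year-over-year

for years in range(11):        # 2010 through 2020
    expected = baseline * growth ** years
    print(f"2010 + {years:2d} years: {expected:5.1f} GFLOPS/$")
```

That puts the expected value for 2020 in the mid-40s GFLOPS/$, which is roughly where the 3080 (29,770 GFLOPS / $700 ≈ 42.5) lands.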

 

My thoughts:

The 20 series cards were pretty much a disappointment, and the data clearly shows why. The 6 series cards were actually a great value, which also matches my memory.

 

 

Data sources:

inflation calculator: https://fxtop.com/en/inflation-calculator.php

FP32 performance, launch date, launch MSRP: https://www.techpowerup.com/gpu-specs/geforce-rtx-3080.c3621

Nice! Using GFLOPS/$ is a better way of doing it than what I suggested. Although there is a problem with the 33% increase (orange line) in the graph. If the y-axis on the graph is scaled logarithmically then a linear 33% increase due to time (x-axis) shouldn't end up being a straight line; it would bend logarithmically and flatten as you moved forward in time.

I redid the numbers using TechPowerUp and usinflationcalculator (using fxtop would have required four times as many dropdowns for an extra 2% of precision, which is kind of unnecessary for the time I want to put into this), and un-scaled the y-axis:

[Scatter plot: FP32 GFLOPS per Dollar for 80-series flagship cards by release year, linear y-axis]
 

This is why this launch is so crazy, because there is a lot, A LOT (one could say even a metric butt ton) of performance per dollar that is so far ahead of what we've seen in a long time. You can't... you can't just scale that increase away.

But just looking at it doesn't tell the whole story. Let's go back in time a week. Let's remove the current RTX 3080 from this data set. Let's see what we should have expected for 2020 by using the 2006-2018 flagship 80 series data, graphing the data in the same scatter plot, and analyzing the GFLOPS/$ amount using a trendline. This will tell us what GFLOPS/$ amount we should have expected:

[Scatter plot: 2006-2018 80-series GFLOPS per Dollar with a trendline extrapolated to 2020]

So for September 2020, using historical data, the 3080 was expected to have a GFLOPS/$ ratio of 16.7. We would therefore reasonably have expected a new 3080 to look one of three ways:

1. A return to the $700 price point

If a card had been released under this scenario, then using that 16.7 we would have expected a $700 card to yield 11,690 GFLOPS (700 * 16.7): basically a limp 2080 Ti with a $500 price cut. Of course, most of us didn't expect the 3080 to just be a weak 2080 Ti; we expected...

2. A modest increase in performance over the 2080 Ti.

A 2080 Ti has an FP32 performance of 13,450 GFLOPS (according to TechPowerUp). Let's say the 3080 would bring, I don't know, 25% more performance over the 2080 Ti. This would give us a figure of 16,813 GFLOPS. And we would, using the 16.7 figure, have expected it to retail at launch (16,813 / 16.7) for $1,006. So we would have gotten a bit more bang, but it would have also cost us more. But there's a third scenario we could have expected...

3. A Whole Lotta performance.

In this case, let's say the 3080 were to launch with exactly its current specs. We get a huge increase: 29,770 GFLOPS. Using our 16.7 figure (29,770 / 16.7), we would have expected this card to retail for... $1,783. Eeeek. Someone throw Riley a life preserver.
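
(All three scenarios are just that single 16.7 ratio applied in both directions; a quick sketch with the same numbers, for anyone checking the math:)

```python
# The three scenarios, using the 16.7 GFLOPS/$ trendline prediction and the
# TechPowerUp FP32 figures quoted above (all values approximate).
ratio = 16.7                      # predicted GFLOPS per dollar for Sept. 2020

print(700 * ratio)                # scenario 1: ~11,690 GFLOPS at $700
print(13450 * 1.25)               # scenario 2: 2080 Ti + 25% -> ~16,813 GFLOPS
print(13450 * 1.25 / ratio)       # ...which "should" retail for ~$1,006
print(29770 / ratio)              # scenario 3: 3080 as announced -> ~$1,783
```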

But that's not what the 3080 is going to be: it is that performance, but at only $700, which is what makes this all so exciting. It bucks historical trends, both in the increase in power and in the price point (at least all the way back to the 8800 GTX). I'm done. I'm not owned by Nvidia; I have no duck in this race. I'm just pretty thrilled.

And I've spent way too much time on this.

 


4 minutes ago, AdmiralKird said:

Although there is a problem with the 33% increase (orange line) in the graph. If the y-axis on the graph is scaled logarithmically then a linear 33% increase due to time (x-axis) shouldn't end up being a straight line; it would bend logarithmically and flatten as you moved forward in time.

An annual improvement of 33% compared to the previous year (not the first year) is exponential. So it's a factor of 1.33 for year n+1, 1.33^2 = 1.77 for n+2, 1.33^3 = 2.35 for n+3, and so on. If the performance increase weren't exponential, we would see diminishing returns after some time.
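
A tiny sketch of why that compounding shows up as a straight line on the log-scaled ordinate:

```python
import math

# 33% year over year compounds multiplicatively: the factor after n years is
# 1.33**n. On a logarithmic axis that is a straight line, because
# log10(1.33**n) = n * log10(1.33) grows linearly with n.
for n in range(1, 6):
    factor = 1.33 ** n
    print(n, round(factor, 2), round(math.log10(factor), 3))
# factors 1.33, 1.77, 2.35, 3.13, 4.16 - the log values step by a constant ~0.124
```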

 

To put it into perspective: you will get the "huge leap forward" Nvidia was talking about only in comparison to the 20 series. The "huge leap" just brings performance back on track with the expected value.

 


39 minutes ago, HenrySalayne said:

An annual improvement of 33% compared to the previous year (not the first year) is exponential. So it's a factor of 1.33 for year n+1, 1.33^2 = 1.77 for n+2, 1.33^3 = 2.35 for n+3, and so on. If the performance increase weren't exponential, we would see diminishing returns after some time.

 

To put it into perspective: you will get the "huge leap forward" Nvidia was talking about only in comparison to the 20 series. The "huge leap" just brings performance back on track with the expected value.

 

You're right, I was incorrect on that front. Quite frankly, most of what I did might be incorrect, as it also treats GFLOPS/$ as a linear function, whereas if it factored in exponential growth in GFLOPS/$ it would be more in line with a 40+ GFLOPS/$ value. I'm not sure, though, that we would expect GFLOPS/$ growth to be heavily exponential?


4 minutes ago, AdmiralKird said:

I'm not sure, though, that we would expect GFLOPS/$ growth to be heavily exponential?

It will be some kind of exponential. To be fair, Dollar inflation is just one criterion. GPUs are a global product, so they might not be influenced only by the Dollar. And as we get closer to fundamental technological limits, the annual increase might get lower. In the long term, performance per Dollar should increase somewhat comparably to Moore's law.

 

15 minutes ago, AdmiralKird said:

Quite frankly, most of what I did might be incorrect, as it also treats GFLOPS/$ as a linear function, whereas if it factored in exponential growth in GFLOPS/$ it would be more in line with a 40+ GFLOPS/$ value

You can easily do it in Excel and switch the trendline to exponential even on a linear scale. It should match my findings then. 😉
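
Outside of Excel, that exponential trendline is simply a straight-line fit on log(GFLOPS/$). A sketch of the mechanics with made-up placeholder points (not the real 80-series values), assuming numpy is available:

```python
import numpy as np

# Exponential trendline = linear fit in log space. The data below is made up
# purely to show the mechanics; substitute the real 80-series GFLOPS/$ values.
years = np.array([2010, 2012, 2014, 2016, 2018])
gflops_per_dollar = np.array([2.7, 5.0, 9.0, 14.0, 11.0])   # placeholders

slope, intercept = np.polyfit(years, np.log(gflops_per_dollar), 1)
print(f"implied annual growth: {np.exp(slope) - 1:.0%}")
print(f"extrapolated 2020 value: {np.exp(intercept + slope * 2020):.1f} GFLOPS/$")
```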


1 hour ago, HenrySalayne said:

You can easily do it in Excel and switch the trendline to exponential even on a linear scale. It should match my findings then. 😉

You can. The problem, though, is that if you remove the 2080, which has a heavily depressed GFLOPS/$ caused by demand-side economics during a fluke mining fad that is irrelevant to all prior and future years, you'd end up with an expected GFLOPS/$ of 80 in 2020 using an exponential fit. If the mining craze had never occurred, would a ratio of 80 in 2020 be realistic? If 2016 was still at 14 GFLOPS/$, that's a per-annum increase of over 50%. The one thing GFLOPS/$ might not take into account is the exponential demand-side increase from the expansion of the overall gaming market, especially within the last four years, for which there isn't great data.
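
For reference, the per-annum figure implied by going from ~14 GFLOPS/$ in 2016 to an extrapolated ~80 in 2020 is a compound rate, and it works out to a bit over 50%:

```python
# Compound annual growth implied by ~14 GFLOPS/$ (2016) -> ~80 GFLOPS/$ (2020).
implied = (80 / 14) ** (1 / 4) - 1
print(f"{implied:.1%} per year")   # ~54.6%, i.e. a bit over 50% per annum
```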


6 hours ago, HenrySalayne said:

If you can't supply any objective evidence, it is just your opinion

Well, as long as someone can speak or write, they can, but... From my point of view it's the wrong way to argue if you want the other side to listen. Just look at this thread...

 

I agree with more or less everything you say. The problem is that some people in this thread do not understand how markets and market value work. They only look from their own perspective and their perceived value. Every generation of GFX adds functionality on top of the previous generation, so they are not directly comparable.


1 hour ago, Kroon said:

Every generation of GFX adds functionality on top of the previous generation, so they are not directly comparable.

This is actually a really good point. Something like NVENC, for example, adds additional cost because of the licensing fees for the codecs.


On 9/2/2020 at 4:33 AM, savstars said:

You're too slow. I have already watched JayzTwoCents, Gamers Nexus and Bitwit's videos. What happened to the dedication, Linus?

 

Please share the link or more info; I want to check the quality of the information as well, not just the speed.


21 hours ago, LyondellBasell said:

Why tho. There's no reason for this, other than "It would be really nice."

Having the "top line GPU" isn't nearly as important as something like "Internet access" or "mobile phone access".

You could make the argument that the latter two are so important and intrinsic to daily life in a 1st-world country that we *should* insist that people be able to afford them.

It's not that simplistic. (Besides, I am not talking just about increasing the number of people who can afford top-tier GPUs, but in general about having people use better GPUs, e.g. those who can only afford a ridiculously overpriced 1030 getting a 1050-class card instead, etc.)

 

This has a negative effect on our society as a whole.

 

Games will have less complexity because developers optimize for the masses, not for the exceptions: they check what their average customer is likely to run the game on, and if that's low-performance hardware they won't make the game too complex or beautiful, so that the mediocre hardware can still run it.

 

Also, every other sort of activity that depends on GPU performance gets slowed down, because it is not economically viable to purchase a better GPU given the ridiculous price tag (e.g. Folding@home).

 

21 hours ago, LyondellBasell said:

I don't think anyone is recommending you do that, lol.

If Nvidia asks you to pay $1,000 for an RTX 2080, then unless you buy exotic, weird stuff, the rest of your PC budget would be about $1,000 as well...

 

Even if you buy a ridiculous motherboard that costs $500, your budget will still be around $1,500 max...


19 hours ago, AdmiralKird said:

Nice! Using GFLOPS/$ is a better way of doing it than what I suggested. [...] So for September 2020, using historical data, the 3080 was expected to have a GFLOPS/$ ratio of 16.7. [...] But that's not what the 3080 is going to be: it is that performance, but at only $700, which is what makes this all so exciting. [...] And I've spent way too much time on this.

 

I am really disappointed that you spent so much time to excuse ridiculously high prices with this misleading notion that GFLOPS/$ should make any sense...

 

BTW, one reason GFLOPS did not rise significantly, especially in the past few years, is that Nvidia was sandbagging; now, with RDNA2 coming out, they pushed a little, just in case...


19 hours ago, HenrySalayne said:

To put it into perspective: you will get the "huge leap forward" Nvidia was talking about only in comparison to the 20 series. The "huge leap" just brings performance back on track with the expected value.

And I told you again and again that this huge leap is to be expected (actually an even "huger" leap, but Nvidia is sandbagging), and it doesn't make sense to try to normalize prices against the percentage of said leap.

 

 

If you think that is the case, then a CPU nowadays could cost $12,000,000 (twelve million dollars) and still be considered a bargain under said premise, as I showed you with an exact comparison using that logic.


My 2 cents: 5 years ago, a 980 Ti cost 650 dollars (not adjusted for inflation). Today, you can get the same performance for under 250 dollars with the 1660 Super, with the added bonus of today's games looking better than a game from 2015.

 

And no, I'm not an Nvidia fanboy. I have an RX 580 in my system and have no plans to upgrade.

Main Rig :

Ryzen 7 2700X | Powercolor Red Devil RX 580 8 GB | Gigabyte AB350M Gaming 3 | 16 GB TeamGroup Elite 2400MHz | Samsung 750 EVO 240 GB | HGST 7200 RPM 1 TB | Seasonic M12II EVO | CoolerMaster Q300L | Dell U2518D | Dell P2217H | 

 

Laptop :

Thinkpad X230 | i5 3320M | 8 GB DDR3 | V-Gen 128 GB SSD |


5 minutes ago, Fatih19 said:

My 2 cents: 5 years ago, a 980 Ti cost 650 dollars (not adjusted for inflation). Today, you can get the same performance for under 250 dollars with the 1660 Super, with the added bonus of today's games looking better than a game from 2015.

 

And no, I'm not an Nvidia fanboy. I have an RX 580 in my system and have no plans to upgrade.

Your logic is not valid, but your circumstance tells the true story: you have an RX 580 because you can't have a better one, simply because you have to pay ridiculous markups for a small performance increase with each tier.

 

My point is that people shouldn't have to compromise that way... an RX 580 should be an even cheaper card sitting in a young kid's PC, someone who isn't even a "gamer enthusiast" but just a casual gamer...


6 minutes ago, papajo said:

Your logic is not valid, but your circumstance tells the true story: you have an RX 580 because you can't have a better one, simply because you have to pay ridiculous markups for a small performance increase with each tier.

 

My point is that people shouldn't have to compromise that way... an RX 580 should be an even cheaper card sitting in a young kid's PC, someone who isn't even a "gamer enthusiast" but just a casual gamer...

Sounds like entitlement, IMHO, and for something not essential. Not primary, not secondary, but tertiary needs.



2 minutes ago, Fatih19 said:

Sounds like entitlement, IMHO, and for something not essential. Not primary, not secondary, but tertiary needs.

Sounds like how it was all that time before ~2012.

 

And sounds like progress. 


1 hour ago, papajo said:

And I told you again and again that this huge leap is to be expected (actually an even "huger" leap, but Nvidia is sandbagging), and it doesn't make sense to try to normalize prices against the percentage of said leap.

 

 

If you think that is the case, then a CPU nowadays could cost $12,000,000 (twelve million dollars) and still be considered a bargain under said premise, as I showed you with an exact comparison using that logic.

It is pretty annoying you are constantly trying to put words in my mouth. You are not following my logic or narrative. Nothing of what you said has any connection to my post. Your entire argument is centered around something you made up. If you didn't understand my previous posts, feel free to ask questions if some things are still unclear.

 

And yet, here we are. You are trying to take Intel as an example of "fair" pricing. Guess what? If we look at the average annual increase in performance per Dollar of Intel Core i7 CPUs over the last decade, it is only around 15% (inflation adjusted). An annual increase of 15%, compared to the roughly 30% Nvidia managed. If you really followed my logic, you would be creating a new thread this very moment complaining about Intel being an utter rip-off and charging way too much for their processors. How can it be that Intel only improved performance per Dollar by 15% each year while Nvidia managed 30%?

And just to make sure you get my point this time: if Intel had improved performance per Dollar over the past decade like Nvidia did, the Core i7-10700K would cost around $110 today.
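
Rough arithmetic behind that $110 figure; the ~$375 launch price assumed for the i7-10700K here is my own approximation, while the 15% and 30% rates are the ones above:

```python
# If performance per dollar had compounded at Nvidia's ~30%/year instead of
# Intel's ~15%/year over ten years, the same chip's price for the same
# performance would shrink by the ratio of the two gains.
intel_gain = 1.15 ** 10           # ~4.0x performance per dollar
nvidia_gain = 1.30 ** 10          # ~13.8x performance per dollar
msrp = 375                        # assumed i7-10700K launch price in dollars

print(round(msrp * intel_gain / nvidia_gain))   # ~110 dollars
```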

 


On 9/2/2020 at 7:48 AM, samcool55 said:

Prices sound good, specs sound good, but boy does it leave me confused AF.

Like, yea it seems like we finally have a reason again to upgrade (tbh I still don't with my ancient rx 480 because there's no new budget stuff yet).

Anyway, why do they suddenly want to compete with console gamers? They are a different market and even the 3070 is waaay too expensive compared to a console.

You can either get a complete new next-gen console, or a GPU with nothing else. Doesn't really make sense?

Given the substantial jump in performance, it really seems to me that Nvidia had been holding back. 
 

This move seems to be a shot at AMD as much as it is at the new consoles. Though console gamers aren't necessarily the target Nvidia wants to win over, displaying such a chasm in performance, at cost-effective pricing, and so very soon could sow doubt among Sony and Microsoft executives/engineers about AMD's competitiveness going forward.

 

The acquisition of ARM would further put Nvidia in an extremely strong position for future console generations as well. Higher-end ARM cores already approach x86 performance, and the ISA is already well known among many developers. This would likely render moot AMD's advantage of a single IC design.

My eyes see the past…

My camera lens sees the present…

