
Battlefield V with DXR on, tested by Techspot (Hardware Unboxed)

kiska3
21 minutes ago, Carclis said:

The problem is they have over the course of 15 or so years set our expectations of what each tier of product will cost. As a consumer we used to expect ~$500 for a top end GPU and that has slowly risen to the ~$700 we expect today. The Titan can be largely excluded from this because it is not marketed as a gaming card and because the price is so far out of whack that it's irrelevant.

The other constant that has become an expectation of consumers is performance. Every year we can expect better performance at each point in the product stack and hence better performance/dollar. This has always been the case and represents a problem when just the last generational leap and this one are 70% and 0% respectively (normalised for price).

If Nvidia desires to change that expectation then it needs to give us a reason for doing so, because right now RTX and DLSS are MIA or pitiful when it comes to performance. Nvidia should have released the cards at reasonable prices and instead charged the extra when the hardware was actually capable of pushing the new features at acceptable performance levels. And if you want to argue that they're priced high due to the die size instead, that is irrelevant, because consumers only pay for the performance they get. That was the case with Vega and it is the same here with Turing, and it is really only Nvidia's fault that they wasted an extra 200mm² for zero performance gain.

 

Lol, Titans excluded.

It's direct sales data to them of who is buying them.

And they sold out numerous times.

That data shows a lot to them.

Funny thing, you guys argue about it, but where is this abundance of supply if they are overpriced?

 

 

 


1 hour ago, mr moose said:

That doesn't make sense; even if it cost less to develop, it is still a development cost. I am not arguing that the addition of Tensor cores justifies the total cost, but it will add a cost to the GPU and thus be part of the overall reason for its higher price tag.

I know it's part of the price tag; everything is. The point I was getting at is that Tensor cores, like any new tech before them, are not a unique situation. It's not that Tensor cores aren't new, it's that this isn't a new situation.

What about this situation justifies the price increase? I don't need it pointed out that they are part of the price; so were Pixel Shaders, Vertex Shaders, Geometry Shaders, Unified Shaders, Tessellation and GameWorks. Those are just a few of the new technologies that have come into existence over time, across generations of GPUs, so what about Tensor cores is different that it would command a higher asking price than ever before, specifically for the 2080 Ti?

Why is just the 2080 Ti such a large increase in price compared to the price increase of the 2080 or 2070? All I'm left with, after evaluating what people have given in the past, is "because they can" or "the die size for TU102 is way bigger"; only one of those is a justification, and I've yet to see much evidence for why the increase had to be so large, not just that it increased at all.


1 hour ago, Jarsky said:

FYI retailers can sell at the $999 price  https://www.evga.com/products/product.aspx?pn=11G-P4-2281-KR

The issue is supply of any of the RTX cards right now. You almost can't order a 2080 Ti from anywhere right now unless you want to pay $200-300 more... the prices are far above what they should be right now, running upwards of $1400-1600.

Those are for the very basic models, and the FE price very much influences the overall price of all cards. AIBs know they can create cards with much better cooling and charge more than the FE price because their card is actually better than those at that set price. There isn't a whole lot AIBs can do anyway when it's Nvidia that sets the supply price for the GPU package that they have to pay for; none of them are going to sell at a loss, and neither is Nvidia.


53 minutes ago, pas008 said:

Lol, Titans excluded.

It's direct sales data to them of who is buying them.

As much as it is the case that Titans do sell at stupid prices, the numbers are very low. I can tell you right now that 2080 Ti cards are artificially lower in stock than they would seem to be. See my post below about how 2080 Ti stock is sold.

54 minutes ago, pas008 said:

And they sold out numerous times.

That data shows a lot to them.

The 2080 Ti did. But that's what happens when you only get about 10 in stock.

56 minutes ago, pas008 said:

Funny thing, you guys argue about it, but where is this abundance of supply if they are overpriced?

Right here.

 



3 hours ago, mr moose said:

Not sure what your beef is here. 

It's not you, it's just incredibly annoying to have to go around in this circular discussion of pointlessness yet again when pointing out that the 2080 Ti is a significant price increase over last generation. This is not some incredibly controversial thing to be pointing out; everyone should be able to see that, and with so many replies along the lines of "that is just the price now", everyone has clearly seen the price.

Yes, it is what it is, but it's very sad to see comments along the lines of "We've had $1200 cards before" or "This is just the price we pay for early adoption" or "Other things are expensive too" or numerous other comments of the like. It's these types of things which will lead to comments such as "We've had $1500 cards before" or "We've had $2000 cards before" etc.

It's naive to just say "rely on competition to lower the prices"; it's perfectly acceptable to pay the now higher price and at the same time say that it's too high and unjustified. Negative feedback does work; lack of feedback doesn't. Sales aren't the be-all and end-all either, which is another area I won't dive into other than to say stock availability is not actually a good indicator of actual unit sales.

@Lathlaer puts it rather well, so I don't need to say much more than that. There is currently a price/product void between the 2080 and 2080 Ti which Nvidia may at some point fill, but if they do, it will be technically worse than what used to be the price-reduced top-tier card, i.e. the 1080 Ti.


2 hours ago, leadeater said:

I know it's part of the price tag; everything is. The point I was getting at is that Tensor cores, like any new tech before them, are not a unique situation. It's not that Tensor cores aren't new, it's that this isn't a new situation.

I don't see how that changes the cost of development. It doesn't get cheaper as you go, and every time you start something new (i.e. not based on the previous one) it gets even more expensive.

 

2 hours ago, leadeater said:

What about this situation justifies the price increase? I don't need it pointed out that they are part of the price; so were Pixel Shaders, Vertex Shaders, Geometry Shaders, Unified Shaders, Tessellation and GameWorks. Those are just a few of the new technologies that have come into existence over time, across generations of GPUs, so what about Tensor cores is different that it would command a higher asking price than ever before, specifically for the 2080 Ti?

You'd have to ask Nvidia that; as I said before, you can't just develop a new tech and throw it in for free.

 

2 hours ago, leadeater said:

Why is just the 2080 Ti such a large increase in price compared to the price increase of the 2080 or 2070? All I'm left with, after evaluating what people have given in the past, is "because they can" or "the die size for TU102 is way bigger"; only one of those is a justification, and I've yet to see much evidence for why the increase had to be so large, not just that it increased at all.

Again I have already listed my reason for why I think the price is so high.  I'm not sure what was wrong with that.



1 hour ago, leadeater said:

It's not you, it's just incredibly annoying to have to go around in this circular discussion of pointlessness yet again when pointing out that the 2080 Ti is a significant price increase over last generation. This is not some incredibly controversial thing to be pointing out; everyone should be able to see that, and with so many replies along the lines of "that is just the price now", everyone has clearly seen the price.

Yes, it is what it is, but it's very sad to see comments along the lines of "We've had $1200 cards before" or "This is just the price we pay for early adoption" or "Other things are expensive too" or numerous other comments of the like. It's these types of things which will lead to comments such as "We've had $1500 cards before" or "We've had $2000 cards before" etc.

It's naive to just say "rely on competition to lower the prices"; it's perfectly acceptable to pay the now higher price and at the same time say that it's too high and unjustified. Negative feedback does work; lack of feedback doesn't. Sales aren't the be-all and end-all either, which is another area I won't dive into other than to say stock availability is not actually a good indicator of actual unit sales.

@Lathlaer puts it rather well, so I don't need to say much more than that. There is currently a price/product void between the 2080 and 2080 Ti which Nvidia may at some point fill, but if they do, it will be technically worse than what used to be the price-reduced top-tier card, i.e. the 1080 Ti.

I guess even if they were charging $3000 for it (like they did with the titan or whatever it was), I'd probably be saying the same thing.  Value is dictated by the highest bidder.

I agree it's naive to rely on competition to lower prices, but ultimately, without some sort of government regulation, that's all that is going to do it in the end.



5 hours ago, leadeater said:

So why is it such a large increase over the price increase for the 2080? What exactly makes it $1200 and not $999? Is this another "because Nvidia can"? Because I've already said that is not a justification; that's just pointing to the price tag and saying "Look, it's $1200". I'm perfectly capable of seeing its price; I sort of have to be, to disagree with it.

 

Normal price scaling... Remember that as you go up in performance your FPS per dollar goes down, and the 2080 Ti, when given a situation where the CPU isn't bottlenecking it, can really run away with things.

 

Depending on whether the link works, check 7:30 here:

 

 

That's a 28%-and-change performance hike. Even if the FPS per dollar were to match the 2080, it would amount to a roughly $900 price tag. Given the way FPS per dollar drops off as you go up the tiers, a $999 price tag was completely unrealistic. Whether $1200 is right is another question, but you were always going to see a sharp hike.
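
A back-of-the-envelope version of that maths, assuming the 2080's $699 base MSRP and the ~28% gap quoted above (a rough sketch of the argument, not Nvidia's actual pricing logic):

    # Rough sketch of the FPS-per-dollar point above.
    # Assumed inputs: $699 base MSRP for the RTX 2080 and the ~28% 4K
    # performance gap quoted from the linked benchmark.
    rtx_2080_price = 699             # USD
    perf_ratio = 1.28                # 2080 Ti vs 2080
    equal_value_price = rtx_2080_price * perf_ratio
    print(round(equal_value_price))  # ~895, i.e. the "about $900" figure above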

 

5 hours ago, Blademaster91 said:

There isn't anything new about the bracket; Nvidia gave it the x80 Ti tier name, and they're relying on the RTX hype to sell it for a much larger increase than the previous Ti price.

And as for phones, phone sales are down quite a bit as prices increase without there being much reason to upgrade from a phone that is a few years old.

I'd really like to see some sales numbers compared to the 1000 series, though the performance upgrade really isn't worth spending over $1,000 for most people.

Except when you use the card for the feature Nvidia is marketing, it does 56 fps on DXR Low at 1440p in Battlefield V. Just IMO, but that doesn't seem worth it for a $1,000+ GPU, certainly when most people with that budget have a 120Hz or higher monitor.

 

See above, its raster performance is quite good. Also, DLSS should be a pure performance gain once Nvidia gets the algorithms tuned at least a little, as it unloads the AA work from the raster engine; once the Tensor cores are doing DLSS even slightly faster than the raster engine could do the AA, the raster engine will be able to kick performance up a notch.
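
To make that argument concrete, here's a toy frame-time model of the offloading idea; all the millisecond figures are made-up placeholders, so it only illustrates the "AA moves to the Tensor cores" reasoning, not measured DLSS behaviour:

    # Toy model of the offloading argument above (all timings are hypothetical).
    raster_ms = 12.0   # assumed per-frame shading/raster work
    taa_ms = 2.5       # assumed cost of AA when it runs on the shader cores
    dlss_ms = 2.0      # assumed cost of DLSS when it runs on the Tensor cores

    baseline_ms = raster_ms + taa_ms        # AA competes with the raster work
    offloaded_ms = max(raster_ms, dlss_ms)  # AA overlaps with the raster work
    print(1000 / baseline_ms, 1000 / offloaded_ms)  # ~69 fps vs ~83 fps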

 

On top of all that, the more we delve into BFV's DXR implementation, the more head-scratchers I'm seeing. It's clearly extremely buggy ATM, which raises some sharp questions in my mind about how optimized it is. I think we need to see more games with it before we write it off as unplayable in all games. What does the release schedule look like for further RTX and DLSS implementations?

 

2 hours ago, leadeater said:

I know it's part of the price tag; everything is. The point I was getting at is that Tensor cores, like any new tech before them, are not a unique situation. It's not that Tensor cores aren't new, it's that this isn't a new situation.

What about this situation justifies the price increase? I don't need it pointed out that they are part of the price; so were Pixel Shaders, Vertex Shaders, Geometry Shaders, Unified Shaders, Tessellation and GameWorks. Those are just a few of the new technologies that have come into existence over time, across generations of GPUs, so what about Tensor cores is different that it would command a higher asking price than ever before, specifically for the 2080 Ti?

Why is just the 2080 Ti such a large increase in price compared to the price increase of the 2080 or 2070? All I'm left with, after evaluating what people have given in the past, is "because they can" or "the die size for TU102 is way bigger"; only one of those is a justification, and I've yet to see much evidence for why the increase had to be so large, not just that it increased at all.

 

Here's the thing: they took the existing GPU tech, slightly exceeded its performance at the same price points, and then slapped a metric arse-ton of extra hardware on there. Node shrinks generally give performance gains because the smaller node can yield a higher transistor count at an equivalent die yield rate. Those transistors all go into increasing performance, which produces a higher-performing chip at a similar price point.

 

Turing, however, has had a node shrink, a die size increase, and a lot of the extra transistors dedicated to other hardware (RT cores and Tensor cores), and is still priced the same as last gen. That's actually a really impressive achievement overall.
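
To put rough numbers on the die-size side of that, here's a sketch using the simple Poisson yield model; the defect density and the no-edge-loss wafer maths are assumptions, while 471mm² (GP102) and 754mm² (TU102) are the published die sizes:

    # Rough good-dies-per-wafer sketch (Poisson yield model, assumed defect density).
    import math

    defects_per_cm2 = 0.1                        # assumed, for a mature node
    wafer_area_cm2 = math.pi * (30.0 / 2) ** 2   # 300 mm wafer, edge loss ignored

    def good_dies(die_area_mm2):
        area_cm2 = die_area_mm2 / 100.0
        candidates = wafer_area_cm2 // area_cm2            # crude dies-per-wafer estimate
        yield_rate = math.exp(-defects_per_cm2 * area_cm2)  # Poisson defect yield
        return candidates * yield_rate

    print(good_dies(471))  # GP102 (1080 Ti) class die: ~90+ good dies per wafer
    print(good_dies(754))  # TU102 (2080 Ti) class die: roughly half that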


34 minutes ago, CarlBar said:

Turing, however, has had a node shrink, a die size increase, and a lot of the extra transistors dedicated to other hardware (RT cores and Tensor cores), and is still priced the same as last gen. That's actually a really impressive achievement overall.

It's not the same price as last gen; everything went up. GM200 was 601mm² and was vastly cheaper than Turing. I'm not unaware of why, though, as it was the 3rd generation of GPUs on 28nm. However, Pascal was everything you just described, yet the xx80 Ti only went up $50 USD, not $500 USD, and yes, I am using FE prices for both.

I'm not ignoring the market factors either; the specific issue is the price of the 2080 Ti.

Pascal too was a major advancement in GPU technology, combined with a node shrink.

Price-to-performance proportionality is the same situation on Turing as on Pascal or anything else: value per unit of performance does go down as performance increases. This doesn't, however, address the issue of the 2080 Ti increasing much more than the 2080 did over the 1080.

Also, the RT cores and Tensor cores don't cost more to fab beyond the extra area; that cost would be in R&D, and I doubt it cost significantly more than anything else they have developed, and a large amount of that was done in Volta. Volta is architecturally very similar and is on the same node. And on that topic, the Tesla V100 was only a 13% price increase over the Tesla P100; sure, they're high-margin products, but if the much larger die size or the Tensor cores were so costly, it had minimal effect there.
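
A quick sanity check of that V100/P100 point, taking the ~13% price increase quoted above as given (the die areas are the published GV100/GP100 figures):

    # Die area grew ~34% while the quoted price increase was only ~13%.
    gv100_mm2, gp100_mm2 = 815, 610
    area_ratio = gv100_mm2 / gp100_mm2   # ~1.34
    price_ratio = 1.13                   # the ~13% increase quoted above
    print(area_ratio, price_ratio)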


56 minutes ago, mr moose said:

Again I have already listed my reason for why I think the price is so high.  I'm not sure what was wrong with that.

I'm not saying it's wrong, I'm pointing out that neither you nor anyone else actually knows the cost, but we can look at GPU development history, which is full of expensive R&D, and yet the 2080 Ti still sticks out pretty damn far. Telling me it costs money to develop tech doesn't really say much at all, because that was true before, and before, and before, and before... so it doesn't really say much, does it?

What makes this time unique? It ain't Tensor cores or RT cores, because those are just transistors and general R&D; there is no indication that these were excessively expensive to develop, and a lot of the groundwork was in fact done by Google, who were the first to release tensor accelerator hardware and created TensorFlow.


4 minutes ago, leadeater said:

I'm not saying it's wrong, I'm pointing out that neither you nor anyone else actually knows the cost, but we can look at GPU development history, which is full of expensive R&D, and yet the 2080 Ti still sticks out pretty damn far. Telling me it costs money to develop tech doesn't really say much at all, because that was true before, and before, and before, and before... so it doesn't really say much, does it?

What makes this time unique? It ain't Tensor cores or RT cores, because those are just transistors and general R&D; there is no indication that these were excessively expensive to develop, and a lot of the groundwork was in fact done by Google, who were the first to release tensor accelerator hardware and created TensorFlow.

If Nvidia used Google's tech, that would make sense. But I don't know if they did.

 

The price increase is small and seems fine for everything but the 2080 Ti. I think we can all agree it went up by a larger amount (because it did), without a clear reason why. The reason might just be that the profit, if any, on the 2070 and 2080 is small (so you get more adopters), which they try to counter with the previous Titan segment of the market.

 

As you can see in this simplified graph, the benefit of the old Titan and the new RTX 2080 Ti is fairly small, even for cinematic games.

https://docs.google.com/spreadsheets/d/1xGzdYeTiTEH_kJYwYAYdvyUHT8ZFLHYxhmReEjJvntU/edit?usp=sharing

 

But aha, you say: RTX and DLSS! Well, sure, but those apply to the whole RTX series, which leaves the 2080 Ti still weirdly placed.

 

Edit: I linked the sources I used. I wanted to get a quick graph done, so I took the first single source I could find for most cards. The MSRPs should be fine; they come from Wikipedia. Based on the graph, a logical place for the RTX 2080 Ti would be around $1k.


Just now, daimonie said:

If Nvidia used Google's tech, that would make sense. But I don't know if they did.

I don't think they did; it's more that Google created the framework and the initial technology foundation, which honestly is just INT8/INT4 matrix math accelerators. Basically, Nvidia didn't have to start from nowhere.
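
For reference, the core operation this class of hardware accelerates is a small fused matrix multiply-accumulate; a minimal numpy sketch of it (Volta's Tensor cores work on 4x4 FP16 tiles with FP32 accumulation, while Google's first TPU used INT8):

    # D = A @ B + C on a small tile: multiply in FP16, accumulate in FP32.
    import numpy as np

    A = np.random.rand(4, 4).astype(np.float16)
    B = np.random.rand(4, 4).astype(np.float16)
    C = np.zeros((4, 4), dtype=np.float32)

    D = A.astype(np.float32) @ B.astype(np.float32) + C
    print(D)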

 


Yeah... I wouldn't really count it as real time if there's this much noise.

 

 


 


1 hour ago, leadeater said:

I'm not saying it's wrong, I'm pointing out that neither you nor anyone else actually knows the cost, but we can look at GPU development history, which is full of expensive R&D, and yet the 2080 Ti still sticks out pretty damn far. Telling me it costs money to develop tech doesn't really say much at all, because that was true before, and before, and before, and before... so it doesn't really say much, does it?

What makes this time unique? It ain't Tensor cores or RT cores, because those are just transistors and general R&D; there is no indication that these were excessively expensive to develop, and a lot of the groundwork was in fact done by Google, who were the first to release tensor accelerator hardware and created TensorFlow.

Nvidia's R&D budget since 2017 has gone up significantly. And I mean significantly:

 

https://ycharts.com/companies/NVDA/r_and_d_expense

 

https://spectrum.ieee.org/view-from-the-valley/semiconductors/design/hey-big-spender-for-semiconductor-rd-thats-intel

 

Seriously, they have dropped a lot of dosh into their latest products.

 



1 hour ago, leadeater said:

It's not the same price as last gen; everything went up. GM200 was 601mm² and was vastly cheaper than Turing. I'm not unaware of why, though, as it was the 3rd generation of GPUs on 28nm. However, Pascal was everything you just described, yet the xx80 Ti only went up $50 USD, not $500 USD, and yes, I am using FE prices for both.

I'm not ignoring the market factors either; the specific issue is the price of the 2080 Ti.

Pascal too was a major advancement in GPU technology, combined with a node shrink.

Price-to-performance proportionality is the same situation on Turing as on Pascal or anything else: value per unit of performance does go down as performance increases. This doesn't, however, address the issue of the 2080 Ti increasing much more than the 2080 did over the 1080.

Also, the RT cores and Tensor cores don't cost more to fab beyond the extra area; that cost would be in R&D, and I doubt it cost significantly more than anything else they have developed, and a large amount of that was done in Volta. Volta is architecturally very similar and is on the same node. And on that topic, the Tesla V100 was only a 13% price increase over the Tesla P100; sure, they're high-margin products, but if the much larger die size or the Tensor cores were so costly, it had minimal effect there.

 

Yes, it is the same price. The MSRP of both the 2080 and the 1080 Ti at launch was $699. They're the two cards in the same price bracket, and the 2080, despite a bigger die, achieves a slight performance bump over its previous-gen competitor with no MSRP hike. Like I said, once you start factoring in all the pre-overclocked cards things get super muddy super fast. But the MSRP for the basic version at least didn't move.

And my point about the RT cores and Tensor cores isn't so much the development cost (it's a non-trivial factor, but it's not the main point) as the fact that they account for the overwhelming majority of the extra transistors compared to the previous generation.

The whole reason you normally see a performance bump from a follow-on generation is primarily down to the GPU having more transistors in it that give it more computational power to work on the rendering. You will get some uplift from architecture improvements, but the majority of the speed increase comes from packing more transistors in.

Turing, however, uses all those extra transistors (and possibly some of those used for the normal raster engine on previous-gen cards) for the new hardware.

Simply put, the normal reason you see a major uplift from a new-generation card does not apply to Turing, because the normal advantages of a new generation are being used for something else.

I suspect that's the entire reason we got a 2080 Ti at launch: they knew that with all the extra transistors being used elsewhere they simply couldn't make a 2080 with more than a very slight performance hike without it costing them too much, so they settled for a slight edge and put together a higher-margin 2080 Ti to deliver the advance in performance at a bearable cost to build, and possibly also to recoup some of the margin they're losing on the 2080.


11 minutes ago, mr moose said:

Nvidia's R&D budget since 2017 has gone up significantly. And I mean significantly:

 

https://ycharts.com/companies/NVDA/r_and_d_expense

 

https://spectrum.ieee.org/view-from-the-valley/semiconductors/design/hey-big-spender-for-semiconductor-rd-thats-intel

 

Seriously, they have dropped a lot of dosh into their latest products.

 

Interesting, I hadn't seen the mid-2017 and beyond figures. Pascal released early 2017, so that would be more in the 2014-2016 budget timeline; I suspect Volta started around that July 2016 rise, but I wasn't aware it had gone past the 400 mil mark.


4 minutes ago, leadeater said:

Interesting, I hadn't seen the mid-2017 and beyond figures. Pascal released early 2017, so that would be more in the 2014-2016 budget timeline; I suspect Volta started around that July 2016 rise, but I wasn't aware it had gone past the 400 mil mark.

Yep, steady growth in the budget up to 2016, then after 2017 it basically doubled. It would be foolish to attribute it all to Tensor and RTX, but I can't imagine Volta sucked up that much more than normal.

EDIT: Also, given all the big names went up pretty hard (Intel, TSMC, etc.), it may be a result of die shrinks too. Getting them smaller might just be taking a big chunk of R&D.



6 minutes ago, CarlBar said:

Yes, it is the same price. The MSRP of both the 2080 and the 1080 Ti at launch was $699. They're the two cards in the same price bracket, and the 2080, despite a bigger die, achieves a slight performance bump over its previous-gen competitor with no MSRP hike. Like I said, once you start factoring in all the pre-overclocked cards things get super muddy super fast. But the MSRP for the basic version at least didn't move.

It's still a price increase; the whole stack moved up, so the upper and lower points are higher. Unless we're talking about the very bottom or very top of the range, there will be cards from the current and previous gen, or from a competitor, around the same price. If you're happy with a generational, node-shrink performance gain of zero for the same cost, then that's rather poor by semiconductor industry standards; performance trends go up, they don't remain level, otherwise it's not actually progressing.

Until we get much better examples of RTX, and there will be, Turing is not at all a good next generation of GPU for performance. Even so, the ray tracing aspect isn't going to be that large a part of the overall render pipeline and the composition of what is on screen, so to actually expect a wide shift over to real-time ray tracing would be very unfair, and I'm not going to do that.

I'm up for some really good future RTX titles and I really do hope they are good; honestly though, it won't change my opinion of the 2080 Ti, even if I do end up buying one anyway.

 

13 minutes ago, CarlBar said:

And my point about the RT cores and Tensor cores isn't so much the development cost (it's a non-trivial factor, but it's not the main point) as the fact that they account for the overwhelming majority of the extra transistors compared to the previous generation.

 

The whole reason you normally see a performance bump from a follow-on generation is primarily down to the GPU having more transistors in it that give it more computational power to work on the rendering. You will get some uplift from architecture improvements, but the majority of the speed increase comes from packing more transistors in.

And this has generally been done in the past with no or low price increases. Inflation is a thing of course, and the nodes are getting more expensive to develop, but it's the TSMCs of the world that bear a large amount of that cost. Nvidia's stake in the node sounds a lot higher this time around, since it's very customized for them; no one else is asking for 800mm²-900mm² dies, not even Intel.

 

16 minutes ago, CarlBar said:

Simply put, the normal reason you see a major uplift from a new-generation card does not apply to Turing, because the normal advantages of a new generation are being used for something else.

I agree with your assessment about the performance aspect, but not the cost, because as I mentioned transistor counts and die sizes have gone up in the past, then down, then up. The R&D figures in the post above are a good 200 mil+ more than I thought, so I can see why the whole product range got shifted up in pricing.


16 minutes ago, mr moose said:

It would be foolish to attribute it all to Tensor and RTX, but I can't imagine Volta sucked up that much more than normal.

Well, Volta was the start of Tensor development; Turing is just reusing that along with the front-end improvements. The RT cores, I think, are an evolution/re-purposing of the Tensor cores, given how restrictive the utilization of the Tensor cores is.

 

16 minutes ago, mr moose said:

EDIT: Also, given all the big names went up pretty hard (Intel, TSMC, etc.), it may be a result of die shrinks too. Getting them smaller might just be taking a big chunk of R&D.

The 12nm node is custom for Nvidia, so I would expect they have a large financial share in its development, which was very costly from what I heard.


3 hours ago, leadeater said:

Well, Volta was the start of Tensor development; Turing is just reusing that along with the front-end improvements. The RT cores, I think, are an evolution/re-purposing of the Tensor cores, given how restrictive the utilization of the Tensor cores is.

 

The 12nm node is custom for Nvidia, so I would expect they have a large financial share in its development, which was very costly from what I heard.

I'm still curious about how they are different. There's a branch of optics related here: https://en.wikipedia.org/wiki/Paraxial_approximation

If you use those approximations, you can recast reflections, Snell's law and everything in terms of Matrix products (2nd order tensor products). 
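
For anyone unfamiliar, that's the standard paraxial ray-transfer (ABCD) formalism: a ray is a column vector of height and angle, and each optical element is a 2x2 matrix, for example free-space propagation over a distance d followed by refraction at a flat interface from index n_1 to n_2:

    \begin{pmatrix} y' \\ \theta' \end{pmatrix}
    = \begin{pmatrix} 1 & 0 \\ 0 & n_1/n_2 \end{pmatrix}
      \begin{pmatrix} 1 & d \\ 0 & 1 \end{pmatrix}
      \begin{pmatrix} y \\ \theta \end{pmatrix}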

 

So what's the difference between RT and Tensor cores? 


No way in hell I'd be willing to sacrifice that much performance for a visual gimmick.

 

But it's new so it must be the greatest thing since sliced bread according to the fan boys that are willing to fork out any amount of money for a turd as long as it has an nvidia logo on it...



24 minutes ago, Hellion said:

No way in hell I'd be willing to sacrifice that much performance for a visual gimmick.

 

But it's new so it must be the greatest thing since sliced bread according to the fan boys that are willing to fork out any amount of money for a turd as long as it has an nvidia logo on it...

visuals are performance

 

you play at 480p or a higher resolution? ok then why do you play at higher resolutions? oh visuals hmm

lol smh


32 minutes ago, Hellion said:

No way in hell I'd be willing to sacrifice that much performance for a visual gimmick.

 

But it's new so it must be the greatest thing since sliced bread according to the fan boys that are willing to fork out any amount of money for a turd as long as it has an nvidia logo on it...

You may call the 2080 Ti overpriced, but it isn't a piece of crap. It is objectively the fastest consumer GPU on the market for gaming and can put out higher FPS than any other card at 4K by a significant margin. As a tech enthusiast I will 100% sacrifice performance for ray tracing.


9 minutes ago, pas008 said:

visuals are performance

 

you play at 480p or a higher resolution? ok then why do you play at higher resolutions? oh visuals hmm

lol smh

Yeah, why would people play with shadows on at all, or at high settings, if they would rather not sacrifice performance for visuals? If all anyone cared about was performance, everyone would just run at the lowest visual settings.


8 hours ago, leadeater said:

I agree with your assessment about the performance aspect, but not the cost, because as I mentioned transistor counts and die sizes have gone up in the past, then down, then up. The R&D figures in the post above are a good 200 mil+ more than I thought, so I can see why the whole product range got shifted up in pricing.

 

Bear in mind there's a lot of complication in that, as each node has its own quirks. GloFo's 14nm, for example (which used to be IBM's 14nm), was explicitly designed for high yields at large die sizes. But that carries other compromises in node capability.

 

5 hours ago, daimonie said:

I'm still curious about how they are different. There's a branch of optics related here: https://en.wikipedia.org/wiki/Paraxial_approximation

If you use those approximations, you can recast reflections, Snell's law and everything in terms of Matrix products (2nd order tensor products). 

 

So what's the difference between RT and Tensor cores? 

 

Ray tracing has no relation to the paraxial approximation whatsoever. I suggest watching this:

 

 

1 hour ago, Hellion said:

No way in hell I'd be willing to sacrifice that much performance for a visual gimmick.

 

But it's new so it must be the greatest thing since sliced bread according to the fan boys that are willing to fork out any amount of money for a turd as long as it has an nvidia logo on it...

 

Anyone who expected usable performance on release day hasn't paid attention in the past when new features came out. This is normal. Engine and driver updates will eventually uptick things. Also, BFV's implementation is so buggy ATM that there's a serious question as to whether the performance is representative, or if, like the Assassin's Creed games, it's just horribly optimized.

