NVIDIA Announces GP102-based TITAN X with 3,584 CUDA cores

17 minutes ago, HKZeroFive said:

NVIDIA has a pseudo-monopoly on the high end until Vega comes out in 2017. Until then, NVIDIA can control the malleable market to its liking.

Actually, I could be wrong...

 

[image: b0xdID8.jpg]

'Fanboyism is stupid' - someone on this forum.

Be nice to each other boys and girls. And don't cheap out on a power supply.

Spoiler

CPU: Intel Core i7 4790K - 4.5 GHz | Motherboard: ASUS MAXIMUS VII HERO | RAM: 32GB Corsair Vengeance Pro DDR3 | SSD: Samsung 850 EVO - 500GB | GPU: MSI GTX 980 Ti Gaming 6GB | PSU: EVGA SuperNOVA 650 G2 | Case: NZXT Phantom 530 | Cooling: CRYORIG R1 Ultimate | Monitor: ASUS ROG Swift PG279Q | Peripherals: Corsair Vengeance K70 and Razer DeathAdder

 


8 minutes ago, Notional said:

Goes for fiber too. When you have a 90+% profit margin, it means maintenance cost is not an explanation for the price. Profit margin is. But you are completely missing the point.

 

Yes, AMD, NVidia and several big developers are not credible 9_9

 

GPUOpen effects are just as developed as GameWorks. The only difference is the licensing terms, which allow devs to edit, mod and change the effects as they please. Not sure what you are talking about. Developers are not committing code to AMD.

 

They are allowed. No one has said otherwise or tried to prevent them from making GameWorks. I have heavily criticized them for the anti-consumer result of their design and use of GW. Vendor lock-in is the exact opposite of competition. So that is simply NOT true.

 

What are you on about? HairWorks had something like a 20 fps penalty in W3. RotTR had less than a 5 fps penalty while adding effects (water/snow/etc.), and the hair is still longer (thus more computation needed per strand). Turbulence seems to work just fine:

The only reason I don't like HairWorks is its enormous performance penalty without any increased visual fidelity to show for it. Unoptimized is how I would describe HW.

 

For the 1080 in a handful of DX12/Vulkan games, sure. But we both know that is not a proper alternative. Especially taking the 4GB of VRAM into account, as well as the power requirements. 

 

No one water-cooled a Titan Z. Spending twice as much on a TZ, only to spend even more on a custom water cooling loop and block? Either way, the Titan Z was a failure as a card, so its potential under even more expensive scenarios is irrelevant now.

 

For which games could you change the tessellation factor in the GameWorks effects directly in a settings file?

When it comes to GameWorks, you still fail to take into account that something might be technically possible but not legally allowed under their contract. AMD never signed such a contract.

 

You cannot honestly think that a solid surface of pink triangles is performance well spent? You would tear AMD a new one if they did that. It simply does not translate all that performance penalty into actual visual fidelity.

 

It is when they are proprietary and/or skew the performance between AMD and NVidia as extremely as NVidia has done so far. From what we have seen, HairWorks is not better than the newer iterations of TressFX, neither visually nor performance-wise.

Yeah, because we all know that GameWorks games are cheaper than non-GW games. Come on, that argument is moot.

 

Again, you blame the developers for something that might easily be a restriction put on them contractually. Even CDPR hinted strongly at that. Either way, we know AMD can do it, because they added the function in their drivers. Point is, it should not be necessary in the first place.

Claims without evidence of correctness or non-disprovable logic proofs of validity are worthless. I can claim to be the greatest performance engineer in the world and present evidence by creating an even better version of Linpack, but that's still not enough because it doesn't account for all platforms and performance metrics.

 

GPUOpen is maintained by AMD on GitHub and Subversion. Sure, game studios can just pull from the library and use it in-house and modify it in-house as desired, but there are many developers, and the only group maintaining it is a small AMD team. Also, any developer could modify their GameWorks code if they bought a license to do so, and that license is not expensive considering the tens of millions of dollars that go into game development.

 

Vendor lock-in is a form of competition. If you can create an environment so good no one wants to leave, that is a perfect victory. There's nothing preventing you from using a GSync monitor with an AMD graphics card or a FreeSync monitor with an Nvidia graphics card. There is no evidence Nvidia or AMD is stopping monitor manufacturers from putting both standards on one monitor, so it's still not anti-competitive. CUDA was Nvidia's direct answer to provide a GPGPU compute API for the world. AMD should have developed its own, because OpenCL is still so bad it can't remotely keep up.

 

It's a 20 fps hit on max with other effects going that compound on the rendering. On its own it's a 10-12 fps hit for Maxwell cards. HairWorks has the same snow/ice/water effects as TressFX; they just weren't used in TW3. Would you like me to point out the exact lines of code in the development kit that prove this, or provide the documentation? Also, the hair is not in as great a volume on Lara Croft, so it's really not as direct a comparison as you'd like to think. HairWorks is incredibly optimized. It's actually more optimized than TressFX if you push hair to the same count and length. HW is also more accurate in the way it handles turbulence physics.

 

I actually have a friend from Miami who used a custom EK block on the Titan X, and he beat the 295x2 in every benchmark.

 

TW3 and Batman AK both let you change tessellation levels in the config files, as do all games made by DICE.

 

I've read the contracts and developer TOS for GameWorks. There is nothing barring them from giving the users control over that. It's just laziness.

 

No, I wouldn't call that performance well spent and I would harangue AMD on that, but it's not Nvidia who did it, it was Crysis' developers.

 

It is better visually (too bad the only example you know of is TW3 where the character is ugly anyway), and we have no benchmark to objectively measure performance of one vs. the other, though I can guarantee you from using the Visual Studio performance profiling tools that HW is vastly superior in performance.

 

Actually they are on the whole cheaper, if only by $5. Anything that cuts costs will cut the price of games if demand remains steady. The fact prices haven't gone up with inflation is actually a real testament to cost cutting.

 

CDPR hinted at no such thing, and I've read the whole contract that gets given to people who want to publish GW games. Your theory is BS.

 

No, it shouldn't. Developers should get off their asses.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


11 minutes ago, HKZeroFive said:

Actually, I could be wrong...

 

[image: b0xdID8.jpg]

My money would be on a dual Polaris 10 card, but even then that seems dumb. Releasing a high-end graphics card like Vega or even a dual 480 at Christmas makes zero sense. Something like the 460 seems like a good holiday gift for the twitch-happy kid who plays Overwatch.

 

It's also possible Roy has gone senile and is randomly posting things that make him happy.

R9 3900XT | Tomahawk B550 | Ventus OC RTX 3090 | Photon 1050W | 32GB DDR4 | TUF GT501 Case | Vizio 4K 50'' HDR

 


4 minutes ago, Briggsy said:

My money would be on a dual Polaris 10 card, but even then that seems dumb. Releasing a high-end graphics card like Vega or even a dual 480 at Christmas makes zero sense. Something like the 460 seems like a good holiday gift for the twitch-happy kid who plays Overwatch.

 

It's also possible Roy has gone senile and is randomly posting things that make him happy.

Eh, it kinda makes zero sense posting an RX 460 teaser on a post about the GTX Titan X.

 

Roy Taylor is one of the few people I would have liked to see AMD without.


14 minutes ago, Notional said:

Demand for the Titan and the x80 Ti should essentially come from the exact same market, unless you really need the added VRAM, as they are the same chips. The difference lies in pricing and supply. Exactly, and you can lie above that curve for a short while. The longer it takes for competition to provide an alternative, the longer you can stay above that curve. Look at your standard value equivalence line (VEL) like this:

[image: value equivalence line chart (basic-conceptsofmarketing-41-728.jpg)]

When a brand goes above the line, it's price skimming. Such a thing can only be upheld as long as there is no competition to provide an alternative. That has always been the case with Titan cards. The only exception was the Titan Z, which performed worse than the 295X2, a card at half its price. That made the Titan Z a disaster, as no one in their right mind would buy such a value proposition.

That's adorable. You found a chart online and tried to interpret it.

 

Yes, companies can price skim, that's very observant of you...

But in this case, it's very easy to create the curve for the market with measurable GPU performance and price... And the Titan XP falls below that curve.

And you still have not addressed the fact that the value of GPU performance drops rapidly, which is why both companies do things like rebranding chips, to minimize R&D as a cost.

 

23 minutes ago, Notional said:

Good, so pricing on the Titan cannot be claimed to be production cost based. You can certainly argue it's the recoup of the fixed costs of the architecture, but honestly I very much doubt that. That usually happens through regular economies of scale or outright from the professional market, like Teslas and Quadros.

 

Yeah but the first mover advantage is seriously small here, as an x80ti is a given so far. But that is why the Titan card can exist and be sold. Does not change that it's overpriced to hell, just that people are willing to shell out despite it.

I fail to see the relevance of rebranding in this case. Sure, it's the same chip, but x80 Ti's are not full fat. And yes, chips do get cheaper the longer you make them, as yields increase and defects diminish. In your argument, sure, the fixed costs were lowered a long time ago (specifically R&D), but this is still a somewhat different market situation.

Let me put it the other way around: why isn't NVidia launching the x80 Ti first and then, when stock is high, launching a Titan at Titan prices? Well, you answered it yourself with the first-mover point.

Yes. Nvidia has to recoup the $1.4 billion they spent on developing Pascal, and then become profitable for investors so they can continue to develop products.

 

And how do you fail to see the relevance of rebranding? It doesn't matter if it's "full fat", a variation of the chip, whatever... It's the fact that they develop a chip, sell it at a price, and then later re-package it at a much lower price. It's the same practice.

 

And if you can't understand why you don't sell the lower priced, re-packaged chip first.. there's no helping you...

 

31 minutes ago, Notional said:

The entire issue is that the relative price-to-performance line/curve/whatever has slowly risen to new heights. That is what I'm criticizing. Not that Titan cards exist, but that they are overpriced, price-skimming ripoffs (and yes, a similarly priced AMD card would be too).

 

Well, you can't argue the delta between x80 and Titan has increased when they are essentially arbitrary. When Kepler launched, the 680 was the highest-end card. Now we have two higher tiers based on a completely separate higher-end chip (Gx100/102). You could just as easily state that "lower end" cards are being marketed as x80 cards instead of, say, x60 cards, while retaining the x80 price. Again, something achievable when competition is insufficient. It's brilliant market manipulation NVidia is doing. All of a sudden a single-chip graphics card at $1,200 seems reasonable. That's the kind of thing companies love! And consumers are defending this behaviour?

 

That new segment was made by creating larger chips on the same node. That's all. Nothing impressive about it. But I guess we can thank 28 nm staying relevant for 5+ years for that.

You can't argue any of that at all...

 

When new market segments emerge, it doesn't shift the old ones... Before the Titan series, the x60s, x70s, x80s were all seeing 20-30% generational improvements, and that level of performance (factoring in generational improvements) has a certain price associated with it. They still occupy the same segment performance- and price-wise.

 

From the 580 to the 980, the MSRP went from $499, to $549... and.. what do you know? That matches inflation almost exactly...

You'd be correct in saying that the 1080 doesn't, but that's because the 1080 has shifted from its usual segment. The usual 20-30% performance increase became 60%, justifying the extra $50 not accounted for by inflation. The fact that they stayed on the 28nm process for longer than usual is irrelevant because we were still seeing the same performance increases as standard per the industry. Especially if you're "viewing things from the customer's perspective".
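A quick sanity check on that inflation claim, as a rough sketch; the cumulative-inflation figure is an assumption (roughly 8-9% US CPI between the 580's late-2010 launch and the 980's late-2014 launch), not an exact lookup:

# Rough inflation check: GTX 580 launched at $499 (Nov 2010), GTX 980 at $549 (Sep 2014).
# The cumulative inflation rate below is an assumed approximation, not exact CPI data.
msrp_580 = 499
cumulative_inflation = 0.085  # assumed ~8.5% over late 2010 to late 2014

inflation_adjusted = msrp_580 * (1 + cumulative_inflation)
print(f"Inflation-adjusted 580 MSRP: ${inflation_adjusted:.0f} vs 980 MSRP: $549")
# prints roughly $541, i.e. within about $8 of the 980's $549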

If the current gen Titan was a 30% improvement over the last x80, then you could safely say that the Titan has shifted to take over the x80 segment.. But that has never been the case.

 

And I can fully argue that the performance delta between the x80 and the Titan has increased, and that justifies a greater price for the Titan... That's basic economics and math. As you showed in your pretty graph... As perceived value increased (i.e. relative market performance) so does the acceptable price.

47 minutes ago, Notional said:

My perspective is from the consumer's point of view. Yours seems to be Nvidia's point of view.

 

Really? With you talking about what the price should be because of the cost to the companies, and how this generational gap was bigger because of a node change (irrelevant to the consumer), it seems like you're very much not talking from the consumer perspective.

As shown above, prices are still very much in line with what the consumer has come to expect from the industry over the past decade... So I'm not sure why all of a sudden things are worse for them because you don't think Nvidia is pricing appropriately compared to their costs.. Doesn't seem to be from a consumer perspective.

 

 

 

Also, in regard to your comment as a whole. You keep talking about how the Titan XP is overpriced, and how the Titan XP is way above the curve... But at no point are you providing any facts or numbers to show that that is the case...

 

We know the curve shows diminishing returns in terms of performance per dollar, and we know the cost of each additional unit of performance is nearly the same between the 1070 and the 1080 as it is between the 1080 and the Titan XP... Logically we can deduce that the Titan XP falls below the curve.
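A rough sketch of that curve argument, using launch MSRPs and the relative-performance ratios floated in this thread as placeholder inputs rather than measured benchmarks:

# Hypothetical price/performance points for the curve argument above.
# Prices are launch MSRPs; "perf" is normalized to a GTX 1070 = 1.0 using the
# rough ratios discussed in this thread, so treat the output as illustrative only.
cards = {
    "GTX 1070": {"price": 379, "perf": 1.00},
    "GTX 1080": {"price": 599, "perf": 1.25},          # ~25% faster than a 1070
    "Titan XP": {"price": 1200, "perf": 1.25 * 1.30},  # assuming ~30% faster than a 1080
}

for name, card in cards.items():
    print(f"{name}: ${card['price'] / card['perf']:.0f} per unit of relative performance")

# If the Titan XP instead lands ~50% above a 1080 (perf ~1.875), its cost per unit
# of performance drops from ~$738 to ~$640, much closer to the rest of the curve.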

 


1 hour ago, MageTank said:

From what I can see, the Titan X (Pascal) is expected to be "up to" 60% faster than the previous Titan X (Maxwell).

It has 60% more TFLOPS than the previous Titan X, but that doesn't translate directly to gaming performance.

 

The only rumor out there on its actual performance is 50% faster than a GTX 1080... But until benchmarks are out, it's just that... A rumor.
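For reference, the 60% figure lines up roughly with theoretical FP32 throughput (cores x 2 ops per clock x clock speed). The core counts and boost clocks below are the published specs, treated as approximate, and none of this says anything about actual gaming performance:

# Back-of-the-envelope FP32 throughput: TFLOPS = cores * 2 * clock (MHz) / 1e6.
# Uses listed boost clocks, so these are approximate peak numbers only.
def tflops(cores, boost_mhz):
    return cores * 2 * boost_mhz / 1e6

titan_x_maxwell = tflops(3072, 1075)  # ~6.6 TFLOPS
gtx_1080 = tflops(2560, 1733)         # ~8.9 TFLOPS
titan_x_pascal = tflops(3584, 1531)   # ~11.0 TFLOPS

print(f"Titan XP vs Titan XM: {titan_x_pascal / titan_x_maxwell - 1:.0%} more TFLOPS")
print(f"Titan XP vs GTX 1080: {titan_x_pascal / gtx_1080 - 1:.0%} more TFLOPS")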


22 minutes ago, -BirdiE- said:

It has 60% more TFLOPS than the previous Titan X, but that doesn't translate directly to gaming performance.

 

The only rumor out there on its actual performance is 50% faster than a GTX 1080... But until benchmarks are out, it's just that... A rumor.

That's why I had "up to" in my quote. Nvidia likes to use those two words, then put "in VR" at the very bottom of things. I have to agree with what the others are saying, though. It's unlikely we'll ever see a 1080 Ti card, and that leaves me disappointed. I just can't see myself shelling out $1200 for a 250W monster. Guess I'll wait and see what Vega has to offer. Who knows, I might even pick up a GTX 1070 to hold me over until Volta.

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


1 minute ago, MageTank said:

That's why I had "up to" in my quote. Nvidia likes to use those two words, then put "in VR" at the very bottom of things. I have to agree with what the others are saying, though. It's unlikely we'll ever see a 1080 Ti card, and that leaves me disappointed. I just can't see myself shelling out $1200 for a 250W monster. Guess I'll wait and see what Vega has to offer. Who knows, I might even pick up a GTX 1070 to hold me over until Volta.

It's really hard to say... But I feel like 60% faster than a Maxwell Titan X is the floor for the card, and 50% faster than a 1080 is the ceiling. If it's only 60% faster than a Titan XM, then that means it's ~23% faster than a 1080... which would make it overpriced.... If it's 50% faster than a 1080, that's actually a pretty good price.

 

I guess it all depends where the rumor is coming from... If it's coming from Nvidia's marketing department, it's going to be an inaccurate, cherry picked statistic.... If it's coming from someone who has actually tested the card and broke the NDA, then it might be accurate (the 50% faster than a 1080 that is).
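A quick sketch of that floor/ceiling arithmetic; the 30% figure for how far a GTX 1080 sits above the Maxwell Titan X is taken from the estimates in this thread, not from a benchmark:

# Convert "X% faster than the Maxwell Titan X" into "% faster than a GTX 1080",
# assuming the 1080 is ~30% faster than the Maxwell Titan X (rough thread estimate).
GTX1080_OVER_TITAN_XM = 1.30

def speedup_over_1080(speedup_over_titan_xm):
    return speedup_over_titan_xm / GTX1080_OVER_TITAN_XM - 1

print(f"Floor, 1.6x the Maxwell Titan X: {speedup_over_1080(1.6):.0%} faster than a 1080")
# prints ~23%, which matches the figure discussed later in the thread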


1 minute ago, -BirdiE- said:

It's really hard to say... But I feel like 60% faster than a Maxwell Titan X is the floor for the card, and 50% faster than a 1080 is the ceiling. If it's only 60% faster than a Titan XM, then that means it's ~23% faster than a 1080... which would make it overpriced.... If it's 50% faster than a 1080, that's actually a pretty good price.

 

I guess it all depends where the rumor is coming from... If it's coming from Nvidia's marketing department, it's going to be an inaccurate, cherry picked statistic.... If it's coming from someone who has actually tested the card and broke the NDA, then it might be accurate (the 50% faster than a 1080 that is).

Nvidia's own blog said it was 60% faster than the previous Titan X. The problem is, they never specified where that performance is. If that is the case, then I am left to believe that it's not 50% faster than the 1080. If it was, they would have gladly used that metric. Saying "60% faster than the previous Titan X" makes it sound better than it actually is, because the Titan X (Maxwell) is already slower than the new GTX 1080 by roughly 25-30%, depending on task/game/overclocks.

 

The fact that they used the words "up to" tells me that it will likely not be 60% faster all around, but only at very specific things. I honestly hope AMD delivers something amazing at a cheaper price, because regardless of what the others say, $1200 for a product that is only 30% faster than a $600-$700 product is just shenanigans. 


2 minutes ago, MageTank said:

Nvidia's own blog said it was 60% faster than the previous Titan X. The problem is, they never specified where that performance is. If that is the case, then I am left to believe that it's not 50% faster than the 1080. If it was, they would have gladly used that metric. Saying "60% faster than the previous Titan X" makes it sound better than it actually is, because the Titan X (Maxwell) is already slower than the new GTX 1080 by roughly 25-30%, depending on task/game/overclocks.

 

The fact that they used the words "up to" tells me that it will likely not be 60% faster all around, but only at very specific things. I honestly hope AMD delivers something amazing at a cheaper price, because regardless of what the others say, $1200 for a product that is only 30% faster than a $600-$700 product is just shenanigans. 

These all seem like very reasonable assumptions. That's disappointing.

 

Because, ya, if the 1080 is 30% faster than the Titan XM, and the Titan XP is 60% faster than the Titan XM... Then (1.6/1.3 = 1.23) the Titan XP is only 23% faster than the 1080.


3 minutes ago, -BirdiE- said:

These all seem like very reasonable assumptions. That's disappointing.

 

Because, ya, if the 1080 is 30% faster than the Titan XM, and the Titan XP is 60% faster than the Titan XM... Then (1.6/1.3 = 1.23) the Titan XP is only 23% faster than the 1080.

But we know how well Pascal overclocks, and the Titan XP clocks are already more than 100 MHz slower than the 1080. I wouldn't despair just yet.


Just now, patrickjp93 said:

But we know how well Pascal overclocks, and the Titan XP clocks are already more than 100 MHz slower than the 1080. I wouldn't despair just yet.

That's my hope. The 1080 overclocks well, but is generally more limited by the chip itself, and not temperatures...

 

If the Titan XP is clocked slower due to thermals (too many of dem CUDA cores), then throwing a waterblock on it and cooling it with my 5x120mm worth of Rad space should be enough to get a decent overclock.

 

Although.. If it's only 23% faster than a 1080 to begin with.... I think I might have trouble dishing out $1200 USD for it...


4 minutes ago, -BirdiE- said:

That's my hope. The 1080 overclocks well, but is generally more limited by the chip itself, and not temperatures...

 

If the Titan XP is clocked slower due to thermals (too many of dem CUDA cores), then throwing a waterblock on it and cooling it with my 5x120mm worth of Rad space should be enough to get a decent overclock.

 

Although.. If it's only 23% faster than a 1080 to begin with.... I think I might have trouble dishing out $1200 USD for it...

Even a 1x120 is plenty for a single GPU in my experience. You just need a good pump and a good fan. These days a single 1x120 Black Ice GTX Nemesis and a 2150RPM Gentle Typhoon keeps my two Titan Blacks in the home workstation in the 85-90* range with a 150 MHz overclock. Not ideal temps for sure, but I'm not into doing big overclocks anymore.


5 minutes ago, patrickjp93 said:

Even a 1x120 is plenty for a single GPU in my experience. You just need a good pump and a good fan. These days a single 1x120 Black Ice GTX Nemesis and a 2150RPM Gentle Typhoon keeps my two Titan Blacks in the home workstation in the 85-90* range with a 150 MHz overclock.

85-90C? I hope not... (Sorry, I'm Canadian)

 

I mean, I could easily get away with less (4770k and whatever GPU I decide on will be in the loop).. But I have fun pushing my hardware to the limits, and I want to make sure thermals aren't the limiting factor. All I do is overkill.

 


4 hours ago, Notional said:

-SNIP-

 

2 hours ago, MageTank said:

Nvidia's own blog said it was 60% faster than the previous Titan X. The problem is, they never specified where that performance is. If that is the case, then I am left to believe that it's not 50% faster than the 1080. If it was, they would have gladly used that metric. Saying "60% faster than the previous Titan X" makes it sound better than it actually is, because the Titan X (Maxwell) is already slower than the new GTX 1080 by roughly 25-30%, depending on task/game/overclocks.

 

The fact that they used the words "up to" tells me that it will likely not be 60% faster all around, but only at very specific things. I honestly hope AMD delivers something amazing at a cheaper price, because regardless of what the others say, $1200 for a product that is only 30% faster than a $600-$700 product is just shenanigans. 

What a valid argument looks like...


No one commenting on the new performance gains for INT8? If I recall correctly, it's more efficient to store 8-bit arrays in GPU memory than, say, 16-bit or 32-bit arrays, as fewer bytes of data are used. I'm not sure how this ties into the new deep learning inferencing instruction, but I would assume that data which would normally be stored in system memory can now be compressed to fit on the GPU itself, reducing the bottlenecks associated with the PCIe bus and the latency of accessing data in system RAM.
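A tiny illustration of the storage-size point (this only shows the bytes-per-element difference, not the new inferencing instructions):

import numpy as np

# Same number of elements, different precision: int8 needs a quarter of the
# memory of float32, which is the storage argument made above.
n = 1_000_000
as_fp32 = np.ones(n, dtype=np.float32)
as_int8 = np.ones(n, dtype=np.int8)

print(as_fp32.nbytes)  # 4000000 bytes
print(as_int8.nbytes)  # 1000000 bytes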

 

On a side note, I would assume this will have a positive impact on pooled resources or GPU clusters, especially for applications involving DMA and InfiniBand.

 

Any thoughts? I have no experience with deep learning (mechanical engineer).

▶ Learn from yesterday, live for today, hope for tomorrow. The important thing is not to stop questioning. - Einstein◀

Please remember to mark a thread as solved if your issue has been fixed; it helps others who may stumble across the thread at a later point in time.


7 hours ago, patrickjp93 said:

-SNIP-

Nonetheless, I find the AMD and NVidia people to be quite credible in this matter, just as I think the actual developers in question seem credible in that article. You claim they are not? That's your prerogative, but I believe them more, as they actually argue their points and are actual industry insiders.

 

That's the exact point. Developers can download GPUOpen effects and mod/use them to their hearts' content. That is what they did with TressFX, making PureHair. I'm sure developers could, but no one did; why do you think that is? However, with all the backlash NVidia has gotten from GameWorks, and with the announcement that they will open it up (more), hopefully it won't be a problem anymore. That seems to be needed, as more and more are scrapping GameWorks. The only thing that seems to be useful is HBAO/+.

 

Vendor lock-in is de facto anti-competitive. You are factually incorrect here.

Nothing preventing you, but you lose out on the value proposition you paid the $100-200 premium for with GSync. AMD cannot prevent anyone from implementing a VESA standard in anything. The issue is that a monitor would need two monitor controllers (scalers) to support both, which adds so much cost that the monitor would price itself out of the market.

 

PureHair had all the same effects added, including even more (water/snow/etc.), yet it still only used a quarter of the performance to achieve it, while looking arguably better. Sorry, HairWorks sucks, which is why no one uses it. Lara's hair is fully (heh) as great in volume as Geralt's. Unless you play it on XBone. Set the effect to high.

Second time you mention turbulence. Any sources to back up that claim?

 

So your friend spent more than twice the price of a 295X2 on the card? Yeah, I would demand higher perf too. But I digress. The point is that the Titan Z failed hard because of direct competition. The other Titan cards did not, so the market was able to carry the extreme price premium.

 

DICE doesn't use GameWorks; they even criticized it publicly (well, the lead dev did). In TW3 you could not. Only the AA of the tessellation was editable.

 

Of course you have read that. You have no idea what contracts have been made between NVidia and any dev, or what limitations the devs have been given by NVidia and/or the publisher.

 

It would be very interesting to see a game try to utilize both HairWorks and TressFX. Too bad both techs are implemented in so few games. I've seen HairWorks on other stuff, but I don't think the image/effect quality at all justifies an additional 300% performance hit.

 

GameWorks games have the exact same price with or without it, whether you compare to other games or to games in the same series (like COD). So no.

 

Odd, since CDPR outright said they could not optimize HairWorks for AMD: http://forums.cdprojektred.com/threads/35278-Nvidia-HairWorks?p=1658427&viewfull=1#post1658427 But I guess you know better than the devs themselves.

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


7 hours ago, -BirdiE- said:

-SNIP-

I linked that as a courtesy so you might learn a little, or at least get a hint as to which theories to look into. Don't worry, I won't bother doing that anymore, as you don't seem to comprehend them anyway.

 

The issue with rebranding is that an x80 Ti is not essentially a rebrand. It's a cut-down chip, so it's never used for Titans anyway. I get your point; it just doesn't apply here.

 

Yeah, 1.4 billion dollars. That includes their AI driving computers, proprietary server boards with NVLink, and deep learning. None of that has anything to do with the Titan and should not reflect at all on the price. However, how NVidia chooses to spread out the fixed costs is impossible for us to conclude on.

 

Yes, just like I talked about earlier with the first mover and the way that pricing and supply/demand work. That still does not justify why a company should charge that much. Obviously the answer is: because they can. But that is exactly why I criticize them from a consumer perspective. They can also focus on proprietary tech and vendor lock-in; that doesn't mean it isn't very open to criticism from a consumer perspective.

 

You might want to read my post again, because you obviously misunderstood something; I outright concluded why you don't sell the lower-priced card first. Jeez.

 

7 hours ago, -BirdiE- said:

-SNIP-

 

You touch upon it slightly with perceived value, but you seem to ignore how that perceived value is created. That is the point. NVidia has been great at creating a market/demand that allows such a huge price hike from older generations' king-of-the-hill cards to the 700-series Titan KotH card and up. NVidia made a new, higher-end segment than the x80 cards because they could, due to lack of competition, not because they needed to based on the costs of the products, fixed or variable. Great from NVidia's perspective, bad from a consumer's perspective. You can disagree all you want on that point.

 

No one's talking about inflation. Inflation cannot explain a shift to a new $1,000+ card segment. That's the whole point. NVidia simply shifted their entire lineup down with the 10 series, so a normal Gx100 chip is no longer an x80 chip but two cards higher, via the x80 Ti and Titan series. It's clever, as they can boost the price, which is good for NVidia, but you can never argue that it isn't bad for the consumer. Increasing perceived value is simply a marketing trick, nothing more.

The fact of the matter is that before the original Titan, a $1,000 card was unheard of. NVidia has been so successful in their marketing, thus changing the perceived value, that some consumers are defending a $1,200 price point for a Titan XP that is only a GP102 chip. Ridiculous.

 

How is a node shrink irrelevant to consumers? It improves price to performance significantly more than a new architecture does, lowers power usage, and usually price as well. All the AMD 200/300 series rebrands and all the "turbulence" in the GPU market were caused by the 20nm node first being delayed, then outright scrapped. It had larger consequences than most people realize. NVidia just had enough money to throw at the problem that no one saw it as an issue.

 

So the Titan XP being the highest-priced single-GPU consumer gaming card ever made (AFAIK) is not a fact or number that proves it overpriced? Are you that brainwashed by NVidia marketing?

 

We can agree that the performance per dollar between the different 1000 series cards makes sense. The problem is that it starts too high and ends too high. However, with the XP we simply do not know the performance yet. If the "up to 60%" is similar to the retarded graph NVidia sent out comparing the 1060 to the 480, then I would seriously question the pricing.

 

The conclusion is that NVidia prices the Titan cards as high as possible because the lack of competition allows it. Not because of costs in production/R&D/whatever, and not because the card is worth it as a whole, but simply because of supply and demand mechanics. But you already know that, since you have a "business degree".


1 hour ago, Notional said:

-SNIP-

No, he paid $1700 for the Titan Z. The price fell sharply a long while back if you recall.

 

CDPR can't optimize jack squat anyway. Failure to do so does not mean it can't be done. Many countries and companies failed to get a viable supersonic commercial jet working, until Concorde.

 

If AMD can optimize it via the drivers, then CDPR can optimize it from the game's side too. It's all about profiling and good old detective work, something game programmers on the whole have no clue how to do, and those who do, short of Mike Acton, royally suck at it.


8 hours ago, -BirdiE- said:

That's my hope. The 1080 overclocks well, but is generally more limited by the chip itself, and not temperatures...

 

If the Titan XP is clocked slower due to thermals (too many of dem CUDA cores), then throwing a waterblock on it and cooling it with my 5x120mm worth of Rad space should be enough to get a decent overclock.

 

Although.. If it's only 23% faster than a 1080 to begin with.... I think I might have trouble dishing out $1200 USD for it...

Well, we can see how many TFLOPS the card has at different clocks to guess where performance might land. I don't see why it couldn't hit 1800 MHz like all the other Pascal cards can if the heat were controlled, so performance should be about 40% faster at the same clocks. If heat can't be controlled and it doesn't go much higher than its base/boost clocks, it's not going to be as big a gap as we would want.
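A rough sketch of where that ~40% comes from: at equal clocks the theoretical throughput ratio is just the CUDA core ratio (3,584 vs 2,560), assuming FP32 throughput scales with cores times clock, which real game performance won't follow exactly:

# Theoretical FP32 throughput ratio: proportional to cores * clock (MHz).
# The clock values are assumptions (listed Titan XP boost vs a typical 1080 overclock).
def relative_throughput(cores_a, clock_a, cores_b, clock_b):
    return (cores_a * clock_a) / (cores_b * clock_b)

both_at_1800 = relative_throughput(3584, 1800, 2560, 1800)    # 1.40 -> ~40% faster
titan_at_boost = relative_throughput(3584, 1531, 2560, 1800)  # ~1.19 if it stays at stock boost

print(f"Both at 1800 MHz: {both_at_1800 - 1:.0%} more throughput than a 1080")
print(f"Titan XP at 1531 MHz vs 1080 at 1800 MHz: {titan_at_boost - 1:.0%}")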


51 minutes ago, AlwaysFSX said:

Well, we can see how many TFLOPS the card has at different clocks to guess where performance might land. I don't see why it couldn't hit 1800 MHz like all the other Pascal cards can if the heat were controlled, so performance should be about 40% faster at the same clocks. If heat can't be controlled and it doesn't go much higher than its base/boost clocks, it's not going to be as big a gap as we would want.

It's an interesting thought. GPU Boost 3 is tuned very aggressively towards temperature, where going above 50°C causes the software to back clocks down incrementally. In Precision XOC you can override that by toggling KBoost and force max boost clocks despite temperatures, and there's no loss in stability. My guess is that Nvidia will tune down the GPU Boost aggressiveness on the Titan XP and prioritize temperature over power draw, giving the card a much greater voltage and temperature ceiling to play under.

 

I still don't know how they will handle the extra heat. Using the vapor chamber on the 1080 FE only gave good thermals until the heatsink became saturated, and then it was back to normal. The Titan XP FE is going to be hitting 90-plus degrees, no doubt about it in my mind. Aftermarket cooling is the best bet on such a beast, like a good loop.


9 minutes ago, Briggsy said:

It's an interesting thought. GPU Boost 3 is tuned very aggressively towards temperature, where going above 50°C causes the software to back clocks down incrementally. In Precision XOC you can override that by toggling KBoost and force max boost clocks despite temperatures, and there's no loss in stability. My guess is that Nvidia will tune down the GPU Boost aggressiveness on the Titan XP and prioritize temperature over power draw, giving the card a much greater voltage and temperature ceiling to play under.

 

I still don't know how they will handle the extra heat. Using the vapor chamber on the 1080 FE only gave good thermals until the heatsink became saturated, and then it was back to normal. The Titan XP FE is going to be hitting 90-plus degrees, no doubt about it in my mind. Aftermarket cooling is the best bet on such a beast, like a closed loop.

It's not like Nvidia hasn't made a custom cooler before; the 590 was dual-slot but the Titan Z was triple-slot. They may go for a non-traditional reference design to cope with the heat. Either that, or keep the card heavily down-clocked at stock. Though I don't think they'll change how GPU Boost works for one card, because as far as I'm aware it will work as designed on the Titan XP, since it should be hitting max temp at stock in the first place. Or so I assume.


Nvidia went full retard on the naming. AMD should launch a Fury XP and go with all the leveling-up marketing to stick it to Nvidia.


Well, at least it looks pretty :l and futuristic :P! (We lack those things nowadays with gadgets...)

Groomlake Authority


exactly!

CPU: Intel i7 5820K @ 4.20 GHz | MotherboardMSI X99S SLI PLUS | RAM: Corsair LPX 16GB DDR4 @ 2666MHz | GPU: Sapphire R9 Fury (x2 CrossFire)
Storage: Samsung 950Pro 512GB // OCZ Vector150 240GB // Seagate 1TB | PSU: Seasonic 1050 Snow Silent | Case: NZXT H440 | Cooling: Nepton 240M
FireStrike // Extreme // Ultra // 8K // 16K

 

