
nVidia GeForce Partner Program: Well-Intentioned Marketing or Anti-Competitive?

WMGroomAK
4 minutes ago, Blademaster91 said:

It still says "gaming" on it, and Asus and MSI have other gaming brands like "Dual" or "Armor", so it's hardly a "plain package" or "white box" treatment. Nvidia wanting to use ROG, Gaming X, or Aorus to help consumers identify those sub-brands as Nvidia cards wouldn't exactly be anti-consumer, IMO.

That's a bit out there. nV provides different quality chips, and if MSI wants to keep tiered nV cards, as they have now, AMD will be plain-packaged. I'm sure MSI won't give up on making nV Armor cards, since not all chips are binned the same, and this is where the plain-packaging concern comes from, along with the concern about one brand overtaking another. Unless AMD wants to pay out plenty of MDF, they will have basic "gaming" cards without any proper gaming branding, as in "Gigabyte Radeon RX 580 gaming card".


1 minute ago, mr moose said:

 

No, because this has only come to light recently and is still fairly new; we have only one example of AMD not getting the Aorus branding, and everything else seems to be the same. There is not enough data to make a claim about changes in marketing material. For that we need a lot more time and records.

Well of course it's the only new product we've seen since.

What makes me react is that the packaging and design are exactly the same but the branding is different, which is odd. If the two products are variants of a common design, it makes sense for them to get the same branding and marketing. We'll have to see what happens with the Navi or Vega refresh cards, but it may be too late by then...


1 minute ago, mr moose said:

Or just do what every other manufacturer does and create another label. Intel did it with Pentiums and Celerons, labeling some as Gold and some as Silver.

And we come back to the starting point: this is concerning. It is nV taking over existing trademarks belonging to someone else, without having to shell out the cash for the brand development; the AIBs did that for them. AMD would need to heavily incentivize AIBs to spend resources on creating a new label, and gaming brands are not as simple as Intel's bronze/gold/platinum shit, where they basically had an intern work in Photoshop for an hour making pretty coloured text. A gaming brand takes a lot of time and effort to achieve market penetration, and nV has basically appropriated all that AIB effort.


3 minutes ago, laminutederire said:

They seem to take the brand with the best image and the brand associated with high-end products (i.e. what were previously the best coolers for both companies' cards).

Yes, but that seems to be up to the manufacturers to decide. I think it makes sense to take the brand associated with high end products, and give that to the high end products (let's face it, AMD are not competitive in the high end).

I haven't seen any evidence that Asus are being forced to give ROG to Nvidia, but because it is associated with high end products it is the logical thing to do.

 

5 minutes ago, laminutederire said:

That should directly hurt AMD's sales and reinforce the unfair mindshare bias towards Nvidia. The way it is going, for AMD to get on top they'll have to have significantly better products for 10 years straight to undo the mindshare effects this implies.

Yeah, it's the halo effect.

That does not prevent them from creating a new name for AMD products and marketing that as high end, though. For example, Federation of Gamers for AMD and Republic of Gamers for Nvidia; or STRIX for Nvidia and ROC for AMD would be another example.


Oh, and another thing to note: since nV are devious bastards, they are quite adept at what we call mnogohodovochka here in Russia (a multi-step strategy). The ray tracing announcement plays well into that: expect some AAA games to be advertised as a next-level gaming experience with nV ray tracing's real-life visuals, stickers on cards saying "READY FOR NEXTGEN RAYTRACING FIDELITY", and other marketing mumbo jumbo further reinforcing nV's position as the only proper gaming-experience GPU maker.

 

 

Oh, one last thing - Gaming Box is not a brand but a product name, so the AMD one is [Radeon] {RX 580} (Gaming Box), as in [brand name] {product designation} (product name).


Just now, LAwLz said:

Yes, but that seems to be up to the manufacturers to decide. I think it makes sense to take the brand associated with high end products, and give that to the high end products (let's face it, AMD are not competitive in the high end).

I haven't seen any evidence that Asus are being forced to give ROG to Nvidia, but because it is associated with high end products it is the logical thing to do.

 

Yeah, it's the halo effect.

That does not prevent them from creating a new name for AMD products and marketing that as high end, though. For example, Federation of Gamers for AMD and Republic of Gamers for Nvidia; or STRIX for Nvidia and ROC for AMD would be another example.

Well, considering Nvidia and those AIBs brand a 1060 or 1070 as high end, AMD is up there with the Vega 56, which competes decently with the 1070s and 1070 Tis.

But the real issue is the way they market it: high end doesn't mean a high-end GPU, it means a high-end build, well binned and so forth. That means you get a premium product for every segment as well, even for mid-range cards, where AMD is competitive. You then get a significant unfair advantage when marketing a 580 against a 1060, and that's an issue.

That's one thing.

 

The other thing is: they're not competitive on the enthusiast cards etc., but nothing stops them from stomping Nvidia with Navi. What happens then if they are locked out of the high-end brands? They'll sell worse than they should, and it'll prevent them from having the sales that could help them stay on top. Their only way out is to fund huge marketing campaigns to establish new high-end names. However, they're tight enough on money as it is, so I'm afraid they can't do both. And we end up with either inferior products due to lack of R&D funds, or inferior marketing and inferior sales of better products due to lack of marketing. (Or both, but in that case it'll just bury them fast enough that we may only have Nvidia for GPUs, because Intel isn't producing GPUs yet.)


31 minutes ago, laminutederire said:

The other thing is: they're not competitive on the enthusiast cards etc., but nothing stops them from stomping Nvidia with Navi. What happens then if they are locked out of the high-end brands? They'll sell worse than they should, and it'll prevent them from having the sales that could help them stay on top. Their only way out is to fund huge marketing campaigns to establish new high-end names. However, they're tight enough on money as it is, so I'm afraid they can't do both. And we end up with either inferior products due to lack of R&D funds, or inferior marketing and inferior sales of better products due to lack of marketing. (Or both, but in that case it'll just bury them fast enough that we may only have Nvidia for GPUs, because Intel isn't producing GPUs yet.)

Well, if AMD got rid of their awful marketing team, which would rather cater to miners, and invested more in making GPUs really worth the AIBs spending their own branding and marketing on, AMD would compete better with their existing products.

AMD attacking Nvidia over Pascal's performance per watt and then releasing Vega with higher overall power draw is one example that didn't help them. Even with AMD sharing the gaming brands, their cooler or card quality is sometimes lower than the comparable Nvidia-branded card; not necessarily AMD's fault, though, as AIBs will of course sink more cost into what sells more. I don't know about AMD hurting for money; they should be making loads of profit from selling most of their cards to miners.


52 minutes ago, Blademaster91 said:

Even with AMD sharing the gaming brands, their cooler or card quality is sometimes lower than the comparable Nvidia-branded card; not necessarily AMD's fault, though, as AIBs will of course sink more cost into what sells more.

Not usually, if comparing Strix to Strix etc. For the most part they are exactly the same: other than the mounting design for the cooler, the main design of the coolers is the same, and the VRMs also use the same parts. It's not cost-effective to source too many different components, so they create common designs and product segments and apply them universally.

 

For example, the Strix 1060 and Strix 480 use the same VRMs in the same layout, the IR3555.

 

Strix 1060: [PCB photo]

Strix 480: [PCB photo]

Not sure where or why you got the impression that the AMD cards are lower quality, when they are for the most part identical other than the GPU package on the PCB and the memory layout; everything else is the same componentry- and construction-wise. This is why Nvidia is pushing for branding changes: the cards are basically identical to the untrained eye, and you have to check the label to know if it's an Nvidia or AMD GPU.

 

Edit:

Let's play a game: name the graphics card. Btw, I forgot myself which one is which lol.

 

Spoiler: [photo of mystery card #1]

Spoiler: [photo of mystery card #2]

 


1 hour ago, LAwLz said:

Yes, but that seems to be up to the manufacturers to decide. I think it makes sense to take the brand associated with high end products, and give that to the high end products (let's face it, AMD are not competitive in the high end).

I haven't seen any evidence that Asus are being forced to give ROG to Nvidia, but because it is associated with high end products it is the logical thing to do.

That does not explain why you still see 1050s with Gaming X versions but not anything from AMD. In fact, if you look at MSI's product page for GPUs, you'll find that all older and current AMD cards which had Gaming X versions have been removed. You can find them by searching manually for the product name using the search bar, but they are hidden from the sort functionality. When it comes to the RX 580, instead of the Gaming X versions you'll find that there are Armor MK2 cards.

Spoiler: [images of the Gaming X, Armor and Armor MK2 coolers]

If you inspect them you can see that the Armor and Armor MK2 are the same, based on the fins as well as the name, obviously. Then if you look at the specifications page, you'll find that the Armor card weighs ~800g compared to ~1.2kg for the Gaming X. So it seems to me like you're basically getting shafted if you go with the Radeon card.



58 minutes ago, Blademaster91 said:

 

They already have those with the Polaris chips. Their market share grew even before the mining craze thanks to those chips. You guys should all stop painting AMD as an evil company. They made errors and everything, but they do not deserve to be squeezed out of the picture. If it weren't for AMD we wouldn't have 1070 Tis, the 1060s would possibly have appeared months later, and so on. Nvidia had inferior products for the price points, so they had to push new cards to market or push cards out faster (same for the Titan fiasco in recent times). If AMD were nowhere close to competing, we would not have those things happening, because Nvidia would be so out of touch that they could release things whenever they want, etc.


7 minutes ago, Carclis said:

That does not explain why you still see 1050s with Gaming X versions but not anything from AMD. In fact, if you look at MSI's product page for GPUs, you'll find that all older and current AMD cards which had Gaming X versions have been removed. You can find them by searching manually for the product name using the search bar, but they are hidden from the sort functionality. When it comes to the RX 580, instead of the Gaming X versions you'll find that there are Armor MK2 cards.


 

If you inspect them you can see that the Armor and Armor MK2 are the same, based on the fins as well as the name, obviously. Then if you look at the specifications page, you'll find that the Armor card weighs ~800g compared to ~1.2kg for the Gaming X. So it seems to me like you're basically getting shafted if you go with the Radeon card.

Interesting find, yea no AMD Gaming X to be found unless you go out of your way to search for it on their site.

https://www.msi.com/Graphics-card/Radeon-RX-580-GAMING-X-8G/Specification

https://www.msi.com/Graphics-card/GeForce-GTX-1060-GAMING-VR-6G/Specification

 

And what's this Air Boost product naming? Is it new? I don't follow MSI much, so no idea. Is Aero also going Nvidia-exclusive?

RADEON RX VEGA 64 AIR BOOST 8G: https://www.msi.com/Graphics-card/Radeon-RX-Vega-64-AERO-8G-OC (url not matching product name).


7 minutes ago, leadeater said:

And what's this Air Boost product naming? Is it new? I don't follow MSI much, so no idea. Is Aero also going Nvidia-exclusive?

RADEON RX VEGA 64 AIR BOOST 8G: https://www.msi.com/Graphics-card/Radeon-RX-Vega-64-AERO-8G-OC (url not matching product name).

Yeah I'm not sure about that. I've never seen it before but it appears to be a name that is only used for the Vega cards. As the url suggests Aero is probably what it was before and I'm certainly familiar with their older Aero cards.



7 minutes ago, Carclis said:

Yeah I'm not sure about that. I've never seen it before but it appears to be a name that is only used for the Vega cards. As the url suggests Aero is probably what it was before and I'm certainly familiar with their older Aero cards.

There's also Iron branding now too, think that is new.

https://www.msi.com/Graphics-card/Radeon-RX-Vega-64-IRON-8G

 

Edit:

Nvm seems to just be extending the limited edition Vega cards under a different name.


1 hour ago, laminutederire said:

They already have those with the Polaris chips. Their market share grew even before the mining craze thanks to those chips. You guys should all stop painting AMD as an evil company. They made errors and everything, but they do not deserve to be squeezed out of the picture. If it weren't for AMD we wouldn't have 1070 Tis, the 1060s would possibly have appeared months later, and so on. Nvidia had inferior products for the price points, so they had to push new cards to market or push cards out faster (same for the Titan fiasco in recent times). If AMD were nowhere close to competing, we would not have those things happening, because Nvidia would be so out of touch that they could release things whenever they want, etc.

 

You think they will be able to maintain their market share once next-gen nV cards come out? We already know DX12 ray tracing is hardware-driven; there are hardware features AMD will not have until their true next gen (nV calls it a ray-tracing engine). I doubt GCN can be modified enough to handle ray tracing well. This is exactly why AMD wasn't part of MS's and nV's collaboration on DXR. This is a much bigger problem than GPP; it gives GPP the leverage to do just about everything.

 

Oh, and when AMD was nowhere near able to compete and delayed card releases, that's when they dropped market share like a rock: Maxwell. So the same thing is going to happen with the next-gen release of nV cards, right? All AMD has to counter with is 12nm Vega, which might not even come to the consumer market; it might be HPC-only.


16 minutes ago, Razor01 said:

 

I don't think ray tracing is that big of a deal right now. AMD's approach to it might be picked up by companies before Nvidia's, solely because Nvidia's is a black box using machine learning, while AMD's is a mix between rasterization as it's done now and ray tracing. That is more familiar to devs and therefore might be easier for them to use, and most importantly it builds on more mature technology, which is something people have had more time to prepare for, in a way.

 

In either case, that tech probably won't be seen in high-profile games before Navi. Their timeline is Navi for 2019, isn't it? I would say ray tracing in games should arrive in late 2019-2020 at the earliest. Mostly because it's new now, and a game takes at least 1 to 2 years of work to get done, and for some types of games it's more in the 5-year range. So starting a game from now leads us to that timeline at least. It also coincides with the PS5's expected release timeframe, whose hardware should support it if it is going to be used heavily.

 

AMD may not be in that initiative because they have a significantly different approach to the subject, and it was incompatible for them to work with Nvidia's approach right now. And I'd guess they prefer pushing their tech through Vulkan instead of DX12, since Windows 7 market share is still going strong among gamers and that API won't be available to people not wanting to jump on board the Windows 10 train. Whether that's a good call remains to be seen.

 

But you're right that AMD needs a genius way to get themselves back on track, since they basically need to be on par with Nvidia to survive, and they need something like a 10-20% lead if they want a significant market-share turnaround to get AIBs to ditch Nvidia.


8 minutes ago, Razor01 said:

You think they will be able to maintain their market share once next-gen nV cards come out? We already know DX12 ray tracing is hardware-driven; there are hardware features AMD will not have until their true next gen (nV calls it a ray-tracing engine). I doubt GCN can be modified enough to handle ray tracing well. This is exactly why AMD wasn't part of MS's and nV's collaboration on DXR. This is a much bigger problem than GPP; it gives GPP the leverage to do just about everything.

The DX12 ray tracing feature is hardly the be-all and end-all of DX12 or gaming; it's cool, but it's not any different to any of the other GameWorks tech. There are often times when I look at a game with GW on and off and overall like the look more with it off; rather subjective to what you like, though. Either way, DXR will have nothing to do with any future market loss. If Nvidia releases a new gaming architecture soon and it's much faster, that will be the cause of it, not sub-features of DX12 that for the most part will be hard to spot; we're not talking cinema-grade 24-hour+ frame renders.

 

14 minutes ago, Razor01 said:

Oh, and when AMD was nowhere near able to compete and delayed card releases, that's when they dropped market share like a rock: Maxwell. So the same thing is going to happen with the next-gen release of nV cards, right? All AMD has to counter with is 12nm Vega, which might not even come to the consumer market; it might be HPC-only.

That was more the Maxwell refresh. GeForce 700 was a tight race, but it was the GeForce 900 series that really pushed Maxwell to its maximum potential, and GCN 2 and 3 couldn't touch that; neither can GCN 4, for that matter. GCN 5 has the performance, 12 months late.

 

HPC for AMD is still mostly a dead end; none of our researchers want AMD GPUs because they want CUDA. They'd rather just find extra funding than bother to learn OpenCL properly.


1 hour ago, laminutederire said:

snip

 

1 hour ago, leadeater said:

snip

 

It's a huge deal. When a graphics company can't fully support all DX features, those cards don't sell as well. Case in point: NV1. Case in point: the FX series; MS created a new DirectX version for nV, even though they didn't need to. Background on the FX series and DX9: nV was part of the DX9 committee at the beginning stages of the FX design, they left the committee, and look at how their cards turned out... There were the performance issues, of course, but the root cause of those performance issues was that the FX series was not designed fully for DX9; it was a hybrid chip, DX8 with DX9 features.

 

And think about this: Navi taped out late last year, if we are to believe AMD, so by not being part of DXR, Navi doesn't have these new features and AMD won't have anything until, oh, 2019-20?

 

"early access to hardware", this gives it a whole new meaning. 


36 minutes ago, Razor01 said:

It's a huge deal. When a graphics company can't fully support all DX features, those cards don't sell as well. Case in point: NV1. Case in point: the FX series; MS created a new DirectX version for nV, even though they didn't need to. Background on the FX series and DX9: nV was part of the DX9 committee at the beginning stages of the FX design, they left the committee, and look at how their cards turned out... There were the performance issues, of course, but the root cause of those performance issues was that the FX series was not designed fully for DX9; it was a hybrid chip, DX8 with DX9 features.

 

And think about this: Navi taped out late last year, if we are to believe AMD, so by not being part of DXR, Navi doesn't have these new features and AMD won't have anything until, oh, 2019-20?

 

"early access to hardware", this gives it a whole new meaning. 

AMD hardware is already fully compliant with the DX12 spec, and DXR support is already confirmed for AMD as well. We'll have to wait for actual performance comparisons to know the full extent. We're also talking about an optional DX12 feature anyway, of which there are actually quite a few.

 

[image: table of optional DirectX 12 feature support across AMD, Nvidia and Intel GPUs]

As far as optional features go, Nvidia is actually still the least feature-complete when compared to AMD and Intel.

 

Quote

Meanwhile AMD has also announced that they’re collaborating with Microsoft and that they’ll be releasing a driver in the near future that supports DXR acceleration. The tone of AMD’s announcement makes me think that they will have very limited hardware acceleration relative to NVIDIA, but we’ll have to wait and see just what AMD unveils once their drivers are available.

To say Navi won't have any hardware acceleration features is a very premature thing to be saying, and Nvidia wasn't the only one Microsoft was collaborating with; but for the later stages, only Nvidia had current working hardware to get a working demonstration.

 

Quote

But like Microsoft’s other DirectX APIs it’s important to note that the company isn’t defining how the hardware should work, only that the hardware needs to support certain features. Past that, it’s up to the individual hardware vendors to create their own backends for executing DXR commands. As a result – and especially as this is so early – everyone from Microsoft to hardware vendors are being intentionally vague about how hardware acceleration is going to work.

 

In the case of hitting the fallback layer, DXR will be executed via DirectCompute compute shaders, which are already supported on all DX12 GPUs. On the whole GPUs are not great at ray tracing, but they’re not half-bad either. As GPUs have become more flexible they’ve become easier to map to ray tracing, and there are already a number of professional solutions that can use GPU farms for ray tracing. Faster still, of course, is mixing that with optimized hardware paths, and this is where hardware acceleration comes in.

 

Microsoft isn’t saying just what hardware acceleration of DXR will involve, and the high-level nature of the API means that it’s rather easy for hardware vendors to mix hardware and software stages as necessary. This means that it’s up to GPU vendors to provide the execution backends for DXR and to make DXR run as efficiently as possible on their various microarchitectures.  When it comes to implementing those backends in turn, there are some parts of the ray tracing process that can be done in fixed-function hardware more efficiently than can be done shaders, and as a result Microsoft is giving GPU vendors the means to accelerate DXR with this hardware in order to further close the performance gap between ray tracing and rasterization.

 

For today’s reveal, NVIDIA is simultaneously announcing that they will support hardware acceleration of DXR through their new RTX Technology. RTX in turn combines previously-unannounced Volta architecture ray tracing features with optimized software routines to provide a complete DXR backend, while pre-Volta cards will use the DXR shader-based fallback option. Meanwhile AMD has also announced that they’re collaborating with Microsoft and that they’ll be releasing a driver in the near future that supports DXR acceleration. The tone of AMD’s announcement makes me think that they will have very limited hardware acceleration relative to NVIDIA, but we’ll have to wait and see just what AMD unveils once their drivers are available.

The fact that a lot of it is underpinned by DirectCompute means AMD is already in a very good position for it, being as that is what they excel at.

 

Quote

Though ultimately, the idea of hardware acceleration may be a (relatively) short-lived one. Since the introduction of DirectX 12, Microsoft’s long-term vision – and indeed the GPU industry’s overall vision – has been for GPUs to become increasingly general-purpose, with successive generations of GPUs moving farther and farther in this direction.

And as far as Microsoft and the rest of the industry are concerned, having explicit fixed-function hardware for it is not the proper way to be doing it.
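
(Side note, not from the thread: a minimal C++ sketch of how an engine could make the hardware-vs-fallback decision described above at runtime. It assumes the capability query DXR eventually shipped with, ID3D12Device::CheckFeatureSupport with D3D12_FEATURE_D3D12_OPTIONS5, which post-dates this discussion; the helper name is made up.)

    #include <windows.h>
    #include <d3d12.h>

    // Hedged sketch: ask the driver which DXR tier it exposes and decide
    // whether to take the hardware path or a compute-shader fallback.
    bool UseHardwareRaytracing(ID3D12Device* device)
    {
        D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
        if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                               &options5, sizeof(options5))))
        {
            // Older runtime/driver that doesn't know the query: no hardware DXR.
            return false;
        }
        // TIER_NOT_SUPPORTED means no accelerated DXR backend, so the renderer
        // would run its ray tracing via DirectCompute (or skip the effects).
        return options5.RaytracingTier != D3D12_RAYTRACING_TIER_NOT_SUPPORTED;
    }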


20 minutes ago, leadeater said:

AMD hardware is already fully compliant with the DX12 spec, and DXR support is already confirmed for AMD as well. We'll have to wait for actual performance comparisons to know the full extent. We're also talking about an optional DX12 feature anyway, of which there are actually quite a few.

 

[image: table of optional DirectX 12 feature support across AMD, Nvidia and Intel GPUs]

As far as optional features go, Nvidia is actually still the least feature-complete when compared to AMD and Intel.

 

To say Navi won't have any hardware acceleration features is a very premature thing to be saying, and Nvidia wasn't the only one Microsoft was collaborating with; but for the later stages, only Nvidia had current working hardware to get a working demonstration.

 

The fact that a lot of it is underpinned by DirectCompute means AMD is already in a very good position for it, being as that is what they excel at.

 

And as far as Microsoft and the rest of the industry are concerned, having explicit fixed-function hardware for it is not the proper way to be doing it.

 


Tiers are different from DX versions. Tiers tell us how much of something the hardware can do, not whether it can do it at all (or how little, depending on how you look at it). Any version of DirectX needs to have those features to be labeled as such.

 

There is only one feature that nV doesn't have, and that is the stencil reference value from a PS. And this is about the granularity of that value; it's not that it can't do it, you just can't specify how small or large you want the stencil value to be. This is a legacy DX11.3 feature too.

 

Currently, in most applications and games we aren't even at Tier 2 max yet; we haven't even reached Tier 1 in some cases lol.

Quote

 

The tone of AMD’s announcement makes me think that they will have very limited hardware acceleration relative to NVIDIA, but we’ll have to wait and see just what AMD unveils once their drivers are available.

 

 

This is pretty much a given at this point. nV would not have stated a 2 to 3 times performance increase for their cards over current cards otherwise.

 

It might not be fixed-function; I still don't think it's tensor cores, but if it's due to the pipeline...

 

 


26 minutes ago, Razor01 said:

Tiers are different from DX versions. Tiers tell us how much of something the hardware can do, not whether it can do it at all (or how little, depending on how you look at it). Any version of DirectX needs to have those features to be labeled as such.

 

Currently, in most applications and games we aren't even at Tier 2 max yet.

Those are all optional features, and the tiers tell you how compliant the hardware is with them. If the column says no, then it can't do it.

 

As for applications and games, that's not really an issue; most of the important optional features for DX12_1 are driver- and hardware-related and don't need explicit programming to be utilized, though optimizing for them is always best.

 

Quote

Supported hardware is divided into three Resource Binding tiers, which define maximum numbers of descriptors that can be used for CBV (constant buffer view), SRV (shader resource view) and UAV (unordered access view); CBVs and SRVs per pipeline stage; UAVs for all pipeline stages; samplers per stage; and the number of SRV descriptor tables. Tier 3 hardware such as AMD GCN and Intel Skylake has no limitations, allowing fully bindless resources only limited by the size of the descriptor heap, while Tier 1 (Nvidia Fermi, Intel Haswell/Broadwell) and Tier 2 (Nvidia Kepler/Maxwell) hardware impose some limits on the number of descriptors ("views") that can be used simultaneously. Additionally, buffers and textures can be mixed together in the same resource heap only on hardware supporting Resource Heap Tier 2, while Tier 1 hardware requires separate memory heaps for buffers, textures, and render-target and depth stencil surfaces. Resource binding tier 1 and resource heap tier 1 are required for all supporting hardware.

 

A graphics card can be certified as compliant with DX12_1 and not support all of those optional features or not be Tier 2 or 3, e.g. Pascal, which is DX12_1 compliant.
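
(For reference, and not something anyone in the thread posted: a small C++ sketch of reading those optional caps at runtime through ID3D12Device::CheckFeatureSupport and the D3D12_FEATURE_DATA_D3D12_OPTIONS struct, which is where the resource binding tier, resource heap tier and the pixel-shader stencil ref flag mentioned above are reported.)

    #include <windows.h>
    #include <d3d12.h>
    #include <cstdio>

    // Hedged sketch: print the optional D3D12 caps discussed above.
    void PrintOptionalCaps(ID3D12Device* device)
    {
        D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
        if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                               &options, sizeof(options))))
        {
            std::printf("CheckFeatureSupport(D3D12_OPTIONS) failed\n");
            return;
        }

        // Binding tier: Tier 3 (e.g. GCN) allows effectively unlimited
        // descriptors; Tier 1/2 impose the per-stage limits quoted above.
        std::printf("Resource binding tier: %d\n", options.ResourceBindingTier);

        // Heap tier: Tier 2 lets buffers, textures and render targets share
        // a heap; Tier 1 requires separate heaps.
        std::printf("Resource heap tier: %d\n", options.ResourceHeapTier);

        // The "stencil reference value from a pixel shader" feature.
        std::printf("PS-specified stencil ref: %s\n",
                    options.PSSpecifiedStencilRefSupported ? "yes" : "no");
    }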

 

26 minutes ago, Razor01 said:

This is pretty much a given at this point.  nV would have have stated there is a magnitude of 2 to 3 times the performance increase for their cards over current cards.

We don't actually know what the performance gain is yet; Nvidia hasn't said anything as far as I've seen. I'd be interested to see Pascal vs Volta performance though.

 

Also, you put your first comment in the quote box :P. Anyway, yes, they still don't have Stencil Ref, and Tier 1 Resource Heap is a bit useless; there is no gain to having it, so you really want Tier 2, which Nvidia will most likely have in whatever comes after Volta.

 

Edit:

Found it, couldn't find it before. Nvidia did give a performance increase indication.

Quote

And while strict performance numbers aren’t being disclosed, NVIDIA stated that real time ray tracing with RTX on Volta would be “integer multiples faster” than with DXR on older hardware.

https://www.anandtech.com/show/12546/nvidia-unveils-rtx-technology-real-time-ray-tracing-acceleration-for-volta-gpus-and-later


9 minutes ago, leadeater said:

Those are all optional features, and the tiers tell you how compliant the hardware is with them. If the column says no, then it can't do it.

 

As for applications and games, that's not really an issue; most of the important optional features for DX12_1 are driver- and hardware-related and don't need explicit programming to be utilized, though optimizing for them is always best.

 

 

A graphics card can be certified as compliant with DX12_1 and not support all of those optional features or not be Tier 2 or 3, e.g. Pascal, which is DX12_1 compliant.

 

We don't actually know what the performance gain is yet; Nvidia hasn't said anything as far as I've seen. I'd be interested to see Pascal vs Volta performance though.

 

Also, you put your first comment in the quote box :P. Anyway, yes, they still don't have Stencil Ref, and Tier 1 Resource Heap is a bit useless; there is no gain to having it, so you really want Tier 2, which Nvidia will most likely have in whatever comes after Volta.

 

Ah yeah fixed that thx!

 

I think nV is going to market the shit out of DXR and RTX, lol. I don't see them holding back; remember DX 9.0c with the 68xx series, lol.

 

Anyhow, this is Linus Torvalds' take on nV and their way of doing business:

[embedded video]

 

nV might do many unscrupulous things, but they aren't doing illegal things.

 

It sucks, but business is war.


4 minutes ago, Razor01 said:

I think nV is going to market the shit out of DXR and RTX, lol. I don't see them holding back; remember DX 9.0c with the 68xx series, lol.

Oh I bet they will; they need consumer GPUs to back that up first though. Not all of us can buy Titan Vs to look at all the prettiness lol.

 

Interestingly though, I was sure that for the gaming release of Volta all the Tensor cores were going to be removed, being as they are AI/TensorFlow focused, but RTX hooks into them for denoising.

 

Quote

Meanwhile NVIDIA also mentioned that they have the ability to leverage Volta's tensor cores in an indirect manner, accelerating ray tracing by doing AI denoising, where a neural network could be trained to reconstruct an image using fewer rays, a technology the company has shown off in the past at GTC. RTX itself was described as productizing certain hardware and software algorithms, but is distinct from DXR, the overlying general API.

 https://www.anandtech.com/show/12546/nvidia-unveils-rtx-technology-real-time-ray-tracing-acceleration-for-volta-gpus-and-later

 

I would have previously bet my car that the tensor cores were going to be totally gone, heh. Guess they'll just cut them back a lot; they make up a huge amount of the GV100 die, and that's way, way too expensive for 2070/2080 cards.


1 hour ago, leadeater said:

 

The good thing for the competition is that AI denoising is not something new that Nvidia pioneered, so they might have input to implement it well enough.

Did Nvidia announce when they will publish documentation about exactly how it works?

Personally I'm more curious about whether they use machine learning approaches elsewhere than denoising/AA. (Those are quite boring; they're just a bit computationally intensive and could even be done by a dedicated hardware chip, to be fair, since you may just need the frame buffer result to be denoised with a pretrained network elsewhere. Maybe those tensor cores will be put on such a chip separately so it works as a different unit, since they could use too much memory and force cache misses for the rendering part. Or maybe they will be put with the rest to be part of the interleaving process to keep the cores busy; that sounds more like the Nvidia way to go, with AMD probably using a dedicated denoising part with IF to communicate with the rest.)


1 minute ago, laminutederire said:

 

Not yet, maybe more on it at GDC.

 

I think they will cut the tensor cores down like they do the DP units: 1/32 the amount, something like that.


On 3/19/2018 at 10:03 PM, asus killer said:

 

Even in the worst scenario of no ROG for AMD, I really think this is not as bad for AMD as making worse products than the competition; that's what really gets them down. And I know Ryzen is good but Coffee Lake is better, Vega is good but the 1080 Ti is better.

 

Why are so many people in this discussion trying to bring up the fact that Nvidia's current high-end GPUs are better than AMD's?

 

Yes they are!

Yes it's a problem for AMD.

Yes AMD should do better next generation on the high end.

 

All of that is true. But none of it is relevant to this discussion.

 

If lots of people think that GPP is unethical, then that's what they think. The fact that Vega is inferior to the 1080 Ti does not change that opinion.

 

Also, an action by Nvidia cannot be justified by claiming that AMD has bigger problems. AMD, like all large companies, has a multitude of problems; the world is complex, and GPP is only one of them. A company's performance in the market is affected by a combination of factors such as product quality, promotion, distribution, pricing, support and competition. Everything has an impact, and GPP does too. It cannot be dismissed by bringing up a more important factor such as product quality or supply issues.

