
NVIDIA To Devs: Compute/Graphics Toggle Is A Heavyweight Switch

Mr_Troll

No, but they know they'll be the target of antitrust litigation if they don't give this warning, since AMD has no support for 12.1 features.

 

You cannot invoke antitrust over using features of a standardized API, come on.

 

No, even Nvidia knows this is a super niche feature that only Maxwell can use, which would screw over all Kepler (600/700 series) users. No dev is ever going to make a DX12 game right now that requires the 12.1 feature set to work. That would be bonkers, as their user base and market share would be minuscule. This has nothing to do with antitrust or anti-competitive behaviour.

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


oh noes!

not full DX12 hardware that can't do full DX12 specs  <_<

How many times do I have to repeat myself when I say Async Compute is NOT a DX12 feature?

 

http://www.extremetech.com/extreme/213519-asynchronous-shading-amd-nvidia-and-dx12-what-we-know-so-far/2

 

 

 

There’s been a significant amount of confusion over what this difference in asynchronous compute means for gamers and DirectX 12 support. Despite what some sites have implied, DirectX 12 does not require any specific implementation of asynchronous compute. 

 

https://en.wikipedia.org/wiki/Direct3D#Direct3D_12_levels

 

Notice any "ACE or Async Compute" under the mandatory features? No, because it is not a mandatory feature. Directx 12 just so happened to unlock a useful feature that was not available on DX11, and it was something AMD used with Mantle (DX12 is said to have dipped into AMD's Mantle bag of tricks)

 

https://en.wikipedia.org/wiki/Mantle_(API)#GPU-bound_scenarios

 

 

 

  • Reduction of command buffers submissions
  • Explicit control of resource compression, expands and synchronizations
  • Asynchronous DMA queue for data uploads independent from the graphics engine
  • Asynchronous compute queue for overlapping of compute and graphics workloads
  • Data formats optimizations via flexible buffer/image access
  • Advanced Anti-Aliasing features for MSAA/EQAA optimizations[4][9]
  • Native multi-GPU support[4]

So, what have we learned today? Async Compute is NOT a required DX12 feature. If your GPU claims to be DX12 capable but does not support Async Compute, that claim is still not a lie. If your GPU supports Async Compute, good: you will see a performance boost in games that use it. Simple as that.
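
To put "using Async Compute" in concrete terms: at the API level, a game just creates a second command queue of compute type alongside its graphics queue, and whether the two actually overlap on the GPU is left to the hardware and driver. A minimal sketch, assuming a valid ID3D12Device* (helper and variable names are illustrative only):

```cpp
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Sketch: one DIRECT (graphics) queue and one COMPUTE queue on the same device.
// D3D12 only exposes the opportunity for overlap; whether compute and graphics
// work actually runs concurrently is up to the GPU and driver.
void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& graphicsQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue,
                  ComPtr<ID3D12Fence>& fence)
{
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;   // graphics + compute + copy
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&graphicsQueue));

    D3D12_COMMAND_QUEUE_DESC cmpDesc = {};
    cmpDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute + copy only
    device->CreateCommandQueue(&cmpDesc, IID_PPV_ARGS(&computeQueue));

    // A fence synchronizes the two queues where they share results, e.g.
    // computeQueue->Signal(fence.Get(), v) then graphicsQueue->Wait(fence.Get(), v).
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
}
```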

 

EDIT: @zMeul provided evidence showing that Async Compute might be seen as a mandatory feature in DX12. While it is not directly stated as a mandatory feature, it is heavily implied by MS. Therefore, the wall of text above this edit should be considered incorrect until proven otherwise. 

 

http://linustechtips.com/main/topic/461881-nvidia-to-devs-computegraphics-toggle-is-a-heavyweight-switch/?p=6198828

 

 

 

I still don't see anywhere in this link that it is a mandatory feature, but seeing as it is from MS, and they took the time to include it among their other DX12 guides, I'll accept it.

 

Thanks for providing this information; I genuinely appreciate it when I am wrong and have something to learn from it. Sadly, deleting my previous post would remove precious context, so I'll edit it to link to this post, to show the exact point at which I eat my words.

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


You cannot invoke antitrust over using features of a standardized API, come on.

 

No, even Nvidia knows this is a super niche feature that only Maxwell can use, which would screw over all Kepler (600/700 series) users. No dev is ever going to make a DX12 game right now that requires the 12.1 feature set to work. That would be bonkers, as their user base and market share would be minuscule. This has nothing to do with antitrust or anti-competitive behaviour.

Not true. Intel got reamed for it back when it released its own math kernel library as a substitute for the standard C++ math library.

 

It has everything to do with it. If you're the chief player in the market and you get developers using your stuff, you get more GPU customers who want to play games optimized for your stuff while simultaneously locking out the competition for that generation, regardless of it being a standard API. The FTC would be on Nvidia's tail within a week. If Nvidia encourages people to use proprietary features at this point (12.1 being exclusive to Nvidia right now and falling under the definition for legal purposes) it is inviting antitrust litigation, end of discussion.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


It would probably increase performance on any GPU to follow those steps.

 

Can you back this up? Because it doesn't seem to be the case.


Not true. Intel got reamed for it back when it released its own math kernel library as a substitute for the standard C++ math library.

 

It has everything to do with it. If you're the chief player in the market and you get developers using your stuff, you get more GPU customers who want to play games optimized for your stuff while simultaneously locking out the competition for that generation, regardless of it being a standard API. The FTC would be on Nvidia's tail within a week. If Nvidia encourages people to use proprietary features at this point (12.1 being exclusive to Nvidia right now and falling under the definition for legal purposes) it is inviting antitrust litigation, end of discussion.

What do you think NVIDIA does with those thick NDA contracts?

I'm pretty sure that if any GameWorks contract were to leak, a lot of companies and NVIDIA customers would be receiving some fat compensation.


Well Nvidia is telling devs how to best optimize for Maxwell. Which is ok. No surprises.

I'm sure AMD tells devs how to optimize for GCN.

 

One thing the conspiracy theorist in me is a bit worried about is the Nvidia-partnered game ARK: Survival Evolved and its DX12 patch. It was supposed to come out a month ago and be the first DX12 game. It's also important as an Unreal Engine 4 game. But then, when the AOTS DX12 benchmark controversy blew up, they mysteriously delayed the patch on the final day, citing unspecified driver issues. Which is really strange, because the rest of the game already runs like shit and they had no problem releasing it... They have gone quiet since.



Well Nvidia is telling devs how to best optimize for Maxwell. Which is ok. No surprises.

I'm sure AMD tells devs how to optimize for GCN.

 

One thing the conspiracy theorist in me is a bit worried about is the Nvidia-partnered game ARK: Survival Evolved and its DX12 patch. It was supposed to come out a month ago and be the first DX12 game. But then, when the AOTS DX12 benchmark controversy blew up, they mysteriously delayed the patch on the final day, citing unspecified driver issues. Which is really strange, because the rest of the game already runs like shit and they had no problem releasing it... They have gone quiet since.

The Ark devs were pretty active on Reddit, and they completely dodged the DX12 subject after spreading the hype about the DX12 release on social media... pardon my expression, but these NVIDIA actions are so fucking disgusting I can't even.


I still don't see anywhere in this link that it is a mandatory feature, but seeing as it is from MS, and they took the time to include it among their other DX12 guides, I'll accept it.

 

Thanks for providing this information; I genuinely appreciate it when I am wrong and have something to learn from it. Sadly, deleting my previous post would remove precious context, so I'll edit it to link to this post, to show the exact point at which I eat my words.

who said anything about being mandatory?! or optional for that matter

your reply implied that async compute wasn't even part of DX12 - the link I posted shows your assumption was wrong

the fact of the matter is that DX12 is still not in its final form - multi-GPU support, in its various forms, is still to be seen

the job to clarify all this BS is MS', not AMD's, not nVidia's and not Intel's - MS classifies the capabilities of a GPU


who said anything about being mandatory?! or optional for that matter

your reply implied that async compute wasn't even part of DX12 - the link I posted shows your assumption was wrong

the fact of the matter is that DX12 is still not in its final form - multi-GPU support, in its various forms, is still to be seen

the job to clarify all this BS is MS', not AMD's, not nVidia's and not Intel's - MS classifies the capabilities of a GPU

No, read that wall of text again. I said it was not a DX12 feature (in that it was not listed on the mandatory or optional feature lists) and that it was made possible by DX12, due to the bits that came from Mantle. I stated, with proof, that the parts involving Async Compute came from Mantle and were absorbed by DX12, therefore allowing DX12 to take advantage of the same Mantle features. I probably should have worded the "not a feature" part a little better, but I did clarify my intent later on in that very same post.

 

Either way, I already admitted I was wrong, and I provided an edit to that previous post linking to your evidence. You beat me fair and square, sir.

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


No, read that wall of text again. I said it was not a DX12 feature (in that it was not listed on the mandatory or optional feature lists) and that it was made possible by DX12, due to the bits that came from Mantle

this is a lie - I would like you to source this from AMD's or MS' "mouths" directly and not from rumor mills

something you need to understand: both MS and SONY had specific HW requirements for the APUs AMD would produce - at that point in time, MANTLE wasn't even a thing

MANTLE became a thing when AMD, SONY and EA (DICE) started development of BF4 - the low-level API for the PS4, GNM, is even lower level than MANTLE or even DX12 (!!!!!!)

A more crucial issue is that, while the PS4 toolchain is designed to be familiar to those working on PC, the new Sony hardware doesn't use the DirectX API, so Sony has supplied two of their own.

"The graphics APIs are brand new - they don't have any legacy baggage, so they're quite clean, well thought-out and match the hardware really well," says Reflections' expert programmer Simon O'Connor.

"At the lowest level there's an API called GNM. That gives you nearly full control of the GPU. It gives you a lot of potential power and flexibility on how you program things. Driving the GPU at that level means more work."

http://www.eurogamer.net/articles/digitalfoundry-how-the-crew-was-ported-to-playstation-4

We have some fine know-it-all specimens in this thread.

 

Keep going guys. Learn me some more stuff about development that you know nothing about.


this is a lie - I would like you to source this from AMD's or MS' "mouths" directly and not from rumor mills

something you need to understand: both MS and SONY had specific HW requirements for the APUs AMD would produce - at that point in time, MANTLE wasn't even a thing

MANTLE became a thing when AMD, SONY and EA (DICE) started development of BF4 - the low-level API for the PS4, GNM, is even lower level than MANTLE or even DX12 (!!!!!!)

 

Here you go, from AMD rep R. Huddy:

 

 

DX 12 brings a lot of the goodness that Mantle brought. We had a lot of conversations with Microsoft about what we were doing with Mantle, and in those conversations, they said, 'OK, if you really can solve this problem of building a better throughput system that runs on Windows, then we'd like to take that to Windows as well and we'll make that the extra software functionality that comes in DX 12.' So that's how DX 12 has come about.

Source: http://www.techradar.com/news/computing/pc/amd-s-richard-huddy-on-the-state-of-pc-graphics-mantle-2-and-apus-1255575/2


@bogus
your AMD guy is full of shit! MS rejected MANTLE as the API for the XB1 because MS was already developing DirectX 11.X - a low-level API for the XB1
DX12 is a direct evolution of DX11.X - the specific features DX11.X has are now part of DX12

source: http://blogs.windows.com/buildingapps/2013/10/14/raising-the-bar-with-direct3d/

---

AMD can only claim two things:

  1. MANTLE was the push MS needed to bring DX11.X to Windows PCs, as DX12
  2. GLNext (Vulkan) is the only API that was based on MANTLE

Not true. Intel got reamed for it back when it released its own math kernel library as a substitute for the standard C++ math library.

 

It has everything to do with it. If you're the chief player in the market and you get developers using your stuff, you get more GPU customers who want to play games optimized for your stuff while simultaneously locking out the competition for that generation, regardless of it being a standard API. The FTC would be on Nvidia's tail within a week. If Nvidia encourages people to use proprietary features at this point (12.1 being exclusive to Nvidia right now and falling under the definition for legal purposes) it is inviting antitrust litigation, end of discussion.

 

That post makes no sense. Of course, making a proprietary kernel library and forcing devs away from standardized libraries can be seen as abuse of market position, but how is that relevant here?

 

It's not "your stuff" as in Nvidia's. It's a feature set in an open standard. AMD could probably emulate it or outright support it in Arctic Island cards. So far it's en exclusive feature to NVidia Maxwell (and only Maxwell) cards, and therefor a small niche.

From what I've read, ROV and similar 12.1 features do not seem to be very important, especially as most games will rely heavily on async compute due to the consoles. Again, sure, games will have specific code paths for NVidia, but that should not have any negative consequences for AMD.
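
For what it's worth, ROV and conservative rasterization are also exposed as individual caps, so a renderer can branch on them per GPU instead of requiring feature level 12_1 outright. A minimal sketch, assuming a valid ID3D12Device* (the helper name is made up):

```cpp
#include <windows.h>
#include <d3d12.h>

// Sketch: query the optional caps that make up most of the "12.1" feature set
// (Rasterizer Ordered Views and Conservative Rasterization) so a game can take
// a vendor-specific path only where they exist.
bool SupportsRovAndConservativeRaster(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    if (FAILED(device->CheckFeatureSupport(
            D3D12_FEATURE_D3D12_OPTIONS, &opts, sizeof(opts))))
    {
        return false;
    }
    return opts.ROVsSupported &&
           opts.ConservativeRasterizationTier !=
               D3D12_CONSERVATIVE_RASTERIZATION_TIER_NOT_SUPPORTED;
}
```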

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


Still waiting for DX12 games. 

 


Corsair 760T White | Asus X99 Deluxe | Intel i7-5930k @ 4.4ghz | Corsair H110 | G.Skill Ripjawz 2400mhz | Gigabyte GTX 970 Windforce G1 Gaming (1584mhz/8000mhz) | Corsair AX 760w | Samsung 850 pro | WD Black 1TB | IceModz Sleeved Cables | IceModz RGB LED pack

 

 


What do you think NVIDIA does with those thick NDA contracts?

I'm pretty sure that if any GameWorks contract were to leak, a lot of companies and NVIDIA customers would be receiving some fat compensation.

That would require you to prove what is in the contract. It's not the same as blatantly (publicly) encouraging the use of proprietary tech.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


That would require you to prove what is in the contract. It's not the same as blatantly (publicly) encouraging the use of proprietary tech.

In short, we would need TWO contracts to leak... including Nvidia reps' signatures...


That post makes no sense. Of course, making a proprietary kernel library and forcing devs away from standardized libraries can be seen as abuse of market position, but how is that relevant here?

It's not "your stuff" as in Nvidia's. It's a feature set in an open standard. AMD could probably emulate it or outright support it in Arctic Island cards. So far it's en exclusive feature to NVidia Maxwell (and only Maxwell) cards, and therefor a small niche.

From what I've read, ROV and similar 12.1 features do not seem to be very important, especially as most games will rely heavily on async compute due to the consoles. Again, sure, games will have specific code paths for NVidia, but that should not have any negative consequences for AMD.

There was no forcing at all. It was just a better implementation of the same library functions. It's the same thing in different clothing.

Maxwell is the dominant market architecture, not niche at all. Regardless of AMD's future ability to support it, the fact that they can't now, combined with Nvidia's capitalization on it, would cause undue harm to AMD and its ability to compete by the letter of U.S. law. Nvidia can't publicly encourage that.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


There was no forcing at all. It was just a better implementation of the same library functions. It's the same thing in different clothing.

Maxwell is the dominant market architecture, not niche at all. Regardless of AMD's future ability to support it, the fact that they can't now, combined with Nvidia's capitalization on it, would cause undue harm to AMD and its ability to compete by the letter of U.S. law. Nvidia can't publicly encourage that.

From what I understood of the whole library thing... the DX11 libraries are open to every dev. If Nvidia made an optimized version, then good for them. But it is essentially the same libraries.


@Mr_Troll Fix your post, or I'll lock it for not following the posting guidelines.

What exact guidelines did I not follow?

 

1. Your thread must include some original input and what your personal opinion on the topic is. - did that

2. Your thread must include a link to at least one reputable source - did that

3. Your thread should also include quotes from the cited sources - did that

4. The title of your thread must be relevant to the topic - it is

Intel Core i7 7800x @ 5.0 Ghz with 1.305 volts (really good chip), Mesh OC @ 3.3 Ghz, Fractal Design Celsius S36, Asrock X299 Killer SLI/ac, 16 GB Adata XPG Z1 OCed to  3600 Mhz , Aorus  RX 580 XTR 8G, Samsung 950 evo, Win 10 Home - loving it :D

Had a Ryzen before ... but  a bad bios flash killed it :(

MSI GT72S Dominator Pro G - i7 6820HK, 980m SLI, Gsync, 1080p, 16 GB RAM, 2x128 GB SSD + 1TB HDD, Win 10 home

 


What exact guidelines did I not follow?

 

1. Your thread must include some original input and what your personal opinion on the topic is. - did that

2. Your thread must include a link to at least one reputable source - did that

3. Your thread should also include quotes from the cited sources - did that

4. The title of your thread must be relevant to the topic - it is

 

He wants you to use the quote function on the quotes (the little speech bubble next to the Twitter icon), so the text is put into a box, like your post is inside of my post here.

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


He wants you to use the quote function on the quotes (the little speech bubble next to the Twitter icon), so the text is put into a box, like your post is inside of my post here.

oh ok... going to edit the post right now

Intel Core i7 7800x @ 5.0 Ghz with 1.305 volts (really good chip), Mesh OC @ 3.3 Ghz, Fractal Design Celsius S36, Asrock X299 Killer SLI/ac, 16 GB Adata XPG Z1 OCed to  3600 Mhz , Aorus  RX 580 XTR 8G, Samsung 950 evo, Win 10 Home - loving it :D

Had a Ryzen before ... but  a bad bios flash killed it :(

MSI GT72S Dominator Pro G - i7 6820HK, 980m SLI, Gsync, 1080p, 16 GB RAM, 2x128 GB SSD + 1TB HDD, Win 10 home

 

