
AMD: No Such Thing As 'Full Support' For DX12 Today

HKZeroFive

The Xbone uses a heavily customized version of DX11, dubbed DX11.X, which includes features from DX12 such as asynchronous compute. Phil Spencer made it pretty clear that DX12 won't benefit the Xbone, and my understanding is that the reason it wouldn't is that DX11.X already includes the feature that's needed (async compute). A good article about the history, explaining AMD's long game, is here.

 

This is largely true. The Xbone is already using a low-level implementation of DX11.X, and while DX12 might bring a few goodies that developers can use (probably most useful when porting occurs), the consoles have already enjoyed the overall benefits that low-level access brings.

 

Which makes this all the more frustrating in some areas. Consoles are already being developed from the off with low-level access in mind, with no legacy DX9/DX10 to give a shit about. At the very least, when DX12 rolls out in full to the Xbone, they'll be able to dump DX11 entirely for first-party games too if they want to, pushing things further along.

 

If only PC developers were so gung-ho about dropping old DX versions and just moving ahead, even if it meant pissing off users with outdated hardware. I've said this before, but you can't clamour that consoles hold back the development of games when PC developers are still catering to DX9/DX10 and not leveraging the substantial increases in power of today's hardware and APIs. Old stuff needs to be left behind for progress to occur, and sometimes it's going to cost people a pretty penny.


Posted this in General, but Scott Wasson and David Kanter have a brief discussion of async compute and AotS.

 

 

 

God help anyone who bought a current Nvidia card for VR.

I am impelled not to squeak like a grateful and frightened mouse, but to roar...


Posted this in General, but Scott Wasson and David Kanter have a brief discussion of async compute and AotS.

 

 

 

God help anyone who bought a current Nvidia card for VR.

 

 

I'm disgusted watching this video, because Scott Wasson and David Kanter are laughing at owners of 980-or-better Nvidia cards. Regardless of whether people are faking it, planning to return their cards, or have already returned a 980 or better, it's a consumer's right to be upset about being misled. Gamers buy expensive video cards to last a certain amount of time. Nvidia said Maxwell has DX12 support, and if it doesn't (which still hasn't been confirmed either way), consumers who were expecting DX12 support have every reason to complain.

 

Who are they to say who should or shouldn't be upset? Scott Wasson said pretty much the same shit when the 970 had its 3.5 GB problem.

Test ideas by experiment and observation; build on those ideas that pass the test, reject the ones that fail; follow the evidence wherever it leads and question everything.


I'm disgusted watching this video, because Scott Wasson and David Kanter are laughing at owners of 980-or-better Nvidia cards. Regardless of whether people are faking it, planning to return their cards, or have already returned a 980 or better, it's a consumer's right to be upset about being misled. Gamers buy expensive video cards to last a certain amount of time. Nvidia said Maxwell has DX12 support, and if it doesn't (which still hasn't been confirmed either way), consumers who were expecting DX12 support have every reason to complain.

Who are they to say who should or shouldn't be upset? Scott Wasson said pretty much the same shit when the 970 had its 3.5 GB problem.

Consumers expecting anything beyond what's on the box and in the manuals are fooling themselves. Nvidia is a business, not a charity. If anyone should have employed planned obsolescence, it's AMD: lower prices at higher performance, with only the hardware needed for a given API, would have led to higher-volume sales. Instead, AMD looks cheap and uncompetitive, and meanwhile its five-year-old cards are aging nicely, hurting its own sales. At some point, consumers need to stop being whiny children and grow up. Is Nvidia pulling a dick move? Yes. Vote with your wallets and stop bogging down discussion threads and going on an Internet crusade. It's truly aggravating at this point.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


An Oxide developer responded over at Overclock.net. Here is the latest from today; I've highlighted something of interest below, and added a couple of rough code sketches of the draw/dispatch and residency points after the quote.

 

"Wow, lots more posts here, there is just too many things to respond to so I'll try to answer what I can.

/inconvenient things I'm required to ask or they won't let me post anymore
Regarding screenshots and other info from our game, we appreciate your support but please refrain from disclosing these until after we hit early access. It won't be long now.
/end

Regarding batches, we use the term batches just because we are counting both draw calls and dispatch calls. Dispatch calls are compute shaders; draw calls are normal graphics shaders. Though sometimes people call dispatches draw calls, they are different, so we thought we'd avoid the confusion by calling everything a batch.

Regarding CPU load balancing on D3D12, that's entirely the application's responsibility. So if you see a case where it's not load balancing, it's probably the application, not the driver/API. We've done some additional tuning of the engine even in the last month and can clearly see use cases where we can load 8 cores at maybe 90-95% load. Getting to 90% on an 8-core machine makes us really happy. Keeping our application tuned to scale like this is definitely an ongoing effort.

Additionally, hitches and stalls are largely the application's responsibility under D3D12. In D3D12, essentially everything that could cause a stall has been removed from the API. For example, the pipeline objects are designed such that the dreaded shader recompiles won't ever have to happen. We also have precise control over how long a graphics command is queued up. This is pretty important for VR applications.

Also keep in mind that the memory model for D3D12 is completely different from D3D11, at an OS level. I'm not sure if you can honestly compare things like memory load against each other. In D3D12 we have more control over residency, and we may, for example, intentionally keep something unused resident so that there is no chance of a micro-stutter if that resource is needed. There is no reliable way to do this in D3D11. Thus, comparing memory residency between the two APIs may not be meaningful, at least not until everyone's had a chance to really tune things for the new paradigm.

Regarding SLI and Crossfire situations, yes, support is coming. However, those options in the ini file probably do not do what you think they do, just FYI. Some posters here have been remarkably perceptive about the different multi-GPU modes that are coming, and let me just say that we are looking beyond just the standard Crossfire and SLI configurations of today. We think that multi-GPU situations are an area where D3D12 will really shine (once we get all the kinks ironed out, of course). I can't promise when this support will be unveiled, but we are committed to doing it right.

Regarding async compute, a couple of points on this. First, though we are the first D3D12 title, I wouldn't hold us up as the prime example of this feature. There are probably better demonstrations of it. This is a pretty complex topic, and fully understanding it requires a significant understanding of the particular GPU in question that only an IHV can provide. I certainly wouldn't hold Ashes up as the premier example of this feature.

We actually just chatted with Nvidia about async compute; indeed the driver hasn't fully implemented it yet, but it appeared as though it had. We are working closely with them as they fully implement async compute. We'll keep everyone posted as we learn more.

Also, we are pleased that D3D12 support on Ashes should be functional on Intel hardware relatively soon (actually, it's functional now; it's just a matter of getting the right driver out to the public).

Thanks!"

 

http://www.overclock.net/t/1569897/various-ashes-of-the-singularity-dx12-benchmarks/2130#post_24379702
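
To make the batch terminology above concrete: in D3D12 a "draw call" and a "dispatch call" are literally two different methods on a command list, and Oxide is just counting both. This isn't Oxide's code, only a bare-bones sketch of the raw API calls; assume the root signature, pipeline states and bindings are set up elsewhere.

```cpp
#include <d3d12.h>

// Minimal sketch of the two call types counted as "batches":
// a graphics draw and a compute dispatch, recorded on the same command list.
// Illustration only, not Oxide's engine code.
void RecordBatches(ID3D12GraphicsCommandList* cmdList)
{
    // A "draw call": runs the bound graphics pipeline (vertex/pixel shaders)
    // over 3 vertices, 1 instance.
    cmdList->DrawInstanced(/*VertexCountPerInstance*/ 3,
                           /*InstanceCount*/ 1,
                           /*StartVertexLocation*/ 0,
                           /*StartInstanceLocation*/ 0);

    // A "dispatch call": runs the bound compute shader with an 8x8x1 grid
    // of thread groups.
    cmdList->Dispatch(/*ThreadGroupCountX*/ 8,
                      /*ThreadGroupCountY*/ 8,
                      /*ThreadGroupCountZ*/ 1);
}
```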
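
The residency point is also easy to illustrate: under D3D12 the application can explicitly pin memory as resident so it can't be paged out mid-frame, which is exactly the control the Oxide post says D3D11 never exposed. Again a rough sketch, not their code; `pageable` is just a stand-in for whatever heap or resource you want to keep around.

```cpp
#include <d3d12.h>

// Sketch of explicit residency control in D3D12. 'pageable' stands in for
// any ID3D12Pageable object (heap, committed resource, etc.) created
// elsewhere; error handling is omitted for brevity.
void PinWhileNeeded(ID3D12Device* device, ID3D12Pageable* pageable)
{
    ID3D12Pageable* objects[] = { pageable };

    // Keep the allocation resident even while unused, so touching it later
    // can't trigger a paging stall (the micro-stutter Oxide mentions).
    device->MakeResident(1, objects);

    // ... render frames that may or may not reference it ...

    // When it's genuinely no longer needed, hand the memory back to the OS.
    device->Evict(1, objects);
}
```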

CPU: Intel Core i7 7820X Cooling: Corsair Hydro Series H110i GTX Mobo: MSI X299 Gaming Pro Carbon AC RAM: Corsair Vengeance LPX DDR4 (3000MHz/16GB 2x8) SSD: 2x Samsung 850 Evo (250/250GB) + Samsung 850 Pro (512GB) GPU: NVidia GeForce GTX 1080 Ti FE (W/ EVGA Hybrid Kit) Case: Corsair Graphite Series 760T (Black) PSU: SeaSonic Platinum Series (860W) Monitor: Acer Predator XB241YU (165Hz / G-Sync) Fan Controller: NZXT Sentry Mix 2 Case Fans: Intake - 2x Noctua NF-A14 iPPC-3000 PWM / Radiator - 2x Noctua NF-A14 iPPC-3000 PWM / Rear Exhaust - 1x Noctua NF-F12 iPPC-3000 PWM


So basically Async Compute is software-driven?

Test ideas by experiment and observation; build on those ideas that pass the test, reject the ones that fail; follow the evidence wherever it leads and question everything.


So basically Async Compute is software-driven?

 

Looks like it for Nvidia, as their compute queue can't do anything while a draw call is being processed. The very definition of async compute is compute work executing concurrently with, rather than serialized behind, the graphics work from draw calls, so Nvidia must be mimicking the feature somehow through drivers.
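
For what it's worth, expressing the "real" thing in D3D12 is straightforward: you create a second command queue of type COMPUTE, feed it compute work while the graphics queue chews through draw calls, and synchronize the two with a fence. Whether the GPU actually overlaps the two queues is entirely up to the hardware and driver, which is the whole argument here. A rough sketch of my own, not from any benchmark:

```cpp
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Sketch of async compute in D3D12: a dedicated compute queue running
// alongside the graphics queue, synchronized with a fence. Whether the two
// actually execute concurrently depends on the GPU/driver.
void SubmitAsyncCompute(ID3D12Device* device,
                        ID3D12CommandQueue* graphicsQueue,
                        ID3D12CommandList* graphicsWork,
                        ID3D12CommandList* computeWork)
{
    // A second queue dedicated to compute command lists.
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;

    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));

    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

    // Submit the compute work on its own queue and signal a fence when done.
    computeQueue->ExecuteCommandLists(1, &computeWork);
    computeQueue->Signal(fence.Get(), 1);

    // Independent graphics work can run concurrently on the graphics queue...
    graphicsQueue->ExecuteCommandLists(1, &graphicsWork);

    // ...and any later graphics work that consumes the compute results makes
    // the graphics queue wait on the GPU (not the CPU) for that fence first.
    graphicsQueue->Wait(fence.Get(), 1);
}
```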

R9 3900XT | Tomahawk B550 | Ventus OC RTX 3090 | Photon 1050W | 32GB DDR4 | TUF GT501 Case | Vizio 4K 50'' HDR

 


So basically Async Compute is software-driven?

 

 

Sounds like it. I've learned a lot of new things about how GPUs tick reading threads on this story, and probably fully comprehended almost none of it. But one thing that stuck out is that one of the ways Maxwell got so incredibly efficient was by removing a bunch of power-consuming hardware schedulers from the GPU. That was not an issue in DX11, where the driver could be written to handle much of the desired workflow, but apparently it hurts Nvidia with async-compute-style workloads; there was a hidden cost to Maxwell's efficiency gain from dropping those hardware schedulers. I never heard any of this before. All the tech review sites I went to just marveled at the power consumption of Maxwell GPUs while pointing in horror at Hawaii and other AMD GPUs in comparison. There was zero mention of trade-offs, of what was lost or gained by each approach; it was very superficial analysis. And now we know more about why the chips were built the way they were.

I am impelled not to squeak like a grateful and frightened mouse, but to roar...


http://www.guru3d.com/news-story/nvidia-will-fully-implement-async-compute-via-driver-support.html 

 

Looks like Nvidia is sorting it out.

 

Looks like we can now close this thread and stop the silly arguments.

 


System Specs:

CPU: Ryzen 7 5800X

GPU: Radeon RX 7900 XT 

RAM: 32GB 3600MHz

HDD: 1TB Sabrent NVMe -  WD 1TB Black - WD 2TB Green -  WD 4TB Blue

MB: Gigabyte  B550 Gaming X- RGB Disabled

PSU: Corsair RM850x 80 Plus Gold

Case: BeQuiet! Silent Base 801 Black

Cooler: Noctua NH-DH15

 

 

 

