
[Updated] Oxide responds to AotS Conspiracies, Maxwell Has No Native Support For DX12 Asynchronous Compute

after reading the entire thread... looks like DX 12 and Vulkan are basically renamed versions of Mantle...good job AMD...  lol

AMD Rig - (Upgraded): FX 8320 @ 4.8 Ghz, Corsair H100i GTX, ROG Crosshair V Formula, Ghz, 16 GB 1866 Mhz Ram, Msi R9 280x Gaming 3G @ 1150 Mhz, Samsung 850 Evo 250 GB, Win 10 Home

(My first Intel + Nvidia experience  - recently bought ) : MSI GT72S Dominator Pro G ( i7 6820HK, 16 GB RAM, 980M SLI, GSync, 1080p , 2x128 GB SSD + 1TB HDD... FeelsGoodMan


after reading the entire thread... looks like DX 12 and Vulkan are basically renamed versions of Mantle...good job AMD...  lol

You must have missed the last few...

Please avoid feeding the argumentative narcissistic academic monkey.

"the last 20 percent – going from demo to production-worthy algorithm – is both hard and is time-consuming. The last 20 percent is what separates the men from the boys" - Mobileye CEO


after reading the entire thread... looks like DX 12 and Vulkan are basically renamed versions of Mantle...good job AMD...  lol

Vulkan is all but confirmed, since it has been publicly stated that AMD gave all their documentation to Khronos and said they have permission to copy the entire design if they wish.


I wasn't trying to insult anyone directly or otherwise. I just said to go the diplomatic route and properly balance it all out.

 

LOL maybe it was this  :lol:

 

 

 

It also makes a pretty compelling argument for DX12 basically being repackaged Mantle. Regardless of what Microsoft said about having worked on it for years, it seems pretty convenient that AMD was just ready and Nvidia is not.

i5 2400 | ASUS RTX 4090 TUF OC | Seasonic 1200W Prime Gold | WD Green 120gb | WD Blue 1tb | some ram | a random case

 


It's funny reading really old articles that still reference Windows 9. http://www.extremetech.com/gaming/177407-microsoft-hints-that-directx-12-will-imitate-and-destroy-amds-mantle

R9 3900XT | Tomahawk B550 | Ventus OC RTX 3090 | Photon 1050W | 32GB DDR4 | TUF GT501 Case | Vizio 4K 50'' HDR

 


[removed]

 

True story - even though the "low overhead" part took a lot of learning from Mantle, and for some reason it's what gets marketed the most, there's more to DX12 than that.

 

Sometimes we agree, Patrick.


I'm just gonna throw this out there... Nvidia stated that the 970 and 980 were fully compliant with DirectX 12. It's early and I'm just speculating here, but what happens if the cards aren't compatible with the compute aspect? Would that make them not fully compliant?

 

Ryzen Ram Guide

 

My Project Logs   Iced Blood    Temporal Snow    Temporal Snow Ryzen Refresh

 

CPU - Ryzen 1700 @ 4Ghz  Motherboard - Gigabyte AX370 Aorus Gaming 5   Ram - 16Gb GSkill Trident Z RGB 3200  GPU - Palit 1080GTX Gamerock Premium  Storage - Samsung XP941 256GB, Crucial MX300 525GB, Seagate Barracuda 1TB   PSU - Fractal Design Newton R3 1000W  Case - INWIN 303 White Display - Asus PG278Q Gsync 144hz 1440P


LOL maybe it was this  :lol:

No, because the truth is Nvidia built Maxwell for DX 11 and won the round because of it, and Pascal is designed for DX 12 with minimal revision over Maxwell to make it a cheap transition and allow denser libraries to fit more in. Nvidia builds for the now and reaps the profits for doing it. It's not about preparedness. Perhaps DX 12 rolled out a couple months earlier than Nvidia wanted because of Mantle, but so what? Pascal and Arctic Islands will go head to head, Nvidia will have the superior memory configurations since it will use Samsung's 48GB, 1.5TB/s offerings and AMD's stuck on Hynix's due to its priority access and exclusivity deal up front.

 

Did DX 12 use some parts of Mantle? Yes. That's undeniable in the light of day even without access to the code itself. Was Nvidia unprepared for DX 12 as a whole even with the Mantle additions? Yes, just not in the way that's consumer-friendly.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


I'm just gonna throw this out there... Nvidia stated that the 970 and 980 were fully compliant with DirectX 12. It's early and I'm just speculating here, but what happens if the cards aren't compatible with the compute aspect? Would that make them not fully compliant?

There are compliance tiers. To support the API you must fall into the aspects outlined in Tier 1. Maxwell 2 is tier 2 compliant. GCN 1.x is tier 3 compliant where the biggest differences are resource binding levels and asynchronous compute, as well as lesser aspects that boil down to little more than numerical limits of how much can be bundled together in a single frame or resource. Frankly the only consequential difference between T2 and T3 is asynchronous compute. Nvidia can do some driver magic to restructure the kernels to make the asynchronous jobs work more smoothly synchronously via SIMD translations like it did in DX 11, but there will be limitations to that. It's why no one should be surprised the R9 290X is equal to the 980TI. When you drop the overhead and leverage the compute better, the two cards are very similar in compute specs. As I've mentioned elsewhere, Nvidia released Maxwell to win the last round of DX 11. Pascal was planned for DX 12. Let's see who wins round 1 of DX 12 in another 8 months or so.
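For anyone curious, the tier the driver reports can be queried directly. Here's a minimal C++ sketch (my own, not from the article) that checks the resource binding tier and spins up a separate compute queue - note the queue gets created either way, since D3D12 doesn't expose whether the hardware actually overlaps it with graphics:

    #include <d3d12.h>
    #include <wrl/client.h>
    #include <cstdio>
    #pragma comment(lib, "d3d12.lib")

    using Microsoft::WRL::ComPtr;

    int main() {
        // Create a device on the default adapter at the minimum feature level.
        ComPtr<ID3D12Device> device;
        if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device))))
            return 1;

        // ResourceBindingTier is the "Tier 1/2/3" people are arguing about.
        D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
        device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &options, sizeof(options));
        std::printf("Resource binding tier: %d\n", static_cast<int>(options.ResourceBindingTier));

        // A dedicated COMPUTE queue alongside the usual DIRECT queue is how a game
        // submits "async" compute work; whether it actually runs concurrently with
        // graphics is up to the driver/hardware and is not reported by the API.
        D3D12_COMMAND_QUEUE_DESC desc = {};
        desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
        ComPtr<ID3D12CommandQueue> computeQueue;
        HRESULT hr = device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));
        std::printf("Compute queue creation: %s\n", SUCCEEDED(hr) ? "OK" : "failed");
        return 0;
    }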

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


No, because the truth is Nvidia built Maxwell for DX 11 and won the round because of it, and Pascal is designed for DX 12 with minimal revision over Maxwell to make it a cheap transition and allow denser libraries to fit more in. Nvidia builds for the now and reaps the profits for doing it. It's not about preparedness. Perhaps DX 12 rolled out a couple months earlier than Nvidia wanted because of Mantle, but so what? Pascal and Arctic Islands will go head to head, Nvidia will have the superior memory configurations since it will use Samsung's 48GB, 1.5TB/s offerings and AMD's stuck on Hynix's due to its priority access and exclusivity deal up front.

 

Did DX 12 use some parts of Mantle? Yes. That's undeniable in the light of day even without access to the code itself. Was Nvidia unprepared for DX 12 as a whole even with the Mantle additions? Yes, just not in the way that's consumer-friendly.

When am I going to get the benchmarks that you promised me? Surely you want me off your back by now.

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


No, because the truth is Nvidia built Maxwell for DX 11 and won the round because of it, and Pascal is designed for DX 12 with minimal revision over Maxwell to make it a cheap transition and allow denser libraries to fit more in. Nvidia builds for the now and reaps the profits for doing it. It's not about preparedness. Perhaps DX 12 rolled out a couple months earlier than Nvidia wanted because of Mantle, but so what? Pascal and Arctic Islands will go head to head, Nvidia will have the superior memory configurations since it will use Samsung's 48GB, 1.5TB/s offerings and AMD's stuck on Hynix's due to its priority access and exclusivity deal up front.

 

Did DX 12 use some parts of Mantle? Yes. That's undeniable in the light of day even without access to the code itself. Was Nvidia unprepared for DX 12 as a whole even with the Mantle additions? Yes, just not in the way that's consumer-friendly.

 

Damn it, I don't think you got the hint  :lol:

i5 2400 | ASUS RTX 4090 TUF OC | Seasonic 1200W Prime Gold | WD Green 120gb | WD Blue 1tb | some ram | a random case

 


When am I going to get the benchmarks that you promised me? Surely you want me off your back by now.

Ignored patrick a long time ago... he's a perfect example of a washed-up NV fanboy who just talks trash without proof.

AMD Rig - (Upgraded): FX 8320 @ 4.8 Ghz, Corsair H100i GTX, ROG Crosshair V Formula, Ghz, 16 GB 1866 Mhz Ram, Msi R9 280x Gaming 3G @ 1150 Mhz, Samsung 850 Evo 250 GB, Win 10 Home

(My first Intel + Nvidia experience  - recently bought ) : MSI GT72S Dominator Pro G ( i7 6820HK, 16 GB RAM, 980M SLI, GSync, 1080p , 2x128 GB SSD + 1TB HDD... FeelsGoodMan


Ignored patrick a long time ago... he's a perfect example of a washed-up NV fanboy who just talks trash without proof.

I always have proof. Whether or not you're willing to deal with the facts as they stand is up to you.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Ignored patrick a long time ago... he's a perfect example of a washed-up NV fanboy who just talks trash without proof.

Yeah, but he can't ignore me. His math has been proven wrong, and his last resort is a benchmark that only he has ever seen. I want to see it. If I am wrong, I will admit I am wrong and move on. However, I doubt I will ever lay eyes on this benchmark. After all, how can you test two different iGPUs at 4K resolution in a game without both of them running at 4 fps? It won't even offer a valid comparison anyway.

 

Come on @patrickjp93. You would not respond in the private messages. You have to defend your pride now, since it's back out in the open. Give me that benchmark so I can validate it with my own hardware. You can put all of this to an end.

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


There are compliance tiers. To support the API you must fall into the aspects outlined in Tier 1. Maxwell 2 is tier 2 compliant. GCN 1.x is tier 3 compliant where the biggest differences are resource binding levels and asynchronous compute, as well as lesser aspects that boil down to little more than numerical limits of how much can be bundled together in a single frame or resource. Frankly the only consequential difference between T2 and T3 is asynchronous compute. Nvidia can do some driver magic to restructure the kernels to make the asynchronous jobs work more smoothly synchronously via SIMD translations like it did in DX 11, but there will be limitations to that. It's why no one should be surprised the R9 290X is equal to the 980TI. When you drop the overhead and leverage the compute better, the two cards are very similar in compute specs. As I've mentioned elsewhere, Nvidia released Maxwell to win the last round of DX 11. Pascal was planned for DX 12. Let's see who wins round 1 of DX 12 in another 8 months or so.

I think Nvidia had Maxwell planned to be fully DX12 functional but cut it out as soon as they knew it was going to be 28nm instead of 20nm.

They also scrapped double precision and unified virtual memory, probably because the SMMs are so big in Maxwell that it simply wouldn't fit on 28nm.

I have no doubt Pascal will change it up again.

It's quite funny - a year ago I asked why AMD cards are on paper way faster than Nvidia's but can't keep up in actual real-world performance. Seems I've got my answer now.

RTX2070OC 


I think Nvidia had Maxwell planned to be fully DX12 functioning but cut it out as soon as the knew it was going to be 28nm instead of 20nm.

Because they also scratched Double Precision and Unified Virtual Memory probably because the SMM's are so big in Maxwell that it simply wouldn't work in 28nm.

I have no doubt Pascal will change it up again.

It's quiet funny a year ago I asked why AMD cards are on paper way faster than Nvidia but can't keep up in actual real world performance seems I've got my answer now.

And with this we learned a vital lesson...

 

Ask questions well ahead of time, so that you get the answer around the point in time where it actually matters.


And with this we learned a vital lesson...

 

Ask questions well ahead of time, so that you get the answer around the point in time where it actually matters.

I'm really glad that I didn't upgrade to Maxwell as I had originally planned, since I don't upgrade if the card is still on the same nm process.

Nobody answered me back when I asked, and I thought it was probably some optimization issue on AMD's driver side; it seems it's more of an API bottleneck on AMD cards.

The benchmark seems to be really accurate if you look at the actual TFLOPS of the GPUs.

R9 290X = 5.6 TFLOPS

GTX 980 Ti = 5.63 TFLOPS (at stock clocks, so under normal usage it's probably around 6 TFLOPS)

That explains why the R9 290X comes so close to the 980 Ti in the benchmark.
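Those figures follow from the usual peak-FP32 formula (2 FLOPs per shader per clock); both cards happen to carry 2816 shaders at roughly 1 GHz stock. A quick sketch of the arithmetic (clocks are the stock/base numbers, so treat it as a ballpark):

    #include <cstdio>

    // Peak FP32 throughput = 2 FLOPs (one FMA) x shader count x clock in GHz.
    static double peak_tflops(int shaders, double clock_ghz) {
        return 2.0 * shaders * clock_ghz / 1000.0;
    }

    int main() {
        // R9 290X: 2816 stream processors, up to 1000 MHz.
        std::printf("R9 290X:    %.2f TFLOPS\n", peak_tflops(2816, 1.000));
        // GTX 980 Ti: 2816 CUDA cores, 1000 MHz base (boosts higher in practice).
        std::printf("GTX 980 Ti: %.2f TFLOPS\n", peak_tflops(2816, 1.000));
        return 0;
    }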

RTX2070OC 


Does GCN 1.0 have asynchronous compute? 7970 etc...?

 

And any news on whether Ark Survival Evolved will use it, at least for AMD GPUs?


Does GCN 1.0 have asynchronous compute? 7970 etc...?

 

And any news on whether Ark Survival Evolved will use it, at least for AMD GPUs?

GCN 1.0, 1.1 and 1.2 all support async shading...

AMD Rig - (Upgraded): FX 8320 @ 4.8 Ghz, Corsair H100i GTX, ROG Crosshair V Formula, Ghz, 16 GB 1866 Mhz Ram, Msi R9 280x Gaming 3G @ 1150 Mhz, Samsung 850 Evo 250 GB, Win 10 Home

(My first Intel + Nvidia experience  - recently bought ) : MSI GT72S Dominator Pro G ( i7 6820HK, 16 GB RAM, 980M SLI, GSync, 1080p , 2x128 GB SSD + 1TB HDD... FeelsGoodMan


Does GCN 1.0 have asynchronous compute? 7970 etc...?

 

And any news on whether Ark Survival Evolved will use it, at least for AMD GPUs?

If it's DX12, it should.  If it doesn't, there is a bias.


I think Nvidia had Maxwell planned to be fully DX12 functional but cut it out as soon as they knew it was going to be 28nm instead of 20nm.

They also scrapped double precision and unified virtual memory, probably because the SMMs are so big in Maxwell that it simply wouldn't fit on 28nm.

I have no doubt Pascal will change it up again.

It's quite funny - a year ago I asked why AMD cards are on paper way faster than Nvidia's but can't keep up in actual real-world performance. Seems I've got my answer now.

Exactly. Pascal is Maxwell with native 64-bit and 16-bit floating-point support and the remaining DX 12 bells and whistles. Nvidia cut back to match the environment. Now it's going to pull a play from Intel's playbook, the tick-tock. Pascal is a tick with some extras. Volta is going to be a tock.

 

In compute it's because even the best data scientists in the world can't get OpenCL to match or beat CUDA, so it's either an AMD hardware or driver problem. In gaming it was DX 11 drivers and having all the DX 12 equipment instead of more DX 11 equipment on the die to make a stronger GPU for the time.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


GCN 1.0, 1.1 and 1.2 all support async shading...

Great thanks.

Some time back I helped my neighbor with a build which included a 280X, which he has been having a blast with. He is focused on gaming, not benchmarks, so he won't upgrade stupidly every year. After you help somebody else with a build, it's very satisfying to see the choices pay off with long-term support... both in terms of drivers and hardware, even as newer games come out.


Exactly. Pascal is Maxwell with native 64-bit and 16-bit floating-point support and the remaining DX 12 bells and whistles. Nvidia cut back to match the environment. Now it's going to pull a play from Intel's playbook, the tick-tock. Pascal is a tick with some extras. Volta is going to be a tock.

 

In compute it's because even the best data scientists in the world can't get OpenCL to match or beat CUDA, so it's either an AMD hardware or driver problem. In gaming it was DX 11 drivers and having all the DX 12 equipment instead of more DX 11 equipment on the die to make a stronger GPU for the time.

CUDA vs OpenCL AMD Stream is 99% sure to be a software/firmware optimization thing...

 

There is more money and fine-tuning behind CUDA. While OpenCL is gaining ground, it is literally being forced to go door to door, looking for devs who want to join.

CUDA has simply become a "safe haven"... Everyone knows it works, and the old mindset of "if it works, why change anything" has settled in.

 

For a program built around CUDA, it is not as simple as swapping the APIs and patching the mismatches... Although I fear that is what has been done in many cases with OpenCL: just get it to work, to shut up those who beg for OpenCL support, then keep going as before.


If it's DX12, it should.  If it doesn't, there is a bias.

 

ARK appears to be a GameWorks title, and developers under the GameWorks agreement cannot optimize a game in any way that hinders Nvidia GPU performance. That being said, Oxide were able to create a separate render path in Ashes, keyed on Nvidia hardware IDs, to disable async compute specifically for Nvidia. Because ARK is coming to the Xbox One, they are presumably using async compute there, so there is little reason not to use it on PC as well.

Unless it's time-consuming to create a separate render path for Nvidia hardware (which they might have to do anyway if the game is on the Xbox One), async being turned off for AMD hardware on PC would seem counter-productive and deliberate for ARK.
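For what it's worth, a vendor-specific render path like that is usually keyed off the adapter's PCI vendor ID. A rough C++ sketch of the idea using DXGI (purely illustrative, not Oxide's actual code):

    #include <dxgi.h>
    #include <wrl/client.h>
    #include <cstdio>
    #pragma comment(lib, "dxgi.lib")

    using Microsoft::WRL::ComPtr;

    int main() {
        ComPtr<IDXGIFactory> factory;
        if (FAILED(CreateDXGIFactory(IID_PPV_ARGS(&factory))))
            return 1;

        ComPtr<IDXGIAdapter> adapter;
        if (FAILED(factory->EnumAdapters(0, &adapter)))  // primary GPU
            return 1;

        DXGI_ADAPTER_DESC desc = {};
        adapter->GetDesc(&desc);

        // Well-known PCI vendor IDs: 0x10DE = NVIDIA, 0x1002 = AMD, 0x8086 = Intel.
        // Flip the async-compute path based on the vendor, as Oxide described doing.
        bool disableAsyncCompute = (desc.VendorId == 0x10DE);
        std::printf("VendorId 0x%04X -> async compute: %s\n",
                    desc.VendorId, disableAsyncCompute ? "disabled" : "enabled");
        return 0;
    }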

R9 3900XT | Tomahawk B550 | Ventus OC RTX 3090 | Photon 1050W | 32GB DDR4 | TUF GT501 Case | Vizio 4K 50'' HDR

 


I always have proof. Whether or not you're willing to deal with the facts as they stand is up to you.

I don't see it.

| Intel i7-3770@4.2Ghz | Asus Z77-V | Zotac 980 Ti Amp! Omega | DDR3 1800mhz 4GB x4 | 300GB Intel DC S3500 SSD | 512GB Plextor M5 Pro | 2x 1TB WD Blue HDD |
 | Enermax NAXN82+ 650W 80Plus Bronze | Fiio E07K | Grado SR80i | Cooler Master XB HAF EVO | Logitech G27 | Logitech G600 | CM Storm Quickfire TK | DualShock 4 |

