
[Updated] Oxide responds to AotS Conspiracies, Maxwell Has No Native Support For DX12 Asynchronous Compute

Was just gonna say, the Broadwell link he gave was broken. Thanks for that.

 

@patrickjp93 http://www.3dmark.com/fs/2495489

 

Clocked in at 1040MHz, a 20MHz overclock. I do not see it being anywhere near as low as the Iris Pro 6200 scores. Care to share exactly which two results you are comparing?

Always ultra, as Intel's real-life gaming performance far outpaces its 3DMark scores.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


@patrickjp93 and Magetank, let us continue this in private, shall we?

 

We are taking up a LOT of "thread space"... and we have thoroughly derailed this topic...

Can I be invited too? I won't be discussing a lot, but I'd like to see how this plays out. :D

Why is the God of Hyperdeath SO...DARN...CUTE!?

 

Also, if anyone has their mind corrupted by an anthropomorphic black latex bat, please let me know. I would like to join you.


ctrl+c then ctrl+v

http://www.3dmark.com/search?_ga=1.223412381.786654399.1440966265#/?mode=advanced&url=/proxycon/ajax/search/gpu/fs/R/1039/500000?minScore=0&gpuName=Intel Iris Pro Graphics 6200

 

That was a ctrl-c + ctrl-v, and that's what it gives me.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Sigh. We all know that in sheer compute AMD > Nvidia; however, even with DX12, compute isn't everything.

We have benchmarks for AotS that have the Fury X and 980 Ti completely deadlocked.

I believe the ExtremeTech one does Fury X vs 980 Ti, but I'm sure by now there are many others around.

So, I mean, really: for the 290X to beat the 980 Ti, it also has to beat the Fury X.

Compute isn't everything obviously.

Anyway, we had moved past that point.

You can say that the Fury X is held back by pixel fill rate, but both the Fury X and the 290X share the same base fill rate, with the 290X having much lower bilinear-filtering texel rates.

So unless something else is holding the Fury X back relative to the 290X... it still doesn't make sense.

Again, we had finished this conversation already, and you jumping in with an off-the-cuff remark adds nothing constructive.

Feel free to discuss with Patrick. I'm rather done with this.

According to Oxide, their results aren't really saturating on compute, and their async usage was very meager. It's a testament to how the Nvidia cards are not built to handle asynchronous parallelism that they had to take another route entirely on that hardware. The 290X and Fury X both make headway from async, but there isn't enough of it, and compute isn't heavy enough in the benchmark, to allow the Fury X to pull away from it as yet.


According to Oxide, their results aren't really saturating on compute, and their async usage was very meager. It's a testament to how the Nvidia cards are not built to handle asynchronous parallelism that they had to take another route entirely on that hardware. The 290X and Fury X both make headway from async, but there isn't enough of it, and compute isn't heavy enough in the benchmark, to allow the Fury X to pull away from it as yet.

Their async use wasn't meager. It was basic, applied only at the highest level (where it generally should be, just as when designing any thread-parallel application to minimize overhead). It wasn't tuned or optimized to great depth. There are two very different implications in your claim and theirs.
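To put the "highest level" point in general terms, here is a generic sketch of my own (nothing to do with Oxide's code): you split the work once at the top, so the cost of spawning a task is paid a handful of times rather than per element. The same reasoning applies to picking whole passes to push onto a compute queue.

#include <future>
#include <numeric>
#include <vector>

// Coarse-grained split: one extra task handles half the data, the calling
// thread handles the other half. Spawning a task per element would bury any
// gain in scheduling overhead, which is why the obvious top-level split
// comes first.
double SumHalves(const std::vector<double>& data)
{
    const auto mid = data.begin() + data.size() / 2;

    auto firstHalf = std::async(std::launch::async, [&] {
        return std::accumulate(data.begin(), mid, 0.0);
    });
    const double secondHalf = std::accumulate(mid, data.end(), 0.0);

    return firstHalf.get() + secondHalf;
}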

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Their async use wasn't meager. It was basic, applied only at the highest level (where it generally should be, just as when designing any thread-parallel application to minimize overhead). It wasn't tuned or optimized to great depth. There are two very different implications in your claim and theirs.

I suspect that one thing that is helping AMD on GPU performance is D3D12 exposes Async Compute, which D3D11 did not. Ashes uses a modest amount of it, which gave us a noticeable perf improvement. It was mostly opportunistic where we just took a few compute tasks we were already doing and made them asynchronous, Ashes really isn't a poster-child for advanced GCN features.

Our use of Async Compute, however, pales with comparisons to some of the things which the console guys are starting to do. Most of those haven't made their way to the PC yet, but I've heard of developers getting 30% GPU performance by using Async Compute. Too early to tell, of course, but it could end being pretty disruptive in a year or so as these GCN built and optimized engines start coming to the PC. I don't think Unreal titles will show this very much though, so likely we'll have to wait to see. Has anyone profiled Ark yet?

Modest rather than meager. Off by a bit.
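For anyone wondering what "took a few compute tasks we were already doing and made them asynchronous" roughly looks like, here is a minimal D3D12 sketch of my own (not Oxide's code; the compute PSO, root signature and dispatch size are assumed to exist already, and error handling is omitted). The only real change from the synchronous version is which queue the command list is built for and submitted on.

#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void SubmitAsyncCompute(ID3D12Device* device,
                        ID3D12PipelineState* computePso,
                        ID3D12RootSignature* computeRootSig)
{
    // 1. A second queue of type COMPUTE runs independently of the direct
    //    (graphics) queue that handles the rest of the frame.
    D3D12_COMMAND_QUEUE_DESC queueDesc = {};
    queueDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&queueDesc, IID_PPV_ARGS(&computeQueue));

    // 2. Command allocator and list of the same COMPUTE type.
    ComPtr<ID3D12CommandAllocator> alloc;
    device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_COMPUTE,
                                   IID_PPV_ARGS(&alloc));
    ComPtr<ID3D12GraphicsCommandList> cmdList;
    device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_COMPUTE,
                              alloc.Get(), computePso,
                              IID_PPV_ARGS(&cmdList));

    // 3. Record the compute work exactly as it would have been recorded for
    //    the direct queue (resource binding elided for brevity).
    cmdList->SetComputeRootSignature(computeRootSig);
    cmdList->Dispatch(64, 64, 1);   // e.g. an existing lighting or post pass
    cmdList->Close();

    // 4. Submitting on the compute queue lets hardware with async compute
    //    (GCN's ACEs) overlap this with graphics on the direct queue; on
    //    hardware without it, the driver ends up serializing the work.
    ID3D12CommandList* lists[] = { cmdList.Get() };
    computeQueue->ExecuteCommandLists(1, lists);
}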

Modest rather than meager. Off by a bit.

Well, exactly. You do what's obvious first. Everything non-obvious tends to be small in impact and ever more difficult and obfuscating to implement. Transforming a render or lighting function into actual OpenCL compute is an arduous task, and the benefits may be small beans if the stream processors only churn through the 20% that wasn't getting through the dedicated hardware fast enough.
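Rough numbers on that last point (my own back-of-the-envelope figures, not anything Oxide published): if only about 20% of the frame can actually be offloaded, the ceiling on the win is modest no matter how fast the stream processors chew through it.

#include <cstdio>
#include <initializer_list>

int main()
{
    const double offloadable = 0.20;             // assumed offloadable fraction
    for (double speedup : {2.0, 4.0, 1000.0}) {  // 1000x ~ "effectively free"
        // Amdahl's law: the 80% left on the dedicated hardware dominates.
        const double newFrameTime = (1.0 - offloadable) + offloadable / speedup;
        std::printf("offloaded part %6.0fx faster -> frame only %4.1f%% faster\n",
                    speedup, (1.0 / newFrameTime - 1.0) * 100.0);
    }
    return 0;
}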

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Modest rather than meager. Off by a bit.

Your quote really just makes me wish that ARK would release their DX12 client so we can see how a completely different engine performs with this, and since it will be in consumer hands (and not just journalists), we can expect more information.


Nvidia is playing a perfectly stable game with the same strategies IBM uses to this day to maintain its near-monopoly in mainframes and financial servers. It's not dangerous at all, because AMD isn't a threat, and frankly neither is Intel unless Intel finds a way to surpass IBM in scale-up architectures and beat Nvidia in GPGPU compute. And each year Intel fails to do that, both of its competitors are building up R&D with new features ready for deployment as needed. Nvidia has a 5-year lead on Intel in GPGPU compute that isn't shrinking. IBM has a similar length lead. Nvidia is perfectly safe for now.

 

No, because consumers are as stupid as cattle when they go shopping. You and I, and the LTT community in general, are exceptions, outliers, not the rule nor average. We can't turn that tide. It's pointless to think you can. Unless you want to slander Nvidia and end up in jail via a smear campaign, you're not going to have any effect. Exactly, their emotions tell them to go with what is trusted and bought the most, and cheap price tags generally mean cheap products. Most consumers are not discerning.

 

Yes, old folks and non-savvy parents buy them for 3-4 years, maybe more, but they buy something they're sure will suit their needs. Gamers tend to buy every two years, and they like to buy the best of the best with whatever wallet size they have. Nvidia has taken that crown for the past four years, give or take. Consumers are stupid and will follow Nvidia clear to their financial bloodletting rooms like they always do.

 

AMD has priority access over Nvidia to Hynix's HBM 2, but it also has an exclusivity deal in that same contract. AMD cannot ask Samsung for any chips until the successor to Arctic Islands. AMD is screwed on that front because, unlike Nvidia, it doesn't have the money to buy itself out of such a contract. Actually, Samsung makes all of the fastest DDR4 right now, and it plans to release 4266MHz chips by December, which puts it ahead of Hynix for the foreseeable future. In GDDR5, Samsung builds the highest-density chips at 7GHz, whereas Hynix produces 8GHz GDDR5 but has no buyers for it and won't have any, since all of Pascal is going to be HBM 2.

 

It's not speculation when you actually have insider knowledge (perks of actually being an investor). I'm not surprised at all. Nvidia built the minimum needed to kick AMD's tail for the last years of DX 11. Now it's going to build the minimum required to beat AMD for the first 1.5 years of DX 12, and it will include 12.1 and try to get its developers to use 12.1 features to further snub and damage AMD's hold on the market, as is its right to protect and further its own sales. AMD is naive and will fall short once again by chasing features no API will use for 5 years instead of building enough of a card to compete with Nvidia on equal or better footing, something Koduri could do if he weren't such a prideful snob.

 

Nvidia's success speaks for itself; I'm not denying that. My point is simply that their cult-like following is starting to realize cloud nine comes with a lot of caveats.

 

Price premiums on everything, which they have to eat due to vendor lock-in. The planned obsolescence will kick in faster now that Maxwell sucks at async shaders/compute.

IBM is in its own eco bubble. They are more focused on business solutions than IT hardware anyway, a transition made over many years now. AMD, on the other hand, is improving its CPU IPC and will, as the only player on the market, be able to provide a full server system with both CPUs and GPUs. But this is about consumers, and their behaviour is indeed different.

 

It's true consumers as a whole are not smart; Nvidia's large market share is proof of this. But in PC gaming, people tend to be more educated than average. This is primarily due to reviewers like LTT (also a reason why these need to be unbiased, and not receive $10,000 worth of graphics cards for editing rigs). Those reviews are going to paint a different picture, if Ashes of the Singularity is anything to go by. People will not forget that their $700 GPU is being beaten by a $400, two-year-old GPU after one year. This creates bad will that lasts a long while.

 

Do you have a source for AMD being stuck with SK Hynix? Either way, it will be good enough. We still haven't seen what Samsung can do, nor how it will perform. I doubt this is an issue. Only time will tell. No one cares about GDDR5 anymore.

 

Looking at Steam's hardware survey shows that people do not switch their GPUs that often. Kepler and Maxwell will be here for a long while. Neither of them being able to do async shaders could hurt them a lot. No one's going to make games just for 12.1 with such a tiny market share. Pascal will not change that. GCN should support 12.1 in Arctic Islands (if they still use GCN).

 

I'll stick to facts, not what Nvidia wants its investors to believe. DX12 devs will use async shaders/compute. They already do in Ashes and, it seems, in lots of console games.

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


It's true consumers as a whole are not smart; Nvidia's large market share is proof of this.

Wow, amazing insult there, buddy. Why don't we all just wait for more benchmarks of multiple games before we start bringing out the jump-to-conclusions mat?


Nvidia's success speaks for itself; I'm not denying that. My point is simply that their cult-like following is starting to realize cloud nine comes with a lot of caveats.

 

Price premiums on everything, which they have to eat due to vendor lock-in. The planned obsolescence will kick in faster now that Maxwell sucks at async shaders/compute.

IBM is in its own eco bubble. They are more focused on business solutions than IT hardware anyway, a transition made over many years now. AMD, on the other hand, is improving its CPU IPC and will, as the only player on the market, be able to provide a full server system with both CPUs and GPUs. But this is about consumers, and their behaviour is indeed different.

 

It's true consumers as a whole are not smart; Nvidia's large market share is proof of this. But in PC gaming, people tend to be more educated than average. This is primarily due to reviewers like LTT (also a reason why these need to be unbiased, and not receive $10,000 worth of graphics cards for editing rigs). Those reviews are going to paint a different picture, if Ashes of the Singularity is anything to go by. People will not forget that their $700 GPU is being beaten by a $400, two-year-old GPU after one year. This creates bad will that lasts a long while.

 

Do you have a source for AMD being stuck with SK Hynix? Either way, it will be good enough. We still haven't seen what Samsung can do, nor how it will perform. I doubt this is an issue. Only time will tell. No one cares about GDDR5 anymore.

 

Looking at Steam's hardware survey shows that people do not switch their GPUs that often. Kepler and Maxwell will be here for a long while. Neither of them being able to do async shaders could hurt them a lot. No one's going to make games just for 12.1 with such a tiny market share. Pascal will not change that. GCN should support 12.1 in Arctic Islands (if they still use GCN).

 

I'll stick to facts, not what Nvidia wants its investors to believe. DX12 devs will use async shaders/compute. They already do in Ashes and, it seems, in lots of console games.

In gaming? No. In MMOs specifically, yes. That said, you vastly overestimate the effects of the few over the many.

 

AOTS is one game. Let's wait for a couple more, and for Nvidia to do their driver magic for DX 12.

 

Steam is a small sample compared to the whole market; it doesn't carry most of the AAA games, nor is it the primary vendor for the vast majority of them. It's hugely skewed toward lower-end gaming.

 

Not a legal source. That would be an insider trading violation. I suppose I could anonymously leak it somewhere, but suffice it to say it's in writing and would cost AMD more than the supply deal will be worth to deviate from it.

 

Lots? No, not yet.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


I was confused when I saw AotS in the title, thinking "Attack of the Show is still a thing?!"


Wow, amazing insult there, buddy. Why don't we all just wait for more benchmarks of multiple games before we start bringing out the jump-to-conclusions mat?

 

That comment was mostly focused on supporting a company that practises anti-consumer behaviour: vendor lock-in, planned obsolescence, underhanded PR tactics and so on. When a Valve dev outright calls Nvidia the graphics mafia, that should be worrying. People end up getting stuck with obsolete, overpriced crap that carries high switching costs to get out of (replacing your $1200 G-Sync monitor, for instance).

 

In gaming? No. In MMOs specifically, yes. That said, you vastly overestimate the effects of the few over the many.

 

AOTS is one game. Let's wait for a couple more, and for Nvidia to do their driver magic for DX 12.

 

Steam is a small sample compared to the whole market; it doesn't carry most of the AAA games, nor is it the primary vendor for the vast majority of them. It's hugely skewed toward lower-end gaming.

 

Not a legal source. That would be an insider trading violation. I suppose I could anonymously leak it somewhere, but suffice it to say it's in writing and would cost AMD more than the supply deal will be worth to deviate from it.

 

Lots? No, not yet.

 

Well, it depends on what the devs want to achieve with their games. New tech results in new use cases and different needs. If console devs are using this heavily, it will leak over to PC gaming eventually, at least if people want PC master race.

 

The entire point of DX12 is to be less dependent on high-level drivers outright replacing code and shaders in games. I'm sure Nvidia could implement it, but at what performance cost, and with what consequences?

 

Steam is very much the largest triple-A platform out there. Sure, it doesn't have EA, Blizzard or LoL games, but those don't hold the vast majority either. Sure, Steam also includes casual gamers on laptops, but so does Origin (Sims, Plants vs. Zombies, etc.).

If you have better numbers, please present them.

 

There's no purpose in talking about unconfirmable sources. Even if they are true, we still don't have AMD's side. And even if we did, reality can look quite different once these products actually hit the shelves.

 

Well, I'm only basing it on the dev. Those games could come out in half a year or be three years away. Difficult to say.

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


That comment was mostly focused on supporting a company that practises anti-consumer behaviour: vendor lock-in, planned obsolescence, underhanded PR tactics and so on. When a Valve dev outright calls Nvidia the graphics mafia, that should be worrying. People end up getting stuck with obsolete, overpriced crap that carries high switching costs to get out of (replacing your $1200 G-Sync monitor, for instance).

 

No, it was not. If you meant to do that, you clearly picked the wrong phrasing for it. And where are your other sources? Where is this Valve employee? AMD's side? Who cares about the vendors; let the benchmarks decide.


No, it was not. If you meant to do that, you clearly picked the wrong phrasing for it. And where are your other sources? Where is this Valve employee? AMD's side? Who cares about the vendors; let the benchmarks decide.

 

It was and is. I have discussed this point several times on this forum, even in this very thread. It should not be difficult to put two and two together. The Valve employee (at the time) was Rich Geldreich: http://richg42.blogspot.dk/2014/05/the-truth-on-opengl-driver-quality.html

 

Sources for what else? I care about what the vendors do that is not in my interest with the products they want to sell me.

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


Damn, that's crazy. That thread literally blew up and had some crazy fanboyism like I have never seen before O: even worse than all the years when the original FX chips came out. Damn.


Annnndddd not too many programs or games seem to utilise said power. Mining seems to be an exception.

Annnnnnnnd?

i5 2400 | ASUS RTX 4090 TUF OC | Seasonic 1200W Prime Gold | WD Green 120gb | WD Blue 1tb | some ram | a random case

 


Damn, that's crazy. That thread literally blew up and had some crazy fanboyism like I have never seen before O: even worse than all the years when the original FX chips came out. Damn.

 

And this actually surprises you? I'd be worried if it didn't turn out like this.


This all came about due to Oxide's upcoming DX12 RTS, Ashes of the Singularity. When they released the benchmark, various places found that AMD GPUs saw massive performance gains (partly due to DX11 driver overhead, partly because GCN's ACEs do nothing under DX11), while NV GPUs actually do worse in DX12.

 

As noted here.

NV fired some pretty pointed words at Oxide, dismissing their game as not representative of DX12 games and claiming it had an MSAA bug. Oxide fired back saying the bug is actually in NV's drivers and offered to help them fix it.

They wrote a nice blog post about DX11 vs DX12 in which they clarified things and said they were not out to gimp any hardware, but that they play fair by the DX12 book.

This escalated on the tech forums, with heated accusations thrown around, and so Oxide came into the discussion with this bombshell:

 

Maxwell doesn't support Async Compute, at least not natively. We disabled it at the request of Nvidia, as it was much slower to try to use it then to not.

 

Followed up with this:

 

 

Personally, I think one could just as easily make the claim that we were biased toward Nvidia as the only 'vendor' specific code is for Nvidia where we had to shutdown async compute. By vendor specific, I mean a case where we look at the Vendor ID and make changes to our rendering path.
Curiously, their driver reported this feature was functional but attempting to use it was an unmitigated disaster in terms of performance and conformance so we shut it down on their hardware. As far as I know, Maxwell doesn't really have Async Compute so I don't know why their driver was trying to expose that.

I suspect that one thing that is helping AMD on GPU performance is D3D12 exposes Async Compute, which D3D11 did not. Ashes uses a modest amount of it, which gave us a noticeable perf improvement. It was mostly opportunistic where we just took a few compute tasks we were already doing and made them asynchronous, Ashes really isn't a poster-child for advanced GCN features.

Our use of Async Compute, however, pales with comparisons to some of the things which the console guys are starting to do. Most of those haven't made their way to the PC yet, but I've heard of developers getting 30% GPU performance by using Async Compute.
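Side note on the "Vendor ID" bit above: that kind of check is about as simple as it sounds. A hedged sketch of my own (not Oxide's actual code), using plain DXGI:

#include <dxgi.h>

// 0x10DE and 0x1002 are the standard PCI vendor IDs for NVIDIA and AMD.
bool AllowAsyncCompute(IDXGIAdapter* adapter)
{
    DXGI_ADAPTER_DESC desc = {};
    adapter->GetDesc(&desc);

    if (desc.VendorId == 0x10DE)   // NVIDIA: fall back to the serial path
        return false;
    if (desc.VendorId == 0x1002)   // AMD: let GCN's ACEs consume the extra queue
        return true;
    return true;                   // anyone else: trust what the driver reports
}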

 

And finally this, where they basically challenge NV to prove them wrong:

 

 

There is no war of words between us and Nvidia. Nvidia made some incorrect statements, and at this point they will not dispute our position if you ask their PR. That is, they are not disputing anything in our blog. I believe the initial confusion was because Nvidia PR was putting pressure on us to disable certain settings in the benchmark, when we refused, I think they took it a little too personally.

 

It looks like Oxide is angry at NV, since NV tried to make them look like fools despite the problem being with NV's hardware, so Oxide took it personally and went public. And now AMD chimes in! u/AMD_Robert (Robert Hallock):

 

NVIDIA claims "full support" for DX12, but conveniently ignores that Maxwell is utterly incapable of performing asynchronous compute without heavy reliance on slow context switching.

GCN has supported async shading since its inception, and it did so because we hoped and expected that gaming would lean into these workloads heavily. Mantle, Vulkan and DX12 all do. The consoles do (with gusto). PC games are chock full of compute-driven effects.

 

Looks like this is escalating, time to prepare the popcorn and see Nvidia's response!

As to why Async Compute/Shaders are so important in DX12 & future cross-platform games:

  1. Compute is used for global illumination, dynamic lighting, shadows, physics and post-processing (including even AA). If that work can be offloaded from the main rendering pipeline and done asynchronously in parallel, it can lead to major performance gains. As such, GPUs that support it will see a major performance uplift, while in theory GPUs that do not support it see no benefit; rendering simply reverts to the normal serial mix of graphics and compute (see the sketch after this list).

  2. Async Shaders are vital for a good VR experience, as they help lower the latency from head movement to visual/photon output. I posted on this topic a while ago:
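A rough sketch of the overlap point 1 describes (assumed names, not from any shipping engine): the graphics queue signals a fence once the data a compute pass needs is ready, and the compute queue waits on it on the GPU side, so its dispatches run alongside whatever graphics work is still in flight.

#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void OverlapFrame(ID3D12Device* device,
                  ID3D12CommandQueue* directQueue,   // graphics
                  ID3D12CommandQueue* computeQueue,  // async compute
                  ID3D12CommandList* sceneCmds,      // pre-recorded graphics work
                  ID3D12CommandList* lightingCmds)   // pre-recorded compute work
{
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

    // Kick off this frame's geometry, then mark the point where the buffers
    // the compute pass reads are guaranteed to have been written.
    ID3D12CommandList* gfx[] = { sceneCmds };
    directQueue->ExecuteCommandLists(1, gfx);
    directQueue->Signal(fence.Get(), 1);

    // GPU-side wait: the compute queue stalls itself, not the CPU, and starts
    // its dispatches the moment the fence reaches 1, overlapping the remaining
    // graphics work. Hardware without async compute ends up serializing this,
    // which is the "no benefit" case described in point 1.
    computeQueue->Wait(fence.Get(), 1);
    ID3D12CommandList* cmp[] = { lightingCmds };
    computeQueue->ExecuteCommandLists(1, cmp);
}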

•  i7 4770k @ 4.5ghz • Noctua NHL12 •  Asrock Z87 Extreme 4 •  ASUS GTX 780 DCII 1156/6300 •

•  Kingston HyperX 16GB  •  Samsung 840 SSD 120GB [boot] + 2x Seagate Barracuda 2TB 7200RPM •

•  Fractal Design Define R4  •  Corsair AX860 80+ Platinum •  Logitech Wireless Y-RK49  •  Logitech X-530  •


The title is slightly incorrect and biased, but that's what makes this highly humorous.

 

Current Radeon cards do not have 100% DX12 support either; it would take a lot of digging for the specifics, but it's possible. Secondly, Maxwell is almost two years old at this point and DX12 is not yet relevant. Nothing uses it, and nothing will use it for another year. By then Pascal will have been released, and these "discussions" should be reopened.

 

Alas, until then the children will come to flame and rage because they're -sunshine and rainbows-.

 

Shitstorm line.


.


Repost m80: http://linustechtips.com/main/topic/440575-oxides-kollack-responds-to-aots-and-vendor-bias-conspiracies/

As you can see, the thread is somewhat of a shitstorm. I am partially to blame :P

'Fanboyism is stupid' - someone on this forum.

Be nice to each other boys and girls. And don't cheap out on a power supply.


CPU: Intel Core i7 4790K - 4.5 GHz | Motherboard: ASUS MAXIMUS VII HERO | RAM: 32GB Corsair Vengeance Pro DDR3 | SSD: Samsung 850 EVO - 500GB | GPU: MSI GTX 980 Ti Gaming 6GB | PSU: EVGA SuperNOVA 650 G2 | Case: NZXT Phantom 530 | Cooling: CRYORIG R1 Ultimate | Monitor: ASUS ROG Swift PG279Q | Peripherals: Corsair Vengeance K70 and Razer DeathAdder

 


Considering that this is Nvidia we're talking about, I'm not surprised that they lied about DX12 support.

They didn't. There are multiple tiers of DX12 support. Just because they don't have asynchronous shaders (in Maxwell) doesn't mean they don't support DX12.

.

