[Updated] Oxide responds to AotS Conspiracies, Maxwell Has No Native Support For DX12 Asynchronous Compute

Oops, sorry. You ninja'd me.

Does anyone know what the effects of Nvidia relying on a virtualized scheduler will be on their ability to virtualize async compute engines? To me it seems like they would be better off if they had hardware schedulers.

It's all about preemption.

Say you have a graphics shader in a frame that is taking a particularly long time to complete. With Maxwell 2, you have to wait for that shader to finish before you can execute another task. If a graphics shader takes 16 ms, you have to wait until it completes before executing another graphics or compute command. This is what is called "slow context switching". Every extra millisecond brings down your FPS for that frame. So if you have 16 ms for a graphics shader, 20 ms for a compute task and 5 ms for a copy command, you end up with 41 ms for that frame. This wasn't important for DX11, and nVIDIA primarily designed their Kepler/Maxwell/Maxwell 2 architectures with DX11 in mind.

[Image: diagram of Maxwell 2 running the graphics, compute and copy tasks one after another]
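To make the arithmetic concrete, here's a minimal CUDA sketch of that serial case, under the assumption that all work is issued to a single queue: three dummy tasks run back-to-back, so the frame costs the sum of their run times. The busyWait kernel and its cycle counts are purely illustrative stand-ins, not anything from a real engine.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Spin for roughly the requested number of GPU clock cycles to fake a workload.
__global__ void busyWait(long long cycles) {
    long long start = clock64();
    while (clock64() - start < cycles) { /* simulate work */ }
}

int main() {
    cudaEvent_t t0, t1;
    cudaEventCreate(&t0);
    cudaEventCreate(&t1);

    cudaEventRecord(t0);
    // All three launches go to the same (default) stream, so they serialize:
    busyWait<<<1, 1>>>(16000000LL); // stand-in for the 16 ms graphics shader
    busyWait<<<1, 1>>>(20000000LL); // stand-in for the 20 ms compute task
    busyWait<<<1, 1>>>(5000000LL);  // stand-in for the 5 ms copy command
    cudaEventRecord(t1);
    cudaEventSynchronize(t1);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, t0, t1);
    printf("serial 'frame' time: %.1f ms\n", ms); // roughly the sum of all three
    return 0;
}
```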

 

With GCN, you can execute out of order, and the ACEs will check for errors and re-issue tasks if needed to correct an error. Out of order means that you don't need to wait for one task to complete before you start work on the next. So say, on GCN, that same graphics shader task takes 24 ms; in that same 24 ms you can run a bunch of other tasks in parallel (like the compute and copy commands above). So your frame ends up taking only 24 ms.

[Image: diagram of GCN running the compute and copy tasks in parallel with the graphics shader]
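And a hedged sketch of the overlapped case, assuming a device whose driver actually runs independent streams concurrently: the same pattern of tasks issued on separate CUDA streams, so the frame costs roughly the longest task rather than the sum. Same caveat as above: busyWait and the cycle counts are hypothetical stand-ins.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Spin for roughly the requested number of GPU clock cycles to fake a workload.
__global__ void busyWait(long long cycles) {
    long long start = clock64();
    while (clock64() - start < cycles) { /* simulate work */ }
}

int main() {
    // One stream per task; independent streams may execute concurrently.
    cudaStream_t graphics, compute, copy;
    cudaStreamCreate(&graphics);
    cudaStreamCreate(&compute);
    cudaStreamCreate(&copy);

    cudaEvent_t t0, t1;
    cudaEventCreate(&t0);
    cudaEventCreate(&t1);

    cudaEventRecord(t0);
    busyWait<<<1, 1, 0, graphics>>>(24000000LL); // the 24 ms "graphics" task
    busyWait<<<1, 1, 0, compute>>>(20000000LL);  // compute overlaps with it
    busyWait<<<1, 1, 0, copy>>>(5000000LL);      // and so does the copy stand-in
    cudaEventRecord(t1); // the default stream waits on all blocking streams
    cudaEventSynchronize(t1);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, t0, t1);
    printf("overlapped 'frame' time: %.1f ms\n", ms); // ~ the longest task, not the sum
    return 0;
}
```

On hardware (or a driver) that can't actually overlap the work, both sketches print roughly the same number; that gap is exactly what the two diagrams illustrate.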

 

Developers need to be super careful about how they program for Maxwell 2; if they aren't, far too much latency will be added to the frame. If a particular frame is already high-latency, then you can't use Asynchronous Compute on it with Maxwell 2. That holds even once nVIDIA fix their driver issue.

 

From all the sources I've seen, Pascal is set to fix this problem. I just don't think nVIDIA thought the industry would jump on DX12 the way it has... pretty much every single title in 2016 will be built around the DX12 API. We'll even get a few titles in the last months of 2015.

 

I wouldn't underestimate nVIDIA's capacity to fix their driver. But like I've told many people... wait and see what performance boost their Asynchronous Compute solution actually delivers.

"Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth." - Arthur Conan Doyle (Sherlock Holmes)


It's all about preemption.

 

-snip-

 

Small tip for this forum: use Ctrl+Shift+V when pasting, because otherwise your text can look weird for night-theme users and vice versa.

| Intel i7-3770@4.2Ghz | Asus Z77-V | Zotac 980 Ti Amp! Omega | DDR3 1800mhz 4GB x4 | 300GB Intel DC S3500 SSD | 512GB Plextor M5 Pro | 2x 1TB WD Blue HDD |
 | Enermax NAXN82+ 650W 80Plus Bronze | Fiio E07K | Grado SR80i | Cooler Master XB HAF EVO | Logitech G27 | Logitech G600 | CM Storm Quickfire TK | DualShock 4 |


Small tip for this forum: use Ctrl+Shift+V when pasting, because otherwise your text can look weird for night-theme users and vice versa.

*fixed*

"Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth." - Arthur Conan Doyle (Sherlock Holmes)


I have started a thread at Overclock.net which lays out the situation as of today: http://www.overclock.net/t/1572716/directx-12-asynchronous-compute-an-exercise-in-crowd-sourcing#post_24385652

 

This should help curb any misinformation on this topic. Enjoy!

"Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth." - Arthur Conan Doyle (Sherlock Holmes)


Alright, since we're now on topic...

I'm very conflicted about what Kollock said:

We actually just chatted with Nvidia about Async Compute, indeed the driver hasn't fully implemented it yet, but it appeared like it was. We are working closely with them as they fully implement Async Compute. We'll keep everyone posted as we learn more.

There's a driver that supposedly will fully implement AC... but I thought NVIDIA was restricted by their hardware? Or is there some theoretical way that NVIDIA could fully use AC through software?

'Fanboyism is stupid' - someone on this forum.

Be nice to each other boys and girls. And don't cheap out on a power supply.


CPU: Intel Core i7 4790K - 4.5 GHz | Motherboard: ASUS MAXIMUS VII HERO | RAM: 32GB Corsair Vengeance Pro DDR3 | SSD: Samsung 850 EVO - 500GB | GPU: MSI GTX 980 Ti Gaming 6GB | PSU: EVGA SuperNOVA 650 G2 | Case: NZXT Phantom 530 | Cooling: CRYORIG R1 Ultimate | Monitor: ASUS ROG Swift PG279Q | Peripherals: Corsair Vengeance K70 and Razer DeathAdder

 


 

 

I know a lot of people who are upset at Nvidia atm. However, as a consumer I can't help but feel misled. I'm currently not an Nvidia card owner, but I am the owner of a high-end graphics card. Based on the current information available to consumers, owning a Maxwell 2 or Fury card is a waste of money, because there is no "full" DX12 support. I won't put all the blame on Nvidia or AMD, because as a consumer I should have done better research.

 

So here is my question to you: going forward, how can this be avoided? Where is a resource on the net that explains, in layman's terms, the differences in GPU designs and their benefits? I'm more upset at myself because I knew better. I should never have gotten a new GPU right before a die shrink and a new microarchitecture, but I was running on old 6970s in CrossFire. In less than a year's time, I feel my Fury X and Maxwell 2 cards will be nothing more than cup coasters.

Test ideas by experiment and observation; build on those ideas that pass the test, reject the ones that fail; follow the evidence wherever it leads and question everything.


I know a lot of people who are upset at Nvidia atm. However, as a consumer I can't help but feel misled. I'm currently not an Nvidia card owner, but I am the owner of a high-end graphics card. Based on the current information available to consumers, owning a Maxwell 2 or Fury card is a waste of money, because there is no "full" DX12 support. I won't put all the blame on Nvidia or AMD, because as a consumer I should have done better research.

 

So here is my question to you: going forward, how can this be avoided? Where is a resource on the net that explains, in layman's terms, the differences in GPU designs and their benefits? I'm more upset at myself because I knew better. I should never have gotten a new GPU right before a die shrink and a new microarchitecture, but I was running on old 6970s in CrossFire. In less than a year's time, I feel my Fury X and Maxwell 2 cards will be nothing more than cup coasters.

Not entirely. There are still plenty of DX11 games out now and coming in the next year until Pascal arrives. And it's not like you can't sell your old card to help pay for it. We go into this hobby knowing our shit will be out of date soon. If you wait to future-proof, you end up buying nothing. I mean, I don't feel SCAMMED that I upgraded my 770 to a 970, then to a 980 Ti, because the cards I previously had were sold to help subsidize the next one, and it's not like the cards are defective. Although if you were going to get that G-Sync/FreeSync monitor, I'd wait for now.

 

Everyone just needs to chill out, read up on the updates we get, and use that for future purchases. Fire sales won't benefit you.


I know a lot of people who are upset at Nvidia atm. However, as a consumer I can't help but feel misled. I'm currently not an Nvidia card owner, but I am the owner of a high-end graphics card. Based on the current information available to consumers, owning a Maxwell 2 or Fury card is a waste of money, because there is no "full" DX12 support. I won't put all the blame on Nvidia or AMD, because as a consumer I should have done better research.

 

So here is my question to you: going forward, how can this be avoided? Where is a resource on the net that explains, in layman's terms, the differences in GPU designs and their benefits? I'm more upset at myself because I knew better. I should never have gotten a new GPU right before a die shrink and a new microarchitecture, but I was running on old 6970s in CrossFire. In less than a year's time, I feel my Fury X and Maxwell 2 cards will be nothing more than cup coasters.

 

I share your pain. I'm actually quite angry at it all myself. It seems that, over time, tech reviewers started to focus more on benchmarks than on researching the ins and outs of the various GPU architectures. The goal of most tech journalists appears to be obtaining free samples in order to cover new products on day one. They act, most of the time, like third-party PR agents for the big two (AMD/nVIDIA). They quote the PR material, never using critical thinking to question the information they receive, and then publish their articles based on what AMD or nVIDIA have stated. It's exactly a mirror image of our current political reality, where a "journalist" from the NYT will ask a Pentagon spokesperson "Does the US use cluster bombs?" and, if the spokesperson answers "No", will simply publish an article stating "The United States of America does not use cluster bombs". Of course, anyone walking through Iraq, Afghanistan, Libya or Syria will tell you that this is patently false. The use of cluster bombs is also illegal under international law, which is why the Pentagon won't incriminate itself. For this reason we have what I consider real journalists... such as Glenn Greenwald and Jeremy Scahill (to name a few), who actually do the investigative work required to expose these lies.

We have the same thing, admittedly on a smaller scale, happening in the tech industry. I've been around for a long time... I was there when Tom's Hardware and AnandTech opened up shop. I remember how tech journalism used to be far more in-depth and took the claims from companies with a grain of salt, relying on its own investigative and research capabilities. I suppose that became less profitable over time. In order to gain access, you have to play nice.

A prime example of this is how AMD is denying review samples to publications which didn't play nice. If you don't show the products of either of the two big companies in a good light, you lose access. A loss of access means a loss of revenue for the tech publications.

Whose fault is this? Shills. Tech publications that didn't do their research shilled for access and spread. That caused a loss of business for the publications that did not shill. It created an incentive model built around shilling.

 

I, personally, modeled the way I approached this whole issue on the way Glenn Greenwald approached the NSA issue. I figure that, in the end, the truth will be known. Just as people hate on Glenn, people hate on me. That's fine though; they just don't realize that they'll gain, as consumers, in the end.

Now where does the hate come from? It comes from partisanship. Team Red vs Team Green. People throwing insults at one another, dividing the consumer base into two camps, rather than realizing they're both being played. Ask a Kepler user how he or she feels playing GameWorks titles; I'm certain you'll hear a lot of negativity. Some folks dished out over $1K for graphics cards (the Titan Black comes to mind) which, a generation later, struggle to play many of the new GameWorks titles and lack proper driver support as well. I mean, you'd be surprised at the number of GTX 680 users who get bluescreens nowadays.

 

The market is set up in such a way that we're mere consumers, expected to keep gobbling up new graphics cards each generation for bragging rights. While some folks can afford this, after 2008 it has become quite hard to keep up with the new launches.

The PR from both companies has become incredibly hostile. Each side is throwing insults at the other. The partisan fans of each quote those arguments rather than doing their own research (though admittedly, it shouldn't be up to the consumer to do all of this research if we have a tech press in place).

I don't like any of this. I yearn for the days of Anand and Thomas Pabst: the days when tech journalists caught the ATi Radeon 8500 bilinear filtering cheat, or the nVIDIA GeForce FX driver compiler which circumvented Pixel Shader 2.0 operations. At least back then I could trust the tech press. Nowadays... I trust very few folks.

 

I've been following Linus, for example, ever since he started over at NCIX (where I purchase most of my gear). I thoroughly enjoyed his earlier work at NCIX. I used to wait for a new video, start it up, and watch it as I ate pizza and played video games. I like the guy, and I hope he keeps doing what he does. I'd like to see him cover this stuff more closely and ask the tough questions. I think he's got the talent to get it done.

 

I've rambled enough. Time to get back to drinking Coca-Cola and eating some fine Moroccan food.

 

ttyl bro and take care.

"Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth." - Arthur Conan Doyle (Sherlock Holmes)


Alright, since we're now on topic...

I'm very conflicted about what Kollock said:

There's a driver that supposedly will fully implement AC... but I thought NVIDIA was restricted by their hardware? Or is there some theoretical way that NVIDIA could fully use AC through software?

 

Maxwell 2 supports Asynchronous Compute in hardware... its Asynchronous Warp Schedulers support the feature. People who claimed it couldn't support the feature were misled by a driver issue... this is what misled Kollock as well. The driver reported the feature as working, but he couldn't get 'er done. So as far as he knew... the hardware couldn't support it.

People jumped onto that conclusion... it spread throughout the web like wildfire. There are limitations to the Asynchronous Compute capabilities of Maxwell 2's implementation through HyperQ... but it does support the feature.
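For what it's worth, you can at least see what the driver claims about concurrency from the CUDA side. Here's a minimal probe, assuming a CUDA-capable card; as this whole saga shows, what the driver advertises is not necessarily what you get in practice:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0); // query the first GPU

    printf("%s (SM %d.%d)\n", prop.name, prop.major, prop.minor);
    printf("concurrentKernels: %d\n", prop.concurrentKernels); // may kernels overlap?
    printf("asyncEngineCount:  %d\n", prop.asyncEngineCount);  // engines for copy/compute overlap
    return 0;
}
```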

"Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth." - Arthur Conan Doyle (Sherlock Holmes)


I look at this topic and the shitstorm it brewed, and ask myself why people are so rabid about the AMD/Nvidia garbage before we even get to an issue that affects anyone. My theory is that everyone on both sides of these flame wars has extreme confirmation bias, to the point that facts don't even register in their thoughts. It kind of reminds me of religion and all the BS that brings. Now, on this issue: can Maxwell support async compute? Yes. Can it support it natively? No. Does its support (when implemented) match up to AMD's? No. Will this matter in two months? Probably not.

https://linustechtips.com/main/topic/631048-psu-tier-list-updated/ Tier Breakdown (My understanding)--1 Godly, 2 Great, 3 Good, 4 Average, 5 Meh, 6 Bad, 7 Awful

 


Maxwell 2 supports Asynchronous Compute in hardware... its Asynchronous Warp Schedulers support the feature. People who claimed it couldn't support the feature were misled by a driver issue... this is what misled Kollock as well. The driver reported the feature as working, but he couldn't get 'er done. So as far as he knew... the hardware couldn't support it.

People jumped onto that conclusion... it spread throughout the web like wildfire. There are limitations to the Asynchronous Compute capabilities of Maxwell 2's implementation through HyperQ... but it does support the feature.

So in other words, until Asynchronous Compute was actually going to be useful, Nvidia didn't have their drivers configured to utilise it, which has led to the overall 'panic' that's been going on.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


I look at this topic and the shitstorm it brewed, and ask myself why people are so rabid about the AMD/Nvidia garbage before we even get to an issue that affects anyone. My theory is that everyone on both sides of these flame wars has extreme confirmation bias, to the point that facts don't even register in their thoughts. It kind of reminds me of religion and all the BS that brings. Now, on this issue: can Maxwell support async compute? Yes. Can it support it natively? No. Does its support (when implemented) match up to AMD's? No. Will this matter in two months? Probably not.

 

There has been a belief for years now that AMD were not doing anything: not innovating and not pulling their weight in the gaming industry. I can remember many tech reviewers on podcasts in the past few years ripping into AMD for not being innovative or not being competitive. When information suddenly comes out that shatters the reality of a lot of consumers and tech reviewers alike, the blowback is going to cause a few quakes; it's unavoidable.

 

For myself, the only consequence would be VR performance, but I went AMD this time, having already heard credible rumors about which side was doing better. For gaming it shouldn't be significant - at least outside of benchmark scores.

R9 3900XT | Tomahawk B550 | Ventus OC RTX 3090 | Photon 1050W | 32GB DDR4 | TUF GT501 Case | Vizio 4K 50'' HDR

 


There has been a belief for years now that AMD were not doing anything: not innovating and not pulling their weight in the gaming industry. I can remember many tech reviewers on podcasts in the past few years ripping into AMD for not being innovative or not being competitive. When information suddenly comes out that shatters the reality of a lot of consumers and tech reviewers alike, the blowback is going to cause a few quakes; it's unavoidable.

 

For myself, the only consequence would be VR performance. For gaming it shouldn't be significant - at least outside of benchmark scores.

I think the thing that shatters reality is that AMD was thinking years ahead, which is why they implemented bridgeless CrossFire with up to 99% scaling; that's really good for 4K gaming when running a multi-card setup, unlike Nvidia's bridged SLI, which scales outright badly.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


So in other words, until Asynchronous Compute was actually going to be useful, Nvidia didn't have their drivers configured to utilise it, which has led to the overall 'panic' that's been going on.

 

Well, the panic had a lot to do with the fact that nVIDIA had been working with Kollock, of Oxide, for over a year trying to get it to work. When they couldn't, nVIDIA "pressured" Kollock into deactivating the feature entirely (on AMD GPUs as well) so as to save face once the alpha benchmark released (as per Kollock). Kollock refused; he instead suggested that he and nVIDIA work on a vendor-ID-specific path to get the post-processing effects (glare and lighting) working. Kollock's conclusion was that, as far as he knew, Maxwell 2 couldn't do Async Compute (even though the driver exposed it). This is the solution we have right now when we run the Ashes of the Singularity benchmark: the AMD GPUs are running Async Compute and the nVIDIA GPUs are not.

 

Some people seized on the "nVIDIA pressuring a developer" angle, while others seized on "Maxwell 2 can't handle Async Compute". The former is a proper controversy; the latter still required a response from nVIDIA.

 

My suggestion is that, like the first time around, we wait for nVIDIA to fix the issue and work with Oxide to implement Asynchronous Compute before we draw any conclusions.

We also have to remember that Ashes of the Singularity:

  1. Barely makes use of Asynchronous Compute
  2. Runs only about 30% of its engine in compute right now; Kollock stated that, when completed, 50% of the engine will be handled by the compute rather than the graphics pipeline

 

It's still an alpha benchmark... we ought to keep that in mind. A LOT can and will change by the time the game is ready to be released.

"Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth." - Arthur Conan Doyle (Sherlock Holmes)


Maxwell 2 supports Asynchronous Compute in hardware... its Asynchronous Warp Schedulers support the feature. People who claimed it couldn't support the feature were misled by a driver issue... this is what misled Kollock as well. The driver reported the feature as working, but he couldn't get 'er done. So as far as he knew... the hardware couldn't support it.

People jumped onto that conclusion... it spread throughout the web like wildfire. There are limitations to the Asynchronous Compute capabilities of Maxwell 2's implementation through HyperQ... but it does support the feature.

Yes, I do recall AMD's Robert Hallock commenting on a (misleading) article, saying that the Maxwell architecture just couldn't have the same capabilities for AC as AMD's.

If you don't mind me quoting you:

The Asynchronous Warp Schedulers are in the hardware. Each SMM (which is a shader engine in GCN terms) holds four AWSs. Unlike GCN, the scheduling aspect is handled in software for Maxwell 2. In the driver there’s a Grid Management Queue which holds pending tasks and assigns the pending tasks to another piece of software which is the work distributor. The work distributor then assigns the tasks to available Asynchronous Warp Schedulers. It’s quite a few different “parts” working together. A software and a hardware component if you will.

Do you think that NVIDIA has the capacity to 'fully implement' AC, or is it something that will have to wait until Pascal?

'Fanboyism is stupid' - someone on this forum.

Be nice to each other boys and girls. And don't cheap out on a power supply.


CPU: Intel Core i7 4790K - 4.5 GHz | Motherboard: ASUS MAXIMUS VII HERO | RAM: 32GB Corsair Vengeance Pro DDR3 | SSD: Samsung 850 EVO - 500GB | GPU: MSI GTX 980 Ti Gaming 6GB | PSU: EVGA SuperNOVA 650 G2 | Case: NZXT Phantom 530 | Cooling: CRYORIG R1 Ultimate | Monitor: ASUS ROG Swift PG279Q | Peripherals: Corsair Vengeance K70 and Razer DeathAdder

 


Yes, I do recall AMD's Robert Hallock commenting on a (misleading) article, saying that the Maxwell architecture just couldn't have the same capabilities for AC as AMD's.

If you don't mind me quoting you:

Do you think that NVIDIA has the capacity to 'fully implement' AC, or is it something that will have to wait until Pascal?

 

This wasn't directed at me, but that depends entirely on how many commands in queue are required for "full implementation": if 31 or fewer, then yes; if more, I'd wager no.
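For context, that 31-or-fewer figure matches how Maxwell 2's HyperQ setup is commonly described: 32 hardware work queues, one for graphics plus 31 for compute. On the CUDA side, the number of hardware queues a process actually uses is capped by the CUDA_DEVICE_MAX_CONNECTIONS environment variable (default 8, maximum 32). A rough sketch of that ceiling follows; this is only a CUDA-side illustration, not the DX12 path:

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

int main() {
    // Must be set before the CUDA context is created; caps how many hardware
    // work queues ("connections") host streams get multiplexed onto. POSIX setenv.
    setenv("CUDA_DEVICE_MAX_CONNECTIONS", "32", 1);

    const int kStreams = 32; // one per hardware queue, per the 31+1 figure
    cudaStream_t streams[kStreams];
    for (int i = 0; i < kStreams; ++i)
        cudaStreamCreate(&streams[i]);

    // Any streams beyond the connection count alias onto the same hardware
    // queue and serialize behind one another.
    printf("created %d streams over at most 32 hardware queues\n", kStreams);

    for (int i = 0; i < kStreams; ++i)
        cudaStreamDestroy(streams[i]);
    return 0;
}
```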

https://linustechtips.com/main/topic/631048-psu-tier-list-updated/ Tier Breakdown (My understanding)--1 Godly, 2 Great, 3 Good, 4 Average, 5 Meh, 6 Bad, 7 Awful

 


Yes, I do recall AMD's Robert Hallock commenting on a (misleading) article, saying that the Maxwell architecture just couldn't have the same capabilities for AC as AMD's.

If you don't mind me quoting you:

Do you think that NVIDIA has the capacity to 'fully implement' AC, or is it something that will have to wait until Pascal?

 

I think we may be looking beyond Pascal for fine-grained preemption support. The little birds have sung my way; I'll be adding some new information to my post at Overclock.net and I'll update you all here...

 

It has the potential to be quite illuminating, as well as to turn 2016 into a rather interesting year :)

"Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth." - Arthur Conan Doyle (Sherlock Holmes)


Oh well... we're all waiting for Pascal anyway.

Mobo: Z97 MSI Gaming 7 / CPU: i5-4690k@4.5GHz 1.23v / GPU: EVGA GTX 1070 / RAM: 8GB DDR3 1600MHz@CL9 1.5v / PSU: Corsair CX500M / Case: NZXT 410 / Monitor: 1080p IPS Acer R240HY bidx


I know a lot of people who are upset at Nvidia atm. However, as a consumer I can't help but feel misled. I'm currently not an Nvidia card owner, but I am the owner of a high-end graphics card. Based on the current information available to consumers, owning a Maxwell 2 or Fury card is a waste of money, because there is no "full" DX12 support. I won't put all the blame on Nvidia or AMD, because as a consumer I should have done better research.

 

So here is my question to you: going forward, how can this be avoided? Where is a resource on the net that explains, in layman's terms, the differences in GPU designs and their benefits? I'm more upset at myself because I knew better. I should never have gotten a new GPU right before a die shrink and a new microarchitecture, but I was running on old 6970s in CrossFire. In less than a year's time, I feel my Fury X and Maxwell 2 cards will be nothing more than cup coasters.

I don't understand how we were misled by AMD. AMD does fully support DX12. The features they don't support are DX12.1 features, which we've known about for a while now. AMD has never claimed to support DX12.1.

CPU i7 6700 Cooling Cryorig H7 Motherboard MSI H110i Pro AC RAM Kingston HyperX Fury 16GB DDR4 2133 GPU Pulse RX 5700 XT Case Fractal Design Define Mini C Storage Trascend SSD370S 256GB + WD Black 320GB + Sandisk Ultra II 480GB + WD Blue 1TB PSU EVGA GS 550 Display Nixeus Vue24B FreeSync 144 Hz Monitor (VESA mounted) Keyboard Aorus K3 Mechanical Keyboard Mouse Logitech G402 OS Windows 10 Home 64 bit


Fok, I should have waited for Volta instead of upgrading to a 970.

So... we don't know if Maxwell has async compute, right?

Maybe the developer wasn't able to use it correctly, or Maxwell really sucks at that task.


Well, I hope my GTX 970 supports async compute then, otherwise I'm not going to be very happy....

• FX-8320  GTX 970  M5A97 R2  Corsair H100I GT  500W PSU  RIPJAWS 8GB DDR3  SAMSUNG 840 EVO 120GB  1TB HDD 

 

Maxwell 2 supports Asynchronous Compute in hardware... its Asynchronous Warp Schedulers support the feature. People who claimed it couldn't support the feature were misled by a driver issue... this is what misled Kollock as well. The driver reported the feature as working, but he couldn't get 'er done. So as far as he knew... the hardware couldn't support it.

People jumped onto that conclusion... it spread throughout the web like wildfire. There are limitations to the Asynchronous Compute capabilities of Maxwell 2's implementation through HyperQ... but it does support the feature.

 

But does it actually? Is that not just an assumption, like the one behind Kollock's conclusion? Even if NVidia manages to emulate the scheduler and make real async compute work, it will still carry a large performance penalty. Either way, NVidia dun goofed.

 

So in other words, until Asynchronous Compute was actually going to be useful, Nvidia didn't have their drivers configured to utilise it, which has led to the overall 'panic' that's been going on.

 

The problem is that the cards do not have a hardware scheduler, which means scheduling has to be emulated via the drivers. No matter how you look at it, that will result in higher latency and poorer performance. In the end it all depends on the utilization in the upcoming games: how much is compute, how many queues will be used, etc. If post-processing, lighting and so on are done in compute, we might see this become a big issue in every kind of game.

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


Pascal and Arctic Islands cannot arrive soon enough.

Can't wait to see how these new 14/16nm cards will render all our 28nm cards useless.

Please avoid feeding the argumentative narcissistic academic monkey.

"the last 20 percent – going from demo to production-worthy algorithm – is both hard and is time-consuming. The last 20 percent is what separates the men from the boys" - Mobileye CEO


Can't wait to see how these new 14/16nm cards will render all our 28nm cards useless.

Plot twist: TSMC can't get 16nm FF+ yields up due to large die sizes. 14/16nm delayed until 2017.

2nd Plot Twist: Intel reaches 10nm ahead of schedule and tries to make a dGPU based on the Cannonlake graphics engine in late 2016.

3rd Plot Twist: Samsung decides to try producing dGPUs for AMD and Nvidia.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd

