
So they said "ASHES IS ONLY BETA, THERE WILL BE A DRIVER ONCE THE GAME IS OUT." THE GAME IS OUT!

El Diablo
18 minutes ago, Dan Castellaneta said:

Jesus Christ, please give this a fucking break. You're all over YouTube, you're all over Linus Tech Tips spreading stupid ass shit.

I'm sorry but please, stop.

How else is he supposed to show off his AMD fanboyism and his ignorance about async compute?

An avid PC Enthusiast for 20 years.

Current Rig: i7-5820k - GTX 980TI SC+ - 32GB Corsair DDR4-3200 - 500GB 850 EVO M.2 - GA-X99-SLI

Kids' Rigs: FX-6300 - GTX 960 - 8GB Corsair DDR3-1600 - 1TB 7200RPM - GA-78LMT-USB3

Wife's Rig: FX-9590 - R9 Fury X - 32GB Corsair DDR3-2400 - 512GB 950 EVO M.2 - Asus 970 PRO Gaming/Aura


On 03/04/2016 at 1:59 AM, El Diablo said:

Giving an async compute driver to Maxwell cards is like giving a hyper-threading firmware upgrade to a CPU that has no hyper-threading

 

the CPU will run waaaay slower...

It's more like forcing hyperthreading on a CPU whose cores are all at full utilisation and would gain nothing from advanced scheduling techniques. Async Compute helps AMD because around 20% of the Fury X's stream processors are waiting for tasks at any given time. This isn't the case with Maxwell, which is why Maxwell doesn't gain anything, and actually loses performance when you try to force this.
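To make the analogy concrete, here's a minimal sketch (my own illustration, not from the game) of what "async compute" actually means at the D3D12 API level. The application records work into two independent queues; whether the compute work truly overlaps with graphics is up to the driver and hardware, which is exactly why a GPU can "support" the feature yet gain nothing from it:

    #include <d3d12.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    // Create a graphics ("direct") queue and a separate compute queue.
    // Submitting to two queues *permits* overlap; it does not guarantee it.
    void CreateQueues(ID3D12Device* device,
                      ComPtr<ID3D12CommandQueue>& graphicsQueue,
                      ComPtr<ID3D12CommandQueue>& computeQueue)
    {
        D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
        gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;   // draw + compute + copy
        device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&graphicsQueue));

        D3D12_COMMAND_QUEUE_DESC compDesc = {};
        compDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE; // compute + copy only
        device->CreateCommandQueue(&compDesc, IID_PPV_ARGS(&computeQueue));
    }

On GCN, the ACEs can feed the compute queue into idle shader units alongside graphics work; on hardware that can't interleave the two, the driver effectively serialises the queues.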


30 minutes ago, Michamus said:

What's Ashes' market cap? I have a feeling Nvidia doesn't care about an indie RTS game that runs a few frames slower. I get the feeling the only reason we're seeing a major difference is because the dev team front-loaded command lists to exceed 92. This is really the only time the R9 will start to perform better with async compute, and to pretend it's any more than a few frames per second is disingenuous.

 

 Besides, I hope they don't fix it, so I can watch people continue to say that Maxwell doesn't support Async Compute, even though it's been a feature of CUDA since 2008.

Go get your eyes checked

[Image: Ashes of the Singularity DX12 benchmark chart]

 

CPU i7 6700 Cooling Cryorig H7 Motherboard MSI H110i Pro AC RAM Kingston HyperX Fury 16GB DDR4 2133 GPU Pulse RX 5700 XT Case Fractal Design Define Mini C Storage Transcend SSD370S 256GB + WD Black 320GB + Sandisk Ultra II 480GB + WD Blue 1TB PSU EVGA GS 550 Display Nixeus Vue24B FreeSync 144 Hz Monitor (VESA mounted) Keyboard Aorus K3 Mechanical Keyboard Mouse Logitech G402 OS Windows 10 Home 64 bit


5 minutes ago, othertomperson said:

It's more like forcing hyperthreading on a CPU whose cores are all at full utilisation and would gain nothing from advanced scheduling techniques. Async Compute helps AMD because around 20% of the Fury X's stream processors are waiting for tasks at any given time. This isn't the case with Maxwell, which is why Maxwell doesn't gain anything, and actually loses performance when you try to force this.

No, it helps them because they actually have asynchronous shaders. Otherwise the 390x would not be matching a 980 ti.



6 minutes ago, ivan134 said:

No, it helps them because they actually have asynchronous shaders. Otherwise the 390x would not be matching a 980 ti.

...like I said, Asynchronous Shaders can only have any effect at all if you have shaders that are underutilised at any moment in time. This was clearly happening with AMD cards, but Nvidia are under the impression that this is not happening with Maxwell.

 

The 290X has always been an absolute beast in terms of raw compute power, but has always suffered from a lack of optimisation. Now that we finally have an API that is able to utilise that, I'm not sure where the surprise is coming from.


17 minutes ago, othertomperson said:

...like I said, Asynchronous Shaders can only have any effect at all if you have shaders that are underutilised at any moment in time. This was clearly happening with AMD cards, but Nvidia are under the impression that this is not happening with Maxwell.

 

The 290X has always been an absolute beast in terms of raw compute power, but has always suffered from a lack of optimisation. Now that we finally have an API that is able to utilise that, I'm not sure where the surprise is coming from.

Being a beast at raw compute would mean that the GTX 580 would also be a great DX12 card. It's not just about raw compute power (more specifically DP), because by that logic the 780 ti should be beating the 980 and 980 ti too, and the 290x/390x would also be beating the Fiji cards, since they have better DP as well. If you don't have asynchronous shaders, you are not going to see improvements from asynchronous compute regardless of how good the card is at compute.

 

Your theory of diminishing returns would also only apply to the Fiji cards, except all GCN 1.1 and GCN 1.2 cards are seeing improvements too.

Edited by ivan134



7 minutes ago, ivan134 said:

Being a beast at raw compute would mean that the GTX 580 would also be a great DX12 card. It's not just about raw compute power (more specifically DP), because by that logic the 780 ti should be beating the 980 and 980 ti too, and the 290x/390x would also be beating the Fiji cards, since they have better DP as well. If you don't have asynchronous shaders, you are not going to see improvements from asynchronous compute regardless of how good the card is at compute.

 

Your theory of diminishing returns would also only apply to the Fiji cards, except all GCN 1.1 and GCN 1.2 cards are seeing improvements too.

 

You seem to be a bit confused. AMD have been utilising asynchronous shaders, for some reason, for the last five years. Nvidia have not. Since the 600 series, Nvidia have focused on using the GPU as efficiently as possible in a serial pipeline, since that is all that DirectX 11 can utilise. This is why Maxwell is, seemingly paradoxically, their least powerful yet best-performing generation of recent years, while the GTX 580 still holds up in terms of sheer power. The 580 is a good example of a card that, had it supported Async Compute, could have been a very good DirectX 12 card (assuming game devs were going to support async compute, which is still a big if).

 

What this means is that under DirectX 11, Maxwell GPUs are already very highly utilised and wouldn't gain as much from async shaders as the GTX 580 would.


38 minutes ago, othertomperson said:

It's more like forcing hyperthreading on a CPU whose cores are all at full utilisation and would gain nothing from advanced scheduling techniques. Async Compute helps AMD because around 20% of the Fury X's stream processors are waiting for tasks at any given time. This isn't the case with Maxwell, which is why Maxwell doesn't gain anything, and actually loses performance when you try to force this.

Yes, Async fixes a fundamental problem on AMD GPUs, which is inefficiency, as you point out: all of the cores are simply not being used all of the time. It doesn't increase performance in DX12 over DX11 so much as it fixes a fundamental flaw which could not be fixed in DX11. AMD did a poor job with chip design in regards to efficiency, but regardless of how well you design your chip, the consumer is gonna use it for hundreds of different games, making it impossible to design a chip well enough to stay at near 100% efficiency in all of them.

 

No, Nvidia cards are not able to utilize 100% of the cores 100% of the time. Granted, Nvidia has done a better job (much better, in fact) of keeping their cards efficient, but there are still inherent problems with trying to run hundreds of different games on a range of different graphics cards. Nvidia is efficient, but proper Async support would still help.

 

Then there is the issue of whether Maxwell can even properly utilize Async Compute. Short answer: no, the workload pipeline doesn't allow it. But does it matter? No. Async is not some magical "enable for +20% performance" switch. It's a time-consuming DX12 feature to implement which allows for a 5-10% performance increase, depending on how inefficient the GPU was without it.
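Quick sanity check on those numbers (my own back-of-the-envelope sketch, not from any benchmark): under the simplifying assumption that async compute can only reclaim otherwise-idle shader cycles, the theoretical ceiling on the gain is 1 / (1 - idle) - 1:

    #include <cstdio>

    int main() {
        // Idle-shader fractions in the ranges discussed in this thread (assumed values).
        const double idleFractions[] = {0.05, 0.10, 0.20};
        for (double idle : idleFractions) {
            // Upper bound assuming async compute perfectly fills every idle cycle.
            double ceiling = 1.0 / (1.0 - idle) - 1.0;
            std::printf("%2.0f%% idle shaders -> at most +%.1f%% performance\n",
                        idle * 100.0, ceiling * 100.0);
        }
        return 0;
    }

With the ~20% idle figure quoted earlier for the Fury X, the ceiling is +25%, so real-world gains of 5-10% are entirely plausible.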

Motherboard: Asus X570-E
CPU: 3900x 4.3GHZ

Memory: G.skill Trident GTZR 3200mhz cl14

GPU: AMD RX 570

SSD1: Corsair MP510 1TB

SSD2: Samsung MX500 500GB

PSU: Corsair AX860i Platinum


1 minute ago, othertomperson said:

 

You seem to be a bit confused. AMD have been utilising asynchronous shaders, for some reason, for the last five years. Nvidia have not. Since the 600 series, Nvidia have focused on using the GPU as efficiently as possible in a serial pipeline, since that is all that DirectX 11 can utilise. The 580 is a good example of a card that, had it supported Async Compute, could have been a very good DirectX 12 card (assuming game devs were going to support async compute, which is still a big if).

 

What this means is that under DirectX 11, Maxwell GPUs are already very highly utilised and wouldn't gain as much from async shaders as the GTX 580 would.

No, they have not, because it's impossible to do so with DX11. It's clear now you don't know what asynchronous compute is, because AMD first introduced it with Mantle and it's a DX12 feature now. Maxwell also supports asynchronous compute, but does not have the actual hardware to do it.



1 minute ago, ivan134 said:

No, they have not, because it's impossible to do so with DX11. It's clear now you don't know what asynchronous compute is, because AMD first introduced it with Mantle and it's a DX12 feature now. Maxwell also supports asynchronous compute, but does not have the actual hardware to do it.

 

You seem very confused now. Software's utilisation of a feature and hardware's potential to support said feature are entirely independent. My CPU doesn't magically stop supporting Hyperthreading just because I start running single-threaded applications.

 

2 minutes ago, MMKing said:

Yes, Async fixes a fundamental problem on AMD GPUs, which is inefficiency, as you point out: all of the cores are simply not being used all of the time. It doesn't increase performance in DX12 over DX11 so much as it fixes a fundamental flaw which could not be fixed in DX11. AMD did a poor job with chip design in regards to efficiency, but regardless of how well you design your chip, the consumer is gonna use it for hundreds of different games, making it impossible to design a chip well enough to stay at near 100% efficiency in all of them.

 

No, Nvidia cards are not able to utilize 100% of the cores 100% of the time. Granted, Nvidia has done a better job (much better, in fact) of keeping their cards efficient, but there are still inherent problems with trying to run hundreds of different games on a range of different graphics cards. Nvidia is efficient, but proper Async support would still help.

 

Then there is the issue of whether Maxwell can even properly utilize Async Compute. Short answer: no, the workload pipeline doesn't allow it. But does it matter? No. Async is not some magical "enable for +20% performance" switch. It's a time-consuming DX12 feature to implement which allows for a 5-10% performance increase, depending on how inefficient the GPU was without it.

The claim that Maxwell is efficient enough to not benefit from Async Compute comes from an Nvidia engineer quoted here


1 minute ago, othertomperson said:

The claim that Maxwell is efficient enough to not benefit from Async Compute comes from an Nvidia engineer quoted here

The claim is:

"First, if Async Compute is a way to increase performance, what matters in the end is overall performance. If GeForce GPUs are more efficient on this basis than the Radeon GPUs, the use of multi-engine in an attempt to boost performance is not a top priority."

 

First of all, it's important to note that these were not the exact words; the article was translated from French to English, and it's only fair that this is mentioned as we discuss the statement.

 

Second, it was not claimed that Maxwell is in fact efficient enough to not benefit from Async Compute at all. The Nvidia engineer (keep this in mind as well) said that if the GeForce GPU was more efficient than the competitor, attempting to increase performance by further increasing efficiency was not a top priority. It was not stated that Async was useless, or near enough useless as to make no difference, only that the gains Async would bring were not significant enough to make it a top priority.

 

Nvidia may be a large company, but that doesn't mean that all projects are given the green light. There may be other, more important work to be done rather than attempting to improve the workload pipeline. In fact, considering how inefficient AMD GPUs are and how low AMD's market share currently is, it may be in Nvidia's interest not to support Async, as proper Async support would encourage game developers to use a piece of software that ultimately benefits AMD more than Nvidia.

Motherboard: Asus X570-E
CPU: 3900x 4.3GHZ

Memory: G.skill Trident GTZR 3200mhz cl14

GPU: AMD RX 570

SSD1: Corsair MP510 1TB

SSD2: Samsung MX500 500GB

PSU: Corsair AX860i Platinum


28 minutes ago, othertomperson said:

 

You seem very confused now. Software's utilisation of a feature and hardware's potential to support said feature are entirely independent. My CPU doesn't magically stop supporting Hyperthreading just because I start running single-threaded applications.

 

The claim that Maxwell is efficient enough to not benefit from Async Compute comes from an Nvidia engineer quoted here

You seem to be jumping all over the place. You said:

36 minutes ago, othertomperson said:

 

You seem to be a bit confused. AMD have been utilising asynchronous shaders, for some reason, for the last five years. Nvidia have not. Since the 600 series, Nvidia have focused on using the GPU as efficiently as possible in a serial pipeline, since that is all that DirectX 11 can utilise. This is why Maxwell is, seemingly paradoxically, their least powerful yet best-performing generation of recent years, while the GTX 580 still holds up in terms of sheer power. The 580 is a good example of a card that, had it supported Async Compute, could have been a very good DirectX 12 card (assuming game devs were going to support async compute, which is still a big if).

 

What this means is that under DirectX 11, Maxwell GPUs are already very highly utilised and wouldn't gain as much from async shaders as the GTX 580 would.

I would like to know how they've been doing the impossible, since you can't do that in DX11. You said before that it was tied to raw compute capabilities, but ignored my question of why, if that's the case, the 780 ti is not beating a 980 and a 390x is not beating a Fury X. You said it's about diminishing returns from using too many SPs in the Fury X, making a lot of them redundant, but also didn't answer why performance improvements are seen all the way down to the 380.

 

The architecture isn't inefficient; the API is. Raw compute capabilities are not going to help the architecture if it isn't tailored to make use of asynchronous compute by having ACEs (asynchronous compute engines), which is why the 390x is not beating a Fury X. You are partially right that the Fury X is not at its max potential, but that is because it's an imbalanced chip, having only 64 ROPs, the same number as the 390x. Had it been 96 or 128, it would most likely be beating the 980 ti even in DX11. If you want evidence of just how inefficient DX11 is: with asynchronous compute turned on, an i3 performs like an i7 and even beats it in certain scenarios.

 



29 minutes ago, MMKing said:

The claim is:

"First, if Async Compute is a way to increase performance, what matters in the end is overall performance. If GeForce GPUs are more efficient on this basis than the Radeon GPUs, the use of multi-engine in an attempt to boost performance is not a top priority."

 

First of all, it's important to note that these were not the exact words; the article was translated from French to English, and it's only fair that this is mentioned as we discuss the statement.

 

Second, it was not claimed that Maxwell is in fact efficient enough to not benefit from Async Compute at all. The Nvidia engineer (keep this in mind as well) said that if the GeForce GPU was more efficient than the competitor, attempting to increase performance by further increasing efficiency was not a top priority. It was not stated that Async was useless, or near enough useless as to make no difference, only that the gains Async would bring were not significant enough to make it a top priority.

 

Nvidia may be a large company, but that doesn't mean that all projects are given the green light. There may be other, more important work to be done rather than attempting to improve the workload pipeline. In fact, considering how inefficient AMD GPUs are and how low AMD's market share currently is, it may be in Nvidia's interest not to support Async, as proper Async support would encourage game developers to use a piece of software that ultimately benefits AMD more than Nvidia.

More like it wouldn't help them milk their customers. Nvidia advertised Maxwell as DX12 cards. They went a step further to support feature level 12_1 while ignoring the component of 12_0 that would have prevented them from selling fairy tales about power consumption.

[Image: DirectX-12_GeForce-GTX-980-Ti-Support.png]

 



22 minutes ago, ivan134 said:

You seem to be jumping all over the place. You said:

I would like to know how they've been doing the impossible, since you can't do that in DX11. You said before that it was tied to raw compute capabilities, but ignored my question of why, if that's the case, the 780 ti is not beating a 980 and a 390x is not beating a Fury X. You said it's about diminishing returns from using too many SPs in the Fury X, making a lot of them redundant, but also didn't answer why performance improvements are seen all the way down to the 380.

 

The architecture isn't inefficient; the API is. Raw compute capabilities are not going to help the architecture if it isn't tailored to make use of asynchronous compute by having ACEs (asynchronous compute engines), which is why the 390x is not beating a Fury X. You are partially right that the Fury X is not at its max potential, but that is because it's an imbalanced chip, having only 64 ROPs, the same number as the 390x. Had it been 96 or 128, it would most likely be beating the 980 ti even in DX11. If you want evidence of just how inefficient DX11 is: with asynchronous compute turned on, an i3 performs like an i7 and even beats it in certain scenarios.

-Video snipped-

 

 

1. It doesn't really matter whether it's the DX11 API that is inefficient, or the Fury X and/or GCN 1.2 instruction set. The fact is that Nvidia's Maxwell is more efficient than the competition at DX11 performance. This translates to the 390 costing the same as the 970 despite the 390 chip being larger than the 970 chip, so AMD has taken a cut in profits in order to compete with Nvidia.

 

2. Better utilization of multi-core CPUs is an inherent DX12 feature and is not related to Async Compute.

 

Lastly, I think you messed up a sentence or two there. Why would the 390x outperform the Fury X? The Fury X has more cores, more TMUs and an equal number of ROPs. It may very well be that the 390x is a better-balanced chip, but only synthetic software that intentionally bottlenecks on ROPs is going to have the Fury X and the 390x perform equally. No game or synthetic benchmark is going to have the 390x perform better.

 

 

Edit: As for the image you linked: I agree. Filthy, filthy liars.



What's interesting: my friend has an MSI gaming laptop bought last year that features an i7-5700HQ and a GTX 960M. He loses 6 FPS when he switches from DX11 to DX12 in Ashes.

How is it possible that you switch to a better API and lose frames? That's stupid. Note that the 960M is essentially a 750 Ti.

CPU: AMD Ryzen 7 5800X3D GPU: AMD Radeon RX 6900 XT 16GB GDDR6 Motherboard: MSI PRESTIGE X570 CREATION
AIO: Corsair H150i Pro RAM: Corsair Dominator Platinum RGB 32GB 3600MHz DDR4 Case: Lian Li PC-O11 Dynamic PSU: Corsair RM850x White


2 minutes ago, MMKing said:

 

1. It doesn't really matter whether it's the DX11 API that is inefficient, or the Fury X and/or GCN 1.2 instruction set. The fact is that Nvidia's Maxwell is more efficient than the competition at DX11 performance. This translates to the 390 costing the same as the 970 despite the 390 chip being larger than the 970 chip, so AMD has taken a cut in profits in order to compete with Nvidia.

 

2. Better utilization of multi-core CPUs is an inherent DX12 feature and is not related to Async Compute.

 

Lastly, I think you messed up a sentence or two there. Why would the 390x outperform the Fury X? The Fury X has more cores, more TMUs and an equal number of ROPs. It may very well be that the 390x is a better-balanced chip, but only synthetic software that intentionally bottlenecks on ROPs is going to have the Fury X and the 390x perform equally. No game or synthetic benchmark is going to have the 390x perform better.

I was pointing out the flaw in his argument that it's somehow tied to the compute capabilities of a GPU, since the 390x has better DP than a Fury X. As for the CPU, we're not seeing that better CPU utilization in other DX12 games that don't use async compute.

 

There is a similar flaw in your argument. You are trying to compare the chip sizes of two different architectures and find some correlation with DX12 (or 11) performance. The 980 ti has a larger die than a Fury X, so utilization or redundancy arguments don't apply here; by your logic, the Fury X would be more efficient than the 980 ti.

 

Guys, it's very simple: Maxwell has no ACEs, so it doesn't see performance increases with asynchronous compute.



3 minutes ago, Morgan MLGman said:

What's interesting: my friend has an MSI gaming laptop bought last year that features an i7-5700HQ and a GTX 960M. He loses 6 FPS when he switches from DX11 to DX12 in Ashes.

How is it possible that you switch to a better API and lose frames? That's stupid. Note that the 960M is essentially a 750 Ti.

All Maxwell cards lose FPS when you switch to DX12 in Ashes.



6 minutes ago, ivan134 said:

All Maxwell cards lose FPS when you switch to DX12 in Ashes.

bullshit

| CPU: Core i7-8700K @ 4.89ghz - 1.21v  Motherboard: Asus ROG STRIX Z370-E GAMING  CPU Cooler: Corsair H100i V2 |
| GPU: MSI RTX 3080Ti Ventus 3X OC  RAM: 32GB T-Force Delta RGB 3066mhz |
| Displays: Acer Predator XB270HU 1440p Gsync 144hz IPS Gaming monitor | Oculus Quest 2 VR


5 minutes ago, i_build_nanosuits said:

bullshit

[Charts: Ashes of the Singularity (Beta) - High Quality - Async Shader Performance; High Quality - Async Shading Perf. Gain; Extreme Quality - Async Shading Perf. Gain]



47 minutes ago, ivan134 said:

More like it wouldn't help them milk their customers. Nvidia advertised Maxwell as DX12 cards. They went a step further to support feature level 12_1 while ignoring the component of 12_0 that would have prevented them from selling fairy tales about power consumption.

[Image: DirectX-12_GeForce-GTX-980-Ti-Support.png]

 

Wait until other games start to hit those features, though!



On 4/5/2016 at 9:36 PM, El Diablo said:

talk about no freedom of speech

You are sadly misinformed if you think your freedoms in one country transfer to a private forum on the internet (which is owned by a company in another country). Rest assured, if you went into another person's house (even in the US) and started ranting and raving about something the homeowner doesn't care about, your freedom of speech would mean nothing when they forced you to leave. Please refresh your knowledge of the freedoms you feel entitled to, or one day you'll be very upset when you lose them.

-KuJoe


14 minutes ago, ivan134 said:

I was pointing out the flaw in his argument that it's somehow tied to the compute capabilities of a GPU, since the 390x has better DP than a Fury X. As for the CPU, we're not seeing that better CPU utilization in other DX12 games that don't use async compute.

 

There is a similar flaw in your argument. You are trying to compare the chip sizes of two different architectures and find some correlation with DX12 (or 11) performance. The 980 ti has a larger die than a Fury X, so utilization or redundancy arguments don't apply here; by your logic, the Fury X would be more efficient than the 980 ti.

 

Guys, it's very simple: Maxwell has no ACEs, so it doesn't see performance increases with asynchronous compute.

This part of the argument completely went over my head.

 

The intention of comparing chip sizes across two different architectures was in the context of production cost, price and profit. The message I was trying to communicate was that, since Maxwell is a better-designed chip for the purpose of DX11 rendering, AMD is forced to produce a larger chip and sell it at a lower price relative to its size in order to compete with Nvidia.

 

I am not trying to make a performance correlation between the 970 and the 390 based on chip size. The fact is that the 390 (GCN 1.1) chip is near dead even with the 970 despite being about 10% larger, and the fully unlocked 980 performs better than the 390x, despite the 390x also being 10% larger. Part of this is the increased number of memory controllers on the AMD chip, which allows for more GDDR5 chips, which in turn helps at 1440p and especially 4K. But the fact of the matter is that the Maxwell 900 series is more efficient relative to chip size, forcing AMD to manufacture larger chips in order to compete.

 

 

I agree with your statement regarding the lack of ACEs on Maxwell. As of now, when forced, Maxwell gains no performance with Async on. In fact, we are seeing a slight reduction in performance.

 

Nvidia's word on the issue:

"DirectX 12 also enables new visual effects and rendering techniques on GPUs with the necessary hardware, such as the GeForce GTX 980 Ti and other 2nd Generation Maxwell GPUs. Effects and techniques include, but are not limited to, Volume Tiled Resources, Conservative Rasters, Raster Order Views, Tiled Resources, Typed UAV Access, Bindless Textures, and Async Compute."

Source: http://www.geforce.com/whats-new/articles/introducing-the-geforce-gtx-980-ti
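Worth noting (my own sketch, not Nvidia's code): several of the features named in that quote are queryable caps in D3D12, but Async Compute is not one of them. Any D3D12 device must accept a separate compute queue; whether the submitted work actually overlaps is invisible to the API. A hypothetical cap dump:

    #include <d3d12.h>
    #include <cstdio>

    void PrintDx12Caps(ID3D12Device* device) {
        D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
        if (SUCCEEDED(device->CheckFeatureSupport(
                D3D12_FEATURE_D3D12_OPTIONS, &opts, sizeof(opts)))) {
            std::printf("Conservative raster tier: %d\n",
                        (int)opts.ConservativeRasterizationTier);
            std::printf("Raster order views:       %s\n",
                        opts.ROVsSupported ? "yes" : "no");
            std::printf("Tiled resources tier:     %d\n",
                        (int)opts.TiledResourcesTier);
            std::printf("Typed UAV load formats:   %s\n",
                        opts.TypedUAVLoadAdditionalFormats ? "yes" : "no");
            // Nothing here reports whether async compute actually overlaps;
            // that part is entirely down to the scheduler and hardware
            // (ACEs on GCN), which is the crux of this whole thread.
        }
    }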

 

 



3 hours ago, ivan134 said:

Go get your eyes checked

[Snipped]

 

I can't seem to find a reputable source for that image, and it shows far more extreme differences in frame rates than other reputable reports have. The fact that the 390x is beating the 980ti by that kind of margin in your graph has me extremely skeptical of its validity. I'd be deeply interested to know your source for this image.

 

To me, it looks like someone took the i3-4330 frame rates with a GTX 980 and re-labeled them as a GTX 980ti. Every AotS DX12 bench I've seen has the 390x head to head with the GTX 980, with a margin (in the 390x's favor) of 5%-7%.


