
AMD silently nerfing the performance of older GCN cards

Just now, M.Yurizaki said:

But PC gamers believe 25 FPS is "unplayable"

Still a lot better than 16; that's 9 FPS right there. And btw, hate to be that guy, but 9/16 = 0.5625, so the increase is about 56%, not 50%.
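The arithmetic being argued over takes two lines to check; a quick sketch using the FPS figures from the thread:

```python
# Figures from the thread: 16 FPS without async compute, 25 FPS with it.
old_fps, new_fps = 16, 25

absolute_gain = new_fps - old_fps              # 9 FPS, not 10
relative_gain = (new_fps - old_fps) / old_fps  # 0.5625

print(absolute_gain)           # 9
print(f"{relative_gain:.0%}")  # 56%: a bit more than the quoted 50%
```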


26 minutes ago, M.Yurizaki said:

On the other side I'm thinking "how much of a performance benefit do old cards see and is the absolute performance after the fact even worth writing about?"

 

For instance, if a GCN 1.0 card without Async Compute gets 16 FPS and with ASync Compute gets 25 FPS, it's still a 50% increase in performance, but would you still care?

Well, that's a pretty big difference. Realistically though, it's been 16 FPS with it off and 16.5 with it on for GCN 1 cards.

CPU i7 6700 Cooling Cryorig H7 Motherboard MSI H110i Pro AC RAM Kingston HyperX Fury 16GB DDR4 2133 GPU Pulse RX 5700 XT Case Fractal Design Define Mini C Storage Transcend SSD370S 256GB + WD Black 320GB + Sandisk Ultra II 480GB + WD Blue 1TB PSU EVGA GS 550 Display Nixeus Vue24B FreeSync 144 Hz Monitor (VESA mounted) Keyboard Aorus K3 Mechanical Keyboard Mouse Logitech G402 OS Windows 10 Home 64 bit


Just now, M.Yurizaki said:

But PC gamers believe 25 FPS is "unplayable"

But it's the difference between a slideshow and the illusion of smooth movement. I could donate the card if it got 25 FPS; that's still doable for console peasants and people who just don't have shitloads of money for a PC, but 16 FPS completely removes that option. I also wonder if this affects non-gaming applications such as folding, mining, or cracking, stuff like that.


1 minute ago, tlink said:

-snip-

Nah, it doesn't. Async compute strictly has to do with graphics processing.



1 minute ago, ivan134 said:

Nah, it doesn't. Async compute strictly has to do with graphics processing.

Okay, well, if we go by what other people say then it's barely of influence on these cards. Thanks for telling me, though.


Either way, the point is: is the end result even worth pursuing?

 

I mean, I was rolling my eyes when HD 7000 owners were going "YAY DX12/VULKAN SUPPORT!" By the time "proper" cards came out, those cards would be 3+ generations old. If you're already maxing out the potential of an old card, a new API isn't going to do much else.

 

I'm still under the belief that DX12/Vulkan will do one of two things:

  • Increase the graphical fidelity given the same system requirements
  • Reduce system requirements for the same graphical fidelity.

And my money's on the latter because, well, accessibility: it would make a Core i7 U-series processor with an eGPU a much more sensible combination.


Hm, was there a certain reason for this? I have an R9 290 and didn't see any difference.

| Ryzen 7 7800X3D | AM5 B650 Aorus Elite AX | G.Skill Trident Z5 Neo RGB DDR5 32GB 6000MHz C30 | Sapphire PULSE Radeon RX 7900 XTX | Samsung 990 PRO 1TB with heatsink | Arctic Liquid Freezer II 360 | Seasonic Focus GX-850 | Lian Li Lanccool III | Mousepad: Skypad 3.0 XL / Zowie GTF-X | Mouse: Zowie S1-C | Keyboard: Ducky One 3 TKL (Cherry MX-Speed-Silver)Beyerdynamic MMX 300 (2nd Gen) | Acer XV272U | OS: Windows 11 |


46 minutes ago, ivan134 said:

GCN 1 cards were showing basically no benefit from it. I still don't know if I agree with turning it off though.

390 is GCN 2 and not affected by this. To this day, GCN 2 cards have been showing the most gains from async compute.

And the 390 was a re-branded 290 on a slightly refined manufacturing process...

USEFUL LINKS:

PSU Tier List F@H stats


34 minutes ago, Kloaked said:

Had this been Nvidia, I would see tons of torches and pitchforks on this forum.

And rightfully so. However, I doubt anyone would defend this. I can only assume it's due to an error, or to games assuming the cards have more ACEs than they do and spamming them to the point where these cards suffer from it. I don't know. Let's see tomorrow, when ReLive is out, whether the issue is solved.

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


4 minutes ago, Doobeedoo said:

Hm was it a certain reason something. I have R9 290 and didn't see any difference.

You shouldn't see a difference. Your card is GCN 2.



5 minutes ago, TheRandomness said:

And the 390 was a re-branded 290 on a slightly refined manufacturing process...

That's not true at all. It shows relatively large performance differences in certain scenarios, particularly tessellation. That's why it's a more advanced version of GCN. Here is a comparison of Polaris, Tonga and Tahiti.

CPU - Ryzen Threadripper 2950X | Motherboard - X399 GAMING PRO CARBON AC | RAM - G.Skill Trident Z RGB 4x8GB DDR4-3200 14-13-13-21 | GPU - Aorus GTX 1080 Ti Waterforce WB Xtreme Edition | Case - Inwin 909 (Silver) | Storage - Samsung 950 Pro 500GB, Samsung 970 Evo 500GB, Samsung 840 Evo 500GB, HGST DeskStar 6TB, WD Black 2TB | PSU - Corsair AX1600i | Display - DELL ULTRASHARP U3415W |


1 minute ago, Carclis said:

That's not true at all. It shows relatively large performance differences in certain scenarios, particularly tessellation. That's why it's a more advanced version of GCN. Here is a comparison of Polaris, Tonga and Tahiti.

The gains were not "large"; they were about what was expected from a refinement. I don't get why you are defensive about this. Nobody is denying that there is advancement from Hawaii to Grenada, but calling it a "relatively large performance" difference is just disingenuous.

-------

Current Rig

-------


4 minutes ago, Carclis said:

-snip-

No, he's right. The 390 is just a 290 with better power delivery and slightly better power consumption. There was no architectural change. Tessellation improvements came with GCN 3 and improved much further with GCN 4.



44 minutes ago, zMeul said:

it's absolutely done intentionally

there are far more Radeon HD 7xxx series owners than there are RX 4xx series owners

 

Of course there are. Those are much older cards, yet not old enough to have been wiped out by games' demands.


4 minutes ago, Carclis said:

-snip-

Well, it's true on some level, because I can flash a 390 BIOS to my 290 and it'd still be perfectly fine.



10 minutes ago, Notional said:

And rightfully so. However I doubt anyone would defend this. I can only assume it's due to an error, or games assuming they have more ACEs, and spams them to the point where these cards suffer from it. Idk. Let's see tomorrow when ReLive is out, if the issue is solved.

It wouldn't work that way since ACEs are just hardware schedulers and they'll process whatever they can get their grubby mitts on. Besides, saturating your scheduler is a good thing and in fact is necessary to get the most performance out of GCN.

 

Supposedly this is why GCN suffers when running an NVIDIA-optimized game: NVIDIA favors shorter job queues.
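The scheduling point can be illustrated with a toy model: two independent workloads finish in roughly half the wall time when they run concurrently (as on independent hardware queues) versus back-to-back. This is a crude CPU-thread analogy, not GPU code, and the timings are made up:

```python
import threading
import time

WORK = 0.05  # seconds of simulated work per queue

def workload():
    time.sleep(WORK)  # stand-in for a batch of graphics or compute work

# Serialized: one queue after the other, like async compute disabled.
t0 = time.perf_counter()
workload()
workload()
serial = time.perf_counter() - t0

# Overlapped: both queues in flight at once, like independent ACEs.
t0 = time.perf_counter()
threads = [threading.Thread(target=workload) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
overlapped = time.perf_counter() - t0

print(overlapped < serial)  # True: overlapping the queues saves wall time
```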


1 minute ago, M.Yurizaki said:

-snip-

 

The point is that if a DX12 game assumes the GCN card has 8 ACEs instead of the 2 found on the GCN 1 architecture, the card might be flooded. This could cause a situation where everything waits on the extra compute work to finish, which might introduce huge latency/stutter issues, simply because the card cannot keep up and loses synchronization between the different queues.



2 minutes ago, Notional said:

-snip-

Then the RX 480 would suffer because Polaris 10 only has 4 ACEs.

 

Also, in this case the game would wait until the GPU is ready before sending in another command list. It's not going to keep pumping command lists and expect the GPU to process them in a target time. Hard-coding and expecting a real-time result from an application that requires neither is not only dumb; you should be taken out back and beaten with a bat for it.
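That waiting is normally done with a fence: the CPU checks how far the GPU has progressed and blocks before queuing more than a fixed number of frames ahead. A minimal sketch of the back-pressure pattern; the names and the frames-in-flight limit are made up, and this is not actual D3D12 API code:

```python
MAX_IN_FLIGHT = 2   # hypothetical cap on frames the CPU may queue ahead

gpu_completed = 0   # fence value: last frame the "GPU" has finished
submitted = 0       # frames the CPU has handed off
max_backlog = 0

for frame in range(8):
    # Back-pressure: block until the GPU has caught up enough.
    while submitted - gpu_completed >= MAX_IN_FLIGHT:
        gpu_completed += 1   # stand-in for waiting on the fence to advance
    submitted += 1
    max_backlog = max(max_backlog, submitted - gpu_completed)

print(max_backlog)  # 2: the backlog never exceeds MAX_IN_FLIGHT
```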


7 minutes ago, Misanthrope said:

-snip-

 

14 minutes ago, ivan134 said:

-snip-

 

13 minutes ago, TheRandomness said:

-snip-

I was under the impression that the full 300 series was based on GCN 3 when in fact it only includes the 380/X and Fury. My mistake 9_9



22 minutes ago, Carclis said:

-snip-

Polaris, Tonga, and Tahiti are completely irrelevant to the 290 and 390, since those use Hawaii and Grenada respectively, and Grenada is an almost unmodified Hawaii.


5 minutes ago, TOMPPIX said:

So they nerfed cards that are almost 5 years old; the 7000 series was released Q1 2012, I believe. I don't really see a problem here.

First paper launch was in December 2011, but yeah effectively the beginning of 2012.

 

Also it's unclear whether this is even a nerf.


8 minutes ago, huilun02 said:

The real question is, was there a loss to begin with? The people who are supposedly affected would be Tahiti users (HD7900 and R9 280 variants)

That one member noticed that two games weren't supporting async compute on his card. That does not mean it's the same case for all games with async compute. And if there are still games with async compute running on GCN 1.x, that means there was no driver-level nerf and it's the games at fault instead.

Not two games; someone wrote a program for testing async compute. It's disabled in the driver.

 

I've seen a lot of misunderstanding since I posted my findings.

In the first place, I started looking into why GCN 1.0 was not supported in Rise of the Tomb Raider's last patch.

When Maxwell was accused of not supporting async compute, someone created a program for testing it. It worked on GCN 1.0, 1.1, and 1.2 back then.

So I just wanted to verify whether that was still the case, and that's how I found that async compute was disabled in newer drivers. Then I posted on Reddit.

After that, many asked me to test a game that supports async compute on GCN 1.0, so I tried Ashes of the Singularity. You know how it ends. That just confirmed that async compute is disabled in newer drivers. Old drivers perform way better because of async compute.

DirectX 12, driver 16.3.1, async compute off: http://i.imgur.com/aiV1pSg.png

DirectX 12, driver 16.3.1, async compute on: http://i.imgur.com/CGrb4yM.png

DirectX 12, drivers newer than 16.9.2, async compute off: http://i.imgur.com/yiSSRCE.png

DirectX 12, drivers newer than 16.9.2, async compute on: http://i.imgur.com/Fch5V8w.png
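For context, community probes of this sort typically infer async compute support from timing: run a graphics load alone, a compute load alone, then both together. If the combined run takes about as long as the slower of the two, the queues overlapped; a time near their sum means the driver serialized them. A sketch of that inference with made-up numbers, not the actual tool's code:

```python
def queues_overlapped(t_gfx, t_comp, t_both):
    """Classify a combined-run time as closer to overlap (max) or serial (sum)."""
    overlap_estimate = max(t_gfx, t_comp)
    serial_estimate = t_gfx + t_comp
    return abs(t_both - overlap_estimate) < abs(t_both - serial_estimate)

print(queues_overlapped(10.0, 8.0, 10.5))  # True: ~max, the queues overlapped
print(queues_overlapped(10.0, 8.0, 17.8))  # False: ~sum, the driver serialized them
```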


7 minutes ago, Kwee said:

-snip-

My bad then if I've been spreading misinformation. I was going by memory of previous benchmarks I'd seen. AMD definitely has some explaining to do then.



This topic is now closed to further replies.

