
Is the RX 480 that impressive?

Jiraiya2016
1 minute ago, othertomperson said:

All but one of those is a higher end rebrand of a generation ago, and the Fury is a cut-down version of the Fury X which was supposed to compete with the Titan X. It didn't. The cut down version of the Titan X is the 980 Ti, which beat the Fury X let alone the Fury.

The 980 Ti beats the Fury X because the Fury X is a bad overclocker, which is most likely down to the reference PCB, if the Nitro Fury is anything to go by. The Fury X is tied or faster at stock.

CPU i7 6700 Cooling Cryorig H7 Motherboard MSI H110i Pro AC RAM Kingston HyperX Fury 16GB DDR4 2133 GPU Pulse RX 5700 XT Case Fractal Design Define Mini C Storage Trascend SSD370S 256GB + WD Black 320GB + Sandisk Ultra II 480GB + WD Blue 1TB PSU EVGA GS 550 Display Nixeus Vue24B FreeSync 144 Hz Monitor (VESA mounted) Keyboard Aorus K3 Mechanical Keyboard Mouse Logitech G402 OS Windows 10 Home 64 bit


6 minutes ago, ivan134 said:

Also async compute relies on double precision, but you call its gains irrelevant so idk if it matters to you. 

With gains of at most 5-10% even on a Fury X (according to the Hitman devs at GDC), a card all but built for this singular purpose, yes, it is irrelevant.

 

Also, you are the first person I've ever heard attribute async compute to double precision. Are you saying I should be shelling out for a Tesla? What is your source here?

 

3 minutes ago, ivan134 said:

The 980 Ti beats the Fury X because the Fury X is a bad overclocker, which is most likely down to the reference PCB, if the Nitro Fury is anything to go by. The Fury X is tied or faster at stock.

Every benchmark that compared the two at stock disagrees.


4 minutes ago, othertomperson said:

With gains of at most 5-10% even on a Fury X (according to the Hitman devs at GDC), a card all but built for this singular purpose, yes, it is irrelevant.

 

Also, you are the first person I've ever heard attribute async compute to double precision. Are you saying I should be shelling out for a Tesla? What is your source here?

Asynchronous compute is doing graphical workloads in a parallel manner instead of serial. That is literally what double precision computing is. 



1 minute ago, ivan134 said:

Asynchronous compute is doing graphical workloads in a parallel manner instead of serial. That is literally what double precision computing is. 

It's more about allocating its 4,000-or-so cores more effectively between simultaneous graphics and compute workloads, rather than focusing them all on one or the other. Whatever a GPU does is parallel: a GPU exists specifically to carry out many relatively weak but highly repeatable tasks in parallel. This is the whole point of Pascal's improved preemption. It can't do this innately like AMD's hardware can, but it can switch with very low latency and hope to mitigate the difference. And given that only a few devs have taken advantage of this, and only one has said there was any point in doing so, I need more evidence that it's worthwhile as far as gaming goes.
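[Editor's note] The queue model being argued about here can be sketched in plain Python, purely as an illustration (all names are made up): a GPU without concurrent queues drains its graphics queue before touching the compute queue, while hardware with async compute drains both at once and lets compute work soak up idle shader time.

```python
import threading

def run_serial(graphics_jobs, compute_jobs):
    # No async compute: the compute queue waits until
    # every graphics job has completed.
    return list(graphics_jobs) + list(compute_jobs)

def run_async(graphics_jobs, compute_jobs):
    # Async compute: both queues are drained concurrently,
    # so compute jobs can interleave with graphics jobs.
    completed = []
    lock = threading.Lock()

    def drain(queue):
        for job in queue:
            with lock:
                completed.append(job)

    threads = [threading.Thread(target=drain, args=(q,))
               for q in (graphics_jobs, compute_jobs)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return completed

print(run_serial(["g1", "g2"], ["c1", "c2"]))  # ['g1', 'g2', 'c1', 'c2']
```

The serial order is fixed; the async order depends on scheduling, which is exactly the point: the same work completes either way, and any gain comes from filling idle gaps, which is why measured benefits vary from roughly nothing to the 5-10% figures quoted above.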


11 minutes ago, othertomperson said:

With gains of at most 5-10% even on a Fury X (according to the Hitman devs at GDC), a card all but built for this singular purpose, yes, it is irrelevant.

 

Also, you are the first person I've ever heard attribute async compute to double precision. Are you saying I should be shelling out for a Tesla? What is your source here?

Also, I've said in the past that AMD should be ashamed of touting Hitman as some async compute flagship. When I heard that game would use it, I assumed they would use it for some really complex AI. Most of the NPCs have very limited preset instructions, with the exception of the few that can actually affect your mission. So essentially, they're asynchronously computing nothing in that game. Async compute has very real gains in AotS and Total War: Warhammer.

Edited by ivan134
Complex not computex



2 minutes ago, ivan134 said:

Also, I've said in the past that AMD should be ashamed of touting Hitman as some async compute flagship. When I heard that game would use it, I assumed they would use it for some really complex AI. Most of the NPCs have very limited preset instructions, with the exception of the few that can actually affect your mission. So essentially, they're asynchronously computing nothing in that game. Async compute has very real gains in AotS and Total War: Warhammer.

In other words, it's a very niche feature that like two games could benefit from and the rest needn't bother.

 

We're talking about the same company that put 8GB of VRAM on a 390 and touted that as some kind of performance win. Their marketing only cares about numbers and couldn't give a shit about actual results; that much is clear.


2 minutes ago, othertomperson said:

In other words, it's a very niche feature that like two games could benefit from and the rest needn't bother.

How would most games not benefit from better AI? That is probably the biggest area of stagnation in gaming. 



1 minute ago, ivan134 said:

How would most games not benefit from better AI? That is probably the biggest area of stagnation in gaming. 

And yet the CPU power has been there to do it all along. Anyone with a modern i3 or above has had ample headroom for better AI to take advantage of. It's not a lack of async that is the issue here.

 

Anyway, it's approaching 5 AM, and I should go to sleep. You won't convince me that async compute is a necessary part of DX12 or Vulkan. What will convince me is games actually using it and getting a performance boost from it.


8 minutes ago, othertomperson said:

It's more about allocating its 4,000-or-so cores more effectively between simultaneous graphics and compute workloads, rather than focusing them all on one or the other. Whatever a GPU does is parallel: a GPU exists specifically to carry out many relatively weak but highly repeatable tasks in parallel. This is the whole point of Pascal's improved preemption. It can't do this innately like AMD's hardware can, but it can switch with very low latency and hope to mitigate the difference. And given that only a few devs have taken advantage of this, and only one has said there was any point in doing so, I need more evidence that it's worthwhile as far as gaming goes.

 



6 minutes ago, othertomperson said:

And yet the CPU power has been there to do it all along. Anyone with a modern i3 or above has had ample headroom for better AI to take advantage of. It's not a lack of async that is the issue here.

 

Anyway, it's approaching 5 AM, and I should go to sleep. You won't convince me that async compute is a necessary part of DX12 or Vulkan. What will convince me is games actually using it and getting a performance boost from it.

There are three games out that get boosts from it (yes, I know, Hitman isn't a good showing for it, but there are still gains).



9 minutes ago, ivan134 said:

 

You should watch that again a few times followed by this if you think that contradicted anything that I said:

 

 


My god, I thought Nvidia fans were bad about tessellation, but damn. What are you guys going to do in a year when async isn't what it was hyped to be?

If anyone asks you never saw me.


This node jump is a bit disappointing in general. AMD has used it to take existing performance and reduce the price a bit (though it's not an amazing price) on a super-small 214 mm² die. Nvidia has likewise shrunk the chips for its upper-mainstream cards and at least delivered new performance highs, but with quite a hefty price bump. Both have delayed their top-end cards until next year, when HBM2 releases, which is going to be the big jump in performance.

 

AMD is even marketing CrossFire as its answer to competing with Nvidia again. That actually went well in the 4870X2/5970 days, but nowadays CrossFire has a bad rap for microstutter, and reviewers were able to prove that AMD had, and continues to have, a serious microstutter problem. I don't know if it's a good strategy to lead with.

 

Just disappointed in general. We waited two extra years for 16nm (20nm + FinFET) instead of just 20nm, and we got 1.76x the transistor density. That is half the rate Moore's law would suggest, if not less. The industry has ground to a halt on CPUs, and this seems to be the beginning of slow progress on GPUs as well. But mostly it's disappointing to see the prices; they are poor all round.
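[Editor's note] The "half the rate" claim is easy to sanity-check: Moore's law implies transistor density doubling roughly every two years, so a four-year wait between nodes should have delivered about 4x the density rather than the observed 1.76x.

```python
def moores_law_density_gain(years, doubling_period_years=2.0):
    # Moore's law: transistor density doubles every ~2 years.
    return 2 ** (years / doubling_period_years)

expected = moores_law_density_gain(4)  # 4.0 over four years
print(round(1.76 / expected, 2))       # 0.44 -> under half the expected gain
```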


A lot of you are comparing raw performance numbers to past cards, which misses the point: the RX 480 is 1.7 times the performance per watt.
It has support for key DX12 features and a lot more.

 

Also, I don't know about you guys, but the slide said the TFLOPS will be >5, meaning they have yet to settle on the core clock, and it might end up clocked higher than the 1266 MHz shown in the benchmark.
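[Editor's note] The ">5 TFLOPS" figure follows directly from the shader count and clock: FP32 throughput is 2 operations per shader per cycle (a fused multiply-add), so with the RX 480's 2304 stream processors the 1266 MHz shown already lands above 5 TFLOPS.

```python
def fp32_tflops(shaders, clock_mhz):
    # 2 FP32 operations per shader per cycle (FMA = multiply + add).
    return 2 * shaders * clock_mhz * 1e6 / 1e12

print(round(fp32_tflops(2304, 1266), 2))  # 5.83
```

Read backwards, ">5 TFLOPS" only pins the final clock somewhere above roughly 1085 MHz, so a clock above 1266 MHz is plausible but not promised by the slide.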

Slowly...In the hollows of the trees, In the shadow of the leaves, In the space between the waves, In the whispers of the wind,In the bottom of the well, In the darkness of the eaves...

Slowly places that had been silent for who knows how long... Stopped being Silent.


11 hours ago, othertomperson said:

You should watch that again a few times followed by this if you think that contradicted anything that I said:

 

 

Fast forward to 3:30 in your video.

Edited by ivan134



12 hours ago, othertomperson said:

It's more about allocating its 4,000-or-so cores more effectively between simultaneous graphics and compute workloads, rather than focusing them all on one or the other. Whatever a GPU does is parallel: a GPU exists specifically to carry out many relatively weak but highly repeatable tasks in parallel. This is the whole point of Pascal's improved preemption. It can't do this innately like AMD's hardware can, but it can switch with very low latency and hope to mitigate the difference. And given that only a few devs have taken advantage of this, and only one has said there was any point in doing so, I need more evidence that it's worthwhile as far as gaming goes.

http://wccftech.com/async-compute-praised-by-several-devs-was-key-to-hitting-performance-target-in-doom-on-consoles/

 

http://gearnuke.com/mirrors-edge-catalyst-reach-new-levels-gpu-optimizations-via-async-compute/



If someone says async is not worth it... to whom? The gamer doesn't implement it, and if a developer implements it, that's great for any gamer whose card handles async well. An extra 5%+ FPS is definitely worth it for simply owning a card that can receive it.

 

There are many tech features on a graphics card, and for one of them to give a 10% FPS boost over stock, on top of the other features, is a big deal. A card's base performance, plus DX12/Vulkan, plus async compute, plus an overclock... everything adds up. And a 10% FPS boost on its own, especially in a game that delivers sub-60 FPS without it, is a very big deal, both for the card's value and for the gaming experience.

 

I'll take 5% or 10% FPS gains in any game, any day, and I'd say that anyone dismissing 5%+ FPS gains as unimpressive comes across as an irrational fanboy of the companies whose cards lack good async. But I hope async will develop over time to become easier to work with.
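[Editor's note] To put numbers on "everything adds up": independent speedups compound multiplicatively, so several modest boosts stack into something noticeable. A throwaway sketch (the percentages are illustrative, not measured):

```python
from functools import reduce

def combined_gain(*gains):
    # Independent speedups multiply: +10% then +5% is ~+15.5%, not +15%.
    factor = reduce(lambda acc, g: acc * (1 + g), gains, 1.0)
    return factor - 1

# e.g. a DX12 uplift + async compute + an overclock, as fractions:
print(round(combined_gain(0.10, 0.05, 0.08), 3))  # 0.247 -> ~25% overall
```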

You own the software that you purchase - Understanding software licenses and EULAs

 

"We’ll know our disinformation program is complete when everything the american public believes is false" - William Casey, CIA Director 1981-1987


52 minutes ago, ivan134 said:

I'd make a guess that AMD's better async-handling is why the next consoles from Microsoft, Sony, and Nintendo are all being powered by AMD GPUs.



1 minute ago, Delicieuxz said:

I'd make a guess that AMD's better async-handling is why the next consoles from Microsoft, Sony, and Nintendo are all being powered by AMD GPUs.

It's the reason why even the current ones use them. They've been using the technique on consoles for a while before AoTS. It's the reason they're able to get such good performance despite the terrible APUs they have. Infamous: Second Son and Uncharted 4 look really fucking impressive for console games.



1 hour ago, Delicieuxz said:

I'd make a guess that AMD's better async-handling is why the next consoles from Microsoft, Sony, and Nintendo are all being powered by AMD GPUs.

I'd imagine AMD being the only company bidding might have had more to do with it.

 

1 hour ago, Delicieuxz said:

If someone says async is not worth it... to who? The gamer doesn't implement it, and if a developer implements it, then that's great for any gamer who has a card that's good with async. Having 5%+ FPS is definitely worth simply having a card that can receive that additional 5%.

 

There are many tech features on a graphics card, and for one of them to give a 10% FPS boost over stock, and in addition to other tech features, is a big deal. A graphics card base performance, + DX12 / Vulkan, + asyc compute, + overclock... everything adds up. And a 10% FPS boost on its own, especially in a game that delivers sub-60 FPS without it, is a very big deal, both in card value, and in gaming experience.

 

I'll take 5% or 10% FPS gains in any game, any day, and I sort of think that anyone dismissing 5% + FPS games as not impressive is making themselves out to be an irrational fanboy of companies which make cards without good async. But I hope that async will develop over time to become easier to work with.

To devs, dear. AAA devs are not going to devote a massive amount of time and resources to something that barely has an impact.

 

1 hour ago, ivan134 said:

It's the reason why even the current ones use them. They've been using the technique on consoles for a while before AoTS. It's the reason they're able to get such good performance despite the terrible APUs they have. Infamous: Second Son and Uncharted 4 look really fucking impressive for console games.

Async doesn't make games look better; it makes them more efficient at handling compute tasks while also rendering graphics, by reducing idle time. It potentially makes AI and physics better.


1 hour ago, othertomperson said:

I'd imagine AMD being the only company bidding might have had more to do with it.

I'd imagine that Nvidia's lack of suitability for console manufacturers could have something to do with their not participating.

 

Quote

To devs, dear. AAA devs are not going to devote a massive amount of time and resources into something that barely has an impact.

Well, maybe that would be true; however, 5-10% isn't considered "barely impactful" in FPS gains. ivan134 provided some links that show what some AAA game devs have to say about async compute:

 

Quote

And so, async seems to be worth having to devs, which makes it worth having to gamers.



This is just devolving to personal attacks at this point. Locking the thread for cleanup.

System CPU : Ryzen 9 5950 doing whatever PBO lets it. Motherboard : Asus B550 Wifi II RAM 80GB 3600 CL 18 2x 32GB 2x 8GB GPUs Vega 56 & Tesla M40 Corsair 4000D Storage: many and varied small (512GB-1TB) SSD + 5TB WD Green PSU 1000W EVGA GOLD

 

You can trust me, I'm from the Internet.

 


I'm opening the thread again, but please be aware that attacks and accusations about the user you are arguing/discussing with are not permitted. If I spot any more personal attacks or fanboy accusations, I will be issuing warnings.


