
RX 480 a letdown?

4 hours ago, ivan134 said:

Pascal is better at asynchronous compute my ass. Can't wait to hear the excuses people come up with after people have accused Oxide, who worked more with Nvidia on AotS, of bias towards AMD. Watch Dogs 2 and BF1 are next.

 

Ivan, not telling the whole story? Noooo, I'm so shocked. You must have missed it: here.

 

Yep, that Fury X is choking on a ****. Guess it doesn't have async either, huh?

 


 

 


2 minutes ago, App4that said:

Ivan, not telling the whole story? Noooo, I'm so shocked. You must have missed it: here.

 

Yep, that Fury X is choking on a ****. Guess it doesn't have async either, huh?

 


 

 

Retarded post from App4that and failing basic comprehension? Yup, business as usual. I'm showing performance regression like we saw with Maxwell, and you're showing me that a 1070 and 1080 are faster than a Fury X... because those two things are definitely related. Derp.


1 minute ago, ivan134 said:

Retarded post from App4that and failing basic comprehension? Yup, business as usual. I'm showing performance regression like we saw with Maxwell, and you're showing me that a 1070 and 1080 are faster than a Fury X... because those two things are definitely related. Derp.

Ivan deflecting as usual. Here's your degradation. This is DX11; have a tissue handy.

 

 


5 minutes ago, App4that said:

Ivan deflecting as usual. Here's your degradation. This is DX11; have a tissue handy.

 

 

What? Again, in what way is any of this related to performance regression? Never mind. I think you've stressed your pea brain enough today.


5 minutes ago, ivan134 said:

What? Again, in what way is any of this related to performance regression? Never mind. I think you've stressed your pea brain enough today.

I know it's hard for you to compare things when they're not presented side by side, but look at the Fury X in DX11, then the DX12 video. Dropped frames.

 



Can someone explain to me how any of this is on topic here?


Holy shit, this thread ended up being a shitshow of hate and bashing against everything AMD! Thought we were somehow out of that Nvidia circlejerk. I don't want to be a bitch, but to me the card delivered everything it promised, nothing more, nothing less. AMD never boasted that its power consumption would be insane, and it's in fact much better than the R9 390's at the same performance. We're talking about a mid-market card, people, not a high-end offering like the 1070 and 1080; shit, how hard is that to understand? I'll be the first to shit down AMD's neck if the 490, or whatever the name of that card is, consumes power like it's nobody's business.


6 hours ago, Ryan_Vickers said:

Can someone explain to me how any of this is on topic here?

Yep, a shitshow of hate. I'm sorry for that; should have known a legit innocent question would turn out like this. Is it time to nuke the thing yet, or do we get another chance at redemption? :D


8 hours ago, App4that said:

I know it's hard for you to compare things when they're not presented side by side, but look at the Fury X in DX11, then the DX12 video. Dropped frames.

So basically what we've learned is that Total War: Warhammer degrades in equal measure on both AMD and Nvidia cards when moving to DX12.
That's good to know for anyone basing their entire opinion of the 480's capabilities on a single DX12 game (DX12 port is more accurate).


16 minutes ago, Briggsy said:

So basically what we've learned is that Total War: Warhammer degrades in equal measure on both AMD and Nvidia cards when moving to DX12.
That's good to know for anyone basing their entire opinion of the 480's capabilities on a single DX12 game (DX12 port is more accurate).

No xD, one test is at 1080p and the other at 1440p. Ivan and I go way back; many chains have been pulled.

 

But you're right that the game is a horrible example of DX12 optimization.


I am really pissed about a few things, namely the way things work in some countries.

In the US, you can get the 480 for 200 dollars, and as AMD said, they want it to be like this for everyone. They want more people to get their hands on a card that can support VR for "under 200 dollars".

So here's the joke: I live in Estonia, where the monthly average wage is around 1000 euros, and the RX 480 costs 350 euros! That is almost 400 dollars! What the hell?! And what makes this joke a "jerry": our neighbor Finland has the same prices on graphics cards but more than twice the income per month!!! How is this even possible?


11 hours ago, App4that said:

I know it's hard for you to compare things when they're not presented side by side, but look at the Fury X in DX11, then the DX12 video. Dropped frames.

 


Then that is an anomaly exclusive to the Fury X (or maybe the entire Fiji line; no other tests to look at) that may or may not be fixed with a driver update, because other GCN cards showed improvement in the game (at work till 11pm, so will post links later). So in the end, a double-precision-focused architecture is still needed for asynchronous compute.


The current reference card is a letdown for sure, imo. The issue with the power delivery is twofold: for one, it may damage your motherboard, which is the most important problem. The second issue, which derives from that, is that I very much suspect the card could have been quite a bit faster with a slightly better power delivery system (aka 8-pin instead of 6-pin); I guess we'll know for sure when partner cards arrive. In hindsight it just boggles my mind that AMD thought they could release the card in its current state, and it's another disappointment from AMD.
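For context on the 6-pin vs 8-pin point, here is a back-of-the-envelope check against the standard PCIe power budgets. These are spec values, not measurements from this thread; note the reference RX 480's rated board power of 150 W sits exactly at the 6-pin ceiling.

```cpp
// Rough sketch of spec-level PCIe power budgets (PCIe CEM spec values).
// A card's sustained draw is supposed to stay within slot + connector limits.
#include <cstdio>

int main() {
    const double slot_w      = 75.0;  // PCIe x16 slot limit
    const double six_pin_w   = 75.0;  // 6-pin PEG connector limit
    const double eight_pin_w = 150.0; // 8-pin PEG connector limit

    std::printf("6-pin card budget: %.0f W\n", slot_w + six_pin_w);   // 150 W
    std::printf("8-pin card budget: %.0f W\n", slot_w + eight_pin_w); // 225 W
    return 0;
}
```

With the reference card already at that 150 W budget, any overshoot has to come from somewhere, which is why the slot-draw concern and the "it could have been faster with an 8-pin" suspicion are really two sides of the same issue.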


44 minutes ago, ivan134 said:

Then that is an anomaly exclusive to the Fury X (or maybe the entire Fiji line; no other tests to look at) that may or may not be fixed with a driver update, because other GCN cards showed improvement in the game (at work till 11pm, so will post links later). So in the end, a double-precision-focused architecture is still needed for asynchronous compute.

Here, one more time with feeling. Your Fury X is getting curb-stomped by the 1070, which costs less and performs around the same as the 980 Ti the Fury X is supposed to compete with, using the API that you say gives it an advantage.

 

 

 

 

 


4 minutes ago, App4that said:

Here, one more time with feeling. Your Fury X is getting curb-stomped by the 1070, which costs less and performs around the same as the 980 Ti the Fury X is supposed to compete with, using the API that you say gives it an advantage.

 

 

 

 

 

I think I know the reason Pascal does so well. I bet even Maxwell would do "surprisingly well" in that game compared to AMD.

It's because of HOW async compute and DX12 are implemented.

A game with 32 queues of compute work will run faster on Maxwell than on any GCN card, because Maxwell is faster at computing (more efficient pipeline).

However, the strength of GCN lies in its brute force. GCN has the capacity to run 64 compute queues AND graphics, whilst Maxwell, and Pascal to a lesser degree (thanks to super granular preemption), can only run 32 compute OR graphics.

This is a "design flaw", if you can even call it that, in Nvidia's design, and a design strength in GCN. However, this comes at the cost of power draw, as the ACEs draw plenty of power under full load, and this would be added on top of the regular draw (although there is probably some dynamic down-clocking going on to preserve TDP; I know Nvidia runs their compute units slower to preserve TDP).

 

 

There may also be a more logical and less technical explanation:

AMD's drivers aren't optimized for the DX12 mode yet, because they are slow AF sometimes.

 

 

That being said, the real "interesting" DX12 comparison that SHOULD be made is not AMD vs Nvidia; what should be done is Maxwell vs Pascal. How do these GPUs behave under DX12?

Did Pascal deliver on its promise? I hope it did.

 


3 minutes ago, Prysin said:

 

 

That being said, the real "interesting" DX12 comparison that SHOULD be made is not AMD vs Nvidia; what should be done is Maxwell vs Pascal. How do these GPUs behave under DX12?

Did Pascal deliver on its promise? I hope it did.

 

Completely agree, especially with the part of the post I left in.


4 minutes ago, App4that said:

Completely agree, especially with the part of the post I left in.

I want to see an aftermarket 1070 vs a Titan X, as that should be a very close comparison performance-wise under DX11. Thus the only "wild factor" under DX12 should be async compute, as both Maxwell and Pascal meet the same "feature level" in DX12.

 

EDIT: Also, get on Skype more.

 


5 minutes ago, Prysin said:

I want to see an aftermarket 1070 vs a Titan X, as that should be a very close comparison performance-wise under DX11. Thus the only "wild factor" under DX12 should be async compute, as both Maxwell and Pascal meet the same "feature level" in DX12.

EDIT: Also, get on Skype more.

 

We would see a good comparison if Jay would get the AotS benchmark working. The fact that he can't is odd.

 

And I know, I've been running around a lot, so I pop in on my phone more often than not.


4 hours ago, DMMR said:

I am really pissed about a few things, namely the way things work in some countries.

In the US, you can get the 480 for 200 dollars, and as AMD said, they want it to be like this for everyone. They want more people to get their hands on a card that can support VR for "under 200 dollars".

So here's the joke: I live in Estonia, where the monthly average wage is around 1000 euros, and the RX 480 costs 350 euros! That is almost 400 dollars! What the hell?! And what makes this joke a "jerry": our neighbor Finland has the same prices on graphics cards but more than twice the income per month!!! How is this even possible?

Bro, blame the economy and the rich jerks that are pulling strings behind the scenes. :(


40 minutes ago, App4that said:

We would see a good comparison if Jay would get the AotS benchmark working. The fact that he can't is odd.

And I know, I've been running around a lot, so I pop in on my phone more often than not.

I think it's not him not getting it to "work", but more like him not actually understanding it properly.

It is also very dynamic; no two benchmark runs are 100% similar. There IS some variation; although minute, it IS there. Even Oxide devs have come out on Twitter and said so.

That being said, the variations shouldn't be large enough to ever compromise the integrity of the benchmark utility.

 

Also, Ashes is a bit of a pain in the ass. It takes 3.5 minutes to RUN the benchmark each time, and you need like 3-5 runs to get a reliable average.

In short, it takes shitloads of time per GPU.
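To illustrate the multiple-runs point, here is a toy mean/spread calculation; the FPS numbers are made up, not from any review. Averaging 3-5 runs and checking the spread is how you tell whether the run-to-run variation Oxide acknowledges is small next to the difference you're trying to measure.

```cpp
// Toy example: mean and sample standard deviation over per-run average FPS.
// The FPS values below are hypothetical, purely to show the arithmetic.
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    const std::vector<double> runs = {58.1, 59.4, 57.8, 58.9, 58.5};

    double mean = 0.0;
    for (double r : runs) mean += r;
    mean /= runs.size();

    double var = 0.0;
    for (double r : runs) var += (r - mean) * (r - mean);
    var /= runs.size() - 1; // sample variance (n - 1)

    std::printf("%zu runs: mean %.1f fps, stddev %.2f fps\n",
                runs.size(), mean, std::sqrt(var));
    return 0;
}
```

At 3.5 minutes a run, five runs is pushing 20 minutes per GPU per configuration, which backs up the "shitloads of time" complaint.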

 

Looking at some of the questions and replies Jay gets/makes on Twitter, I also think he is rather lazy. He won't include anything that is time-consuming, complicated, or out of his element, because he seems not unwilling, but unmotivated, to change his methodology.

