Fury X now equal to 980 ti @1440p and superior @4k

potoooooooo

And you can add a memory overclock to AMD cards too. Maybe not the Fury so much; I honestly don't know much about HBM overclocking. That's why overclocking is kind of a moot point: you can overclock both, and you're not guaranteed to get far on either. I've had some terrible overclockers and some good ones on both the AMD and Nvidia side. So if one card loses at stock speeds and you overclock it and it wins, you can just overclock the other card as well and it will most likely still win, since it doesn't even need to overclock as much when its starting FPS is already higher. The only time this matters, IMO, is price: if card X can overclock to around the stock speeds of card Y but is $50 cheaper, then yeah, that makes a lot of sense.

Memory overclocks don't do much for the Fury/X.

 

Maxwell's particularly good with overclocking.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd

Not true AT ALL. Take for example GTA V, or Shadow of Mordor, or modded Skyrim. They will all use 4GB or more at 1440p. 

There's a difference between what a game will use up and what it actually needs.

Not true AT ALL. Take for example GTA V, or Shadow of Mordor, or modded Skyrim. They will all use 4GB or more at 1440p. 

Check your facts. In Mordor, the 980 Ti, Titan X, and Fury X all run within a few fps of each other at 4K with the high-res texture pack installed and in use. GPU power is still the limiting factor, not VRAM.

Memory overclocks don't do much for the Fury/X.

 

Maxwell's usually particularly good with overclocking.

fixed it for you.

 

There is nobody, anywhere, that guarantees you anything, not even a single Hz over the specifications on the box.

Do not ever forget that.

Not true AT ALL. Take for example GTA V, or Shadow of Mordor, or modded Skyrim. They will all use 4GB or more at 1440p. 

 

True indeed. At 4K you won't be able to max out any of those games without dipping below 40 fps on a single Fury X, because the GPU itself will bottleneck your performance before the VRAM does. When all is said and done, the lowered settings, together with the removal of the caching that happens at 1440p (games actually use more VRAM than they strictly need if it is available, just like Windows uses more RAM if you have more), mean the VRAM is not a bottleneck. Don't you think that if the VRAM were such a problem it would show up in benchmarks? And yet the Fury X's relative performance is above a stock 980 Ti's. 4GB of VRAM is not a bottleneck in a single-card scenario at this time.

Don't ask to ask, just ask... please 🤨

sudo chmod -R 000 /*

fixed it for you.

 

There is nobody, anywhere, that guarantees you anything, not even a single Hz over the specifications on the box.

Do not ever forget that.

What exactly are you trying to prove here, Prysin? You yourself are the one that always preaches how easily the FX 6300 overclocks. You are also the same person that still recommends that CPU over every other budget CPU, and part of the reason you do so is, again, how easily it overclocks. For someone that uses overclocking in his own arguments, I find it odd that you are now turning the same logic against someone else.

 

You are right, nobody ever guarantees overclocking on the box. However, to deny factual evidence that is historically accurate would be blatant ignorance. It is a known fact that Maxwell cards, even the 750 and 750 Ti, have headroom for overclocking, even if you don't touch the voltage, even if you're using a reference cooler. While nothing is guaranteed, you can still get free performance out of them. If you look at every Maxwell overclocking thread on every forum, you will see that even worst-case scenarios offer a pretty decent performance boost.

 

All GPU companies are VERY tame with their listed specifications. Even their GPU boost numbers are tame. My card boosts quite a bit higher than its advertised boost, and I am using an ITX case and a reference GPU. Their stock clock speeds are no different, as they are clocked conservatively enough to work in even the worst situations (terrible case airflow, hot ambient climates, etc). Nvidia does it. AMD does the same thing. Same with Intel/AMD CPUs. None of them release their chips at their absolute best potential (except maybe the FX 9590, which has no room to go anywhere and is hot as balls out of the box). They do this because it's the only way they can guarantee advertised performance to everyone. It's also a contributing factor in why cards overclock differently. Sure, the silicon lottery has a lot to do with it, but people often neglect climate and the case/surrounding components used with the hardware.

 

I think, looking at the evidence, that calling Maxwell a "good overclocking architecture" is an accurate statement. Now, the architecture might be good at overclocking, but that does not mean all cards will achieve similar OCs, for the reasons I stated above.

 

TL;DR? Nothing is ever guaranteed; @Prysin is correct in that respect. However, we can apply statistics to pretty much anything in this world and come to a general conclusion on a subject, be it Maxwell overclocking well or FX 6300s "easily overclocking" to a certain number. It's time to start putting aside these biased opinions. If 4/5 dentists recommend a toothpaste, it's good toothpaste, right? Well, if 4/5 Maxwell cards are good overclockers, then Maxwell is a good overclocking architecture.

 

Come to think of it... dentists profit off of poor dental hygiene. What if they say it's good, but it's really bad? I am scared now.

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 

fixed it for you.

 

There is nobody, anywhere, that guarantees you anything, not even a single Hz over the specifications on the box.

Do not ever forget that.

I know that. That said, to discount Maxwell's general advantage in this area, to me, is a mistake.

Good for the Fury X. Too bad the hype train has already left. Now I'm waiting for HBM2 GPUs for my next upgrade.

ROG X570-F Strix AMD R9 5900X | EK Elite 360 | EVGA 3080 FTW3 Ultra | G.Skill Trident Z Neo 64gb | Samsung 980 PRO 
ROG Strix XG349C Corsair 4000 | Bose C5 | ROG Swift PG279Q

Logitech G810 Orion Sennheiser HD 518 |  Logitech 502 Hero

 

Too bad you still use way more power and produce more heat :-(

Huh? The Fury X produces roughly the same heat as the 980 Ti (technically, the 980 Ti will produce more heat if you adjust the power limit, something almost everyone does to achieve that higher Maxwell OC I discussed earlier), and considering the card ships with a factory water cooler, TDP is not an issue, even in ITX cases. Power consumption is higher, but that's to be expected; after all, you are also powering a pump with the thing. If I remember correctly, the pumps are 1A on 12V, so that's 12W for the pump alone. I would expect the power consumption difference in gaming to be around 30W on average, maybe 50W at most in a worst-case scenario. Granted, people buying these high-end GPUs tend to care less about power consumption, as they usually pair them with powerful PSUs, but if you are an ITX guy like myself, power consumption can still be very important.
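For what it's worth, the pump math is easy to sanity-check. A back-of-envelope sketch (the 1A/12V figure and the 30-50W gaming delta are this post's estimates, not measurements):

```python
# Sanity check on the pump-power figure quoted above (1 A on the 12 V rail).
# The 30-50 W gaming delta is an estimate from the post, not a measurement.
def watts(volts, amps):
    """Electrical power in watts: P = V * I."""
    return volts * amps

pump_w = watts(12, 1)
print(pump_w)  # -> 12 (W for the pump alone)

# Share of a 30-50 W total delta attributable to just the pump:
print(f"{pump_w / 50:.0%} to {pump_w / 30:.0%}")  # -> 24% to 40%
```

So a fair chunk of any measured consumption gap could be the pump rather than the GPU itself.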

 

http://www.anandtech.com/show/9390/the-amd-radeon-r9-fury-x-review/25

 

That "AMD is a space heater" joke is dying fast, as AMD is making great strides in catching up to Nvidia in terms of efficiency. With what they are promising for Arctic Islands, it would not surprise me to see them match Nvidia's efficiency. I just hope both Nvidia and AMD get creative with their cards. I would love to see some smaller, more powerful cards in the future.


snip

Certainly at the top end this is true. I don't agree that they have shown any capability to scale down efficiently. Even the Fury Nano only achieves high efficiency because it is literally power-locked and severely under-clocked (whereas Nvidia shows the ability to shrink the GPU dies themselves, allowing more predictable performance under load).

 

Now that said, Fiji is AMD's only new GPU this generation, so it's not like they tried and failed; they just didn't try. It will be interesting to see if they try next gen or just keep re-branding.

 

(I am being generous here by calling the Tonga GPU last gen, because if you count it as current gen, then from a power perspective it failed to keep up, even if it was as good as, if not better than, the best Kepler GPUs in performance per watt.)

 

The fact that, for almost 6 months, a 165W reference card was the fastest single GPU in the world shows great promise in where the market is moving from an efficiency perspective.

LINK-> Kurald Galain:  The Night Eternal 

Top 5820k, 980ti SLI Build in the World*

CPU: i7-5820k // GPU: SLI MSI 980ti Gaming 6G // Cooling: Full Custom WC //  Mobo: ASUS X99 Sabertooth // Ram: 32GB Crucial Ballistic Sport // Boot SSD: Samsung 850 EVO 500GB

Mass SSD: Crucial M500 960GB  // PSU: EVGA Supernova 850G2 // Case: Fractal Design Define S Windowed // OS: Windows 10 // Mouse: Razer Naga Chroma // Keyboard: Corsair k70 Cherry MX Reds

Headset: Senn RS185 // Monitor: ASUS PG348Q // Devices: Note 10+ - Surface Book 2 15"

LINK-> Ainulindale: Music of the Ainur 

Prosumer DYI FreeNAS

CPU: Xeon E3-1231v3  // Cooling: Noctua L9x65 //  Mobo: AsRock E3C224D2I // Ram: 16GB Kingston ECC DDR3-1333

HDDs: 4x HGST Deskstar NAS 3TB  // PSU: EVGA 650GQ // Case: Fractal Design Node 304 // OS: FreeNAS

 

 

 

Certainly at the top end this is true. I don't agree that they have shown any capability to scale down efficiently. Even the Fury Nano only achieves high efficiency because it is literally power-locked and severely under-clocked (whereas Nvidia shows the ability to shrink the GPU dies themselves, allowing more predictable performance under load).

Now that said, Fiji is AMD's only new GPU this generation, so it's not like they tried and failed; they just didn't try. It will be interesting to see if they try next gen or just keep re-branding.

(I am being generous here by calling the Tonga GPU last gen, because if you count it as current gen, then from a power perspective it failed to keep up, even if it was as good as, if not better than, the best Kepler GPUs in performance per watt.)

The fact that, for almost 6 months, a 165W reference card was the fastest single GPU in the world shows great promise in where the market is moving from an efficiency perspective.

Arctic Islands will be a ground-up GCN 2.0 architecture...

I'd also like to say that they did try with the Fury lineup, as they got heat and power consumption under control; sure, not as good as Nvidia, but it is much better.

Arctic Islands will be a ground-up GCN 2.0 architecture...

I'd also like to say that they did try with the Fury lineup, as they got heat and power consumption under control; sure, not as good as Nvidia, but it is much better.

I prefaced it by saying "at the high end, certainly."


@MageTank

My argument for the FX 6300 is that OC is a bonus. I duly note, every time, that at stock it is neck and neck with the i3 4130 and that with an OC it will crawl up to around an i3 4360. I also note that, depending on the game, it may actually start catching up to the lower-end locked i5s, as seen in The Witcher 3, where it solidly beats an i3 but loses to the 4690k.

I will argue that to some extent it is easier to get a good stable OC with CPUs than with GPUs, due to the many tools out there. That is not to say it is hard.

I have not tried to OC Nvidia GPUs, so I will refrain from discussing how you do it with them. AMD, on the other hand, is quite easy; CCC allows a very easy way to get some extra juice without risking completely bricking your card with voltage increases.

I know that. That said, to discount Maxwell's general advantage in this area, to me, is a mistake.

Well, in the case of Fiji vs Maxwell, it is obvious that Maxwell wins the "OC debate" due to the locked nature of Fiji's core.

If we discussed Hawaii/Grenada (yeah, I know it's practically the same) vs Maxwell, or Tahiti vs Maxwell, then the discussion would go differently. We know Hawaii/Grenada and Tahiti do not OC massively, at least on average.

However, GCN in general scales WAY better with GPU clocks than Maxwell. A 1200 MHz core 390 would give a 1500 MHz core 970 a damn good run for its money (as shown by JayzTwoCents). There is also the case of bandwidth...
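To put rough numbers on that comparison (the stock/boost clocks below are reference specs as I recall them, so treat them as assumptions):

```python
# Back-of-envelope: how big an overclock each card in the example needs.
# Baseline clocks are assumed reference specs, not guaranteed values.
def oc_uplift_pct(stock_mhz, oc_mhz):
    """Overclock expressed as a percentage over the baseline clock."""
    return (oc_mhz - stock_mhz) / stock_mhz * 100

print(f"R9 390: +{oc_uplift_pct(1000, 1200):.0f}%")   # 1000 MHz stock -> 1200 MHz
print(f"GTX 970: +{oc_uplift_pct(1178, 1500):.0f}%")  # 1178 MHz boost -> 1500 MHz
```

In other words, the 970 needs a noticeably larger relative overclock to reach the clocks in that matchup.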

As you yourself brought up, "a small memory clock goes a long way in alleviating the issue". My argument to that is: if Nvidia hadn't been so tight-fisted in the first place and given ALL their cards a wider bus, an OC wouldn't be mandatory for good 4K performance.
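The bus-width point is simple arithmetic: peak bandwidth is bus width (in bytes) times the effective data rate. A sketch using the reference specs of the cards in question:

```python
# Peak memory bandwidth = (bus width in bits / 8) * effective data rate (Gbps).
# These are reference specs; delivered bandwidth is always lower in practice.
def bandwidth_gbs(bus_width_bits, effective_gbps):
    return bus_width_bits / 8 * effective_gbps

print(bandwidth_gbs(256, 7))   # GTX 970/980: 256-bit, 7 Gbps GDDR5 -> 224.0 GB/s
print(bandwidth_gbs(384, 7))   # GTX 980 Ti:  384-bit, 7 Gbps GDDR5 -> 336.0 GB/s
print(bandwidth_gbs(4096, 1))  # Fury X:      4096-bit HBM, 1 Gbps  -> 512.0 GB/s
```

A wider bus raises the ceiling directly, which is why a small memory OC on a narrow-bus card only partially closes the gap.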

You must remember that we go to the store to purchase a product based upon what is said ON THE BOX.

At least we enthusiasts KNOW the risk of the silicon lottery. That being said, would you go to the store today and buy a Fury X over a 980 Ti? NOPE. Because even if AMD magically improved their CPU overhead to completely match Nvidia (which means they need to increase draw calls by around 400k), it would still only be on par or marginally ahead.

The 980 Ti, for all it is worth, is a better card than the Fury X ever will be. Not just because of "Nvidia", but because the 980 Ti is simply a better card architecturally.

The Fury X is hamstrung by its imbalanced ROP and TMU setup. Sure, it has 8 ACEs and will smoke the 980 Ti in compute. But it is also a $650 card, and when DX12 benchmarks show that games favoring raw compute see near Fury X performance from a 290X, that goes to show that even if the Fury X is good, it is flawed. It has so damn much potential, but no amount of drivers will fix the underlying core issue with it.

So yes, overclocking is an argument one should not ignore. However, I find it more reasonable to just look at the core issues with a product, which is why I've had second thoughts about getting the dual Fury X GPU. While it will rock the socks off my R9 295X2 and ANY 980 Ti or Titan X, it will also choke on its completely imbalanced ROP count, and as I play at 3440x1440, any texture-heavy game like Shadow of Mordor will completely tank in performance due to only 4GB of VRAM.

Certainly at the top end this is true. I don't agree that they have shown any capability to scale down efficiently. Even the Fury Nano only achieves high efficiency because it is literally power-locked and severely under-clocked (whereas Nvidia shows the ability to shrink the GPU dies themselves, allowing more predictable performance under load).

Now that said, Fiji is AMD's only new GPU this generation, so it's not like they tried and failed; they just didn't try. It will be interesting to see if they try next gen or just keep re-branding.

(I am being generous here by calling the Tonga GPU last gen, because if you count it as current gen, then from a power perspective it failed to keep up, even if it was as good as, if not better than, the best Kepler GPUs in performance per watt.)

The fact that, for almost 6 months, a 165W reference card was the fastest single GPU in the world shows great promise in where the market is moving from an efficiency perspective.

That's because GCN 1.2 is a more capable architecture than Maxwell. Maxwell's efficiency comes from Maxwell being shitty at compute. Unless Nvidia skimps on compute in Pascal, @MageTank is right when he says Pascal's efficiency will be equal to Arctic Islands'.

CPU i7 6700 Cooling Cryorig H7 Motherboard MSI H110i Pro AC RAM Kingston HyperX Fury 16GB DDR4 2133 GPU Pulse RX 5700 XT Case Fractal Design Define Mini C Storage Trascend SSD370S 256GB + WD Black 320GB + Sandisk Ultra II 480GB + WD Blue 1TB PSU EVGA GS 550 Display Nixeus Vue24B FreeSync 144 Hz Monitor (VESA mounted) Keyboard Aorus K3 Mechanical Keyboard Mouse Logitech G402 OS Windows 10 Home 64 bit

That's because GCN 1.2 is a more capable architecture than Maxwell. Maxwell's efficiency comes from Maxwell being shitty at compute. Unless Nvidia skimps on compute in Pascal, @MageTank is right when he says Pascal's efficiency will be equal to Arctic Islands'.

I'm sorry, but saying GCN 1.2 is more capable than Maxwell is a statement with less than zero evidence to back it up. GCN 1.2 is still so shitty they have to throw over 50% more SP compute than Maxwell at a game to get the same results.
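That "over 50%" figure roughly checks out against the public reference specs, if you take peak single-precision throughput as the yardstick (shader counts and clocks below are the reference specs; FMA counts as two ops):

```python
# Peak FP32 throughput: shaders * clock (MHz) * 2 ops (FMA), scaled to TFLOPS.
# Reference specs, so real boost-clock behavior will shift these numbers a bit.
def tflops(shaders, clock_mhz):
    return shaders * clock_mhz * 2 / 1e6

fury_x = tflops(4096, 1050)     # Fury X reference clock
gtx_980ti = tflops(2816, 1000)  # 980 Ti base clock
print(f"{fury_x:.1f} vs {gtx_980ti:.1f} TFLOPS, ratio {fury_x / gtx_980ti:.2f}x")
```

So on paper the Fury X carries roughly half again the SP compute of a stock 980 Ti while landing at similar gaming performance.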

 

That said, it's very interesting, and I wonder if the issues Fiji has are due to the HBM design, GCN 1.2 scaling, or a mixture of both. Honestly, we will find out later.

 

And like I mentioned earlier, we have yet to see AMD put forth a competitively efficient scaled-down version of GCN 1.2 (Tonga is not competitive with Maxwell from a power standpoint whatsoever). I understand that this generation they never really tried, but to say they will magically be better next time around, when they haven't shown that to be the case yet, is rather naive.


I'm sorry, but saying GCN 1.2 is more capable than Maxwell is a statement with less than zero evidence to back it up. GCN 1.2 is still so shitty they have to throw over 50% more SP compute than Maxwell at a game to get the same results.

That said, it's very interesting, and I wonder if the issues Fiji has are due to the HBM design, GCN 1.2 scaling, or a mixture of both. Honestly, we will find out later.

And like I mentioned earlier, we have yet to see AMD put forth a competitively efficient scaled-down version of GCN 1.2 (Tonga is not competitive with Maxwell from a power standpoint whatsoever). I understand that this generation they never really tried, but to say they will magically be better next time around, when they haven't shown that to be the case yet, is rather naive.

GCN IS better than the Maxwell architecture.
GCN IS better than the Maxwell architecture.

^Says it's better... in reply to a post stating that no evidence has been given yet... and also provides no link or info either.

Like most things on the internet: grain of salt until proven otherwise.

Maximums - Asus Z97-K /w i5 4690 Bclk @106.9Mhz * x39 = 4.17Ghz, 8GB of 2600Mhz DDR3,.. Gigabyte GTX970 G1-Gaming @ 1550Mhz

 

^Says its better...to a post that states there is no evidence given yet.......also provides no link or info either...

Like most things in the internet....Grain of salt until proven otherwise.

It is better at compute-related tasks. It's stupid; you can tell it was built for compute from the beginning (async compute engines, an extremely unbalanced ROP count), but AMD marketed it towards gamers...

It is better at compute-related tasks. It's stupid; you can tell it was built for compute from the beginning (async compute engines, an extremely unbalanced ROP count), but AMD marketed it towards gamers...

 

If they're still competitive, why does it matter?

FX 6300 @4.8 Ghz - Club 3d R9 280x RoyalQueen @1200 core / 1700 memory - Asus M5A99X Evo R 2.0 - 8 Gb Kingston Hyper X Blu - Seasonic M12II Evo Bronze 620w - 1 Tb WD Blue, 1 Tb Seagate Barracuda - Custom water cooling

Not surprising. The Fury X has at least 10%, and potentially 25%, extra left in it from future driver optimizations. This has been the case for every GPU ever in the history of everything. And since the 980 Ti has been out for so long, it has nowhere near as much.

In case the moderators do not ban me as requested, this is a notice that I have left and am not coming back.

Not surprising. The Fury X has at least 10%, and potentially 25%, extra left in it from future driver optimizations. This has been the case for every GPU ever in the history of everything. And since the 980 Ti has been out for so long, it has nowhere near as much.

Problem with that is AMD is taking too long!

The 980 Ti was released on May 31st, 2015.

The Fury X was released on June 24th, 2015.

Less than a month apart, dude. We are soon heading into 2016 and only now is the Fury X showing up...

Command Center:

Case: Corsair 900D; PSU: Corsair AX1200i; Mobo: ASUS Rampage IV Black Edition; CPU: i7-3970x; CPU Cooler: Corsair H100i; GPU: 2x ASUS DCII GTX780Ti OC; RAM: Corsair Dominator Platinum 64GB (8x8) 2133MHz CL9; Speakers: Logitech Z2300; HDD 1: Samsung 840 EVO 500GB; HDD 2: 2x Samsung 540 EVO 500GB (RAID 0); HDD 3: 2x Seagate Barracuda 3TB (RAID 0); Monitor 1: LG 42" LED TV; Monitor 2: BenQ XL2420TE; Headphones 1: Denon AH-D7000; Headphones 2: Audio-Technica AD1000PRM; Headphones 3: Sennheiser Momentum Over-Ear; Headset: Steelseries Siberia Elite; Keyboard: Corsair Strafe RGB; Mouse: Steelseries Rival 300; Other: MacBook Pro 15 Retina (Mid-2014), PlayStation 4, Nexus 7 32GB (2014), iPhone 6 64GB, Samsung Galaxy S6 64GB
Problem with that is AMD is taking too long!

The 980 Ti was released on May 31st, 2015.

The Fury X was released on June 24th, 2015.

Less than a month apart, dude. We are soon heading into 2016 and only now is the Fury X showing up...

 

Dude, I was getting improvements with my old GTX 470 up until a year ago. That's 4 years.

 

The biggest portion of improvements seem to come within a year or so, but there will be a trickle for a while even after that.

 

Also, you have to use the release date of the first chip in the same family (GM200), which is the Titan X, released March 17th. So the 980 Ti has a three-month 'head start'.

 

We also know Nvidia likes to keep their GPUs under wraps until they actually need to release them, whereas AMD lives hand-to-mouth and has to release as soon as they can. So the GM200 family is bound to have even more than that.

 

Around January, you should see the 980 Ti at close to its full potential, and the Fury X will still have half of its "1-year leap" left to go.
