
Who is the fanatic, really? Yes, it's one of those threads again

woox

It's more like :

Fury X = 980 Ti: nearly equal, with the 980 Ti performing a bit better, overclocking better and having more VRAM. But the Fury X is expecting unlocked voltage in a short time, some people have figured out how to overclock HBM (which gives quite a boost in games), and a driver update or two down the line will almost surely make it the better-performing card. If you want the better card NOW, then 980 Ti > Fury X; if you want to take the risk and wait for AMD to exploit the Fury X's potential, then Fury X > 980 Ti;

Fury > 980: beats the 980 by quite a margin for $50 more, quieter if you get the Tri-X, plus HBM for those who like being early adopters, and potential;

390X = 980: they trade blows in most games, with the 980 getting the upper hand in 60% of them, but the $70 difference in price makes the 390X the better buy;

390 > 970: more VRAM and better performance out of the box in favour of the 390, and an OC to 1200 MHz, achievable by like 95/100 cards, makes it beat a 970 OCed to 1450+ MHz;

380 > 960: better performance out of the box in favour of the 380, and a minor OC makes it beat even the most heavily OCed 960. They're priced the same, with the 4GB version of the 380 actually priced lower, at least in Europe, and it can actually use those 4 gigs;

370 > 750 Ti: the 370 performs better all around and is the better buy unless you're upgrading an OEM PC;

360 = 750: the 360 has more VRAM but that's about it; they perform very similarly;

I agree, G-Sync is a joke: it's 2 years (?) older, costs at least $200 more, and does the same thing AMD figured out how to do in a year, for cheaper.

CrossFire pros:

-Gives out more raw FPS;

-Less microstutter;

-Bridgeless;

CrossFire cons:

-Not as much support in games as SLI;

-No sexy LED-illuminated bridges (I like good looks, don't judge me);

 

  • GTX 980 Ti > Fury X across the board, even at 4K where the gap is smaller, but the 980 Ti is still undeniably better.
  • Fury beats the GTX 980 at 4K and 1440P and they're similar at 1080P. The GTX 980 overclocks better however and has a good amount less power usage/heat output. Fury is still solid but IMO the GTX 980 is better overall value unless you're playing at 4K where the Fury really pulls ahead.
  • R9 390X only slightly edges out the GTX 980 at 4K. Otherwise the GTX 980 outperforms along with having more margin for overclocking since Hawaii/Grenada doesn't OC well.
  • R9 390 outperforms the GTX 970 without question.
  • R9 380 and GTX 960 perform very similarly. From all the tests/benches I've seen, the R9 380 is about 7% faster. Knowing that the GTX 960 overclocks very well, you can expect roughly a 10-15% performance increase from overclocking it: memory can easily OC from 7000MHz to >7500MHz, and boost clock from about 1300MHz to 1550MHz. Like I said though, if they cost the same I would get the R9 380... however the GTX 960 costs a bit less.
  • R7 370 of course beats the GTX 750 Ti, it's not even close. But likewise, the 750 Ti is cheaper.
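The clock-speed claims in the 960 bullet are easy to sanity-check. A quick sketch, using only the figures quoted above (the 10-15% FPS figure being lower than the raw core-clock gain is expected, since games rarely scale 1:1 with clocks):

```python
# Percentage clock gains implied by the overclocks quoted above.
# All numbers are the ones from the post, not fresh measurements.

def pct_gain(stock: float, oc: float) -> float:
    """Percentage increase from a stock clock to an overclocked one."""
    return (oc - stock) / stock * 100

# GTX 960 memory: 7000 MHz -> 7500 MHz
mem_gain = pct_gain(7000, 7500)   # ~7.1%
# GTX 960 boost clock: ~1300 MHz -> ~1550 MHz
core_gain = pct_gain(1300, 1550)  # ~19.2%

print(f"memory +{mem_gain:.1f}%, core +{core_gain:.1f}%")
```

A ~19% core overclock yielding only 10-15% more FPS is consistent with games being partly memory- or CPU-bound.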

 

We don't have the same situation we had with the R9 200 series. Back then AMD had phenomenal value by comparison: the R9 290X cost much less than the GTX 780 Ti, the 290 much less than the 780, the 280X much less than the 770, the 280 much less than the 760. And yet they performed basically the same, trading blows at various tiers. This time, AMD doesn't have that incredible value advantage.

 

Right now you're looking at almost identical FPS per dollar from both sides, the only difference being that with Nvidia you'll also have lower power consumption/heat output... and supposedly Nvidia's drivers are better, though I haven't found that personally. So right now there's not really any good reason to get a 380 over a GTX 960: you're spending an extra $20 (or 10%) for like 7% more FPS on average. And no reason to get an R9 Fury over a GTX 980: you're spending an extra $80 (~17%) for like 10% higher framerates.
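The FPS-per-dollar argument can be made concrete with a small sketch. The baseline FPS numbers here are hypothetical placeholders; only the price and performance deltas come from the post:

```python
# FPS-per-dollar comparison using the deltas quoted above.
# Baseline FPS values are made up for illustration; the percentages
# (+7% fps for +10% price, +10% fps for ~17% price) are from the post.

def fps_per_dollar(fps: float, price: float) -> float:
    return fps / price

gtx_960 = fps_per_dollar(60.0, 200.0)          # hypothetical baseline
r9_380  = fps_per_dollar(60.0 * 1.07, 220.0)   # +7% fps, +$20

gtx_980 = fps_per_dollar(70.0, 480.0)          # hypothetical baseline
r9_fury = fps_per_dollar(70.0 * 1.10, 560.0)   # +10% fps, +$80

# In both pairs the cheaper card wins on value:
print(gtx_960 > r9_380, gtx_980 > r9_fury)
```

Because the price premium exceeds the performance premium in both pairs, the cheaper card comes out ahead regardless of the absolute FPS chosen.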

 

AMD's cards are still good, but the only ones that currently offer REALLY good value and are a BETTER option than Nvidia's IMO are the R9 390 and R7 370.

 

Thanks for explaining the CFX vs SLI thing, though I'm curious what you mean with one thing... you said CrossFire gives out more raw FPS, but also that SLI has more support in games. Wouldn't higher FPS mean games support it well enough for it to scale that well? How can it give higher FPS on average yet be less supported?

Intel i5-4690K @ 3.8GHz || Gigabyte Z97X-SLI || 8GB G.Skill Ripjaws X 1600MHz || Asus GTX 760 2GB @ 1150 / 6400 || 128GB A-Data SX900 + 1TB Toshiba 7200RPM || Corsair RM650 || Fractal 3500W


 

GTX 960 is a good buy compared to the 380

 

The 960 might be a decent buy compared to the 380, but neither one of them is a good buy compared to the Sapphire Vapor-X 280X for $200.


You asked "why should I buy a Fury X".

Not "Which one is better price to performance"

 

But they're the same price and the same performance, so it's a fair question. What does one have that the other doesn't?


@Dinkleberg I had forgotten, you and I were both right to a point.

 

There was a GK104 GTX 660, released only to OEMs (like Alienware or OriginPC), that had 1152 cores like the 760 does. It had a 192-bit memory bus though, and was only sold with 1.5GB or 3GB of vRAM (the proper amount for that bus). In that respect it's exactly the same as the OEM version of the GTX 760, which also has 1152 cores, a 192-bit memory bus and 1.5GB of vRAM.

I have finally moved to a desktop. Also my guides are outdated as hell.

 

THE INFORMATION GUIDES: SLI INFORMATION || vRAM INFORMATION || MOBILE i7 CPU INFORMATION || Maybe more someday


Nvidia made SLI; technically they may have invented multi-GPU configs (I say may, it was prolly some bloke in a shed that came up with the idea, but we never hear of those guys). Yet they're now doing worse at it than AMD, and have been for a while.

A little history on SLI / Crossfire -- neither ATi nor nVidia invented it.

3Dfx (there was once a 3rd GPU company) invented SLI.

When 3Dfx got bought out by nVidia, the engineers separated -- some went to nVidia, while the other half joined ATi (now AMD). Because of this, we now have both SLI and Crossfire.

In other news...

When someone refers to TDP as a measurement of power draw in an argument, their credibility NEEDS to be checked. TDP is a measurement of HEAT (symbol Q, used in physics and engineering), measured in Watts.

When you see a stock nVidia card draw ~50W less power than a stock AMD card when the TDPs are advertised as 150W and 250W (for example), that is why.

Heh, if the Mechanical Engineering department at my campus saw TDP being used like this, they would probably facepalm hard, or have a few heart attacks.

Intel Z390 Rig ( *NEW* Primary )

Intel X99 Rig (Officially Decommissioned, Dead CPU returned to Intel)

  • i7-8086K @ 5.1 GHz
  • Gigabyte Z390 Aorus Master
  • Sapphire NITRO+ RX 6800 XT S.E + EKwb Quantum Vector Full Cover Waterblock
  • 32GB G.Skill TridentZ DDR4-3000 CL14 @ DDR-3400 custom CL15 timings
  • SanDisk 480 GB SSD + 1TB Samsung 860 EVO +  500GB Samsung 980 + 1TB WD SN750
  • EVGA SuperNOVA 850W P2 + Red/White CableMod Cables
  • Lian-Li O11 Dynamic EVO XL
  • Ekwb Custom loop + 2x EKwb Quantum Surface P360M Radiators
  • Logitech G502 Proteus Spectrum + Corsair K70 (Red LED, anodized black, Cherry MX Browns)

AMD Ryzen Rig

  • AMD R7-5800X
  • Gigabyte B550 Aorus Pro AC
  • 32GB (16GB X 2) Crucial Ballistix RGB DDR4-3600
  • Gigabyte Vision RTX 3060 Ti OC
  • EKwb D-RGB 360mm AIO
  • Intel 660p NVMe 1TB + Crucial MX500 1TB + WD Black 1TB HDD
  • EVGA P2 850W + White CableMod cables
  • Lian-Li LanCool II Mesh - White

Intel Z97 Rig (Decommissioned)

  • Intel i5-4690K 4.8 GHz
  • ASUS ROG Maximus VII Hero Z97
  • Sapphire Vapor-X HD 7950 → EVGA GTX 1070 SC Black Edition ACX 3.0
  • 20 GB (8GB X 2 + 4GB X 1) Corsair Vengeance DDR3 1600 MHz
  • Corsair A50 air cooler → NZXT X61
  • Crucial MX500 1TB SSD + SanDisk Ultra II 240GB SSD + WD Caviar Black 1TB HDD + Kingston V300 120GB SSD [non-gimped version]
  • Antec New TruePower 550W → EVGA G2 650W + White CableMod cables
  • Cooler Master HAF 912 White → NZXT S340 Elite w/ white LED strips

AMD 990FX Rig (Decommissioned)

  • FX-8350 @ 4.8 / 4.9 GHz (given up on the 5.0 / 5.1 GHz attempt)
  • ASUS ROG Crosshair V Formula 990FX
  • 12 GB (4 GB X 3) G.Skill RipJawsX DDR3 @ 1866 MHz
  • Sapphire Vapor-X HD 7970 + Sapphire Dual-X HD 7970 in Crossfire → Sapphire NITRO R9-Fury in Crossfire → *NONE*
  • Thermaltake Frio w/ Cooler Master JetFlo's in push-pull
  • Samsung 850 EVO 500GB SSD + Kingston V300 120GB SSD + WD Caviar Black 1TB HDD
  • Corsair TX850 (ver.1)
  • Cooler Master HAF 932

 

<> Electrical Engineer , B.Eng <>

<> Electronics & Computer Engineering Technologist (Diploma + Advanced Diploma) <>

<> Electronics Engineering Technician for the Canadian Department of National Defence <>


 Also I used the wrong "there" :)

 

That's what I was alluding to. On reflection I shouldn't have been so smart-arsy, ah well.

Two mottoes to live by:   "Sometimes there are no shortcuts"

                                           "This too shall pass"


A little history on SLI / Crossfire -- neither ATi nor nVidia invented it.

3Dfx (there was once a 3rd GPU company) invented SLI.

When 3Dfx got bought out by nVidia, the engineers separated -- some went to nVidia, while the other half joined ATi (now AMD). Because of this, we now have both SLI and Crossfire.

In other news...

When someone refers to TDP as a measurement of power draw in an argument, their credibility NEEDS to be checked. TDP is a measurement of HEAT (symbol Q, used in physics and engineering), measured in Watts.

When you see a stock nVidia card draw ~50W less power than a stock AMD card when the TDPs are advertised as 150W and 250W (for example), that is why.

Heh, if the Mechanical Engineering department at my campus saw TDP being used like this, they would probably facepalm hard, or have a few heart attacks.

Good point about SLI. I should have remembered this from the history of GPUs. Ah well. IIRC, the first versions of SLI/Crossfire had some pretty weird and funny stuff, like an external cable bridge via DVI(?) and some overall broken results. Those were the days.

And yeah, I am guilty of using TDP in the wrong way as well. Point taken ^_^

Archangel (Desktop) CPU: i5 4590 GPU: Asus R9 280 3GB RAM: HyperX Beast 2x4GB PSU: SeaSonic S12G 750W Mobo: GA-H97m-HD3 Case: CM Silencio 650 Storage: 1TB WD Red
Celestial (Laptop 1) CPU: i7 4720HQ GPU: GTX 860M 4GB RAM: 2x4GB SK Hynix DDR3 Storage: 250GB 850 EVO Model: Lenovo Y50-70
Seraph (Laptop 2) CPU: i7 6700HQ GPU: GTX 970M 3GB RAM: 2x8GB DDR4 Storage: 256GB Samsung 951 + 1TB Toshiba HDD Model: Asus GL502VT

Windows 10 is now MSX! - http://linustechtips.com/main/topic/440190-can-we-start-calling-windows-10/page-6


  • GTX 980 Ti > Fury X across the board, even at 4K where the gap is smaller, but the 980 Ti is still undeniably better.
  • Fury beats the GTX 980 at 4K and 1440P and they're similar at 1080P. The GTX 980 overclocks better however and has a good amount less power usage/heat output. Fury is still solid but IMO the GTX 980 is better overall value unless you're playing at 4K where the Fury really pulls ahead.
  • R9 390X only slightly edges out the GTX 980 at 4K. Otherwise the GTX 980 outperforms along with having more margin for overclocking since Hawaii/Grenada doesn't OC well.
  • R9 390 outperforms the GTX 970 without question.
  • R9 380 and GTX 960 perform very similarly. From all the tests/benches I've seen, the R9 380 is about 7% faster. Knowing that the GTX 960 overclocks very well, you can expect roughly a 10-15% performance increase from overclocking it: memory can easily OC from 7000MHz to >7500MHz, and boost clock from about 1300MHz to 1550MHz. Like I said though, if they cost the same I would get the R9 380... however the GTX 960 costs a bit less.
  • R7 370 of course beats the GTX 750 Ti, it's not even close. But likewise, the 750 Ti is cheaper.
 

We don't have the same situation we had with the R9 200 series. Back then AMD had phenomenal value by comparison: the R9 290X cost much less than the GTX 780 Ti, the 290 much less than the 780, the 280X much less than the 770, the 280 much less than the 760. And yet they performed basically the same, trading blows at various tiers. This time, AMD doesn't have that incredible value advantage.

 

Right now you're looking at almost identical FPS per dollar from both sides, the only difference being that with Nvidia you'll also have lower power consumption/heat output... and supposedly Nvidia's drivers are better, though I haven't found that personally. So right now there's not really any good reason to get a 380 over a GTX 960: you're spending an extra $20 (or 10%) for like 7% more FPS on average. And no reason to get an R9 Fury over a GTX 980: you're spending an extra $80 (~17%) for like 10% higher framerates.

 

AMD's cards are still good, but the only ones that currently offer REALLY good value and are a BETTER option than Nvidia's IMO are the R9 390 and R7 370.

 

Thanks for explaining the CFX vs SLI thing, though I'm curious what you mean with one thing... you said CrossFire gives out more raw FPS, but also that SLI has more support in games. Wouldn't higher FPS mean games support it well enough for it to scale that well? How can it give higher FPS on average yet be less supported?

The Fury X is more or less equal in performance to the 980 Ti; I cited the pros of the 980 Ti, but some people like new technology and may take the risk and wait for AMD to fully exploit the Fury X's power. The Fury beats the 980 even at 1080p, although it's only a 2-3 FPS difference, and both cards are made for 1440p, not 1080p (for 1080p gaming, the R9 390 is king). At 1440p the Fury offers better performance by quite a margin, and for only $50 more, not $80 (in Europe it's the same price), it's the better deal. The 390X loses out to the 980 in 60% of games, but it's only a 2-3 FPS difference, not worth the $70 more for the 980. There is like a 10% difference in gaming terms between the 960 and 380; the 960 is a good OCer, but, again, a slight overclock of the 380, which it is more than capable of, makes it beat even the most highly OCed 960.

In the same game where both CF and SLI are supported, a 390X CF or a Fury X CF will beat a 980 SLI or a 980 Ti SLI, respectively, in terms of FPS. But more games support SLI than CrossFire: all AAA games support both, but not all B-titles do. Nvidia makes a game compatible with SLI in roughly the same week the game launches, while AMD takes a little more time to make games CF-compatible.


The 960 might be a decent buy compared to the 380, but neither one of them is a good buy compared to the Sapphire Vapor-X 280X for $200.

 

Yeah, I was kinda disappointed by the downgrade from 3GB cards to 2GB cards at the mid-range... I had hoped that, going from the 3GB 280 to the 256-bit 285/380, they would at the very least have made all models 4GB and as cheap as the 280/280X... but alas, it didn't happen :/

Intel i5-4690K @ 3.8GHz || Gigabyte Z97X-SLI || 8GB G.Skill Ripjaws X 1600MHz || Asus GTX 760 2GB @ 1150 / 6400 || 128GB A-Data SX900 + 1TB Toshiba 7200RPM || Corsair RM650 || Fractal 3500W


The Fury X is more or less equal in performance to the 980 Ti

 

[benchmark charts omitted: Fury X vs GTX 980 Ti results at multiple resolutions]

 

The only games where the Fury X outperforms the GTX 980 Ti are ones that in general perform much better on AMD cards... for example, Far Cry 4, and in some tests Shadow of Mordor, will do better on the Fury X, but in those games the R9 290X also outperforms the GTX 980 (which doesn't happen in many games). Overall, on average the 980 Ti provides better performance than the Fury X at all resolutions. The gap is pretty small at 4K, but even so, the 980 Ti is the better card.

 

The Fury beats the 980 even at 1080p, although it's only a 2-3 FPS difference, and both cards are made for 1440p, not 1080p (for 1080p gaming, the R9 390 is king). At 1440p the Fury offers better performance by quite a margin, and for only $50 more, not $80 (in Europe it's the same price), it's the better deal.

 

I was looking directly at Newegg and PCPartPicker prices: the GTX 980 is available at $480+ while the Fury is available at $560+. That's an $80 difference. But yeah, after looking a bit longer I've found some 1080P benches, and it appears that the Fury is better even at 1080P.

 

This is the only source I've found so far that tested at 1080P with them: http://www.extremetech.com/extreme/209665-amd-radeon-r9-fury-review-chasing-the-gtx-980s-sweet-spot/2

 

 

The 390X loses out to the 980 in 60% of games, but it's only a 2-3 FPS difference, not worth the $70 more for the 980.

 

Fair enough. The GTX 980's value may not be very good, but it does occupy its own price point. This is basically the same deal as the 960 and 380: you're paying more and getting more, but not as much more as you're paying.

 

There is like a 10% difference in gaming terms between the 960 and 380; the 960 is a good OCer, but, again, a slight overclock of the 380, which it is more than capable of, makes it beat even the most highly OCed 960.

 

There is about a 7% difference, and the price difference of the 2GB versions is about 10%, so value-wise they are very close. In terms of overclocking, the gap between them shrinks when both are overclocked: the GTX 960 gains 10-15% performance when overclocked, while the 380 gains a bit less than 10%. Compare the overclocking results for the GTX 960 and R9 380:

 

http://www.guru3d.com/articles_pages/asus_radeon_r9_380_strix_review,22.html

http://www.guru3d.com/articles_pages/gigabyte_geforce_gtx_960_g1_gaming_4gb_review,25.html

 

The R9 380 gained over 10% in only one game, while the GTX 960 gained less than 10% in only one game. So the GTX 960 does overclock better.
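Putting those overclocking numbers together shows how much of the stock gap survives; a rough sketch, assuming a 12.5% gain for the 960 (mid-point of 10-15%) and 8% for the 380:

```python
# How the stock ~7% gap between the R9 380 and GTX 960 changes once
# both cards are overclocked (gains are rough figures from the
# Guru3D reviews linked above; the exact percentages are assumptions).

stock_960 = 1.00
stock_380 = 1.07            # ~7% faster at stock

oc_960 = stock_960 * 1.125  # ~12.5% OC gain, mid-point of 10-15%
oc_380 = stock_380 * 1.08   # "a bit less than 10%"

gap_pct = (oc_380 / oc_960 - 1) * 100
print(f"overclocked gap: {gap_pct:+.1f}%")  # the 7% lead shrinks to ~3%
```

So even with the 960's superior overclocking, under these assumptions the 380 stays slightly ahead; the gap just narrows.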

 

In the same game where both CF and SLI are supported, a 390X CF or a Fury X CF will beat a 980 SLI or a 980 Ti SLI, respectively, in terms of FPS. But more games support SLI than CrossFire: all AAA games support both, but not all B-titles do. Nvidia makes a game compatible with SLI in roughly the same week the game launches, while AMD takes a little more time to make games CF-compatible.

 

Gotcha. Thanks for explaining.

Intel i5-4690K @ 3.8GHz || Gigabyte Z97X-SLI || 8GB G.Skill Ripjaws X 1600MHz || Asus GTX 760 2GB @ 1150 / 6400 || 128GB A-Data SX900 + 1TB Toshiba 7200RPM || Corsair RM650 || Fractal 3500W


[quoted benchmark charts omitted]

I honestly think that with better drivers things can definitely improve, and that's not taking any side. What really bothers me is its lack of OCing headroom (let's not kid ourselves here) and the lack of custom cards (it's pretty much reference or bust, if I remember correctly). For example, I'm 350,000% certain that a vendor like Sapphire could innovate on the cooling system and improve it quite a bit. Then its 4GB vRAM limit (HBM be DAMNED) means it instantly gets killed as a 4K gaming card for many users who know what they're doing... ESPECIALLY users who run multiple monitors and game on a 4K screen while doing things on other screens, particularly in borderless windowed or regular windowed mode.

Granted, those people are mostly a minority, but they're the users who would really push for the best, and they're unlikely to consider a Fury X for exactly that reason.

I have finally moved to a desktop. Also my guides are outdated as hell.

 

THE INFORMATION GUIDES: SLI INFORMATION || vRAM INFORMATION || MOBILE i7 CPU INFORMATION || Maybe more someday


I honestly think that with better drivers things can definitely improve, and that's not taking any side. What really bothers me is its lack of OCing headroom (let's not kid ourselves here) and the lack of custom cards (it's pretty much reference or bust, if I remember correctly). For example, I'm 350,000% certain that a vendor like Sapphire could innovate on the cooling system and improve it quite a bit. Then its 4GB vRAM limit (HBM be DAMNED) means it instantly gets killed as a 4K gaming card for many users who know what they're doing... ESPECIALLY users who run multiple monitors and game on a 4K screen while doing things on other screens, particularly in borderless windowed or regular windowed mode.

Granted, those people are mostly a minority, but they're the users who would really push for the best, and they're unlikely to consider a Fury X for exactly that reason.

 

Yeah, drivers can definitely improve things. I'm honestly quite surprised that the Fury X didn't have voltage control from the get-go... with watercooling it makes perfect sense, since the cooler could handle the extra heat from added voltage.

Intel i5-4690K @ 3.8GHz || Gigabyte Z97X-SLI || 8GB G.Skill Ripjaws X 1600MHz || Asus GTX 760 2GB @ 1150 / 6400 || 128GB A-Data SX900 + 1TB Toshiba 7200RPM || Corsair RM650 || Fractal 3500W


I honestly think that with better drivers things can definitely improve, and that's not taking any side. What really bothers me is its lack of OCing headroom (let's not kid ourselves here) and the lack of custom cards (it's pretty much reference or bust, if I remember correctly). For example, I'm 350,000% certain that a vendor like Sapphire could innovate on the cooling system and improve it quite a bit. Then its 4GB vRAM limit (HBM be DAMNED) means it instantly gets killed as a 4K gaming card for many users who know what they're doing... ESPECIALLY users who run multiple monitors and game on a 4K screen while doing things on other screens, particularly in borderless windowed or regular windowed mode.

Granted, those people are mostly a minority, but they're the users who would really push for the best, and they're unlikely to consider a Fury X for exactly that reason.

The Fury X is expecting a voltage unlock soon, and since it's watercooled and has VRMs that can deliver up to 375W, it'll hopefully be a good OCer; but right now, as you said, let's not kid ourselves. The cooling solution can be made better with one simple thing: having an OEM other than Cooler Master make the watercooler. But it can't be made a lot better, and if I remember correctly, EK has made a waterblock for the Fury X. HBM is a cool new technology, which is why they used it, though I'm certain they could've put another 4 gigs of GDDR5 on it; the speed decrease wouldn't be as f-ed up as the 970's, that's for sure. I haven't seen any reviewer run out of VRAM in any game tested, even with just 4 gigs, but having your two flagship cards carry LESS VRAM than your lower-end cards while marketing all four for 4K?! That's not right. Meanwhile the R9 Nano won't run into as much trouble, as 4 gigs is just fine considering it may be fighting the 390 and the 970.

People that get the Fury X are already a minority.


[quoted benchmark charts omitted]

The only games where the Fury X outperforms the GTX 980 Ti are ones that in general perform much better on AMD cards... for example, Far Cry 4, and in some tests Shadow of Mordor, will do better on the Fury X, but in those games the R9 290X also outperforms the GTX 980 (which doesn't happen in many games). Overall, on average the 980 Ti provides better performance than the Fury X at all resolutions. The gap is pretty small at 4K, but even so, the 980 Ti is the better card.

Are these reviews using the current Catalyst 15.7 driver? It increased frame rates on AMD cards in most games, up to 15% in some, and it made the Fury X more or less the 980 Ti's equal.

PS: The frame rate increase is based on what people on Reddit said; it may be less or more.


Are these reviews using the current Catalyst 15.7 driver? It increased frame rates on AMD cards in most games, up to 15% in some, and it made the Fury X more or less the 980 Ti's equal.

 

Can you find any sources demonstrating this? I've never found driver updates to make much of a difference in games. When they say "up to x% performance gain in Y game", it's usually negligible for most people, and applies only under whatever specific circumstances and configuration they improved.

 

On my own system with a GTX 760 I can use drivers from 2 years ago and drivers from today and get basically identical performance in every game, even games released long after those old drivers... such as GTA V. Same deal with my friend's R9 270 system... I upgraded him to the GTA V driver and it made no noticeable difference in performance.

 

I find it hard to believe that there were significant improvements to framerates across most games on AMD cards, but if it can be demonstrated I'll eat my words.

 

edit:

Found a source that tested 15.7... it doesn't seem like the driver made a meaningful difference.

http://www.babeltechreviews.com/catalyst-15-7-whql-performance-analysis-featuring-fury-x-290x-3/3/

Intel i5-4690K @ 3.8GHz || Gigabyte Z97X-SLI || 8GB G.Skill Ripjaws X 1600MHz || Asus GTX 760 2GB @ 1150 / 6400 || 128GB A-Data SX900 + 1TB Toshiba 7200RPM || Corsair RM650 || Fractal 3500W


I honestly think that with better drivers things can definitely improve, and that's not taking any side. What really bothers me is its lack of OCing headroom (let's not kid ourselves here) and the lack of custom cards (it's pretty much reference or bust, if I remember correctly). For example, I'm 350,000% certain that a vendor like Sapphire could innovate on the cooling system and improve it quite a bit. Then its 4GB vRAM limit (HBM be DAMNED) means it instantly gets killed as a 4K gaming card for many users who know what they're doing... ESPECIALLY users who run multiple monitors and game on a 4K screen while doing things on other screens, particularly in borderless windowed or regular windowed mode.

Granted, those people are mostly a minority, but they're the users who would really push for the best, and they're unlikely to consider a Fury X for exactly that reason.

It's been proven that 4GB of HBM is enough for 4K. But as soon as you go up to 5K, it takes a massive dump.

 

Also, Sapphire probably just released the normal Tri-X Fury with the reference PCB because it's cheaper. However, knowing Sapphire, if the card is popular (why spend resources making an uber 1337 haxor version if the card type never got popular?), they will make an OC version. If it is REALLY popular, they may do what they did with the 280X and produce a TOXIC version: a version so tripped out, so binned and so high-strung that it literally comes pre-OCed to the limits of the GPU design.

 

Will such a thing happen? Time will tell. Let's hope it does.

 

As for vRAM, unless you play a game like GTA V at 4K in borderless mode with another game running on another monitor, you won't consume enough vRAM for it to matter. Remember, a web page needs at most 60-100MB of vRAM, and that's a site with A LOT of images or videos. Most web pages compress their images into thumbnails to save bandwidth, which incidentally also saves you some vRAM, unless you intend to view 200 images at full size. Gaming at 4K with Chrome or something else running on another monitor shouldn't hurt your performance much.
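That back-of-envelope budget can be written out explicitly; a sketch where the browser figure is the post's upper estimate and the game footprint is a made-up number for a hypothetical heavy title:

```python
# Rough vRAM budget for a 4GB card: a heavy 4K game plus a browser
# on a second monitor. The 100MB browser figure is the post's upper
# estimate; the 3500MB game figure is a hypothetical assumption.

TOTAL_VRAM_MB = 4096

game_at_4k_mb = 3500   # hypothetical demanding game at 4K
browser_mb    = 100    # image-heavy web page, per the post

headroom_mb = TOTAL_VRAM_MB - game_at_4k_mb - browser_mb
print(f"headroom: {headroom_mb} MB")  # still positive, so no spillover
```

As long as the headroom stays positive, nothing spills into system RAM and performance shouldn't suffer; a second *game* on the other monitor is what would blow this budget.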


Also, Sapphire probably just released the normal Tri-X Fury with the reference PCB because it's cheaper. However, knowing Sapphire, if the card is popular (why spend resources making an uber 1337 haxor version if the card type never got popular?), they will make an OC version. If it is REALLY popular, they may do what they did with the 280X and produce a TOXIC version: a version so tripped out, so binned and so high-strung that it literally comes pre-OCed to the limits of the GPU design.

 

He's talking about the Fury X. The watercooled one is reference design only... which sucks, because there could be better-quality pumps.

Intel i5-4690K @ 3.8GHz || Gigabyte Z97X-SLI || 8GB G.Skill Ripjaws X 1600MHz || Asus GTX 760 2GB @ 1150 / 6400 || 128GB A-Data SX900 + 1TB Toshiba 7200RPM || Corsair RM650 || Fractal 3500W


Can you find any sources demonstrating this? I've never found driver updates to make much of a difference in games. When they say "up to x% performance gain in Y game", it's usually negligible for most people, and applies only under whatever specific circumstances and configuration they improved.

 

On my own system with a GTX 760 I can use drivers from 2 years ago and drivers from today and get basically identical performance in every game, even games released long after those old drivers... such as GTA V. Same deal with my friend's R9 270 system... I upgraded him to the GTA V driver and it made no noticeable difference in performance.

 

I find it hard to believe that there were significant improvements to framerates across most games on AMD cards, but if it can be demonstrated I'll eat my words.

 

edit:

Found a source that tested 15.7... it doesn't seem like the driver made a meaningful difference.

http://www.babeltechreviews.com/catalyst-15-7-whql-performance-analysis-featuring-fury-x-290x-3/3/

My 295X2 gained 16 FPS in Far Cry 4, ultra settings, 3440x1440. I gained 7 FPS in The Witcher 3, ultra settings, no HW, 3440x1440. And I haven't really tested any other games to any notable extent since then.

But this is a DUAL 290X card getting that much improvement, so it may just be super-polished CF drivers rather than single-card performance being improved.


My 295X2 gained 16 FPS in Far Cry 4, ultra settings, 3440x1440. I gained 7 FPS in The Witcher 3, ultra settings, no HW, 3440x1440. And I haven't really tested any other games to any notable extent since then.

But this is a DUAL 290X card getting that much improvement, so it may just be super-polished CF drivers rather than single-card performance being improved.

 

Yeah, that wouldn't surprise me. I know the drivers make a big difference in CrossFire and SLI profiles.

Intel i5-4690K @ 3.8GHz || Gigabyte Z97X-SLI || 8GB G.Skill Ripjaws X 1600MHz || Asus GTX 760 2GB @ 1150 / 6400 || 128GB A-Data SX900 + 1TB Toshiba 7200RPM || Corsair RM650 || Fractal 3500W


He's talking about the Fury X. The water-cooled one is reference design only... which sucks, because board partners could have offered better-quality pumps.

EKWB sells custom waterblocks for it, so if the pump annoys you, you can custom-cool it. Sure, it's an expensive solution to the whole thing, but it shouldn't be a problem.


Yeah, wouldn't surprise me. I know the drivers make a big difference in CrossFire and SLI profiles.

I noticed a 3-5% performance improvement in games and benchmarks. My 3DMark score jumped by 200 points. Tested several times; not an error.
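For what it's worth, a 200-point jump and a 3-5% gain are mutually consistent for typical scores. A quick sketch (the baseline score below is illustrative; the post doesn't give one):

```python
# Sanity-check the claim: is a 200-point jump consistent with a 3-5% gain?
baseline = 5500            # hypothetical 3DMark score before the driver update
after = baseline + 200     # reported jump of 200 points

gain_pct = (after - baseline) / baseline * 100
print(f"gain: {gain_pct:.1f}%")  # about 3.6%, inside the claimed 3-5% range
```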

Archangel (Desktop) CPU: i5 4590 GPU:Asus R9 280  3GB RAM:HyperX Beast 2x4GBPSU:SeaSonic S12G 750W Mobo:GA-H97m-HD3 Case:CM Silencio 650 Storage:1 TB WD Red
Celestial (Laptop 1) CPU:i7 4720HQ GPU:GTX 860M 4GB RAM:2x4GB SK Hynix DDR3Storage: 250GB 850 EVO Model:Lenovo Y50-70
Seraph (Laptop 2) CPU:i7 6700HQ GPU:GTX 970M 3GB RAM:2x8GB DDR4Storage: 256GB Samsung 951 + 1TB Toshiba HDD Model:Asus GL502VT

Windows 10 is now MSX! - http://linustechtips.com/main/topic/440190-can-we-start-calling-windows-10/page-6


It's been shown that 4 GB of HBM is enough for 4K.

 

As for VRAM, unless you play a game like GTA V at 4K in borderless mode with another game running on another monitor, you won't use enough VRAM for it to matter.

You should read my guide =D.

 

You know not what you're talking about =D.

I have finally moved to a desktop. Also my guides are outdated as hell.

 

THE INFORMATION GUIDES: SLI INFORMATION || vRAM INFORMATION || MOBILE i7 CPU INFORMATION || Maybe more someday


I did have one issue with their drivers while using Google Chrome on Windows, so I uninstalled Chrome... (using it caused driver crashes for no reason...) :(

After looking it up, the 970 drivers + Chrome do not mix... (it could be fixed by now, but I'm not sure...) :P

Except for that one thing, I have had no issues with Nvidia drivers! :D

Yea, I think they already fixed it :D



CPU: Intel Core i7 6700K | COOLER: Corsair H105 | MOBO: ASUS Z170i Gaming Pro AC | RAM: Corsair LPX DDR4 16GB 2400MHz | GPU: EVGA GTX 980 Classified | CASE: BitFenix Prodigy | SSD: Samsung 950 Pro 512GB | PSU: XFX XTR 650W | Monitor: BenQ XL2411Z | Keyboard: Ozone Strike Pro | Mouse: A4 Tech X7 F4 | MousePad: Ozone


PlayStation 2 | PSP 2000 | Game Boy Color | Nintendo DS Lite | Nintendo 3DS | Wii


Sony Xperia J (Why u so bad D:) | iPod 4th gen | iPhone 4 | Yarvik Xenta 13c (3muchchrome5her)


Pentium B980 | 500GB WD Blue | Intel HD Graphixxx | 4Gegabeytes of REHAM

Current OS: MSX 10.0 ( ͡° ͜ʖ ͡°)Ilikethelennyfaceyouknow( ͡° ͜ʖ ͡°) Windows Password Reset Guide


If you aren't willing to read the entire wall of text, don't bother responding. I am here to have a civilized conversation, not troll. See this as more of an article rather than a thread.

 

It's officially been over a month since the release of the Fury X and the R9 390X. AMD has had more than enough time to deliver proper drivers for these cards and increase their performance. As someone planning to build a gaming-oriented computer in the near future, I would like to call out to AMD card owners and have them post their honest opinions on whether I should buy a 980 Ti or a Fury X. Why should I give up:

 

ShadowPlay

GeForce Experience

6 GB of VRAM

Lower power consumption

Much better drivers

 

AMD's advertising was superb. They managed to convince pretty much everyone that the Fury X was going to be a revolutionary card, and even went so far as to dub it the "Titan killer". In reality, however, the Fury X is only about as good as a 980 Ti, and with the extra bonuses of the latter, is there really much incentive to buy a Fury X, other than being an early adopter of HBM and water cooling?

You call us fanatics for buying nVidia cards without consideration. The truth is, fanaticism is the mindless following of a specific ideology. Sound familiar? It should, because it describes every AMD fan out there. We have every reason to want an nVidia card, the ones stated above. What about you, though? That makes you the fanatics, not us.

You might argue that we should buy an AMD card in order to support the company, which is in a dire financial situation ($1.67 per share), and to avoid a monopoly. A fair point. But I am not a charity. I am not here to help a $1.3B company. I want to play video games at the best possible framerate for the money I spend. If AMD wants to attract me as a customer, they are going to have to create more competitive products, meaning their combination of quality and price has to beat nVidia's, which is not the case (high bandwidth memory? more like large-bus-width memory; the bandwidth is only slightly higher than GDDR5's).
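On the bandwidth aside: peak memory bandwidth is roughly bus width (bits) × per-pin data rate ÷ 8. A minimal sketch using the advertised specs of both cards (spec-sheet numbers, not measurements):

```python
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s from bus width and per-pin data rate."""
    return bus_width_bits * data_rate_gbps / 8

# Fury X: 4096-bit HBM interface at 1 Gbps per pin (500 MHz DDR)
fury_x = peak_bandwidth_gbs(4096, 1.0)       # 512.0 GB/s
# GTX 980 Ti: 384-bit GDDR5 interface at 7 Gbps per pin
gtx_980_ti = peak_bandwidth_gbs(384, 7.0)    # 336.0 GB/s

print(fury_x, gtx_980_ti)
```

So the spec-sheet figures come out to roughly 512 vs 336 GB/s; the "large bus width" quip is accurate in the sense that HBM gets its bandwidth from a very wide, relatively slow bus rather than a narrow, fast one.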

 

So tell me, fanatics. Why should I buy a fury x?

 

Screw the 390X. Just buy a 390; it's almost £100 cheaper and only slightly slower.


You'll sure miss AMD when they go bankrupt and Nvidia can double their prices ;P

 

I'm curious where people get the idea that company X needs company Y or else it can charge three times more... they can't do that, for one simple reason: value. Competition drives prices down to some degree, but not by much; otherwise prices would endlessly drop toward nothing if it were really a case of back-and-forth price cutting. Manufacturing cost (plus factory setup and the like) and how much people are willing to pay have a huge impact on prices...

 

It's not like, if AMD went out of business, people would suddenly be fine with spending $500 on a GTX 960 or $600 on an i5-4690K.

 

The hardware market is much more profitable when you sell TONS of units with an o.k. margin than if you sell a few with massive margins. Like if Nvidia sold only GTX 960s and nothing else, they'd be more profitable than if they sold only GTX Titan Zs and nothing else.
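That volume-versus-margin point can be illustrated with toy numbers (all figures hypothetical, chosen only for shape, not taken from any real financials):

```python
# Hypothetical: a mainstream card sold in volume vs a halo card sold in small numbers.
mainstream_units, mainstream_margin = 1_000_000, 40   # $40 profit per card
halo_units, halo_margin = 20_000, 1_000               # $1,000 profit per card

mainstream_profit = mainstream_units * mainstream_margin  # $40,000,000
halo_profit = halo_units * halo_margin                    # $20,000,000
print(mainstream_profit, halo_profit)
```

Even at a 25x higher per-unit margin, the halo card loses to the mainstream card once the volume gap is wide enough.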

Intel i5-4690K @ 3.8GHz || Gigabyte Z97X-SLI || 8GB G.Skill Ripjaws X 1600MHz || Asus GTX 760 2GB @ 1150 / 6400 || 128GB A-Data SX900 + 1TB Toshiba 7200RPM || Corsair RM650 || Fractal 3500W

