
AMD's own testing? Fury X vs GTX 980Ti

zMeul

So, now that the cat is out of the bag (a week late, since I had to sit in the corner): who is still ready to defend the AMD Fury X's awesome 95 MHz overclocking capability? See, this is why I talked about reading up on hardware for years and being able to correlate hidden information when you're presented with evidence.

 

It's still a pretty nice card, but the drivers are still pretty terrible. It can't really compete below 4K.

We should wait and see what overclocking is like if/when the voltage is unlocked. It has around 15 °C of thermal headroom, after all.
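For a sense of scale, here is a minimal sketch of what that overclock works out to in relative terms, assuming the Fury X's stock core clock of 1050 MHz (the 95 MHz figure is the one quoted in this thread):

```python
# Rough overclocking math for the Fury X; the 95 MHz headroom is the figure
# quoted in this thread, and 1050 MHz is the card's stock core clock.
stock_clock_mhz = 1050
overclock_mhz = 95

oc_percent = overclock_mhz / stock_clock_mhz * 100
print(f"+{overclock_mhz} MHz over {stock_clock_mhz} MHz is a {oc_percent:.1f}% overclock")
# -> roughly a 9% bump, which is why it reads as underwhelming for a flagship
```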


So, now that the cat is out of the bag (a week late, since I had to sit in the corner): who is still ready to defend the AMD Fury X's awesome 95 MHz overclocking capability? See, this is why I talked about reading up on hardware for years and being able to correlate hidden information when you're presented with evidence.

It's still a pretty nice card, but the drivers are still pretty terrible. It can't really compete below 4K.

What is there to defend? The voltages are locked. Simple as that.

FX 6300 @4.8 Ghz - Club 3d R9 280x RoyalQueen @1200 core / 1700 memory - Asus M5A99X Evo R 2.0 - 8 Gb Kingston Hyper X Blu - Seasonic M12II Evo Bronze 620w - 1 Tb WD Blue, 1 Tb Seagate Barracuda - Custom water cooling


<video>

 

Go to 41 minutes in this video and look at how badly AMD tried to cheat on their benchmarks.

Of course AMD's own tests will be biased as fuck. I couldn't believe how stupid some people were when they said things like "AMD has no reason to lie" when I told them to not trust AMD's own benchmarks.

Of course they will make their card seem as good as possible and the competitor as bad as possible. Nvidia most likely does this too. AMD and Nvidia are not your friends. They are companies trying to take as much money from you as possible, even if it means twisting the truth to fool you.

 

Never trust benchmarks that have not been validated by an independent third party. The only people who trusted AMD's benchmarks were either naive, stupid or massive AMD fanboys. I hope this has taught them what a colossal dick AMD really is.


Why would it need to be a Titan X killer when Nvidia killed that one off themselves by pricing the 980 Ti so low?

So, three generations: that's how long it takes to forget that the Titan chip (Gx100) was originally the GTX x70/x80 line, and what you now call an x80 used to be an x60.

They roughly doubled the money they earn from these chips, and even the 980 Ti is still a few hundred *insert currency here* above its old price.

How hard is it to see that?

They created an artificial "super card" by adding a ridiculous amount of RAM to it and charging a grand for it.

Then they created a "super card lite" by cutting the specs down slightly but reducing the price by more, and voilà!

Why are there no x50-and-below desktop cards? Because those chips are now used in higher-end cards. They bumped the numbers on all the cards, invented some new ones, and you get sucked in by the new "low low price".

wake the fuck up already...


If these are the sort of results to expect, then I'm impressed. AMD really seems to have done it right with Fiji, in my opinion, especially as it's beating the 980 Ti at the same price point. With the Titan X so close in performance, a bit of overclocking should see it comfortably beat out all the competition, and with an AIO already attached, the temperatures can easily handle it. If I had the money, I'd be racing to bag one of these right now.

I'm no specialist (and far from becoming one) when it comes to card temperatures, but don't AMD cards generally run hotter than Nvidia cards at the same performance level?

If so, they might have gone with the AIO cooler so the out-of-the-box experience wouldn't thermal throttle.

But I might also be dead wrong.


I'm no specialist (and far from becoming one) when it comes to card temperatures, but don't AMD cards generally run hotter than Nvidia cards at the same performance level?

If so, they might have gone with the AIO cooler so the out-of-the-box experience wouldn't thermal throttle.

But I might also be dead wrong.

 

That used to be the case with the reference-cooled 290/290X. Temperature-wise, both vendors are usually on roughly the same level.

FX 6300 @4.8 Ghz - Club 3d R9 280x RoyalQueen @1200 core / 1700 memory - Asus M5A99X Evo R 2.0 - 8 Gb Kingston Hyper X Blu - Seasonic M12II Evo Bronze 620w - 1 Tb WD Blue, 1 Tb Seagate Barracuda - Custom water cooling


A recent podcast by The Tech Report about Fiji pretty much nails this one to the wall and explains everything you could possibly want to know about the Fury X and Fiji in general.

 

Starting at 40:49 in the video, they discuss why AMD's internal benchmarks were higher than reviewers'. Honestly, the entire video is pretty educational, but this part reveals that AMD was playing to Fiji's strengths and avoiding its weak points in the reviewer-guide benchmarks, namely by turning off anisotropic filtering (AF), and so on.

 

https://youtu.be/28CECF_Cieo?t=2449

R9 3900XT | Tomahawk B550 | Ventus OC RTX 3090 | Photon 1050W | 32GB DDR4 | TUF GT501 Case | Vizio 4K 50'' HDR

 


A recent podcast by The Tech Report about Fiji pretty much nails this one to the wall and explains everything you could possibly want to know about the Fury X and Fiji in general.

 

Starting at 40:49 in the video, they discuss why AMD's internal benchmarks were higher than reviewers'. Honestly, the entire video is pretty educational, but this part reveals that AMD was playing to Fiji's strengths and avoiding its weak points in the reviewer-guide benchmarks, namely by turning off anisotropic filtering (AF), and so on.

 

https://youtu.be/28CECF_Cieo?t=2449

 

Yep, choosing settings to avoid getting bottlenecked by the 64 ROPs on the Fury, which is what you should be doing at 4K anyway. As it's a card meant for 4K, only light anti-aliasing, if any at all, is needed. Unfortunately, a lot of reviewers are using MSAA and the like at 4K, which really is dumb as boots.

In case the moderators do not ban me as requested, this is a notice that I have left and am not coming back.


Yep, choosing settings to avoid getting bottlenecked by the 64 ROPs on the Fury, which is what you should be doing at 4K anyway. As it's a card meant for 4K, only light anti-aliasing, if any at all, is needed. Unfortunately, a lot of reviewers are using MSAA and the like at 4K, which really is dumb as boots.

AMD had anisotropic filtering set to 0...

Nobody in their right mind would do that, even at 4K. Let's not pretend AMD was trying to tweak the benchmark settings to what people actually want to use. They ignored what people would want and just set the settings to whatever gave them the biggest advantage over Nvidia. There is just no way you can spin what AMD did into anything positive. They simply tried to trick people.

 

It's completely fine that they were trying to trick people by the way. AMD's own numbers are from marketing material. You shouldn't trust them for the same reason you shouldn't trust any commercials you see, because they are designed to fool you into buying their product.


fooling, tricking, blah blah

 

Their settings were always included with the original benches. You can't say they were trying to trick anyone when they were always up front about their method.

In case the moderators do not ban me as requested, this is a notice that I have left and am not coming back.


Their settings were always included with the original benches. You can't say they were trying to trick anyone when they were always up front about their method.

I think they were, because the benchmarks they ran made no sense, and the settings were not included in the actual benchmark graphs (which are what most people look at when comparing two GPUs).

You can't just run benchmarks at flat-out stupid settings, make graphs out of them, and then go, "No, these graphs were totally not made to trick people! See, I made a note about the stupid settings in an appendix several pages later in the reviewer guide."

 

It's like the fine print that gets read off very quickly at the end of some commercials. It's not there to remove any possibility of people getting tricked; it's there because they have to put it there, even if they don't want to.


A recent podcast by The Tech Report about Fiji pretty much nails this one to the wall and explains everything you could possibly want to know about the Fury X and Fiji in general.

 

Starting at 40:49 in the video, they discuss why AMD's internal benchmarks were higher than reviewers'. Honestly, the entire video is pretty educational, but this part reveals that AMD was playing to Fiji's strengths and avoiding its weak points in the reviewer-guide benchmarks, namely by turning off anisotropic filtering (AF), and so on.

 

https://youtu.be/28CECF_Cieo?t=2449

 

Very interesting video indeed. Obviously setting anisotropic filtering to 0 is outright dumb. AF is so much more important than AA. 

 

But going further into the video, they have a very interesting debate about GameWorks.

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


-snip-

I find it very weird to set AF to 0, since AF doesn't affect performance by more than 1-2 FPS, even at 4K.

FX 6300 @4.8 Ghz - Club 3d R9 280x RoyalQueen @1200 core / 1700 memory - Asus M5A99X Evo R 2.0 - 8 Gb Kingston Hyper X Blu - Seasonic M12II Evo Bronze 620w - 1 Tb WD Blue, 1 Tb Seagate Barracuda - Custom water cooling


In theory, more watts should equal more performance.

 

But multiple sources are showing benchmarks in Nvidia's favor.

MOBO- MSI z97 gaming 5  CPU- I5 4690k   GPU- G1 Gaming gtx970  RAM- 16gb Corsair Vegeance  PSU- Corsair RM650  COOLING- Corsair H110iGT 

STORAGE- AMD r7 120gb SSD / 500gb HDD  CASE- Phanteks Enthoo Luxe   Sound-  Logitech z506 / Corsair Void  Peripherals- G710, G502


In theory, more watts should equal more performance.

 

But multiple sources are showing benchmarks in Nvidia's favor.

Ehhh... no? Not even in theory does more power consumption equal more performance.

More bhp doesn't equal a faster car either. There are a lot of other factors you have to take into consideration, such as efficiency.

An example of this would be lighting. An LED bulb can produce far more light per watt than an incandescent bulb because it is more efficient. The same thing applies to computer hardware.
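To make the efficiency point concrete, here is a minimal sketch with made-up numbers (both cards and all figures are hypothetical, purely to show that drawing more power does not imply delivering more frames):

```python
# Performance-per-watt with hypothetical numbers: the card that draws more
# power is not automatically the faster or the better-value one.
cards = {
    "Card A": {"avg_fps_4k": 42, "board_power_w": 275},  # hypothetical figures
    "Card B": {"avg_fps_4k": 45, "board_power_w": 250},  # hypothetical figures
}

for name, c in cards.items():
    fps_per_watt = c["avg_fps_4k"] / c["board_power_w"]
    print(f"{name}: {c['avg_fps_4k']} fps at {c['board_power_w']} W "
          f"-> {fps_per_watt:.3f} fps/W")
# Card B delivers more frames while drawing less power, i.e. it is more efficient.
```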


Ehhh... no? Not even in theory does more power consumption equal more performance.

More bhp doesn't equal a faster car either. There are a lot of other factors you have to take into consideration, such as efficiency.

An example of this would be lighting. An LED bulb can produce far more light per watt than an incandescent bulb because it is more efficient. The same thing applies to computer hardware.

 

My point being that AMD shouldn't be using so much more power for little to no gain.

MOBO- MSI z97 gaming 5  CPU- I5 4690k   GPU- G1 Gaming gtx970  RAM- 16gb Corsair Vegeance  PSU- Corsair RM650  COOLING- Corsair H110iGT 

STORAGE- AMD r7 120gb SSD / 500gb HDD  CASE- Phanteks Enthoo Luxe   Sound-  Logitech z506 / Corsair Void  Peripherals- G710, G502


Looks like 4K on a single card isn't gonna be a thing yet...

It probably will be next year with the 490X. AMD sees about a 50 percent increase in performance each generation, but we may not see the R9 400 series for another two years.
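As a rough sanity check on that estimate, here is a small sketch of how a per-generation uplift would compound (the 50 percent figure is the poster's own estimate, and the baseline frame rate is made up):

```python
# Compounding a per-generation performance uplift; illustrative numbers only.
baseline_fps_4k = 40     # hypothetical 4K average for a current flagship
uplift_per_gen = 0.50    # the ~50% per-generation estimate from the post above

for gen in (1, 2):
    projected = baseline_fps_4k * (1 + uplift_per_gen) ** gen
    print(f"After {gen} generation(s): ~{projected:.0f} fps at 4K")
# One such jump would take a 40 fps card to ~60 fps at 4K; two would reach
# ~90 fps, if the 50% estimate actually held.
```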

Hello This is my "signature". DO YOU LIKE BORIS????? http://strawpoll.me/4669614


It probably will be next year with the 490X. AMD sees about a 50 percent increase in performance each generation, but we may not see the R9 400 series for another two years.

 

Greenland will be released next year on a 14 nm FinFET node with HBM2. Not only will Pascal get a serious contender, Pascal might even be the weaker one.

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


Greenland will be released next year on a 14 nm FinFET node with HBM2. Not only will Pascal get a serious contender, Pascal might even be the weaker one.

We don't know that. We thought the same about the 300 series when the 200 series launched. We thought it would be 20 nm.

Hello This is my "signature". DO YOU LIKE BORIS????? http://strawpoll.me/4669614


Greenland will be released next year on a 14 nm FinFET node with HBM2. Not only will Pascal get a serious contender, Pascal might even be the weaker one.

 

Was it ever confirmed that it will be on 14 nm? Link please, I want to read.

Corsair 760T White | Asus X99 Deluxe | Intel i7-5930k @ 4.4ghz | Corsair H110 | G.Skill Ripjawz 2400mhz | Gigabyte GTX 970 Windforce G1 Gaming (1584mhz/8000mhz) | Corsair AX 760w | Samsung 850 pro | WD Black 1TB | IceModz Sleeved Cables | IceModz RGB LED pack

 

 


We don't know that. We thought the same about the 300 series when the 200 series launched. We thought it would be 20 nm.

 

Actually, we do. 14 nm FinFET is already up and running, and just as Zen will use that node, so will Greenland. 20 nm at TSMC was just too leaky and poorly performing, but FinFETs take care of that. Pascal will be either 16 nm FinFET+ (at TSMC) or 14 nm FinFET alongside Greenland at GlobalFoundries, though Nvidia has yet to land that contract.

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


Was it ever confirmed that it will be on 14 nm? Link please, I want to read.

 

http://www.fudzilla.com/news/graphics/37712-amd-greenland-gpu-comes-in-2016-gets-finfet

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


It probably will be next year with the 490X. AMD sees about a 50 percent increase in performance each generation, but we may not see the R9 400 series for another two years.

[attached image: benchmark comparison chart]

According to some benchmarks, the 300 series is only ~10% faster than the 200 series.

Happy to help with any tech problems. Windows 10 installing guide here.
----i5 4570s----gigabyte z97x-sli----8gb ddr3----gigabyte g1 gaming 960----HAF 912----120gb ssd----1tb hdd----500w evga----


Now if only AMD's low-midrange cards could follow suit and see similar improvements instead of being constant rebadges...

 

That would be just great.

CPU: Intel Core i5-4460 Motherboard: ASRock H97M Anniversary RAM: Kingston HyperX 1600MHz 8GB (2x4GB) GPU: ASUS GeForce GTX 750Ti
Case: Corsair Air 240 White Storage: Western Digital Caviar Black 500GB PSU: Corsair CX500 Keyboard: CM Storm Quickfire Rapid (Cherry MX Blue)
Mouse: SteelSeries Kinzu V2 Operating System: Windows 8.1N

