
Fury X CrossFire or 980 Ti SLI

red773

980 Ti SLI has around 92% scaling. The 980 Ti outperforms the Fury X in just about all scenarios. It is a hot card, but it's hot all the way to the victory line.

In what game?

 

92% scaling in Minecraft?

 

Scaling is DRIVER BASED. If your drivers aren't good, you won't scale at all. SLI arguably has worse drivers than CrossFire; this has been proven time and time again. SLI has worse frame timing/pacing, which causes stuttering, and worse scaling in general: fewer new games get good SLI profiles, while CrossFire profiles usually arrive with the first game-ready drivers. (I've been using CF for several YEARS, so don't even start to insinuate I don't know how CF driver support is. I've relied on it for years.)


https://www.youtube.com/watch?v=pDDjdQr67z0

Now, I'll just leave this thread.

 

Until Nvidia fixes SLI, you are better off with a single 980 Ti, OR dual Fury X...

 

SLI IS INFERIOR, AND THERE IS PROOF.

 

A good GPU (the 980 Ti) does not make up for a flawed system.


4GB of GDDR5 is enough for 4K, for sure. But the memory bandwidth Nvidia pairs with it is TOO DAMN LOW...

 

Look at the R9 290X... an old card, yes?

Why did it catch up to the 980 (talking about the 980 at launch) once you increased the resolution?

Nvidia has lower driver overhead, we know that for sure.

The 980 is a MUCH stronger GPU, we know that for sure.

Both have 4GB of GDDR5 memory, and Nvidia's is even clocked higher...

 

SO WHY, WHY DO YOU THINK NVIDIA SUCKS AT HIGHER RESOLUTIONS?

Because they give you a much, much, much narrower bus WIDTH.

 

A REFERENCE R9 390X has MORE bandwidth than a 980 Ti or Titan X, and even the old 290X comes close. Now that is sort of fucked up.

 

The amount of VRAM you need can be reduced if you can shuffle data in and out fast enough. But if you cannot move it in and out of storage (VRAM) quickly because the bus is too narrow, then you need a bigger pool of VRAM to hold whatever is waiting to be moved.

 

End result is that a 980 Ti has around 336GB/s of peak transfer rate, an R9 290/290X has 320GB/s, a 390/390X has 384GB/s, and Fury (any Fury) has 512GB/s... the sheer VOLUME of data the Fury can move is roughly 50% higher than the 980 Ti's...

 

So how do you correct this on Nvidia? You overclock the memory speed (MHz). But the chips can only take so much before they start producing errors. And because the bus width is so much narrower to begin with, no matter how hard you push Nvidia's memory chips, AMD only needs a fraction of the increase to pull MILES away, as the sketch below shows.
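A minimal sketch of that point, assuming only that peak bandwidth scales linearly with bus width and per-pin speed: the same memory overclock is worth far more absolute bandwidth on a wide bus.

```python
# Why bus width matters for memory overclocking: every extra 1 Gbps
# of per-pin speed is multiplied by the bus width, so a wide bus
# gains far more absolute bandwidth per step of overclock.
for name, bus_bits in [("980 Ti (384-bit)", 384), ("290X/390 (512-bit)", 512)]:
    gain_gb_s = bus_bits * 1.0 / 8  # GB/s gained per +1 Gbps of pin speed
    print(f"{name}: +{gain_gb_s:.0f} GB/s per +1 Gbps of memory OC")
```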

 

What AMD needs is stronger, more efficient GPUs; their memory design (bus width over raw speed) is solid and cannot be argued with.

 

Oh, just FYI... 970 SLI? Forget it... 256-bit bus × 7010 MT/s effective / 8 = 224 GB/s max transfer rate... the 390 is 512 × 6000 / 8 = 384 GB/s max transfer rate...
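Here is that arithmetic applied to the cards in this thread (a rough sketch using published reference specs; real-world throughput is lower):

```python
# Peak memory bandwidth = bus width (bits) x effective memory speed
# (MT/s) / 8 bits per byte. These are published reference specs;
# real-world throughput is lower, and the 970's pool is further
# split into 3.5GB fast + 0.5GB slow segments.
CARDS = {
    "GTX 970":      (256, 7010),
    "GTX 980 Ti":   (384, 7010),
    "R9 290X":      (512, 5000),
    "R9 390/390X":  (512, 6000),
    "Fury X (HBM)": (4096, 1000),
}

for name, (bus_bits, speed_mts) in CARDS.items():
    gb_per_s = bus_bits * speed_mts / 8 / 1000  # MB/s -> GB/s
    print(f"{name:13s} {gb_per_s:6.1f} GB/s")
```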

 

SLI is worse than CF in the first place, and god knows how that SLI bridge affects things (after AMD moved CrossFire over the PCIe bus, their scaling and quality went up, a lot)... My guess is that the SLI/CF bridges are "too narrow" and choke modern GPUs. But hey, no way to know before Nvidia gets rid of it, eh?

 

People have been complaining about Nvidia's buses for so long, but the numbers quoted have never had any correlation with actual performance degradation. Even the 970, with its infamous 0.5GB of slow VRAM, has zero impact on high-resolution gaming.

 

Frankly, from what I've seen so far, while HBM is nice from a "technology is progressing" point of view, it should in no way have any bearing on whether or not people buy the card. The GPU itself is the most important factor.

 

It's really sad to see a company as strapped for R&D cash as AMD pissing away so much money on something that's mostly irrelevant.



If they didn't piss away money on HBM, there wouldn't be any HBM. Nvidia is totally fine with GDDR5 and has stated on several occasions that they think they can keep pushing its limits for a few more generations...

 

Now, you may not see any direct correlation, but at higher resolutions you must shuffle data faster. That is a fact.
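A back-of-envelope illustration of that (the 60 fps target and the single 32-bit colour buffer are simplifying assumptions; real frames touch each pixel many times over):

```python
# Bytes written per second just to output one 32-bit colour buffer
# per frame at 60 fps. Real rendering reads and writes each pixel
# many times (overdraw, post-processing), so true traffic is far
# higher; this only shows how the floor scales with resolution.
BYTES_PER_PIXEL = 4  # RGBA8
TARGET_FPS = 60      # illustrative target

for name, (w, h) in {"1080p": (1920, 1080),
                     "1440p": (2560, 1440),
                     "4K":    (3840, 2160)}.items():
    gb_per_s = w * h * BYTES_PER_PIXEL * TARGET_FPS / 1e9
    print(f"{name}: {gb_per_s:.2f} GB/s of framebuffer writes alone")
```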

And why else, WHY ELSE, would a 290X, totally inferior to the 980, EVER start catching up at 1440p and 4K if it weren't for bandwidth? Nvidia has lower driver overhead, so they perform better in general. Their Maxwell architecture is newer and more efficient. Their ROP count, shader count, everything says "WE SHOULD WIN EASILY". Yet a soon-to-be two-year-old card just gets closer and closer the more load you put on it. Just like AMD said early on: you really need to go to 4K before the 290X can stretch its legs.

 

Explain it. Explain why every benchmark shows the exact same thing: at higher resolutions, AMD catches up, despite all the evidence EXCEPT memory bandwidth being stacked against them.

 

There is so much circumstantial evidence pointing to memory bandwidth and/or possible data compression/decompression issues with Nvidia that, really, this is a no-brainer.



 

So you think Nvidia's commitment to HBM2 dates only from the Fury X? Given that HBM has been actively researched for the last 7 years, you think Nvidia up and went "yep, HBM2 in 16 months. LET'S DO IT"? Unlikely.

 

Every benchmark I've seen, including my own experience, has put the 970 and the 290X level at 4K. Explain that, given that the 970's memory bandwidth is even more limited than the 980's. The 980 beats the 290X quite comfortably at stock speeds.

 

I'm happy that progress is being made, but let's not forget that the real cost of the R&D into HBM was a 300 series made up of two-to-four-year-old relics.



Considering SK Hynix and AMD have been the driving forces behind the technology for many, many years... I think Nvidia would have milked the GDDR5 cow for as long as they possibly could if AMD hadn't released an HBM card. They would have sat there, content with their current stuff, because they could just bin the memory chips better and save money in production.

 

Look at the facts... AMD was first to GDDR5 by almost a generation, and they are also the first to "leave" it. AMD is first with HBM, and when the time comes for new technology, I am sure AMD, if they still exist, will be the first to adopt it. Not because they are using it as a lifeline, but because they apparently see a need for the industry to move forward AHEAD of the incoming bottleneck.

 

And let's not forget that those TWO-to-FOUR-year-old relics are, at this point in time, competing with and BEATING Nvidia's less-than-ONE-YEAR-old products... What does that tell you about Nvidia's designs? AMD even managed to make the refreshed product line consume marginally less power. That alone is a feat worth applauding, as any step AMD takes toward efficiency is really welcome, the way I see it.


So you guys think Fury X CrossFire is better because CrossFire is a superior technology that scales better and has better frame timing?


I am torn on whether to get Fury X CrossFire or 980 Ti SLI. I was originally going 980 Ti, but I am leaning towards the Fury X. From what I have heard, CrossFire scaling averages 85% and SLI averages 79%. I will install waterblocks on either setup, so I don't care about the rads on the Fury. I mostly play BF4, and I have a 4K monitor that my 970 won't do much with. (No fanboys, please.) Which setup seems better, considering DX12 is launching soon?
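For what it's worth, here is roughly what those quoted scaling averages mean in frames per second (a minimal sketch; the 40 fps single-card baseline is purely an illustrative assumption):

```python
# What the quoted scaling averages mean for frame rate: the second
# card adds that fraction of one card's performance. The 40 fps
# single-card baseline is purely illustrative.
def dual_gpu_fps(single_fps: float, scaling: float) -> float:
    """Expected two-card frame rate at a given scaling factor."""
    return single_fps * (1 + scaling)

single = 40.0  # hypothetical single-card fps at 4K
print(f"CrossFire at 85%: {dual_gpu_fps(single, 0.85):.1f} fps")  # 74.0
print(f"SLI at 79%:       {dual_gpu_fps(single, 0.79):.1f} fps")  # 71.6
```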

 

You might find this helpful. They test 2- and 3-way SLI vs CF with the Fury X and 980 Ti. ;)

 

My Systems:

Main - Work + Gaming:


Woodland Raven: Ryzen 2700X // AMD Wraith RGB // Asus Prime X570-P // G.Skill 2x 8GB 3600MHz DDR4 // Radeon RX Vega 56 // Crucial P1 NVMe 1TB M.2 SSD // Deepcool DQ650-M // chassis build in progress // Windows 10 // Thrustmaster TMX + G27 pedals & shifter

F@H Rig:


FX-8350 // Deepcool Neptwin // MSI 970 Gaming // AData 2x 4GB 1600 DDR3 // 2x Gigabyte RX-570 4G's // Samsung 840 120GB SSD // Cooler Master V650 // Windows 10

 

HTPC:


SNES PC (HTPC): i3-4150 @3.5 // Gigabyte GA-H87N-Wifi // G.Skill 2x 4GB DDR3 1600 // Asus Dual GTX 1050Ti 4GB OC // AData SP600 128GB SSD // Pico 160XT PSU // Custom SNES Enclosure // 55" LG LED 1080p TV  // Logitech wireless touchpad-keyboard // Windows 10 // Build Log

Laptops:


MY DAILY: Lenovo ThinkPad T410 // 14" 1440x900 // i5-540M 2.5GHz Dual-Core HT // Intel HD iGPU + Quadro NVS 3100M 512MB dGPU // 2x4GB DDR3L 1066 // Mushkin Triactor 480GB SSD // Windows 10

 

WIFE'S: Dell Latitude E5450 // 14" 1366x768 // i5-5300U 2.3GHz Dual-Core HT // Intel HD5500 // 2x4GB RAM DDR3L 1600 // 500GB 7200 HDD // Linux Mint 19.3 Cinnamon

 

EXPERIMENTAL: Pinebook // 11.6" 1080p // Manjaro KDE (ARM)

NAS:


Home NAS: Pentium G4400 @3.3 // Gigabyte GA-Z170-HD3 // 2x 4GB DDR4 2400 // Intel HD Graphics // Kingston A400 120GB SSD // 3x Seagate Barracuda 2TB 7200 HDDs in RAID-Z // Cooler Master Silent Pro M 1000w PSU // Antec Performance Plus 1080AMG // FreeNAS OS

 


But AMD made such a huge deal about the memory bandwidth on the cards being bonkers, so 4GB was "enough"

Which, for 4K, it IS.

 

5K is WAY too expensive atm, and we haven't even gotten to the point where ultrawide 4K is even remotely affordable.



Eh, I am one of those people who think 4GB is not enough even for 1440p. I cap 3GB at 1080p now; god forbid I go to 1440p or 4K.

|King Of The Lost|
Project Dark: i7 7820x 5.1GHz | X299 Dark | Trident Z 32GB 3200MHz | GTX 1080Ti Hybrid | Corsair 760t | 1TB Samsung 860 Pro | EVGA Supernova G2 850w | H110i GTX
Lava: i9 12900k 5.1GHz (undervolted to 1.26v) | MSI Z690 Pro DDR4 | Dominator Platinum 32GB 3800MHz | PowerColor Red Devil RX 6950 XT | Seasonic Focus Platinum 850w | NZXT Kraken Z53
Unholy Rampage: i7 5930k 4.7GHz 4.4 Ring | X99 Rampage | Ripjaws IV 16GB 2800 CL13 | GTX 1080 Strix (Custom XOC Signed BIOS) | Seasonic Focus Platinum 850w | H100i v2
Revenge of 775: Pentium 641 | Biostar TPower i45 | Crucial Tracer 1066 DDR2 | GTX 580 Classified Ultra | EVGA 650 BQ | Noctua NH D14


It's actually better. I'm not exactly sure on the numbers, but HBM effectively gets you more out of the VRAM than plain GDDR5 does.

Yep, kind of like some CPUs not many people remember any more: the Pentium II and the Celeron.

PII 450MHz with 512KB of slow off-die L2 cache = Celeron 300A @ 450MHz with 128KB of fast on-die L2 cache.
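To put rough numbers on the wide-versus-fast trade-off with HBM (a sketch using the Fury X's published first-generation HBM layout):

```python
# First-generation HBM on the Fury X: four stacks, each with a
# 1024-bit interface at 500 MHz DDR (1 Gbps per pin) -- trading
# clock speed for sheer interface width.
stacks = 4
bits_per_stack = 1024
gbps_per_pin = 1.0  # 500 MHz, double data rate

total_bus_bits = stacks * bits_per_stack            # 4096 bits
bandwidth_gb_s = total_bus_bits * gbps_per_pin / 8  # bits -> bytes
print(f"{total_bus_bits}-bit bus -> {bandwidth_gb_s:.0f} GB/s")  # 512 GB/s
```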

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


DUDE! Get another 970. You don't need to shell out the cash for two new cards. I myself have two 970s (overclocked, tho) and play almost every game maxed out with AA on. The only game in my library that is not maxed out is pCARS, and that is one hell of a demanding game. It still plays on high, tho, and looks STUNNING.

That game isn't demanding; it's poorly optimized and coded. The devs are lazy as fuck.
