AMD Fury X Far Cry 4 game performance from AMD

ahhming

I'm not talking about PCIe bandwidth; that wasn't even part of my original point when comparing HBM to GDDR5, because PCIe bandwidth doesn't vary with what type of RAM the video card has

 

To come back to that statement: it's true that the PCIe bandwidth used by the graphics card doesn't depend on its VRAM directly, but rather on how much data the card can consume at a time. Since the R9 Fury series will have a fast GPU with much faster VRAM, it should be able to use more PCIe bandwidth.

So the faster HBM memory indirectly causes higher PCIe bandwidth usage (provided the GPU can handle the higher data volume).
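
Some rough peak numbers for scale (a back-of-the-envelope sketch; the card specs and the ~985 MB/s-per-lane PCIe 3.0 figure are the usual published values, not measurements):

```python
# Peak-bandwidth comparison from public spec numbers -- a sketch, not a measurement.

def bus_bandwidth_gbs(bus_width_bits, effective_rate_gbps):
    """Peak memory bandwidth in GB/s: bus width (bits) x data rate (Gbps) / 8."""
    return bus_width_bits * effective_rate_gbps / 8

hbm1_fury_x = bus_bandwidth_gbs(4096, 1.0)  # 4096-bit @ 500 MHz DDR -> 512 GB/s
gddr5_980ti = bus_bandwidth_gbs(384, 7.0)   # 384-bit @ 7 Gbps       -> 336 GB/s
pcie3_x16   = 16 * 0.985                    # ~15.76 GB/s per direction

print(f"Fury X HBM1  : {hbm1_fury_x:6.1f} GB/s")
print(f"980 Ti GDDR5 : {gddr5_980ti:6.1f} GB/s")
print(f"PCIe 3.0 x16 : {pcie3_x16:6.2f} GB/s")
```

Either memory type is an order of magnitude faster than the bus, which is why the VRAM type can only matter indirectly.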

My beast (PC):

  • ASUS R.O.G. Rampage IV Black Edition, X79, LGA2011, AC4
  • 2x ASUS Fury X
  • Intel Core i7 4930K BOX (LGA 2011, 3.40GHz) OC @4.4GHz
  • Corsair H100i, CPU Cooler (240mm)
  • Samsung SSD 830 Series (512GB)
  • Samsung SSD 850 Pro 1TB
  • Western Digital 512GB HDD
  • TridentX 4x 8GB 2400MHz
  • Corsair Obsidian 900D


As in you're forgetting the iGPU configs. Both consoles have different variations.

again, that's exactly my point

although the PS4 has more theoretical GPU compute power, the real-world performance gap between them (in games) is very small

To come back to that statement: it's true that the PCIe bandwidth used by the graphics card doesn't depend on its VRAM directly, but rather on how much data the card can consume at a time. Since the R9 Fury series will have a fast GPU with much faster VRAM, it should be able to use more PCIe bandwidth.

So the faster HBM memory indirectly causes higher PCIe bandwidth usage (provided the GPU can handle the higher data volume).

yes, but Fury's problem gets really deep when the VRAM fills up and the PCIe bus has to work overtime swapping assets between system RAM and VRAM - at that point, the HBM bandwidth becomes irrelevant

as we can see in the FireStrike UHD+ tests, once VRAM usage reaches certain points the Fury chokes and is passed by the 980 Ti, which is passed by the 390X, which is passed by the Titan X
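
to put numbers on that "working overtime" (peak rates assumed; sustained rates are lower, so reality is worse):

```python
# Cost of spilling assets out of VRAM: moving them across the bus vs.
# reading them locally from HBM. Rates are assumed peak figures.

PCIE3_X16_GBS = 15.75   # GB/s, one direction
HBM1_GBS      = 512.0   # GB/s, Fury X local bandwidth

spill_gb = 1.0          # hypothetical 1 GB that no longer fits in the 4 GB

pcie_ms = spill_gb / PCIE3_X16_GBS * 1000   # ~63 ms -- about 4 whole frames at 60 fps
hbm_ms  = spill_gb / HBM1_GBS * 1000        # ~2 ms, had it fit in VRAM

print(f"1 GB over PCIe 3.0 x16: {pcie_ms:5.1f} ms")
print(f"1 GB from HBM locally : {hbm_ms:5.1f} ms")
```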

that's the whole point I've been trying to make for God knows how many pages, but I lack a decent partner for the debate


yes, but Fury's problem gets really deep when the VRAM fills up and the PCIe bus has to work overtime swapping assets between system RAM and VRAM - at that point, the HBM bandwidth becomes irrelevant

Ah, I see. So your statement is that the Fury X might get bottlenecked by the speed of the system RAM (DDR3 or DDR4), because the system RAM can't "feed" the GPU with data fast enough?



Ah, I see. So your statement is that the Fury X might get bottlenecked by the speed of the system RAM (DDR3 or DDR4), because the system RAM can't "feed" the GPU with data fast enough?

exactly
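
(strictly speaking, peak numbers say the DIMMs themselves aren't the weak link - the PCIe hop in between saturates first; a rough sketch assuming DDR3-2400 and PCIe 3.0 x16:)

```python
# Peak-rate sanity check on the "system RAM can't feed the GPU" theory.
# Assumed parts: DDR3-2400 DIMMs, PCIe 3.0 x16 slot.

def ddr3_gbs(mt_per_s, channels):
    """Peak DDR3 bandwidth: transfers/s x 8 bytes per 64-bit channel."""
    return mt_per_s * 8 * channels / 1000   # MT/s -> GB/s

dual_ch  = ddr3_gbs(2400, 2)   # 38.4 GB/s
quad_ch  = ddr3_gbs(2400, 4)   # 76.8 GB/s (X79-style quad channel)
pcie_x16 = 15.75               # GB/s, one direction

print(f"DDR3-2400 dual channel : {dual_ch:5.1f} GB/s")
print(f"DDR3-2400 quad channel : {quad_ch:5.1f} GB/s")
print(f"PCIe 3.0 x16 link      : {pcie_x16:5.2f} GB/s  <- saturates first")
```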

 

and my point was also this: because of the 4GB limitation, AMD made a mistake using HBM1

they should've kept using GDDR5 in an 8GB+ configuration at a good price point; since they advertise the Fury for VR, that 4GB limit will start to be a real problem very soon

they should've waited for HBM2, or for a new way to implement HBM1 that didn't limit them to only 4GB

why I say that: it very much appears that Nvidia's Pascal with HBM2 is headed first to the HPC market, not the desktop graphics market; it may be very late 2016, or early 2017, before we catch a glimpse of Pascal for PCs


exactly

 

and my point was also this: because of the 4GB limitation, AMD made a mistake using HBM

they should've kept using GDDR5 in an 8GB+ configuration at a good price point; since they advertise the Fury for VR, that 4GB limit will start to be a real problem very soon

And I am sure you know more about electronic engineering than the GPU engineers at AMD.

 

So why aren't you working there exactly?


That's impressive as hell - I just hope they didn't show the ONE game where it performs better than the 980 Ti, and that it actually performs like this in most games.

Don't ask to ask, just ask... please 🤨

sudo chmod -R 000 /*


exactly

 

and my point was also this: because of the 4GB limitation, AMD made a mistake using HBM1

they should've kept using GDDR5 in an 8GB+ configuration at a good price point; since they advertise the Fury for VR, that 4GB limit will start to be a real problem very soon

they should've waited for HBM2, or for a new way to implement HBM1 that didn't limit them to only 4GB

why I say that: it very much appears that Nvidia's Pascal with HBM2 is headed first to the HPC market, not the desktop graphics market; it may be very late 2016, or early 2017, before we catch a glimpse of Pascal for PCs

 

Well, fact is it works. Did you see the benchmarks in the first post?

I agree, I was also hoping for a bit more VRAM, but it seems to work perfectly fine even at 4K. And as we already know, there's a high possibility that an 8GB version will be released later.

So I don't see why you'd deny that it seems to work well and actually outperforms a Titan X pretty easily.

Also, that transfer between system memory and GPU memory normally only happens when you load, for example, a map. When the game is loaded and you are actually playing, only a fraction of the maximum bandwidth between GPU and RAM is actually used.



@zMeul do you have any experience playing at 4K?

 

and my point was also this: because of the 4GB limitation, AMD made a mistake using HBM

they should've kept using GDDR5 in an 8GB+ configuration at a good price point; since they advertise the Fury for VR, that 4GB limit will start to be a real problem very soon

 

If you did, you'd see why this is wrong. Currently 4GB is more than enough, even at 3840x2160 (VR wouldn't be much more intensive, if at all). It would only become a problem in something like a 4K surround scenario with multiple cards, and the number of people who will actually go for that is so low it doesn't make a big difference. This is a card aimed at single-GPU 4K rigs, and it offers the potential for high-refresh-rate 4K with two of them in CrossFire (and VR needs high refresh rates as well). Thing is, this kind of performance is only possible BECAUSE of HBM. If going to 8GB meant losing 20 fps, it would be a terrible move for them, and the product would be all the worse for it.
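
A quick sanity check on what the resolution itself costs in VRAM (the target count below - double-buffered colour, depth, a 4-target G-buffer - is an assumed, illustrative frame setup, not any specific engine's):

```python
# What 3840x2160 render targets cost. The frame setup is an assumption
# for illustration; texture assets, not targets, dominate VRAM use.

W, H, BYTES_PER_PX = 3840, 2160, 4             # 32-bit colour

one_target_mb = W * H * BYTES_PER_PX / 2**20   # ~31.6 MB each
n_targets = 2 + 1 + 4                          # back buffers + depth + G-buffer

print(f"One 4K render target: {one_target_mb:5.1f} MB")
print(f"{n_targets} targets together  : {n_targets * one_target_mb:5.1f} MB of 4096 MB")
```

The render targets are a rounding error; it's the texture assets that decide whether 4GB holds up.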



Well, fact is it works. Did you see the benchmarks in the first post?

I agree, I was also hoping for a bit more VRAM, but it seems to work perfectly fine even at 4K. And as we already know, there's a high possibility that an 8GB version will be released later.

So I don't see why you'd deny that it seems to work well and actually outperforms a Titan X pretty easily.

Also, that transfer between system memory and GPU memory normally only happens when you load, for example, a map. When the game is loaded and you are actually playing, only a fraction of the maximum bandwidth between GPU and RAM is actually used.

the FireStrike Extreme test uses very little VRAM - this is why we should have real-world, not synthetic, benchmarks

Also, that transfer between system memory and GPU memory normally only happens when you load, for example, a map. When the game is loaded and you are actually playing, only a fraction of the maximum bandwidth between GPU and RAM is actually used.

games like TW3 use texture streaming and ultimately might not reach the VRAM limit

but there are games like CoD: AW that use texture pre-caching; they will fill the VRAM instantly, putting the PCIe bus under heavy load swapping assets when needed

the true test will come with games that truly have hi-res assets made for UHD+ resolutions
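
here's a minimal sketch of why the two approaches stress the bus so differently - a streamer works against a VRAM budget and evicts least-recently-used textures so PCIe traffic trickles, while a pre-cacher that overshoots the budget thrashes (illustrative Python, not any engine's actual code):

```python
# Toy texture streamer: LRU eviction against a fixed VRAM budget.
from collections import OrderedDict

class TextureStreamer:
    def __init__(self, budget_mb):
        self.budget = budget_mb
        self.resident = OrderedDict()   # texture id -> size (MB), LRU order
        self.used = 0.0
        self.pcie_uploads_mb = 0.0      # running total of bus traffic

    def touch(self, tex_id, size_mb):
        """Call when a texture is needed this frame."""
        if tex_id in self.resident:
            self.resident.move_to_end(tex_id)       # hit: no upload needed
            return
        while self.used + size_mb > self.budget:    # evict LRU until it fits
            _, evicted_mb = self.resident.popitem(last=False)
            self.used -= evicted_mb
        self.resident[tex_id] = size_mb             # miss: costs a PCIe transfer
        self.used += size_mb
        self.pcie_uploads_mb += size_mb

vram = TextureStreamer(budget_mb=4096)
vram.touch("rock_diffuse_4k", 85.3)     # hypothetical texture, ~85 MB
```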

...

two problems with that:

  1. very few games currently have true UHD+ textures
  2. what happens when VR goes crazy - Starbreeze Studios' new VR headset features a 5120x1440 resolution

AMD is marketing the Fury X as a VR solution


It's something! I would have expected more, since there were rumors saying they are already working on GPUs that run 8K. Now the reality is that Nvidia will come up with some card that runs 4K at 57fps for $200 more, and people will just start the same shitty price-per-performance war because Nvidia "buys every reviewer"...


One benchmark, done by AMD themselves. Let's all trust the results!

My thoughts exactly.


When they said the Fury X would be $650 and announced the dual-Fiji card... that was what I was hoping for. It should put the dual-Fiji around the price of a Titan X with A LOT more performance, so I will definitely consider buying the dual-Fiji. Also, with DX12 being able to stack the memory, the memory shortage should go away with CF cards, so that problem will be solved. Also, I personally never use AA at all - I don't need it and rarely notice the edges, even at 1080p (which I'm still using).

What I'm wondering though... is HBM actually AMD's? I mean, do they have a patent on it or whatever? They kind of made it sound like it was their tech. So is Nvidia going to have to pay them to be able to use it? Or is HBM2 not theirs? :|

I have no signature


The Nano is supposed to perform similarly to a 290X, while drawing roughly 175W of power.

Lisa Su claimed "significantly faster than the 290X". How true this is, we'll have to wait and see.

My Systems:

Main - Work + Gaming:


Woodland Raven: Ryzen 2700X // AMD Wraith RGB // Asus Prime X570-P // G.Skill 2x 8GB 3600MHz DDR4 // Radeon RX Vega 56 // Crucial P1 NVMe 1TB M.2 SSD // Deepcool DQ650-M // chassis build in progress // Windows 10 // Thrustmaster TMX + G27 pedals & shifter

F@H Rig:


FX-8350 // Deepcool Neptwin // MSI 970 Gaming // AData 2x 4GB 1600 DDR3 // 2x Gigabyte RX-570 4G's // Samsung 840 120GB SSD // Cooler Master V650 // Windows 10

 

HTPC:


SNES PC (HTPC): i3-4150 @3.5 // Gigabyte GA-H87N-Wifi // G.Skill 2x 4GB DDR3 1600 // Asus Dual GTX 1050Ti 4GB OC // AData SP600 128GB SSD // Pico 160XT PSU // Custom SNES Enclosure // 55" LG LED 1080p TV  // Logitech wireless touchpad-keyboard // Windows 10 // Build Log

Laptops:


MY DAILY: Lenovo ThinkPad T410 // 14" 1440x900 // i5-540M 2.5GHz Dual-Core HT // Intel HD iGPU + Quadro NVS 3100M 512MB dGPU // 2x4GB DDR3L 1066 // Mushkin Triactor 480GB SSD // Windows 10

 

WIFE'S: Dell Latitude E5450 // 14" 1366x768 // i5-5300U 2.3GHz Dual-Core HT // Intel HD5500 // 2x4GB RAM DDR3L 1600 // 500GB 7200 HDD // Linux Mint 19.3 Cinnamon

 

EXPERIMENTAL: Pinebook // 11.6" 1080p // Manjaro KDE (ARM)

NAS:


Home NAS: Pentium G4400 @3.3 // Gigabyte GA-Z170-HD3 // 2x 4GB DDR4 2400 // Intel HD Graphics // Kingston A400 120GB SSD // 3x Seagate Barracuda 2TB 7200 HDDs in RAID-Z // Cooler Master Silent Pro M 1000w PSU // Antec Performance Plus 1080AMG // FreeNAS OS

 



I doubt this is true, mostly because it's coming directly from AMD.

I'll believe it when I see benchmarks from a reputable source.

The same would go if it were from Nvidia.

I think PR math adds 20fps to your results ;)

We need independent testing, not fancy AMD marketing slides.

What is hype may never die - Cleganebowl 2016


the FireStrike Extreme test uses very little VRAM - this is why we should have real-world, not synthetic, benchmarks

 

games like TW3 use texture streaming and ultimately might not reach the VRAM limit

but there are games like CoD: AW that use texture pre-caching; they will fill the VRAM instantly, putting the PCIe bus under heavy load swapping assets when needed

the true test will come with games that truly have hi-res assets made for UHD+ resolutions

 

Well, the benchmark shown in the first post was done in-game in Far Cry 4; it's not a Firestrike score.

 

But I agree, before we draw any conclusions we should wait for unbiased reviews to come in and see how it performs in their benchmarks.

Since we don't know how they actually ran the tests, it's easy for AMD to tweak such benchmarks.



Well, the benchmark shown in the first post was done in-game in Far Cry 4; it's not a Firestrike score.

oh yeah .. my bad

I was locked on those other benchmarks


https://forum.beyond3d.com/threads/spinoff-4gb-framebuffer-is-it-enough-for-the-top-end.56964/page-3#post-1852564

 

Nice bit of info on texture streaming, compression etc.

People saying 4GB isn't enough, eat your heart out. It's a bit complicated though, so you most likely won't understand it if you are one of those people complaining.

The person at that link states that 4GB is enough if the game is properly programmed. What about the games that already exist?

As a programmer myself, I understand what he says, and I also understand why programmers don't optimise everything they could. Programming is an engineering discipline, which means that besides the technical aspect you also have to take the economics into account. In most cases, optimising a game is not really a good idea economically speaking: it may give you around a 10-20% performance boost but cost around 50-80% more (just an example; it varies from case to case).

What he is saying is basically "game publishers should optimise their code, and then 4GB should be enough". That is a "what if" statement, and in my opinion the matter is far too complex and varies too much from game to game to make such a statement with certainty.

However, I do agree that games these days like to "waste" memory (both RAM and VRAM). This might give the developers a kick to actually optimise their code.



look at the Titan X... it has 12GB of VRAM, and it plays games at around 40fps average at 4K... is that OK for a 12GB card? no...

AMD Rig - (Upgraded): FX 8320 @ 4.8 GHz, Corsair H100i GTX, ROG Crosshair V Formula, 16 GB 1866 MHz RAM, MSI R9 280X Gaming 3G @ 1150 MHz, Samsung 850 Evo 250 GB, Win 10 Home

(My first Intel + Nvidia experience - recently bought): MSI GT72S Dominator Pro G (i7 6820HK, 16 GB RAM, 980M SLI, G-Sync, 1080p, 2x 128 GB SSD + 1TB HDD)... FeelsGoodMan


look at the Titan X... it has 12GB of VRAM, and it plays games at around 40fps average at 4K... is that OK for a 12GB card? no...

 

VRAM has nothing to do with how fast a particular GPU is - not until it becomes a limitation.



...

the problem with PC games is that most of them are ports from consoles, consoles that use a unified memory architecture

when they port their console games to PC, they forget about that critical aspect, and we end up with ports that require far more hardware resources

if you take any game from XB1 or PS4 and try to run it on a PC with similar HW components, you'll end up with a barely playable game
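
back-of-the-envelope on what a naive port costs (the 3GB asset figure and the bus rate are made up for illustration):

```python
# On a unified-memory console the GPU reads assets in place; a naive PC
# port keeps a staging copy in system RAM plus a working copy in VRAM,
# and pays a PCIe copy to get it there. Figures are illustrative.

assets_gb = 3.0
pcie_gbs  = 15.75                      # PCIe 3.0 x16, one direction

console_gb = assets_gb                 # one shared pool, zero copies
pc_ram_gb  = assets_gb                 # staging copy in system RAM
pc_vram_gb = assets_gb                 # working copy in VRAM
upload_s   = assets_gb / pcie_gbs      # one-time cost at load

print(f"Console (unified) : {console_gb:.1f} GB, no upload")
print(f"PC port (split)   : {pc_ram_gb:.1f} GB RAM + {pc_vram_gb:.1f} GB VRAM, "
      f"~{upload_s:.2f} s upload over PCIe")
```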


If I remember correctly, they said during the presentation that there will be water- and air-cooled variants of both the Fury and the Fury X. Given the price difference between the Fury and the Fury X, it wouldn't make sense for both to be the exact same GPU (same performance). Either the Fury is a cut-down Fiji, or I misunderstood and the Fury is the air-cooled version while the Fury X is the water-cooled version.

 

Will have to wait for further clarification on that. 



the true test will come with games that truly have hi-res assets made for UHD+ resolutions

Like Shadow of Mordor?


Like Shadow of Mordor?

sorry, I was thinking of something else - sorry again

like CoD: AW and yes, Mordor

