AMD Fury X Far Cry 4 game performance from AMD

ahhming

I say things as they are. When someone is being an idiot I tell them so, and provide proof to go with it. Also, read my above post and the very same style of comment from @zMeul that you are accusing me of making.

I can't seem to find anywhere in those quotes that he called you out or personally insulted you. Also, the one time he was insulting you (in my opinion), he apologised for it.

 

Furthermore, you seem to not understand what @zMeul's statement is. To quote my conversation with him:

 

 

Ah, I see, so your statement is that the Fury X might get bottlenecked by the speed of the system RAM (DDR3 or DDR4), because the system RAM cannot "feed" the GPU with data fast enough?

 

 

exactly

 

and my point was also this: because of the 4GB limitation, AMD made a mistake using HBM1

they should've kept using GDDR5 in an 8+GB configuration with a good price point, since they advertise the Fury for VR - that 4GB limit will start to be a real problem very soon

they should've waited for HBM2 or a new way to implement HBM1 that didn't limit them to only 4GB

why I say that: it very much appears that nVidia's Pascal with HBM2 is first headed to the HPC market, not the desktop graphics market; it may be very late 2016, or early 2017, when we'll catch a glimpse of Pascal for PCs

 

This is while you assume he is talking about PCIe bandwidth bottlenecking the graphics card, bashing him for a non-existent statement.

 

EDIT:

This is getting fairly off-topic here...

My beast (PC):

  • ASUS R.O.G. Rampage IV Black Edition, X79, LGA2011, AC4
  • 2x ASUS Fury X
  • Intel Core i7 4930K BOX (LGA 2011, 3.40GHz) OC @4.4GHz
  • Corsair H100i, CPU Cooler (240mm)
  • Samsung SSD 830 Series (512GB)
  • Samsung SSD 850 Pro 1TB
  • Western Digital 512GB HDD
  • TridentX 4x 8GB 2400MHz
  • Corsair Obsidian 900D


I can't seem to find anywhere in those quotes that he called you out or personally insulted you. Also, the one time he was insulting you (in my opinion), he apologised for it.

 

Furthermore, you seem to not understand what @zMeul's statement is. To quote my conversation with him:

 

 

This is while you assume he is talking about PCIe bandwidth bottlenecking the graphics card, bashing him for a non-existent statement.

Further back. He was talking about PCIe Gen 2.0 causing pop-in with TW3.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


Technically it is; the consumer format is less, at 3840 x 2160. Guess they wanted to bank on that 4096 number.

 

jesus fucking christ... how long are we going to talk about this?

 

 

people who care will do a wiki search and see that there are about twenty "4k" resolutions, and others will keep obsessing over a single number like it means shit


jesus fucking christ... how long are we going to talk about this?

 

 

people who care will do a wiki search and see that there are about twenty "4k" resolutions, and others will keep obsessing over a single number like it means shit

It's probably like the third time I've mentioned display resolutions since I joined the forum, so I'm not sure if you're implying that I'm wrong, or just sick of seeing people in general fussing over the statistics. It's pretty easy to understand: 4096 x 2160 is actual 4K and 3840 x 2160 is UHD, so I don't see why people would fuss over it.
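For reference, here's the raw arithmetic behind the two labels (a quick Python sketch; the numbers are just the pixel counts of the two standards):

# Pixel counts for the two resolutions people call "4K"
dci_4k = 4096 * 2160   # DCI / cinema 4K
uhd    = 3840 * 2160   # consumer UHD ("4K UHD")

print(f"DCI 4K : {dci_4k:,} pixels")
print(f"UHD    : {uhd:,} pixels")
print(f"DCI 4K has {(dci_4k - uhd) / uhd:.1%} more pixels than UHD")   # about 6.7%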


Although I'd like to think that Fiji is just going to pulverize GM200, we'll find out soon enough.

 

I genuinely hope it's truly as good as they are touting. Competition is healthy and promotes evolution of products. That's needless to say, though, as it's pretty much common knowledge here.

Case: Corsair 4000D Airflow; Motherboard: MSI ZZ490 Gaming Edge; CPU: i7 10700K @ 5.1GHz; Cooler: Noctua NHD15S Chromax; RAM: Corsair LPX DDR4 32GB 3200MHz; Graphics Card: Asus RTX 3080 TUF; Power: EVGA SuperNova 750G2; Storage: 2 x Seagate Barracuda 1TB; Crucial M500 240GB & MX100 512GB; Keyboard: Logitech G710+; Mouse: Logitech G502; Headphones / Amp: HiFiMan Sundara Mayflower Objective 2; Monitor: Asus VG27AQ


Also, the one time he was insulting you (in my opinion), he apologised for it.

that time it wasn't him I was replying to

I was replying to a member of a private forum in our country


Further back. He was talking about PCIe Gen 2.0 causing pop-in with TW3.

that point was brought into discussion by someone else, not by me

this is my original point in this whole debate: http://linustechtips.com/main/topic/388216-amd-fury-x-far-cry-4-game-performance-from-amd/?view=findpost&p=5240853

there is but one tiny issue with that theory

HBM only improves bandwidth between the GPU and VRAM, not between the system and the video card

did the PCIe change? no! did system RAM change? no! so, in the grand scheme of the system as a whole, HBM is rather irrelevant

a total game changer would be when we have HBM for system RAM and a CPU to support it
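To put rough numbers on that point, here is a back-of-envelope sketch (Python) using published peak figures; the PCIe values are theoretical link maximums, not measured transfer rates:

# Back-of-envelope: on-card HBM bandwidth vs the PCIe link that still
# connects the GPU to system RAM (peak theoretical figures only).

hbm1_per_stack_gbs = 128                 # HBM1: ~128 GB/s per stack
fiji_stacks = 4                          # Fiji uses 4 stacks
hbm_total_gbs = hbm1_per_stack_gbs * fiji_stacks   # 512 GB/s

pcie3_x16_gbs = 15.75                    # PCIe 3.0 x16, after 128b/130b encoding
pcie2_x16_gbs = 8.0                      # PCIe 2.0 x16, after 8b/10b encoding

print(f"HBM on-card bandwidth : {hbm_total_gbs} GB/s")
print(f"PCIe 3.0 x16 to system: {pcie3_x16_gbs} GB/s (~{hbm_total_gbs / pcie3_x16_gbs:.0f}x less)")
print(f"PCIe 2.0 x16 to system: {pcie2_x16_gbs} GB/s (~{hbm_total_gbs / pcie2_x16_gbs:.0f}x less)")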

---

are those benchmarks true? probably they are

but there should be benchmarks with games that truly have hi-res assets for UHD resolutions, and up

 


that point was brought into discussion by someone else, not by me

this is my original point in this whole debate: http://linustechtips.com/main/topic/388216-amd-fury-x-far-cry-4-game-performance-from-amd/?view=findpost&p=5240853

And you're not getting the fact that system memory (i.e. system RAM) only matters if you run out of VRAM, which isn't a problem when you look at SLI GTX 980s with 4GB of GDDR5 vs the Titan X with 12GB (repeating myself here). HBM is relevant because it is significantly faster than GDDR5 while taking up a lot less space and using less power, and its high bandwidth is needed to drive 4K effectively. CPU and RAM have nothing to do with how much of a game changer HBM is.
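For context, a minimal sketch of where the GDDR5 vs HBM bandwidth gap comes from, using the published bus widths and effective data rates (bandwidth = bus width × data rate):

# Peak memory bandwidth = bus width (bits) / 8 * effective data rate (GT/s)
def bandwidth_gbs(bus_width_bits, data_rate_gtps):
    return bus_width_bits / 8 * data_rate_gtps

print(f"GTX 980    (256-bit GDDR5 @ 7 GT/s) : {bandwidth_gbs(256, 7):.0f} GB/s")   # ~224
print(f"GTX 980 Ti (384-bit GDDR5 @ 7 GT/s) : {bandwidth_gbs(384, 7):.0f} GB/s")   # ~336
print(f"Fury X     (4096-bit HBM1 @ 1 GT/s) : {bandwidth_gbs(4096, 1):.0f} GB/s")  # ~512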

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


And you're not getting the fact that system memory (i.e. system RAM) only matters if you run out of VRAM, which isn't a problem when you look at SLI GTX 980s with 4GB of GDDR5 vs the Titan X with 12GB (repeating myself here). HBM is relevant because it is significantly faster than GDDR5 while taking up a lot less space and using less power, and its high bandwidth is needed to drive 4K effectively. CPU and RAM have nothing to do with how much of a game changer HBM is.

and we should assume that it doesn't!?

it was just shown by the FireStrike tests, posted somewhere around here, that when VRAM usage increases, Fury X gets beaten by 980Ti <- 390X <- Titan X

this is the point in the discussion that you are ignoring

 

---

 

quoting myself, again ....

 

 

[image: HardOCP VRAM utilization table, 1440p vs 4K]

 

 

Take a moment to look between the 1440p memory utilization and compare it with the utilization at 4K. You will find some very large increases in VRAM usage, depending on the game, which very much makes sense.

 

Let's start with the GeForce GTX 980 on the very right side. The only game that seems to hit right at the VRAM wall is Dying Light at 1440p and 4K. FC4 comes close, and so does GTA V. Now look over to the GeForce GTX 980 Ti.

 

On the GeForce GTX 980 Ti we clearly exceed 4GB of VRAM in Dying Light at 1440p and 4K. We also exceed 4GB of VRAM at 4K in GTA V and Far Cry 4. What this shows is that in these games the GTX 980 cards "want to," and can, exceed the 4GB framebuffer if more VRAM is exposed. This means the 4GB of VRAM on the GTX 980 is limiting these cards, but it is not on the 6GB GeForce GTX 980 Ti.

 

...

 

VRAM

 

As you can see from our gaming lineup (which is using many new games released this year), we aren't seeing much demand over 4GB just yet. There are some hints that some games might need more; Dying Light for example, and possibly Far Cry 4 and GTA V. We can at least say this: 4GB of VRAM should be the MINIMUM for running games at 1440p today. If you are able to have 6GB of VRAM or more, you can be assured that games coming this year and next should run fine, as far as VRAM goes, at 1440p.

 

At 4K though, 4GB of VRAM is clearly not enough. At 4K you want a MINIMUM of 6GB. It is possible that even more may actually help as you start increasing the number of video cards in SLI. 6GB might actually not be enough for some games at 4K when SLI is involved; we will see.

 

The GeForce GTX 980 Ti fits in extremely well at 1440p and should have fairly good longevity as a contender in the 1440p space. However, it may not be the "perfect" 4K card when SLI comes into play. The TITAN X, with 12GB of VRAM, could potentially show its benefits there, as could any card with 8GB of VRAM coming down the road.

 

 

http://www.hardocp.com/article/2015/06/15/nvidia_geforce_gtx_980_ti_video_card_gpu_review/10#.VYIN1DCqpQ4 - Monday , June 15, 2015

look at Dying Light:

[images: Dying Light performance charts from the HardOCP review]

see how much the 980 is hit when the VRAM limit is reached?! and that's just 1440p !!
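As a rough illustration of why VRAM usage climbs between 1440p and 4K, here's a toy estimate of render-target memory alone; the target count and pixel format are assumptions, and real usage is dominated by textures, so treat it as a floor:

# Toy estimate: render-target memory at 1440p vs 4K.
# Assumes a handful of 4-byte-per-pixel targets (G-buffer, depth, post FX);
# texture memory, which usually dominates, is ignored.

def render_target_mb(width, height, targets=6, bytes_per_pixel=4):
    return width * height * targets * bytes_per_pixel / 1024**2

print(f"1440p: {render_target_mb(2560, 1440):.0f} MB")
print(f"4K   : {render_target_mb(3840, 2160):.0f} MB "
      f"({3840*2160 / (2560*1440):.2f}x the pixels)")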


and we should assume that it doesn't!?

it was just shown by the FireStrike tests, posted somewhere around here, that when VRAM usage increases, Fury X gets beaten by 980Ti <- 390X <- Titan X

this is the point in the discussion that you are ignoring

PMSL, the card wasn't even released then, and that was only a rumoured score, as it's easy to fake a Fire Strike score. You cannot deny the results shown by the official benchmarks actually done inside a real game.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


PMSL, the card wasn't even released then, and that was only a rumoured score, as it's easy to fake a Fire Strike score. You cannot deny the results shown by the official benchmarks actually done inside a real game.

yes, we need real-world tests

both @RononDex and I agreed on this from the start


If this is true it is a spectacular own goal....

 

Nearly all my GPUs have been AMD, but this is a deal breaker for me...

 

I need HDMI 2.0 for my 4K TV. 

 

 

For those who say it doesn't matter - HDMI is the standard for 4K and 1080p. 99% of devices use HDMI, from Pis to projectors. 99% of 4K TVs support HDMI 1.4 or 2.0; very few have DisplayPort.

Some people don't care about 8ms latency. I like having a 50 inch 4K TV as my monitor, and that's that.

 

I may have to buy my first Nvidia card since the GeForce 2, which would suck... because they haven't been very consumer-friendly for a while.


If this is true it is a spectacular own goal....

 

Nearly all my GPUs have been AMD, but this is a deal breaker for me...

 

I need HDMI 2.0 for my 4K TV. 

 

 

For those who say it doesn't matter - HDMI is the standard for 4K and 1080p. 99% of devices use HDMI, from Pis to projectors. 99% of 4K TVs support HDMI 1.4 or 2.0; very few have DisplayPort.

Some people don't care about 8ms latency. I like having a 50 inch 4K TV as my monitor, and that's that.

 

You might wanna take your comments to the other thread. 

i5 2400 | ASUS RTX 4090 TUF OC | Seasonic 1200W Prime Gold | WD Green 120gb | WD Blue 1tb | some ram | a random case

 


quoting myself, again ....  

look at Dying Light:

see how much the 980 is hit when the VRAM limit is reached?! and that's just 1440p !!

It's not hit by anything other than your green envy and the fact that both the 980 Ti and the Titan X have more compute units; it performs exactly as it should.

Would you say that 16GB of system RAM is not enough because GTA V caches too much? I have 12GB of RAM utilised by GTA V alone.

The same goes for your example: the game engine just caches more in the available VRAM. The graph shows no extraordinary frame rate drops; they are perfectly in line with the ~20% more compute units on the Titan X/980 Ti.

CPU: Intel i7 5820K @ 4.20 GHz | MotherboardMSI X99S SLI PLUS | RAM: Corsair LPX 16GB DDR4 @ 2666MHz | GPU: Sapphire R9 Fury (x2 CrossFire)
Storage: Samsung 950Pro 512GB // OCZ Vector150 240GB // Seagate 1TB | PSU: Seasonic 1050 Snow Silent | Case: NZXT H440 | Cooling: Nepton 240M
FireStrike // Extreme // Ultra // 8K // 16K

 


PMSL, the card wasn't even released then, and that was only a rumoured score, as it's easy to fake a Fire Strike score. You cannot deny the results shown by the official benchmarks actually done inside a real game.

Well, those benchmarks should actually be ignored because they are from AMD. You should never take anything the company trying to sell you the product says as fact, because they will either cherry-pick or twist the truth in order to deceive you. The same goes for whenever Nvidia publishes benchmarks showing how great their cards are. Just ignore them and wait for other reviews to come out.

 

Of course we shouldn't trust some rumored score either, but that does not seem to be his point. I haven't read all the posts in this thread, but it seems like you're flat-out denying that 4GB could become a bottleneck, while zMeul says we can't know for sure.

I really don't think the "well, the RAM is faster so we need less of it" argument makes any sense though. Hell, a few pages back someone said 4GB of HBM will be as good as 12GB of GDDR5. Let's just apply the same logic to system memory and you will hear how silly it sounds...

 

"Hey I am running out of RAM and it causes my system to use the page file a lot. What should I do? I got 12GB of 1333MHz DDR3 RAM"

"Hmm so you're running out of RAM with 12GB? Well I recommend throwing out those 12GB of slow RAM and get 4GB of 2666MHz DDR4 RAM!"

 

Faster RAM will not help if the issue is that you are running out of RAM. I won't argue about how much VRAM we need because I really don't know. 4GB might be enough, or it might not. What I will argue against though is this bullshit that a small amount of fast VRAM is somehow comparable to a large amount of slow VRAM, because they are not. Both have strengths and drawbacks which will be more or less apparent in certain situations.
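A toy model of that argument, with purely illustrative numbers: once the working set no longer fits in VRAM, the overflow has to cross PCIe every frame, and extra VRAM bandwidth can't compensate for that:

# Toy model: time per frame to touch the working set, with any overflow
# beyond VRAM capacity streamed over PCIe. Illustrative numbers only.

def frame_ms(working_set_gb, vram_gb, vram_gbs, pcie_gbs=15.75):
    in_vram  = min(working_set_gb, vram_gb)
    overflow = max(working_set_gb - vram_gb, 0.0)
    return (in_vram / vram_gbs + overflow / pcie_gbs) * 1000

# 5 GB working set: 4 GB of fast HBM vs 12 GB of slower GDDR5
print(f"4 GB @ 512 GB/s : {frame_ms(5, 4, 512):.1f} ms")    # overflow over PCIe hurts
print(f"12 GB @ 336 GB/s: {frame_ms(5, 12, 336):.1f} ms")   # everything stays local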

 

 

 

I'd like to hear @patrickjp93's comments on the PCIe/RAM/HBM debacle and what sort of limitations there would be.

I've talked to him about PCIe latency and the impact of that on GPUs before (the discussion goes on for quite a few pages). He genuinely believes that PCIe is such a big issue that Intel's integrated GPUs will completely obliterate even the highest-end dedicated GPU setup in the near future, causing Nvidia to go bankrupt and AMD to go full CPU/APU.



I've talked to him about PCIe latency and the impact of that on GPUs before (the discussion goes on for quite a few pages). He genuinely believes that PCIe is such a big issue that Intel's integrated GPUs will completely obliterate even the highest-end dedicated GPU setup in the near future, causing Nvidia to go bankrupt and AMD to go full CPU/APU.

 

Thanks for that reference, it was a fun read.


I'd like to hear @patrickjp93's comments on the PCIe/RAM/HBM debacle and what sort of limitations there would be.

Sigh... this is one gnarled mess... Well, to start, the XBOne has eDRAM much like Intel's Crystalwell, which is why its bandwidth manages to stay so close to the PS4's. Neither machine has the GPU or CPU horsepower to do 1080p 60fps on an AAA title anyway, so it's pointless to fight over GDDR5 vs. DDR3 here. The XBOne makes up for it, and neither really needs GDDR5.

 

Moving onward and upward, PCIe slots matter 1-3 fps at most for air-cooled cards, and it comes down to how the air currents in your case work. For liquid-cooled cards there is no difference, or it's within the margin of error for testing.

 

As CPUs move onward to being more powerful in compute, DDR4 will not be able to keep up, which is one reason why the Skylake E5/E7 Xeons are moving to 6-channel memory configurations and why Intel, Micron, and others are pushing Hybrid Memory Cube technology. Not only is it a far higher-bandwidth memory (very competitive with HBM in this regard, though expense is certainly a concern for consumer products); it's also lower in latency, higher in density, lower in power consumption, and fully transactional on its own, without any need of highly complex programming to make it work. This is a massive breakthrough where database hosting is concerned, which is one reason Oracle jumped on the first iteration (only 30GB/s bandwidth per stack) back in 2012. Compare that to DDR4 per DIMM and I'm sure you'll see the point. With the programming streamlined, burden is removed from the CPUs, saving power and money as well.
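A quick sketch of the DDR4-per-DIMM comparison being made there (theoretical peaks; the HMC figure is the 30GB/s quoted in the post):

# Theoretical peak bandwidth of one DDR4 DIMM: 64-bit (8-byte) bus * transfer rate
def ddr4_dimm_gbs(mt_per_s):
    return 8 * mt_per_s / 1000   # bytes per transfer * MT/s -> GB/s

print(f"DDR4-2133 DIMM : {ddr4_dimm_gbs(2133):.1f} GB/s")   # ~17.1
print(f"DDR4-2666 DIMM : {ddr4_dimm_gbs(2666):.1f} GB/s")   # ~21.3
print("HMC gen1 stack : 30.0 GB/s (figure quoted in the post above)")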

 

HBM is great for dumb memory that only has to be high-bandwidth, low-power, compact, and (eventually) cheap. That's why it's preferable over HMC at this point in time for GPU memory, though the reality is Micron's most recent spin (the MCDRAM going onboard Intel's Knights Landing) is a titanic 240GB/s of bandwidth per stack compared to the 128GB/s per stack of HBM 1.0. When the 2.0 version comes along with 256GB/s of bandwidth, HBM will take back the bandwidth crown.

 

For gaming, PCIe 2.0 x16 still isn't saturated, though there are some frame rate differences due to the 8b/10b encoding (20% overhead) of PCIe 2.0 vs the 128b/130b (< 2% overhead) of PCIe 3.0, because of texture streaming techniques that aren't properly tuned or preemptive enough. Perhaps with the Fury X or Quantum (Dual Fiji XT) we'll see a card that saturates the 2.0 bus, which will put a bit of pressure on IBM and Intel to release PCIe 4.0, which was originally scheduled for Skylake-E but has dropped off the most recent roadmap. In compute, PCIe 4.0 is desperately needed, which is one reason IBM has already implemented NVLink on their Tesla-accelerated systems.
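To make the encoding-overhead point concrete, a small sketch of the effective x16 bandwidth under each encoding (theoretical peaks derived from the line rate and encoding ratio):

# Effective x16 bandwidth after encoding overhead
def pcie_x16_gbs(line_rate_gtps, payload_bits, total_bits, lanes=16):
    # line rate per lane (GT/s) * lanes * encoding efficiency, converted to GB/s
    return line_rate_gtps * lanes * (payload_bits / total_bits) / 8

print(f"PCIe 2.0 x16 (5 GT/s, 8b/10b)   : {pcie_x16_gbs(5, 8, 10):.2f} GB/s")     # 8.00
print(f"PCIe 3.0 x16 (8 GT/s, 128b/130b): {pcie_x16_gbs(8, 128, 130):.2f} GB/s")  # 15.75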

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


 

I've talked to him about PCIe latency and the impact of that on GPUs before (the discussion goes on for quite a few pages). He genuinely believes that PCIe is such a big issue that Intel's integrated GPUs will completely obliterate even the highest-end dedicated GPU setup in the near future, causing Nvidia to go bankrupt and AMD to go full CPU/APU.

I never said what you accuse me of saying and never would. For compute it's a huge issue. In gaming? No. It never has been and likely never will be, even with DX12 multi-adapter. The entire industry sees dGPUs becoming a much more niche product as time goes on, because of integration and integrated GPUs rising in potency to the level of high-end dGPUs by the end of the decade, but that's still a different matter.

 

Why am I continually plagued by

A ) Irrational Zealots

B ) Foolish Fanboys

C ) Vengeful Keyboard Warriors

D ) Egotists

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


I never said what you accuse me of saying and never would. For compute it's a huge issue. In gaming? No. It never has been and likely never will be, even with DX12 multi-adapter. The entire industry sees dGPUs becoming a much more niche product as time goes on, because of integration and integrated GPUs rising in potency to the level of high-end dGPUs by the end of the decade, but that's still a different matter.

 

Why am I continually plagued by

A ) Irrational Zealots

B ) Foolish Fanboys

C ) Vengeful Keyboard Warriors

D ) Egotists

 

Welcome to the LTT community!

phanteks enthoo pro | intel i5 4690k | noctua nh-d14 | msi z97 gaming 5 | 16gb crucial ballistix tactical | msi gtx970 4G OC  | adata sp900


 

Welcome to the LTT community!

 

 

That is not only LTT. That is anywhere that you're free to be as anonymous as you want, and as intelligent as the engineers of the products.


I'm gonna go with those benchmarks are a load of bullshit on this one.

 

I wonder how much power it's gonna draw; I'm guessing that you're gonna need at least a 1000W PSU to have two in CrossFire.

 

 

--Note to those who don't understand things--

The whole 1000W PSU for two Fury Xs in CF was a jab at AMD, as their cards tend to be on the higher power consumption side of things.

 

I bet a lot of people, such as myself, won't give a crap about power consumption and will just buy one. It doesn't matter how many watts it takes to win. More FPS is more FPS. ;)

CPU: AMD Ryzen 9 - 3900x @ 4.4GHz with a Custom Loop | MBO: ASUS Crosshair VI Extreme | RAM: 4x4GB Apacer 2666MHz overclocked to 3933MHz with OCZ Reaper HPC Heatsinks | GPU: PowerColor Red Devil 6900XT | SSDs: Intel 660P 512GB SSD and Intel 660P 1TB SSD | HDD: 2x WD Black 6TB and Seagate Backup Plus 8TB External Drive | PSU: Corsair RM1000i | Case: Cooler Master C700P Black Edition | Build Log: here


The truth is, even though it's 4GB "allegedly", the fact that it is water-cooled at that size makes it an overclocking monster. I'm enthused because later at the end of the year we WILL see a 6GB version at the same size... has anyone seen the specs for the R9 Nano?

