AMD Fury X Far Cry 4 game performance from AMD

ahhming

I'm excited about this,

ofc I'm still waiting for benchmarks from reputable sources.

But it still looks very promising.


If I'd known in advance that I'd be having this discussion, I'd have prepared recorded footage,

but demanding I produce evidence at the snap of a finger... wow!

No one even bothered testing for this, and bam! ... I'm instantly wrong :blink:

Yes, after you make a claim that contradicts everything, you'd better back it up with evidence.

But you aren't even at that stage yet. It's not about proving whether you get pop-in on PCIe 2.0; it's about proving that in a system where PCIe 2.0 vs PCIe 3.0 is the only difference, PCIe 3.0 eliminates pop-in.

Which is where you have been proven wrong: pop-in exists regardless of PCIe 2.0/3.0.

Now you have the option to back out gracefully or you can go full UbiSoft and keep digging deeper.



.....

you don't even understand what I'm saying

here's the block diagram for the XB1 with unified RAM architecture, with DDR3:

[XB1 block diagram image]

 

notice anything odd at all?


I'm gonna stay skeptical. For all we know, this is overclocked.

Still though, I don't know if this would be enough to satisfy that one guy from yesterday who went full retard and said that, when overclocked, it must beat an overclocked G1 Gaming 980 Ti while only consuming as much power as a stock G1 Gaming 980 Ti.

and that was exactly my point,

because HBM does not increase PCIe bandwidth,

and HBM is only relevant to GPU-VRAM communication

case in point: DDR3 vs GDDR5 makes very little difference in real-world performance
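
For a rough sense of scale (my own back-of-the-envelope figures, not numbers posted in this thread), here is why the VRAM type doesn't change the CPU-to-GPU link: both HBM1 and GDDR5 sit an order of magnitude above what a PCIe 3.0 x16 slot can move.

```python
# Rough peak-bandwidth comparison; the bus widths and per-pin rates are the
# commonly quoted specs for HBM1 (Fury X) and GDDR5 (980 Ti), used here only
# for illustration.

def mem_bandwidth_gbs(bus_width_bits: int, rate_mtps: float) -> float:
    """Theoretical peak bandwidth in GB/s for a memory interface."""
    return bus_width_bits / 8 * rate_mtps * 1e6 / 1e9

hbm1  = mem_bandwidth_gbs(4096, 1000)   # 4096-bit @ 1 Gbps/pin -> ~512 GB/s
gddr5 = mem_bandwidth_gbs(384, 7000)    # 384-bit  @ 7 Gbps/pin -> ~336 GB/s
pcie3_x16 = 16 * 8.0 * (128 / 130) / 8  # 16 lanes @ 8 GT/s, 128b/130b -> ~15.75 GB/s

print(f"HBM1  (GPU<->VRAM):       {hbm1:6.1f} GB/s")
print(f"GDDR5 (GPU<->VRAM):       {gddr5:6.1f} GB/s")
print(f"PCIe 3.0 x16 (CPU<->GPU): {pcie3_x16:6.1f} GB/s per direction")
```

Whatever sits on the card, traffic to and from the CPU still crosses the same PCIe link.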

Welcome to LTT forums: The tech forum where nobody knows shit about tech! (Yes, this is directed at you.)


you don't even understand what I'm saying

here's the block diagram for the XB1 with unified RAM architecture, with DDR3:

 

*snip*

 

notice anything odd at all?

 

You don't understand what you are looking at, so I know you don't notice anything.

 

Looks like what you would expect given a shared RAM configuration with a small ESRAM block. Very simple, comparatively speaking.

What do you mean "odd"?

Not similar to a PC?


you don't even understand what I'm saying

here's the block diagram for the XB1 with unified RAM architecture, with DDR3:

[XB1 block diagram image]

 

notice anything odd at all?

 

 

---

 

The reason I used TW3 as an example is that the engine uses texture streaming rather than pre-caching; thus it makes constant asset swaps in VRAM.
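
To illustrate the difference (a generic, hypothetical sketch, not TW3's actual engine code): with streaming, only the mip levels needed for the current view are kept resident, so uploads happen over PCIe every frame, and any upload that misses its per-frame budget shows up as pop-in.

```python
# Minimal, illustrative texture-streaming loop (hypothetical, not TW3 code).
# Each frame, mips needed for the current view are uploaded within a budget;
# anything deferred stays at lower detail for a moment -> visible pop-in.

from dataclasses import dataclass

@dataclass
class Texture:
    name: str
    mip_sizes_mb: list   # size of each mip level, index 0 = full resolution

def stream_frame(visible, resident, budget_mb):
    """visible: list of (Texture, needed_mip); resident: {name: mip}."""
    spent, pop_in = 0.0, []
    for tex, needed_mip in visible:
        if resident.get(tex.name, 99) <= needed_mip:
            continue                           # already resident at this detail
        cost = tex.mip_sizes_mb[needed_mip]
        if spent + cost > budget_mb:
            pop_in.append(tex.name)            # upload deferred -> pop-in
            continue
        resident[tex.name] = needed_mip        # simulated PCIe upload
        spent += cost
    return pop_in

rock, tree = Texture("rock", [32, 8, 2]), Texture("tree", [64, 16, 4])
resident = {}
print(stream_frame([(rock, 0), (tree, 0)], resident, budget_mb=50))  # ['tree']
```

A pre-caching engine would instead load everything for the level up front, trading load time and VRAM for the absence of mid-play uploads.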


Right on target for my 2016 major upgrade; looks like everything is going to pan out.



Welcome to LTT forums: The tech forum where nobody knows shit about tech! (Yes, this is directed at you.)

another master; welcome, and I await your enlightenment


you don't even understand what I'm saying

here's the block diagram for the XB1 with unified RAM architecture, with DDR3:

[XB1 block diagram image]

 

notice anything odd at all?

You do realise that I already know the Xbox uses DDR3, and that it does so because it saves Microsoft money? Plus it's using an APU very similar to what you'd find in a PC, instead of a dGPU, so standard DDR3 is more appropriate and costs little to implement. And games on consoles always have more graphical issues than games on PC.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


If I'd known in advance that I'd be having this discussion, I'd have prepared recorded footage,

but demanding I produce evidence at the snap of a finger... wow!

LOL, when you claim something that you know is true for sure, there has got to be some evidence backing up your statement.

 

You just came here saying PCI-E 2.0 is a bottleneck unlike PCI-E 3.0. Full stop.

It's the same way you were bitching about your hatred for AMD (dating back to 1996-1997) on CG over the R9 3XX series, as if it was relevant (he knows what I'm talking about; most of you may not, so to clarify, CG is a Romanian forum).

 

We came and said the opposite and backed up our claim with four reviews and tests that prove it right.

 

So please, do us a favor and stop embarrassing yourself.

 

EDIT: BTW guys, for extra fun, he claims he works at an IT service and somehow thinks that by doing so, he knows everything and we "knows" nothing. (Gollum reference, anyone?)


 


Looks promising, but I'm going to wait for actual benchmarks, as stock-speed comparisons are unrealistic for Maxwell.
Maxwell easily runs at 1400 MHz+, so I want to know how it compares to the Fury X and how much OC headroom the Fury X has.



LOL, when you claim something that you know is true for sure, there has got to be some evidence backing up your statement.

 

You just came here saying PCI-E 2.0 is a bottleneck unlike PCI-E 3.0. Full stop.

It's the same way you were bitching about your hatred for AMD (dating back to 1996-1997) on CG over the R9 3XX series, as if it was relevant (he knows what I'm talking about; most of you may not, so to clarify, CG is a Romanian forum).

 

We came and said the opposite and backed up our claim with four reviews and tests that prove it right.

 

So please, do us a favor and stop embarrassing yourself.

 

EDIT: BTW guys, for extra fun, he claims he works at an IT service and somehow thinks that by doing so, he knows everything and we "knows" nothing. (Gollum reference, anyone?)

naming and shaming .. how cute

but you failed, my furry friend, because all the CG users showed in the AMD sub-forum is that they lack understanding of basic computer hardware architecture

not even going further into matters where they compare multi-GPU with single-GPU setups in some idiotic comparison, blatantly ignoring that multi-GPU systems work in alternate frame rendering

 

good job there buddy! good job

here's a cookie: https://upload.wikimedia.org/wikipedia/commons/6/6e/Pepperidge-Farm-Nantucket-Cookie.jpg

now, get back to CG forums and tell them how you wiped the floor with me - not! but maybe they'll give you a cookie too, just for effort

 

maybe you'll have time to compare diplomas, since you know .. e-peen measuring



I doubt this is true, mostly because it's coming directly from AMD.

I'll believe it when I see benchmarks from a reputable source.

The same would go if it was from Nvidia.

 

If they were really trying to skew things, then they'd have set the average FPS at 60 and the minimum FPS at 50-something.



naming and shaming .. how cute

but you failed, my furry friend, because all the CG users showed in the AMD sub-forum is that they lack understanding of basic computer hardware architecture

not even going further into matters where they compare multi-GPU with single-GPU setups in some idiotic comparison, blatantly ignoring that multi-GPU systems work in alternate frame rendering

 

good job there buddy! good job

here's a cookie: https://upload.wikimedia.org/wikipedia/commons/6/6e/Pepperidge-Farm-Nantucket-Cookie.jpg

now, get back to CG forums and tell them how you wiped the floor with me - not!

 

maybe you'll have time to compare diplomas, since you know .. e-peen measuring

WTF are you babbling about now? That did not make any sense. Oh, wait. You're not even trying to have a logical discussion since you got proven wrong, so you're resorting to flaming. How original.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


You do realise that I already know the Xbox uses DDR3, and that it does so because it saves Microsoft money? Plus it's using an APU very similar to what you'd find in a PC, instead of a dGPU, so standard DDR3 is more appropriate and costs little to implement. And games on consoles always have more graphical issues than games on PC.

 

Plus it's 2133 MHz, so the difference in performance is negligible, if not better, due to lower latency.



WTF are you babbling about now? That did not make any sense. Oh, wait. You're not even trying to have a logical discussion since you got proven wrong, so you're resorting to flaming. How original.

I replied to him, not to you, feel free to ignore that post

it matters not in the debate between you and me


maybe you'll have time to compare diplomas, since you know .. e-peen measuring

 

That sentence says more about a person than I would ever need to know about their competence.

I have studied physics, am now studying computer science, and have been programming professionally for 8 years (I did an apprenticeship), if you are really into comparing diplomas.

 

However, every person with even a bit of common sense knows that diplomas mean nothing. They are just there to certify a certain amount of knowledge that you had at a specific point in your life. This does not imply that you still have this knowledge, or that you are "better" than anyone who doesn't hold those diplomas.

 

Seriously man, so much arrogance...



I'm sorry, do I look like I work for LTT?!?!

and yes, I have a PCIe 2.x platform and I can attest to the occasional asset pop-in in TW3

care to disprove me? be my guest

 

 

The reason I used TW3 as an example is that the engine uses texture streaming rather than pre-caching; thus it makes constant asset swaps in VRAM.

 

Pop-in occurs on PCIe 3.0 too

Pop-in occurs on PCIe 3.0 too

Pop-in occurs on PCIe 3.0 too



I replied to him, not to you, feel free to ignore that post

it matters not in the debate between you and me

It's not a debate. It's me providing all the evidence necessary, and you not even reading it.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


...

I really got heated because it feels like they're "hunting" me

sorry

It's not a debate. It's me providing all the evidence necessary, and you not even reading it.

and you're not reading what I post, because I said the PS4 uses GDDR5 vs the DDR3 in the XB1 - despite the considerable difference in favor of the PS4's RAM bandwidth, the two systems have very similar real-world performance

can we at least agree on this?! if not, we should end here
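
For reference, these are the commonly quoted raw figures behind that claim (my own arithmetic from the published specs, not numbers from this thread); the headline bandwidth gap between the two consoles is large even before counting the XB1's small ESRAM:

```python
# Commonly quoted console memory specs, peak theoretical figures only.

def peak_gbs(bus_width_bits: int, rate_mtps: float) -> float:
    """Theoretical peak bandwidth in GB/s."""
    return bus_width_bits / 8 * rate_mtps / 1000

xb1_ddr3  = peak_gbs(256, 2133)   # ~68 GB/s main DDR3-2133 pool
xb1_esram = 102                   # ~102 GB/s, but only 32 MB on-die ESRAM
ps4_gddr5 = peak_gbs(256, 5500)   # ~176 GB/s unified GDDR5 pool

print(f"XB1: {xb1_ddr3:.0f} GB/s DDR3 + ~{xb1_esram} GB/s ESRAM (32 MB)")
print(f"PS4: {ps4_gddr5:.0f} GB/s GDDR5")
```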


I really got heated because it feels like they're "hunting" me

sorry

 

Well, the facts/tests have been provided to you in the form of reference links and 3 videos from LTT specifically on this topic. All of them show that unless you go back to PCIe 1.x you are not bottlenecking the graphics card's performance. I don't see the point of this discussion, then.

Of course it is true that PCIe 3.0 is much faster than 2.0, but as has been proven it has no really measurable effect on GAMING performance. (Well, as a physicist I like to allow a small percentage for measurement error; a 1-2 FPS difference is not really noticeable imo and may have been caused by other factors.)
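
The per-direction numbers behind that (a quick sketch from the published per-lane rates; my own figures, not taken from the linked tests):

```python
# Peak per-direction throughput of a x16 slot across PCIe generations,
# from the published per-lane transfer rates and encoding overheads.

GENERATIONS = {
    "1.1": (2.5, 8 / 10),      # GT/s per lane, encoding efficiency (8b/10b)
    "2.0": (5.0, 8 / 10),
    "3.0": (8.0, 128 / 130),   # 128b/130b encoding
}

for gen, (gtps, eff) in GENERATIONS.items():
    gbs_x16 = gtps * eff * 16 / 8      # GB/s over 16 lanes
    print(f"PCIe {gen} x16: {gbs_x16:5.2f} GB/s per direction")
```

Even the drop from 3.0 to 2.0 still leaves roughly 8 GB/s per direction, which is consistent with the linked tests showing only a 1-2 FPS difference in games.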



and you're not reading what I post, because I said the PS4 uses GDDR5 vs the DDR3 in the XB1 - despite the considerable difference in favor of the PS4's RAM bandwidth, the two systems have very similar real-world performance

can we at least agree on this?! if not, we should end here.

You're forgetting that the consoles don't differ just in the type of system memory used.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


Well, the facts/tests have been provided to you in the form of reference links and 3 videos from LTT specifically on this topic. All of them show that unless you go back to PCIe 1.x you are not bottlenecking the graphics card's performance. I don't see the point of this discussion, then.

Of course it is true that PCIe 3.0 is much faster than 2.0, but as has been proven it has no really measurable effect on GAMING performance. (Well, as a physicist I like to allow a small percentage for measurement error; a 1-2 FPS difference is not really noticeable imo.)

I'm not talking about PCIe bandwidth; that wasn't even in my original point when comparing HBM to GDDR5, because PCIe bandwidth doesn't vary with the type of RAM the video card has

You're forgetting that the consoles don't differ just in the type of system memory used.

this is exactly why I used them in this debate: to have an existing comparison between two types of RAM as interfaced with the GPU


I'm not talking about PCIe bandwidth; that wasn't even in my original point when comparing HBM to GDDR5, because PCIe bandwidth doesn't vary with the type of RAM the video card has

Ah I see, my mistake then



this is exactly why I used them in this debate: to have an existing comparison between two types of RAM as interfaced with the GPU

As in, you're forgetting the iGPU configs. The two consoles have different variants.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL

