
Meladath

Member
  • Posts

    158
  • Joined

  • Last visited

Awards

This user doesn't have any awards

  1. And that's why you don't post something stupid like this on the internet, because: 1. looks are subjective, and 2. so many people run 4x cards, right? Oh boohoo, you need a single 120mm fan mount; even with a 240mm CPU AIO and 2 cards you only need 4 fan mounts. Show me a case that doesn't have 4 fan mounts that anyone actually buys.
  2. Wow, some people really are stupid. Comparing LN2 world-record attempt benches to stock ones and thinking it's somehow representative of real performance makes me laugh so much. Take a look at the average/lowest 4x Titan X results, the ones with cards at stock clocks, not the highest, lmfao. http://i.imgur.com/sKqdUwF.png 13640 graphics score vs ~15k graphics score with the 4x Fury X. http://i.imgur.com/AHAgdNX.png ~14k http://www.3dmark.com/fs/5141862 13800 http://www.3dmark.com/fs/5054873 That seems to be the highest 4x Titan X at stock I can see, just over 15k.
  3. I love these threads. All the people who know nothing about games development pretend they do, pretend they know what they're spewing, and it's just a massive shit-throwing contest! It's so good to watch people make incredibly ridiculous comparisons to things like Coke and cars.
  4. https://forum.beyond3d.com/threads/spinoff-4gb-framebuffer-is-it-enough-for-the-top-end.56964/page-3#post-1852564 Nice bit of info on texture streaming, compression, etc. People saying 4GB isn't enough, eat your heart out. It's a bit complicated though, so you most likely won't understand it if you're one of those people complaining.
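The Beyond3D post above is about texture streaming. The core idea can be sketched as a small LRU cache of mip levels: the engine keeps only the mips it currently needs resident, so the working set in VRAM stays under budget no matter how much texture data the game ships with. Everything below (class name, texture names, sizes) is my own illustrative assumption, not code from the linked post.

```python
# Toy sketch of mip-level texture streaming: resident data is capped at a
# VRAM budget, and the least-recently-used mips are evicted to make room.
from collections import OrderedDict

class MipStreamingCache:
    """Evicts least-recently-used mip levels when the VRAM budget is hit."""

    def __init__(self, budget_mb: float):
        self.budget_mb = budget_mb
        self.resident = OrderedDict()  # (texture, mip) -> size in MB

    def request(self, texture: str, mip: int, size_mb: float) -> None:
        key = (texture, mip)
        if key in self.resident:
            self.resident.move_to_end(key)  # mark as recently used
            return
        # Evict coldest mips until the new one fits in the budget.
        while sum(self.resident.values()) + size_mb > self.budget_mb:
            self.resident.popitem(last=False)
        self.resident[key] = size_mb

    def used_mb(self) -> float:
        return sum(self.resident.values())
```

With a 4096 MB budget you could stream tens of gigabytes of texture data over a play session while `used_mb()` never exceeds the budget, which is the point the linked post is making about 4GB cards.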
  5. And I am sure you know more about electronic engineering than the GPU engineers at AMD. So why aren't you working there exactly?
  6. You don't understand what you're looking at, so I know you don't notice anything. It looks like what you would expect given a shared-RAM configuration with a small eDRAM package. Very simple, comparatively speaking. What do you mean, "odd"? Not similar to a PC?
  7. Are you... actually... this... dumb? Literally every benchmark you can find on PCI-E bandwidth testing in games shows almost NO difference in FPS; only once you get down to crazy low amounts like PCI-E 1.1 x8/PCI-E 2.0 x4 does it even make a minuscule difference. There have already been links to videos of similar things in this thread, but here: https://www.techpowerup.com/reviews/NVIDIA/GTX_980_PCI-Express_Scaling/1.html http://www.tomshardware.com/reviews/graphics-performance-myths-debunked,3739-3.html https://www.pugetsystems.com/labs/articles/Impact-of-PCI-E-Speed-on-Gaming-Performance-518/ inb4 someone argues that some of the charts slightly prove his point while completely ignoring the margin of error. EDIT: PCI-E bandwidth ONLY MATTERS for GPGPU purposes, where it is incredibly important, as swapping large amounts of data multiple times in a single frame will expose the bus's weakness. It has almost NO impact on games, however, as VRAM is not swapped anywhere near as much as in a GPGPU environment.
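The game-vs-GPGPU distinction above comes down to simple arithmetic: a game streams a few tens of MB per frame at most, while a compute job may ping-pong gigabytes per iteration. A back-of-envelope sketch (the link bandwidths are approximate effective figures, and the 32 MB / 2 GB payloads are my own assumptions for illustration):

```python
# Rough per-transfer times over various PCI-E links, against a 16.7 ms
# (60 FPS) frame budget. Bandwidths in GB/s are approximate assumptions.
PCIE_BANDWIDTH_GBS = {
    "1.1 x8": 2.0,
    "2.0 x4": 2.0,
    "2.0 x16": 8.0,
    "3.0 x16": 15.75,
}

def transfer_time_ms(megabytes: float, link: str) -> float:
    """Time to move `megabytes` over the given link, in milliseconds."""
    gigabytes = megabytes / 1024.0
    return gigabytes / PCIE_BANDWIDTH_GBS[link] * 1000.0

for link, _ in PCIE_BANDWIDTH_GBS.items():
    print(f"{link}: game-like 32 MB -> {transfer_time_ms(32, link):.2f} ms, "
          f"GPGPU-like 2048 MB -> {transfer_time_ms(2048, link):.1f} ms")
```

A 32 MB upload takes about 2 ms on PCI-E 3.0 x16 (trivial) but ~16 ms on 1.1 x8 (a whole frame budget), matching the claim that only crazy-low link speeds matter for games, while a 2 GB compute buffer takes over 100 ms even on the fastest link, which is why GPGPU cares so much.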
  8. Well, before you have any proof, don't make claims that are refuted by every other benchmark on the internet and just say "well, I had some personal experience so I MUST be right and everyone else MUST be wrong!"
  9. The performance difference between the XB1 and PS4, which DOES exist (regardless of your stated points with no proof), is due to the faster GPU in the PS4, NOT the memory. It is FAR from a concrete DDR3 vs GDDR5 comparison. Did you actually not read my post AT ALL? Or did you just notice that I also said there was no PCIe communication and immediately assume everything else I said also agreed with you 100%? Specifically the part stating that DDR3 is SIMILAR PERFORMANCE WISE TO GDDR5; THEREFORE there won't be much performance difference. DDR3 simply has much lower latency, but also much lower bandwidth. HBM is NOT similar performance wise to DDR3 and GDDR5; THEREFORE there WILL be a performance difference.
  10. Wow, you really do know nothing about technology, do you? "I can notice the difference with pop-in!" (somehow; look at all this proof I am showing against the mountain of counter-evidence refuting my claim). Also, since consoles have unified memory they do not have to deal with the PCIe bus, and NO, the PS4 and XB1 DO NOT, ANYWHERE NEAR, "perform nearly the same". Also, DDR3 is very similar performance wise to GDDR5, so the point about consoles being "not very different" (which they very much are) again means nothing. If you knew what you were talking about you would know this. DDR2 is similar to GDDR3, DDR3 similar to GDDR5.
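The DDR3/GDDR5/HBM comparison in the two posts above rests on peak-bandwidth arithmetic: rate times bus width. A quick sketch, where the three example configurations (dual-channel DDR3-2133, 7 Gbps GDDR5 on a 256-bit bus, first-generation 4-stack HBM) are assumptions chosen for illustration:

```python
# Peak bandwidth (GB/s) = transfer rate (MT/s) * bus width (bits) / 8 / 1000
def peak_bandwidth_gbs(transfer_rate_mts: float, bus_width_bits: int) -> float:
    return transfer_rate_mts * bus_width_bits / 8 / 1000

ddr3  = peak_bandwidth_gbs(2133, 128)   # dual-channel DDR3-2133: ~34 GB/s
gddr5 = peak_bandwidth_gbs(7000, 256)   # 7 Gbps GDDR5, 256-bit bus: 224 GB/s
hbm   = peak_bandwidth_gbs(1000, 4096)  # 1 Gbps HBM, 4 stacks x 1024-bit: 512 GB/s
print(ddr3, gddr5, hbm)
```

The numbers show both halves of the argument: DDR3 trails GDDR5 badly on raw bandwidth (offset in practice by its lower latency, per the post above), while HBM's very wide bus puts it in a different bandwidth class from either.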
  11. By that time the Fury X will be a relic. And even then, look at the link I provided. Even Shadow of Mordor, which you showed has a peak of 5.6GB of VRAM usage, only gained 2 FPS on average at 4K with all other settings maxed on an 8GB card over a 4GB card. It did stutter more, as they say, but of the games on your list it's pretty well known that Shadow of Mordor eats VRAM and isn't very efficient with it. I doubt the "peak" matters much at all. Also, remember HBM is FAR faster than GDDR5. That will undoubtedly help with the stuttering that something like SoM faces at 4K maxed. I HIGHLY doubt AMD would release a card, tout it as the best thing ever for 4K gaming, and just give it 4GB of VRAM knowing it will stutter etc.
  12. http://www.tweaktown.com/tweakipedia/68/amd-radeon-r9-290x-4gb-vs-8gb-4k-maxed-settings/index.html 4GB is more than enough for 4K right now. Anyone saying it isn't, or even trying to insinuate anything like that, is just trying to cause an argument, or trying to prove themselves right with no other point. Even Shadow of Mordor maxed out at 4K gains a whole... 2 FPS for having 8GB over 4GB. WOW, 2 FPS. Won't stop the Nvidia fanboys though. "But X game uses X at X resolution with my X" Unused RAM is wasted RAM; pretty much every game ever will store as much as it possibly can in VRAM at all times, even if the performance increase is absolutely minuscule. That doesn't mean it NEEDS that much though. Refer to the link I posted for more.
  13. The 980 Ti uses as much power as, if not more than, the R9 290X... how do you guys fail to read benchmarks... TDP IS NOT POWER USAGE.
  14. gr8 b8 m8 i r8 8/8 only nvidia can improve efficiency right?
  15. I am guessing the R9 Nano is a 390X/290X 4GB (using HBM instead of GDDR5)