tribaljet

Member
  • Posts

    84
  • Joined

  • Last visited

System

  • CPU
    Intel Core i7-2820QM
  • RAM
    8GB DDR3 @ 1600MHz
  • GPU
    Nvidia Geforce GT 555M Rev. A1
  • Storage
    500GB 7200RPM HDD
  • Display(s)
    17" 1920x1080 @ 60Hz
  • Mouse
    Razer Copperhead
  • Sound
    Creative X-Fi Go! (non-"Pro"), Edirol UA-25, NI Audio 2 DJ, PA2V2
  1. There's always the whole game modding matter to consider, but pushing crazy amounts of AA can indeed inflate VRAM usage. And of course it's always good to have more VRAM, but whether the card can push acceptable performance while using its entire framebuffer is a whole different matter. There seems to be a general consensus that the initial G1 Gaming Maxwell cards are on the louder side of things, but from a purely performance-centric perspective it is indeed one of the best cards to get. If a balance between performance and noise is desired, something like an MSI 4G might be a better option, especially since it has a hybrid fan mode that turns the fans off when idling or at low loads.
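To put rough numbers on how AA inflates VRAM usage, here's a back-of-the-envelope sketch. This is an illustrative model only; it ignores framebuffer compression and driver overhead, so real-world usage will differ:

```python
def framebuffer_mb(width, height, samples, bytes_color=4, bytes_depth=4):
    """Raw size in MiB of a multisampled color + depth render target.
    Illustrative model: assumes 32-bit color and 32-bit depth per sample,
    no compression, no auxiliary buffers."""
    pixels = width * height
    return pixels * samples * (bytes_color + bytes_depth) / 2**20

# At 1080p, going from 4x to 8x MSAA roughly doubles the raw target size.
print(round(framebuffer_mb(1920, 1080, 4), 2))  # 63.28
print(round(framebuffer_mb(1920, 1080, 8), 2))  # 126.56
```

The point being: the multisampled targets alone scale linearly with sample count, before textures and geometry even enter the picture.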
  2. Unexpected, considering most people I've talked with preferred ATI to AMD. While many claimed the HD 2000 and 3000 series were subpar, all agreed that the HD 4000 series was excellent and that there was a clear design-path shift from the HD 5000 onwards. APUs do indeed strike a great balance of budget, sufficient CPU grunt and surprisingly acceptable IGP performance, something Intel hasn't matched so far (Broadwell's IGP outperforms the best current APUs, but at a significant price hike). Regarding consoles, they do the job of simplifying gaming quite well, even if over time they have become more complex, as expected. However, do keep in mind I believe consoles peaked at the 6th generation. Yes, I very much want to see (let's go wild) something like 1GB of HBM onboard; that'll give a proper kick in Intel's pants lol. And looking at HTPC usage, the future could look bright for AMD. And yes, it's like AMD plans several moves well ahead, just that it's a risky method since Intel/Nvidia could pull a surprise move disruptive enough to force AMD into changing its original plans. Imagine AMD's CPU and GPU divisions being split, with ARM taking the CPU division and Samsung taking the GPU division, each improving their respective chips. I believe it would improve the market (further down the line), with the main issue being x86 licensing availability, which I believe Intel would be forced to concede in order to avoid anti-trust issues.
  3. Clearly you are unaware that we are on a computer tech forum where there are users known as enthusiasts, and said users know what they go for. You've already been told the price difference is smaller than what you're assuming (look, that word again).
  4. It certainly is odd that they went with the same ROP configuration as flagship Hawaii. They seemed to do a 780 -> 780 Ti play on a larger scale, which was interesting for sure. And certainly agreed on how both manufacturers have fundamental differences that can be perceived in multiple ways, one particularly popular example as of late being tessellation in Witcher 3. What are your thoughts on someone buying AMD, or at least the GPU division, and seriously infusing cash so R&D could boom?
  5. Agreed that a wider bus would benefit both the GTX 970 and 980. Yet, if looking purely at memory bandwidth, the GTX 980 Ti does remarkably well against a watercooled Fury X, does it not? After all, memory bandwidth is only a part of the game. But yes, the overall conclusion is that the GTX 970 and 980 should both have a wider bus (as should the GTX 960, for that matter), and everyone would benefit. Oh look, a stock OC'd model that outperforms the R9 390X (SAPPHIRE VAPOR-X R9 290X TRI-X OC). What a shock! But please, continue to prove how vastly knowledgeable you are. After all, I still remember you fiercely defending AMD-optimized games without even knowing Far Cry 3 was one lol. Here's a pro tip: learn the difference between stock and aftermarket, and what each card can actually do.
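For reference, peak memory bandwidth is just bus width times effective transfer rate. A quick sketch using the commonly quoted stock reference specs (assumed stock clocks, not measured figures):

```python
def mem_bandwidth_gbs(bus_width_bits, effective_rate_gtps):
    """Peak theoretical memory bandwidth in GB/s:
    bits per transfer x transfers per second, divided by 8 bits/byte."""
    return bus_width_bits * effective_rate_gtps / 8

# Commonly quoted reference specs (assumption: stock memory clocks).
print(mem_bandwidth_gbs(384, 7.0))   # GTX 980 Ti, GDDR5 -> 336.0
print(mem_bandwidth_gbs(4096, 1.0))  # Fury X, HBM       -> 512.0
print(mem_bandwidth_gbs(256, 7.0))   # GTX 970/980       -> 224.0
```

Which illustrates the point above: the Fury X's HBM holds a large raw-bandwidth lead on paper, yet the 980 Ti competes anyway, so bandwidth alone doesn't decide the matchup.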
  6. I don't need to, many have done so successfully. Just because you are unable to do so, don't assume (you know what they say about that word, right?) that others can't, in a very safe manner.
  7. Yes, they have been patching the drivers for around 7 months or more. And unsurprisingly you're wrong again: the R9 290(X) can easily reach R9 390(X) performance. And you keep on banging your VRAM drum.
  8. That's amusing, because I've had zero issues with any browser using that backend on Maxwell cards under Windows 7 and Windows 8.1. Yes, the R9 290(X) can reach 390(X) performance, but the 200 series lacks features only available on the 300 series (excluding VSR, of course). If the 300 series had the exact same features as the 200 series and just the increased performance attained through clocks, then I'd be the first to call it a rebrand. AMD could very easily have made the new features available on the 200 series, but instead they went and did otherwise as a differentiator.
  9. Makes sense you would post that second link; similar minds must meet. And apparently you've never dealt with browser GPU acceleration issues, otherwise this would've been fixed in a couple of seconds, but no, you had to come sputter more fluff. Odd for sure, but the site is a reference for that type of content and its accuracy. Yes, many people think the R9 390X is a mere rebadge of the R9 290X, whereas it is in fact an architecture refresh that brings improvements (albeit small) across the board. While I can agree that there's a very valid case for TVs providing immersion, the input lag really kills it for any sort of fast gaming, and while genres like RTS can be played that way without suffering excessive input lag, that isn't really the main focus of the tech. OpenCL still has a long way to go, and the fact that all 3 major IGP/GPU vendors have mismatched OpenCL support doesn't help either. The potential certainly is there, and if it does manage to overtake CUDA's position, then we might see design changes in future Nvidia GPUs. The thing is, do you really believe widening the bus is such a small cost? It can increase performance, but it also increases chip size and thermal envelope requirements, and historically wider buses have come with lower memory clocks. That's not to say the GTX 980 Ti/Titan X have followed that practice, but they're in different price brackets. And would something like a GTX 970 really get a wider bus considering its performance tier? We already knew when the GTX 970 and 980 came out that there would be a Titan card to match the generation, and potentially a Ti card as a stopgap between the GTX 980 and Titan cards, so when looking at it that way, the GTX 970 falls in place with the 770 and 670/680, all having the same bus width.
I've been enjoying the technical evolution of Witcher 3 (miss you, E3 graphics), and I have to say that while the game looks stunning, not offering granular in-game tessellation settings was an asinine move from CDPR. Over the years I've come to realize that there are unquestionably driver releases with excellent performance/stability ratios, and moving from those drivers (assuming they're not ancient releases) to newer ones is only justified if a specific game (or a few) is factually and drastically improved by them, and if that sort of improvement doesn't come from patching the game itself. Makes sense, GCN showed its compute colors from the start, and I would very much like to see a W8100 softmodded into a R9 290X to see what it could reach.
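On the browser GPU acceleration point: the usual first diagnostic step is toggling hardware acceleration off to confirm the compositor is at fault. A sketch of the era-appropriate switches (assumptions: the browser binary is on PATH, and the Firefox pref name as it existed around the Windows 7/8.1 era):

```shell
# Chrome/Chromium: force software rendering for one session.
chrome --disable-gpu

# Firefox: no flag needed; in about:config, flip the layers pref
# and restart the browser:
#   layers.acceleration.disabled = true
```

If the rendering glitches disappear with acceleration off, the problem sits in the GPU compositing path (driver or backend) rather than in the page itself.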
  10. The issue is finding recent reviews of it, as several reviews are dated 2014 and, due to driver changes since then, skew the results.
  11. I understand what you're saying, but you can look around and find that Techpowerup (along with other well-established sites like AT) is very reputable and trustworthy, and I pretty much put them above most Youtube content. Should we really condone shoddy optimization? There really is little reason for such VRAM usage at 1080p for the foreseeable future, given what both Maxwell and GCN 1.2 can do to optimize VRAM usage. Regarding bandwidth, I believe that was done for the sake of efficiency, which, while it did show results, was disappointing, as we can see from the difference between GM204 and GM200. Regarding 1080p, it does have the advantage of allowing any given card to last quite a bit longer, precisely due to lower strain compared to 1440p+. For 1440p I would indeed consider both cards valid, with the R9 390 potentially the better pick, as long as (and I can't stress this enough) the whole package of features fits the desired needs. Ghosting is a palpable issue with FreeSync, as is a rather undesirable refresh rate range, so I would never recommend it over G-Sync. Keep in mind I look at each tech's capabilities, not at how "morally" right or wrong each is. I personally do like the filter used on DSR, but from what I've seen, DSR and VSR have a very similar performance hit, so I can't agree that VSR has a lower performance hit (very marginal at best). CF wins against SLI, no argument there. AMD wins at multimonitor, Nvidia wins at 3D. OpenCL is AMD's cup, but the majority uses CUDA; still, things might change in the future. Shadowplay is quite interesting, even though it bothers me that they didn't separate the component from Geforce Experience. My concerns with Raptr are mainly with the user agreement, if you can believe that. Still on the memory bandwidth note, look at how low the bandwidth is and what the architecture does with it.
So yes, I would've liked to see a wider bus, but it performs quite well nonetheless. Golden R9 290(X) chips are great cards to get; finding them is the tricky part. And it should be said that Maxwell tends to have significantly higher OCing headroom than GCN, and I don't believe that has anything to do with bus width or chip size. Why do you think that is the case?
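On the DSR/VSR note above: both techs render internally at a multiple of the native pixel count and then downsample, which is why their performance hits end up so similar, since the shading cost tracks the rendered pixel count either way. A small sketch of the factor-to-resolution math, using factors as Nvidia's DSR exposes them (multiples of native pixel count):

```python
import math

def dsr_resolution(native_w, native_h, factor):
    """Internal render resolution for a DSR/VSR-style factor.
    The factor multiplies pixel count, so each axis scales by sqrt(factor)
    (e.g. 4.00x of 1920x1080 renders at 3840x2160)."""
    scale = math.sqrt(factor)
    return round(native_w * scale), round(native_h * scale)

print(dsr_resolution(1920, 1080, 4.0))   # (3840, 2160)
print(dsr_resolution(1920, 1080, 2.25))  # (2880, 1620)
```

A 4.00x factor therefore means the GPU shades four times the pixels regardless of which vendor's downsampling filter runs afterwards, which matches the observation that the two implementations carry near-identical costs.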
  12. Are you genuinely joking or serious about the driver matter? Because I see both WHQL and beta drivers working without issues on Maxwell. And I'll concede that AMD's newer drivers can improve performance, but that is again AMD's fault for having a shoddy driver release schedule compared to Nvidia. Region is of no consequence, performance is. Better to avoid cherry-picking results, AMD already does enough of that. And again, I gave the source to the article, which includes all the graphs; the one graph was linked because it's the only one relevant to the OP. Regarding the videos, I'm just glad some (in)famous youtubers weren't present in them, otherwise it would border on ridiculous. Go look at that other poster's post track record and see how there are blanket recommendations without backing; that is my issue, not recommending "a" or "b".
  13. There is at least one video that's credible. And I still prefer reviews from more established reviewers. The source was mentioned, but here's the link: http://www.techpowerup.com/reviews/Powercolor/R9_390_PCS_Plus/ But sure, keep deluding yourself into thinking it's a GTX 970 running its core at 2GHz against a R9 390 running at 50MHz, if it makes you feel any better.
  14. Except it doesn't win at 1080p, and it only improves at 1440p and higher. And "a bit of power" is a drastic understatement. Instead of posting videos like you do in every single post, try actually writing an argument with data rather than linking videos (yet again). Just as you've shown in other threads, you clearly go by word of mouth rather than learning for yourself, thereby giving false advice to people seeking factual information. In any case, as before, it's a waste of time arguing with you.