Everything posted by MattDaemontools

  1. Why people even recommend the 750 Ti is beyond me; the R7 265 is noticeably faster. I agree with Qwerty though: grab a used Core 2 Quad and a new GPU. Grab the fastest GPU you can afford. R9 270 > R7 265 > GTX 750 Ti > R7 260X > 750 > R7 260
  2. This is what happens when you let marketing morons go wild. The 980 isn't a 165W card and the 970 can't use all of its VRAM properly. Also, what the frack is wrong with Nvidia calling the Tegra X1 a 1 TFlop chip when they're actually counting 16-bit half-precision performance, which was abandoned over a decade ago? The single-precision FP32 performance is 500 GFlops at PEAK, so not even what it will manage in actual products with real-world thermal and power limits (see the rough FLOPS sketch after this list).
  3. The general manager of AMD's computing and graphics business unit revealed some fascinating information in a recent interview. The AMD executive says AMD is actively working on making APUs and GPUs talk to each other beyond simple dual graphics for better performance. AMD also says it cares very much about x86 CPU performance; in fact, it's one of their top priorities. The GM also talked about AMD's newfound interest in software and emphasized the importance of GPU drivers, citing the recent AMD Catalyst Omega driver as an example. According to the GM, the recent restructuring and cost-cutting efforts were done to free up more funds for research and development. These are excerpts from the interview, which you can find in the full article. AMD also talked about Carrizo, virtual reality, and working on next-generation "leadership" graphics products, among other things.
  4. I'd go for the Gaming 970. Quieter, slightly cooler and overclocks slightly better.
  5. I wouldn't pay much, if any, attention to synthetics, especially Unigine-based ones, since those favor Nvidia. If you're getting a good performance improvement in games, then there's nothing to fret over.
  6. I personally don't find reference cards to be better than non-reference designs in anything, frankly. Even in multi-GPU setups, non-reference axial-cooled GPUs that exhaust heat directly into the case have always ended up running cooler for me.
  7. I either sell my old cards or donate them, depends on my financial state really.
  8. Here's the deal... I post a comment and you send me free stuff, agreed?
  9. None, I hate coil whine. I'd get an R9 290 Vapor-X instead. It's 70 bucks cheaper and faster than the overclocked 970s once you overclock it. http://www.bit-tech....x-970-review/12 http://www.bit-tech....r-x-oc-review/9
  10. Both cards are equivalent to an HD 7870 2GB card. The 1.25GB of memory on the GTX 570 frankly has no place in today's games. You will definitely run out of memory playing a lot of games, and that's going to be a horrible, stuttery experience.
  11. Someone above him DID say the 390X will use 750W. Obviously a gaffe at AMD. Well, GPUXPert is right: the 900 series isn't as efficient as Nvidia claims. Not even close. But when someone calls them out on that, they're attacked for some strange reason.
  12. I stand corrected. This is extremely toxic to the forums. All of these posts get filled with pointless debate about who to believe and who not to believe, with no discussion of the actual content. You're thinking of videocardz.
  13. No, people here definitely bash the AMD-related wccf news. No one said a thing about how "horrible" wccf is when they leaked the 980 and 970 prices. Double standard at its finest. http://linustechtips.com/main/topic/217044-gtx-980-and-gtx-970-prices-revealed-549-and-329/
  14. I didn't see all this wccf hate when they were leaking 900 series info... double standard much? :blink: :blink: :blink:
  15. CUDA is nowhere near as fast as OpenCL, which is why Adobe abandoned CUDA in favor of OpenCL.
  16. What about the difference between 47 and 45? It's 4.4%. Is that within margin of error too? You can't just dismiss every result by saying it's within margin of error. A slower frame rate means added latency. Also, the fact that the frame rate is always lower by exactly 2, even though 88 and 47 are very different frame rates, proves that G-Sync always adds a frame of latency or more. Otherwise we should have seen a similar percentage of lost performance, but 4.4% is close to double 2.5% (see the frame-time sketch after this list). And because G-Sync is always a frame behind, when you play against a FreeSync player or a normal non-V-Sync player, you will shoot first and you will still die. You will "think" that you shot first because of the image, but the fact is you're always a frame behind, so you don't notice anything unusual until you start engaging with other players.
  17. http://www.digitaltrends.com/computing/intel-enthusiasts-devils-canyon-pc-cpu/#!8Yh47

Earlier this year, Intel made waves with PC enthusiasts by announcing the arrival of new “Devil’s Canyon” chips. Aside from a cool code name, these processors were said to provide vastly improved overclocking potential, thanks to re-worked power management, and a thermal interface polymer that conducts heat more efficiently. Now that they’ve arrived, however, enthusiasts are wondering if they were duped.

Fool me once

Before talking about Devil’s Canyon, though, let’s first remember what led up to it. The release of Haswell last year was expected to provide a modest, but noticeable bump in performance over the previous generation. Once reviewers laid their hands on Haswell chips, however, it became clear that something had gone wrong. The new 4th-gen desktop processors were only 10 percent quicker than their predecessors, at most, and they were priced slightly higher than their 3rd-gen equivalents, which meant their value was questionable. Worse, the 4th-gen chips switched to a less effective thermal material, which made them less suitable for overclocking. Reviewers noted that the new processors often peaked at a lower overclock speed than those that came before. The Tech Report, for example, managed to achieve 4.9 GHz with an Ivy Bridge chip, but only hit 4.7 GHz with the Core i7-4770K. Tom’s Hardware, meanwhile, only hit 4.6 GHz with a single 4770K; most maxed out at 4.4 GHz.

This development is only the latest in a long line of decisions that has put Intel at odds with die-hard PC fans. In 2010, the company eliminated the ability to overclock most chips by tying the speed of every chipset bus to a sole internal clock. Intel then poured salt on the wound by introducing expensive “K-series” processors that do have an unlocked multiplier, but also cost more than their locked siblings.

Fool me twice

Intel’s execution of its anti-overclocking campaign was made with few excuses. Enthusiasts often felt ignored, but they also had little choice but to stick with Intel. AMD’s latest processors simply aren’t quick enough to compete. So it’s no wonder the community perked up when Intel’s VP of the PC Client Group, Lisa Graff, began hyping the new Devil’s Canyon hardware. Marketed from the beginning as an answer to enthusiasts who felt ignored, the unlocked chips promised maximum speeds of up to 5 GHz on air cooling, a truly outstanding figure. Enthusiasts went starry-eyed and light-headed as they dreamed of what might be possible. Overclocking quotes from manufacturers are usually conservative, after all; if Intel says 5 GHz, then what’s really possible?

Less than 5 GHz, as it turns out. Numerous reviewers have found that the new Devil’s Canyon chips are barely better than the Core i7-4770K. HardOCP, PC Perspective and The Tech Report all maxed it out to 4.7 GHz, and even that figure did not come easily. The Tech Report even noticed the new Devil’s Canyon 4790K CPU required more voltage than the 4770K to achieve the same clock speed. A few extreme overclockers have managed better results (the record is 7 GHz), but only by disabling two cores and using liquid nitrogen for cooling. That, of course, isn’t practical for 99.9999999 percent of owners.

Even the new unlocked Pentium processor should be viewed with skepticism. Yes, it’s a $75 processor that some reviewers have overclocked as high as 4.5 GHz, but it’s also a dual-core without hyperthreading, and you’ll want a Z87 or Z97 motherboard to make the most of its potential. In short, you’ll be spending $175 to $200 on a processor and motherboard combination that stumbles whenever it’s asked to handle a workload with more than two threads – and many demanding applications, including the latest games, will ask for more than that.
  18. My friend had a 680, which is basically a 770 running at slightly lower clocks. He recently upgraded to a Sapphire 290 and couldn't be happier.
  19. 8x MSAA is bugged in Skyrim + you can't use MSAA with ENBs so you might as well disable it now.
  20. #1 R9 290 #2 R9 290 #3 R9 290 Yes, it's that good.
  21. Performance in Eyefinity is also significantly ahead of Nvidia Surround. With the 290/290X you do get unique features like TrueAudio & Mantle. Eyefinity is much more mature than Nvidia Surround as well: easier to use and offering more functionality. Check out Linus's review of the new Samsung & Asus 4K monitors; input lag is actually lower than on comparable 1440p monitors. http://youtu.be/X5YXWqhL9ik?t=7m21s http://www.hardocp.com/article/2014/01/26/xfx_r9_290x_double_dissipation_edition_crossfire_review/8#.U4Z3xoWaqFg
  22. At 4K R9 290X CrossfireX is significantly faster than 780 Ti SLI. In HardOCP's review the 290X Crossfire setup outperformed the 780 Ti SLI setup in all tests conducted. The 3GB frame buffer on the 780 Ti poses a very real issue at 4K as well, often causing in-game stuttering. http://www.hardocp.com/article/2014/01/26/xfx_r9_290x_double_dissipation_edition_crossfire_review/9#.U4Z1YYWaqFg
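
A rough FLOPS sketch for the Tegra X1 claim in post 2. The 256 Maxwell CUDA cores and ~1 GHz clock are assumptions based on commonly quoted Tegra X1 specs, not figures from the post itself, so treat the output as back-of-the-envelope only:

```python
# Back-of-the-envelope peak FLOPS for Tegra X1 (post 2).
# Assumptions: 256 Maxwell CUDA cores at ~1.0 GHz, one fused multiply-add
# (2 floating-point ops) per core per cycle, FP16 counted at twice the FP32 rate.

CUDA_CORES = 256       # assumed core count
CLOCK_GHZ = 1.0        # assumed peak clock; shipping products may run lower
OPS_PER_CYCLE = 2      # one FMA counts as 2 ops

fp32_gflops = CUDA_CORES * CLOCK_GHZ * OPS_PER_CYCLE   # ~512 GFLOPS FP32 peak
fp16_gflops = fp32_gflops * 2                          # ~1024 GFLOPS, the "1 TFlop" marketing figure

print(f"FP32 peak: {fp32_gflops:.0f} GFLOPS")
print(f"FP16 peak: {fp16_gflops:.0f} GFLOPS")
```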
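
And a minimal sketch of the frame-time arithmetic behind post 16. The post only gives 88, 47 and 45 FPS plus a constant 2 FPS gap, so the 90 FPS baseline for the higher pair is an assumption inferred from that gap:

```python
# Frame-rate drop and frame-time figures for the G-Sync latency argument (post 16).
# Assumption: the higher pair was roughly 90 -> 88 FPS; the 47 -> 45 FPS pair is
# stated in the post.

def drop_percent(baseline_fps, gsync_fps):
    """Relative frame-rate loss, e.g. (47 - 45) / 45 = ~4.4%."""
    return (baseline_fps - gsync_fps) / gsync_fps * 100

def frame_time_ms(fps):
    """Time spent on a single frame, in milliseconds."""
    return 1000.0 / fps

for baseline, gsync in [(90, 88), (47, 45)]:
    print(f"{baseline} -> {gsync} FPS: {drop_percent(baseline, gsync):.1f}% slower, "
          f"one full frame at {gsync} FPS is {frame_time_ms(gsync):.1f} ms")
```

The point of the comparison: a fixed 2 FPS gap costs a bigger percentage at 47 FPS than at ~90 FPS, which is the signature of a constant per-frame delay rather than a percentage overhead.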