
Hieb

Member
  • Posts: 1,285
  • Joined
  • Last visited

Everything posted by Hieb

  1. The CPU shouldn't cause blurriness... more likely that's caused by either anti-aliasing (many types blur the image to remove jagged edges), playing at a lower resolution than your monitor's native resolution, or using a VGA cable to connect your GPU to your monitor.
  2. AFAIK you are able to crossfire them but the R9 390 will run at the speed of the R9 290 and will effectively only have 4GB of VRAM... so it's only really like having 2x R9 290s
  3. Depends on the game. Generally the amount of VRAM found on cards is suited to their horsepower... there are some games at 1080P where 2GB isn't enough to run high quality textures, but the cards that come in 2GB flavours, such as the R9 380, aren't going to be playing those games at high/ultra settings anyway. I think 2GB is fine for 1080P, but if you want to run really high quality textures in GTA V or the like, then you might want a 3-4GB card.
  4. "ITX" branded cards aren't necessary... there are only a few extremely compact cases that require them. The thing with ITX-specific cards is that they have a single fan and a less robust heatsink, so they end up noisier and don't cool as effectively. With the 250D you can fit most full-size cards, so there's no point in getting an ITX card.
  5. I heard that 8GB DIMMs theoretically perform a bit better than a larger number of 4GB DIMMs... something to do with them being dual-rank (memory chips on both sides of the DIMM), which the memory controller can take advantage of?
  6. You can also manually uninstall the drivers, delete the AMD (or Nvidia, if applicable) folders from ProgramData and AppData, and then delete the AMD (or Nvidia) registry keys... I've had bad luck with DDU. Not sure if DDU or Nvidia is to blame, but twice when I've used it to remove the leftovers (even in safe mode) my system registry has been corrupted, preventing me from even using System Restore etc., so I had to do a full reinstall of everything. I haven't had this issue when I removed the stuff manually. Easy-to-follow guide here: http://www.overclock.net/t/1150443/how-to-remove-your-nvidia-gpu-drivers (rough sketch of the usual leftover locations below)
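    Just to illustrate, here's a quick Windows-only Python sketch that only *lists* the typical leftover folders and registry keys so you can check them yourself. The exact paths vary by vendor and driver version, so treat them as examples and go by the linked guide before actually deleting anything.

```python
# Windows-only sketch: just LISTS the usual leftover driver folders and
# registry keys so you can check them by hand. Paths vary with driver
# version and vendor; these are examples, not an authoritative list.
import os
import winreg

folders = [
    r"C:\ProgramData\NVIDIA Corporation",
    r"C:\AMD",
    os.path.expandvars(r"%LOCALAPPDATA%\NVIDIA"),
    os.path.expandvars(r"%LOCALAPPDATA%\AMD"),
    os.path.expandvars(r"%APPDATA%\NVIDIA"),
]

reg_keys = [
    r"SOFTWARE\NVIDIA Corporation",
    r"SOFTWARE\AMD",
    r"SOFTWARE\ATI Technologies",
]

for path in folders:
    print(("FOUND  " if os.path.isdir(path) else "absent ") + path)

for key in reg_keys:
    try:
        winreg.CloseKey(winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key))
        print("FOUND   HKLM\\" + key)
    except FileNotFoundError:
        print("absent  HKLM\\" + key)
```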
  7. The R9 390 gets roughly 10% better framerates, but Nvidia's cards have a few more features... Shadowplay is better than AMD's alternative and the Nvidia control panel offers a bit more... adjusting the number of pre-rendered frames, for example. Up to you, depending on whether you need the features or just want the highest fps.
  8. Oh, I see what you mean, yeah. I mean the better the optimization is, the lower the CPU usage should be; however, just because a game has high CPU usage doesn't mean it's poorly optimized.
  9. TLDR: Intel's CPUs do more work in each clock cycle than AMD's do, so an Intel CPU at around 2.8GHz can do what an AMD CPU does at 4GHz (toy example below). Also, AMD's FX "cores" share a lot of resources, whereas in other CPU architectures each core has its own resources and doesn't have to share... so an FX six-core has a lot less hardware per core than an Intel six-core would have. ps. sorry if anything didn't make sense, I'm tired :3
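    Toy illustration of the idea (the IPC numbers below are completely made up for illustration, not real benchmark figures):

```python
# Toy model: effective throughput ~ IPC x clock speed.
# The IPC values are invented purely for illustration, not measurements.

def instructions_per_second(ipc, clock_ghz):
    return ipc * clock_ghz * 1e9

intel = instructions_per_second(ipc=2.0, clock_ghz=2.8)   # hypothetical Intel core
amd_fx = instructions_per_second(ipc=1.4, clock_ghz=4.0)  # hypothetical FX core

print(f"Intel @ 2.8GHz: {intel:.2e} instructions/s")
print(f"FX    @ 4.0GHz: {amd_fx:.2e} instructions/s")

# With these made-up numbers the 2.8GHz core keeps pace with the 4GHz one,
# because it gets more done per clock cycle.
```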
  10. Less efficient code = more instructions needed to get the same stuff done = more CPU intensive o_0 (quick example below)
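    For instance, a contrived Python example where both functions return the same answer, but one executes far more instructions to get there:

```python
import timeit

# Same answer, very different amounts of work: the naive version recomputes
# the running total from scratch on every pass, so the CPU has to execute
# far more instructions to get the same stuff done.
data = list(range(500))

def efficient():
    return sum(data)

def inefficient():
    total = 0
    for i in range(len(data)):
        total = 0
        for j in range(i + 1):      # O(n^2) work instead of O(n)
            total += data[j]
    return total

assert efficient() == inefficient()
print("efficient:  ", timeit.timeit(efficient, number=100))
print("inefficient:", timeit.timeit(inefficient, number=100))
```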
  11. Hieb

    4K gaming

    Although playing at medium settings at 4K would be a little silly... what's the point of having all that pixel density if you use low-res shadows, textures, etc.?
  12. Well, right now their cards offer better value at many price points, so it's not too surprising... although you wouldn't know they're doing remotely well judging by review counts on sites like PCPartPicker or Newegg... a bunch of GTX 970s and such with dozens of reviews, while you're lucky to see 5 reviews on an R9 390 lol. Same with other tiers of cards.
  13. For every frame your video card renders, the CPU first has to do a bunch of processing for it... the CPU processes animations and where all the objects are, and then tells the GPU. The CPU also processes a lot of effects... it's heavily involved in shadows, lighting and particle effects (like explosions). So the more complex all of these things become (or the less optimized a game is), the more work the CPU has to do for each frame.

    And because the CPU does this for every frame, each frame adds more CPU load. So playing at 30 FPS will have roughly half the CPU requirements of playing at 60 FPS (although in some games physics calculations, AI, etc. are not tied to framerate, so the CPU usage doesn't scale perfectly linearly). So higher settings and higher framerates mean more things for the CPU to process.

    So if a certain CPU reaches its limit at, let's say, 45 FPS at high settings @ 1080P in a game with a GTX 760, then that same CPU will also reach its limit at 45 FPS at high settings @ 4K with a GTX 980 Ti.

    This is why I don't like people saying X CPU will bottleneck Y GPU, because it depends. An i5-6600K could bottleneck a GTX 980 Ti if someone is playing triple-A titles at 1080P looking for 144Hz gameplay... playing at ~144 FPS requires a lot of CPU in big titles. Meanwhile, if someone is playing at 4K, a GTX 980 Ti could probably be supported without any issues by a high end i3 (like an i3-4360) or low end i5 (like an i5-4440), because framerates at 4K are gonna be targeted around 60 FPS, which isn't too terribly demanding.

    This is also why you'll often see low-resolution benchmarks in CPU tests (although this is less common now, since ever since Tek Syndicate's video about the FX-8350 people bitch about "real world" benchmarks, which serve a different purpose... but I digress). As you lower the resolution, it becomes easier for the video card to pump out more frames, and this lets you find the point where the CPU spends all of its time processing frames (and physics, etc.) and can't handle any more... at that point, lowering the resolution further or adding a better graphics card won't improve the FPS, because the CPU has reached its limit.

    So long story short (toy example with made-up numbers at the end of this post):

    - The CPU processes things for each frame (basically prepares it for the GPU, telling it what to draw)
    - Particle effects, lighting effects, shadows (and ambient occlusion) typically have significant CPU involvement
    - Then the GPU builds the picture out of pixels
    - Higher settings = more CPU involvement per frame
    - More frames = the CPU has to prepare frames more often
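    A toy model of that framerate ceiling, with completely made-up frame times:

```python
# Toy model of the "CPU prepares every frame" idea. All of the frame times
# below are invented for illustration; real numbers depend on the game,
# the settings and the drivers.

def max_fps(cpu_ms_per_frame, gpu_ms_per_frame):
    # Whichever side takes longer per frame sets the framerate ceiling.
    return 1000.0 / max(cpu_ms_per_frame, gpu_ms_per_frame)

cpu_ms = 22.0  # hypothetical CPU time to prepare one frame at high settings

# GPU time per frame changes a lot with resolution; CPU time barely does.
scenarios = [
    ("1080P, GTX 760",   20.0),
    ("4K, GTX 760",      55.0),
    ("4K, GTX 980 Ti",   21.0),
    ("720P, GTX 980 Ti",  5.0),
]

for label, gpu_ms in scenarios:
    bound = "CPU" if cpu_ms > gpu_ms else "GPU"
    print(f"{label:>18}: ~{max_fps(cpu_ms, gpu_ms):.0f} FPS ({bound} bound)")

# Once the GPU is fast enough, the ceiling is ~45 FPS in every case,
# because the CPU hits its per-frame limit first.
```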
  14. As always, it would depend 100% on the specific game and settings you're playing at... are you looking for 1080P 144Hz gameplay? Then the i5-4590 will probably bottleneck in a lot of games, as at framerates that high there'd be tremendous CPU load. Looking for 4K 60Hz gameplay? Then that CPU could drive it without any issues.
  15. http://www.geforce.com/hardware/geforce-gtx-980-ti/buy-gpu You pick your product under Hardware and then click the Nvidia store option.
  16. Only in really CPU-hungry games at very high framerates... for example, a stock 4690K isn't gonna be able to play GTA V at a locked 144 FPS, so if you played GTA V at, like, medium settings at 1080P, then yeah, it would bottleneck... but in typical usage scenarios the 4690K won't bottleneck the GTX 980 Ti.
  17. I haven't used Opera in like 8 months, but when I last used it, it was missing a lot of the essential features and extension support that Chrome has.
  18. I have RIFT open, GeForce Experience, Steam, Glyph, Origin, and Chrome with half a dozen tabs (YouTube, Twitter, Facebook, LinusTech, my Twitch following page, and a sixth tab I use for reading about random stuff... right now it's on Reddit), and Chrome is only using about 600MB between all of those... total memory usage is at about 3.6GB.
  19. Well, I use a Gigabyte Z97X-SLI... it does the job fine, no real complaints... the only real downside is that it's not as wide as standard ATX boards (doesn't reach the third set of standoffs), so it flexes a lot when trying to connect or disconnect the 24-pin ATX power connector. Otherwise it's a great board considering it was quite cheap.
  20. Come on, this should be a no-brainer... are you a nerd or are you a man? Because fairs are for nerds. A new M+KB is a pretty fuckin big deal.
  21. As long as your motherboard has a PCIe slot, the GPU will work... worst case scenario, the board doesn't have full-bandwidth slots, but the card will still work, it just won't get maximum bandwidth. In your case I can't imagine this being an issue, as I'm pretty sure every Z77 board has an x16 slot.
  22. Just to clear the air on a couple of things:

    1) "Core" is a pretty ambiguous term, especially in recent years since AMD's Bulldozer family of architectures... afaik there's no strict requirement or classification for a core, it's just the accepted term for the pipeline in a CPU that executes instructions for a thread.

    2) Hyper-threading effectively presents one core as two logical cores. Obviously it doesn't double the core's processing power, but it allows the core to work on instructions from two threads within a single clock cycle... meaning that an i3 can in fact process 4 things simultaneously, just like an Athlon can.

    The i3 will outperform the Athlon in almost every game out there... games aren't a perfectly parallel workload, and if they were, the i3 and Athlon would perform pretty close to each other, I'd think. Games are a mixed, multi-threaded, bursty workload... the load isn't consistent across all the threads, and it's not a seamless stream of tasks, it's a bunch of tasks spread out where performance depends on how quickly they are completed. This is why single-threaded performance continues to make such a big impact, even though most games released over the past few years can use four threads or more (toy example below).
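    A toy model of that last point, with made-up numbers: a frame isn't finished until its busiest thread is finished, so faster individual cores win even at equal thread counts.

```python
# Toy model: a frame isn't done until its busiest thread is done, so
# per-thread (single-threaded) speed sets the floor. Numbers are made up.

def frame_time_ms(thread_work, per_thread_speed):
    # Each thread's time = its work divided by how fast one thread runs.
    return max(work / per_thread_speed for work in thread_work)

# A mixed game workload: one heavy main/render thread, several lighter ones.
work_per_frame = [10.0, 4.0, 3.0, 2.0]  # arbitrary units of work

fast_threads = 1.0  # hypothetical i3-style: 4 threads, each one fast
slow_threads = 0.7  # hypothetical Athlon-style: 4 threads, each one slower

print(f"fast threads: {frame_time_ms(work_per_frame, fast_threads):.1f} ms/frame")
print(f"slow threads: {frame_time_ms(work_per_frame, slow_threads):.1f} ms/frame")

# Both chips run all 4 threads at once, but the heavy thread finishes sooner
# on the faster cores, so the whole frame finishes sooner.
```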
  23. I haven't personally used the G3258, but from what I've gathered reading various articles and reviews and watching performance tests on YouTube... when the G3258 struggles, it does so because it can only run two threads, so it can't schedule things very efficiently. The G3258 usually doesn't just have a low average framerate (if at all); rather, it suffers from stuttering, freezes and periodic big fps drops. Overclocking doesn't fix those things as far as I know. Just to give you an idea, even when overclocked to like 4.5GHz the G3258 is usually outperformed by an i3.

    In any games that run fine on two-thread CPUs (such as World of Warcraft, older games, and I suspect most MOBAs would do well...), the G3258 will generally be more than fast enough to drive the GTX 670, and the overclock will certainly help with that... but in more demanding modern titles like GTA V and the like, the G3258 isn't gonna provide particularly smooth gameplay regardless of whether it's OC'd or not.
  24. Really? You doubt that? The R7 370 is a rebadge of the R7 265, which was a rebadge of the HD 7850. AMD likes to keep a tier of GPUs on the same silicon... for example, the R9 290 and 290X are both the Hawaii chip, the R9 270 and 270X are both the Pitcairn chip, etc. So if an R9 380X is really happening, it's either going to be a slightly modified version of the R9 380 (kind of like how the R9 270 and 270X were really the same card, just with the 270X OC'd) or it will be the Tonga XT chip found in the mobile R9 M295X.