
Hieb

Member
  • Posts: 1,285
  • Joined
  • Last visited

4 Followers

About Hieb

  • Birthday: Mar 25, 1995

Contact Methods

  • Steam: Hiebly
  • Origin: Hiebly
  • Battle.net: Blob#1515
  • PlayStation Network: hiebEZ
  • Twitter: http://twitter.com/hiebz0r

Profile Information

  • Gender: Male
  • Location: Burnaby, BC
  • Interests: Computer hardware, game design, web design, green energy, cosmology, biology, chemistry.
  • Member title: no time for civ 5 anymore cricri

System

  • CPU: Intel Core i5-4690K @ 3.80GHz
  • Motherboard: Gigabyte GA-Z97X-SLI
  • RAM: 8GB G.Skill Ripjaws X 1600MHz
  • GPU: Asus GTX 760 2GB @ 1150MHz / 6200MHz
  • Case: Fractal Design Core 3500 + Silverstone dust filters
  • Storage: 128GB A-Data XPG SX900 + Toshiba 1TB 7200RPM HDD
  • PSU: Corsair RM650 80+ Gold
  • Display(s): Dell 23" 1920x1080 60Hz 5ms
  • Cooling: be quiet! Pure Rock
  • Keyboard: Logitech K120
  • Mouse: Razer Naga
  • Operating System: Windows 8.1 Professional 64-bit

  1. The CPU should not cause blurriness... more likely that's caused by either anti-aliasing (many types blur the image to remove jagged edges), playing at a lower resolution than your monitor's native resolution, or using a VGA cable to connect your GPU to your monitor.
  2. AFAIK you are able to CrossFire them, but the R9 390 will run at the speed of the R9 290 and will effectively only have 4GB of VRAM... so it's really just like having 2x R9 290s.
  3. Depends on the game. Generally the amount of VRAM found on cards is suited to their horsepower... there are some games at 1080P where 2GB isn't enough to run high-quality textures, but the cards that come in 2GB flavours, such as the R9 380, generally aren't going to be playing those games at high/ultra settings anyway. I think 2GB is fine for 1080P, but if you want really high-quality textures in GTA V or games like that, then you might want a 3-4GB card.
  4. "ITX" branded cards aren't necessary... there are only a few extremely compact cases that require them. The thing with the ITX specific cards is they have single fan and a less robust heatsink, so they end up being noisy and not cooling as effectively. With the 250D you can fit most full size cards so there's no point in getting an ITX card.
  5. I've heard that 8GB DIMMs theoretically perform better than the same capacity spread across more 4GB DIMMs... something to do with having memory chips on both sides of the DIMM (dual-rank) or something?
  6. You can also manually uninstall the drivers, delete the AMD (or Nvidia, if applicable) folders from ProgramData and AppData, and then delete the AMD (or Nvidia) registry keys... I've had bad luck with DDU. Not sure if DDU or Nvidia is to blame, but twice when I've used it to remove the leftovers (even in safe mode) my system registry has been corrupted, preventing me from even using System Restore etc., so I had to do a full reinstall of everything. I haven't had this issue when removing the stuff manually. Easy-to-follow guide here: http://www.overclock.net/t/1150443/how-to-remove-your-nvidia-gpu-drivers (there's also a rough script sketch at the end of this list).
  7. The R9 390 gets roughly 10% better framerates, but Nvidia's cards have a few more features... Shadowplay is better than AMD's alternative, and the Nvidia control panel offers a bit more, like adjusting the number of pre-rendered frames. Up to you, depending on whether you need the features or just want the highest FPS.
  8. Oh, I see what you mean, yeah. The better the optimization, the lower the CPU usage should be; however, just because a game has high CPU usage doesn't mean it is poorly optimized.
  9. TL;DR: Intel's CPUs do more in each clock cycle than AMD's do, so an Intel CPU at around 2.8GHz can do what an AMD CPU does at 4GHz (there's a rough sketch of this after the list). Also, AMD's FX "cores" share a lot of resources, whereas in other CPU architectures each core has its own resources and doesn't have to share... so an FX six-core has a lot less hardware than an Intel six-core would have. P.S. Sorry if anything didn't make sense, I'm tired :3
  10. Less efficient code = more instructions needed to get stuff done = more CPU intensive o_0
  11. (from the "4k gaming" thread) Although playing with medium settings at 4K would be a little silly... what's the point of having all that pixel density if you use low-res shadows, textures, etc.?
  12. Well, right now their cards offer better value at many price points, so it's not too surprising... although you wouldn't know they're doing remotely well going by review counts on sites like PCPartPicker or Newegg... a bunch of GTX 970s and such with dozens of reviews, while you're lucky to see 5 reviews on an R9 390 lol. Same with other tiers of cards.
  13. For every frame your video card renders, the CPU first has to do a bunch of processing for it... the CPU processes animations and where all the objects are, and then tells the GPU. The CPU is also heavily involved in a lot of effects... shadows, lighting and particle effects (like explosions). So the more complex all of these things become (or the less optimized a game is), the more work the CPU has to do for each frame. And because the CPU does this for every frame, each frame adds more CPU load: playing at 30 FPS will have roughly half the CPU requirements of playing at 60 FPS (although in some games physics calculations, AI, etc. are not tied to framerate, so the CPU usage doesn't scale linearly there).

    So higher settings and higher framerates mean more things for the CPU to process. If a certain CPU reaches its limit at, let's say, 45 FPS at high settings @ 1080P in a game with a GTX 760, then that same CPU will also reach its limit at 45 FPS at high settings @ 4K with a GTX 980 Ti. This is why I don't like people saying X CPU will bottleneck Y GPU, because it depends. An i5-6600K could bottleneck a GTX 980 Ti if someone is playing triple-A titles at 1080P looking for 144Hz gameplay... playing at ~144 FPS requires a lot of CPU in big titles. Meanwhile, if someone is playing at 4K, a GTX 980 Ti could probably be supported without any issues by a high-end i3 (like an i3-4360) or a low-end i5 (like an i5-4440), because framerates at 4K are going to be targeted around 60 FPS, which isn't too terribly demanding.

    This is also why you'll often see low-resolution benchmarks in CPU tests (although this is less common now, since ever since Tek Syndicate's video about the FX-8350 people bitch about "real world" benchmarks, which serve a different purpose... but I digress). As you lower the resolution, it becomes easier for the video card to pump out more frames, which lets you find the point where the CPU spends all of its time processing frames (and physics, etc.) and can't handle any more... at that point, lowering the resolution further or adding a better graphics card won't improve the FPS, because the CPU has reached its limit.

    So long story short:
    - The CPU processes things for each frame (basically prepares it for the GPU, telling it what to draw)
    - Particle effects, lighting effects and shadows (plus ambient occlusion) typically have significant CPU involvement
    - Then the GPU builds the picture out of pixels

    Higher settings = more CPU involvement per frame. More frames = the CPU has to prepare frames more often. (There's a toy model of this sketched at the end of this list.)
  14. As always, it depends 100% on the specific game and settings you're playing at... are you looking for 1080P 144Hz gameplay? Then the i5-4590 will probably bottleneck in a lot of games, since at framerates that high there'd be tremendous CPU load. Looking at 4K 60Hz gameplay? Then that CPU could drive it without any issues.
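A rough sketch of the manual cleanup from post 6, as a Python script. This is a minimal sketch assuming the usual leftover folder locations; every path here is an assumption, so run it with dry_run=True first and check the printed list before deleting anything. I'd still delete the registry keys by hand in regedit rather than scripting that part.

# Hypothetical cleanup sketch for leftover AMD/Nvidia driver files.
# Paths are assumptions based on typical installs -- verify them on
# your own system before running with dry_run=False.
import os
import shutil

LEFTOVER_DIRS = [
    r"C:\ProgramData\NVIDIA Corporation",
    r"C:\ProgramData\AMD",
    os.path.expandvars(r"%LOCALAPPDATA%\NVIDIA"),
    os.path.expandvars(r"%LOCALAPPDATA%\AMD"),
    os.path.expandvars(r"%APPDATA%\NVIDIA"),
]

# Registry keys to remove by hand in regedit (safer than scripting):
#   HKEY_LOCAL_MACHINE\SOFTWARE\NVIDIA Corporation
#   HKEY_LOCAL_MACHINE\SOFTWARE\AMD

def clean_leftovers(dry_run: bool = True) -> None:
    """Delete leftover driver folders; only prints them when dry_run."""
    for path in LEFTOVER_DIRS:
        if not os.path.isdir(path):
            continue
        if dry_run:
            print(f"Would delete: {path}")
        else:
            print(f"Deleting: {path}")
            shutil.rmtree(path, ignore_errors=True)

if __name__ == "__main__":
    clean_leftovers(dry_run=True)  # flip to False once you've checked the list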
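To put rough numbers on posts 9 and 10: runtime is roughly instructions / (IPC x clock). A minimal Python sketch; the IPC values (1.4 vs 0.95) and the instruction count are made-up illustrations, not measurements of any real chip.

# Toy model: runtime = instructions / (IPC * clock).
# IPC values below are invented for illustration, not benchmarks.
def runtime_seconds(instructions: float, ipc: float, clock_ghz: float) -> float:
    return instructions / (ipc * clock_ghz * 1e9)

work = 50e9  # 50 billion instructions of "work" (arbitrary)

intel = runtime_seconds(work, ipc=1.4, clock_ghz=2.8)   # higher IPC, lower clock
amd = runtime_seconds(work, ipc=0.95, clock_ghz=4.0)    # lower IPC, higher clock

print(f"hypothetical Intel @ 2.8GHz: {intel:.2f}s")
print(f"hypothetical AMD  @ 4.0GHz: {amd:.2f}s")

# Post 10 in one line: less efficient code just raises `instructions`,
# so the same CPU takes proportionally longer per frame.

With these invented numbers, the 2.8GHz chip with higher IPC finishes slightly ahead of the 4GHz chip with lower IPC, which is the point of post 9.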
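And here's the toy model mentioned in post 13 (it covers post 14 too): the FPS you actually get is roughly min(CPU cap, GPU cap), where the CPU cap barely changes with resolution and the GPU cap falls as the pixel count rises. The "pixels per second" throughput numbers are invented purely for illustration.

# Toy bottleneck model: fps = min(cpu_fps_cap, gpu_fps_cap(resolution)).
# The CPU cap stays put as resolution changes; the GPU cap scales with pixels.
RESOLUTIONS = {"1080P": 1920 * 1080, "1440P": 2560 * 1440, "4K": 3840 * 2160}

def gpu_fps_cap(pixels_per_sec: float, resolution: str) -> float:
    """FPS the GPU could deliver if the CPU were infinitely fast."""
    return pixels_per_sec / RESOLUTIONS[resolution]

CPU_FPS_CAP = 45.0   # e.g. a CPU that maxes out at 45 FPS at these settings
GTX_760 = 100e6      # invented "pixels per second" throughput numbers
GTX_980_TI = 500e6

for name, gpu in [("GTX 760", GTX_760), ("GTX 980 Ti", GTX_980_TI)]:
    for res in RESOLUTIONS:
        fps = min(CPU_FPS_CAP, gpu_fps_cap(gpu, res))
        tag = "CPU-bound" if fps == CPU_FPS_CAP else "GPU-bound"
        print(f"{name} @ {res}: {fps:5.1f} FPS ({tag})")

Running it reproduces the example from post 13: the GTX 760 is CPU-bound at 45 FPS at 1080P, the 980 Ti is still CPU-bound at 45 FPS even at 4K, and the 760 at 4K is GPU-bound well below the CPU's limit, so a better CPU wouldn't help there.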