Taja

Member
  • Content Count: 872
  • Joined
  • Last visited

Reputation Activity

  1. Agree
    Taja reacted to Techicolors in Good games with a decent singleplayer?   
    The combat is decent, but after beating the game twice, it's not that great. Not many tactics are involved, and you're kinda encouraged to use repetitive moves. I wouldn't call the combat rich, but it is enjoyable. 
  2. Agree
    Taja got a reaction from ARikozuM in Settle This Now: i5 or i7, Gaming Only   
    Don't believe people here; check comparative benchmarks and choose for yourself.
    Most people here posted the Digital Foundry video in the past to justify the i7; for me, the video just reinforced that the i5 IS THE WAY TO GO.
    But don't trust me: check multiple videos and reviews, and see what fits you best.
  3. Agree
    Taja reacted to Mira Yurizaki in 6600K + EVGA 1070 ACX 3.0 FTW Bottleneck?   
    I'm just going to say this: an i5-6600K is still able to get within 98% of a GTX 1080's performance compared to, say, an i7-6800K or an i7-5775C. In other words, a bottleneck is inevitable by virtue of not getting the best CPU.
     
    There's no reason to lose sleep over this anyway. A bottleneck doesn't necessarily mean a performance cap. Going from my i5-4670K to my i7-6700, I got lower scores in some benchmarks, but when I upgraded the video card, it was still a huge upgrade.
  4. Like
    Taja got a reaction from jordandhs in Will I have bottlenecking issues?   
    Well, the higher the resolution, the SMALLER the bottleneck caused by the processor. So I THINK you will be fine, but better check benchmarks.
  5. Agree
    Taja reacted to typographie in Will a bad GPU bottleneck a high end CPU?   
    A "GPU bottleneck" is what you have if you have successfully avoided a platform (CPU/memory) bottleneck. If your framerate is uncapped, you have either one or the other. To put this another way, basically what you want is a CPU fast enough to stay out of your GPU's way.
     
    GPU-bound performance is usually preferable because you can then adjust your in-game graphics options to tweak performance. In a game that's truly CPU-bound, changing resolution or graphics options usually does not affect your framerate much. That's actually one of the ways to diagnose a CPU bottleneck: set your resolution to 800x600 and see if performance improves. If not, it's your CPU.
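The diagnostic described above can be sketched with a toy frame-time model (the per-frame costs below are hypothetical numbers, not measurements): the slower of the CPU and GPU sets the framerate, and dropping resolution shrinks only the GPU's share of the work.

```python
# Toy model of the 800x600 test: a frame is delivered at the pace of the
# slower component, so framerate = 1000 / max(cpu_ms, gpu_ms).
def fps(cpu_ms, gpu_ms):
    """Framerate (frames/sec) given per-frame CPU and GPU costs in ms."""
    return 1000.0 / max(cpu_ms, gpu_ms)

# Hypothetical costs: lowering resolution shrinks gpu_ms but not cpu_ms.
native_res = fps(cpu_ms=8.0, gpu_ms=14.0)  # GPU-bound at native resolution
low_res    = fps(cpu_ms=8.0, gpu_ms=4.0)   # same CPU cost at 800x600

print(f"native: {native_res:.0f} FPS -> 800x600: {low_res:.0f} FPS")
# A big jump means the GPU was the bottleneck; if the two numbers had been
# roughly equal, the CPU would be the limit.
```

If lowering the resolution barely moves the number, `cpu_ms` is the larger term and you are CPU-bound, exactly the test described in the post.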
  6. Agree
    Taja reacted to DocSwag in NVIDIA Titan XP Reviews and Benchmarks   
    Judging by what you've posted so far... it just doesn't look worth it to me. It's generally only 20% or so faster than the AMP 1080, so if you get two of those in SLI it'll cost $1,300-1,400, but you'll also be getting much higher performance in most games. Not to mention that in games where SLI doesn't scale horribly, GTX 1070 SLI, which costs around $800, will probably come extremely close in performance.
     
    It's still my opinion that the Titan XP is a stupid card unless you're using INT8... Anyone who wants that kind of performance should just wait for the 1080 Ti.
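The cost argument above can be put into a quick back-of-the-envelope script. The ~20% uplift and the SLI price range come from the post; the single-card prices and the relative-performance numbers for the SLI setups are hypothetical assumptions, not benchmark results.

```python
# Rough perf-per-dollar comparison using the post's figures (hypothetical).
cards = {
    "Titan XP": {"price": 1200, "perf": 1.20},  # ~20% over a single 1080
    "AMP 1080": {"price": 680,  "perf": 1.00},  # baseline card (assumed price)
    "1080 SLI": {"price": 1360, "perf": 1.70},  # assumes ~70% SLI scaling
    "1070 SLI": {"price": 800,  "perf": 1.55},  # "comes extremely close"
}

# Rank by relative performance per dollar, best value first.
ranked = sorted(cards, key=lambda n: cards[n]["perf"] / cards[n]["price"],
                reverse=True)
for name in ranked:
    c = cards[name]
    print(f"{name}: {c['perf'] / c['price'] * 1000:.2f} perf per $1000")
```

Under these assumptions the Titan XP lands at the bottom of the value ranking, which is the post's point.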
  7. Agree
    Taja reacted to failblox in Are SSDs really worth it at today's prices?   
    It's extremely useful for video editing. If you've ever tried editing footage stored on a hard drive versus footage stored on an SSD, the difference is immediately noticeable.
     
    I agree with you that it's pointless for general media like movies and music though.
  8. Agree
    Taja reacted to BolginNT in Is my 970 running too hot   
    Not sure what all the fuss is about. Sure, 82°C doesn't look nice, but keep in mind that the components used on any GPU are built to operate at these temperatures for at least the length of the warranty, which in my country is usually 3 years.
    Considering how freaking toasty some AMD GPUs are (according to Computerbase.de, the R9 390X runs at 92°C under load) and yet they manage to survive (in most cases :D), I wouldn't consider 82°C as bad as some people make it out to be. 
  9. Agree
    Taja reacted to LookTopHats in Is the "K" worth it   
    I see a lot of people wanting to use an i7 when their rigs are purely for gaming. While it will have an effect on FPS (depending on the game), it will only be a very small fraction, almost always below a 10 FPS increase. That really isn't worth the extra cost of an i7 when the machine is used only for gaming. I would suggest, along with the others, that you spend your money on an i5 of the same generation, and perhaps buy an unlocked one if you wish to overclock in the future. Even the unlocked i5 is likely cheaper than the locked i7.
  10. Agree
    Taja got a reaction from beingGamer in Why support Killary Hilton?   
    There are only two real choices. Like here in Brazil, you can nullify your vote or vote for one of the symbolic candidates (the ones that have NO chance to win).
    But bottom line, you know only 2 candidates have a chance to win, and some people like to choose the lesser evil.
  11. Agree
    Taja got a reaction from PlayStation 2 in Why support Killary Hilton?   
    There are only two real choices. Like here in Brazil, you can nullify your vote or vote for one of the symbolic candidates (the ones that have NO chance to win).
    But bottom line, you know only 2 candidates have a chance to win, and some people like to choose the lesser evil.
  12. Informative
    Taja reacted to daniellearmouth in Can someone explain Intel's "Tick Tock" cycle and their new 3-way cycle?   
    The Tick-Tock model was adopted by Intel in 2007 as a means of roadmapping processor microarchitecture advancement by creating a brand new microarchitecture (the 'Tock') followed by shrinking it down to a smaller process (the 'Tick').
     
    When the Nehalem microarchitecture released in 2008 (bringing about the i3, i5 and i7 series of CPUs), that was Intel's 'tock' - their new microarchitecture. Intel then made improvements and shrunk down the process from 45nm (nanometres) to 32nm to create Westmere - the 'tick' - in 2010.
    Sandy Bridge released in 2011 as a new microarchitecture (the tock), followed by the 22nm Ivy Bridge (the tick) in 2012.
     
    Likewise, Haswell released in 2013, but in this case Haswell was also refreshed, giving us the 4790 and 4690 as replacements for the 4770 and 4670 respectively. This was due to complaints of overheating compared to Sandy Bridge CPUs, so Intel changed the thermal interface material to let the heatspreader dissipate heat from the die more effectively.
    This refresh was followed by Broadwell, with much confusion: Broadwell was very difficult to acquire, as barely anyone sold them (as far as I could tell, at least), and that was due to the process it was on. Haswell was on a 22nm process, whilst Broadwell was on 14nm. Shrinking the process is a sure-fire way to make a chip harder to cool and harder to make reliable, and the more you push against physics, the harder physics pushes back.
     
    On to 2015 and the release of Skylake. Skylake is currently the top of the line for consumers, but soon an issue arose. Skylake was slated to get a process reduction in the form of Cannonlake, but because of how hard it has become for Intel to keep up with Moore's Law (the doubling of transistor counts with each new generation), Intel decided to modify its Tick-Tock model to add a third step.
     
    The tick and the tock remain largely unchanged. Skylake released last year on a 14nm process, and Cannonlake will follow on a 10nm process, with a projected release in the second half of 2017. So what about the middle? That's where the new step comes in.
    Similar to how Haswell was handled, Skylake will be refreshed (or optimised, if you will) in the form of Kaby Lake - an improved Skylake with newer CPUs and newer features. And this is where the new Tick-Tock model comes in.
     
    Whilst Intel before had Tick-Tock, Intel now have Process -> Architecture -> Optimisation. And 'Process -> Architecture -> Optimisation' pretty much sums it up, really.
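The cadence walked through above can be summarised as data. The years and process nodes come from the post where it gives them; the Broadwell and Kaby Lake years are my own additions, and the Cannonlake date is the projection mentioned above.

```python
# Intel's cadence as described above: "tick" = process shrink,
# "tock" = new architecture, extended to a three-step
# process -> architecture -> optimisation cycle from Skylake onward.
cadence = [
    ("Nehalem",      2008, "45nm", "tock (new architecture)"),
    ("Westmere",     2010, "32nm", "tick (process shrink)"),
    ("Sandy Bridge", 2011, "32nm", "tock (new architecture)"),
    ("Ivy Bridge",   2012, "22nm", "tick (process shrink)"),
    ("Haswell",      2013, "22nm", "tock (new architecture)"),
    ("Broadwell",    2014, "14nm", "tick (process shrink)"),
    ("Skylake",      2015, "14nm", "architecture"),
    ("Kaby Lake",    2016, "14nm", "optimisation"),
    ("Cannonlake",   2017, "10nm", "process (projected)"),
]

for name, year, node, phase in cadence:
    print(f"{year}  {name:<12} {node:>5}  {phase}")
```

Reading down the last column shows the two-step tick/tock alternation turning into the three-step cycle at Skylake.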
  13. Agree
    Taja reacted to Misanthrope in i5 Bottleneck   
    Yeah I saw, editing.
     
    The reality is that unless you specifically want a very high refresh rate, this just won't be a problem, for the reasons I explained: virtually all games, even DX12 and Vulkan games with async compute, with one exception (Ashes of the Singularity), are primarily GPU-bound. Some are more CPU-dependent, like GTA V, but I'd say 95% of all titles out there, and a good 80% of announced games, are DX11 and GPU-bound.
     
    GPUs, even the top two GPUs out right now (the 1070 and 1080), are just not powerful enough to completely max out 1440p:
     
    http://www.tomshardware.com/reviews/nvidia-geforce-gtx-1070-8gb-pascal-performance,4585-5.html
     
    See the minimums and averages on them? They break 60 FPS average but are nowhere near high refresh rates, which is the point at which the GPU is sometimes held back by the CPU, since it has to wait for it.
     
    This just doesn't happen often enough to justify a CPU that will give you 1 to 2 extra FPS in most games, and 10 to 20 only far beyond 60 FPS in the worst case. And the worst case scenario is 1080p at 120 FPS or better.

    As time goes on, your GPU will move down those graphs, not up, meaning it will still be struggling with graphically intensive games before you see any real CPU-related gains in performance. Please don't listen to people like svetlio, who are basing their arguments on conjecture about future games: nothing concrete, and nothing that can happen quickly enough to matter, which is 3 years or so before the GPU is fairly midrange or lower in performance anyway and thus no longer capable.
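The high-refresh point above comes down to simple frame-time arithmetic (the per-frame costs below are hypothetical numbers): the CPU's per-frame cost sets a framerate ceiling, but that ceiling only matters once the GPU can push past it.

```python
# Hypothetical per-frame costs at 1440p, in milliseconds.
cpu_ms, gpu_ms = 7.0, 16.0

cpu_cap = 1000.0 / cpu_ms  # ~143 FPS ceiling imposed by the CPU
gpu_cap = 1000.0 / gpu_ms  # ~62 FPS the GPU can actually deliver

delivered = min(cpu_cap, gpu_cap)
limiter = "GPU" if gpu_cap < cpu_cap else "CPU"
print(f"delivered: {delivered:.1f} FPS, limited by the {limiter}")
# Only at high refresh targets, i.e. once a faster GPU (or lower settings)
# pushes gpu_cap past ~143 FPS, would this CPU start holding the GPU back.
```

At 60-75 Hz targets the GPU term dominates, which is why the post calls a faster CPU wasted money for most setups.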
  14. Like
    Taja got a reaction from Totalschaden1997 in Annoying pci express x16 lock   
    Push it, and while pressing it, pull out the GPU.
  15. Agree
    Taja got a reaction from BoldarBlood in Annoying pci express x16 lock   
    Push it, and while pressing it, pull out the GPU.
  16. Informative
    Taja got a reaction from TheRandomness in Annoying pci express x16 lock   
    Push it, and while pressing it, pull out the GPU.