
GM_Peka

Member
  • Posts

    823
  • Joined

  • Last visited

Reputation Activity

  1. Like
    GM_Peka got a reaction from ADZ_123_!"£ in Can someone please! help me   
    Make sure the power switch is plugged in properly
  2. Like
    GM_Peka got a reaction from matjojo in Need help with GTX 760 sli   
    You are best off buying one second hand from eBay; the 760 seems to be discontinued now.
     
    You could try selling the 760 for about 100 quid and put it towards a GTX 970.
  3. Like
    GM_Peka got a reaction from Ashewolf in Need help with GTX 760 sli   
    You are best off buying one second hand from eBay; the 760 seems to be discontinued now.
     
    You could try selling the 760 for about 100 quid and put it towards a GTX 970.
  4. Like
    GM_Peka got a reaction from ZetZet in Best mouse for small-medium hand?   
    It's quite a different grip, but look at the Zowie EC1-A.
  5. Like
    GM_Peka got a reaction from minibois in Good CPU for CS:GO?   
    That's exactly it, I'll find the monitor that does it if you like.
     
    Edit: http://gaming.eizo.com/products/foris_fg2421/
     
    That's it.
  6. Like
    GM_Peka got a reaction from ryyo96 in Best Version?   
    MSI cards tend to be my go-to for cheap but quiet cards; their cooler design is top notch.
     
    There's also one coming from Asus which will probably be good, but it will likely cost a few percent more.
     
    https://www.techpowerup.com/img/15-06-02/41b.jpg
  7. Like
    GM_Peka got a reaction from railfan in LTT FANS! + [!x10^6]   
    Actually that's true, haha! Mechanical HDD noise pisses me off; the place I'm stationed at has a super low noise floor.
     
    I even have my HDD on a custom-made mount, which does help, though.
  8. Like
    GM_Peka reacted to Thorium19 in i7 4790k Average Temps   
    It's not the IHS itself that's the issue, it's the glue holding the heat spreader on. Because the IHS isn't soldered on, the glue creates a slight gap that stops the thermal paste from making full, proper contact with the plate. That's why people who delid their CPUs get better temps: removing the glue allows full and proper contact. It's just an Intel screwup when really they should have stuck with soldering them on.
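The gap described above can be put in rough numbers with the basic conduction formula R = t / (k·A). Here is a minimal sketch where the layer thicknesses, paste conductivity, die area, and power draw are all assumed round figures for illustration, not measurements of any real 4790K:

```python
# Illustrative only: heat conduction through a thermal-interface layer,
# using R = t / (k * A). All numbers below are assumptions.

def tim_delta_t(thickness_m, conductivity_w_mk, area_m2, power_w):
    """Temperature drop (deg C) across a TIM layer of given thickness."""
    resistance = thickness_m / (conductivity_w_mk * area_m2)  # K/W
    return power_w * resistance

die_area = 0.000177   # ~177 mm^2 die, assumed
power = 88.0          # ~88 W package power, assumed

# Thin, well-seated paste layer vs. the thicker gap left by the glue.
thin = tim_delta_t(30e-6, 8.0, die_area, power)    # 30 um paste layer
thick = tim_delta_t(120e-6, 8.0, die_area, power)  # 120 um gap

print(f"thin layer:  {thin:.1f} C drop")
print(f"thick layer: {thick:.1f} C drop")
```

Even with these made-up numbers, quadrupling the layer thickness quadruples the temperature drop across it, which is the mechanism delidding removes.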
  9. Like
    GM_Peka got a reaction from TylerLovekeeper in Help choosing a boss GPU?   
    If you're considering 970 sli and 4k, just get a 980 Ti.
  10. Like
    GM_Peka got a reaction from simson0606 in Help choosing a boss GPU?   
    If you're considering 970 sli and 4k, just get a 980 Ti.
  11. Like
    GM_Peka got a reaction from LeightonPC in Help choosing a boss GPU?   
    If you're considering 970 sli and 4k, just get a 980 Ti.
  12. Like
    GM_Peka got a reaction from ItsGinger35 in 60, 120 or 144hz?   
    Well for a start, the 64/128 tick rate you're referring to is what's happening on the server side; in simple terms, it's how often the server sends information to your client. Lower tick rate worsens hit registration accuracy and so on. Valve servers often run at 64, but decent servers run at 128. Even if the server is at 64 and your game is running at 256 FPS, the movement of enemy players etc. is interpolated so that you still see smooth movement.
     
    Anyway, I'm not really getting anywhere with my explanation, so I'll just say this: FPS is independent of server-side tick rate, and your client is in no way limited to 64 FPS or anything.
     
    The advantage 144hz gives is quite large: you get reduced input latency and increased smoothness, and aiming is a lot easier. 144hz gives you a decent advantage when it comes to pure DM power.
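The interpolation mentioned above can be sketched in a few lines. This is a toy model of client-side snapshot blending, with made-up timestamps and positions; it is not Source engine code:

```python
# Sketch of client-side entity interpolation: the server sends snapshots
# at its tick rate (e.g. 64 Hz) and the client renders at a higher FPS
# by blending between the two most recent snapshots.

def lerp(a, b, t):
    return a + (b - a) * t

def interpolate_position(snap_old, snap_new, render_time):
    """snap_* are (timestamp, x) pairs; render_time sits between them."""
    t0, x0 = snap_old
    t1, x1 = snap_new
    if t1 == t0:
        return x1
    t = (render_time - t0) / (t1 - t0)
    return lerp(x0, x1, t)

# 64-tick server: snapshots ~15.6 ms apart; at 256 FPS several rendered
# frames fall between any two snapshots, so movement still looks smooth.
older = (0.0000, 10.0)   # enemy at x = 10 at t = 0
newer = (0.0156, 12.0)   # enemy at x = 12 one tick later
for frame_time in (0.0039, 0.0078, 0.0117):
    print(round(interpolate_position(older, newer, frame_time), 2))
```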
  13. Like
    GM_Peka got a reaction from DotoreN in 60, 120 or 144hz?   
    Well basically, I'll give you the short version of IPS/PLS vs TN.
     
    TN:
    Faster pixel response time (less motion blur)
    Cheaper
    Lower input latency
    Colour shift/changes when looking at the monitor from an angle (e.g. 45 degrees)
    Affordable 144hz
     
    IPS/PLS:
    Better colour
    144hz IPS costs A LOT
     
    Now it's up to you to decide. The XL2411Z is TN, but it is one of the better TN panels out there, meaning it has better colour than other TNs and probably better colour than what you already have. Don't listen to people who say avoid TN at all costs. What it comes down to is what YOU want: do you want the IPS colour advantage over TN, or do you want silky smooth frames per second with TN? It's really up to you; we can only give you the facts. In the past I've used a BenQ XL2420T, which is a similar monitor, and I must admit it served me well.
     
    TNs are often preferred by serious gamers, because better colour doesn't improve your ability to play well, but lower input latency and a higher refresh rate do. However, if you don't want the high-refresh-rate advantage, there's nothing wrong with going with better colour at 60hz.
  14. Like
    GM_Peka got a reaction from -TesseracT- in 60, 120 or 144hz?   
    Well for a start, the 64/128 tick rate you're referring to is what's happening on the server side; in simple terms, it's how often the server sends information to your client. Lower tick rate worsens hit registration accuracy and so on. Valve servers often run at 64, but decent servers run at 128. Even if the server is at 64 and your game is running at 256 FPS, the movement of enemy players etc. is interpolated so that you still see smooth movement.
     
    Anyway, I'm not really getting anywhere with my explanation, so I'll just say this: FPS is independent of server-side tick rate, and your client is in no way limited to 64 FPS or anything.
     
    The advantage 144hz gives is quite large: you get reduced input latency and increased smoothness, and aiming is a lot easier. 144hz gives you a decent advantage when it comes to pure DM power.
  15. Like
    GM_Peka got a reaction from Pesmerga in Bad performance on r9290   
    Download MSI Afterburner and check what your GPU usage is at; if it's not in the high 90s, you could be looking at a CPU bottleneck.
  16. Like
    GM_Peka reacted to aom in This company claim its cable could charge your phone 2x faster ( SONICable)   
    It doesn't re-purpose the data lines. Most smartphones, if they detect active data lines, will assume the port is a computer and limit the draw to 500mA to prevent overdrawing, even if it is actually a special charging port or charger. However, if the data lines are shorted together (which I assume the switch is doing), the phone assumes it is a charger and raises the limit (usually to 2A).
     
    In both cases, the data lines do not transfer power.
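The detection logic described above can be sketched as a toy model. It is a simplification of dedicated-charging-port detection (USB Battery Charging 1.2 has chargers short D+ to D-); the function name and current limits here are illustrative, not from any real driver:

```python
# Toy model of dumb-charger detection: if D+ and D- are shorted
# together, the device treats the port as a dedicated charger and may
# draw more current; otherwise it sticks to the USB 2.0 default.
# Simplified sketch of USB BC 1.2 DCP detection, not real firmware.

USB_DEFAULT_MA = 500    # limit when enumerated as a data port
CHARGER_MA = 2000       # typical wall-charger limit, assumed

def allowed_current_ma(data_lines_shorted: bool) -> int:
    if data_lines_shorted:
        # D+/D- shorted: no host on the other end, safe to draw more.
        return CHARGER_MA
    # Distinct data lines: assume a computer port and stay conservative.
    return USB_DEFAULT_MA

print(allowed_current_ma(False))  # plugged into a PC
print(allowed_current_ma(True))   # cable/charger shorts D+/D-
```

In both branches the power itself still flows over VBUS and ground; the data lines only signal what kind of port is attached.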
  17. Like
    GM_Peka reacted to Random Access Soda in Metal Gear Solid V: Ground Zeroes first PC Gameplay   
    You are insulting me with NO provocation there
    http://www.computerandvideogames.com/364271/pc-piracy-rate-above-90-says-ubisoft/
    Do your damn research before accusing me of posting made-up statistics.
  18. Like
    GM_Peka reacted to Random Access Soda in Metal Gear Solid V: Ground Zeroes first PC Gameplay   
    Anything above 30 is placebo. Who are you to question the experts? It's not like they do these things without researching them first. 60 fps is a lie perpetuated by the GPU makers to sell GPUs more often.
  19. Like
    GM_Peka got a reaction from JAKEBAB in The Schiit Fulla Just Released - An $80 Headphone Amp + DAC   
    To be fair, I've actually had a deeper look, and they seem like pretty down-to-earth and decent people to me, now that I've looked past the (in my opinion) silly jokes.
  20. Like
    GM_Peka got a reaction from cesrai in The Schiit Fulla Just Released - An $80 Headphone Amp + DAC   
    Yeah, evidently enough, some people do find the branding funny; to each their own, I suppose.
     
    Also, no one's inclined to take me seriously, they can if they like, it's their call.
  21. Like
    GM_Peka got a reaction from Mechev in 2 psu reviews   
    *Cough* Seasonic master race *cough*.
     
    On a more serious note, I'd just get the one that makes you happier; they are both more than fine.
  22. Like
    GM_Peka got a reaction from McMurderMonkey in EK sure are "fans" of themselves!   
    Oh god, that would kill me; the noise my HDD gives off is enough to put me on edge. As soon as 4TB SSDs are available for 350 pounds or so, I'm going full SSD. (You may notice I'm only using roughly 2TB for storage ATM, so why would I want 4TB? Because storage needs are only going to increase as time goes on.)
  23. Like
    GM_Peka got a reaction from Darkr in Headset Choice   
    Get HD598s (or HD558), plus a modmic.
     
    G4ME ZEROs are overpriced IMO.
  24. Like
    GM_Peka got a reaction from Razarza in Planetside 2 Dev Confirms PS4 CPU Is A Bottleneck   
    Well, the thing with CPU bottlenecks is that you can reduce graphical fidelity and the frame rate will not change much. I think that's how it works, anyway.
     
    Edit: Although, I guess worse shading/shadows and less shrubbery will reduce CPU load.
  25. Like
    GM_Peka reacted to Builder in Why UNIX and Mac OS X Beat Windows   
    Edited 8/24/2014: Section headers added and essay cleaned up.
     
    BEFORE YOU READ: This thread is intended to serve primarily as a vehicle for a brief history of how we got from UNIX to OS X as it stands today, secondarily for discussing why I use OS X and by extension, why I use Macs, and as a tertiary intention, for the discussion of the pros and cons of various UNIX-like or UNIX derived operating systems, or other operating systems. It will also highlight Apple’s rich history of fostering open development processes and open source software projects. Yes, you could find this information elsewhere, but it’s in front of you right now, carefully organized for your reading pleasure. If I correct some misconceptions along the way then so be it.
     
    I'm not perfect, and a lot of this information is shrouded in myth and legend. TL;DR: if I get something wrong, please don't hesitate to say so and I will correct the OP. One cannot possibly recount the entire history of UNIX in a single forum post, so if you have any other questions, again, don't hesitate to ask them. I will do my best to answer them.
     
    Sources:
    Mac OS X Internals, A Systems Approach (osxbook.com)
    Wikipedia (wikipedia.org)
    Knowledge accumulated over the years that I can’t really cite because I forgot where it came from but it’s been verified to be true (myself)
     
    Sit back, relax, grab a coffee, and ponder life's big questions as I take you on a guided tour through the history of the most important software ever wrought of humble code by all humanity.
     
    "Unix (all-caps UNIX for the trademark) is a multitasking, multiuser computer operating system that exists in many variants. The original Unix was developed at AT&T's Bell Labs research center by Ken Thompson, Dennis Ritchie, and others." (http://en.wikipedia.org/wiki/Unix)
     
    Everything is a file: UNIX is created.
    UNIX was an advanced multi-user operating system whose first release occurred in 1973. It was originally intended to be a "programmer's workbench" for writing code that was compatible across many platforms. Designed primarily by the now-famous Ken Thompson and Dennis Ritchie, UNIX had its roots in the MULTICS project (its successor soon to be dubbed UNICS, as a joke about it being a castrated version of MULTICS after Bell Labs pulled out), started at MIT in conjunction with AT&T Bell Labs and General Electric in the mid 1960s. At Bell Labs, UNICS was rewritten in the C programming language, which had been designed specifically for the task. The idea behind UNIX was to develop an operating system that used a minimal amount of machine language (assembly) code, so that porting it to a new platform would require minimal effort, often only porting the small assembly base and writing a C compiler. This may sound elementary now, but at the time it was revolutionary to create a system that could be moved to many other computing platforms so easily; traditional efforts were often assembly language only. At this point in time, UNIX was known as Research UNIX and was completely proprietary, with source licenses available at nominal cost.
     
    One of these source licenses was issued to the University of California, Berkeley, where a graduate student named Bill Joy began compiling his own version of UNIX for distribution. The first release of the Berkeley Software Distribution (BSD) occurred on March 9, 1978, and in addition to UNIX's core codebase it also contained a Pascal compiler and the line editor ex, of Joy's own design. This release was known as 1BSD. 2BSD would be released a year later in 1979, adding two programs that remain at the core of programming on UNIX systems to this day: the famous (or infamous, depending on who you ask) vi text editor and the C shell.
     
    The Free Software Revolution and the Birth of GNU/Linux
    Though now in widespread usage (and long before Windows took over), BSD was subject to licensing fees from AT&T, which had grown quite large. In response to the growing price of a BSD license, a man by the name of Richard Stallman founded the GNU Project to create a free software version of UNIX that would be both free as in beer and free as in speech. The project also authored a new software license known as the GNU GPL, which introduced the concept of "copyleft" (a play on "copyright") to the world. Copyleft meant that modified versions of the source code had to be distributed under the same license, and it served in the GNU license to ensure that a company could not leverage GPL-licensed software for its own purposes without giving its modifications back to the community. It restricted the distribution of binary-only releases, that is, code which can be executed but not read by humans. The GPL is actually a lot more complicated than this, and if you are interested in software licensing and the GPL in particular, I urge you to read http://en.wikipedia.org/wiki/GNU_General_Public_License.
     
    As part of Stallman's project to create the ultimate free software distribution of a UNIX-like system, an effort was launched to create a revolutionary new microkernel on which to base the system. This project is known as GNU Hurd, and to this day it has not produced a kernel suitable for consumer or production usage. Never let it be said that Stallman is a quitter, though, as Hurd is still in active development decades after its inception.
     
    Obviously, those impatiently awaiting Stallman's halo UNIX derivative grew more impatient. Seeing the problems with the Hurd, and fearing (as is now obvious) that it would never be completed, a brilliant systems programmer by the name of Linus Torvalds decided to come up with his own solution: a GNU GPL-licensed kernel compatible with the rest of the GNU software library (which had been completed in spite of Hurd's perpetual state of almost-doneness, a state it has occupied since Stallman said it would be done by fall 1990), now known as the Linux kernel. Suffice it to say that the Linux kernel took over, despite being a monolithic kernel while Hurd was intended to be a microkernel.
     
    Chart of UNIX, UNIX-derived, and UNIX-like Operating Systems
    At this point you might be wondering, what the fuck does this have to do with OS X? Don’t worry, we’re getting there. The history of free software to this point is important to understand, because it was pivotal in a 1985 startup’s history and in general helps in understanding what happened in the next 12 years before OS X was released. If you’re looking for a tree of all of this stuff, brace yourself:

     
    Five years ahead of its time: The 3M challenge and the founding of NeXT.
    Ousted from his position at Apple after working the Macintosh and Lisa teams into the ground, in 1985 Jobs set out to create the perfect "3M" computer at his new company, NeXT Inc.: at least a megabyte of RAM, a megapixel display, and a processor that could execute one million instructions per second (MIPS). His goal was to create the ultimate research computer for universities and research centers. When asked how he felt about the NeXT computer arriving months late, Steve Jobs (in typical Jobsian fashion) replied, "Months late? This computer is five years ahead of its time!" And on this occasion, most people think he was right. Tim Berners-Lee would later use a NeXT "Cube" (named affectionately for its cube-like 1'x1'x1' chassis designed by Frogdesign, of Apple IIc fame) and its corresponding NeXTSTEP operating system to create the World Wide Web, attributing the speed at which he did so to the operating system's object-oriented capabilities.
     
    While NeXT’s early hardware failed to amaze, NEXTSTEP was a fully object-oriented operating system, a product that had previously been the subject of many years’ research at Apple.
     
    NEXTSTEP and the Object-Oriented Craze
    NEXTSTEP was built around a hybrid kernel combining the Mach 2.0 microkernel with the 4.3BSD (sound familiar?) UNIX environment. NEXTSTEP introduced a large array of revolutionary user-interaction paradigms, including ubiquitous drag and drop, and an object-oriented device driver framework known as Driver Kit. NEXTSTEP used Objective-C as its core programming language. Objective-C is an object-oriented C variant with syntax inspired by Smalltalk, to which NeXT purchased the comprehensive rights. There is a joke among Objective-C programmers that Objective-C contains the syntactical efficiency of C with the compiled speed of Smalltalk, i.e. taking the worst points of two languages instead of the best. It was in NEXTSTEP release 1.0 that the first iteration of Interface Builder appeared, a feature that is now part of Xcode in OS X.
     
    Despite NEXTSTEP's resounding success and wide appreciation as an object-oriented programming environment, it was unable to sustain the company's lackluster hardware business, and so in 1993 the hardware business was dropped in order to focus on developing NEXTSTEP for the x86 platform.
     
    OpenStep and OPENSTEP
    NeXT developed a partnership with Sun Microsystems to release the specifications for OpenStep, a completely open platform containing several frameworks and APIs that could be used to create one's own implementation of an object-oriented operating system on any underlying core system. The specification was implemented on HP-UX, SunOS, and yes, even Windows NT. NeXT released its own OpenStep-compliant operating system, known as OPENSTEP, in 1996; OPENSTEP was essentially a version of NEXTSTEP fully compliant with the OpenStep standard. (Speaking of that bad product names thread, whoever thought it was a good idea to use all of these similar names and capital letters in NeXT's product lineup should be dragged out into the street and shot. If these names confuse you, you're not the only one.)
     
    Meanwhile at Apple, Microsoft was beating them to a pulp. Once Microsoft had established complete operating system dominance, it became clear that Apple's floundering strategy of releasing half-completed products that barely worked and reeked of desperation (such as a licensing program allowing their System series of OSes to be installed on any hardware, much in the same fashion as Windows, though without the critical advantage of actually being any good) would not work.
     
    WHERE IS OS X?
    Whew. 1500 words in and OS X hasn't even been created yet. I promise, it's coming. We just have one more new kernel to cover before we get to the really good stuff.
     
    Apple System OS and the Search for a New Beginning
    The Apple System series of operating systems was looking long in the tooth. It had been the default method of interacting with a Macintosh since 1984, and Windows was dominating the market, releasing improved technologies year after year that further extended its lead. Apple began a frantic search for their next-generation operating system, at first partnering with the Open Software Foundation to port Linux (see, I told you it was important!) to the Power Macintosh. The result was known as MkLinux, running on the OSF's Mach implementation, and was completely open source. It featured the Linux kernel running as a single process on top of Mach. The core system, osfmk, would eventually become part of the OS X kernel base. Despite yielding osfmk, MkLinux did not provide what Apple wished for in their next-generation OS.
     
    Apple announced the purchase of NeXT Inc. in late 1996. What would have been called System 7.6 was released afterwards bearing the new moniker "Mac OS." Showing how the tables have since turned, Apple focused on marketing Mac OS 7.6 as being very compatible with Windows 95 and the internet. Mac OS 8 would bring the Copland API to the Mac, which later evolved into the (now defunct) Carbon API. It also introduced the brand-new Platinum user interface and a brilliant little search engine for local drives, network servers, and the internet, known as Sherlock. Mac OS 8.1 sought to apologize to Apple's power users for OS 8's ruinous changes to the classic Mac user interface by adding back in a start button and allowing the user to choose whether to boot to the Metro UI or the desktop. (…) In all seriousness, 8.1 introduced such important under-the-hood advances as the HFS+ file system and the Macintosh Runtime for Java, as well as bringing in Internet Explorer and Netscape Navigator for browsing the web. Here end the changes to the original System series that were eventually carried into OS X.
     
    Power Meets Usability: The development of Mac OS X begins.
    Apple's acquisition of NeXT was looking pretty bad at this point: they had spent a whole paragraph improving their old codebase and doing nothing whatsoever with NeXT's vast software portfolio. Just kidding. You knew I was kidding, right?
     
    After buying NeXT, Apple began creating an entirely new operating system based on OPENSTEP's codebase. The first step was creating an object-oriented programming interface as awesome as OpenStep's, now known as the Cocoa API. The first version of what would become OS X was known as Rhapsody, an effort to combine the intuitiveness and usability of Mac OS with the true-grit power-user features and standards established by NeXT and OPENSTEP. It was completed in September 1997, comprising OPENSTEP, 4.4BSD, and Mac OS, of course. Already we have a lot of open source software coming in, with 4.4BSD being totally open source and OPENSTEP's OpenStep collection also being open source. In Rhapsody 2, more code was added, this time from the FreeBSD, NetBSD, and Mach 3.0 projects. From there, Rhapsody 2 was forked into OS X Developer Preview 1, OS X Server 1.0, and Darwin 0.1. In DP2, the OpenStep-derived Cocoa API was implemented. DP3 introduced Aqua, the glowingly blue user interface that has been a hallmark of OS X design (and dare I say an inspiration for Aero?) ever since. The OS X Public Beta followed the incremental DP4, featuring the open source Darwin core, the Dock, Aqua, and a new version of Sherlock. In 2002, Apple and the Internet Systems Consortium founded the OpenDarwin project in an attempt to build a vibrant open source community around OS X's open source core.
     
    And finally, on March 24, 2001, Apple released the first version of Mac OS X, 10.0 “Cheetah.” Drawing on nearly 30 years of UNIX heritage and open source community-based development, combined with the power and efficiency of the OPENSTEP environment, and the usability and friendliness of Mac OS, Mac OS X is truly “the world’s most advanced operating system.”