
James09

Member
  • Posts

    91
  • Joined

  • Last visited

Reputation Activity

  1. Agree
    James09 reacted to Marine_Boy in 750Ti - one 4K screen + 2 1080p screens. Will it work?   
    You correctly pointed out that not needing external power matters in this case. I will look into a GeForce 1050. Don't know why I didn't consider that one in the beginning. Thanks!
  2. Informative
    James09 got a reaction from Marine_Boy in 750Ti - one 4K screen + 2 1080p screens. Will it work?   
    Have a look at the manufacturer's website for the specific card you want to purchase. They will usually list a maximum number of displays. As long as it's 3, it won't matter which 3 of the ports are populated. If you just need something with ports, you might consider something in a similar price range to the 750 Ti but current gen, such as a 1030 / 1050 or something from the red team. Most of those don't even need a power input, which I'm guessing is one of your requirements. Getting something current gen will usually result in more up-to-date connectivity options. "Usually"
     
    Just my 2c
  3. Agree
    James09 reacted to Fardin in Psu just... Died, tips for new psu?   
    What's your budget? Get anything from the top tier lists. Corsair CXM units are decent if budget is limited.
     
  4. Agree
    James09 reacted to faziten in Question about Ryzen and Coffeelake IPC   
    Let's break this down:
     
    1) CPU instructions are different on AMD and Intel, even those with identical names (AVX, AES, etc). They are different implementations of the same concept and not directly comparable, just like the "thrust" of electric and fuel-engine cars.
    2) How can you compare IPC between different CPUs with different instruction-set implementations? Let's say AMD Ryzen does 10 AVX instructions per cycle (it doesn't; this is just a simple way to put it), while Intel Covfefe Lake does 11 AVX instructions per cycle. Does that mean Intel is better? No. Different instruction sets. There is no guideline for what an AVX instruction should do, nor how long it should take. Plus the impact on every architecture is very different (different amounts of cache, different cache speeds). So IPC is not a way to compare CPUs. It's obscure, and most data is lost to overgeneralizing situations that are unknown to us (implementation secrets) and to the inherent complexity of the matter.
    Any real-world application involves a mix of instructions, and every instruction needs a different "size" in the execute module. So you end up with a bunch of statistical data that compares things that are not comparable in the first place. Worthless data.
    3) Ryzen has architectural flaws. They are evident when you go up to Threadripper (latency between distant cores in separate complexes communicating through Infinity Fabric).
    Intel has them too: Haswell had an AVX bug that made it draw much more current than it should, and it had the TSX instruction set disabled due to random bugs. Skylake and Kaby Lake have a Hyper-Threading bug that shows up under certain loads and is particularly problematic for servers... etc. There is no bug-free hardware or software.
    Ryzen is stronger than Intel in terms of scalability. It's a scalable architecture, similar to what Nvidia did with Pascal: it lets them add more cores without making a lot of changes. Of course it has its own limits, tightly linked to the Infinity Fabric way of doing things. Intel has a simpler core design that allows them to achieve faster clocks. All in all, the end user prefers faster clock speeds whenever possible, due to the lack of software support for multi-core monsters. Some games are made to run on 32-core monsters, but the vast majority are made for a few cores. (It makes programming even more difficult when you try to go past a handful of threads.) So "weak" and "strong" are absurdly dumbed-down terms for talking about architectural perks and flaws.
    4) Scores. Use them as guidelines, not as absolute truth. Today, for pure gaming, an Intel quad-core or hexa-core overclocked up to 5.0GHz is the indisputable best way to go. That has nothing to do with hardware; it's purely the software's fault. Most games have design patterns that tie some game-related features (shadows, physics, input, etc) heavily to one thread. Even if the OS tries to spread the load, it simply can't. Why is it better, you may ask? Well, games need at least 2 years to be developed. Ryzen came out this year, so there is no game today that was built with Ryzen in mind. Plus Zen hits a huge brick wall after 4.0GHz due to core stability, and Intel hits the same brick wall after 5.0GHz due to heat.
     
    The key element to understanding this versus is understanding workload. If you deal with games, the "no compromises" CPU is Intel. If you deal with a budget, your best bet is Zen. You won't be winning any high-score contests, though. But this has been a constant since the beginning.
     
  5. Agree
    James09 reacted to mhernon93 in Upgraded to SSD Advice needed   
    thanks brother, it's an M.2
  6. Like
    James09 reacted to Slick in Open Letter (Linus Going Down)   
    My attempt at a light-hearted joke response went rather terribly, so I'll go with something a little more straightforward.
     
    We do make a lot of videos that are jokes or trolls... and as is the nature of these things, sometimes they don't go so well! We release a video every day... It can be hard to perfectly hit the mark every time... Also, please consider that some people may enjoy things that you don't! That said, the comments, the like/dislike ratio and whatnot can communicate when most people aren't into it, and we constantly try to learn and grow from that.

    The thumbnails are well addressed here - 
     
    We have always been silly, we have always trolled, we have always made jokes, we have always pushed the bar, we have always tried new things.
     
    Sometimes we're too silly. Sometimes we troll a little too much. Sometimes our jokes aren't funny. Sometimes we drop the bar. Sometimes the things we choose to try are not the right things.
     
    And we will continue to do all of these things, we will continue to make mistakes, but we will also make a ton of awesome content while doing so.
     
    Thanks for watching and hopefully the future of our content pleases you more
  7. Like
    James09 reacted to dizmo in Should I go high end now?   
    Depends also on how much money you have, and how your living situation is.
    No point in getting a sweet computer if you can't eat for the next month, right?
  8. Agree
    James09 reacted to vanished in Do "gaming" high hz monitor really make a difference in competitive gaming?   
    Oh, I wasn't thinking about G-Sync or anything like that, just high refresh rate vs not (like 144 vs 60)
  9. Informative
    James09 got a reaction from vanished in Do "gaming" high hz monitor really make a difference in competitive gaming?   
    YouTube doesn't have the best compression quality, but I have seen some gameplay slowed down comparing G-Sync and non-G-Sync, and the difference is astounding to me. The image is a lot smoother, allowing you to actually recognise that blob as a head sooner rather than later.
  10. Agree
    James09 reacted to jirehbobs in Do "gaming" high hz monitor really make a difference in competitive gaming?   
    Sadly I don't know anyone from Taiwan lol. It's OK, I think I can live with non-adaptive-sync displays for now. Maybe I'll consider it once the prices go down.
  11. Agree
    James09 reacted to vanished in Do "gaming" high hz monitor really make a difference in competitive gaming?   
    Well, yes  The higher refresh rate means less time between an input and you seeing a result from it, but I just mean in terms of input lag you'll probably feel the difference more than you see it
  12. Agree
    James09 got a reaction from jirehbobs in Do "gaming" high hz monitor really make a difference in competitive gaming?   
    Since the screen is literally drawing more frames per second, assuming the same player skill, the person with the higher-refresh monitor would definitely have an edge over their opponent. It often comes down to milliseconds, but sometimes that's all that matters. If you have a G-Sync/FreeSync-enabled monitor, that helps even more, as the card and the monitor sync up better. The math itself is beyond me, but a head would show up sooner and for more frames on a 120Hz monitor than on a 60Hz monitor.
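    A rough back-of-the-envelope Python sketch of that refresh-rate math; the 100 ms "peek" window is purely an illustrative assumption, not a measurement from the thread:

```python
# Compare how long one frame lasts, and how many refreshes fit inside a short
# window where a target (e.g. a head) is visible. The 100 ms window is illustrative.

def frames_shown(refresh_hz: float, visible_ms: float) -> float:
    """Number of refreshes that fall inside a window where the target is on screen."""
    frame_time_ms = 1000.0 / refresh_hz
    return visible_ms / frame_time_ms

for hz in (60, 120, 144):
    print(f"{hz:>3} Hz: {1000.0 / hz:5.2f} ms per frame, "
          f"~{frames_shown(hz, 100):.1f} refreshes during a 100 ms peek")
```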
  13. Informative
    James09 reacted to Vellinious in 1070 silicon loterie?   
    Here


  14. Informative
    James09 reacted to STRMfrmXMN in 80 PLUS Efficiency and What It Really Means   
    All the time I'll see people recommend PSUs based on efficiency. This, although fundamentally a good idea so that you don't end up with a stick and some chewing gum powering your system, shows that most do not understand what 80 PLUS efficiency implies. Let's get a couple myths out of the way:

    - "A higher 80 PLUS rating correlates to better quality." Incorrect. Certain components in a PSU do need to be of a certain quality to achieve higher efficiency (typically MOSFETs and diodes); however, the quality of the soldering, certain capacitors, etc., can be skimped on while still achieving an exemplary 80 PLUS rating. Electrical performance can be ditched as well. I like to use the EVGA G1 as an example of this. It's made of above-average componentry, performs underwhelmingly, and achieves Gold efficiency. Then there's the EVGA B2, which is constructed about as well, performs better electrically, and advertises 80 PLUS Bronze efficiency (it actually achieves 80 PLUS Silver efficiency, but that standard has largely been abandoned). The EVGA B2 is a better PSU than the G1, yet it wastes slightly more electricity. This will translate to a marginally higher power bill (pennies on the dollar for most home users) but gets you a better power supply for your money. If, however, you plan to run a very power-hungry system for several hours on end, then a more efficient power supply can save a more noticeable amount of money, especially if used heavily during hours of the day when electricity is more expensive.
     
    On another note: some brands will undersell their unit's rated wattage if it can achieve higher efficiency at lower loads, e.g. a brand may sell a 550W 80 PLUS Platinum rated unit that can actually output 600W+, but which would have to be advertised at a lower efficiency rating if it were sold at that higher wattage.

    - "Higher 80 PLUS efficiency keeps the PSU cooler." Not to any serious degree, but this is technically true. A less efficient PSU will waste more electricity and wasted electricity is turned into heat. This is not likely to have an appreciable impact on the temperature of your room or system however as your system doesn't really draw that much power, thus it's better to optimize your system's airflow before throwing an AX1500i in your system to minimize heat created by the power supply. Since PSUs exhaust heat anyways the temperature of your system's hardware will not be impacted to any noticeable degree. Different PSUs also handle cooling differently and 80 PLUS efficiency doesn't correlate to the size of the fan used or the heat-dissipation abilities of the unit.
     
    - "Power supplies are most efficient at around 50% load." This is, by and large, untrue, and seems to be taken as set in stone by many simply because the peak efficiency in Ecova's testing of just three load levels always falls at 50%. Many manufacturers and reviewers test PSU efficiency at more load levels and post charts online, if this matters to you, but many PSUs are more efficient at 60% load than at 50%, and many are more efficient towards 30%. Don't buy a PSU based on how efficient it will be with whatever hardware you have in it. Different topologies and different PSU platforms handle efficiency differently. This should be a non-issue; you should be looking at buying the best PSU you can get for your money.
     
    - "If you have a 1000W PSU with 80% efficiency then you are only going to be able to get 800W from your power supply." This is incorrect. If you have an 80% efficient 1000W PSU then, when putting it under enough load to max out its output, you are going to be drawing more power from the wall, not losing output from your power supply. In this instance, putting a 1000W PSU under max load at 80% efficiency would mean you're drawing 1250 watts from the wall. The math goes as such:
    X / Y = Z
    1000W / 0.80 = 1250W
    1250W drawn from the wall

    X represents the wattage your system is using (say 350W with a Ryzen 7 3700X and RTX 2080 Super under 100% system load), Y represents the efficiency as a decimal (an 85% efficient PSU would be 0.85), and Z represents your total system draw from the wall. For this calculation we're assuming that the PSU in question has exactly enough wattage to power the system at 100% load and is 87% efficient at 100% draw, making it an 80 PLUS Gold efficient power supply.


    So in our case with the 3700X and 2080 Super:
    350W / 0.87 ≈ 402 watts drawn from your power outlet
     
    Note, however, that efficiency is not consistent throughout the load of the power supply.
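    A minimal Python sketch of the wall-draw arithmetic above; the function name is mine, and the example numbers are the ones used in this post:

```python
def wall_draw_watts(dc_load_watts: float, efficiency: float) -> float:
    """Power pulled from the outlet for a given DC load and efficiency (Z = X / Y)."""
    return dc_load_watts / efficiency

# The post's two examples:
print(wall_draw_watts(1000, 0.80))         # 1250.0 W from the wall for a maxed-out 80%-efficient 1000W unit
print(round(wall_draw_watts(350, 0.87)))   # ~402 W for the 3700X + 2080 Super example
```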

    Power supplies are more or less efficient at different loads. They are also more efficient when connected to a higher-voltage grid, the 230V nominal, which you may use if you don't live in North America. Check that your PSU allows operation at both voltages; most modern units switch automatically, while other, often older, units have a hard switch at the back to choose either 115V or 230V (note: DO NOT SWITCH TO THE ONE THAT DOESN'T MATCH THE ELECTRICAL OUTPUT OF YOUR WALL OUTLET! This doesn't usually end well!). The graph below demonstrates the efficiency curve of a 2011-era Corsair TX750 when plugged into 115V AC versus 230V AC. Note the TX750 is an 80 PLUS Bronze rated PSU.
    [Graph: Corsair TX750 efficiency curve on 115V AC vs 230V AC]

    If you live in the United States, for example, you are using a 110-120V (115 nominal) AC through a standard NEMA 5-15 socket. Your power supply may be more or less efficient than your manufacturer claims because they may advertise efficiency through a 230V AC, though standard 80 PLUS efficiency testing is done on a 115V AC. Note that these tests for efficiency are also done under very specific test environments and do not necessarily reflect real-world scenarios so you may achieve higher or lower efficiency than rated by the manufacturer.

    And just to finish up, let's list the various 80 PLUS ratings and their efficiency requirements at different power draws on 115V and 230V AC, as well as 230V AC redundant.
    [Table: 80 PLUS efficiency requirements by rating and load, for 115V AC, 230V AC, and 230V AC redundant]
    Note that Silver isn't really used anymore and the efficiency of a PSU that would achieve Silver certification would typically just be rounded up or down to Bronze or Gold. "230V internal redundant" refers to efficiency in a redundant scenario like in a data center. This guy from Dell explains it.
     
    One last thing I want to drive home. 80 PLUS efficiency ratings were invented to save corporations and industrial operations money in the long term, not home users! A company with 1000 computers, all consuming 100W for 10 hours a day, will see a much greater benefit from putting 80 PLUS Titanium units in all of its systems than you likely would in yours. Don't spend tons of money chasing a super-efficient PSU when a PSU that's just as good and sits a tier lower on the 80 PLUS ladder is drastically cheaper.
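    A back-of-the-envelope Python sketch of that office-fleet scenario; the 85% and 94% efficiencies and the $0.12/kWh rate are my illustrative assumptions, not figures from the post:

```python
# 1000 office PCs, each drawing 100W DC for 10 hours a day, behind PSUs of
# different efficiency. Efficiency levels and electricity price are assumptions.

def annual_kwh(load_w: float, efficiency: float, hours_per_day: float, days: int = 365) -> float:
    """Energy pulled from the wall over a year, in kWh."""
    return (load_w / efficiency) * hours_per_day * days / 1000.0

pcs, load_w, hours = 1000, 100, 10
bronze   = annual_kwh(load_w, 0.85, hours) * pcs   # assumed ~Bronze-level efficiency at this load
titanium = annual_kwh(load_w, 0.94, hours) * pcs   # assumed ~Titanium-level efficiency at this load
saved = bronze - titanium
print(f"Bronze fleet:   {bronze:,.0f} kWh/yr")
print(f"Titanium fleet: {titanium:,.0f} kWh/yr")
print(f"Saved:          {saved:,.0f} kWh/yr (~${saved * 0.12:,.0f} at an assumed $0.12/kWh)")
```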
     
    Resources:
    Ecova (formerly Ecos), the 80 PLUS certification founder (and located very near me in Portland!)
    Wikipedia - There's more info here if you want to go down the Wikipedia rabbit hole
    Plug Load Solutions - A list of all PSU companies and how many different PSUs they have that achieve Ecova's various 80 PLUS standards.
  15. Informative
    James09 reacted to Vellinious in 1070 silicon loterie?   
    Stop using offsets.  Use the voltage / frequency curve.

    When you set an offset overclock, you're letting the software create what roughly equates to a stock voltage/frequency curve by itself... it's essentially just raising every point across the curve by your offset amount. As you can see below, I set an offset of +150 on the slider, but looking at the curve, it set +143. That's because 150 was outside of the 12MHz steps that Pascal uses. Disregard that for a second and just look at the voltage. See how the curve it set has a voltage for that clock of 1043mV? That's going to let the GPU try to run your prescribed clock at that voltage before it bumps the voltage up. Micro-changes like that inherently cause instability.



    Now we'll look at the voltage/frequency curve method. I've set an offset clock of +110 to get my voltage points close, then raised the frequency points for the voltages above 1043mV to higher clocks until I got to the voltage and frequency I was targeting. In this case, the exact same 2164 core clock (+143), except now it's not going to try to run 1043mV; it's going to go straight to the 1081mV I've prescribed for it, and it won't go any lower unless it starts warming up, in which case it would drop a step and run 2154 @ 1075mV. This GPU is under water, so the likelihood of that happening is slim to none, but... on air, it could.



     
  16. Informative
    James09 got a reaction from Terra Firma in FX 9590 Cooling   
    Honestly, I feel like you would be better off with a beefier air cooler, assuming your case supports it. Alternatively, the best 120mm AIO would probably be Corsair's H80i v2 or one of the new NZXT coolers. 
     
    Though I haven't personally had any hands-on experience with the 9590 specifically, it's widely known to be a VERY hot chip. That being said, assuming you get a decent 120mm AIO in push-pull, you should be OK at stock clocks at least.
     
    As for the PSU, your best bet would be to find some reviews online where they did power testing on that chip, add in your 1060's power usage, and add at least another 100W on top just to be on the safe side. Knowing how power-hungry the 9590 is, total system consumption will likely end up in the 500W+ range all things considered. Thus, 650W "might" be a tad too low IMO.
  17. Agree
    James09 reacted to LooneyJuice in i7 4770k - Delided   
    Yeah, I think that was Intel's phase of "how low can we go?". I mean, look at the 4790K. Talk about a knee-jerk reaction to the debacle. Both built on 22nm, merely a year apart; one runs at 3.5-3.9 and the other runs at 4.0-4.4. And of course, Devil's Canyon was a massive success, partially covering up the whole deal. It still boggles my mind, to this day, that they couldn't invest a penny more per chip. Sure, it amounts to a lot of cash over thousands of chips, but sales can also fluctuate due to reputation...
  18. Like
    James09 reacted to WereCat in i7 4770k - Delided   
    So I finally did it today.
    I delidded my i7 4770K using the der8auer delid kit.
     
    I used Thermal Grizzly Conductonaut on the CPU die.
    MX-4 on the cooler.
     


     
    I used transparent nail polish to insulate the small SMT components near the die (just in case; it was probably not needed).
     
    I measured the Before Delid temperatures in the morning; it was afternoon when I measured the After Delid temperatures, so the room temperature was higher. By how much, I don't know, as I forgot to measure the room temperature, but it was at least 3°C more.
     
    BEFORE DELID

     
    AFTER DELID

     
     
    As you can see, the temperatures went down by at least 20°C (probably more, due to the higher room temperature).
    Unfortunately, it looks like 4.6GHz really is the wall. I wasn't sure before, since I was hitting the temperature limit, but I am unable to make 4.7GHz stable at 1.35V or 4.8GHz at 1.40V.
     
    4.8GHz at 1.40V is quite stable in AIDA64, but it crashed the moment I tried to run Cinebench R15.
    I managed to get the readings from AIDA64 at least. I will keep working on that 4.7GHz, but it will take a big jump in voltage just to get it stable at 100MHz more.
     

     
     
    I am not sure what's up with one core being a lot cooler than the others. I thought that delidding would fix this, but apparently not, hmm...
     
    EDIT:
    this is on air cooling
  19. Informative
    James09 reacted to WereCat in i7 4770k - Delided   
    Yeah, when I was taking that stuff off the die it was like a crumbly rubber. I have never seen thermal paste in such a bad shape before.
  20. Informative
    James09 reacted to MageTank in Comprehensive Memory Overclocking Guide   
    Welcome to my memory overclocking guide. Before we get started, there are a few things I want to get out of the way, along with a few people to thank. First of all, thank you to @SteveGrabowski0 for being my partner in crime in this sub-forum, spreading the word about memory and its impact on gaming performance. I would also like to thank @done12many2 for reigniting my passion for memory overclocking; seeing you take to it so quickly gave me hope that I could improve upon what I already had, and I did. Lastly, I would like to thank a friend who is not a part of this forum, but who is the man that got me into computers in the first place. He was also the one to teach me every timing in explicit detail. Thanks Matt, a man could never ask for a better OCD-stricken friend with unrealistically high standards.
     
    Now, for the disclaimer: memory overclocking will drive you insane. There is no one-stop overclock that will work for all boards, CPUs, etc. When I say it's trial and error, I mean it. You will either hate it and never do it again, or become so addicted to it that it consumes your free time. Normally with a disclaimer, someone would say "this is your own doing, I am not liable for damage, bla bla bla", but let's face it: the only way you will damage your system with memory overclocking is if you completely abandon all common sense. Stay within the voltages I list in this guide and you will be perfectly fine. Now... let's get this show on the road.
     
    Part 1: Intel
    For this first part, we will be focusing on Intel boards and CPUs, since this is where I have the most expertise. Most of the timings we will be touching are available on both DDR3 and DDR4, so a lot of this knowledge is interchangeable. Let's start with terminology:
     
    Voltages: Below is a list of the voltages we will use when overclocking our memory to improve stability. I'll include both DDR3 and DDR4 voltages, along with Intel's "recommended max" voltages for users who want peace of mind. These voltages are:
    vDIMM (sometimes called VDDQ or DRAM Voltage; supplied from the board to the memory itself)
    VCCIO (voltage for the path going into and out of the IMC)
    VCCSA (sometimes called System Agent Voltage; your IMC and PCIe subdomain voltage)
    For DDR3, typical voltages are 1.35v (DDR3L), 1.5v (JEDEC DDR3), and 1.65v (OC'd DDR3). Intel's max recommended voltage for DDR3 on Sandy/Ivy/Haswell is 1.5v + 5%, which is 1.575v. For DDR4, typical voltages are 1.2v (JEDEC DDR4) and 1.35v (OC'd DDR4). Intel's max recommended voltage for the DDR4 half of Skylake's IMC is 1.2v + 5%, which is 1.26v. For the DDR3 half of Skylake's IMC, it's 1.35v + 5%, which is 1.4175v. Sources for these claims (and why I think they are bogus) can be found here:
    For VCCIO/VCCSA, I do not recommend exceeding 1.25v for either. I personally use 1.14v for VCCIO and 1.15v for VCCSA. Going beyond 1.25v is silly, and may potentially damage your IMC or the traces on your board.
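    A tiny Python check of the "+5%" figures quoted above; the helper name is mine:

```python
# Intel's "recommended max" vDIMM values quoted above are just the nominal voltage + 5%.

def recommended_max(nominal_v: float, headroom: float = 0.05) -> float:
    return round(nominal_v * (1 + headroom), 4)

print(recommended_max(1.5))    # 1.575  (DDR3 on Sandy/Ivy/Haswell)
print(recommended_max(1.2))    # 1.26   (DDR4 half of Skylake's IMC)
print(recommended_max(1.35))   # 1.4175 (DDR3 half of Skylake's IMC)
```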
     
    Primary Timings: These are timings that are normally listed on every sales page of your ram. They include:
    CAS Latency (tCL)
    RAS to CAS Delay (tRCD)
    Row Precharge Time (tRP)
    RAS Active Time (tRAS)
    Command Rate (CR) (note: Command Rate is not a timing, but it's listed under Primary Timings, so I included it here)
    They are also commonly available to tinker with on most chipsets, and are often exposed for tuning in software like XTU.
     
    Secondary Timings: These are timings that are seldom ever listed anywhere on a marketing page, but you can find them within your BIOS on some chipsets. They include:
    Write Recovery Time (tWR)
    Refresh Cycle Time (tRFC)
    RAS to RAS Delay Long (tRRD_L)
    RAS to RAS Delay Short (tRRD_S)
    Write to Read Delay Long (tWTR_L)
    Write to Read Delay Short (tWTR_S)
    Read to Precharge (tRTP)
    Four Active Window (tFAW)
    CAS Write Latency (tCWL)
    Most of these timings are inaccessible on lower-end chipsets and more restrictive BIOSes. Very rarely will you have access to them on lower-end configurations, and even XTU lacks control over most of them.
     
    Tertiary Timings: These are timings that are NEVER listed anywhere on a marketing page, and they differ per motherboard, CPU IMC, and RAM IC. They are generated by your IMC after your board probes it repeatedly looking for a stable configuration. Some of you might have noticed your PC restarting a few times when installing a new memory kit; these timings are often the cause of that, as they need special training in order for you to POST properly. They include:
    tREFI
    tCKE
    tRDRD (_SG, _DG, _DD, _DR)
    tRDWR (_SG, _DG, _DD, _DR)
    tWRRD (_SG, _DG, _DD, _DR)
    tWRWR (_SG, _DG, _DD, _DR)
    SG = Same Group, DG = Different Group, DD = Different DIMM, DR = Different Rank. Credit to @Digitrax for providing this information.
    Very specific boards and chipsets will allow modification of these timings. They are by far one of the most important groups of timings you can adjust, and are directly involved in improving your bandwidth efficiency. More on that later.
     
    Round Trip Latency: Since these settings are not timings, and are not always listed under tertiary timings, I feel they need their own section, as they are probably the single most important settings you can adjust to see the biggest impact on performance. They include two settings:
    RTL (the title of this section should give you hints as to what this is)
    IO-L
    As the title of this section hints, Round Trip Latency is directly involved in how long it takes your RAM to complete its full cycle. The tighter this value, the lower your overall latency. Sounds great, right? Well, the problem is that literally every timing is associated with this setting, and tightening other settings makes it harder to tighten this one. It's also annoying to adjust, as you cannot adjust it without also adjusting the IO-L settings (the two must be adjusted as a pair), and there is no secret formula for doing so. All I can tell you is: your RTL channels cannot be more than 1 apart in either direction. Example: if the RTL of Channel A is 50, the RTL of Channel B can be 51 or 49. It cannot be 52 or 48, as that will result in extremely poor performance or, worse, system instability.
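    A quick Python sketch of the RTL pairing rule described above; the function name is mine:

```python
# The two channels' RTL values must not differ by more than 1, per the rule above.

def rtl_pair_ok(rtl_channel_a: int, rtl_channel_b: int) -> bool:
    return abs(rtl_channel_a - rtl_channel_b) <= 1

print(rtl_pair_ok(50, 51))   # True  - within one step of each other
print(rtl_pair_ok(50, 48))   # False - two apart: expect poor performance or instability
```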
     
    Now that we have the timing terminology out of the way, let's first discuss stability testing. After all, you cannot overclock until you know how to validate that overclock.
     
    Stress Testing (Validating Stability)
    This part is always met with some sort of controversy, as everyone has their own way of doing things. That being said, I too have my own way, and it's the only way I've ever done it, so I'll stick with what I know. When making adjustments to timings or frequency in your BIOS, I always recommend running a full pass of Memtest86. Memtest86 is not a stress test, but it can catch cases where your IMC doesn't like your current memory configuration. I use it as a precursor to actual memory stress testing, as it helps prevent instant crashes in Windows caused by the IMC outright hating your memory configuration. We use Memtest86 in two phases:
     
    Phase 1: Full Pass
    Phase 2: IMC Smackdown.
     
    Phase 1 is pretty self-explanatory: it's running Memtest86 using all 13 tests. Phase 2 is where the fun begins, as we disable all tests except test 6 and run it several times. I personally do 10 runs of test 6, but feel free to do however many you wish. It will test different rows and addresses with each subsequent run, so the more you run it, the better your chances of finding IMC/RAM incompatibility. This phase is critical when making adjustments to tertiary timings, as this test will find issues quicker than any other. When using Memtest86, make sure you hit C and select "All Cores: Parallel". This will make the test go much quicker. Believe me, you will want to save as much time as you can, as memory overclocking takes a long time to validate 100% stability.
     
    Next, we have my tool of choice for basically all forms of stress testing: Prime95. I know, some of you get scared when you see this come up. In fact, I'm pretty sure I felt someone's heartbeat increase somewhere in the world at the sheer mention of it. Relax. For this purpose, Prime95 is going to be 100% harmless. In fact, we won't be using an FFT size small enough for it to get hot, so you should be fine. If you are absolutely terrified, feel free to use the non-AVX version, as it shouldn't matter for RAM stability (unless you are stress testing specific AVX-based tertiary timings, such as tRDWR_DD/DR, but more on that later). For now, let's focus on how to stress it. Open up the Prime95 of your choice (I am currently using 28.10 as of this guide) and input the following settings:

    (Do note: the number of threads should be equal to the number of threads available on your processor. For example, a 7700K has 8 threads, while an R7 1700 has 16.)
    Now, for "Memory To Use", make sure you enter your own value. I highly recommend 75% of your total capacity. If you have, say, 16GB, then that's 16 x 1024 x 0.75 = 12288MB. For 8GB, the value would be 6144MB. Since I have 32GB, I'll be using 24576MB to stress test. Once this starts, let it run for several hours. I personally let mine run for about 8-12 hours, depending on how I feel and how much I've tinkered since my last stable profile, but I do not recommend running for less than 8 hours. I know it's tempting to cut corners, but memory instability is not a game you want to play. It can seriously corrupt your Windows installation and require a fresh install. Take this part seriously.
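    A small Python sketch of the "Memory To Use" rule of thumb above (75% of installed capacity, in MB); the helper name is mine:

```python
def prime95_memory_mb(capacity_gb: int, fraction: float = 0.75) -> int:
    """Roughly 75% of installed RAM, expressed in MB for Prime95's custom run."""
    return int(capacity_gb * 1024 * fraction)

for gb in (8, 16, 32):
    print(f"{gb} GB installed -> {prime95_memory_mb(gb)} MB")   # 6144, 12288, 24576
```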
     
    As for why we use the settings above, allow me to explain. 512K-1024K is hard on the IMC and IO lanes; 2048K+ is hard on your RAM. By setting the range at 512-4096, we not only stress the IMC and IO lanes, we also stress the memory itself. Be warned: 1344K and 2688K are also included in this range and are the hardest stress on vCore. If your CPU is unstable by any means, it will fail this, and that will likely hold you back on memory overclocking. Always make sure your CPU is 100% stable before attempting memory overclocking; the fewer variables involved, the better. For those of you with Haswell who are worried about that old myth of Prime95 killing CPUs, understand this: this range lacks 448K, which was the hardest FFT on the FIVR. You should be fine here.
     
     
    Overclocking Memory (Intel Platforms)
    Precautions: The very first thing I advise you to do is locate the CLEAR_CMOS button on your motherboard (if you have one), or put your system somewhere the CMOS jumper/battery is easily accessible. You are certainly going to be using them, no exceptions. Next, be sure to have your power supply's power cable near you. Sometimes, removing it and holding down the power button for 60 seconds results in enough of a clear to let you get back into the BIOS without completely resetting everything. Lastly, save all of your "pseudo-stable" profiles, so that you can continue to adjust them for better stability without starting over.
     
    Overclocking Time!: Now that we have the precautions out of the way, it's time to start tinkering. I recommend focusing on frequency first, while keeping your primary timings the same. I personally dial in a vDIMM of 1.35v and then start increasing my memory frequency one memory strap at a time. If I was at 3000 C15, I would try 3200 C15, then 3333 C15, 3466 C15, and so on. When you reach a point where it no longer POSTs, you have 3 options. Option 1: throw more voltage at it. Option 2: loosen your primary timings. Option 3: settle for the last bootable configuration.
     
    I advise trying option 1 first, as it might only take a little more vDIMM to make it stable. For example: my 3600 C14 profile is unstable at 1.35v, but stable at 1.39v. Since that's still under the 1.4175v that Intel suggests for the DDR3 half of the IMC, I just pretend the DDR4 half of my IMC will tolerate it just as well. As I've ranted about before, you won't be killing an IMC with vDIMM. Now, the VRM components near the RAM slots on your motherboard, that's a different story entirely. Use common sense, try to avoid going over 1.45v for 24/7 vDIMM, and you should be fine. Some 4266 kits even use 1.4v in their XMP profiles, and nobody has killed a board or CPU with those yet.
     
    Option 2 is what we call "compromising". You have to be careful when making compromises on timings for speed; the end must justify the means. If you gain a slight amount of bandwidth but lose any latency at all, it's a bad trade. Memory is already so ridiculously fast in terms of bandwidth that latency should ALWAYS come first in your mind. That being said, frequency can be just as good for latency as it is for bandwidth; it just takes a little balance. If you increase frequency while keeping timings the same, latency improves. If you loosen timings while increasing frequency, one of two things can happen. #1: you get more bandwidth and latency stays the same. This is a good trade with no negative side effects, so I tend to allow it. #2: you gain bandwidth, but latency suffers. This is a terrible trade and should never be made; go back to your last configuration and work on making that stable instead.
     
    When making minor tweaks, I recommend using software like AIDA64's memory bandwidth test (the cachemem test) to see your gains in performance. Yes, I know it sucks using paid software, but it seriously helps with knowing whether your timings are making a positive or negative impact on performance. While I do intend to provide a list of timings that benefit performance regardless of your memory ICs, you must understand that certain ICs have specific tertiary timings that they prefer loose, or tight. I cannot give you a Samsung timing configuration that will also boost your Hynix configuration, because they like completely different values. You can even have two different Samsung ICs (B-die, D-die, etc) that prefer different values. The best course of action here is trial and error.
     
    Now that we've gotten frequency and primary timings taken care of, it's time for secondary timings. While you will see small gains from most of these timings, I want to focus on one very important secondary timing: tRFC. You see, memory is a matrix of billions of capacitors that need to be recharged. tRFC, a secondary timing, works alongside tREFI, a tertiary timing: every <tREFI>, the cells are recharged, in order, for <tRFC> amount of time. Simply put, tRFC is the amount of time your RAM can do nothing while being recharged, and tREFI is the amount of time your RAM can do things before needing a recharge. Both are very important and have a significant impact on your latency. tRFC works best as low as you can get it, and tREFI works best as high as you can get it. tRFC, in my testing, is best left at 270, as it's the easiest value to keep stable while having the best gains in performance. tREFI, on the other hand, can go as high as 65535 and not really matter, but it can potentially lead to corruption if your motherboard's quality is lackluster. The warmer your DIMMs, the more often they need to be recharged, and a bad board can't reliably hold a refresh interval that long. Basically, if your motherboard is bad, stick to the JEDEC standard of a 7.8µs refresh interval. If your RAM is 3000MHz, the formula is 1500 x 7.8 = 11700. If your RAM is 3600MHz, the formula is 1800 x 7.8 = 14040 tREFI.
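    A minimal Python sketch of the JEDEC-style tREFI fallback described above; the function name is mine, and it assumes the usual DDR convention of two transfers per memory clock:

```python
# tREFI fallback: DRAM clock (MHz) x 7.8 us refresh interval, expressed in clock cycles.
# DDR4-3000 runs a 1500 MHz memory clock, DDR4-3600 runs 1800 MHz.

def jedec_trefi(effective_mts: int, interval_us: float = 7.8) -> int:
    dram_clock_mhz = effective_mts / 2        # DDR transfers twice per clock
    return round(dram_clock_mhz * interval_us)

print(jedec_trefi(3000))   # 11700
print(jedec_trefi(3600))   # 14040
```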
     
    There are other formulas for your secondary timings worth following, such as tFAW = tRRD x 4. The others tend to take trial and error. Gains can be small or big, depending on whether you are using DDR3 or DDR4. I can say that with DDR4, the gains are not as massive as from touching tertiary timings. Speaking of which...
     
    Tertiary timings: Depending on your level of masochism, this will be the part you love the most or absolutely dread; there is no in-between. As you saw above in the terminology half, tertiary timings tend to have a few suffixes after their names: SG, DG, DD, and DR. I'll be frank here: I have no idea what SG or DG mean, I just know that they severely impact your bandwidth, no matter what kind of memory you use. As for DD, I believe these are related to 2DPC (DIMMs Per Channel) and only matter if you have 2 DIMMs per channel (ITX users rejoice, less complication), while DR matters when using multi-rank kits. It's easiest to associate DR with "Dual Rank"; if you have a single-rank kit, touching _DR timings does literally nothing, no positive or negative effect, and no instability either. I recommend taking these one at a time, or at the very least, one group at a time. Focus on tRDRD (and all of its suffixes), followed by tRDWR, and so on. Fun fact about tRDWR: these timings directly impact AVX. The tighter they are, the hotter AVX runs; the looser they are, the cooler AVX runs. Those of you who fear AVX might be able to use this to your advantage and make those stress tests easier on yourselves. I promise not to judge you.
     
    EDIT: Huge thanks to Digitrax for providing clarification on what these tertiary timings mean. Look at his post below for details:
     
    Once you've finally settled on your tertiary timings and have gone through countless hours of stress tests, it's time for the bane of my existence: RTLs/IO-Ls. I honestly cannot give you any better advice than "you've gotta feel it". There is no magical value that I can tell you to dial in and have it work. You can ask me until you are blue in the face, and I simply will not be able to help you. RTL has one very specific value it likes, and a few others that it "tolerates", and that's it. Either it works, trains poorly, or doesn't work at all. Now, with DDR4, we do have a trick up our sleeves to at least prevent it from training poorly. It's a very simple formula for a specific setting called RTL Init. The formula is: IO-L + IO-L Offset + CL (x2) + 10. Let's say your IO-L is 4, your offset is 21, and you have a CAS latency of 14. The formula would be: 4 + 21 + (14 x 2) + 10 = 63. Once you input 63 in the RTL Init setting, your IMC will no longer train RTLs beyond that threshold. This is great, as it at least prevents performance from getting worse. However, this is only a band-aid; you should still strive to find optimal settings for RTL/IO-L. That being said, do not beat yourself up dwelling on this. If you've made significant strides in all other aspects of your RAM, then feel proud of what you've accomplished. It's still worlds beyond what XMP can offer you, and you've gotten one step closer to mastering one of the most difficult "overclocking disciplines" there is.
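    A one-liner Python sketch of the RTL Init formula above (IO-L + IO-L offset + 2 x CL + 10); the function name is mine:

```python
def rtl_init(io_l: int, io_l_offset: int, cas_latency: int) -> int:
    """RTL Init per the formula in the post: IO-L + IO-L offset + 2*CL + 10."""
    return io_l + io_l_offset + 2 * cas_latency + 10

print(rtl_init(io_l=4, io_l_offset=21, cas_latency=14))   # 63, matching the worked example
```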
     
    For those of you who expected more than this, I am sorry. I am still learning myself, and I do not feel that I understand every aspect yet, so bear with me as I continue to learn and update this "guide" with what I discover in the future. If I forgot to tag anyone who was waiting to read this, I apologize; my mind has been elsewhere and I honestly cannot remember who was waiting. Part 2 (AMD Classic) and Part 3 (AMD Ryzen) will be coming as soon as I get the time. It may take weeks or even months for those to be completed, as my free time is very scarce at the moment. As for why I need two entirely different parts for AMD, it's simple: Ryzen's IMC is a complete overhaul of AMD's older architectures. It resembles absolutely nothing of its former IMC (for better or for worse) and is currently lacking in many features. Simply put, overclocking Ryzen's memory isn't an easy task, even for veterans. It requires a lot of tricks and luck, far more than any other platform I've encountered.
     
    When I get additional time, I'll amend some of this guide to try to make it easier to understand, as well as add my experiences and additional tricks to save time during this process. Good luck everyone, hope it helps. 
  21. Informative
    James09 reacted to MageTank in Comprehensive Memory Overclocking Guide   
    So, a slight update. AMD has released a new AGESA revision that has unlocked access to secondary and tertiary timings. It may take time for your specific board to receive the update, but it's exciting news nonetheless. https://community.amd.com/community/gaming/blog/2017/05/25/community-update-4-lets-talk-dram
     
    With access to these timings, we can make ANY ICs work with Ryzen, and we should be able to fine-tune the performance to achieve much better latency and bandwidth results. I will be working on the Ryzen part of the guide while I have time off work, and I'll update it as soon as I can. For now, I'll post this AGESA information in the Ryzen half of the thread and guide people through the specific timings should they decide to tinker with their RAM manually.
  22. Informative
    James09 reacted to MageTank in Comprehensive Memory Overclocking Guide   
    Yes, I've been stress testing a 3466 C14 setup on Ryzen with my previous stress test methodology and it seems to work exactly the same. I will say this: for as much of a headache as Ryzen is when it comes to general memory overclocking, once you find something that POSTs, it's pretty difficult to make it crash outright. The hard part is finding something the board/IMC tolerates in order to POST in the first place, lol.
  23. Like
    James09 reacted to 1soup in i5 3570k Bottleneck?   
    I like the future upgradeability of the AM4 platform, but I'm not entirely pleased with the current gaming performance of their CPUs. I don't create content; the largest multi-threaded workload I run is turning my DVD/Blu-ray collection digital.
     
    Even though I'd have to change the entire motherboard, I feel safer with a 7700K for a few years. Not many well-known/trustworthy sites or YouTubers are testing CPUs with GTX 1060s to prove the 1600 can drive the 1060 as well as the 7700K and vice versa.
  24. Like
    James09 got a reaction from LooneyJuice in FX 9590 Cooling   
    Honestly, I feel like you would be better off with a beefier air cooler, assuming your case supports it. Alternatively, the best 120mm AIO would probably be Corsair's H80i v2 or one of the new NZXT coolers. 
     
    Though I haven't personally had any hands-on experience with the 9590 specifically, it's widely known to be a VERY hot chip. That being said, assuming you get a decent 120mm AIO in push-pull, you should be OK at stock clocks at least.
     
    As for the PSU, your best bet would be to find some reviews online where they did power testing on that chip, add in your 1060's power usage, and add at least another 100W on top just to be on the safe side. Knowing how power-hungry the 9590 is, total system consumption will likely end up in the 500W+ range all things considered. Thus, 650W "might" be a tad too low IMO.
  25. Like
    James09 reacted to Terra Firma in FX 9590 Cooling   
    Thank you for a serious, on-topic answer.
     
    I honestly kinda want to stick with water cooling, although I couldn't explain why. I love the NZXT coolers but they're pricey, so I think the H80 fits the bill