
About MageTank

  • Title
    Fully Stable
  • Birthday October 27


  • CPU
    Core i7 7700K overclocked to 5.2GHz (delidded and supplied by @done12many2)
  • Motherboard
    ASRock Z270 Taichi
  • RAM
    32GB (2x16GB) G.Skill Ripjaws V 3200MHz C14 (overclocked to 3600MHz C14-14-14-28-CR2)
  • GPU
    EVGA GTX 1080 Ti Hybrid FTW3
  • Case
    Cooler Master MasterCase Pro 5
  • Storage
    Samsung 850 Evo M.2 500GB
  • PSU
    EVGA 650W Supernova G2
  • Display(s)
    Dell S2417DG 165Hz G-Sync TN
  • Cooling
    EVGA CLC 280 AIO
  • Keyboard
    Logitech G810 Orion Spectrum
  • Mouse
    Logitech G403
  • Sound
    Sennheiser Game One
  • Operating System
    Windows 10

Profile Information

  • Location
    United States, Ohio
  • Interests
    Gaming, Computer Hardware
  • Occupation
    Slim Jim Enthusiast

  1. I've gone into great detail on this in the past. Rather than repeat it, I'll provide the mountain of evidence from my previous posts. The TL;DR is that Geekbench is not reliable because it only grabs DMI strings for CPU speeds, not the actual clock speeds. This means you cannot know the CPU clock speed or memory clock speed, so it's impossible to directly compare one Geekbench result to another, let alone extrapolate that data to compare against another benchmark or any real-life workload. Very rarely would I consider information "meaningless", but with Geekbench, it tends to be the case. Unless you happen to watch the bench being run with your own eyes to see the clock speeds beforehand, you will never truly know what produced the results, lol. I really wish it were similar to CPU-Z and grabbed current clock values during the sysinfo part of the benchmark.
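To make the DMI problem concrete, here's a minimal Python sketch. The string format and numbers are illustrative only (this is not Geekbench's actual internals): it parses the static marketing clock out of a DMI-style CPU string and shows how far that can drift from the clock the chip actually ran at.

```python
import re

def parse_dmi_clock_mhz(dmi_string: str) -> int:
    """Extract the advertised clock from a DMI-style CPU brand string.

    DMI reports a static string (typically the stock base clock), not
    the clock the CPU actually ran at during the benchmark.
    """
    match = re.search(r"@\s*([\d.]+)\s*GHz", dmi_string, re.IGNORECASE)
    if not match:
        raise ValueError("no clock found in DMI string")
    return int(float(match.group(1)) * 1000)

def clock_discrepancy_pct(dmi_mhz: int, measured_mhz: int) -> float:
    """Percent difference between the DMI string and a live measurement."""
    return (measured_mhz - dmi_mhz) / dmi_mhz * 100

# A stock 7700K reports 4.2GHz in its DMI string even when running 5.2GHz:
dmi = "Intel(R) Core(TM) i7-7700K CPU @ 4.20GHz"
print(round(clock_discrepancy_pct(parse_dmi_clock_mhz(dmi), 5200), 1))  # → 23.8
```

A stock DMI string on a 5.2GHz overclock under-reports the clock by nearly 24%, which is exactly why two Geekbench scores aren't directly comparable without knowing the real clocks.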
  2. The "safety" of the voltage depends on the ICs in the kit and what they were designed to run at. If we are talking safety in the context of your CPU, then keep VCCIO/VCCSA under 1.3V each for 24/7 use (I personally recommend staying below 1.25V; running 1.3V won't really cause damage, it's just not efficient). If the context is the memory itself, again, the ICs are important to know. I'd say most can tolerate 1.4-1.45V without any real risk to the longevity of the memory, but it's not a question of what is safe so much as what is needed. Some kits simply scale poorly with voltage, and no amount of it will improve their performance. My Samsung B-Die kit (3200 C14, dual rank) does not scale beyond 1.39V. Even at 1.5V, I cannot get the timings any tighter or the clocks any higher. Other kits, on the other hand, scale dramatically all the way up to 2V+. Some XMP profiles these days even ship with 1.45V and 1.5V presets, and from what I've seen, they have been tested and approved at those voltages, though with their pricing and limited availability, we have zero statistics on longevity. I can say that I personally ran DDR4 at 1.45V on my PANRAM Hynix kit for over a year with zero degradation or damage, and it was rated for 1.2V. As with all overclocking, your mileage is going to vary.
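As a quick reference, the 24/7 VCCIO/VCCSA guidance above can be boiled down to a tiny helper. The thresholds are taken from this post; they are rules of thumb from overclocking experience, not an Intel specification.

```python
def vccio_vccsa_verdict(volts: float) -> str:
    """Rough 24/7 guidance for VCCIO/VCCSA on Skylake/Kaby Lake.

    Thresholds follow the post above: 1.25V or less is the comfortable
    recommendation, under 1.30V is tolerable but inefficient, and
    anything at or above 1.30V exceeds the suggested daily ceiling.
    """
    if volts <= 1.25:
        return "recommended"
    if volts < 1.30:
        return "acceptable, but inefficient"
    return "above the suggested 24/7 ceiling"

print(vccio_vccsa_verdict(1.20))  # → recommended
print(vccio_vccsa_verdict(1.28))  # → acceptable, but inefficient
print(vccio_vccsa_verdict(1.35))  # → above the suggested 24/7 ceiling
```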
  3. Exactly. Nothing looks out of the ordinary in that screenshot, and as long as performance is not suffering, you should be fine.
  4. As for the "issue", I don't really see one. Everything looks normal. You are not swapping to pagefile, nor are you caching an obscene amount of memory. If you start getting warnings that you are running low on memory while performing normal tasks, then you should take a look to see what might be causing it, but it looks good from what I can see.
  5. Compilation of Ryzen APU reviews

    Even with his logic of replacing the CPU itself down the line, I still don't personally see it as a net loss of $170, but rather an investment that doubles as a diagnostic tool later on. Having a spare CPU with integrated graphics lets you diagnose both a CPU failure and a GPU failure without needing two different products. Seeing as this Ryzen CPU performs as well as the R5 1400 while only costing $5-$10 more, it's safe to say it's a worthy investment to get the integrated graphics and have it available for future diagnostics, even long after you've bought a higher-end CPU and GPU. I use a 7700K and a 1080 Ti, but I also keep my i5 6600T and a 750 Ti lying around for diagnostic purposes. I guess it really depends on your individual outlook, but I personally don't see having backup hardware as a net loss from an investment standpoint.
  6. Compilation of Ryzen APU reviews

    Indeed it is, and it's certainly a superior CPU from a multi-threading standpoint, but for gaming it hardly yields any advantage in framerates, and it lacks an iGPU, which is arguably the greatest appeal of APUs in the first place. Even with only 8 PCIe lanes feeding the GPU, you are not going to be bottlenecked by that limitation, nor would you see a significant boost from the additional 2 cores in every application you test. For the majority of gaming titles, a quad core will suffice. There were reviews showing the 2400G with a 1080 Ti against other Ryzen and Intel CPUs: https://www.pcper.com/reviews/Processors/AMD-Ryzen-5-2400G-and-Ryzen-3-2200G-Review-Return-APU/Discrete-Gaming-Tests I'd say that while it still performs worse than the i3 8100, it matches the R5 1400 in relative CPU performance while offering a very competitive iGPU at only $20 more, which will certainly tide people over until they can invest more of their money into a good GPU. Using your very own logic of investing in better hardware: if one were getting, say, $1,000 back from their taxes and planned to use the entire refund on building a PC, would they not be better off in the long run by investing in a solid motherboard, this $170 CPU, and decent RAM, then using the majority of the funds on a good GPU and monitor (the monitor being the more important of the two in my opinion; G-Sync and FreeSync are amazing)? Throwing all of your money into a CPU might work well if you make enough money to handle these "one component at a time" upgrade intervals, but most people would rather have a complete system up front to serve as a foundation, then upgrade it as the need arises. If this were literally just a year ago, I'd be agreeing with you 100%.
I have spent a lot of time on this forum talking people out of investing in the antiquated FX series from AMD, out of fear that they would throw their money into a dead platform with no future, only to need an upgrade later on to salvage their gaming experience, but this is no longer the case. AMD's budget options are still very competitive with Intel's mid-range gaming solutions, and depending on the titles people enjoy, these APUs offer plenty of performance to make for a great gaming experience. I've seen people achieve 100+ fps in Overwatch with tweaked settings, and Overwatch, even on its lowest settings, still looks quite lovely and runs smoothly. The same can be said for most MOBAs and competitive shooters like Counter-Strike. Will it play modern AAA RPGs on lifelike realism settings? Certainly not, but it's a matter of picking the right tool for the job. If your APU is no longer cutting it in the future, you can invest in a much better GPU, and when the CPU portion of that APU becomes a bottleneck, you still have a strong platform with a ton of upgrade options to choose from. I know that in that situation, you see the original $170 as a net loss that could have been better spent on superior hardware in the first place, but I see it as a product that served its purpose and will then remain on the shelf as both a CPU and GPU diagnostic tool.
  7. Compilation of Ryzen APU reviews

    I don't think anyone here is questioning your logic that it's best to invest in something good the first time instead of repeatedly investing in mediocre things. People are simply stating that those on a budget are better off buying this APU for $170 on a platform with valid CPU upgrade paths, rather than trying to buy, say, an Intel i5 AND a GT 1030, which would almost certainly cost more money up front whilst barely outperforming it in specific titles. The logic being: if you are already looking for a CPU and GPU at the same time but are in a tight spot, you can buy this product that has both, then upgrade your GPU later. From the benches I've seen, this is still quite potent as a standalone quad core when paired with a dGPU, and it exists on a platform where 8c/16t CPUs can be purchased in the future if one's demand for a higher-end CPU ever arises. The problem with the "it's better to save and invest in superior hardware" logic is that it implies people already have the hardware they need to tide themselves over until they can upgrade to something strong enough outright. This is not always the case for first-time system builders and those wishing to enter the PC gaming realm without spending a fortune up front, the typical consumers of these kinds of products. Those upgrading from older platforms can likely hold out for a better hardware configuration if what they currently have is good enough to last them that long, and in that situation, I agree with you entirely.
  8. Compilation of Ryzen APU reviews

    Hmm, these little things seem quite potent for the price. I may grab one as a one-stop shop for the Ryzen portion of my memory overclocking guide. It would be nice to demonstrate both the performance uplift from RAM on the APU side of things, as well as the uplift when using just the CPU portion of the chip. Hopefully Micro Center adds these to their bundle list, as you'd be able to snag the 2400G for $140, which is quite the bargain for a quad core. All in all, I'm very impressed by what these reviews are showing thus far.
  9. Hey, do you have some reliable, good articles about the performance differences between single and dual channel? Mostly for gaming, but for other stuff as well... Sony Vegas and such.


    I have a problem I just can't figure out with my motherboard. Maybe it is defective, but I don't want to go through the hassle of an RMA just yet... my Gigabyte Z370M D3H simply cannot work in a dual channel memory configuration; it fails to boot. However, if I place the 2 Trident Z sticks in a single channel configuration, it works "perfectly".


    I have it using the standard XMP profile, 3200MHz 16-18-18-38 2N, and as I said, it is only working in single channel... I wanted to know whether the hassle to get it running in dual channel would be worth it.


    Thank you kindly for the attention [:

    1. Show previous comments  3 more
    2. Princess Cadence

      Princess Cadence

      When I said "RMA the board" I just meant taking the troubleshooting more seriously ^^ an expression, per se. What you said makes perfect sense; I just want to ask if running it as-is could be harmful in the short run? I will try what you said, but might leave it to Sunday or next weekend... if anything, as I said, the system is functioning 100% perfectly as it should; literally just the dual channel is down... even the XMP profile kicks in just fine on single channel.

    3. MageTank


      Nothing wrong with using single channel. In fact, I am currently running single channel on the gaming laptop I built for my mother, and it's been like that since early December with no issues. My personal tests only showed a deficit of 5% anyway, so it's not that substantial a difference in performance. I plan on revisiting memory's impact on performance once I get the free time, so I'll include single channel testing in that post. In fact, a friend of mine wants to livestream some benchmarking on Twitch, so I might just do it on video for people to see the impact in real time.

    4. Princess Cadence

      Princess Cadence

      That would be awesome, I would totally check it out. And yes, since my display is modest (2560x1080 100Hz), it's not like I need to squeeze every little bit possible out of my system... also, my productivity on my My Little Pony fan content has increased so much going from the old i7 6700 to the i7 8700 that I don't dare complain about my memory not being dual channel xD


      I will still find time to try to loosen up the pressure and retry booting in a dual channel configuration, and I'll let you know how it goes. Thank you so much for your time and attention, Mage! Can't give ya enough likes on the replies ^^
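For context on the single vs dual channel question in this thread: dual channel doubles the theoretical peak bandwidth, yet game testing only showed about a 5% difference, because games are mostly latency-sensitive rather than bandwidth-bound. A back-of-the-envelope Python sketch, assuming the standard DDR4 64-bit (8-byte) bus per channel:

```python
def peak_bandwidth_gbs(mt_s: int, channels: int, bus_bytes: int = 8) -> float:
    """Theoretical peak DDR4 bandwidth in GB/s.

    transfers/s (MT/s x 1e6) x bus width in bytes x channel count.
    """
    return mt_s * 1e6 * bus_bytes * channels / 1e9

# DDR4-3200: dual channel doubles the theoretical ceiling...
print(peak_bandwidth_gbs(3200, 2))  # → 51.2
print(peak_bandwidth_gbs(3200, 1))  # → 25.6
```

The ceiling halves, but since most game workloads never come close to saturating even one channel, the realized gap in framerates is far smaller than the raw numbers suggest.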

  10. It depends on the amount of overhead, and the title. Tomb Raider, for example, scaled better with lower latency, while Metro LL scaled almost linearly with more bandwidth. I would say 2800 C14 was the point where I hit diminishing returns, having gotten the best minimum framerates with the least amount of effort; anything higher was basically negligible. Granted, I only had a GTX 770 at the time, so I was unable to see differences in average framerates (GPU bottleneck), but I can go back and test with this 1080 Ti to demonstrate any differences in that regard. Memory overclocking isn't as simple as changing frequency and primary timings. Both of those values impact arguably the most important "timings", which are RTL/IO-L. RTL (Round Trip Latency) depends on virtually every timing, from primaries all the way down to tertiaries, and is also heavily dependent on frequency. If you increase frequency but loosen your timings, your RTL will either remain the same or, worse, rise in value, which results in either no performance boost at all or a negative impact on performance. That's why I recommend people overclock all of their timings manually, to get the performance benefits without risking any of it to terrible auto-training from the motherboard. Especially if you have an ASRock board, lol.
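The frequency-versus-timings trade-off above is easiest to see as absolute latency. First-word CAS latency in nanoseconds is the cycle time (2000 / MT/s, since DDR transfers twice per clock) multiplied by the CL count; a quick Python sketch:

```python
def true_latency_ns(cl: int, mt_s: int) -> float:
    """Absolute CAS latency in ns: cycle time (2000 / MT/s) x CL cycles."""
    return 2000 * cl / mt_s

# Compare a few common DDR4 configurations:
for cl, mt in [(14, 2800), (14, 3200), (14, 3600), (16, 3200)]:
    print(f"DDR4-{mt} C{cl}: {true_latency_ns(cl, mt):.2f} ns")
```

Note that DDR4-3200 C16 lands at the same 10.00 ns as 2800 C14, which illustrates why raising frequency while loosening timings can net you nothing (and that's before RTL/IO-L retraining makes things worse).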
  11. I'd have to take DigitalFoundry over TechPowerUp, seeing as DF actually provides a testing methodology to test against. TPU just gives graphs with no context as to how they obtained the numbers, where in the game they tested, and (the most important part) what the minimum framerates/frametimes looked like. If need be, I'll go pick up an 8700K and a Z370 board to test this, though I am pretty sure the result will look identical to what I've already tested on Skylake and Kaby Lake.
  12. He was demonstrating the difference in frametime performance on both AMD and Intel platforms. He replied to me because the context of my post was minimum framerates and frametimes; his post makes perfect sense in that regard. The graph you linked is worthless without those metrics. Spend less time insulting people and more time reading yourself. This is pretty much in line with what I've seen in my personal testing, though it does depend on the title. Most titles see a 10% boost in minimum framerates, while some (Fallout 4, for example) can see upwards of 20%. It depends entirely on the amount of CPU overhead present at any given time in the game. Seeing as Coffee Lake is identical to Skylake and Kaby Lake from an architecture standpoint, it's safe to assume it scales in the exact same way. Overall, I'd say most consumers on Intel platforms are better off buying the cheapest RAM they can and overclocking it manually. I have tested dozens of kits from every IC on the market and have yet to find one that couldn't do 3000MHz on even the worst Intel IMCs. It takes extra work, but the end result is satisfying, and it can save a decent amount of money that is better spent elsewhere in the build. For AMD users, it's difficult to make Micron or Hynix kits comply most of the time, and the IMC really prefers Samsung B-Die for easier overclocking. That's not to say you can't overclock Micron or Hynix ICs on Ryzen, you certainly can; it's just going to be far more difficult, and you will likely never achieve the performance that a B-Die kit would under the same effort. I do believe DigitalFoundry did a test with the 8700K and various memory speeds, and included a frametime graph in the video. Notice the orange line is far more stable than the rest; while it still dips eventually, it has far fewer dips than the others, which makes for a better gaming experience overall.
Nobody really notices when their framerate is steady, but almost everyone notices when it starts to dip. Minimizing these dips in minimum framerates is key to a better gaming experience. The fact that faster memory also sees a decent boost in average framerates is simply icing on the cake.
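Since minimum framerates and frametime dips keep coming up, here's a minimal sketch of how a "1% low" figure is commonly computed from raw frametime data (the sample numbers here are made up for illustration):

```python
def one_percent_low_fps(frametimes_ms: list) -> float:
    """Average FPS over the slowest 1% of frames (the common '1% low' metric).

    Sort frametimes from slowest to fastest, keep the worst 1% of samples
    (at least one), and convert their mean duration back to frames/second.
    """
    worst = sorted(frametimes_ms, reverse=True)
    count = max(1, len(worst) // 100)
    slowest = worst[:count]
    return 1000 / (sum(slowest) / len(slowest))

# 99 smooth frames at ~8.3 ms (~120 fps) plus a single 25 ms stutter:
times = [8.3] * 99 + [25.0]
print(round(one_percent_low_fps(times), 1))  # → 40.0
```

The average framerate of that run is still nearly 120 fps, yet the 1% low is 40 fps, which is exactly the kind of dip a player feels and an averages-only graph hides.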
  13. Memory manufacturers do not test RAM on every possible motherboard that is released. Each motherboard has a different trace topology that impacts the signaling of the RAM. They typically validate the RAM itself, then leave it to the board manufacturers to validate memory on their specific boards. This is why each motherboard has a QVL. All that being said, I do agree with you that it's likely not his board. The board itself is rated for 4133MHz+, and I've seen people clock to 4333 on that specific board just fine in a 1DPC setup with single rank sticks. It's more likely that either his CPU's IMC is a dud, or he needs to move his RAM to different DIMM slots for better signaling (slots 2 and 4 preferred). @jefftm95 Care to link your memory kit to me? I am curious how many sticks you are using, and whether they are dual rank or single rank. There could be some issues with tertiary timings holding you back, and it may even be a simple fix.
  14. 2800MHz is typically where diminishing returns kicked in during my testing, so I would say it's a very fair compromise, assuming you can make it stable at that frequency. Also, be mindful of your cache (uncore) frequency when memory overclocking. A higher uncore multiplier will improve IMC performance, but will also make it more difficult to overclock your RAM in the process. Relaxing your cache clock can actually make memory overclocking a little easier, at a slight cost in latency. Try to find the balance where you achieve the highest bandwidth without sacrificing latency.
  15. None of this is helpful, considering they only tested average framerates. Memory speed has a significant impact on minimum framerates, which is arguably the most important framerate for one's gaming experience. If they would include a frametime test, or at the very least monitor the lowest dips at each speed, we would see a more substantial difference in performance. I'd say memory pricing matters less on Intel not because you don't NEED faster RAM, but because cheaper RAM is easier to overclock for free (if you have a Z-series board). Ryzen is far pickier about the types of ICs it needs for memory overclocking, while even the worst Micron kits are easy to OC on Intel. Both still benefit greatly from higher memory clocks and lower memory latency.